[Yum] Caching remote data for multiple computers

Les Mikesell lesmikesell at gmail.com
Mon Dec 21 14:47:55 UTC 2009


Pierre Guillet wrote:
> Hello list,
> 
> In yum Wiki (http://yum.baseurl.org/wiki/YumMultipleMachineCaching)
> for "rsync /var/cache/yum and set keepcache=1 in yum" solution, James
> wrote in cons point:
> preupgrade/etc. doesn't easily share data with "normal" yum, even if
> they need the same data.
> 
> Can someone (James?) explain this sentence? What is "normal" yum?
> Does it mean that I cannot perform a "yum upgrade" on a server that
> uses the replicated cache?
> 
> Another question about this configuration:
> 
> Do you know of a tool, like a local yum makecache, to rebuild the
> metadata from only the list of downloaded packages (the RPMs in the
> yum cache)? Maybe a "createrepo" for each section in the cache?
> 
> The server that uses the replicated cache also uses the replicated
> metadata, and that metadata lists all of the packages in the
> original repository.
> If somebody has installed a package directly with the rpm command,
> yum update fails when an update is available, because the updated
> package is not in the cache.
> With the metadata restricted to the packages in the cache, this
> problem can't occur.

The really simple-minded way to cache files for yum and everything else that
uses standard protocols is to set up a squid server in a convenient location,
configured to cache large objects.  Then just export
http_proxy=your.squid.server:port and ftp_proxy=your.squid.server:port before
running yum or any other program that fetches remote files, and let the
standard mechanisms do their thing.  Unfortunately, if the yum repo uses a
mirrorlist, you'll probably end up pulling a copy from each of the offered
URLs, but that's still usually a fairly small number.
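
Something along these lines, for example (untested; the hostname and the
cache sizes are placeholders, and 3128 is just squid's default port):

    # /etc/squid/squid.conf: let squid cache objects as large as the
    # biggest RPMs you expect (the default cap is only a few MB)
    maximum_object_size 512 MB
    cache_dir ufs /var/spool/squid 20000 16 256

    # on each client, before running yum:
    export http_proxy=http://your.squid.server:3128
    export ftp_proxy=http://your.squid.server:3128
    yum update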

-- 
   Les Mikesell
    lesmikesell at gmail.com


