[Yum] Maintaining my own copy of UPDATES

Les Mikesell lesmikesell at gmail.com
Thu Mar 26 22:25:17 UTC 2009


James Antill wrote:
>
>>>  It is absolutely fine for two machines on the same network, using the
>>> same proxy, to get two completely different "mirrorlists" (or to have
>>> some of the same data in a different order).
>> Why don't you permit this to be cached for some reasonable length of
>> time so things behind my cache will see the same version (because it
>> will only ask once)?
> 
>  This is a MirrorManager RFE, why are you asking here? Also metalink
> data is over https, so I'm not sure how that's going to get cached by
> anything.

Well, if the mirror managers really _want_ to have an order of magnitude 
more traffic to their mirrors than necessary, I guess they can have 
their way.

>>>  As I'm _sure_ you know, MirrorManager has options to allow the user
>>> to pick a "best" mirror for IP ranges they own.
>> If it can do that, it could provide repeatable behavior on its
>> own. Hash the source IP into ranges, give out the same mirror order to
>> everyone in the same range.
> 
>  Again, this is a MirrorManager RFE, why are you asking here?

Because what I see is yum retrying mirrors that fail over and over, or 
that have horribly slow connections, as I update multiple machines when 
I know very well that one or more copies are already available locally.
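The deterministic ordering quoted above (hash the source IP into ranges, 
give everyone in a range the same mirror order) could be sketched roughly 
like this. This is a hypothetical illustration, not MirrorManager code: 
seed a stable shuffle from the client's /24, so every host behind the 
same range (and the same caching proxy) sees an identical list:

```python
import hashlib
import ipaddress
import random

def mirror_order(client_ip, mirrors):
    """Return `mirrors` in a stable order derived from the client's /24
    network, so all hosts in the same range get the same ordering and a
    shared proxy only has to fetch each file once."""
    net = ipaddress.ip_network(client_ip + "/24", strict=False)
    seed = int(hashlib.sha256(str(net).encode()).hexdigest(), 16)
    ordered = list(mirrors)
    random.Random(seed).shuffle(ordered)  # deterministic for a given seed
    return ordered

mirrors = ["http://a.example/", "http://b.example/", "http://c.example/"]
# Two clients in the same /24 get the same order:
print(mirror_order("192.0.2.10", mirrors) == mirror_order("192.0.2.200", mirrors))
```

Hosts in a different /24 hash to a different seed, spreading load across 
mirrors while keeping each range's order repeatable.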

>>>> If you would permit caching to work the way it is intended, distros
>>>> probably wouldn't need all those mirrors anyway and other people
>>>> wouldn't have had to invent a dozen different ways to work around what
>>>> yum does when updating multiple machines.
>>>  Sure, scaling out from a single point of reference is very easy in
>>> HTTP ... we/Fedora/CentOS/etc. are just too dumb to do it. As are all
>>> of Akamai's customers.
>>>  Feel free to enlighten us/Fedora/CentOS/etc.
>> Starting from scratch, I'd have required mirrors to use the same
>> relative locations and returned a bunch of IP addresses in DNS
>> (possibly with some intelligent handler like the F5 GTM) the way every
>> other large-scale HTTP distribution works.  CentOS 3 worked just great
>> that way.
> 
>  Of course, why didn't I think of that. Hmmm, maybe because:
> 
> . Putting IP based dynamic info. into DNS is soooo much easier than
> doing the same from CGIs.

Easier for the gazillion proxies to track; about the same otherwise. 
And it doesn't have to be dynamic - just answer with the whole list.

> . Having a single mirror failure stop all downloads wouldn't have you
> shouting that we're all morons.

Huh?  If your browser gets a list of IP addresses from a DNS lookup, it 
won't quit if the first one in the list fails.  No other client would 
have to either.
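That fallback behavior is simple enough to sketch. The helper names below 
are hypothetical, not yum or browser code: resolve the name to every 
address DNS returns, then walk the list until one connects:

```python
import socket

def resolve_all(host, port=80):
    """Return every (ip, port) pair DNS gives us for `host`."""
    infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    return [info[4][:2] for info in infos]

def first_working(addresses, connect):
    """Try each address in turn, returning the first successful
    connection - the way a browser walks a list of A records instead
    of giving up when the first one is dead."""
    last_err = None
    for addr in addresses:
        try:
            return connect(addr)
        except OSError as err:
            last_err = err  # dead mirror: move on to the next address
    raise last_err or OSError("no addresses supplied")

# Simulated lookup: the first two "mirrors" are down, the third answers.
def fake_connect(addr):
    if addr[0].startswith("192.0.2."):  # TEST-NET, standing in for dead hosts
        raise OSError("connection refused")
    return "connected to %s" % addr[0]

addrs = [("192.0.2.1", 80), ("192.0.2.2", 80), ("198.51.100.7", 80)]
print(first_working(addrs, fake_connect))
```

In real use `connect` would be `socket.create_connection` with a short 
timeout, and the caller would issue the HTTP request on whichever socket 
comes back.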

> . Lack of ever being able to download from more than one mirror at
> once is obviously a good thing.

Huh again?  Unless you work at it, standard DNS servers will rotate the 
list on each lookup to balance usage.

> . Forced use of a single protocol is obviously a good thing.

HTTP seems to have proven itself.

> . Lack of being able to do any kind of useful client side filtering is
> obviously a good thing (luckily no one uses the fastestmirror plugin,
> and it wasn't considered so good that CentOS force-installed it).

Is that the thing that points me at broken mirrors all the time? 
Whatever test it does apparently has no relationship to the time it is 
going to take to download an actual file.  Anyway, if you felt like it 
you could just as easily try all the IPs returned by DNS.

> . DNSSEC has been deployed for years now, so your suggestion is
> completely secure.

How is that relevant?  Does it affect the ability to return a list of 
addresses?

> . DNS is well known for caching/distributing multiple IPs for one name
> 100% fairly, and RFC3484 has done nothing but help.
>  See: http://drplokta.livejournal.com/109267.html

This article describes client behavior.  Just because one client does 
something suboptimal when it gets multiple IP addresses doesn't mean 
all clients are required to behave that way.  I was hoping yum could 
behave intelligently.

-- 
   Les Mikesell
    lesmikesell at gmail.com
