[Yum] Survey of Use

Robert G. Brown rgb at phy.duke.edu
Fri Oct 24 14:17:25 UTC 2003


On Fri, 24 Oct 2003, Hedemark, Magnus wrote:

> Garrick Staples [mailto:garrick at usc.edu] said:
> 
> > Our only solution is to take up Fedora and do two things:  continue
> > updates past 1 year, and build it for all arches.  That leaves one
> > question, when do we start?
> 
> Who wants to host the server(s)?  Not all of us are .edu's with free fat
> pipes to sit on. ;)

The same folks that host RH mirrors now, for example, who are often
.edu's with "free" fat pipes (which aren't all that free, but which
necessarily have a lot of spare capacity at any given time).

As one might expect, this will likely end up a standard tree -- primary
site mirrored to first-tier sites, mirrored again to second-tier sites,
mirrored down to the LAN level (with dynamic adjustments as required).
As long as most LAN admins build a local yummified mirror, rather than
trying to yum at the client level directly FROM the mirrors, this will
scale to infinity and beyond with no more than 1-4 days of lag between
the primary site and the desktop (depending on the tier one mirrors
from and the frequency with which one resyncs the mirror).  If the
level-one mirrors resync 4x a day (but restrict access to their own
local systems and level-two mirrors) and the level-two mirrors restrict
access and permit resync 2x a day, the level-three and beyond mirrors
could resync 1x a day and still very likely stay within one
(additional) day of the toplevel mirror, which is pretty damn good.
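
To make the arithmetic concrete (the hostname, rsync module, and paths
below are invented purely for illustration), a level-three mirror's
once-a-day resync could be a single cron entry pulling from its
level-two parent:

   # /etc/crontab on a hypothetical level-three mirror: pull 1x/day
   # from its level-two parent (host and module names are made up)
   30 3 * * * root rsync -az --delete rsync://level2.example.edu/fedora-updates/ /var/ftp/pub/fedora/updates/

LAN-level mirrors would carry the same sort of entry pointed at this
box, so the only traffic the mirrors ever see from below is rsync from
the next tier down, never yum runs from individual clients.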

There are probably ways to cut this down in an emergency (a serious
exploit, for example, that needs to be corked in 36 hours or less at
the client level).  One possibility is a mailing list for all admins
with level-three or higher mirrors, where primary site maintainers
could announce specific windows for emergency resyncs down the ladder;
another is a specific directory on the primary site from which ALL
mirrors could rsync "urgent/critical" updates on a six-hour basis
(presumed to be small enough that bandwidth bottlenecking won't be a
problem) until the updates propagate down through the normal tree.  Yum
actually makes this latter possibility rather attractive, as it can
handle both the rpm replication and the additional "critical"
repository quite transparently.
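
On the client side that amounts to nothing more than a second
repository stanza in yum.conf next to the regular one.  A minimal
sketch, with the repository names, host, and paths invented for
illustration:

   # fragment of a client's /etc/yum.conf (below the [main] section);
   # [updates] follows the normal mirror ladder, while [critical] is
   # the tree rsynced from the primary site every six hours
   [updates]
   name=Fedora updates (LAN mirror, normal tree)
   baseurl=http://mirror.localdomain/fedora/1/i386/updates/

   [critical]
   name=Urgent/critical updates (six-hour emergency channel)
   baseurl=http://mirror.localdomain/fedora/1/i386/critical/

Since both stanzas are just repositories as far as yum is concerned, a
nightly "yum update" on each client picks up whatever has landed in
either tree, with no special handling needed for the emergency channel.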

   rgb


-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu





