[Yum] Re: yum mirroring

Robert G. Brown rgb at phy.duke.edu
Mon Aug 25 12:28:13 UTC 2003


On 25 Aug 2003, Peter Peltonen wrote:

> On Thu, 2003-08-21 at 15:35, Robert G. Brown wrote:
> > Note that there IS an option that will almost certainly "work" to
> > suppress this unwanted file transfer.  The --size-only flag tells rsync
> > to ignore checksums and use only the size of the file to decide whether
> > or not to transfer.  Given that the file size is NOT changing, I don't
> > see how this could fail to prevent transfers.
> 
> This seems to do the trick. Thanks for the tip!

My pleasure.  Your problems have been a great motivator for me to
"study" the rsync docs a lot more deeply than my own needs have thus far
warranted, and you can't know too much about such an important tool.

Not that I'm not pleased that we finally found something to solve the
problem and can move on...;-)

> With the current RH release timetable it is no use to try to keep the
> servers up to date in RHL releases :( I simply do not have the time to
> test and upgrade. Easier to try to provide the upgrades myself for
> services that are open to network. This is only a security issue: 6.2

Well, security, efficiency/speed, feature availability, driver
availability, maybe even one or two more things.

One of the several reasons for yum's existence is to make it a LOT
easier to upgrade and cope with testing and redistribution.  That is,
the work can actually be shared many times over, at several levels:

   a) At the top level, RH tests extensively before release across a
very large base of "hacker"-level users and sysadmins, e.g. via rawhide.

   b) Upon release, the process cycles a second (continuous, ongoing)
time, with an even larger base of users and with sysadmins working
through e.g. bugzilla to update any package in which a problem is
discovered.  There is triage; security problems and serious functional
problems receive rapid attention, while "feature" problems are handled
more slowly.

   c) Pre yum, the next layer would have been sysadmins in their own
LANs.  However, yum and rsync openly encourage the creation of
intermediate-sized "layers" of correction and update, and permit the
open sharing of fixes across the associated domains.  This distributes
the labor and time among more people (many of whom may not be in your
LAN or even your institution) in a fairly clear if ad hoc repository
hierarchy, with significant fixes to distribution packages still getting
fed back up to RH for global sharing.

To sum up how this saves you work: if you mirror an actively maintained
yummified repository from a site with an extensive installed base, you
will a priori be at least as stable as that site.  And if you contribute
a LITTLE work -- almost certainly less than you spend now dealing with
networking and the past rsync issue -- you make the site you mirror just
a bit more stable and functional than it otherwise would be, and reduce
just a tad the burden of the admin group that maintains IT.  A classic
case of sharing the work of many skilled hands across a wide installed
base of systems, to achieve maximal scaling and minimal effort all
around.
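On the client side, pointing a box at such a mirror is only a few lines
in /etc/yum.conf -- the repository name and URL below are, of course,
made up:

```
[main]
cachedir=/var/cache/yum
logfile=/var/log/yum.log

[base]
name=Red Hat Linux 9 base (local mirror; hypothetical URL)
baseurl=http://mirror.example.edu/yum/redhat/9/
```

After that, "yum update" on the client pulls from the local mirror
rather than from an overloaded public server.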

> has been _very_ stable at least for me. The rsync problem happens with

Hey, LINUX has been very stable for me, compared to any other OS I've
managed or used over close to 30 years (professionally for over 20 of
them).  Even slackware.  RH 5.2 was good, 6.1 and 6.2 were good, 7.1,
7.2, 7.3 were good, 8 was "odd" (came and went before we installed a
stable release around here) and 9 has been, well, pretty good although
there are still issues, probably because of the major library changes
and changes to major tools such as gcc and glibc.  However, those SAME
tool changes are responsible for improved numerical performance (e.g.
support for SSE instructions), ability to access large amounts of
memory, and so much more.  If all your systems are old, these may not
matter, but if you're trying to squeeze the best performance out of
brand new hardware with SSE-capable processors, 4 GB of installed
memory, PXE capable network devices and mobo bios, new video cards,
etc...

PXE alone could save you more time than it would take to upgrade using a
mirror of a well-maintained site.  Being able to just mirror tools like
the GSL (which didn't exist in 6.2 and at the very least has to be
hand-built for it) saves you more time and can greatly improve
productivity.  Ditto xfree86 (rebuilding of which is a MAJOR pain, given
the number of contributing packages).  Double ditto CUPS.  Having
wrangled printers with LPD for years (enough years that I've repeatedly
installed, by hand, all sorts of the filter scripts and so forth
required to make a printer actually work for a wide range of document
types -- and by hand I mean that I actually wrote and hacked the
scripts), there is something positively sensual about simply selecting a
printer, especially a new and unknown printer, off of a list and having
it just "work" as a network printer, and about being able to install
print queues on a system without having to know precisely what one is
doing or micromanage security.  Even for somebody who DOES know how to
do it the hard way...

So I think you underestimate just how much work 6.2 may be costing you
compared to what you could do with the new tools in 9, how much
productivity using 6.2 may be costing your users (even if they don't
know it).  I think MOST sites eventually arrive at the conclusion that
it is OK to be one full revision behind the bleeding edge, but not two
(where one might well count 8 and 9 as one release because of RH's odd
behavior and timing there:-).  If you are worried about 9, fine, but 7.3
is rock solid stable and still being quite actively maintained at a lot
of sites with a large 7.3 base.

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb at phy.duke.edu





