[Yum] Newbie question

Rick Thomas rbthomas55 at pobox.com
Sun Nov 16 05:37:39 UTC 2003


I didn't think of that!  It would definitely be a neat hack if it works.

My first question is (again) timing:  The size of any particular 
night's download is unpredictable.  If there are a lot of updated 
headers and rpms to download, and the "leading" machine isn't 
finished downloading when one of the "trailing" machines starts up, 
then the trailing machine can get into trouble by not having a full 
deck to play with.
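
If you did want to play that game, one way to dodge the race would be 
a "done" flag: the leading machine touches a marker file when its 
update finishes, and the trailing machines wait for it before running 
from the cache.  This is only a sketch -- the marker file name and the 
one-hour polling loop are made up, and /var/cache/yum is just yum's 
default cache directory:

    # leading machine's nightly job: clear the marker, update, re-set it
    rm -f /var/cache/yum/.update-done
    yum -y update && touch /var/cache/yum/.update-done

    # trailing machine's job: wait up to an hour for the marker,
    # then do a cache-only run
    i=0
    while [ $i -lt 60 ]; do
        if [ -f /var/cache/yum/.update-done ]; then
            exec yum -C -y update
        fi
        sleep 60
        i=`expr $i + 1`
    done
    echo "leading machine never finished; skipping tonight" >&2

But that's exactly the kind of extra moving part I'd rather not 
maintain.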

My second question is whether it would work in the case where the 
machines have slightly different configurations.  Yum only 
downloads what it needs.  So if one of the trailing machines needs 
some rpm that isn't in the leading machine's cache (because the 
leading machine didn't need it for its configuration), what then?
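
The only fallback I can see is to let a trailing machine do a normal 
run whenever the cache-only run can't finish -- though I'm only 
assuming here that "yum -C" exits non-zero when something it needs 
isn't in the cache:

    # trailing machine: try the shared cache first, download for real
    # only if the cache-only run fails
    yum -C -y update || yum -y update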

I still think it's a game for trained professionals.  And then only 
if you're forced to play at the end of a long thin pipe.  For the 
rest of us, disk is cheap!

Enjoy!

Rick

On Saturday, November 15, 2003, at 04:36 PM, Owyn wrote:

> On Sat, 2003-11-15 at 12:40, Jeff Smith wrote:
>> If I understood the suggestion correctly, he was saying to share the
>> header cache via nfs, not the repos themselves. Each update on each
>> machine will be trying to update the information in the cache. And
>> hell could break loose if they start fighting each other for control.
>>
>> But for the sake of simplicity, I like Rick's suggestion. Damn the
>> sharing and duplicate effort all over the place. :-)
>
> Would not "yum -C" eliminate all chances of multiple updates to the
> cache? One machine could do the leading updates. The rest would just
> use the cache.



