
Re: high performance, highly available web clusters



On Thu, May 20, 2004 at 08:43:35AM -0400 or thereabouts, John Keimel wrote:
> Personally, I can't see the sense in replacing a set of NFS servers with
> individual disks. While you might save money going with local disks in
> the short run, your maintenance costs (more so the time cost than the
> dollar cost) would increase accordingly. Just dealing with lots of extra
> moving parts puts a shiver down my spine.

Each web server will need local storage for the system anyway.  I would
make that local storage large enough for the static content that is
normally held on the NFS server.  Worried about disks failing?  That
happens, and if a server drops out of the cluster, we just put it back
after repairs.  The cluster offers a level of redundancy that makes a
single failure hardly noticeable.  The problem with NFS is that it simply
was not designed to handle the number of FS operations (90-150/s now, and
we want 10x that) that web serving can demand.
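
Something like this cron-driven push from the master copy is what I have
in mind.  A rough sketch only; the hostnames and paths are made up:

    #!/usr/bin/env python3
    # Sketch: push the static tree from a master copy to each web
    # server's local disk with rsync, run from cron every few minutes.
    # "/srv/static/", "/var/www/static/", and the webNN hostnames are
    # hypothetical.
    import subprocess

    MASTER_TREE = "/srv/static/"         # master copy of the content
    DOCROOT = "/var/www/static/"         # local copy each server serves
    WEBSERVERS = ["web%02d" % i for i in range(1, 11)]

    for host in WEBSERVERS:
        # -a preserves perms/times; --delete keeps replicas identical
        rc = subprocess.call(["rsync", "-a", "--delete",
                              MASTER_TREE, "%s:%s" % (host, DOCROOT)])
        if rc != 0:
            print("sync to %s failed (exit %d)" % (host, rc))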

You suggest a RAM disk, and yet you find the NFS server adequate as well?
> 
> I'm not sure how your 'static content' fits in with your mention of
> multiple MySQL servers; that seems dynamic to me, or at least allows
> for much dynamic content.

Static content is stored on the NFS server; dynamic content is stored in
the MySQL servers.  The vast majority of the content is image files.
> 
> If you ARE serving up a lot of static content, I might recommend a
> setup similar to a project I worked on for a $FAMOUSAUTHOR, where we
> designed multiple web servers behind a pair of L4 switches. The pair
> of switches (a pair for redundancy) load balanced for us, and we ran
> THTTPD on the servers. There were a few links to offsite content, where
> content hosting providers (cannot remember the first, but they later
> went with Akamai) served up the larger files people came to download.
> Over the millions of hits we got, it survived quite nicely. We ran out
> of bandwidth (50Mb/s) before the servers even blinked.

That's awesome.  Sounds like you got that one nailed.
> 
> Perhaps if it IS static, you might also consider loading your content
> into a RAM disk, which would probably provide the fastest access time.
> I might consider such a thing these days with the dirt-cheap pricing
> of RAM.

Actually, I figure a large bank of RAM (say, 4GB) will allow Linux to
allocate enough RAM to the disk cache that the most commonly used files
will be read right from RAM.  Does this seem reasonable?
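
To make sure the cache is hot before traffic arrives, you could even walk
the docroot once at boot.  A quick sketch; the docroot path is assumed:

    #!/usr/bin/env python3
    # Sketch: warm the Linux page cache by reading every static file
    # once, so the hot set is already in RAM when requests arrive.
    # "/var/www/static" is an assumed docroot.
    import os

    total = 0
    for dirpath, _dirs, files in os.walk("/var/www/static"):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                while f.read(1 << 20):   # read in 1 MB chunks
                    pass                 # data now sits in the page cache
            total += os.path.getsize(path)
    print("touched %d MB of static content" % (total >> 20))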
> 
> I think some kind of common disk (NFS, whatever, on RAID) is your
> best solution. 

Why does it have to be common disk?  Why not local disk that is
periodically updated?  The added latency of NFS (or SMB, whatever) and
the overhead of all the FS operations are just killers.  Besides, when
you aggregate all your storage onto a single file server, you give
yourself a single point of failure.  Even with a dual-redundant NFS
setup, you still have only one level of redundancy.  With a 10-server
web cluster I could lose half my servers and still serve plenty of
content.
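
Here is the arithmetic behind that claim, as a back-of-the-envelope
sketch (the 99% per-machine availability is an assumption, not a
measurement):

    #!/usr/bin/env python3
    # Sketch: if each of n servers is independently up with probability
    # p, the chance that at least k of them are up is a binomial sum.
    from math import comb

    def at_least_k_up(n, k, p):
        return sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))

    p = 0.99                             # assumed per-machine uptime
    print("single file server up: %.6f" % p)
    print("5+ of 10 web servers:  %.12f" % at_least_k_up(10, 5, p))

With ten independent machines, losing half the cluster at once is
vanishingly unlikely compared to losing the one file server.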
> 
> HTH
> 
> j
> -- 
> 
> ==================================================
> + It's simply not       | John Keimel            +
> + RFC1149 compliant!    | john@keimel.com        +
> +                       | http://www.keimel.com  +
> ==================================================

-- 
*******************************
David Wilk
System Administrator
Community Internet Access, Inc.
myca@cia-g.com


