First you have to isolate the layer where the problem occurs.
During the last week I ran a small test with nginx and was able to
reach 50K req/s on a single host using CentOS and nginx<br>
<br>
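As a quick sanity check before changing anything (assuming a Linux box with a Bourne-style shell), you can see which file-descriptor limits are currently in effect:

```shell
# Soft limit on open file descriptors for the current session
ulimit -Sn

# Hard limit (the ceiling the soft limit can be raised to without root)
ulimit -Hn

# Kernel-wide cap on open files across all processes (Linux only)
cat /proc/sys/fs/file-max 2>/dev/null
```

If nginx runs under a different user, run the check as that user; also note that changes in limits.conf only take effect for new login sessions.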
Linux level:<br><br>In /etc/security/limits.conf:<br><br>* hard nofile 10000<br>* soft nofile 10000<br><br>This might solve your problem.<br><br>Regards,<br>Istvan<br><br><div class="gmail_quote">On Wed, Jan 14, 2009 at 12:36 PM, Thomas <span dir="ltr"><<a href="mailto:iamkenzo@gmail.com" target="_blank">iamkenzo@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">On Tue, Jan 13, 2009 at 5:50 PM, Ilan Berkner <<a href="mailto:iberkner@gmail.com" target="_blank">iberkner@gmail.com</a>> wrote:<br>
> Thanks for the fast response. Our site is back up :-). Our tech support<br>
> (dedicated server support) did something to fix this issue, I will find out<br>
> later what. I'll keep an eye on the open files as we currently have it set<br>
> pretty high.<br>
><br>
<br>
I remember Zed Shaw talking about such an issue back in the days when<br>
people were running Rails through FastCGI. It had something to do with<br>
keep-alive connections: the connections would never actually close<br>
themselves.<br>
<font color="#888888"><br>
<br>
--<br>
Self-training videos in the field of IT: <a href="http://www.digiprof.fr" target="_blank">http://www.digiprof.fr</a><br>
<br>
</font></blockquote></div><br><br clear="all"><br>-- <br>the sun shines for all<br>