On Wed, Dec 22, 2010 at 9:09 PM, eagle sbc <sbc19861004@gmail.com> wrote:
> Hi, all:
>
> This is really exhausting me.
>
> I have an nginx server with proxy_cache enabled. The number of cached files is almost 500K, and the files are all around 50KB in size. But now, with not even 200 connections, the load average is almost 3, the I/O on the cache disk is around 1000KB/s, and lots of connections time out. I added the $upstream_cache_status variable to the access log, and in the result about 70% of requests are not using the proxy cache, 20% are MISS, and only 10% are HIT.
> But I have a similar server with proxy_store enabled, and the number of stored files is also about 500K. On that server, even with 1000 connections, the load average never goes above 0.3, the read I/O is at most around 200KB/s, and all connections are in an ESTABLISHED or FIN_WAIT state.
> My questions are:
>
> 1. Why is the I/O still so high with so few connections?
> 2. What does proxy_cache do beyond what proxy_store does? Don't they both look up the file and, if found, read it from disk, and if not, proxy the request to the upstream server? Yet in practice they differ so much in load average.
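For context, the proxy_cache side of a setup like the one described above looks roughly like this; the upstream address, paths, zone name, and sizes below are only illustrative, not the actual values from that configuration:

    http {
        upstream backend {
            server 127.0.0.1:8080;
        }

        # On-disk cache: keys_zone keeps the cache keys in shared memory,
        # max_size caps the data stored on disk, and the cache manager
        # evicts least-recently-used entries once that limit is exceeded.
        proxy_cache_path  /data/nginx/cache  levels=1:2
                          keys_zone=diskcache:128m
                          max_size=10g  inactive=30m;

        # $upstream_cache_status logs MISS / HIT / EXPIRED / BYPASS etc.,
        # and is empty ("-" in the log) when the cache was not involved.
        log_format cache  '$remote_addr "$request" $status $upstream_cache_status';

        server {
            listen      80;
            access_log  /var/log/nginx/cache.log  cache;

            location / {
                proxy_pass         http://backend;
                proxy_cache        diskcache;
                proxy_cache_valid  200 302  10m;
            }
        }
    }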
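The proxy_store variant, by contrast, is essentially a static-file setup that mirrors whatever it fetches from the upstream; again, the paths here are placeholders:

    server {
        listen  80;
        root    /data/nginx/store;

        location / {
            # Serve the file if it has already been stored, otherwise fetch it.
            try_files  $uri  @fetch;
        }

        location @fetch {
            proxy_pass          http://backend;
            # proxy_store just writes the upstream response to disk as a
            # plain file under root; there is no shared-memory zone, no
            # cache manager and no expiry bookkeeping.
            proxy_store         on;
            proxy_store_access  user:rw group:rw all:r;
            proxy_temp_path     /data/nginx/temp;
        }
    }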
I also ran a test using memory: I created a tmpfs and put the whole cache in it. The load average then drops to around 0.3. But my server can only spare 1.5G of memory for the tmpfs, and even with max_size=1G configured, nginx does not recycle the files fast enough and the filesystem soon fills up. (Maybe that is not entirely accurate: I did see some space being freed, and when I configured max_size=100G on the disk-based filesystem, some disk space was also freed once the cache reached about 1G, so perhaps some other setting caused that recycling.) That's weird. Is it just that my new writes come in faster than the recycling?
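Roughly, the tmpfs test looks like the following; the mount point and sizes are illustrative only:

    # tmpfs mounted beforehand, e.g.:
    #   mount -t tmpfs -o size=1536m tmpfs /dev/shm/nginx-cache
    #
    # (inside the http{} block)
    # max_size is kept below the tmpfs size on purpose: the cache manager
    # removes least-recently-used files only periodically, so a burst of
    # new writes can push the cache past max_size, and possibly fill a
    # small tmpfs, before eviction catches up. Files are also removed once
    # they have not been accessed for the "inactive" time, which may be
    # what frees space even when max_size is nowhere near reached.
    proxy_cache_path  /dev/shm/nginx-cache  levels=1:2
                      keys_zone=memcache:64m
                      max_size=1g  inactive=10m;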