Igor,<br><br>Thank you very much for your help.<br>Keep up the good work.<br><br><br><br clear="all">Regards,<br>Joe<br>
<br><br><br><div class="gmail_quote">2009/8/30 Igor Sysoev <span dir="ltr"><<a href="mailto:is@rambler-co.ru">is@rambler-co.ru</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im">On Sun, Aug 30, 2009 at 03:51:06AM +0700, Joe wrote:<br>
<br>
> Igor,<br>
><br>
> May I ask a question too?<br>
> I ran "getconf PAGESIZE" and the answer is "4096".<br>
> I also often get a "502 Bad Gateway" page, so I changed the settings to:<br>
><br>
> fastcgi_connect_timeout 60;<br>
> fastcgi_send_timeout 180;<br>
> fastcgi_read_timeout 180;<br>
> fastcgi_buffer_size 128k;<br>
> fastcgi_buffers 4 256k;<br>
> fastcgi_busy_buffers_size 256k;<br>
> fastcgi_temp_file_write_size 256k;<br>
> fastcgi_intercept_errors on;<br>
><br>
> Are those settings too high, making the CPU and memory work harder?<br>
><br>
> Thank you in advance, Igor.<br>
<br>
</div>It's better to decrease fastcgi_buffer_size to the value that you really need.<br>
It may be 16K, 32K, or more. Also, it's better to create more<br>
fastcgi_buffers of a lesser size than a small number of large fastcgi_buffers:<br>
<br>
fastcgi_buffers 32 32k;<br>
<br>
is better than<br>
<br>
fastcgi_buffers 4 256k;<br>
<br>
because if your response is 64K, then only two 32K buffers will be allocated<br>
in the former case, but one whole 256K buffer in the latter case.<br>
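<br>For example, a tuned block along the lines of the advice above might look like this (the sizes are illustrative, not a recommendation for every workload; pick values based on your actual response and header sizes):<br>
<br>
fastcgi_buffer_size  16k;<br>
fastcgi_buffers      32 16k;<br>
<br>
Here fastcgi_buffer_size only needs to fit the response header, while a 64K body would occupy just four of the 16K buffers, so per-connection memory stays close to what each response actually uses.<br>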
<div><div></div><div class="h5"><br>
> Regards,<br>
> Joe<br>
><br>
><br>
> 2009/8/30 Igor Sysoev <<a href="mailto:is@rambler-co.ru">is@rambler-co.ru</a>><br>
><br>
> > On Sat, Aug 29, 2009 at 12:35:47PM -0700, Michael Shadle wrote:<br>
> ><br>
> > > fastcgi_buffers 32 8k;<br>
> > ><br>
> > > I have that already...<br>
> ><br>
> > Try<br>
> ><br>
> > fastcgi_buffer_size 16k;<br>
> > fastcgi_buffers 16 16k;<br>
> ><br>
> > fastcgi_buffer_size is the buffer where the response header is read.<br>
> > It may have a different size than fastcgi_buffers.<br>
> ><br>
> > > 2009/8/29 Igor Sysoev <<a href="mailto:is@rambler-co.ru">is@rambler-co.ru</a>>:<br>
> > > > On Sat, Aug 29, 2009 at 11:10:43AM -0700, Michael Shadle wrote:<br>
> > > ><br>
> > > >> Does anyone see a problem here? The fastcgi parser returns -2,<br>
> > > >> instead of 0, on a normal request. I don't see anything wrong with this<br>
> > > >> header - the only thing that sticks out is the "//" - but still - I<br>
> > > >> think this is a bug in nginx. Why does it say upstream split a header<br>
> > > >> line? There are no \n \0 \r etc...<br>
> > > >><br>
> > > >> Any help is appreciated - Igor, I sent you a larger chunk of the log<br>
> > > >> privately unedited (I edited the hostnames to protect the innocent on<br>
> > > >> this)<br>
> > > >><br>
> > > >> Thanks!<br>
> > > >><br>
> > > >> 2009/08/28 12:17:01 [debug] 20714#0: *7231991 http fastcgi header:<br>
> > > >> "Set-Cookie:<br>
> > IBBUSER=pT061zDsSOKTipBqqbbOAJG0RMGGkyUimUAbWHVjZPm4QKksTL16sTHCINiUH22GJoE6hnF3GDiBVFEf3nLovNeXec//EkQa7IclJOOCh2wdQ+PhexQNCg5PKFmU72VQriEyYIDUOQXgfwpWTfvzEeHJnNIGiAVRzNbKBSoQIyKjbgVIfhLV+LAFR3mltZBRS+qYH5AIpdSLtNtGtHVu0Fl9/OfvEBRz2zwsFUhVYW2zPIAG/OX/YVc+NV+M1va1pcWcuwkMgHK4FrYdbXjANymt9BsVo7y0+F3kxqcXmiKJioCSiCd+1fQJnLG0lUClD9qLGSDp9KBz4uzzpsmUYmEXvv2JDOTS+WEBL2+f+j/6wlRHFKYxOls9fZgAR2Lhotro+Rbfhu1iaPkubGKEIHI0FU+366pDWs1IKcy7rtJsHQovG+4Z1bDvx6CYC2yOTJ7VvBIRpFw3z2/v1tNcK9DwI/3lUQ4gdXtAXYtMmo42sO7doi18bKkeIGH8z1DtTrruBbZX4OLhSPts0non1d4yGHrY644PbZXbehW4HKpygqbX6sJobZf4eHzKh1nFkuHZUfZUFroE5yT17Bd/4g==;<br>
> > > >> domain=.<a href="http://foo.com" target="_blank">foo.com</a>; expires=Sun, 27-Sep-2009 19:17:01 GMT; path=/"<br>
> > > >> 2009/08/28 12:17:01 [debug] 20714#0: *7231991 http fastcgi parser: -2<br>
> > > >> 2009/08/28 12:17:01 [debug] 20714#0: *7231991 upstream split a header<br>
> > > >> line in FastCGI records<br>
> > > >> 2009/08/28 12:17:01 [error] 20714#0: *7231991 upstream sent too big<br>
> > > >> header while reading response header from upstream, client:<br>
> > > >> 134.134.139.72, server: <a href="http://ssl.foo.com" target="_blank">ssl.foo.com</a>, request: "POST<br>
> > > >> /en-us/login/?TARGET=http%3A%2F%2Ffoo.com%2Fpage%2F HTTP/1.1",<br>
> > > >> upstream: "fastcgi://<a href="http://127.0.0.1:11021" target="_blank">127.0.0.1:11021</a>", host: "<a href="http://ssl.foo.com" target="_blank">ssl.foo.com</a>", referrer:<br>
> > > >> "<a href="http://foo.com/en-us/login" target="_blank">http://foo.com/en-us/login</a>"<br>
> > > ><br>
> > > > What nginx version do you use? In 0.8.8 there are some bugfixes<br>
> > > > in handling FastCGI headers split across records. However, in your case<br>
> > > > it seems fastcgi_buffer_size is not large enough.<br>
> > > ><br>
> > > ><br>
> > > > --<br>
> > > > Igor Sysoev<br>
> > > > <a href="http://sysoev.ru/en/" target="_blank">http://sysoev.ru/en/</a><br>
> > > ><br>
> > > ><br>
> ><br>
> > --<br>
> > Igor Sysoev<br>
> > <a href="http://sysoev.ru/en/" target="_blank">http://sysoev.ru/en/</a><br>
> ><br>
> ><br>
<br>
</div></div>--<br>
<div><div></div><div class="h5">Igor Sysoev<br>
<a href="http://sysoev.ru/en/" target="_blank">http://sysoev.ru/en/</a><br>
<br>
</div></div></blockquote></div><br>