Simply put: I derive the health of the upstreams from ngx_http_upstream_rr_peer_t::fails and ngx_http_upstream_rr_peer_t::max_fails. If fails >= max_fails I consider the server down; otherwise I consider it up again.<br>
<br><div class="gmail_quote">2009/6/11 Michael Shadle <span dir="ltr"><<a href="mailto:mike503@gmail.com">mike503@gmail.com</a>></span><br><blockquote class="gmail_quote" style="border-left: 1px solid rgb(204, 204, 204); margin: 0pt 0pt 0pt 0.8ex; padding-left: 1ex;">
<div class="im">On Wed, Jun 10, 2009 at 2:45 PM, merlin corey<<a href="mailto:merlincorey@dc949.org">merlincorey@dc949.org</a>> wrote:<br>
> How often do you really expect servers to go up and down? I think you<br>
> are correct, though, HUP can take a bit of time/resources. My point<br>
> is, are you really having upstreams die constantly? Seems like you<br>
> would have much worse problems than what it takes to HUP at that<br>
> point...<br>
<br>
</div>In an infrastructure with tens or hundreds of servers, in theory you<br>
could have one going up or down at any time.<br>
<br>
Look at Amazon's whitepaper about Dynamo, or at how Google addresses the<br>
whole "commodity hardware" issue. Things will go up and down at any time,<br>
and you should handle it gracefully. nginx is almost capable of doing so<br>
(mid-transfer I don't think it would, unless the client re-issued the<br>
request with a range offset), and with the try-next-upstream approach it<br>
already handles the rest gracefully...<br>
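For reference, the try-next-upstream behavior mentioned above is configured with the proxy_next_upstream directive, combined with max_fails/fail_timeout on each server line. A minimal sketch (the hostnames and values are placeholders, not a recommendation):

```nginx
upstream backend {
    server app1.example.com max_fails=3 fail_timeout=30s;
    server app2.example.com max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://backend;
        # On a connection error or timeout, retry the request
        # against the next server in the upstream group.
        proxy_next_upstream error timeout;
    }
}
```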
<br>
I'm looking to have a solution in place that scales and is "set it<br>
and forget it" - a HUP may be a lot of work, especially if nginx is<br>
the frontend for so many connections/servers. I don't know; I guess<br>
Igor/Maxim would be the most knowledgeable about what exactly a HUP<br>
does to all of that...<br>
<br>
</blockquote></div><br>