
Share usage limits across backend servers?

Hi Folks,

I have a multi-tenant HAProxy setup, loosely as follows:


frontend main
bind ip:port
various options
ACLs to match domains (client1, client2, etc)
use_backend client1 if client1
use_backend client2 if client2


backend client1
various options
option httpchk with customized domain/URL as health check target
custom ACLs
server cache1 10.10.10.10:80 weight 50 maxconn 1000 check inter 20s
server cache2 10.10.10.20:80 weight 50 maxconn 1000 check inter 20s

backend client2
various options
option httpchk with customized domain/URL as health check target
custom ACLs
server cache1 10.10.10.10:80 weight 50 maxconn 1000 check inter 20s
server cache2 10.10.10.20:80 weight 50 maxconn 1000 check inter 20s


As you can see, each of the "cacheX" servers is used in multiple places, since
these servers are themselves multi-tenant.

The per-client backends are used both to provide custom ACLs and to give each
cache server a client-specific health check, for example in the event that a
given cache server is not yet seeded for a given client (a rough sketch of
such a check follows).
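
For reference, the per-client check in each backend is roughly along these
lines (the domain and URL below are just placeholders, and the exact httpchk
syntax depends on the HAProxy version in use):

backend client1
    # health check that only passes once this cache is seeded for this client
    option httpchk GET /health/client1 HTTP/1.1\r\nHost:\ client1.example.com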

However, what is missing in this scenario is the ability to set a
global/aggregate limit per cache server, so as to fine-tune the number of
active/queued connections across all backends to a given cache server. For
example, with maxconn 1000 on cache1 in both the client1 and client2 backends,
cache1 could still see up to 2000 active connections, since each backend
enforces its limit independently.

I'm sure I could shim this by creating a local "listen" block for each cache
server and pointing the client backends at that IP/port instead of going
direct to the server (a rough sketch follows the list below) - but there are
some significant drawbacks to that method:

* additional logging (currently I am feeding the clientX backend logs to a
parser)
* disassociation of the "real" aggregate backend counters on the shim from
the properly-named "clientX" backend counters
* disassociation of the session state log portion from the shim to the
clientX backend log
* using a separate shim for each cache server would be necessary to preserve
the health-check status, yet even then a request couldn't be redistributed
once maxconn/maxqueue is exceeded, as the connection to the shim would already
have been made and a 503 issued.
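
For clarity, the shim I have in mind would be something like this (the local
port and the aggregate limits are just placeholder values):

listen cache1_shim
    bind 127.0.0.1:8001
    # aggregate limits for cache1, enforced across all client backends
    server cache1 10.10.10.10:80 maxconn 2000 maxqueue 100 check inter 20s

with each clientX backend then pointing its cache1 server line at
127.0.0.1:8001 instead of 10.10.10.10:80.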

Any thoughts on how best to set proper maxconn/maxqueue limits when an
individual server is used across multiple backends, as in this scenario?

Best Regards,
Mark
