Channel: Serverphorums.com - HAProxy

Stick table peer syncing details (1 reply)

Hello,

Firstly, I'd like to thank everyone who has contributed to making haproxy
great. It has been extremely helpful in making possible our ingress, which
handles tens of thousands of requests per second 24/7 (400-500GB of haproxy
logs a day), and it keeps getting better with every release.

To the questions -
I'm working on implementing rate limits for some HTTP APIs.

Requests to the APIs in question are randomly sent through one of three
instances of haproxy 1.5.2 (different hosts due to load reasons) via
multiple A records.

My goal is to get as close as possible to per-path and/or per-backend rate
limits, with the stick table bucketed on the Authorization header or the
path, depending on the specific instance. Currently the desire is for a
rate limit to apply only to the specific path that triggered it, not the
common example of detecting abuse in the backend and applying a global deny
in the frontend.

I've so far successfully gotten pretty close to this by adding the
following to a backend:

errorfile 403 ratelimit.http
stick-table type string len 25 size 1m store http_req_rate(60s) peers lbs
acl too_fast sc2_http_req_rate gt 5
tcp-request inspect-delay 3s
tcp-request content track-sc2 hdr(Authorization)
http-request deny if too_fast

If a full working config is required for adequate context, let me know, and
I can mock a test case out.

This addition causes any given user hitting this entire backend more than
five times per 60 seconds to receive ratelimit.http until their request
rate drops below the threshold.

I am, however, having two issues -

Firstly, with the following peer list:

peers lbs
    peer lb1 10.130.1.32:4444
    peer lb2 10.130.1.30:4444

The entries in the above stick-table get synced between nodes, as shown by
'echo "show table static-content" | socat - /var/lib/haproxy/stats', but
the http_req_rate does not appear to be synced (it remains at 0 on the
second node, and hitting that node appears to apply an isolated rate
limit). Is this by design? For the other common use-case, a per-frontend
stick-table that only contains abusers, syncing the entries without their
counters would be adequate, but I'm actually looking to get one global
limit applied across all servers.
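For completeness, the full peers wiring I believe is needed (if I'm reading
the docs right): the peers section name must match the stick-table's peers
reference, and each haproxy must be started with -L set to its own peer
name so it knows which entry is itself. Names here are illustrative:

peers lbs
    peer lb1 10.130.1.32:4444
    peer lb2 10.130.1.30:4444

backend api
    # "peers lbs" must match the section above; start haproxy
    # with -L lb1 (or -L lb2) on the corresponding host
    stick-table type string len 25 size 1m store http_req_rate(60s) peers lbs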

Secondly, I'm not entirely clear on how to cleanly implement granular rate
limits for multiple paths on a single backend. Say, for example, I've got a
backend that receives all requests starting with /api/ from the frontend,
but I want to rate limit requests to /api/cheap_query and
/api/expensive_query differently. From the docs, it looks like the best way
to do that is, for each additional rate limit required beyond the first per
backend, to create a dummy backend, use it to house a stick table, and then
explicitly reference that table instead of relying on the implicit use of
the backend's local table. This seems a little clunky, as it will require
creating dozens of extra backends that have no servers, will show up as
down in monitoring interfaces, and will generate log messages. Is there a
cleaner way to accomplish this that I'm missing?
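To make the question concrete, here is roughly what I understand the
dummy-backend approach to look like (an untested sketch; backend names,
thresholds, and the server line are made up):

# server-less backends that exist only to hold stick tables
backend st_cheap
    stick-table type string len 25 size 1m store http_req_rate(60s) peers lbs

backend st_expensive
    stick-table type string len 25 size 1m store http_req_rate(60s) peers lbs

backend api
    acl is_cheap path_beg /api/cheap_query
    acl is_expensive path_beg /api/expensive_query
    tcp-request inspect-delay 3s
    # track each path family in its own table, on separate counters
    tcp-request content track-sc1 hdr(Authorization) table st_cheap if is_cheap
    tcp-request content track-sc2 hdr(Authorization) table st_expensive if is_expensive
    acl cheap_too_fast sc1_http_req_rate gt 100
    acl expensive_too_fast sc2_http_req_rate gt 5
    http-request deny if cheap_too_fast or expensive_too_fast
    server app1 10.0.0.1:8080

With only sc0-sc2 available in 1.5, this also seems to cap how many
independent limits a single request path through the config can track,
which is part of why I'm asking about a cleaner pattern.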

Thanks,
Graham Forest
Urban Airship
