Hello, I have been trying to configure HAProxy to rate limit our customers' usage, and I wanted to understand some of my options.
What I am trying to achieve is to throttle any client requests/API calls that could lead to high load and kill my servers.
First of all, here is the configuration I have put together so far from reading a few articles:
frontend www-https
bind xx.xx.xx.xx:443 ssl crt xxxx.pem ciphers AES128+EECDH:AES128+EDH no-sslv3 no-tls-tickets
# Table definition
stick-table type ip size 100k expire 30s store gpc0,conn_cur,conn_rate(3s),http_req_rate(10s),http_err_rate(10s)
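# stored per source IP: gpc0 (general purpose counter, used here to flag abusers),
# conn_cur (current open connections), conn_rate(3s) (new connections over 3s),
# http_req_rate(10s) (requests over 10s), http_err_rate(10s) (HTTP errors over 10s)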
# Allow clean known IPs to bypass the filter
tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }
# track the source IP against the stick-table above so its counters get populated (nothing is stored until a tracking rule runs)
tcp-request connection track-sc0 src
# Reject the new connection if the client already has 40 connections open
tcp-request connection reject if { src_conn_cur ge 40 }
# Reject if a client has opened 40 or more connections over the last 3 seconds
tcp-request connection reject if { src_conn_rate ge 40 }
# connections reaching this point are from clients below the concurrency and connection-rate limits above
#tcp-request connection reject if { src_get_gpc0 gt 0 }
acl abuse_err src_http_err_rate ge 10
acl flag_abuser_err src_inc_gpc0 ge 0
acl abuse src_http_req_rate ge 250
#acl flag_abuser src_inc_gpc0 ge 0
#tcp-request content reject if abuse_err flag_abuser_err
#tcp-request content reject if abuse flag_abuser
use_backend backend_slow_down if abuse
#use_backend backend_slow_down if flag_abuser
use_backend backend_slow_down if abuse_err flag_abuser_err
default_backend www-backend
backend www-backend
balance leastconn
cookie BALANCEID insert indirect nocache secure httponly
option httpchk HEAD /xxx.php HTTP/1.0
redirect scheme https if !{ ssl_fc }
server A1 xx.xx.xx.xx:80 cookie A check
server A2 yy.yy.yy.yy:80 cookie B check
backend backend_slow_down
timeout tarpit 2s
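# http-request tarpit holds the connection for 'timeout tarpit' and then replies with a 500 by default, so the 429 body is mapped onto that status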
errorfile 500 /etc/haproxy/errors/429.http
http-request tarpit
What I am doing here: if the http_req_rate goes above 250, I want to send that client to another backend which returns a rate-limiting message; and if the number of concurrent connections from a single IP reaches 40, I want to reject the excess so that only 40 connections are allowed in.
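For what it's worth, while testing I have been dumping the table over the stats socket to see what is actually being tracked per IP. This assumes a stats socket is enabled in the global section; the socket path below is just an example:

global
    stats socket /run/haproxy/admin.sock mode 660 level admin

# at runtime, dump the table attached to the frontend:
echo "show table www-https" | socat stdio /run/haproxy/admin.sock

Each entry shows the conn_cur, conn_rate, http_req_rate, http_err_rate and gpc0 values for that source IP, which makes it easier to see when a client crosses one of the thresholds.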
Please feel free to critique my config. Now, on to my questions:
1) Is rate limiting based on source IP a good way to do this, or has anyone tried other approaches?
2) Am I missing anything critical in the configuration?
3) When does the src_inc_gpc0 counter actually increment? Does it increment for every subsequent request from the client within the given timeframe? I saw it go from 0 to 6 during my tests but wasn't sure why. My current understanding is written out in the sketch after this list.
4) Can I not rate limit by just adding maxconn to the servers in the backend, or will that throttle everyone instead of just the rogue IP? (Also touched on in the sketch below.)
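Here is how I currently understand questions 3 and 4, written out as a sketch of the commented-out lines in my config; please correct me if this is wrong. As far as I can tell, src_inc_gpc0 bumps the gpc0 counter for the tracked IP every time the ACL containing it is evaluated, so the usual pattern is to only evaluate it once another ACL has matched, and then use the stored gpc0 value to block the flagged IP on later connections:

# flag the source IP when it crosses the request or error thresholds
acl abuse           src_http_req_rate ge 250
acl abuse_err       src_http_err_rate ge 10
acl flag_abuser     src_inc_gpc0 ge 0
# block IPs that have already been flagged (gpc0 > 0 in the stick-table)
tcp-request connection reject if { src_get_gpc0 gt 0 }
# flag_abuser is only evaluated (and gpc0 only incremented) when the first ACL matches
tcp-request content reject if abuse flag_abuser
tcp-request content reject if abuse_err flag_abuser

On question 4, if I am reading the docs right, maxconn on a server line (e.g. "server A1 xx.xx.xx.xx:80 cookie A check maxconn 100") caps the total concurrent connections sent to that server and queues the rest, so it protects the server but throttles everyone equally rather than singling out the rogue IP.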
Well, that's it for now... I might have more questions later. :)