Channel: Serverphorums.com - HAProxy
Viewing all 5112 articles

Quotation (no replies)

Dear Seller,

My name is Ryan Williams from Tiestlin Ventures (we are a trading company based in the United States) and we are interested in your products. Our company is looking for a reliable supplier who can provide a long-term customer relationship and maintain our customers' specification items.
The supplier can be from anywhere worldwide, but the quality should be good and the prices reasonable. Your soonest reply with your quotation / FOB price will be highly appreciated, and we will then send the further details of our specification order. I am waiting for a prompt response.

Best Regards,
Ryan Williams
Purchasing Manager
Tiestlin Ventures Corporation
Address: 2013 Centre Road, Wilmington, Delaware, United States
Tel: +13028474903

[SPAM] Identifying problems and driving resolution (no replies)

Hi:
Attachment provided for reference.
---------------- Original message ------------------
From: cuszn@lmglasfiber.com
Sent: 2016-4-15 Thursday 6:40:13
To: haproxy@formilux.org
Cc: Simon Communication Products (Shanghai) Co., Ltd.
Subject: Re: Identifying problems and driving resolution

use of variables in ACL (1 reply)

Is there any way to use:

http-request set-var()

to set a var for later use in forming an ACL?

I've tried all the prefixes to make the variable survive past http
processing, but the ACL is always rejected by the config check.

<snip>
http-request set-var(txn.my_v) base,sdbm,mod(100)
acl test_threshold txn.my_v lt 10
</snip>
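
A variant I have been meaning to try, assuming the variable has to be read
back through the var() sample fetch rather than named directly (untested):

<snip>
http-request set-var(txn.my_v) base,sdbm,mod(100)
acl test_threshold var(txn.my_v) lt 10
http-request deny if test_threshold
</snip>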

(no subject) (no replies)

HAProxy 1.6, override for dns/NXdomains on parsing (no replies)

Hi Baptiste.
(cc: HAProxy mailing-list)

I recently came across one of your posts from last year (http://permalink.gmane.org/gmane.comp.web.haproxy/22841) regarding how DNS records are resolved when loading new configuration values (either at parsing during initial startup, or on dynamic reconfiguration via the socket). In this post, you are referring to a possible enhancement:

Currently, HAProxy works like this: "init-addr libc,dns"
A new value could be "init-addr dns"
Or "init-addr 1.2.3.4,dns"

I believe none of this has been implemented yet, am I right? I am running into a situation where I would like the latter: a condition can exist where my HAProxy would load before some of my backend server entries can be successfully resolved.

I tried something that seems to fit my particular need and I'd like to share it with you and the list; please see the attached file.

Essentially, this adds a new global option "override-nxdomain" that can be used, as its name implies, to override the returned value when HAProxy uses the system's DNS resolution and receives the indication that the hostname can't be found. A possible use-case is to override it (for example) with "localhost" or "127.0.0.1" directly, so that HAProxy can initially start even if some records can't be resolved yet. Once HAProxy starts resolving on its own as part of the backend member's health checks, it will try to resolve again and hopefully get a valid answer at some point.

It seems to pass simple sanity/functional tests, but bear in mind this is my first submission to HAProxy and I would understand if it is considered a hack at this point. :-)
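
To illustrate the intent, the option would be used roughly like this in the
global section (the name and syntax come from the attached patch and may of
course change after review):

global
    # proposed option: substitute this address when a server hostname
    # returns NXDOMAIN during startup resolution
    override-nxdomain 127.0.0.1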

[PATCH] BUG/MINOR: fix maxaccept computation according to the frontend process range (1 reply)

commit 7c0ffd23 is only considering the explicit use of the "process" keyword
on the listeners. But at this step, if it's not defined in the configuration,
the listener bind_proc mask is set to 0. As a result, the code will compute
the maxaccept value based on only 1 process, which is not always true.

For example :
global
nbproc 4

frontend test
bind-process 1-2
bind :80

Here, the maxaccept value for the "test" frontend was set to the global
tune.maxaccept value (default 64), whereas it should take into account that
2 processes will accept connections. As per the documentation, the value
should be divided by twice the number of processes the listener is bound to.
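
For the example above (tune.maxaccept 64, a listener bound to 2 of the 4
processes), that rule would give roughly 64 / (2 * 2) = 16 per process
instead of 64, assuming plain integer division and no other adjustment.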

To fix this, we can consider that if no mask is set to the listener, we take
the frontend mask.

This is not critical, but it can introduce an unfair distribution of the
incoming connections across the processes.

It should be backported to the same branches as commit 7c0ffd23 (1.6 and 1.5
were in the scope).
---
src/cfgparse.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/cfgparse.c b/src/cfgparse.c
index c3b29d4..2400559 100644
--- a/src/cfgparse.c
+++ b/src/cfgparse.c
@@ -8741,7 +8741,7 @@ out_uri_auth_compat:
int nbproc;

nbproc = my_popcountl(curproxy->bind_proc &
- listener->bind_conf->bind_proc &
+ (listener->bind_conf->bind_proc ? listener->bind_conf->bind_proc : curproxy->bind_proc) &
nbits(global.nbproc));

if (!nbproc) /* no intersection between listener and frontend */
--
2.8.0.rc3

Sharing SSL information via PROXY protocol or HAProxy internally (2 replies)

Hi,

would it be possible to inherit the SSL information from an SSL
listener/frontend via the PROXY protocol?
So for example:

listen ssl-relay
mode tcp

...

server rsa unix@/var/run/haproxy_ssl_rsa.sock send-proxy-v2

listen ssl-rsa_ecc
mode tcp

...

bind unix@/var/run/haproxy_ssl_rsa.sock accept-proxy ssl crt SSl-RSA.PEM user haproxy

frontend http_https
bind :80 # http
bind unix@/var/run/haproxy_ssl.sock accept-proxy user haproxy # https

redirect scheme https code 301 if !{ssl_fc}


Here the ssl_fc and other SSL-related ACLs do not work because the actual
SSL termination has been done in the ssl-rsa_ecc listener above.
Sharing that information, either internally or via the PROXY protocol, would
be really handy, if that's possible.
For now we use the bind "id" to check whether it's the proxy connection
or not, but the above would be much easier/better IMHO.
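
A side note: I have not yet checked whether the existing send-proxy-v2-ssl
server option already carries part of this information in the PROXY v2
header, e.g. something like the following (untested, and I don't know how
much of it the accepting side would expose as ssl_fc*):

server rsa unix@/var/run/haproxy_ssl_rsa.sock send-proxy-v2-ssl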

--
Regards,
Christian Ruppert

TTL-based DNS resolution? (1 reply)

Hi,

are there any plans to support DNS resolution based on TTL, a la NGINX? This
would be helpful for use cases where the upstream is an ELB or similar
system. I've pasted a reply from AWS support based on some observations
from a couple of our services that use HAProxy 1.6 in front of ELBs. Note
that I am not contending that the issue of uneven distribution of upstream
IPs is HAProxy's fault (that is a consequence of our design), but the
cycling of ELB nodes when retirement occurs is something that NGINX would
seem to handle in a more satisfactory way.

"
I think an explanation of what happens when ELB scales will be helpful as
background at this point. ELB employs what we term "Graceful Scaling". When
a scaling trigger is breached, let's say for sake of argument this is a
scale-up event, then ELBs controller immediately begins the process of
provisioning new more performant ELB nodes. This usually takes a few
minutes, and once these new nodes pass the controller health-checks, we
remove the old node IPs from the DNS record set, and add in the new ELB
node IPs to the DNS set. Since the TTL published with this DNS record is
60 seconds, after about 2-3 minutes most traffic will migrate over to
the new nodes. We do not, however, de-provision the old ELB nodes, but
instead we begin to monitor them to determine when traffic received by
these nodes drops to below a certain threshold level, or a maximum age has
expired (this is several days). This happens to cater for the case where
some clients are caching DNS longer than the TTL value.

Given the way that HA proxy works, when it starts up, it resolves the ELB
name, and obtains the current IPs. HealthChecks are a requirement for the
resolver clause, so HA also begins to perform the configured health-check
on the nodes it learned about at startup.

If the ELB were to scale now, the new nodes would come online but HA proxy
would never learn of them, as the old nodes will continue to pass
health-checks. If traffic continues to increase, at some point the older
ELB nodes will become overwhelmed and will fail a health-check on HA proxy,
at which point that HA proxy node on which the health-check failed, will
learn of the new ELB nodes from DNS, and start to send traffic to the new
one.

Should traffic not increase sufficiently to cause the old nodes to fail a
health-check, then only new HA proxy instances in your fleet will learn of
the new nodes. Eventually the maximum graceful node lifetime will be
reached, and we will terminate the old nodes, at this point all your HA
proxy instances will fail health-checks on their upstream and learn of the
new nodes at the same time.

This process happens in such a fashion that, over time, it's conceivable
that each of your HA proxy nodes may know of different back-end IPs. As a
result, traffic on the inside ELB nodes will not be symmetrically
distributed by the HA proxy nodes over time. This is somewhat mitigated on
the back-end by the use of cross-zone load-balancing, so the asymmetry is
not propagated to the back-ends. We do monitor each ELB node individually,
thus the ELB will scale on the monitoring of a single node, rather than the
entire ELB, which further mitigates the effects of any asymmetry on your
ELB nodes.

There is no easy way to make HA proxy work perfectly in front of an ELB,
due to the nature of how HA proxy has implemented DNS resolution.

We often recommend to customers using a reverse proxy in front of ELB, to
rather use Nginx, as this does have the ability to follow DNS TTLs of its
upstreams perfectly. In this case, given the way you have implemented it
means that HA will learn of failed ELB nodes, and eventually learn of
scaling, and the ELB mitigates the imbalance to your back-ends through
cross zone. So, in summary, as of now the only possible way to overcome
this behavior would be to consider using a different reverse proxy solution
between the two ELB tiers instead of HA proxy. I apologize for any
inconvenience. I hope the above information was helpful. Please let us know
if you have any other questions or concerns and we will be happy to assist
you.
"

Regards,

--
Ben


100% cpu, epoll_wait() (2 replies)

I have haproxy pinned to the 2nd CPU (CPU1), with frequent config changes
and '-sf' soft-stops, with the now-old, non-listening process nannying
old connections.

Sometimes CPU1 goes to 100%, and then a few minutes later request
latencies suffer across multiple haproxy peers.

An strace of the nanny haproxy process shows a tight loop of :

epoll_wait(0, {}, 200, 0) = 0
epoll_wait(0, {}, 200, 0) = 0
epoll_wait(0, {}, 200, 0) = 0

I've searched the archives and found similar, but old-ish, complaints
about comparable circumstances, with fixes/patches mentioned.

This has happened with both 1.5.3 and 1.5.17.

Insights ?

===========

# cat /proc/version
Linux version 3.16.0-0.bpo.4-amd64 (debian-kernel@lists.debian.org)
(gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian
3.16.7-ckt25-1~bpo70+1 (2016-04-02)

# haproxy -vv
HA-Proxy version 1.5.17 2016/04/13
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -g -O2 -fstack-protector --param=ssp-buffer-size=4
-Wformat -Werror=format-security -D_FORTIFY_SOURCE=2
OPTIONS = USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.30 2012-02-04
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT
IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Coding style for config files (no replies)

[SPAM] Non-receipt of payment (no replies)

Dear EDF customer:

Your payment has

[PATCH 2/2] CLEANUP: Use server_parse_maxconn_change_request for maxconn CLI updates (no replies)

---
src/dumpstats.c | 24 ++++--------------------
1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/src/dumpstats.c b/src/dumpstats.c
index da26f80..bb62c41 100644
--- a/src/dumpstats.c
+++ b/src/dumpstats.c
@@ -1827,34 +1827,18 @@ static int stats_sock_parse_request(struct stream_interface *si, char *line)
}
else if (strcmp(args[2], "server") == 0) {
struct server *sv;
- int v;
+ const char *warning;

sv = expect_server_admin(s, si, args[3]);
if (!sv)
return 1;

- if (!*args[4]) {
- appctx->ctx.cli.msg = "Integer value expected.\n";
- appctx->st0 = STAT_CLI_PRINT;
- return 1;
- }
-
- v = atoi(args[4]);
- if (v < 0) {
- appctx->ctx.cli.msg = "Value out of range.\n";
+ warning = server_parse_maxconn_change_request(sv, args[4]);
+ if (warning) {
+ appctx->ctx.cli.msg = warning;
appctx->st0 = STAT_CLI_PRINT;
- return 1;
- }
-
- if (sv->maxconn == sv->minconn) { // static maxconn
- sv->maxconn = sv->minconn = v;
- } else { // dynamic maxconn
- sv->maxconn = v;
}

- if (may_dequeue_tasks(sv, sv->proxy))
- process_srv_queue(sv);
-
return 1;
}
else if (strcmp(args[2], "global") == 0) {
--
2.7.0

[PATCH 1/2] MINOR: Add ability for agent-check to set server maxconn (no replies)

This is very useful in complex architectures where HAproxy is balancing
DB connections, for example. We want to keep the maxconn high in order
to avoid issues with queueing at the LB level when there is slowness in
another part of the system. An example is an architecture where each
thread opens multiple DB connections which, if they get stuck in the
queue, cause a snowball effect (old connections aren't closed, new ones
cannot be established). These connections are mostly idle and the DB
server has no problem handling thousands of them.

Being able to dynamically set maxconn depending on backend usage (load
average, CPU, memory, etc.) lets us keep maxconn high for situations
like the above, but lower it when there are real issues and the backend
servers become overloaded (cache issues, DB gets hit hard).
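
For illustration, with this change an agent could reply with something
like the following line to keep the server up at 50% weight and set its
maxconn to 500 (illustrative values):

    up 50% 500m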
---
doc/configuration.txt | 3 +++
include/proto/server.h | 8 ++++++++
src/checks.c | 18 +++++++++++++++++-
src/server.c | 27 +++++++++++++++++++++++++++
4 files changed, 55 insertions(+), 1 deletion(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index c705a09..640c0f3 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -10146,6 +10146,9 @@ agent-check
weight is reported on the stats page as "DRAIN" since it has the same
effect on the server (it's removed from the LB farm).

+ - An ASCII representation of a positive integer, followed by a single letter
+ 'm'. Values in this format will set the maxconn of a server.
+
- The word "ready". This will turn the server's administrative state to the
READY mode, thus cancelling any DRAIN or MAINT state

diff --git a/include/proto/server.h b/include/proto/server.h
index 872503c..176851a 100644
--- a/include/proto/server.h
+++ b/include/proto/server.h
@@ -113,6 +113,14 @@ const char *server_parse_addr_change_request(struct server *sv,
const char *addr_str, const char *updater);

/*
+ * Parses maxconn_str and configures sv accordingly.
+ * Returns NULL on success, error message string otherwise.
+ */
+const char *server_parse_maxconn_change_request(struct server *sv,
+ const char *maxconn_str);
+
+
+/*
* Return true if the server has a zero user-weight, meaning it's in draining
* mode (ie: not taking new non-persistent connections).
*/
diff --git a/src/checks.c b/src/checks.c
index 35fd020..f3f767d 100644
--- a/src/checks.c
+++ b/src/checks.c
@@ -938,6 +938,7 @@ static void event_srv_chk_r(struct connection *conn)
const char *hs = NULL; /* health status */
const char *as = NULL; /* admin status */
const char *ps = NULL; /* performance status */
+ const char *cs = NULL; /* maxconn */
const char *err = NULL; /* first error to report */
const char *wrn = NULL; /* first warning to report */
char *cmd, *p;
@@ -1039,10 +1040,14 @@ static void event_srv_chk_r(struct connection *conn)
else if (strcasecmp(cmd, "maint") == 0) {
as = cmd;
}
- /* else try to parse a weight here and keep the last one */
+ /* try to parse a weight here and keep the last one */
else if (isdigit((unsigned char)*cmd) && strchr(cmd, '%') != NULL) {
ps = cmd;
}
+ /* try to parse a maxconn here */
+ else if (isdigit((unsigned char)*cmd) && strchr(cmd, 'm') != NULL) {
+ cs = cmd;
+ }
else {
/* keep a copy of the first error */
if (!err)
@@ -1079,6 +1084,17 @@ static void event_srv_chk_r(struct connection *conn)
wrn = msg;
}

+ if (cs) {
+ const char *msg;
+
+ /* Remove character 'm' before setting maxconn */
+ *strchr(cs, 'm') = '\0';
+
+ msg = server_parse_maxconn_change_request(s, cs);
+ if (!wrn || !*wrn)
+ wrn = msg;
+ }
+
/* and finally health status */
if (hs) {
/* We'll report some of the warnings and errors we have
diff --git a/src/server.c b/src/server.c
index 5a2c58a..1095754 100644
--- a/src/server.c
+++ b/src/server.c
@@ -831,6 +831,33 @@ const char *server_parse_addr_change_request(struct server *sv,
return "Could not understand IP address format.\n";
}

+const char *server_parse_maxconn_change_request(struct server *sv,
+ const char *maxconn_str)
+{
+ long int v;
+ char *end;
+
+ if (!*maxconn_str)
+ return "Require <maxconn>.\n";
+
+ v = strtol(maxconn_str, &end, 10);
+ if (end == maxconn_str)
+ return "maxconn string empty or preceded by garbage";
+ else if (end[0] != '\0')
+ return "Trailing garbage in maxconn string";
+
+ if (sv->maxconn == sv->minconn) { // static maxconn
+ sv->maxconn = sv->minconn = v;
+ } else { // dynamic maxconn
+ sv->maxconn = v;
+ }
+
+ if (may_dequeue_tasks(sv, sv->proxy))
+ process_srv_queue(sv);
+
+ return NULL;
+}
+
int parse_server(const char *file, int linenum, char **args, struct proxy *curproxy, struct proxy *defproxy)
{
struct server *newsrv = NULL;
--
2.7.0

[SPAM] The best loan suited to your profile (no replies)

KreditConso.com
A loan commits you and must be repaid. Check your repayment capacity before committing.

Winsonic Modbus Remote I/O with RS485/RS422, Ethernet, USB communication (no replies)

Modbus (supports RTU and TCP/UDP protocols) Remote I/O

Winsonic provides all-in-one MODBUS remote I/O products that combine RS485/RS422, Ethernet and USB interfaces for industrial control. They are flexible to implement in the customer's application, and the combined interfaces can also provide redundancy to increase the safety of the communication.

Main features:

- Supports both Modbus RTU and TCP/UDP protocols
- Modbus with 3 communication interfaces: RS485/RS422, Ethernet (10/100 Mbps) and USB
- Web-based remote I/O
- Remote and local monitoring
- Flexible user-defined Modbus addresses
- Multi-channel data acquisition module
- Power input DC 12V - 48V

Application:
(product application images omitted)

For more detailed information about the Modbus Remote I/O, please visit our website: www.ewinsonic.com

886 3 3704789
886 3 3704722
sales@ewinsonic.com

Pesticide intermediates (no replies)

Hello,

Our company specializes in international air express shipping of chemical products, including international express for dangerous goods. We have many years of hands-on experience and route resources for shipping chemical products. There is no need to provide any documentation or DGM appraisal reports; you only need to provide a sample and the recipient's address, and we can arrange safe express export with worldwide delivery. If your company has any questions, you are welcome to contact us.

Ms. Zhang
Tel: 136 2175 9443
QQ: 2624340815

Rate limiting with multiple haproxy servers (no replies)

Hi,

We have multiple haproxy servers receiving traffic from our firewall, we
want to apply some rate limiting that takes into account counters from all
the haproxy servers.

I am testing this with 1.6.4 and I tried the peers feature, but I am not able
to get it to work. I understand that counter aggregation does not happen, but
even replication doesn't seem to be working for me.

Conf:

peers article
    peer haproxy1 127.0.0.1:11023
    peer haproxy2 127.0.0.1:11024

global
    stats socket /tmp/haproxy.sock mode 600 level admin
    #maxconn 3000
    #maxconn 10000

defaults
    log 127.0.0.1 local1
    option httplog
    mode http
    timeout server 120s
    timeout queue 1000s
    timeout client 1200s    # client inactivity time
    timeout connect 100s    # timeout for server connection
    timeout check 500s      # timeout for server check pings
    maxconn 10000
    retries 2
    option redispatch
    option http-server-close

frontend haproxy1_l2
    mode http
    option forwardfor
    capture cookie egnyte-proxy len 32
    capture request header host len 32

    bind *:1443 ssl crt /home/egnyte/haproxy/conf/key.pem crt /home/egnyte/haproxy/conf/certs

    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    stick-table type string size 1M expire 10m store conn_cur peers article

    acl is_range hdr_sub(Range) bytes=
    acl is_path_throttled path_beg /public-api/v1/fs-content-download
    acl is_path_throttled path_end /get_file
    acl is_path_throttled path_beg /wsgi/print_headers.py

    #tcp-request content track-sc1 base32 if is_range is_path_throttled
    http-request set-header X-track %
    http-request track-sc1 req.hdr(X-track) if is_range is_path_throttled
    http-request deny if { sc1_conn_cur gt 2 } is_range is_path_throttled

    default_backend apache_l1

backend apache_l1
    mode http
    maxconn 10000
    reqadd X-Haproxy-L1:\ true
    server apache_l1 127.0.0.1:80

Is there any other way to have rate limiting that can track the counters
across haproxy servers? How about seeding counters into Redis using Lua and
then reading them to rate limit? Is it even feasible? I have not looked at it
in detail yet, just wanted to see if somebody has tried something similar.

Thanks
Sachin
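
P.S. One thing I still need to double-check is the local peer naming: as far
as I understand, each peers section needs an entry whose name matches the
local instance (the hostname, or whatever is passed with -L), otherwise
replication will not start. A minimal two-node sketch of what I mean (names
and addresses are illustrative):

peers article
    peer haproxy1 192.168.0.11:1024
    peer haproxy2 192.168.0.12:1024

# started on each box as: haproxy -f haproxy.cfg -L haproxy1   (or -L haproxy2)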

Cow leather handbag from shera bag factory (no replies)

Dear Purchasing Manager,


How are you?
I am writing to introduce our fashion bags.
These bags are made of high-quality real cow leather with exquisite craftsmanship.
The colors, shape and details can be changed as you request.
Could I ask whether you will be coming to the Canton Fair in April?
Is there anything I can help you with, such as booking a hotel or picking you up at the airport?
Look forward to hearing from you.


--

Thanks and Best Regards
Candy
Sales Manager
Guangzhou Shera-Bag Factory
Phone: 86-20-34329687
Tel: 86-13825029557
Email: candy@sherabag.com
Web: http://www.sherabag.com
Add: Rom1705 West Tower, Building C, Poly World Trade Center, No.1000 Xingandong Road, Haizhu District Guangzhou China.

[SPAM] 8% per year net of tax, it's possible! (no replies)

INVEST IN DIAMONDS
8% per year
Net of tax, it's possible!
A savings alternative to the Livret A, PEL or life insurance
LEARN MORE
"Putting your cash to work today has become a real headache. Short-term, risk-free products yield less and less. What are the alternatives?"
LEARN MORE
The press is talking about it:
a safe haven today and a financial investment for the future.
See the report.
LEARN MORE

subscribe (no replies)
