Channel: Serverphorums.com - HAProxy

Problems setting up SMTP health checks with Sophos email gateway (2 replies)

I have a very simple configuration that I've set up to handle load balancing with my Sophos email gateway.

listen smtp_relay
    bind IP:25
    mode tcp
    option smtpchk EHLO domain.com
    balance roundrobin
    server SMTPGATEWAY IP:25 check
    server ALTERNATEGATEWAY IP:25 backup check

According to the logs on the Sophos appliance, the health checks are being sent in this format:

EHLO domain.com\r\n

This throws the error "501 Syntactically invalid EHLO argument(s)".

If I telnet to the host and manually use EHLO domain.com it works fine, but if I send EHLO domain.com\r\n it reproduces the error.

I also tested on my Postfix and Exchange servers, and they seem to handle the \r\n just fine, but the email gateway freaks out. I've sent a ticket to them as well, but I was wondering if there is a way I'm not seeing in the documentation to suppress the \r\n in the health check without writing a custom check.
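
(For reference: the trailing \r\n cannot really be suppressed, since SMTP requires every command line to end in CRLF, so a compliant server should accept it. The custom-check route would be a tcp-check sequence roughly like the sketch below; this is only a sketch, reusing the placeholder domain from the config above.)

listen smtp_relay
    bind IP:25
    mode tcp
    option tcp-check
    tcp-check connect
    tcp-check expect rstring ^220          # wait for the server banner
    tcp-check send EHLO\ domain.com\r\n    # CRLF is still required by SMTP itself
    tcp-check expect rstring ^250          # EHLO accepted
    tcp-check send QUIT\r\n
    tcp-check expect rstring ^221
    balance roundrobin
    server SMTPGATEWAY IP:25 check
    server ALTERNATEGATEWAY IP:25 backup check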

Thanks!


H2O - an optimized HTTP server (2 replies)

[ANNOUNCE] haproxy-1.9-dev3 (5 replies)

Subject: [ANNOUNCE] haproxy-1.9-dev3
To: haproxy@formilux.org

Hi,

Now that Kernel Recipes is over (it was another awesome edition), I'm back
to my haproxy activities. Well, I was pleased to see that my coworkers
reserved me a nice surprise by fixing the pending bugs that were plaguing
dev2. I should go to conferences more often, maybe it's a message from
them to make me understand I'm disturbing them when I'm at the office ;-)

So I thought it was a good opportunity to issue dev3 now, make it what
dev2 should have been, and forget that miserable one, even though I was
told that I'll soon get another batch of patches to merge; we'll simply
emit dev4 then, so there's no need to further delay the pending fixes.

HAProxy 1.9-dev3 was released on 2018/09/29. It added 35 new commits
after version 1.9-dev2.

There's nothing fancy here. The connection issues are supposedly addressed
(please expect a bit more in this area soon). The HTTP/1 generic parser is
getting smarter since we're reimplementing the features that were in the
old HTTP code (content-length and transfer-encoding are now handled). Lua can
now access stick tables. I haven't checked precisely how, but I saw that
Adis updated the doc so all the info should be there.

Ah, a small change is that we now build with -Wextra after having addressed
all warnings reported up to gcc 7.3 and filtered a few useless ones. If you
get some build warnings, please report them along with your gcc version and
your build options. I personally build with -Werror in addition to this one,
and would like to keep this principle to catch certain bugs or new compiler
jokes earlier in the future.

As usual, this is an early development version. It's fine if you want to
test the changes, but avoid putting this into production if it can cost
you your job!

Please find the usual URLs below :
Site index : http://www.haproxy.org/
Discourse : http://discourse.haproxy.org/
Sources : http://www.haproxy.org/download/1.9/src/
Git repository : http://git.haproxy.org/git/haproxy.git/
Git Web browsing : http://git.haproxy.org/?p=haproxy.git
Changelog : http://www.haproxy.org/download/1.9/src/CHANGELOG
Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Adis Nezirovic (1):
MEDIUM: lua: Add stick table support for Lua.

Bertrand Jacquin (1):
DOC: Fix typos in lua documentation

Christopher Faulet (3):
MINOR: h1: Add H1_MF_XFER_LEN flag
BUG/MEDIUM: h1: Really skip all updates when incomplete messages are parsed
BUG/MEDIUM: http: Don't parse chunked body if there is no input data

Dragan Dosen (1):
BUG/MEDIUM: patterns: fix possible double free when reloading a pattern list

Moemen MHEDHBI (1):
DOC: Update configuration doc about the maximum number of stick counters.

Olivier Houchard (4):
BUG/MEDIUM: process_stream: Don't use si_cs_io_cb() in process_stream().
MINOR: h2/stream_interface: Reintroduce te wake() method.
BUG/MEDIUM: h2: Wake the task instead of calling h2_recv()/h2_process().
BUG/MEDIUM: process_stream(): Don't wake the task if no new data was received.

Willy Tarreau (24):
BUG/MINOR: h1: don't consider the status for each header
MINOR: h1: report in the h1m struct if the HTTP version is 1.1 or above
MINOR: h1: parse the Connection header field
MINOR: http: add http_hdr_del() to remove a header from a list
MINOR: h1: add headers to the list after controls, not before
MEDIUM: h1: better handle transfer-encoding vs content-length
MEDIUM: h1: deduplicate the content-length header
CLEANUP/CONTRIB: hpack: remove some h1 build warnings
BUG/MINOR: tools: fix set_net_port() / set_host_port() on IPv4
BUG/MINOR: cli: make sure the "getsock" command is only called on connections
MINOR: stktable: provide an unchecked version of stktable_data_ptr()
MINOR: stream-int: make si_appctx() never fail
BUILD: ssl_sock: remove build warnings on potential null-derefs
BUILD: stats: remove build warnings on potential null-derefs
BUILD: stream: address null-deref build warnings at -Wextra
BUILD: http: address a couple of null-deref warnings at -Wextra
BUILD: log: silent build warnings due to unchecked __objt_{server,applet}
BUILD: dns: fix null-deref build warning at -Wextra
BUILD: checks: silence a null-deref build warning at -Wextra
BUILD: connection: silence a couple of null-deref build warnings at -Wextra
BUILD: backend: fix 3 build warnings related to null-deref at -Wextra
BUILD: sockpair: silence a build warning at -Wextra
BUILD: build with -Wextra and sort out certain warnings
BUG/CRITICAL: hpack: fix improper sign check on the header index value

---

Do `tune.rcvbuf.server` and `tune.sndbuf.server` (and their `tune.*.client` equivalents) lead to TCP fragmentation? (16 replies)

Hello all!

I've played with `tune.rcvbuf.server`, `tune.sndbuf.server`,
`tune.rcvbuf.client`, and `tune.sndbuf.client`, explicitly setting them
to various values ranging from 4k to 256k. Unfortunately, in all cases
this seems to generate TCP packets that are larger than the advertised
and agreed MSS in both directions, which in turn leads to TCP
fragmentation and reassembly. (Both client and server are Linux >4.10.
The protocol used was HTTP 1.1 over TLS 1.2.)
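
(For reference, these tunables live in the global section; the values
below are purely illustrative, not a recommendation:)

global
    tune.rcvbuf.client 65536
    tune.sndbuf.client 65536
    tune.rcvbuf.server 65536
    tune.sndbuf.server 65536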

The resulting bandwidth was about 100 KB/s (for a client-server latency
of 160 ms).

The only setting that didn't have this effect was not setting them at
all; the resulting bandwidth was then around 10 MB/s.

(I have tested the backend server without HAProxy, in fact two
different webservers, both with and without HAProxy, and I would
exclude them as the issue.)


Thus I was wondering if anyone has encountered similar issues and how
they fixed them. (My guess is that this comes more from the Linux TCP
implementation than from HAProxy.)


As a side note, is the following interpretation correct:
* `tune.*buf.server` refers to the TCP sockets that the frontend binds
to and listens for actual clients;
* `tune.*buf.client` refers to the TCP sockets that the backend
creates and connects to the actual servers;

Thanks,
Ciprian.

Allow configuration of pcre-config path (3 replies)

Dear all,

I added haproxy to buildroot, and to do so I added a way of configuring the
path of pcre-config and pcre2-config. Please find a patch attached. As
this is my first contribution to haproxy, please excuse me if I made any
mistakes.

Best Regards,

Fabrice

[PATCH] DOC: clarify force-private-cache is an option (1 reply)

"boolean" may confuse users into thinking they need to provide
additional arguments, like false or true. This is a simple option
like many others, so lets not confuse the users with internals.

Also fixes an additional typo.

Should be backported to 1.8 and 1.7.
---
doc/configuration.txt | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/doc/configuration.txt b/doc/configuration.txt
index 336ef1f..d890b0b 100644
--- a/doc/configuration.txt
+++ b/doc/configuration.txt
@@ -1660,7 +1660,7 @@ tune.ssl.cachesize <number>
this value to 0 disables the SSL session cache.

tune.ssl.force-private-cache
- This boolean disables SSL session cache sharing between all processes. It
+ This option disables SSL session cache sharing between all processes. It
should normally not be used since it will force many renegotiations due to
clients hitting a random process. But it may be required on some operating
systems where none of the SSL cache synchronization method may be used. In
@@ -6592,7 +6592,7 @@ option smtpchk <hello> <domain>
yes | no | yes | yes
Arguments :
<hello> is an optional argument. It is the "hello" command to use. It can
- be either "HELO" (for SMTP) or "EHLO" (for ESTMP). All other
+ be either "HELO" (for SMTP) or "EHLO" (for ESMTP). All other
values will be turned into the default command ("HELO").

<domain> is the domain name to present to the server. It may only be
--
2.7.4

lua time tracking (1 reply)

What are the thoughts around putting time-tracking stats around Lua
calls? Just really basic info, like how long a request spent running Lua
code, similar to how we already have metrics for time spent in queue,
connecting, waiting on the response, etc.

Currently I accomplish this manually by grabbing a timestamp at the
beginning and end of all Lua actions, storing the duration in a
transaction variable, and adding that variable to the log. But I was
wondering whether this should instead have a native solution.
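
(A minimal sketch of that manual approach on the haproxy side; `lua.track_time`,
`txn.lua_ms`, and the backend name are placeholders for a registered Lua action,
the transaction variable it sets, and a real backend:)

frontend fe_main
    mode http
    bind :80
    # hypothetical Lua action registered with core.register_action(); it records
    # its own duration into the transaction variable txn.lua_ms
    # (requires the script to be loaded with lua-load in the global section)
    http-request lua.track_time
    # expose that variable in the logs next to the usual timers
    log-format "%ci:%cp [%tr] %ft %b/%s %TR/%Tw/%Tc/%Tr/%Ta %ST lua_ms=%[var(txn.lua_ms)]"
    default_backend be_app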
I see we already have `tune.lua.service-timeout`, so it appears some
timing data is already tracked. However, one difference between that data
and my custom implementation is that mine includes sleep time. So we
might want to expose the Lua time as two different metrics: one for CPU run
time and another for wall-clock time.

The use case is that as we put more code into Lua scripts, we want to be
aware of the impact this code has on the performance of requests and on
response times.

-Patrick

HAProxy is not supporting MySQL-8.0 default user authentication plugin (caching_sha2_password) (no replies)

Hi Team,

HAProxy does not support MySQL 8.0's default user authentication plugin
(caching_sha2_password).

HAProxy version info:

$ haproxy -vv
HA-Proxy version 1.5.18 2016/05/10
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing
OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1
USE_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

$

Error Info :

Sep 27 04:22:55 localhost haproxy[29022]: Server mysql-cluster/node1 is
DOWN, reason: Layer7 wrong status, code: 0, info: "Client does not support
authentication protocol requested by server; consider upgrading MySQL
client", check duration: 0ms. 1 active and 0 backup servers left. 0
sessions active, 0 requeued, 0 remaining in queue.
Sep 27 04:22:56 localhost haproxy[29023]: Server mysql-cluster/node2 is
DOWN, reason: Layer7 wrong status, code: 0, info: "Client does not support
authentication protocol requested by server; consider upgrading MySQL
client", check duration: 0ms. 0 active and 0 backup servers left. 0
sessions active, 0 requeued, 0 remaining in queue.
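
(A hedged workaround sketch, assuming the monitoring user can still be created
with mysql_native_password on the MySQL 8.0 side; the user name and addresses
below are placeholders:)

backend mysql-cluster
    mode tcp
    option mysql-check user haproxy_check
    server node1 10.0.0.1:3306 check
    server node2 10.0.0.2:3306 check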



--
Best Regards,

*Ramesh Sivaraman*
*Senior QA Engineer, Percona*
http://www.percona.com/ http://percona.com/
Phone : +91 8606432991
Skype : rameshvs02

High CPU Usage followed by segfault error (1 reply)

Hello,

We are currently using haproxy 1.8.3 with a single-process, multithreaded
configuration. We have 1 process and 10 threads, each mapped to a separate
core [0-9]. We are running our haproxy instances on a c4.4xlarge AWS EC2
instance. The only other CPU-intensive process running on this server is a
log shipper, which is explicitly mapped to CPU cores 13-16 using the
taskset command. We have also given haproxy processes 'SCHED_RR' priority 99.

OS: Ubuntu 14
Kernel: 4.4.0-134-generic

The issue we are seeing with haproxy is that all of a sudden CPU usage
spikes to 100% on the cores haproxy is using, causing latency spikes and
high load on the server. We see the following error messages in the
system / kernel logs when this happens.

haproxy[92558]: segfault at 8 ip 000055f04b1f5da2 sp 00007ffdab2bdd40 error 6 in haproxy[55f04b101000+170000]

Sep 29 12:21:02 marathonlb-int21 kernel: [2223350.996059] sched: RT
throttling activated

We are using marathon-lb for auto-discovery, and reloads are quite frequent
on this server. The last time this issue happened we saw haproxy using 750%
of CPU and it went into D state. The old process was also consuming CPU.

hard-stop-after was not set in our haproxy configuration and we were seeing
multiple old PIDs running on the server. After the last CPU outage we set
'hard-stop-after' to 10s, and now we no longer see multiple haproxy
instances running after a reload. I would really appreciate it if someone
could explain why the CPU usage spikes along with the above segfault error,
and what this error actually means.
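
(For reference, the setting mentioned above lives in the global section:)

global
    hard-stop-after 10s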

FYI: there was no traffic spike on this haproxy instance when the issue
happened. We have even seen the same issue on a non-prod haproxy where no
traffic was coming in; the system went down due to CPU usage and we found
the same segfault error in the logs.

Thanks


Few problems seen in haproxy? (threads, connections). (no replies)

Hi Willy, and community developers,

I am not sure if I am doing something wrong, but wanted to report
some issues that I am seeing. Please let me know if this is a problem.

1. HAProxy system:
Kernel: 4.17.13,
CPU: 48 core E5-2670 v3
Memory: 128GB memory
NIC: Mellanox 40g with IRQ pinning

2. Client, 48 core similar to server. Test command line:
wrk -c 4800 -t 48 -d 30s http://<IP:80>/128

3. HAProxy version: I am testing both 1.8.14 and 1.9-dev3 (git checkout as of Oct 2nd).
# haproxy-git -vv
HA-Proxy version 1.9-dev3 2018/09/29
Copyright 2000-2018 Willy Tarreau <willy@haproxy.org>

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement
-fwrapv -fno-strict-overflow -Wno-unused-label -Wno-sign-compare
-Wno-unused-parameter -Wno-old-style-declaration -Wno-ignored-qualifiers
-Wno-clobbered -Wno-missing-field-initializers -Wtype-limits
OPTIONS = USE_ZLIB=yes USE_OPENSSL=1 USE_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Built with OpenSSL version : OpenSSL 1.0.2g 1 Mar 2016
Running on OpenSSL version : OpenSSL 1.0.2g 1 Mar 2016
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT
IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.38 2015-11-23
Running on PCRE version : 8.38 2015-11-23
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"),
deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available multiplexer protocols :
(protocols markes as <default> cannot be specified using 'proto' keyword)
h2 : mode=HTTP side=FE
<default> : mode=TCP|HTTP side=FE|BE

Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace

4. HAProxy results for #processes and #threads:
   #      Threads-RPS    Procs-RPS
   1          20903        19280
   2          46400        51045
   4          96587       142801
   8         172224       254720
  12         210451       437488
  16         173034       437375
  24          79069       519367
  32          55607       586367
  48          31739       596148

5. Lock stats for 1.9-dev3: Some write locks on average took a lot more time
to acquire, e.g. "POOL" and "TASK_WQ". For 48 threads, I get:
Stats about Lock FD:
# write lock : 143933900
# write unlock: 143933895 (-5)
# wait time for write : 11370.245 msec
# wait time for write/lock: 78.996 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock TASK_RQ:
# write lock : 2062874
# write unlock: 2062875 (1)
# wait time for write : 7820.234 msec
# wait time for write/lock: 3790.941 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock TASK_WQ:
# write lock : 2601227
# write unlock: 2601227 (0)
# wait time for write : 5019.811 msec
# wait time for write/lock: 1929.786 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock POOL:
# write lock : 2823393
# write unlock: 2823393 (0)
# wait time for write : 11984.706 msec
# wait time for write/lock: 4244.788 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock LISTENER:
# write lock : 184
# write unlock: 184 (0)
# wait time for write : 0.011 msec
# wait time for write/lock: 60.554 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock PROXY:
# write lock : 291557
# write unlock: 291557 (0)
# wait time for write : 109.694 msec
# wait time for write/lock: 376.235 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock SERVER:
# write lock : 1188511
# write unlock: 1188511 (0)
# wait time for write : 854.171 msec
# wait time for write/lock: 718.690 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock LBPRM:
# write lock : 1184709
# write unlock: 1184709 (0)
# wait time for write : 778.947 msec
# wait time for write/lock: 657.501 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock BUF_WQ:
# write lock : 669247
# write unlock: 669247 (0)
# wait time for write : 252.265 msec
# wait time for write/lock: 376.939 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock STRMS:
# write lock : 9335
# write unlock: 9335 (0)
# wait time for write : 0.910 msec
# wait time for write/lock: 97.492 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec
Stats about Lock VARS:
# write lock : 901947
# write unlock: 901947 (0)
# wait time for write : 299.224 msec
# wait time for write/lock: 331.753 nsec
# read lock : 0
# read unlock : 0 (0)
# wait time for read : 0.000 msec
# wait time for read/lock : 0.000 nsec

6. CPU utilization after the test for processes/threads: haproxy-1.9-dev3 runs
at 4800% (48 cpus) for 30 seconds after the test is done. For 1.8.14,
this behavior was not seen. I ran the following command for both:
"ss -tnp | awk '{print $1}' | sort | uniq -c | sort -n"
1.8.14 during test:
451 SYN-SENT
9166 ESTAB
1.8.14 after test:
2 ESTAB

1.9-dev3 during test:
109 SYN-SENT
9400 ESTAB
1.9-dev3 after test:
2185 CLOSE-WAIT
2187 ESTAB
All connections that were in CLOSE-WAIT were from the client, while all
connections in the ESTAB state were to the server. This lasted for 30
seconds. On the client system, all sockets were in FIN-WAIT-2 state:
2186 FIN-WAIT-2
This (2185/2186) seems to imply that the client closed the connection but
haproxy did not close the socket for 30 seconds. This also results in high
CPU utilization on haproxy for some reason (100% for each process for 30
seconds), which is also unexpected as the remote side has closed the socket.

7. Configuration file for process mode:
global
    daemon
    maxconn 26000
    nbproc 48
    stats socket /var/run/ha-1-admin.sock mode 600 level admin process 1
    # (and so on for 48 processes).

defaults
    option http-keep-alive
    balance leastconn
    retries 2
    option redispatch
    maxconn 25000
    option splice-response
    option tcp-smart-accept
    option tcp-smart-connect
    option splice-auto
    timeout connect 5000ms
    timeout client 30000ms
    timeout server 30000ms
    timeout client-fin 30000ms
    timeout http-request 10000ms
    timeout http-keep-alive 75000ms
    timeout queue 10000ms
    timeout tarpit 15000ms

frontend fk-fe-upgrade-80
    mode http
    default_backend fk-be-upgrade
    bind <VIP>:80 process 1
    # (and so on for 48 processes).

backend fk-be-upgrade
    mode http
    default-server maxconn 2000 slowstart
    # 58 server lines follow, e.g.: "server <name> <ip:80>"

8. Configuration file for thread mode:
global
    daemon
    maxconn 26000
    stats socket /var/run/ha-1-admin.sock mode 600 level admin
    nbproc 1
    nbthread 48
    # cpu-map auto:1/1-48 0-39

defaults
    option http-keep-alive
    balance leastconn
    retries 2
    option redispatch
    maxconn 25000
    option splice-response
    option tcp-smart-accept
    option tcp-smart-connect
    option splice-auto
    timeout connect 5000ms
    timeout client 30000ms
    timeout server 30000ms
    timeout client-fin 30000ms
    timeout http-request 10000ms
    timeout http-keep-alive 75000ms
    timeout queue 10000ms
    timeout tarpit 15000ms

frontend fk-fe-upgrade-80
    mode http
    bind <VIP>:80 process 1/1-48
    default_backend fk-be-upgrade

backend fk-be-upgrade
    mode http
    default-server maxconn 2000 slowstart
    # 58 server lines follow, e.g.: "server <name> <ip:80>"

I had also captured 'perf' output for the system for threads vs. processes;
I can send it later if required.

Thanks,
- Krishna

Redirect to HTTPS (1 reply)

I would like to redirect everything from HTTP to HTTPS except a specific URL. Here is what I have, but it doesn't seem to be working.

redirect scheme https if !{ ssl_fc } OR !{ hdr(Host) -m -I www.blah.com }
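
(A hedged sketch of one way to express the intent, assuming the redirect is done
on the plain-HTTP frontend; the path /except-this is a placeholder for the URL
to exclude:)

frontend fe_http
    bind :80
    acl keep_http path_beg /except-this
    http-request redirect scheme https code 301 unless keep_http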

Thanks,

[PATCH] MINOR: generate-certificates for BoringSSL (no replies)

Hi,

For generate-certificates, X509V3_EXT_conf is used, but it's a (very) old API
call: X509V3_EXT_nconf must be preferred. OpenSSL compatibility is OK
because the call is inside #ifdef SSL_CTRL_SET_TLSEXT_HOSTNAME, which was
introduced 5 years after X509V3_EXT_nconf.
(BoringSSL only has X509V3_EXT_nconf.)

Christopher, if you have time to check this little patch :)

++
Manu

haproxy start problem (2 replies)

I'm new to this list and I subscribed hoping to get help with an
installation problem. The OS is Ubuntu 18.04 and haproxy is the latest
stable version, 1.8.

systemctl restart haproxy and a subsequent

systemctl status haproxy.service

gives:


● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/lib/systemd/system/haproxy.service; disabled;
vendor preset: enabled)
   Active: failed (Result: exit-code) since Wed 2018-10-03 16:43:15
CEST; 14s ago
     Docs: man:haproxy(1)
           file:/usr/share/doc/haproxy/configuration.txt.gz
  Process: 7535 ExecStart=/usr/sbin/haproxy -Ws -f $CONFIG -p $PIDFILE
$EXTRAOPTS (code=exited, status=1/FAILURE)
  Process: 7534 ExecStartPre=/usr/sbin/haproxy -f $CONFIG -c -q
$EXTRAOPTS (code=exited, status=0/SUCCESS)
 Main PID: 7535 (code=exited, status=1/FAILURE)

Oct 03 16:43:14 myserver.org systemd[1]: haproxy.service: Main process
exited, code=exited, status=1/FAILURE
Oct 03 16:43:14 myserver.org systemd[1]: haproxy.service: Failed with
result 'exit-code'.
Oct 03 16:43:14 myserver.org systemd[1]: Failed to start HAProxy Load
Balancer.
Oct 03 16:43:15 myserver.org systemd[1]: haproxy.service: Service
hold-off time over, scheduling restart.
Oct 03 16:43:15 myserver.org systemd[1]: haproxy.service: Scheduled
restart job, restart counter is at 5.
Oct 03 16:43:15 myserver.org systemd[1]: Stopped HAProxy Load Balancer.
Oct 03 16:43:15 myserver.org systemd[1]: haproxy.service: Start request
repeated too quickly.
Oct 03 16:43:15 myserver.org systemd[1]: haproxy.service: Failed with
result 'exit-code'.
Oct 03 16:43:15 myserver.org systemd[1]: Failed to start HAProxy Load
Balancer.


/etc/haproxy.cfg:

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        # https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        ssl-default-bind-ciphers
ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http
    log global
    mode http
    compression algo gzip
    compression type text/html text/css text/plain text/vcard
text/vnd.rim.location.xloc text/vtt text/x-component
text/x-cross-domain-policy application/atom+xml application/javascript
application/x-javascript application/json application/ld+json
application/manifest+json application/rss+xml application/vnd.geo+json
application/vnd.ms-fontobject application/x-font-ttf
application/x-web-app-manifest+json application/xhtml+xml
application/xml font/opentype image/bmp image/svg+xml image/x-icon
text/cache-manifest
    balance roundrobin
    option dontlog-normal
    option dontlognull
    option httpclose
    option forwardfor

frontend http-in
    bind *:80
    acl is_static       path_beg /export/ /opencms/ /resources/
/javadoc/ /VAADIN/ /workplace /opencms-login/
    acl is_website      hdr_beg(host) -i website
    use_backend website-static if is_website is_static server
cms.myserver.org 127.0.0.1:8080
    use_backend website if is_website

backend website-static
    server cms.myserver.org 127.0.0.1:8080

backend website
    reqirep ^([^\ :]*)\ /(.*) \1\ /opencms/\2
    server www.myserver.org 127.0.0.1:8080



Tomcat is listening on localhost:8080

netstat -an (excerpt):

tcp        0      0 127.0.0.1:8080 0.0.0.0:*               LISTEN


Any clues as to what might be wrong?


--

Christoph

[PATCH] REGTEST/MINOR: compatibility: use unix@ instead of abns@ (1 reply)

Hi Frederic, Willy,

Attached is a patch that changes /reg-tests/connection/b00000.vtc to
use unix@ sockets, so it is compatible with FreeBSD and possibly other OSes.

As discussed in the other thread
https://www.mail-archive.com/haproxy@formilux.org/msg31370.html.

Regards,
PiBa-NL (Pieter)

From 8c5ff12b4603e3525445d6f708f6239974003df4 Mon Sep 17 00:00:00 2001
From: PiBa-NL <PiBa.NL.dev@gmail.com>
Date: Wed, 3 Oct 2018 23:54:49 +0200
Subject: [PATCH] REGTEST/MINOR: compatibility: use unix@ instead of abns@
sockets

Changes the /reg-tests/connection/b00000.vtc test to use unix@ instead of abns@ sockets.
This to allow the test to complete on other operating systems like FreeBSD that do not have 'namespaces'.
---
reg-tests/connection/b00000.vtc | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/reg-tests/connection/b00000.vtc b/reg-tests/connection/b00000.vtc
index 3a873848..cbb8a7b0 100644
--- a/reg-tests/connection/b00000.vtc
+++ b/reg-tests/connection/b00000.vtc
@@ -36,14 +36,14 @@ haproxy h1 -conf {

listen http
bind-process 1
- bind abns@http accept-proxy name ssl-offload-http
+ bind unix@${testdir}/http.socket accept-proxy name ssl-offload-http
option forwardfor

listen ssl-offload-http
option httplog
bind-process 2-4
bind "fd@${ssl}" ssl crt ${testdir}/common.pem ssl no-sslv3 alpn h2,http/1.1
- server http abns@http send-proxy
+ server http unix@${testdir}/http.socket send-proxy
} -start


--
2.18.0.windows.1

Redirecting one https site to another (2 replies)

Hi,

I'm not sure if this is possible as haproxy isn't terminating SSL in this instance, but I'd like to redirect https://urlone.co.uk to https://www.urlone.co.uk

I have urlone.co.uk pointed to 185.90.33.47 via a DNS A record

bind 181.70.33.47:80
redirect location https://www.urlone.co.uk:443

bind 181.70.33.47:443
redirect location https://www.urlone.co.uk:443
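
(Since haproxy is not terminating SSL on this address, a browser connecting to
https://urlone.co.uk has nothing to complete the TLS handshake with, so a
redirect configured on the :443 bind cannot take effect. A hedged sketch of
what that side would need, assuming a certificate for urlone.co.uk is
available; the crt path is a placeholder:)

frontend in-redirect-ssl-urlone.co.uk
    mode http
    bind 181.70.33.47:443 ssl crt /etc/haproxy/certs/urlone.co.uk.pem
    redirect prefix https://www.urlone.co.uk code 301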


www.urlone.co.uk is pointed to 185.90.33.48 via a DNS A record and I have a config like this:

frontend in-redirect-ssl-www.urlone.co.uk
mode http
bind 181.70.33.48:80
redirect scheme https if !{ ssl_fc }

frontend in-www.urlone.co.uk
mode tcp
bind 181.70.33.48:443
default_backend www.urlone.co.uk

backend www.urlone.co.uk
mode tcp
balance roundrobin
stick-table type ip size 200k expire 30m
stick on src
server prod-web-01 192.168.33.211:443 check port 443
server prod-web-02 192.168.33.212:443 check port 443
server Sorry_Server 192.168.33.200:80 check backup


When I hit urlone.co.uk on HTTP I get redirected to https://www.urlone.co.uk. All good. However, when I hit urlone.co.uk on HTTPS it fails with 'This site can't provide a secure connection' (in Chrome; the message is probably different in other browsers).

Is what I am trying to achieve possible? Grateful for any suggestions.

Thanks,

Mark


Compression disabling on chunked response (no replies)

Hi,

I see this in the documentation:

Compression is disabled when:
* ...
* response header "Transfer-Encoding" contains "chunked" (Temporary
Workaround)
* ....

Is this still accurate?

I have tested a lot of responses with compression enabled in the backend
and the server sending chunked responses, and haproxy compresses the
stream correctly.

What am I missing? I am trying to figure out in which cases haproxy would
not compress a response from the server.
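
(For reference, the compression setup under test looks roughly like this; the
backend name, server address, and content types are illustrative:)

backend be_app
    mode http
    compression algo gzip
    compression type text/html text/plain application/json
    server app1 10.0.0.1:8080 check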

Combine different ACLs under same name (2 replies)

Hello,


I have tested that some types of ACLs can't be combined; as an example:

Server 192.138.1.1, acl with combined rules:

acl rule1 hdr_dom(host) -i test.com
acl rule1 src 192.168.1.2/24
redirect prefix https://yes.com code 301 if rule1
redirect prefix https://no.com

Request from 192.168.1.2:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://yes.com/

Request from 192.168.1.3:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://yes.com/



Server 192.138.1.1, acl with two rules:

acl rule1 hdr_dom(host) -i test.com
acl rule2 src 192.168.1.2/24
redirect prefix https://yes.com code 301 if rule1 rule2
redirect prefix https://no.com

Request from 192.168.1.2:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://yes.com/

Request from 192.168.1.3:

$ curl -I -H "host: test.com" 192.138.1.1
HTTP/1.1 301 Moved Permanently
Content-length: 0
Location: https://no.com/
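
(What the two tests appear to show: acl lines declared under the same name are
ORed together, while several ACL names on one "if" line are ANDed. The same
conditions can also be written explicitly on a single line; a sketch reusing
the names above:)

acl rule1 hdr_dom(host) -i test.com
acl rule2 src 192.168.1.2/24
# AND: both must match
redirect prefix https://yes.com code 301 if rule1 rule2
# OR: either may match (equivalent to declaring both rules under a single acl name)
redirect prefix https://yes.com code 301 if rule1 || rule2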

I looked for this behaviour in the documentation but couldn't find any
reference to it. Can someone tell me where it is documented?


Thanks,

Ambiguity in documentation for `http-request set-header` (no replies)

The documentation for `http-request set-header` is a little ambiguous about
whether it removes all occurrences of the header if it previously existed
or just the first one. From experimentation it appears it is all
occurrences (which I think is preferable).

May I suggest rewording "except that the header name is first removed if it
existed" to "except that all occurrences of the header name are first
removed if any existed"?


haproxy support for stratum protocol ? (5 replies)

Hello,

Please CC me as I'm not subscribed to the list.
Will haproxy consider supporting the stratum protocol?
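
(For what it's worth, stratum mining pools speak line-based JSON-RPC over plain
TCP, so nothing protocol-specific is strictly required just to pass it through;
a generic tcp-mode sketch with placeholder addresses and the commonly used
port 3333:)

listen stratum
    mode tcp
    bind :3333
    balance leastconn
    server pool1 10.0.0.1:3333 check
    server pool2 10.0.0.2:3333 check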

Thanks,
Julien


HA-Proxy configuration (2 replies)

Hi Team,


I am looking for HA-Proxy configuration help in our project. Could someone give me more information on configuring 2 different HA-Proxy servers for high availability?
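
(A hedged sketch of the haproxy-side piece of such a setup: a peers section so
the two nodes can share stick-table state, with the floating IP itself usually
handled outside haproxy, e.g. by keepalived/VRRP. All names and addresses are
placeholders:)

peers ha_pair
    # each node identifies itself by its local peer name (hostname or -L argument)
    peer lb1 10.0.0.11:1024
    peer lb2 10.0.0.12:1024

backend be_app
    stick-table type ip size 200k expire 30m peers ha_pair
    stick on src
    server app1 10.0.0.21:80 check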


Feel free to contact me on - 9849916124


Regards,

Anjireddy.

