Linux version:
Red Hat Enterprise Linux Server release 5.11 (Tikanga)
Linux dpoweb08 2.6.18-417.el5 #1 SMP Sat Nov 19 14:54:59 EST 2016 x86_64 x86_64 x86_64 GNU/Linux
HAProxy version: 1.7.5
Summary:
After upgrading to HAProxy 1.7.5, requests logged with the "SD" termination state have started to appear for one webservice (in the HAProxy documentation, "S" means the server aborted the session and "D" that it happened during the data transfer phase). There have been no changes to the webservice we invoke. The log shows:
Jun 16 12:41:06 localhost.lognet.pre.logalty.es haproxy[17315]: 172.31.2.70:59365 [16/Jun/2017:12:41:06.890] HTTP_INTERNO BUS/BUS1 1/0/0/65/75 200 225565 390 - - SD-- 12/2/0/0/0 0/0 {|Keep-Alive|174|} {|close|} "POST /lgt/lgtbus/rest/private/receiverServiceLight/getSignedBinaryContent HTTP/1.1"
With HAProxy 1.5.12 the same POST is logged with "200" and "----".
However, tcpdump on the server shows the exchange completing cleanly:
4 12:41:06.894048 172.31.2.200 172.31.2.10 HTTP 462 POST /lgt/lgtbus/rest/private/receiverServiceLight/getSignedBinaryContent HTTP/1.1 (application/json) /lgt/lgtbus/rest/private/receiverServiceLight/getSignedBinaryContent
…
127 12:41:06.966459 172.31.2.200 172.31.2.10 TCP 66 46535 → 18001 [FIN, ACK] Seq=397 Ack=225567 Win=94208 Len=0 TSval=1159994022 TSecr=1163264220
128 12:41:06.966467 172.31.2.10 172.31.2.200 TCP 66 18001 → 46535 [ACK] Seq=225567 Ack=398 Win=6912 Len=0 TSval=1163264222 TSecr=1159994022
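(For reference, a capture limited to the haproxy-to-backend leg on 172.31.2.10:18001 can be taken with something along these lines; the interface and file names here are only placeholders, not necessarily how the attached capture was produced:

  tcpdump -i any -s 0 -w bus_18001.pcap 'host 172.31.2.10 and port 18001'

The resulting file can be read back with "tcpdump -nn -r bus_18001.pcap" or opened in Wireshark.)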
The HAProxy configuration is:
defaults
load-server-state-from-file global
log global
mode http
option dontlognull
maxconn 6000
timeout connect 4s
timeout client 1m
timeout server 30m
timeout http-request 10s
timeout http-keep-alive 4s
# timeout queue 5m
frontend HTTP_INTERNO
bind web_1:80
log-format %ci:%cp\ [%t]\ %ft\ %b/%s\ %Tq/%Tw/%Tc/%Tr/%Tt\ %ST\ %B\ %U\ %CC\ %CS\ %tsc\ %ac/%fc/%bc/%sc/%rc\ %sq/%bq\ %hr\ %hs\ %{+Q}r
option forwardfor
capture request header User-Agent len 100
capture request header Connection len 25
capture request header Content-Length len 10
capture request header contractId len 10
capture response header Content-Length len 10
capture response header Connection len 25
capture response header requestOrigin len 30
acl is_bus url_beg /lgt/lgtbus
use_backend BUS if is_bus
backend BUS
option httpchk GET /lgt/lgtbus/rest/services/admin/status HTTP/1.1\r\nHost:localhost
http-check expect status 200
timeout connect 5s
timeout server 2m
timeout queue 30s
option httpclose
# Tests to avoid curl misbehaving with forceclose
option http-pretend-keepalive
server BUS1 emi_0:18001 weight 1 maxconn 100 check inter 5s
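For clarity, this is my own reading of the sample log line above against the configured log-format, using the field meanings from the HAProxy 1.7 configuration manual:

  %ci:%cp               172.31.2.70:59365
  %t                    [16/Jun/2017:12:41:06.890]
  %ft                   HTTP_INTERNO
  %b/%s                 BUS/BUS1
  %Tq/%Tw/%Tc/%Tr/%Tt   1/0/0/65/75          request/queue/connect/response/total times, in ms
  %ST                   200                  HTTP status returned to the client
  %B                    225565               bytes sent to the client
  %U                    390                  bytes received from the client
  %CC %CS               - -                  captured cookies (none)
  %tsc                  SD--                 termination state
  %ac/%fc/%bc/%sc/%rc   12/2/0/0/0           process/frontend/backend/server connections, retries
  %sq/%bq               0/0                  server/backend queues
  %hr                   {|Keep-Alive|174|}   captured request headers
  %hs                   {|close|}            captured response headers
  %{+Q}r                "POST /lgt/lgtbus/rest/private/receiverServiceLight/getSignedBinaryContent HTTP/1.1"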
I attach the tcpdump capture in case it helps.
Thanks,
Juampa.
Juan Pablo Mora
Logalty | Av. Industria, 49 28108 Alcobendas, Madrid
Mov. +34 618 530 508 | e-mail: juanpablo.mora@logalty.com