Channel: Serverphorums.com - HAProxy

[PATCH] MINOR: server: Don't make "server" in frontend fatal. (1 reply)

Hi,

Right now, when we have "server", "default-server", or "server-template"
in a frontend, we warn about it being ignored, only to be considered fatal
later.
That sounds a bit silly, so the attached patch makes it non-fatal.

Regards,

Olivier
From 9d2ab5b57dd4d14bce82923cb9b35bb74ac642bb Mon Sep 17 00:00:00 2001
From: Olivier Houchard <ohouchard@haproxy.com>
Date: Tue, 24 Jul 2018 16:48:59 +0200
Subject: [PATCH] BUG/MINOR: servers: Don't make "server" in a frontend fatal.

When parsing the configuration, if "server", "default-server" or
"server-template" are found in a frontend, we first warn that it will be
ignored, only to be considered a fatal error later. Be true to our word, and
just ignore it.

This should be backported to 1.8 and 1.7.
---
src/server.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/src/server.c b/src/server.c
index d96edc77a..4498fd878 100644
--- a/src/server.c
+++ b/src/server.c
@@ -1937,7 +1937,7 @@ int parse_server(const char *file, int linenum, char **args, struct proxy *curpr
goto out;
}
else if (warnifnotcap(curproxy, PR_CAP_BE, file, linenum, args[0], NULL))
- err_code |= ERR_ALERT | ERR_FATAL;
+ err_code |= ERR_WARN;

/* There is no mandatory first arguments for default server. */
if (srv) {
--
2.14.3

Configuring HAProxy session limits (1 reply)

Hi Friends,

I am trying to bump session limits via the maxconn in the global section as
below:

cat /etc/haproxy/redacted-haproxy.cfg

global
    maxconn 10000
    stats socket /var/run/redacted-haproxy-stats.sock user haproxy group haproxy mode 660 level operator expose-fd listeners

frontend redacted-frontend
    mode tcp
    bind :2004
    default_backend redacted-backend

backend redacted-backend
    mode tcp
    balance leastconn
    hash-type consistent

    server redacted_0 redacted01.qa:8443 check agent-check agent-port 8080 weight 100 send-proxy
    server redacted-684994ccd-6rn9q 192.168.39.223:8443 check port 8443 weight 100 send-proxy
    server redacted-684994ccd-c88d9 192.168.46.66:8443 check port 8443 weight 100 send-proxy
    server redacted-canary-58ccdb7cf4-47f4m 192.168.53.47:8443 check port 8443 weight 100 send-proxy

NOTE: I removed some portion of the config for conciseness sake.

However, this did not seem to have any impact on HAProxy after a reload, as
seen below:

echo "show stat" | socat unix-connect:/var/run/redacted-haproxy-stats.sock stdio | cut -d"," -f7
slim
2000




200

I do not know where 2000 and 200 are coming from, as I did not configure
them at any point; the maxconn was previously 4096.
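For reference, the "slim" values do not come from the global section: a
frontend's slim is its own maxconn, which defaults to 2000 when not set
explicitly, and a backend's slim comes from "fullconn", which defaults to 10%
of the maxconn of the frontends that can reach it (hence the 200). A sketch of
setting them per-proxy, reusing the names above:

```
frontend redacted-frontend
    mode tcp
    bind :2004
    maxconn 10000            # per-frontend session limit; this is the frontend "slim"
    default_backend redacted-backend

backend redacted-backend
    mode tcp
    fullconn 10000           # backend dynamic limit; this is the backend "slim"
```

The global maxconn only caps the whole process; it never shows up per-proxy in
"show stat".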

A more detailed stats output is below:

echo "show stat" | socat unix-connect:/var/run/redacted-haproxy-stats.sock stdio
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,agent_status,agent_code,agent_duration,check_desc,agent_desc,check_rise,check_fall,check_health,agent_rise,agent_fall,agent_health,addr,cookie,mode,algo,conn_rate,conn_rate_max,conn_tot,intercepted,dcon,dses,
redacted-frontend,FRONTEND,,,0,2,2000,3694,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,3,0,9,,,,,,,,,,,0,0,0,,,0,0,0,0,,,,,,,,,,,,,,,,,,,,,tcp,,3,9,3694,,0,0,
redacted-backend,redacted_0,0,0,0,1,,2,0,0,,0,,0,0,0,0,UP,94,1,0,0,0,1582,0,,1,3,1,,2,,2,0,,1,L4OK,,0,,,,,,,,,,,0,0,,,,,683,,via agent : up,0,0,0,0,L7OK,0,50,Layer4 check passed,Layer7 check passed,2,3,4,1,1,1,10.185.57.54:8443,,tcp,,,,,,,,
redacted-backend,redacted-684994ccd-6rn9q,0,0,0,1,,46,0,0,,0,,0,0,0,0,UP,100,1,0,0,0,1582,0,,1,3,2,,46,,2,0,,1,L4OK,,0,,,,,,,,,,,0,0,,,,,6,,,0,0,0,1,,,,Layer4 check passed,,2,3,4,,,,192.168.39.223:8443,,tcp,,,,,,,,
redacted-backend,redacted-684994ccd-c88d9,0,0,0,1,,45,0,0,,0,,0,0,0,0,UP,100,1,0,0,0,1582,0,,1,3,3,,45,,2,0,,1,L4OK,,0,,,,,,,,,,,0,0,,,,,12,,,0,0,0,0,,,,Layer4 check passed,,2,3,4,,,,192.168.46.66:8443,,tcp,,,,,,,,
redacted-backend,redacted-canary-58ccdb7cf4-47f4m,0,0,0,1,,45,0,0,,0,,0,0,0,0,UP,100,1,0,0,0,1582,0,,1,3,4,,45,,2,0,,1,L4OK,,0,,,,,,,,,,,0,0,,,,,10,,,0,0,0,1,,,,Layer4 check passed,,2,3,4,,,,192.168.53.47:8443,,tcp,,,,,,,,
redacted-backend,BACKEND,0,0,0,2,200,3694,0,0,0,0,,0,0,0,0,UP,394,4,0,,0,1582,0,,1,3,0,,138,,1,3,,9,,,,,,,,,,,,,,0,0,0,0,0,0,6,,,0,0,0,1,,,,,,,,,,,,,,tcp,leastconn,,,,,,,

I need guidance on how to configure session limits correctly and make them
reflect in the exported metrics.

Thanks!

Abejide Ayodele
It always seems impossible until it's done. --Nelson Mandela

force-persist and use_server combined (no replies)

Hi,

I'd like to understand whether I've made a configuration mistake or whether
there might be a bug in HAProxy 1.7.11.

The defaults section has "option redispatch".

backend load_balancer
    mode http
    option httplog
    option httpchk HEAD /load_balance_health HTTP/1.1\r\nHost:\ foo.bar
    balance url_param file_id
    hash-type consistent

    acl status0 path_beg -i /dl/
    acl status1 path_beg -i /haproxy
    use-server local_http_frontend if status0 or status1
    force-persist if status0 or status1

    server local_http_frontend /var/run/haproxy.sock.http-frontend check send-proxy
    server remote_http_frontend 192.168.1.52:8080 check send-proxy


The idea here is that the HAProxy statistics page, some other backend
statistics, and some remote health checks against paths under /dl/ should
always reach only local_http_frontend and never go anywhere else, even when
the local server really is down, not just marked as down.

This config does not work: it forwards a /haproxy?stats request to
remote_http_frontend when local_http_frontend is really down.

Is it expected? Any ways to overcome this limitation?
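One way to guarantee those paths never leave the local server, whatever its
health state, is to route them to a dedicated backend containing only that
server. A sketch below, with the frontend name assumed; the backend keeps
sending traffic even to a down server thanks to "option persist":

```
frontend fe_main
    mode http
    acl status0 path_beg -i /dl/
    acl status1 path_beg -i /haproxy
    use_backend local_only if status0 or status1
    default_backend load_balancer

backend local_only
    mode http
    option persist          # keep using the server even when it is marked down
    server local_http_frontend /var/run/haproxy.sock.http-frontend send-proxy
```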

Thanks in advance,
Veiko

Duplicate haproxy processes after setting server to MAINT via stats page (no replies)

Hi,

Running haproxy 1.5 under Ubuntu trusty as a service (service haproxy
start/stop), I noticed that sometimes (not always) when I set a server to
MAINT via the haproxy_stats page, I end up with duplicate haproxy processes.
Any ideas? Has this problem been fixed in haproxy 1.8?

Thank you in advance,
Alessandro

Regarding HA proxy configuration with denodo (1 reply)

We have two Denodo servers installed on two Linux machines on AWS, and one load balancer installed on one of those machines. Can you please provide the steps or configuration needed to connect HAProxy to the available Denodo servers? HAProxy should be able to connect to either of the Denodo servers.
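A minimal shape for this is a TCP frontend with a backend listing both
servers with health checks enabled; the addresses, port, and check settings
below are placeholders, not Denodo specifics:

```
frontend denodo_in
    mode tcp
    bind :9999
    default_backend denodo_servers

backend denodo_servers
    mode tcp
    balance roundrobin
    server denodo1 10.0.0.1:9999 check
    server denodo2 10.0.0.2:9999 check
```

With "check" enabled, HAProxy stops sending traffic to a server whose health
check fails, so either server can be reached through the single frontend IP.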

Thanks.


The information contained in this electronic message and any attachments to this message are intended for the exclusive use of the addressee(s) and may contain proprietary, confidential or privileged information. If you are not the intended recipient, you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately and destroy all copies of this message and any attachments. WARNING: Computer viruses can be transmitted via email. The recipient should check this email and any attachments for the presence of viruses. The company accepts no liability for any damage caused by any virus transmitted by this email. www.wipro.com

haproxy.com is missing out on Google ranking (no replies)

Hello,

I found you are doing Adwords campaign for your website. Are you really
getting a return on your hard earned money by investing in Adwords?

Your website looks fine, but I think you are not receiving the anticipated
outcome from it. If you allow me I would like to offer you another
proposition to enhance your marketing outcomes and better your ROI over
online marketing. With our innovative SEO & Digital Marketing strategies,
we can help your website rank higher in the organic search results of
Google and can enhance your business identity.

Do you want to know more? On your request, my "Sales Manager" will forward
our detailed SEO & Digital Marketing plans based on the latest trend.

Looking forward to hearing from you!

With Warm Regards,
Robin Jackson | Online Marketing Consultant
Email: robin@webprotop.com
Skype: seo.seophalanx
Phone: +91 8917297372

lua socket settimeout has no effect (1 reply)

Hi,

We are using an http-request Lua action to dynamically set some app-specific
metadata headers. The Lua handler connects to an upstream memcache-like
service over TCP to fetch additional metadata.

Functionally everything works, but I am seeing that socket.settimeout has no
effect. Irrespective of what I set in settimeout, if the upstream service is
unreachable, connect always times out at 5 seconds and read times out around
10 seconds. It seems settimeout has no effect and the defaults of 5 seconds
(connect) and 10 seconds (read) are always used.

Haproxy conf call:

http-request lua.get_proxy

Lua code sample:

function get_proxy(txn)
    local sock = core.tcp()
    sock:settimeout(2)
    status, error = sock:connect(gds_host, gds_port)
    if not status then
        core.Alert("1 Error in connecting:" .. key .. ":" .. error)
        return result, "Error: " .. error
    end
    sock:send(key .. "\r\n")
    ....
    ....


core.register_action("get_proxy", { "http-req" }, get_proxy)

Haproxy version:

HA-Proxy version 1.7.8 2017/07/07
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>

Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -DTCP_USER_TIMEOUT=18
OPTIONS = USE_LINUX_TPROXY=1 USE_ZLIB=1 USE_REGPARM=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1

Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200

Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with Lua version : Lua 5.3.2
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND

Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.

Available filters :
[COMP] compression
[TRACE] trace
[SPOE] spoe



Thanks
Sachin

Cannot unsubscribe (1 reply)

Hi,

I would like to unsubscribe from this list but cannot: we have changed email domains, and while I can still receive on the old one, I cannot send from it.


I tried mailing haproxy+help@formilux.org but just got back an automated response that said "Hello,"


Can one of the list owners assist please?

Kind regards,

John Lanigan.

Link Addition Request (no replies)

Hey! I have a quick request for you.



I'm just reaching out because I came across your domain, where you have
mentioned a list of tools and domains that work on internet security and
privacy.


Must say you have done an amazing work.


I was super impressed by it and wanted to reach out, because the website I
work for, vpnranks.com, published a list of the best VPNs to use. The website
has been working on providing solutions for internet security and safety
online.



If it was any good, might you consider including a link to it in your piece?



Our team has put a lot of time and effort into a complete test of the VPN
services listed in our guide, which work to provide internet security and
safety online to users, and I believe it will add value to users on your
website as well.



I'll let you be the judge though... Here's the link to the guide:
https://www.vpnranks.com/best-vpn/


Regards,
Lisa

Performance of using lua calls for map manipulation on every request (no replies)

0

We are doing about 10K requests/minute on a single HAProxy server; we have
enough CPU and memory. Right now each request looks up a map for backend
info. It works well.

Now we need to build some expiry logic around the map, like ignoring some
map entries after some time. I could do this in Lua, but it would mean that
every request makes a Lua call to look up a map value and make a decision.

My lua method looks like this:

function get_proxy_from_map(txn)
    local host = txn.http:req_get_headers()["host"][0]
    local value = proxy_map_v2:lookup(host)
    if value then
        local values = split(value, ",")
        local proxy = values[1]
        local time = values[2]
        if os.time() > tonumber(time) then
            core.Alert("Expired: returning nil: " .. host)
            return
        else
            return proxy
        end
    end
    return
end


Any suggestions on how this would impact performance? Our tests look OK.

Thanks
Sachin

Possibility to modify PROXY protocol header (no replies)

Hi,

is there any possibility to modify the client IP in the PROXY protocol
header before it is sent to a backend server?

My use case is a local integration/functional testing suite (multiple local
Docker containers for testing the whole stack: haproxy, cache layer,
webserver, etc.).

I would like to test functionality that depends on, or needs, specific IP
ranges or IP addresses.
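The address carried in the outgoing PROXY header is the connection's source as
HAProxy sees it after any rewrite, so the "set-src" actions can be used for
this. A sketch, where the header name is an assumption for the test driver:

```
frontend test_in
    mode http
    bind :8080
    # Pretend the client connected from an arbitrary address supplied by the test:
    http-request set-src hdr(X-Test-Client-IP)
    default_backend servers

backend servers
    server app 127.0.0.1:9000 send-proxy
```

For plain TCP flows, "tcp-request connection set-src" works the same way.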

----------------------------------------------------------------
Best Regards / Mit freundlichen Grüßen

Bjoern

haproxy doesn't reuse server connections (2 replies)

Hi,

I'm running haproxy 1.8.12 on Ubuntu 14.04. For some reason, haproxy does not
reuse connections to backend servers. For testing purposes, I'm sending the
same HTTP request multiple times over the same TCP connection.
The servers do not respond with Connection: close and do not close the
connections. The wireshark capture shows haproxy RST-ing the connections a
few hundred milliseconds after the servers reply. The servers send no FIN nor
RST to haproxy.

I tried various settings (http-reuse always, option http-keep-alive, both at
global and backend level), no luck.
The problem goes away if I have a single backend server, but obviously that's
not a viable option in real life.

Here's my haproxy.cfg:

global
    #daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    option http-keep-alive
    timeout http-keep-alive 30s
    http-reuse always

frontend http-in
    bind 10.220.178.236:80
    default_backend servers

backend servers
    server server1 10.220.178.194:80 maxconn 32
    server server2 10.220.232.132:80 maxconn 32

Any suggestions?

Thanks in advance,
Alessandro

Help with backend server sni setup (2 replies)

Hi.

I have the following Setup.

APP -> Internal Haproxy -(HTTPS)-> external HAProxy -> APP

The external HAProxy is configured with multiple TLS Vhost.

I assume that when I add `server .... sni appinternal.domain.com` to the
server line, the SNI hostname in the TLS handshake will be set to this
value.

I'm not sure from reading the doc whether this would work.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-sni

Could this work?
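Per that doc section, the server "sni" parameter takes a sample expression
rather than a literal hostname, so a fixed value needs the str() converter. A
sketch (the address and the "verify none" setting are placeholders):

```
backend to_external
    server ext 203.0.113.10:443 ssl verify none sni str(appinternal.domain.com)
```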

Best regards
Aleks

Understanding certain balance configuration (no replies)

Hi,

I'm trying to understand how "balance url_param" with "hash-type consistent"
should work. HAProxy 1.7.11.

Let's say we have a config of two haproxy instances that balance content
between local and remote (sibling).

server0 (10.0.0.1) would have a config section like this:

backend load_balancer
    balance url_param file_id
    hash-type consistent
    server local_backend /path/to/socket id 1
    server remote_backend 10.0.0.2:80 id 2

backend local_backend
    balance url_param file_id
    hash-type consistent
    server server0 127.0.0.1:100
    server server1 127.0.0.1:200

server1 (10.0.0.2) would have a config section like this:

backend load_balancer
    balance url_param file_id
    hash-type consistent
    server local_backend /path/to/socket id 2
    server remote_backend 10.0.0.1:80 id 1

backend local_backend
    balance url_param file_id
    hash-type consistent
    server server0 127.0.0.1:100
    server server1 127.0.0.1:200

Assuming that all requests indeed carry the URL parameter "file_id", should
requests on both servers always land on a single "local_backend" server,
given that they have already been balanced at the "load_balancer" level and
should not be divided again in "local_backend", because the configuration is
identical in both "load_balancer" and "local_backend"?

thanks in advance,
Veiko

[ANNOUNCE] haproxy-1.8.13 (5 replies)

Hi,

HAProxy 1.8.13 was released on 2018/07/30. It added 28 new commits
after version 1.8.12.

Nothing critical this time; however, we finally got rid of the annoying
CLOSE_WAIT on H2 thanks to the continued help from Milan Petruzelka,
Janusz Dziemidowicz and Olivier Doucet. Just for this it was worth
emitting a release. During all these tests we also met a case where
sending a POST to the stats applet over a slow link using H2 could
sometimes result in haproxy busy-waiting for data, causing 100% CPU usage.
This was fixed, along with another bug affecting applets like stats,
possibly causing occasional CPU spikes.

While developing on 1.9 we found a few interesting corner cases with
threads, one of which causes performance to significantly drop when
reaching a server maxconn *if* there are more threads than available
CPUs. It turned out to be caused by the synchronization point not
leaving enough CPU for sleeping threads to be scheduled and join. You
should never use more threads than available CPUs, but config errors
definitely happen and we'd rather limit their impact.

Speaking about config errors, another case existed where a "process"
directive on a "bind" line could reference non-existing threads. If
only non-existing threads were referenced, it didn't trigger an error
and would silently start, but with nobody to accept the traffic. It
easily happens when reducing the number of threads in a config. This
was addressed similarly to the process case, where the threads are
automatically remapped and a warning is emitted in this case.

An issue was addressed with the proxy protocol header sent to servers.
If a "http-request set-src" directive is used, it is possible to end up
with a mix of IPv4 and IPv6, which cannot be transported by the protocol
(since it makes no sense from a network perspective). Till now a server
would only receive "PROXY UNKNOWN" and would not even be able to get the
client's address. Tim Duesterhus addressed this by converting the IPv4
address to IPv6 when exactly one of the two addresses is IPv6, as it is the
only way not to lose information.

Christopher addressed a rare issue which could trigger during soft
reloads with threads enabled : if a thread quits at the exact moment a
thread sync is requested, the remaining threads could wait for it
forever.

Vincent Bernat updated the systemd unit file so that when quitting, if
the master reports 143 (SIGTERM+128) as the exit status due to the fact
that it reports the last killed worker's status, systemd doesn't consider
this as a failure.

The remaining changes are pretty minor. Some H2 debugging code developed
to fix the CLOSE_WAIT issues was backported in order to simplify the
retrieval of internal states when such issues happen.

A small update happened to the download directory: the sha256 checksums of
the tar.gz files are now present in addition to the (quite old) md5 ones.
We may start to think about phasing md5 signatures out, for example
after 1.9 is released.

As usual, it's worth updating if you're on 1.8, especially if you're
using H2 and/or threads. If you think you've found a bug that is not
addressed in the changelog below, please update and try again before
reporting it. There are so many possible side effects from H2 issues
and thread issues that it is possible that your issue is a different
manifestation of one of these.

Please find the usual URLs below :
Site index : http://www.haproxy.org/
Discourse : http://discourse.haproxy.org/
Sources : http://www.haproxy.org/download/1.8/src/
Git repository : http://git.haproxy.org/git/haproxy-1.8.git/
Git Web browsing : http://git.haproxy.org/?p=haproxy-1.8.git
Changelog : http://www.haproxy.org/download/1.8/src/CHANGELOG
Cyril's HTML doc : http://cbonte.github.io/haproxy-dconv/

Willy
---
Complete changelog :
Christopher Faulet (4):
BUG/MINOR: http: Set brackets for the unlikely macro at the right place
MINOR: debug: Add check for CO_FL_WILL_UPDATE
MINOR: debug: Add checks for conn_stream flags
BUG/MEDIUM: threads: Fix the exit condition of the thread barrier

Olivier Houchard (2):
BUG/MINOR: servers: Don't make "server" in a frontend fatal.
BUG/MINOR: threads: Handle nbthread == MAX_THREADS.

Tim Duesterhus (2):
BUILD: Generate sha256 checksums in publish-release
MEDIUM: proxy_protocol: Convert IPs to v6 when protocols are mixed

Vincent Bernat (1):
MINOR: systemd: consider exit status 143 as successful

Willy Tarreau (19):
BUG/MINOR: ssl: properly ref-count the tls_keys entries
MINOR: mux: add a "show_fd" function to dump debugging information for "show fd"
MINOR: h2: implement a basic "show_fd" function
BUG/MINOR: h2: remove accidental debug code introduced with show_fd function
MINOR: h2: keep a count of the number of conn_streams attached to the mux
MINOR: h2: add the mux and demux buffer lengths on "show fd"
BUG/MEDIUM: h2: don't accept new streams if conn_streams are still in excess
BUG/MEDIUM: h2: never leave pending data in the output buffer on close
BUG/MEDIUM: h2: make sure the last stream closes the connection after a timeout
MINOR: h2: add the error code and the max/last stream IDs to "show fd"
BUG/MEDIUM: stream-int: don't immediately enable reading when the buffer was reportedly full
BUG/MEDIUM: stats: don't ask for more data as long as we're responding
BUG/MEDIUM: threads/sync: use sched_yield when available
BUG/MEDIUM: h2: prevent orphaned streams from blocking a connection forever
BUG/MINOR: config: stick-table is not supported in defaults section
BUG/MEDIUM: threads: properly fix nbthreads == MAX_THREADS
MINOR: threads: move "nbthread" parsing to hathreads.c
BUG/MEDIUM: threads: unbreak "bind" referencing an incorrect thread number
SCRIPTS: git-show-backports: add missing quotes to "echo"

---

IPBurger VPN Services | Sponsorship & Affiliate Proposal (no replies)

Hi!

I'm Donald, the co-owner of IPBurger VPN services (
https://secure.ipburger.com/);

We're a small but growing VPN company (Dedicated and Shared IP space) and
we would be very grateful if you could include our company among similar
services on your website.

URL Example: http://www.haproxy.org/#tact

As you are linking back to *Private Internet Access*.

We located your website after doing research on our competitor.


*We offer a 30% recurring affiliate commission:*
Step 1: Register for a client account here:
(https://secure.ipburger.com/register.php)
Step 2: Activate your Affiliate account: (https://secure.ipburger.com/affiliates.php)

We can provide any resources you need whether that's page content, banners,
test accounts or anything else you need if we can be placed on your website.

You can reach me personally at donald@ipburger.com

Feel free to reach out to me if you need anything.

Thank you for your time,
Donald

IPBurger VPN Services Linkback Programme (no replies)

0

I'm Donald, the co-owner of IPBurger VPN services (
https://secure.ipburger.com/);

We're a small but growing VPN company (Dedicated and Shared IP space) and
we would be very grateful if you could include our company among similar
services on your website.

URL Example: http://www.haproxy.org/#tact

As you are linking back to Private Internet Access in external links.

We located your website after doing research on our competitor.


*We offer a 30% recurring affiliate commission:*
Step 1: Register for a client account here:
(https://secure.ipburger.com/register.php)
Step 2: Activate your Affiliate account: (https://secure.ipburger.com/affiliates.php)

We can provide any resources you need whether that's page content, banners,
test accounts or anything else you need if we can be placed on your website.

You can reach me personally at donald@ipburger.com

Feel free to reach out to me if you need anything.

Thank you for your time,
Donald

haproxy.com : Improve your mobile website for better rankings (no replies)

Hi haproxy.com,


I noticed something interesting while going through your website,
haproxy.com.

It's apparent that you have used Adwords marketing to promote your business
in the past; however your website does see some organic search traffic here
and there. Now, I believe I can help increase that portion of organic
traffic significantly, at haproxy.com.

I believe you would like to come top on searches for keywords related to:
haproxy.com... I found a number of SEO issues such as broken links, page
speed issue, HTML validation errors, and images with no ALT text on your
website, that's stopping you from getting that traffic.

How about I fix those, and also promote you through engaging content on
relevant places on the web (read, social media).

I guarantee you will see a drastic change in your search ranking and
traffic once these issues are fixed. Also, this is one time, so no paying
Adwords every month.
Is this something you are interested in? We have a very special offer for
the month of JULY 2018.
I also prepared a “Free Website Audit Report” for your website. If you are
interested, I can show you the report.
I'd be happy to send you our package, pricing and past work details, if
you'd like to assess our work.
I look forward to hearing from you.



Best Regards,

Harry Williams | Digital Marketing Specialist

haproxy.org : Improve your mobile website for better rankings (no replies)

Hi haproxy.org,


I noticed something interesting while going through your website,
haproxy.org.

It's apparent that you have used Adwords marketing to promote your business
in the past; however your website does see some organic search traffic here
and there. Now, I believe I can help increase that portion of organic
traffic significantly, at haproxy.org.

I believe you would like to come top on searches for keywords related to:
haproxy.org... I found a number of SEO issues such as broken links, page
speed issue, HTML validation errors, and images with no ALT text on your
website, that's stopping you from getting that traffic.

How about I fix those, and also promote you through engaging content on
relevant places on the web (read, social media).

I guarantee you will see a drastic change in your search ranking and
traffic once these issues are fixed. Also, this is one time, so no paying
Adwords every month.
Is this something you are interested in? We have a very special offer for
the month of JULY 2018.
I also prepared a “Free Website Audit Report” for your website. If you are
interested, I can show you the report.
I'd be happy to send you our package, pricing and past work details, if
you'd like to assess our work.
I look forward to hearing from you.



Best Regards,

Harry Williams | Digital Marketing Specialist

Using haproxy together with NFS (5 replies)

Hi guys,

I've been playing around today with two NFS servers (each on its own storage array), synced by Unison to provide a bit higher uptime.

To allow NFS clients to use a single IP, I've configured an haproxy install (one now, two when in prod), where I want to talk in tcp mode to the NFS servers.
My idea is that all traffic is directed based on source-IP balancing, so the traffic will be split roughly 50/50 across the NFS servers.

My question is whether anyone has actually ever got a setup like this to work. I'm using NFS 4.0, and whenever I try to mount the NFS share on the client, it does communicate with haproxy, and I do see traffic on the NFS server itself, meaning the communication seems to work.

The issue I'm facing is that the mounting never actually completes, due to some weird behavior when I go through haproxy:

Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: RPC: fragment too large: 1347571544
Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: RPC: fragment too large: 1347571544
Aug 01 21:44:44 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: RPC: fragment too large: 1347571544
Aug 01 21:44:45 nfs-server-f8209dc4-a1a6-4baf-86fa-eba0b0254bc9 kernel: RPC: fragment too large: 1347571544

It keeps emitting this “fragment too large” error.
If I bypass haproxy it works completely fine, so I know the NFS server is configured correctly for the client to connect.

My haproxy configuration looks like this:

global
    log 127.0.0.1 local1 debug
    nbproc 4
    user haproxy
    group haproxy
    daemon
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid

defaults
    mode tcp
    log global
    option tcplog
    timeout client 1m
    timeout server 1m
    timeout connect 10s
    balance source

frontend nfs-in1
    bind *:2049
    use_backend nfs_backend1
frontend nfs-in2
    bind *:111
    use_backend nfs_backend2
frontend nfs-in3
    bind *:46716
    use_backend nfs_backend3
frontend nfs-in4
    bind *:36856
    use_backend nfs_backend4

backend nfs_backend1
    server nfs1 217.xx.xx.xx:2049 send-proxy
backend nfs_backend2
    server nfs1 217.xx.xx.xx:111 send-proxy
backend nfs_backend3
    server nfs1 217.xx.xx.xx:46716 send-proxy
backend nfs_backend4
    server nfs1 217.xx.xx.xx:36856 send-proxy

I use "send-proxy" to let the NFS server see the actual source IP instead of the haproxy machine's IP.

If anyone has any idea what could cause the "fragment too large" errors when going via haproxy, or has an actual working haproxy config for NFS 4.0 or 4.1 traffic, please let me know!
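One detail worth noting about the error itself: 1347571544 is 0x50524F58,
i.e. the ASCII bytes "PROX" interpreted as a big-endian RPC record length.
That suggests nfsd is reading the PROXY protocol header as RPC data; NFS
servers do not understand the PROXY protocol, so a first experiment could be
dropping send-proxy, at the cost of the server seeing haproxy's IP (sketch
based on the config above):

```
backend nfs_backend1
    server nfs1 217.xx.xx.xx:2049    # no send-proxy: nfsd must receive raw RPC
```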

Best Regards,
Lucas Rolff