I have a problem reloading HAProxy with this command:
haproxy -D -f gateway.cfg -p /var/run/haproxy.pid -D -sf $(cat /var/run/haproxy.pid)
The result is this error:
[ALERT] 169/001728 (3844) : Starting frontend proxy: cannot bind socket
I have tried adding user root or Administrator in the config, but to no avail. The file permissions according to ls -la are Administrator none. This makes me think HAProxy does not fully support Windows, and I also wonder how the -sf/-st options work. (I tried on a UNIX system and it works correctly.) The HAProxy config is shown below:
global
    daemon
    maxconn 1024
    pidfile /var/run/haproxy.pid

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend proxy
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    option httpchk GET /
    option forwardfor
    option httpclose
    stats enable
    stats refresh 10s
    stats hide-version
    stats uri /admin?stats
    stats auth admin:admin
    stats realm Haproxy\ Statistics
    server svr0 127.0.0.1 check inter 5000
HAProxy generally does not support Windows, even under Cygwin. HAProxy contains very specific optimisations for Linux and a variety of UNIX systems which make it very hard to run on Windows.
And even if you somehow made it run, the performance would be abysmal and you would never get a stable or even moderately fast system. It simply doesn't make sense to run HAProxy on Windows and deal with various emulation layers when you get great performance even out of a sub-1-watt ARM box running Linux.
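For reference, on a Linux or UNIX host the soft reload the asker describes does work as intended. A minimal sketch (the config path is an assumption):
# Start HAProxy as a daemon and record its PID
haproxy -D -f /etc/haproxy/gateway.cfg -p /var/run/haproxy.pid
# Reload: -sf tells the new process to let the old one finish serving its
# current connections before exiting; -st would terminate it immediately.
haproxy -D -f /etc/haproxy/gateway.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)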
You can run most HAProxy versions under Windows. Here is 1.4.24 compiled using Cygwin:
http://www.mediafire.com/download/7l4yg7fa5w185bo/haproxy.zip
You can use it for testing purposes, but you should avoid using it in production; it is only there so that you can, for example, develop under Windows with an easy transfer to Linux afterwards.
Related
I've set up a debug configuration for PhpStorm and it is validated successfully by PhpStorm.
The Xdebug helper for Chrome is also installed.
The problem is that nothing happens when I start listening for debug connections and reload the required page with the Xdebug helper switched on. I also tried the bookmarklets, with no luck.
No errors or anything, just nothing.
I also tried setting different IPs as dockerhost: from the 192.168.* range (from the network settings), from the 172.* range (from nginx), and from the 10.* range (10.0.75.1 is the default). I also tried docker.for.mac.internal.host, which failed when the containers were starting.
Docker 17.02, macOS Sierra, PhpStorm 2017.3
If you're on Linux, be sure to create the corresponding rules in your firewall.
But to troubleshoot this more effectively, you need to gather more information.
Enable Xdebug logging with xdebug.remote_log=/var/www/xdebug.log in your
xdebug.ini, or append it under "CLI Interpreters > Configuration Options" in PhpStorm (xdebug.remote_log, /path/inside/workspace/container/xdebug.log).
Another step you can take is to monitor incoming connections to your machine (run this on the host where Docker is installed). It will listen for all incoming connection attempts on port 9000.
sudo tcpdump -i any port 9000
Now run the debugger once, check the log inside the container (workspace by default), and see whether any incoming connection attempts from the container have gone through.
If you see something like Time-out connecting to client (Waited: 200 ms). :-(, chances are that your firewall is blocking the incoming connections.
To open them up you can add a rule using ufw:
sudo ufw allow in from 172.22.0.0/24 to any port 9000 (or write down a specific IP; be sure to double-check that this is the IP trying to connect).
This will allow all connections on port 9000 from 172.22.0.* (which is what Laradock uses for its virtual networks). Double-check the logs; your setup may use a different IP range.
My working xdebug.ini (identical in both the php-fpm and workspace containers):
xdebug.remote_host=dockerhost
xdebug.remote_connect_back=0
xdebug.remote_port=9000
xdebug.idekey=PHPSTORM
xdebug.remote_autostart=1
xdebug.remote_enable=1
xdebug.remote_log=/var/www/xdebug.log
xdebug.cli_color=1
xdebug.profiler_enable=0
xdebug.profiler_output_dir="~/path/to/profiler.log"
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.var_display_max_children=-1
xdebug.var_display_max_data=-1
xdebug.var_display_max_depth=-1
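To confirm the container has actually picked these settings up, something like this can help (a sketch; workspace is the container name used above):
# Dump the Xdebug settings as PHP sees them inside the container
docker exec -it workspace php -i | grep -i 'xdebug.remote'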
If none of the above works, another step is to check whether you already have any containers running on port 9000. If so, you'll need to use another port; just don't forget to expose it from Docker.
(Explanation: Docker binds (publishes) ports to the host machine so that incoming connections get directed to the right container. If 9000 is already taken, Xdebug won't be able to connect to any IDE on your machine, even if the IDE says it is running the listener.)
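A quick way to check for such a conflict (a sketch; run either command on the Docker host):
# See whether anything is already listening on port 9000 on the host
sudo lsof -iTCP:9000 -sTCP:LISTEN
# Or list running containers and look for a 9000 mapping in the PORTS column
docker ps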
Hope this helps.
I am running an HAProxy configuration on a Mac that works perfectly on Linux, but I can't get the proxy to even respond. Here is my config:
defaults
    mode http
    timeout connect 5000ms
    timeout client 5000ms
    timeout server 5000ms

frontend http
    bind *:80
    acl oracle_content hdr(ContentType) -i application/vnd.api+json
    acl oracle_accept hdr(Accept) -i application/vnd.api+json
    use_backend oracle_be if oracle_content
    use_backend oracle_be if oracle_accept
    default_backend matrix_be

backend oracle_be
    balance roundrobin
    server oracle1 theoracle.stage.company.com:8080

backend matrix_be
    balance roundrobin
    server matrix1 192.168.1.6:3000
docker run -d --name cc -v /Users/cbongiorno/development/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy
docker -v
Docker version 1.12.0, build 8eab29e
The only machine-specific config is the IP address of the matrix_be entry, which has to be my local interface. It's not working on two Macs, and I have tried binding the proxy to multiple interfaces. I am not even getting a 504, which would at least indicate the proxy is fine and one of the backend services is misconfigured.
Ideas?
Due to current Docker for Mac limitations, the -p 80:80 flag must be passed even if the container declares port 80 open for business.
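With the command from the question, that looks something like this (a sketch):
# Publish port 80 explicitly; on Docker for Mac the port declared by the image
# is not enough on its own to make the frontend reachable from the host.
docker run -d --name cc -p 80:80 \
    -v /Users/cbongiorno/development/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    haproxy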
My haproxy.cfg
global
    log 127.0.0.1 local0
    maxconn 20000
    user haproxy
    group haproxy
    stats socket /var/run/haproxy/haproxy.sock level admin
    stats timeout 2m

listen admin
    bind *:8080
    stats enable

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    # timeout http-request 5s
    timeout connect 5000
    timeout client 60000
    timeout server 60000

frontend http-in
    bind *:80
    default_backend monkey

backend monkey
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth admin:hello
    stats auth Another_User:hi
    mode http
    compression algo gzip
    compression type text/html text/plain text/css application/json
    balance roundrobin
    option httpclose
    option forwardfor
    default-server inter 1s fall 1
    server cd-test-1 1.2.3.4:80 check
    server cd-test-2 5.6.7.8:80 check
I have been using socat to disable a node in an HAProxy cluster.
Below is the command:
echo "disable server monkey/cd-test-1"| socat stdio /var/run/haproxy/haproxy.sock
The above disables my node in HAProxy. But if I use the IP address (1.2.3.4) instead of "cd-test-1", it returns No such server.
I am using Ansible to automate this. I use {{inventory_hostname}} and delegate the command to my HAProxy server, hence the issue.
- name: Disable {{ inventory_hostname }} in haproxy and letting the services drain
  shell: echo "disable server monkey/{{inventory_hostname}}" | socat stdio /var/run/haproxy/haproxy.sock
  become_user: root
  delegate_to: "{{ item }}"
  with_items: groups.haproxy_backend
This returns "No such server." and moves along.
Can someone please help me find the issue with using the IP instead of the server name? I might be doing something very silly. Any help is appreciated.
When disabling or enabling a server in HAProxy via socat, the server name as defined in the configuration has to be used.
Otherwise, we will get a No such server error.
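If the playbook only knows the IP, one way to see which names HAProxy expects is to query the stats socket first (a sketch; in the show stat output the first field is the backend, the second the server name):
# List the backend/server names HAProxy knows about
echo "show stat" | socat stdio /var/run/haproxy/haproxy.sock | cut -d, -f1,2
# Disable by the configured server name, not by its IP
echo "disable server monkey/cd-test-1" | socat stdio /var/run/haproxy/haproxy.sock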
I am trying to run HAProxy with Docker. I followed the instructions here:
https://hub.docker.com/_/haproxy/
I was able to build the Docker image, but after trying to run it using
docker run -d --link another_container:another_container --name mc-ha -v haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro my_own_haproxy:latest
I get this error:
[ALERT] 298/054910 (1) : [haproxy.main()] No enabled listener found (check for 'bind' directives) ! Exiting.
I searched for the error, but the only thing I found was the HAProxy source code.
Here is my haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend esNodes
    bind *:8091
    mode http
    default_backend srNodes

backend srNodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 0.0.0.0:10903/project/es check
EDIT: By the way, I also tried changing the backend node URL to my Docker host IP, but still no luck.
Thanks to Michael's comment, I was able to solve the problem.
First I removed the haproxy command from the Dockerfile, then ran the haproxy command manually inside the container.
Voila! My config file was not a file; it was a directory.
The problem was the -v option in my docker command.
I changed it to the full path:
-v FULL_PATH/customhaproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
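For example (a sketch; the absolute path here is hypothetical and should point at wherever the file actually lives):
# Mount the config by its absolute path so Docker binds the file itself rather
# than creating an empty directory when the source path does not resolve.
docker run -d --link another_container:another_container --name mc-ha \
    -v /home/me/haproxy/customhaproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro \
    my_own_haproxy:latest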
You will want to remove the daemon keyword from your HAProxy config: Docker needs the container's main process to keep running in the foreground, otherwise the container exits immediately.
I think the error message you are seeing is because Docker exits more quickly than HAProxy can bind to any ports.
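After deleting the daemon line from the global section, the config can be sanity-checked with the image from the question before restarting the container (a sketch; the mount path assumes the file sits in the current directory):
# Check the config for syntax errors; HAProxy must then run in the foreground
# (no 'daemon' directive, no -D flag) as the container's main process.
docker run --rm -v "$PWD/haproxy.cfg":/usr/local/etc/haproxy/haproxy.cfg:ro \
    my_own_haproxy:latest haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg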
I actually had to restart docker-machine (on OS X); otherwise, every time I ran the container with the volume mount option (I tried both absolute and relative paths), it mounted haproxy.cfg as a directory.
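For example (a sketch, assuming the machine is named default):
# Restart the docker-machine VM so stale mounts are cleared, then refresh the
# shell's Docker environment before re-running the container.
docker-machine restart default
eval "$(docker-machine env default)"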
ls -l /etc/ssl/certs
ls -l /etc/ssl/private
chmod -R 400 /etc/ssl/private
Perhaps also change the permissions on the certs, but I'm not sure. Starting up HAProxy with globally readable SSL keys is a very bad security practice, so they disable startup completely.
It seems as if many developers trying to move from non-scaled apps (like the diy cartridge) to scaled versions of their apps are having trouble configuring their cartridges to interact properly with the default HAProxy configuration created by OpenShift, and getting their start and stop action hooks to deal with the scaling portions of their app, myself included. Most often this is because we're new and don't quite understand what OpenShift's default HAProxy configuration does.
HAProxy's default configuration
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    #log 127.0.0.1 local2
    maxconn 256

    # turn on stats unix socket
    stats socket /var/lib/openshift/{app's ssh username}/haproxy//run/stats level admin

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    #option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 128

listen stats 127.2.31.131:8080
    mode http
    stats enable
    stats uri /

listen express 127.2.31.130:8080
    cookie GEAR insert indirect nocache
    option httpchk GET /
    balance leastconn
    server local-gear 127.2.31.129:8080 check fall 2 rise 3 inter 2000 cookie local-{app's ssh username}
Often it seems like both sides of the application are up and running, but HAProxy isn't sending HTTP requests where we'd expect. From the numerous questions asked about OpenShift, we know that this line:
option httpchk GET /
is HAProxy's sanity check to make sure the app is working. But often, whether that line is edited or removed, we'll still get something like this in HAProxy's logs:
[WARNING] 240/150442 (404099) : Server express/local-gear is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 240/150442 (404099) : proxy 'express' has no server available!
Yet inside the gears our apps are listening on $OPENSHIFT_CARTNAME_IP and $OPENSHIFT_CARTNAME_PORT, we can see they've started, and sometimes they are rejecting the sanity check:
ERROR [DST 127.2.31.129 sid=1] SHOUTcast 1 client connection rejected. Stream not available as there is no source connected. Agent: `'
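For context, the check HAProxy performs is essentially equivalent to this, run from inside the gear (a sketch using the local-gear address from the config above):
# Reproduce the health check by hand against the local gear
curl -i http://127.2.31.129:8080/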
A cut-and-dried manifest, like the one from the diy cartridge:
Name: hws
Cartridge-Short-Name: HWS
Display-Name: Hello World of Scaling Apps
Description: A Scaling App on Openshift
Version: '0.1'
License: ASL 2.0
License-Url: http://www.apache.org/licenses/LICENSE-2.0.txt
Cartridge-Version: 0.0.10
Compatible-Versions:
- 0.0.10
Cartridge-Vendor: you
Vendor: you
Categories:
- web_framework
- experimental
Website:
Help-Topics:
  Getting Started: urltosomeinfo
Provides:
- hws-0.1
- hws
Publishes:
Subscribes:
  set-env:
    Type: ENV:*
    Required: false
Scaling:
  Min: 1
  Max: -1
Group-Overrides:
- components:
  - web-proxy
Endpoints:
- Private-IP-Name: IP
  Private-Port-Name: PORT
  Private-Port: 8080
  Public-Port-Name: PROXY_PORT
  Protocols:
  - http
  - ws
  Options:
    primary: true
  Mappings:
  - Frontend: ''
    Backend: ''
    Options:
      websocket: true
  - Frontend: "/health"
    Backend: ''
    Options:
      health: true
Start Hook (inside bin/control or in .openshift/action_hooks)
# Ask the app's public DNS what HAProxy currently returns; 503 means no gear is serving yet
RESPONSE=$(curl -o /dev/null --silent --head --write-out '%{http_code}\n' "http://${OPENSHIFT_APP_DNS}:80")
echo "${RESPONSE}" > "${OPENSHIFT_DIY_LOG_DIR}/checkserver.log"
echo "${RESPONSE}"

if [ "${RESPONSE}" -eq "503" ]; then
    # First gear coming up: start the app from scratch
    nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/startfromscratch.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
else
    # App already answering: this is a second or later gear
    nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/secondorfollowinggear.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
fi
Stop Hook (inside bin/control or in .openshift/action_hooks)
kill `ps -ef | grep serverexec | grep -v grep | awk '{ print $2 }'` > /dev/null 2>&1
exit 0
The helpful questions for new developers:
Avoiding a killer sanity check
Is there a way of configuring the app using the manifest.yml to avoid these collisions? Or, vice versa, is there a little tweak to the default HAProxy configuration so that the app will run on appname-appdomain.rhcloud.com:80/ without returning 503 errors?
Setting up more convenient access to the app
My SHOUTcast example, as hinted at by the error, works only so long as I'm streaming to it first. What additional parts of the manifest and the HAProxy configuration would let a user connect directly (from an external URL) to the first gear's port 80, as opposed to port-forwarding into the app all the time?
Making sure the app starts and stops as if it weren't scaled
Lastly, many non-scaled applications have a quick and easy script to start up and shut down, because it seems OpenShift accounts for the fact that the app has to have the first gear running. How would a stop action hook be adjusted to run through and stop all the gears? What would have to be added to the start action hook to get the first gear back online with all of its components (not just HAProxy)?