Many developers moving from non-scaled apps (like the DIY cartridge) to scaled versions, myself included, seem to have trouble configuring their cartridges to interact properly with the default HAProxy configuration that OpenShift generates, and getting their start and stop action hooks to deal with the scaled portions of the app. Most often this is because we're new and don't quite understand what OpenShift's default HAProxy configuration does...
HAProxy's default configuration
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.* /var/log/haproxy.log
    #
    #log 127.0.0.1 local2
    maxconn 256

    # turn on stats unix socket
    stats socket /var/lib/openshift/{app's ssh username}/haproxy//run/stats level admin

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode http
    log global
    option httplog
    option dontlognull
    option http-server-close
    #option forwardfor except 127.0.0.0/8
    option redispatch
    retries 3
    timeout http-request 10s
    timeout queue 1m
    timeout connect 10s
    timeout client 1m
    timeout server 1m
    timeout http-keep-alive 10s
    timeout check 10s
    maxconn 128

listen stats 127.2.31.131:8080
    mode http
    stats enable
    stats uri /

listen express 127.2.31.130:8080
    cookie GEAR insert indirect nocache
    option httpchk GET /
    balance leastconn
    server local-gear 127.2.31.129:8080 check fall 2 rise 3 inter 2000 cookie local-{app's ssh username}
Often it seems like both sides of the application are up and running, but HAProxy isn't sending HTTP requests where we'd expect. From numerous questions asked about OpenShift, we know that this line:
option httpchk GET /
is HAProxy's sanity check to make sure the app is working, but often, whether that line is edited or removed, we'll still get something like this in HAProxy's logs:
[WARNING] 240/150442 (404099) : Server express/local-gear is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 240/150442 (404099) : proxy 'express' has no server available!
Yet inside the gears we often have our apps listening on $OPENSHIFT_CARTNAME_IP and $OPENSHIFT_CARTNAME_PORT, we can see they've started, and sometimes they're rejecting the sanity check:
ERROR [DST 127.2.31.129 sid=1] SHOUTcast 1 client connection rejected. Stream not available as there is no source connected. Agent: `'
A cut-and-dried manifest, like the one from the DIY cartridge:
Name: hws
Cartridge-Short-Name: HWS
Display-Name: Hello World of Scaling Apps
Description: A Scaling App on Openshift
Version: '0.1'
License: ASL 2.0
License-Url: http://www.apache.org/licenses/LICENSE-2.0.txt
Cartridge-Version: 0.0.10
Compatible-Versions:
- 0.0.10
Cartridge-Vendor: you
Vendor: you
Categories:
- web_framework
- experimental
Website:
Help-Topics:
  Getting Started: urltosomeinfo
Provides:
- hws-0.1
- hws
Publishes:
Subscribes:
  set-env:
    Type: ENV:*
    Required: false
Scaling:
  Min: 1
  Max: -1
Group-Overrides:
- components:
  - web-proxy
Endpoints:
- Private-IP-Name: IP
  Private-Port-Name: PORT
  Private-Port: 8080
  Public-Port-Name: PROXY_PORT
  Protocols:
  - http
  - ws
  Options:
    primary: true
  Mappings:
  - Frontend: ''
    Backend: ''
    Options:
      websocket: true
  - Frontend: "/health"
    Backend: ''
    Options:
      health: true
Start Hook (inside bin/control or in .openshift/action_hooks)
RESPONSE=$(curl -o /dev/null --silent --head --write-out '%{http_code}\n' "http://${OPENSHIFT_APP_DNS}:80")
echo "${RESPONSE}" > ${OPENSHIFT_DIY_LOG_DIR}/checkserver.log
echo ${RESPONSE}
if [ "${RESPONSE}" -eq "503" ]
then
nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/startfromscratch.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
else
nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/secondorfollowinggear.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
fi
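As a sketch of how to make that check less racy (a single probe can fire before HAProxy is up, and curl returns 000 on a refused connection, which also means "not ready"), a hypothetical helper, not part of the cartridge API, could retry before deciding which config to start with:

```shell
#!/bin/sh
# Hypothetical helper: poll a URL until it stops returning 503 (no server
# behind HAProxy yet) or 000 (curl could not connect at all), then report
# the last status code seen.
wait_for_gear() {
    url=$1
    tries=${2:-10}
    code=000
    i=0
    while [ "$i" -lt "$tries" ]; do
        code=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
        if [ "$code" != "503" ] && [ "$code" != "000" ]; then
            echo "$code"
            return 0    # the gear answered with a real status
        fi
        i=$((i + 1))
        [ "$i" -lt "$tries" ] && sleep 1
    done
    echo "$code"
    return 1            # still unreachable after all retries
}
```

For example, `wait_for_gear "http://${OPENSHIFT_APP_DNS}:80" 5` in place of the single curl above, branching on its exit status instead of the raw code.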
Stop Hook (inside bin/control or in .openshift/action_hooks)
kill `ps -ef | grep serverexec | grep -v grep | awk '{ print $2 }'` > /dev/null 2>&1
exit 0
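A hedged sketch of a stop hook that doesn't rely on a bare kill of a grep pipeline ("serverexec" here is the example binary name from the start hook above, not anything OpenShift provides), preferring SIGTERM before SIGKILL:

```shell
#!/bin/sh
# Sketch: stop every process matching a command-line pattern, politely first.
stop_by_pattern() {
    pattern=$1
    pids=$(pgrep -f "$pattern")
    [ -z "$pids" ] && return 0          # nothing running: stop is a no-op
    kill $pids 2>/dev/null              # polite SIGTERM first
    for _ in 1 2 3 4 5; do
        pgrep -f "$pattern" >/dev/null || return 0
        sleep 1
    done
    pkill -9 -f "$pattern" 2>/dev/null  # force anything that ignored TERM
    return 0
}

# in the stop hook:
# stop_by_pattern serverexec
# exit 0
```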
The helpful questions for new developers:
Avoiding a killer sanity check
Is there a way of configuring the app via manifest.yml to avoid these collisions? Or, vice versa, is there a small tweak to the default HAProxy configuration so that the app will run at appname-appdomain.rhcloud.com:80/ without returning 503 errors?
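One small tweak that sometimes helps (a sketch, not a generated default; `/health` is a hypothetical lightweight endpoint your cartridge would have to serve) is pointing the check at a URL that answers even when the app's main function, such as a stream, isn't ready:

```
listen express 127.2.31.130:8080
    cookie GEAR insert indirect nocache
    # check a cheap status endpoint instead of the app root,
    # and send the Host header some servers require
    option httpchk GET /health HTTP/1.1\r\nHost:\ localhost
    balance leastconn
    server local-gear 127.2.31.129:8080 check fall 2 rise 3 inter 2000 cookie local-{app's ssh username}
```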
Setting up more convenient access to the app
My SHOUTcast example, as the error hints, works as long as I'm streaming to it first. What additions to the manifest and the HAProxy config would let a user connect directly (from an external URL) to the first gear's port 80, instead of port-forwarding into the app all the time?
Making sure the app starts and stops as if it weren't scaled
Lastly, many non-scaled applications have a quick and easy script to start up and shut down, because OpenShift accounts for the fact that the app only has to have the first gear running. How would a stop action hook be adjusted to run through and stop all the gears? What would have to be added to the start action hook to bring the first gear back online with all of its components (not just HAProxy)?
Related
When I run HAProxy with
haproxy.exe -f haproxy.cfg -d
I get an error:
'''
Available polling systems :
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 2 (1 usable), will use poll.
Available filters :
[SPOE] spoe
[CACHE] cache
[FCGI] fcgi-app
[COMP] compression
[TRACE] trace
Using poll() as the polling mechanism.
[NOTICE] (1036) : haproxy version is 2.4.0-6cbbecf
[ALERT] (1036) : Starting proxy warelucent: cannot bind socket (Address already in use) [0.0.0.0:5672]
[ALERT] (1036) : [haproxy.main()] Some protocols failed to start their listeners! Exiting.
'''
At the moment no other services are running, and I have the RabbitMQ service up.
My haproxy.cfg file is as follows:
'''
#logging options
global
    log 127.0.0.1 local0 info
    maxconn 1500
    daemon
    quiet
    nbproc 20

defaults
    log global
    mode tcp
    # if you set mode to tcp, use tcplog instead of httplog
    option tcplog
    option dontlognull
    retries 3
    option redispatch
    maxconn 2000
    timeout connect 10s
    timeout client 10s
    timeout server 10s

#front-end IP for consumers and producers
listen warelucent
    bind 0.0.0.0:5672
    # TCP mode
    mode tcp
    #balance url_param userid
    #balance url_param session_id check_post 64
    #balance hdr(User-Agent)
    #balance hdr(host)
    #balance hdr(Host) use_domain_only
    #balance rdp-cookie
    #balance leastconn
    #balance source // by client IP
    # simple round-robin
    balance roundrobin
    server one 1.1.1.1:5672 check inter 5000 rise 2 fall 2
    server two 2.2.2.2:5672 check inter 5000 rise 2 fall 2
    server three 3.3.3.3:5672 check inter 5000 rise 2 fall 2

listen stats
    bind 127.0.0.1:8100
    mode http
    option httplog
    stats enable
    stats uri /rabbitmq-stats
    stats refresh 5s
'''
Most answers on the Internet blame the version, but I checked the official website and mine is the latest, and I have also started the RabbitMQ service, so at present I don't know where the error is.
(Address already in use) [0.0.0.0:5672]
It means that port 5672 (RabbitMQ's default port) is already in use. Most likely you have a RabbitMQ node running on that machine.
So just change the HAProxy port.
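For example (a sketch; 5673 is just an assumed free port on that host), move the proxy's listener off RabbitMQ's port:

```
listen warelucent
    # 5672 is taken by the local RabbitMQ node; bind the proxy elsewhere
    bind 0.0.0.0:5673
    mode tcp
    balance roundrobin
```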
If I run ps aux | grep kibana
It shows:
kibana 14993 36.7 7.8 1382596 312372 ? Ssl 14:24 0:10 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
If I run sudo systemctl status kibana.service
It shows:
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-02-27 14:24:06 CST; 6s ago
Main PID: 14993 (node)
Tasks: 11 (limit: 4574)
CGroup: /system.slice/kibana.service
└─14993 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /
Feb 27 14:24:06 aero systemd[1]: Started Kibana.
But if I run nmap, there is no Kibana port in the list:
PORT STATE SERVICE
22/tcp open ssh
631/tcp open ipp
1080/tcp open socks
6001/tcp open X11:1
9200/tcp open wap-wsp
65000/tcp open unknown
Here is my /etc/kibana/kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
#server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
#server.host: "localhost"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URLs of the Elasticsearch instances to use for all your queries.
#elasticsearch.hosts: ["http://localhost:9200"]
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
#kibana.defaultAppId: "home"
# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
#elasticsearch.username: "user"
#elasticsearch.password: "pass"
# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key
# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key
# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]
# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full
# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500
# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000
# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]
# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}
# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000
# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000
# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false
# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid
# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout
# Set the value of this setting to true to suppress all logging output.
#logging.silent: false
# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false
# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false
# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000
# Specifies locale to be used for all localizable strings, dates and number formats.
#i18n.locale: "en"
Of course, I could manually start Kibana with:
sudo /usr/share/kibana/node/bin/node /usr/share/kibana/src/cli -c /etc/kibana/kibana.yml
Try running:
netstat -an | grep 5601
to see which host:port Kibana has bound to.
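By default Kibana binds port 5601 on localhost only, which would explain a running process that an external nmap scan never lists. A minimal kibana.yml change (a sketch; exposing Kibana directly is only appropriate if access is otherwise controlled):

```
server.port: 5601
server.host: "0.0.0.0"   # listen on all interfaces, not just localhost
```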
I saw the same error when configuring SSL for the ELK stack and securely connecting Kibana with Elasticsearch.
I followed the steps here for the 8.x ELK version:
https://www.elastic.co/blog/configuring-ssl-tls-and-https-to-secure-elasticsearch-kibana-beats-and-logstash
The error occurred when launching Kibana at port 5601 from its public-facing URL for the first time. However, when I refreshed the browser, it prompted for a login/password, and I could load Kibana successfully at port 5601, like http://<X.X.X.X>:5601
My haproxy.cfg
global
    log 127.0.0.1 local0
    maxconn 20000
    user haproxy
    group haproxy
    stats socket /var/run/haproxy/haproxy.sock level admin
    stats timeout 2m

listen admin
    bind *:8080
    stats enable

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    # timeout http-request 5s
    timeout connect 5000
    timeout client 60000
    timeout server 60000

frontend http-in
    bind *:80
    default_backend monkey

backend monkey
    stats enable
    stats uri /haproxy?stats
    stats realm Strictly\ Private
    stats auth admin:hello
    stats auth Another_User:hi
    mode http
    compression algo gzip
    compression type text/html text/plain text/css application/json
    balance roundrobin
    option httpclose
    option forwardfor
    default-server inter 1s fall 1
    server cd-test-1 1.2.3.4:80 check
    server cd-test-2 5.6.7.8:80 check
I have been using socat to disable a node in an HAProxy cluster. Below is the command:
echo "disable server monkey/cd-test-1"| socat stdio /var/run/haproxy/haproxy.sock
The above disables my node in HAProxy. But if I use the IP address (1.2.3.4) instead of "cd-test-1", it returns "No such server."
I am using Ansible to automate this: I use {{inventory_hostname}} and delegate the command to my HAProxy server, hence the issue.
- name: Disable {{ inventory_hostname }} in haproxy and letting the services drain
  shell: echo "disable server monkey/{{inventory_hostname}}" | socat stdio /var/run/haproxy/haproxy.sock
  become_user: root
  delegate_to: "{{ item }}"
  with_items: groups.haproxy_backend
This returns "No such server." and moves along.
Can someone please help me find the issue with using the IP instead of the name of the server? I might be doing something very silly. Any help is appreciated.
When disabling and enabling HAProxy servers using socat, the server's name as written in the config (its alias) has to be used.
Otherwise, we will get a "No such server." error.
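As a sketch of one way to bridge an IP-keyed inventory to those names: `show stat` on the same socket emits CSV whose first two fields are the backend and server names, so the accepted `backend/server` pairs can be listed (the socket path is the one from the config above):

```shell
#!/bin/sh
# List the backend/server pairs the stats socket will accept in
# "disable server <backend>/<server>". Reads "show stat" CSV on stdin.
list_servers() {
    awk -F, 'NR > 1 && $2 != "" && $2 != "FRONTEND" && $2 != "BACKEND" { print $1 "/" $2 }'
}

# typical use:
# echo "show stat" | socat stdio /var/run/haproxy/haproxy.sock | list_servers
```

From there, a lookup mapping each server's address to its name (newer HAProxy versions expose addresses via `show servers state`; an assumption to verify for your release) could replace the raw IP in the Ansible task.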
I am trying to run haproxy with docker. I followed the instructions here :
https://hub.docker.com/_/haproxy/
I was able to build the Docker image, but I get an error when I try to run it using:
docker run -d --link another_container:another_container --name mc-ha -v haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro my_own_haproxy:latest
I get this error :
[ALERT] 298/054910 (1) : [haproxy.main()] No enabled listener found (check for 'bind' directives) ! Exiting.
I searched for it, but the only thing I found was the HAProxy source code.
Here is my haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

frontend esNodes
    bind *:8091
    mode http
    default_backend srNodes

backend srNodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web01 0.0.0.0:10903/project/es check
EDIT: Btw, I also tried changing the backend node URL to my Docker host IP, but still no luck.
Thanks to Michael's comment, I was able to solve the problem.
First I removed the haproxy command from the Dockerfile, and then I ran the haproxy command manually inside the container.
Voila! My config file was not a file. It's a directory. LOL
The problem was in my docker command's -v option.
I changed it to the full path:
-v FULL_PATH/customhaproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
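A sketch of why the full path matters: when the host side of -v is not an absolute path, Docker treats it as a named volume and mounts an empty directory at the target, which is exactly the "config file is a directory" symptom.

```shell
#!/bin/sh
# Build the absolute host path first; a relative path after -v is taken
# as a volume NAME, which is why haproxy.cfg showed up as a directory.
CFG="$(pwd)/haproxy.cfg"

# hypothetical run command from the question, now with an absolute path:
# docker run -d --link another_container:another_container --name mc-ha \
#   -v "$CFG":/usr/local/etc/haproxy/haproxy.cfg:ro my_own_haproxy:latest
```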
You will want to remove the daemon keyword from your haproxy.cfg: Docker needs a foreground process to be running, otherwise the container exits immediately.
I think the error message you are seeing appears because the container exits faster than HAProxy can bind to any ports.
I actually had to restart docker-machine (on OS X); otherwise, every time I ran the container with the volume mount option (I tried both absolute and relative paths), it mounted haproxy.cfg as a directory.
Check the permissions on your SSL material:
ls -l /etc/ssl/certs
ls -l /etc/ssl/private
chmod -R 400 /etc/ssl/private
Perhaps also change the permissions on the certs, but I'm not sure. Starting up haproxy with globally readable SSL keys is a very bad security practice, so it refuses to start completely.
I have this problem when reloading HAProxy using this command:
haproxy -D -f gateway.cfg -p /var/run/haproxy.pid -D -sf $(cat /var/run/haproxy.pid)
The error result:
[ALERT] 169/001728 (3844) : Starting frontend proxy: cannot bind socket
I have tried adding user root or Administrator in the config, but to no avail. The file permissions according to ls -la are "Administrator none". It makes me think HAProxy does not completely support Windows, and I wonder how the -sf/-st options work. (I tried on a Unix system and it works correctly.) The HAProxy config is shown below:
global
    daemon
    maxconn 1024
    pidfile /var/run/haproxy.pid

defaults
    log global
    mode http
    option httplog
    option dontlognull
    retries 3
    option redispatch
    contimeout 5000
    clitimeout 50000
    srvtimeout 50000

frontend proxy
    bind *:80
    default_backend servers

backend servers
    balance roundrobin
    option httpchk GET /
    option forwardfor
    option httpclose
    stats enable
    stats refresh 10s
    stats hide-version
    stats uri /admin?stats
    stats auth admin:admin
    stats realm Haproxy\ Statistics
    server svr0 127.0.0.1 check inter 5000
HAProxy generally does not support Windows, even under Cygwin. HAProxy contains very specific optimisations for Linux and a variety of UNIX systems, which make it very hard to run on Windows.
And even if you would somehow make it run, it would result in abysmal performance and would never get a stable or even moderately fast system. It just doesn't make any sense to run HAProxy on Windows and trying to deal with various emulation layers when you get great performance even out of a sub-1-Watt ARM box running on Linux.
You can run most HAProxy versions under Windows. Here is 1.4.24 compiled using Cygwin:
http://www.mediafire.com/download/7l4yg7fa5w185bo/haproxy.zip
You can use it for testing purposes, but you should avoid production with it; it's only there to let you develop under Windows with an easy transfer to Linux, for example...