CentOS 6.5 - haproxy fatal error - https

I'm getting this error on CentOS 6.5 with HA-Proxy version 1.4.24 2013/06/17.
Please advise how to make it work? I need to do HTTPS to localhost:8888, which is failing.
# service haproxy start
[ALERT] 238/084310 (24365) : parsing [/etc/haproxy/haproxy.cfg:18] : timeout 'tunnel': must be 'client', 'server', 'connect', 'check', 'queue', 'http-keep-alive', 'http-request' or 'tarpit'
[ALERT] 238/084310 (24365) : parsing [/etc/haproxy/haproxy.cfg:22] : 'redirect' expects 'code', 'prefix', 'location', 'set-cookie', 'clear-cookie', 'drop-query' or 'append-slash' (was 'scheme').
[ALERT] 238/084310 (24365) : parsing [/etc/haproxy/haproxy.cfg:24] : 'bind' only supports the 'transparent', 'defer-accept', 'name', 'id', 'mss' and 'interface' options.
[ALERT] 238/084310 (24365) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 238/084310 (24365) : Fatal errors found in configuration.
Errors in configuration file, check with haproxy check.
My config is:
global
log 127.0.0.1 local0 debug
maxconn 8000
user haproxy
group haproxy
defaults
log global
option httplog
option dontlognull
option http-server-close
option redispatch
retries 3
mode http
maxconn 5000
timeout connect 5s
timeout client 30s
timeout server 30s
timeout tunnel 12h
frontend www
bind :80
option forwardfor
redirect scheme https if !{ ssl_fc }
frontend lb
bind :443 ssl crt /etc/haproxy/sslkeys/cert.pem ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-RC4-SHA:ECDHE-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:RC4-SHA
option forwardfor
reqadd X-Forwarded-Proto:\ https
default_backend api
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
acl is_api hdr_beg(Host) -i api
use_backend ws if is_websocket
use_backend api if is_api
backend api
balance roundrobin
server service 127.0.0.1:5001 weight 1 maxconn 2500 check
backend ws
balance roundrobin
server service 127.0.0.1:5001 weight 1 maxconn 2500 check
EDIT: I upgraded to HAProxy 1.5-dev22 from http://silverdire.com/2014/03/19/haproxy-1-5-dev22-rpm-repo/
but it is still giving errors:
# service haproxy start
[ALERT] 238/085833 (25096) : parsing [/etc/haproxy/haproxy.cfg:22] : error detected in frontend 'www' while parsing redirect rule : error in condition: unknown fetch method 'ssl_fc' in ACL expression 'ssl_fc'.
[ALERT] 238/085833 (25096) : parsing [/etc/haproxy/haproxy.cfg:24] : 'bind :9999' unknown keyword 'ssl'. Registered keywords :
[ TCP] defer-accept
[ TCP] interface <arg>
[ TCP] mss <arg>
[ TCP] transparent
[ TCP] v4v6
[ TCP] v6only
[STAT] level <arg>
[UNIX] gid <arg>
[UNIX] group <arg>
[UNIX] mode <arg>
[UNIX] uid <arg>
[UNIX] user <arg>
[ ALL] accept-proxy
[ ALL] backlog <arg>
[ ALL] id <arg>
[ ALL] maxconn <arg>
[ ALL] name <arg>
[ ALL] nice <arg>
[ALERT] 238/085833 (25096) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
[ALERT] 238/085833 (25096) : Fatal errors found in configuration.
Errors found in configuration file, check it with 'haproxy check'.

The problem is that HAProxy was not compiled with SSL support (the steps below also work on CentOS 7). To resolve it:
Step 1
yum remove haproxy
yum install openssl-devel pcre-devel
OR
apt-get install libssl-dev libpcre3
Step 2
Install HAProxy stable:
$ wget http://www.haproxy.org/download/1.5/src/haproxy-1.5.3.tar.gz
$ tar -xvzf haproxy-1.5.3.tar.gz -C /var/tmp
$ cd /var/tmp/haproxy-1.5.3
$ make TARGET=linux2628 USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1 USE_CRYPT_H=1 USE_LIBCRYPT=1
$ make install
$ ./haproxy -vv
HA-Proxy version 1.5.3 2014/07/25
Copyright 2000-2014 Willy Tarreau <w@1wt.eu>
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing
OPTIONS = USE_LIBCRYPT=1 USE_CRYPT_H=1 USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 8192, maxpollevents = 200
Encrypted password support via crypt(3): yes
Built with zlib version : 1.2.3
Compression algorithms supported : identity, deflate, gzip
Built with OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
Running on OpenSSL version : OpenSSL 1.0.1e-fips 11 Feb 2013
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports prefer-server-ciphers : yes
Built with PCRE version : 7.8 2008-09-05
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Step 3
SSL is now supported
$ ./haproxy -f configfile.cfg
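To validate a configuration without starting the proxy, the check mode can be used (a quick sketch):
$ ./haproxy -c -f configfile.cfg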
NOTE:
/etc/haproxy/sslkeys/cert.pem: this file should contain the private key, the certificate, and (optionally) the intermediate certificate, concatenated into one file, as sketched below.
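A minimal sketch of building that combined PEM, assuming hypothetical input file names (server.key, server.crt, intermediate.crt):
$ cat server.key server.crt intermediate.crt > /etc/haproxy/sslkeys/cert.pem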

I would recommend using rpm-build instead, at least on RHEL-like distros.
Prepare environment
# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
# haproxy -v
HA-Proxy version 1.5.18 2016/05/10
Copyright 2000-2016 Willy Tarreau <willy@haproxy.org>
# yum install rpm-build make gcc-c++ openssl-devel pcre-devel
# cd /root/ && wget https://www.haproxy.org/download/1.8/src/haproxy-1.8.1.tar.gz
Build the package
# USE_ZLIB=1 USE_LIBCRYPT=1 USE_OPENSSL=1 rpmbuild -ta haproxy-1.8.1.tar.gz
Executing(%prep): /bin/sh -e /var/tmp/rpm-tmp.I61pDI
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd /root/rpmbuild/BUILD
+ rm -rf haproxy-1.8.1
+ /usr/bin/gzip -dc /root/haproxy-1.8.1.tar.gz
+ /usr/bin/tar -xf -
...
Checking for unpackaged file(s): /usr/lib/rpm/check-files /root/rpmbuild/BUILDROOT/haproxy-1.8.1-1.x86_64
Wrote: /root/rpmbuild/SRPMS/haproxy-1.8.1-1.src.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/haproxy-1.8.1-1.x86_64.rpm
Wrote: /root/rpmbuild/RPMS/x86_64/haproxy-debuginfo-1.8.1-1.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.jo5GXH
+ umask 022
+ cd /root/rpmbuild/BUILD
+ cd haproxy-1.8.1
+ '[' /root/rpmbuild/BUILDROOT/haproxy-1.8.1-1.x86_64 '!=' / ']'
+ /usr/bin/rm -rf /root/rpmbuild/BUILDROOT/haproxy-1.8.1-1.x86_64
+ exit 0
Install/upgrade the package
# rpm -Uvh /root/rpmbuild/RPMS/x86_64/haproxy-1.8.1-1.x86_64.rpm
Preparing... ################################# [100%]
Updating / installing...
1:haproxy-1.8.1-1 ################################# [ 50%]
Cleaning up / removing...
2:haproxy-1.5.18-6.el7 ################################# [100%]
Check output
# haproxy -vv
HA-Proxy version 1.8.1 2017/12/03
Copyright 2000-2017 Willy Tarreau <willy@haproxy.org>
Build options :
TARGET = linux26
CPU = generic
CC = gcc
CFLAGS = -m64 -march=x86-64 -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -Wno-unused-label
OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_PCRE=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
Running on OpenSSL version : OpenSSL 1.0.2k-fips 26 Jan 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : SSLv3 TLSv1.0 TLSv1.1 TLSv1.2
Built with transparent proxy support using: IP_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with PCRE version : 8.32 2012-11-30
Running on PCRE version : 8.32 2012-11-30
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.7
Running on zlib version : 1.2.7
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result OK
Total: 3 (3 usable), will use epoll.
Available filters :
[SPOE] spoe
[COMP] compression
[TRACE] trace
For systemd-based systems, you should install the systemd-devel package and pass the USE_SYSTEMD=1 option.
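A sketch of that systemd-aware build, assuming the same tarball and the same spec behavior as the build above:
# yum install systemd-devel
# USE_ZLIB=1 USE_LIBCRYPT=1 USE_OPENSSL=1 USE_SYSTEMD=1 rpmbuild -ta haproxy-1.8.1.tar.gz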

Related

HAProxy no such ACL : 'ssl_fc'

I'm using HAProxy in Docker:
FROM haproxy:1.8.9
My configuration looks like this:
global
maxconn 256
log 127.0.0.1 local0
nbproc 1
defaults
log global
mode http
log-format frontend:%f/%H/%fi:%fp\ client:%ci:%cp\ GMT:%T\ body:%[capture.req.hdr(0)]\ request:%r
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
timeout queue 60000ms
timeout http-request 15000ms
timeout http-keep-alive 15000ms
option redispatch
option forwardfor
option http-server-close
# option httplog
# option dontlognull
frontend http-in
bind *:80
bind *:443 ssl crt /secrets/server.pem
redirect scheme https if !{ ssl_fc }
mode http
default_backend splunk_servers
During startup I'm getting:
parsing [/usr/local/etc/haproxy/haproxy.cfg:26] : error detected in frontend 'http-in' while parsing redirect rule : error in condition: no such ACL : 'ssl_fc'
Do you know why?
According to the HAProxy docs this ACL should be available.
When I run it with the -vv option, the output looks like this:
Build options :
TARGET = linux2628
CPU = generic
CC = gcc
CFLAGS = -O2 -g -fno-strict-aliasing -Wdeclaration-after-statement -fwrapv -fno-strict-overflow -Wno-null-dereference -Wno-unused-label
OPTIONS = USE_ZLIB=1 USE_OPENSSL=1 USE_LUA=1 USE_PCRE=1
Default settings :
maxconn = 2000, bufsize = 16384, maxrewrite = 1024, maxpollevents = 200
Built with OpenSSL version : OpenSSL 1.1.0f 25 May 2017
Running on OpenSSL version : OpenSSL 1.1.0f 25 May 2017
OpenSSL library supports TLS extensions : yes
OpenSSL library supports SNI : yes
OpenSSL library supports : TLSv1.0 TLSv1.1 TLSv1.2
Built with Lua version : Lua 5.3.3
Built with transparent proxy support using: IP_TRANSPARENT IPV6_TRANSPARENT IP_FREEBIND
Encrypted password support via crypt(3): yes
Built with multi-threading support.
Built with PCRE version : 8.39 2016-06-14
Running on PCRE version : 8.39 2016-06-14
PCRE library supports JIT : no (USE_PCRE_JIT not set)
Built with zlib version : 1.2.8
Running on zlib version : 1.2.8
Compression algorithms supported : identity("identity"), deflate("deflate"), raw-deflate("deflate"), gzip("gzip")
Built with network namespace support.
It looks like HAProxy is built with SSL, so what's wrong?

Environment variable `no_proxy` ineffective on `setup.py upload`

I am trying to upload a Python artifact to a custom Nexus repository located on my company network, and it doesn't work:
$ python2 setup.py bdist_wheel --universal upload -r nexus
running bdist_wheel
running build
running build_py
running build_scripts
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64/wheel
. . .
. . .
. . .
running upload
Submitting /home/<user>/dist/processing_catalog-0.1.1-py2.py3-none-any.whl to http://nexus3.mycompany.net/repository/pip/
Upload failed (503): Service Unavailable
error: Upload failed (503): Service Unavailable
I am behind a corporate HTTP proxy (10.200.1.1:3128). However, the Nexus repository is on the same network, so I "no-proxied" the company's local domain. Here are my proxy environment variables:
$ env | grep -Fi proxy
NO_PROXY=localhost,127.0.0.0/8,::1,.mycompany.net
http_proxy=http://10.200.1.1:3128
https_proxy=http://10.200.1.1:3128
HTTPS_PROXY=http://http://10.200.1.1:3128/
no_proxy=localhost,127.0.0.1,.mycompany.net
HTTP_PROXY=http://http://10.200.1.1:3128/
A tcpdump while performing the upload command shows that the flow goes out to the proxy:
$ sudo tcpdump -Q out -i eth0 -nn dst host 10.200.1.1 and dst port 3128
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
11:36:58.537988 IP 10.100.59.19.38048 > 10.200.1.1.3128: Flags [S], seq 3601157597, win 29200, options [mss 1460,sackOK,TS val 1913260691 ecr 0,nop,wscale 7], length 0
11:36:58.539684 IP 10.100.59.19.38048 > 10.200.1.1.3128: Flags [.], ack 3499932568, win 229, options [nop,nop,TS val 1913260692 ecr 2310337451], length 0
11:36:58.539746 IP 10.100.59.19.38048 > 10.200.1.1.3128: Flags [.], seq 0:7240, ack 1, win 229, options [nop,nop,TS val 1913260692 ecr 2310337451], length 7240
11:36:58.539753 IP 10.100.59.19.38048 > 10.200.1.1.3128: Flags [.], seq 7240:14480, ack 1, win 229, options [nop,nop,TS val 1913260692 ecr 2310337451], length 7240
11:36:58.542743 IP 10.100.59.19.38048 > 10.200.1.1.3128: Flags [P.], seq 14480:16847, ack 1, win 229, options [nop,nop,TS val 1913260695 ecr 2310337454], length 2367
11:36:58.567501 IP 10.100.59.19.38048 > 10.200.1.1.3128: Flags [.], ack 4009, win 291, options [nop,nop,TS val 1913260720 ecr 2310337476], length 0
11:36:58.567866 IP 10.100.59.19.38048 > 10.200.1.1.3128: Flags [F.], seq 16847, ack 4010, win 291, options [nop,nop,TS val 1913260721 ecr 2310337476], length 0
^C
7 packets captured
7 packets received by filter
0 packets dropped by kernel
This is consistent with the fact that no logs are received on the Nexus server side when performing the upload.
My upload configuration:
$ cat ~/.pypirc
[distutils]
index-servers =
nexus
[nexus]
repository: http://nexus3.mycompany.net/repository/pip/
username:
password:
$ sudo cat /etc/pip.conf
[global]
timeout = 60
trusted-host = pypi.python.org
nexus3.mycompany.net
index-url = https://pypi.python.org/simple
extra-index-url = http://nexus3.mycompany.net/repository/pip/simple
On the other hand, a pip download works just fine:
$ pip download processing-catalog
Collecting processing-catalog
Downloading http://nexus3.mycompany.net/repository/pip/packages/processing-catalog/0.1.2/processing_catalog-0.1.2-py2.py3-none-any.whl
. . .
. . .
. . .
Successfully downloaded processing-catalog . . .
Same goes for a simple curl:
$ curl -sv -u <username>:<password> http://nexus3.mycompany.net/repository/pip/packages/processing-catalog/0.1.1/processing_catalog-0.1.1-py2.py3-none-any.whl -o processing_catalog-0.1.1-py2.py3-none-any.whl
* Trying 10.100.58.110...
* Connected to nexus3.mycompany.net (10.100.58.110) port 80 (#0)
* Server auth using Basic with user '<username>'
> GET /repository/pip/packages/processing-catalog/0.1.1/processing_catalog-0.1.1-py2.py3-none-any.whl HTTP/1.1
> Host: nexus3.mycompany.net
> Authorization: Basic <token>
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Tue, 27 Feb 2018 10:51:47 GMT
< Server: Nexus/3.6.2-01 (OSS)
< X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
< Last-Modified: Tue, 27 Feb 2018 10:11:01 GMT
< Content-Type: application/zip
< Content-Length: 14229
<
{ [4096 bytes data]
* Connection #0 to host nexus3.mycompany.net left intact
So, why does setup.py upload go out through the proxy?
Answering my own question
I think that, unlike curl or pip, setup.py upload cannot handle a domain that starts with a . in the no_proxy environment variable. See the two runs below, and the workaround sketched after them:
No . at the beginning of the domain in the no_proxy variable: it works
$ no_proxy=mycompany.net python2 setup.py bdist_wheel --universal upload -r nexus
running bdist_wheel
running build
running build_py
running build_scripts
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/processing_catalog
. . .
. . .
. . .
running upload
Submitting /home/<user>/dist/processing_catalog-0.1.1-py2.py3-none-any.whl to http://nexus3.mycompany.net/repository/pip/
Server response (200): OK
. at the beginning of the domain in the no_proxy variable: it doesn't work
$ no_proxy=.mycompany.net python2 setup.py bdist_wheel --universal upload -r nexus
running bdist_wheel
running build
running build_py
running build_scripts
installing to build/bdist.linux-x86_64/wheel
running install
running install_lib
creating build/bdist.linux-x86_64/wheel
creating build/bdist.linux-x86_64/wheel/processing_catalog
. . .
. . .
. . .
running upload
Submitting /home/<user>/dist/processing_catalog-0.1.1-py2.py3-none-any.whl to http://nexus3.mycompany.net/repository/pip/
Upload failed (503): Service Unavailable
error: Upload failed (503): Service Unavailable
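Based on that, a simple workaround is to keep the domain without the leading dot in the environment (a sketch; persist it in your shell profile as needed):
$ export no_proxy=localhost,127.0.0.1,mycompany.net
$ python2 setup.py bdist_wheel --universal upload -r nexus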

SNMP not working on Amazon server

I'm trying to monitor an Amazon server from my local server. I installed and configured snmpd, but I can't reach the Amazon server from my local server using snmpwalk.
I check it with this command:
snmpwalk -Os -c public -v 2c XX.XX.XX.XX
From the Amazon server itself SNMP works correctly, but from the local server it does not work.
The SNMP configuration on the Amazon server is:
agentAddress udp:0.0.0.0:161
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
rocommunity public 0.0.0.0
The configuration in /etc/default/snmp is:
SNMPDOPTS='-Lsd -Lf /dev/null -u snmp -g snmp -I -smux -p /var/run/snmpd.pid'
And in the Amazon server's security group I opened UDP to anywhere:
Custom UDP Rule - UDP - 161 - 0.0.0.0/0
-- Netstat result:
root## netstat -an | grep 161
udp 0 0 0.0.0.0:161 0.0.0.0:*
-- In my firewall I added this rule:
# cat rules | grep 161
ACCEPT loc:ip_local_server net:ip_amazon_server udp 161
I don't know what else to check. Any suggestions?
Thank you!
Running tcpdump on the Amazon server, I see this when I launch snmpwalk from my local server:
17:38:23.591513 IP 1-1-1-1.ea.com.35403 > .snmp: GetNextRequest(25)
17:38:23.591690 IP .snmp > 1-1-1-1.ea.com.35403: GetResponse(114) system.sysDescr.0="Linux ip-17-3-2-2 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64"
17:38:24.592491 IP 1-1-1-1.ea.com.35403 > .snmp: GetNextRequest(25)
But the result on the local server is:
Timeout: No Response from 1.1.1.1
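A diagnostic sketch that could help narrow this down: capture on the local server as well, to see whether the GetResponse packets ever arrive back (assumes interface eth0 and reuses the XX.XX.XX.XX placeholder for the Amazon server's IP):
# tcpdump -i eth0 -nn udp and src host XX.XX.XX.XX and src port 161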

squid ssl-bump 3.5.4: error - Error negotiating SSL connection on FD 10: Success (0)

I am trying to install Squid 3.5.4 (in Docker, running Debian 8) and run it in ssl-bump mode.
Compilation:
./configure --prefix=/opt/squid --srcdir=. --disable-maintainer-mode \
--disable-dependency-tracking --disable-silent-rules --enable-inline\
--disable-arch-native --enable-async-io=8 \
--enable-storeio=ufs,aufs,diskd,rock \
--enable-removal-policies=lru,heap --enable-delay-pools \
--enable-cache-digests --enable-icap-client \
--enable-follow-x-forwarded-for \
--enable-auth-basic=DB,fake,getpwnam,LDAP,MSNT-multi-domain,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB \
--enable-auth-digest=file,LDAP \
--enable-auth-negotiate=kerberos,wrapper \
--enable-auth-ntlm=fake,smb_lm \
--enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,unix_group,wbinfo_group \
--enable-url-rewrite-helpers=fake --enable-eui \
--enable-esi --enable-icmp --enable-zph-qos \
--disable-translation --with-filedescriptors=65536 \
--with-large-files --with-default-user=squid \
--enable-linux-netfilter \
CFLAGS="-g -O2 -fPIE -Wall" LDFLAGS="-fPIE -pie -Wl,-z,relro -Wl,-z,now" CPPFLAGS="-D_FORTIFY_SOURCE=2" \
CXXFLAGS="-g -O2 -fPIE " --enable-ssl --with-openssl --enable-ssl-crtd
Changed configuration (squid.conf; the rest is default):
# Squid normally listens to port 3128 \
http_port 9090
sslcrtd_program /opt/squid/libexec/ssl_crtd -s /opt/squid/var/lib/ssl_db -M 4MB
https_port 8080 intercept ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB key=/opt/squid/certs/private.pem cert=/opt/squid/certs/public.pem
### New configuration for Squid version 3.5
acl step1 at_step SslBump1
ssl_bump peek step1
ssl_bump bump all
### New config ends
sslproxy_capath /etc/ssl/certs
sslproxy_cert_error allow all
always_direct allow all
sslproxy_flags DONT_VERIFY_PEER
Generated certificates:
openssl req -new -newkey rsa:2048 -sha256 -days 365 -nodes -x509 -keyout private.pem -out public.pem
Generate the Squid certificate database directory and change its ownership:
/opt/squid/libexec/ssl_crtd -c -s /opt/squid/var/lib/ssl_db -M 4MB
chown -R squid:squid /opt/squid/var/lib/ssl_db
CA root certs are present in the default path:
squid#525f5d9c759a:/opt/squid/certs$ ls -lsthr /etc/ssl/certs | wc -l
741
I am testing this configuration using HTTP CONNECT, with the proxy configured directly in the browser.
ISSUE:
I get the following error when the browser request hits the proxy:
8zjv9ksCWknblqfZ3rjWczvKNRboHpu940olZAbvSP0JWSXhFfRRTIsHIHD2/rt/
n5/qsURq/WLodLffFxuk+bLVTDZu
-----END PRIVATE KEY-----
2015/05/04 15:13:46.468 kid1| client_side.cc(3981) sslCrtdHandleReply: Certificate for 172.17.0.7 was successfully recieved from ssl_crtd
2015/05/04 15:13:46.468 kid1| client_side.cc(3664) httpsCreate: will negotate SSL on local=172.17.0.7:2222 remote=172.17.42.1:40686 FD 10 flags=33
2015/05/04 15:13:46.468 kid1| AsyncCall.cc(26) AsyncCall: The AsyncCall ConnStateData::requestTimeout constructed, this=0x7f0357a16c10 [call105]
2015/05/04 15:13:46.468 kid1| Error negotiating SSL connection on FD 10: Success (0)
2015/05/04 15:13:46.468 kid1| AsyncCall.cc(93) ScheduleCall: comm.cc(730) will call ConnStateData::connStateClosed(FD -1, data=0x7f03575d43b8) [call95]
2015/05/04 15:13:46.468 kid1| AsyncCallQueue.cc(55) fireNext: entering ConnStateData::connStateClosed(FD -1, data=0x7f03575d43b8)
2015/05/04 15:13:46.468 kid1| AsyncCall.cc(38) make: make call ConnStateData::connStateClosed [call95]
2015/05/05 10:00:25| pinger: Initialising ICMP pinger ...
2015/05/05 10:00:25| icmp_sock: (1) Operation not permitted
2015/05/05 10:00:25| pinger: Unable to start ICMP pinger.
2015/05/05 10:00:25| icmp_sock: (1) Operation not permitted
2015/05/05 10:00:25| pinger: Unable to start ICMPv6 pinger.
2015/05/05 10:00:25| FATAL: pinger: Unable to open any ICMP sockets.
Sending a curl request shows this:
curl --proxy https://localhost:8080 -w '\n' https://google.com -v
* Rebuilt URL to: https://google.com/
* Trying ::1...
* Connected to localhost (::1) port 8080 (#0)
* Establish HTTP proxy tunnel to google.com:443
> CONNECT google.com:443 HTTP/1.1
> Host: google.com:443
> User-Agent: curl/7.42.0
> Proxy-Connection: Keep-Alive
>
* Proxy CONNECT aborted
* Connection #0 to host localhost left intact
curl: (56) Proxy CONNECT aborted
Can anyone help with this?
Response received on the Squid mailing list:
http://squid-web-proxy-cache.1019090.n4.nabble.com/Error-negotiating-SSL-connection-on-FD-12-Success-td4671090.html
Summary: use http_port for handling requests from browsers that have the proxy configured explicitly.
Use https_port with ssl-bump and the corresponding "intercept" or "tproxy" flag for transparent mode.
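A hedged sketch of the explicit-proxy variant, reusing the certificate paths and ssl_crtd settings from the configuration above (option names as documented for Squid 3.5; verify against your build):
http_port 9090 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=4MB cert=/opt/squid/certs/public.pem key=/opt/squid/certs/private.pem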

What is the proper setup for HAProxy with OpenShift custom cartridges?

It seems that many developers trying to move from non-scaled apps (like the diy cartridge) to scaled versions of their apps, myself included, are having trouble configuring their cartridges to interact properly with the default HAProxy configuration created by OpenShift, and getting their start and stop action hooks to deal with the scaling portions of their app. Most often that's because we're new and don't quite understand what OpenShift's default HAProxy configuration does...
HAProxy's default configuration
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
#log 127.0.0.1 local2
maxconn 256
# turn on stats unix socket
stats socket /var/lib/openshift/{app's ssh username}/haproxy//run/stats level admin
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
#option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 128
listen stats 127.2.31.131:8080
mode http
stats enable
stats uri /
listen express 127.2.31.130:8080
cookie GEAR insert indirect nocache
option httpchk GET /
balance leastconn
server local-gear 127.2.31.129:8080 check fall 2 rise 3 inter 2000 cookie local-{app's ssh username}
Often it seems like both sides of the application are up and running, but HAProxy isn't sending HTTP requests where we'd expect. From the numerous questions asked about OpenShift, we know that this line:
option httpchk GET /
is HAProxy's health check to make sure the app is working. But oftentimes, whether that line is edited or removed, we'll still get something like this in HAProxy's logs:
[WARNING] 240/150442 (404099) : Server express/local-gear is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 240/150442 (404099) : proxy 'express' has no server available!
Yet inside the gears our apps are often listening on $OPENSHIFT_CARTNAME_IP and $OPENSHIFT_CARTNAME_PORT; we can see they've started, and sometimes they are rejecting the health check:
ERROR [DST 127.2.31.129 sid=1] SHOUTcast 1 client connection rejected. Stream not available as there is no source connected. Agent: `'
A cut-and-dried manifest, like the one from the diy cartridge:
Name: hws
Cartridge-Short-Name: HWS
Display-Name: Hello World of Scaling Apps
Description: A Scaling App on Openshift
Version: '0.1'
License: ASL 2.0
License-Url: http://www.apache.org/licenses/LICENSE-2.0.txt
Cartridge-Version: 0.0.10
Compatible-Versions:
- 0.0.10
Cartridge-Vendor: you
Vendor: you
Categories:
- web_framework
- experimental
Website:
Help-Topics:
Getting Started: urltosomeinfo
Provides:
- hws-0.1
- hws
Publishes:
Subscribes:
set-env:
Type: ENV:*
Required: false
Scaling:
Min: 1
Max: -1
Group-Overrides:
- components:
- web-proxy
Endpoints:
- Private-IP-Name: IP
Private-Port-Name: PORT
Private-Port: 8080
Public-Port-Name: PROXY_PORT
Protocols:
- http
- ws
Options:
primary: true
Mappings:
- Frontend: ''
Backend: ''
Options:
websocket: true
- Frontend: "/health"
Backend: ''
Options:
health: true
Start Hook (inside bin/control or in .openshift/action_hooks)
RESPONSE=$(curl -o /dev/null --silent --head --write-out '%{http_code}\n' "http://${OPENSHIFT_APP_DNS}:80")
echo "${RESPONSE}" > ${OPENSHIFT_DIY_LOG_DIR}/checkserver.log
echo ${RESPONSE}
if [ "${RESPONSE}" -eq "503" ]
then
nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/startfromscratch.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
else
nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/secondorfollowinggear.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
fi
Stop Hook (inside bin/control or in .openshift/action_hooks)
kill `ps -ef | grep serverexec | grep -v grep | awk '{ print $2 }'` > /dev/null 2>&1
exit 0
The helpful questions for new developers:
Avoiding a killer sanity check
Is there a way of configuring the app using the manifest.yml to avoid these collisions? Or, vice versa, is there a small tweak to the default HAProxy configuration so that the app will run on appname-appdomain.rhcloud.com:80/ without returning 503 errors?
Setting up more convenient access to the app
My SHOUTcast example, as hinted at by the error, works as long as I'm streaming to it first. What additional parts of the manifest and HAProxy configuration would let a user connect directly (from an external URL) to the first gear's port 80, as opposed to port-forwarding into the app all the time?
Making sure the app starts and stops as if it weren't scaled
Lastly, many non-scaled applications have a quick and easy script to start up and shut down, because OpenShift seems to account for the fact that the app has to have the first gear running. How would a stop action hook be adjusted to run through and stop all the gears? What would have to be added to the start action hook to get the first gear back online with all of its components (not just HAProxy)?
