Laradock is a set of Laravel-related Docker images (services) that can be used to get a Laravel project up and running. However, I can't get Redis working out of the box: it has authentication issues no matter what configuration I try. I also switched from Redis v7 to v6 and copied someone else's config file, with no luck.
https://laradock.io/
docker-compose up -d apache2 mysql php-fpm redis
Also "protected-mode no" doesn't even appear to work at all.
CACHE_DRIVER=redis
QUEUE_CONNECTION=redis
SESSION_DRIVER=redis
SESSION_LIFETIME=480
SESSION_SAME_SITE=null
REDIS_HOST=redis
REDIS_PASSWORD=null
REDIS_PORT=6379
The default config is above; the connection itself (host, port) seems to be fine.
REDIS_PASSWORD=null
results in
NOAUTH Authentication required.
and
REDIS_PASSWORD=foobared
results in
WRONGPASS invalid username-password pair
I am unable to run config:clear because it keeps throwing the same error.
This is strange, because Laradock is basically made for Laravel yet doesn't appear to work with a default setup. Maybe our project code uses a different auth type/method? But I don't see it mentioned anywhere whether this is configurable or not.
I didn't really change much in any of the Dockerfiles or configurations; I just added a standard Apache config.
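For reference, a quick way to probe what the running Redis container actually enforces (a sketch; it assumes the Laradock service name redis and the config path used by the Dockerfile below):
docker-compose exec redis redis-cli ping
# "NOAUTH Authentication required." here means the running server has requirepass set
docker-compose exec redis redis-cli -a foobared ping
# "PONG" here means the running server really does accept foobared
docker-compose exec redis grep -E 'requirepass|bind|protected-mode' /usr/local/etc/redis/redis.conf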
redis.conf
bind 127.0.0.1
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
save 900 1
save 300 10
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-disable-tcp-nodelay no
replica-priority 100
requirepass foobared
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
appendonly no
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
redis Dockerfile
FROM redis:6.0.16
MAINTAINER Mahmoud Zalt <mahmoud@zalt.me>
RUN mkdir -p /usr/local/etc/redis
COPY redis.conf /usr/local/etc/redis/redis.conf
VOLUME /data
EXPOSE 6379
CMD ["redis-server", "/usr/local/etc/redis/redis.conf"]
#CMD ["redis-server"]
Related
I'm experiencing intermittent failures to respond when making an outbound connection such as an RPC call. My application (Java) logs it like this:
org.apache.http.NoHttpResponseException: RPC_SERVER.com:443 failed to respond !
Outbound connection flow
Kubernetes Node -> ELB for internal NGINX -> internal NGINX ->[Upstream To]-> ELB RPC server -> RPC server instance
This problem does not occur on a plain EC2 instance (AWS).
I'm able to reproduce it on my localhost by doing this:
Run the main application, which acts as the client, on port 9200
Run the RPC server on port 9205
The client makes its connection to the server through port 9202
Run $ socat TCP4-LISTEN:9202,reuseaddr TCP4:localhost:9205, which listens on port 9202 and forwards to 9205 (the RPC server)
Add a rule to iptables using $ sudo iptables -A INPUT -p tcp --dport 9202 -j DROP
Trigger an RPC call, and it returns the same error message described above
Hypothesis
Caused by NAT on Kubernetes. As far as I know, NAT relies on conntrack, and conntrack can drop a TCP connection's state if it has been idle for some period of time; the client then assumes the connection is still established although it isn't. (Correct me if I'm wrong.)
I have also tried scaling kube-dns to 10 replicas, and the problem still occurs.
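One way to test that hypothesis is to watch the conntrack table on the node while a call sits idle (a sketch; it assumes the conntrack CLI is installed and that 443 is the RPC port from the log above):
# the number printed before ESTABLISHED is the remaining timeout, in seconds, for that entry
sudo conntrack -L -p tcp --dport 443 | grep ESTABLISHED | head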
Node Specification
Use calico as network plugin
$ sysctl -a | grep conntrack
net.netfilter.nf_conntrack_acct = 0
net.netfilter.nf_conntrack_buckets = 65536
net.netfilter.nf_conntrack_checksum = 1
net.netfilter.nf_conntrack_count = 1585
net.netfilter.nf_conntrack_events = 1
net.netfilter.nf_conntrack_expect_max = 1024
net.netfilter.nf_conntrack_generic_timeout = 600
net.netfilter.nf_conntrack_helper = 1
net.netfilter.nf_conntrack_icmp_timeout = 30
net.netfilter.nf_conntrack_log_invalid = 0
net.netfilter.nf_conntrack_max = 262144
net.netfilter.nf_conntrack_tcp_be_liberal = 0
net.netfilter.nf_conntrack_tcp_loose = 1
net.netfilter.nf_conntrack_tcp_max_retrans = 3
net.netfilter.nf_conntrack_tcp_timeout_close = 10
net.netfilter.nf_conntrack_tcp_timeout_close_wait = 3600
net.netfilter.nf_conntrack_tcp_timeout_established = 86400
net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_last_ack = 30
net.netfilter.nf_conntrack_tcp_timeout_max_retrans = 300
net.netfilter.nf_conntrack_tcp_timeout_syn_recv = 60
net.netfilter.nf_conntrack_tcp_timeout_syn_sent = 120
net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
net.netfilter.nf_conntrack_tcp_timeout_unacknowledged = 300
net.netfilter.nf_conntrack_timestamp = 0
net.netfilter.nf_conntrack_udp_timeout = 30
net.netfilter.nf_conntrack_udp_timeout_stream = 180
net.nf_conntrack_max = 262144
Kubelet config
[Service]
Restart=always
Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CLOUD_ARGS=--cloud-provider=aws"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CLOUD_ARGS
Kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.7", GitCommit:"8e1552342355496b62754e61ad5f802a0f3f1fa7", GitTreeState:"clean", BuildDate:"2017-09-28T23:56:03Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Kube-proxy Log
W1004 05:34:17.400700 8 server.go:190] WARNING: all flags other than --config, --write-config-to, and --cleanup-iptables are deprecated. Please begin using a config file ASAP.
I1004 05:34:17.405871 8 server.go:478] Using iptables Proxier.
W1004 05:34:17.414111 8 server.go:787] Failed to retrieve node info: nodes "ip-172-30-1-20" not found
W1004 05:34:17.414174 8 proxier.go:483] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I1004 05:34:17.414288 8 server.go:513] Tearing down userspace rules.
I1004 05:34:17.443472 8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_max' to 262144
I1004 05:34:17.443518 8 conntrack.go:52] Setting nf_conntrack_max to 262144
I1004 05:34:17.443555 8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1004 05:34:17.443584 8 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1004 05:34:17.443851 8 config.go:102] Starting endpoints config controller
I1004 05:34:17.443888 8 config.go:202] Starting service config controller
I1004 05:34:17.443890 8 controller_utils.go:994] Waiting for caches to sync for endpoints config controller
I1004 05:34:17.443916 8 controller_utils.go:994] Waiting for caches to sync for service config controller
I1004 05:34:17.544155 8 controller_utils.go:1001] Caches are synced for service config controller
I1004 05:34:17.544155 8 controller_utils.go:1001] Caches are synced for endpoints config controller
$ lsb_release -s -d
Ubuntu 16.04.3 LTS
Check the value of sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait inside the pod that contains your program. It is possible that the value on the node that you listed (3600) isn't the same as the value inside the pod.
If the value in the pod is too small (e.g. 60), and your Java client half-closes the TCP connection with a FIN when it finishes transmitting, but the response takes longer than the close_wait timeout to arrive, nf_conntrack will lose the connection state and your client program will not receive the response.
You may need to change the behavior of the client program to not use a TCP half-close, OR modify the value of net.netfilter.nf_conntrack_tcp_timeout_close_wait to be larger. See https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/.
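A minimal way to compare the two values (a sketch; the pod name is a placeholder):
# on the node
sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait
# inside the pod that runs the Java client
kubectl exec -it my-app-pod -- cat /proc/sys/net/netfilter/nf_conntrack_tcp_timeout_close_wait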
I'm trying to get ElasticSearch running with Laradock. ES looks to be supported out of the box with Laradock.
Here's my docker command (run from <project root>/laradock/):
docker-compose up -d nginx postgres redis beanstalkd elasticsearch
However, if I run docker ps, the elasticsearch container isn't running.
Both ports 9200 and 9300 are not consumed:
lsof -i :9200
Not sure why the elasticsearch container doesn't persist; it seems to just shut itself down.
Output of docker ps -a after running docker-compose up ...
http://pastebin.com/raw/ymfvLPLT
Condensed version:
IMAGE STATUS PORTS
laradock_nginx Up 36 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
laradock_elasticsearch Exited (137) 34 seconds ago
laradock_beanstalkd Up 37 seconds 0.0.0.0:11300->11300/tcp
laradock_php-fpm Up 38 seconds 9000/tcp
laradock_workspace Up 39 seconds 0.0.0.0:2222->22/tcp
tianon/true Exited (0) 41 seconds ago
laradock_postgres Up 41 seconds 0.0.0.0:5432->5432/tcp
laradock_redis Up 40 seconds 0.0.0.0:6379->6379/tcp
Output of docker events after running docker-compose up ...
http://pastebin.com/cE9bjs6i
Try to check logs first:
docker logs laradock_elasticsearch_1
(or another name of elasticsearch container)
In my case it was
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
I found the solution here;
namely, I ran the following on my Ubuntu machine:
sudo sysctl -w vm.max_map_count=262144
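To make the setting survive a reboot you can persist it (a sketch for a typical Linux host; the file name is arbitrary):
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
sudo sysctl --system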
I don't think the problem is related to Laradock, since Elasticsearch is supposed to run on its own. I would first check the memory:
open Docker Dashboard -> Settings -> Resources -> Advanced: and increase the memory.
check your machine's memory; Elasticsearch won't run if there is not enough memory available.
or:
open your docker-compose.yml file
increase mem_limit (e.g. mem_limit: 1g), then
docker-compose up -d --build elasticsearch
If it's still not working, remove all the images, update Laradock to the latest version, and set it up again.
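Exit code 137 usually means the process was killed with SIGKILL, often by the OOM killer, which fits the memory advice above. One way to confirm (a sketch; the container name may differ on your machine):
docker inspect -f 'exit={{.State.ExitCode}} oom-killed={{.State.OOMKilled}}' laradock_elasticsearch_1
free -h   # how much memory the Docker host actually has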
I am trying to run HAProxy with Docker. I followed the instructions here:
https://hub.docker.com/_/haproxy/
I was able to build the Docker image, but when I try to run it with
docker run -d --link another_container:another_container --name mc-ha -v haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro my_own_haproxy:latest
I get this error:
[ALERT] 298/054910 (1) : [haproxy.main()] No enabled listener found (check for 'bind' directives) ! Exiting.
I searched for the error, but the only thing I found was the HAProxy source code.
Here is my haproxy.cfg
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
# For more information, see ciphers(1SSL).
ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
errorfile 400 /etc/haproxy/errors/400.http
errorfile 403 /etc/haproxy/errors/403.http
errorfile 408 /etc/haproxy/errors/408.http
errorfile 500 /etc/haproxy/errors/500.http
errorfile 502 /etc/haproxy/errors/502.http
errorfile 503 /etc/haproxy/errors/503.http
errorfile 504 /etc/haproxy/errors/504.http
frontend esNodes
bind *:8091
mode http
default_backend srNodes
backend srNodes
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server web01 0.0.0.0:10903/project/es check
EDIT: I also tried changing the backend node URL to my Docker host IP, but still no luck.
Thanks to Michael's comment, I was able to solve the problem.
First I removed the haproxy command from the Dockerfile and then ran haproxy manually inside the container.
Voila! My config file was not a file; it was a directory. LOL
The problem was the -v option in my docker command.
I changed it to use the full path:
-v FULL_PATH/customhaproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
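For what it's worth, this is why it happens: when the source given to -v is not an absolute path, docker run treats it as a named volume and mounts an empty directory at the target, which is exactly the "config file is a directory" symptom. A quick check (a sketch; it assumes the image keeps the official entrypoint):
ls -ld "$(pwd)/customhaproxy.cfg"   # must be a regular file ('-'), not a directory ('d')
# -c only validates the mounted config, without starting the proxy
docker run --rm -v "$(pwd)/customhaproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" my_own_haproxy:latest haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg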
You will want to remove the daemon keyword from your haproxy.cfg: Docker needs a foreground process to keep running, otherwise the container will exit immediately.
I think the error message you are seeing appears because the container exits before HAProxy has bound to any ports.
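To confirm that is what happened, check whether the container exited right away and read its log (a sketch using the container name from the question):
docker ps -a --filter name=mc-ha --format '{{.Names}}: {{.Status}}'
docker logs mc-ha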
I actually had to restart docker-machine (on OS X); otherwise, every time I ran the container with the volume mount option (I tried both absolute and relative paths), it mounted haproxy.cfg as a directory.
ls -l /etc/ssl/certs
ls -l /etc/ssl/private
chmod -R 400 /etc/ssl/private
Perhaps also change the permissions on the certs, but I'm not sure. Starting HAProxy with globally readable SSL keys is a very bad security practice, so it refuses to start completely.
It seems as if many developers trying to move from non-scaled apps (like the diy cartridge) to scaled versions of their apps are having trouble configuring their cartridges to interact properly with the default HAProxy configuration created by OpenShift, and getting their start and stop action hooks to deal with the scaling portions of their app; myself included. Most often this is because we're new and don't quite understand what the default configuration of OpenShift's HAProxy does.
HAProxy's default configuration
#---------------------------------------------------------------------
# Example configuration for a possible web application. See the
# full configuration options online.
#
# http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
# to have these messages end up in /var/log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the '-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
#log 127.0.0.1 local2
maxconn 256
# turn on stats unix socket
stats socket /var/lib/openshift/{app's ssh username}/haproxy//run/stats level admin
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
#option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 128
listen stats 127.2.31.131:8080
mode http
stats enable
stats uri /
listen express 127.2.31.130:8080
cookie GEAR insert indirect nocache
option httpchk GET /
balance leastconn
server local-gear 127.2.31.129:8080 check fall 2 rise 3 inter 2000 cookie local-{app's ssh username}
Often it seems like both sides of the application are up and running, but HAProxy isn't sending HTTP requests where we'd expect. From numerous questions asked about OpenShift, we know that this line:
option httpchk GET /
is HAProxy's sanity check to make sure the app is working, but oftentimes, whether that line is edited or removed, we'll still get something like this in HAProxy's logs:
[WARNING] 240/150442 (404099) : Server express/local-gear is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 240/150442 (404099) : proxy 'express' has no server available!
Yet inside the gears we often have our apps listening on $OPENSHIFT_CARTNAME_IP and $OPENSHIFT_CARTNAME_PORT, we can see they've started, and sometimes they are even rejecting the sanity check:
ERROR [DST 127.2.31.129 sid=1] SHOUTcast 1 client connection rejected. Stream not available as there is no source connected. Agent: `'
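One way to see what the httpchk actually gets back is to run the same request by hand from an SSH session on the gear (a sketch; CARTNAME stands in for your cartridge's real variable names):
curl -sS -o /dev/null -w '%{http_code}\n' "http://${OPENSHIFT_CARTNAME_IP}:${OPENSHIFT_CARTNAME_PORT}/"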
A cut-and-dried manifest, like the one from the diy cartridge:
Name: hws
Cartridge-Short-Name: HWS
Display-Name: Hello World of Scaling Apps
Description: A Scaling App on Openshift
Version: '0.1'
License: ASL 2.0
License-Url: http://www.apache.org/licenses/LICENSE-2.0.txt
Cartridge-Version: 0.0.10
Compatible-Versions:
- 0.0.10
Cartridge-Vendor: you
Vendor: you
Categories:
- web_framework
- experimental
Website:
Help-Topics:
Getting Started: urltosomeinfo
Provides:
- hws-0.1
- hws
Publishes:
Subscribes:
set-env:
Type: ENV:*
Required: false
Scaling:
Min: 1
Max: -1
Group-Overrides:
- components:
- web-proxy
Endpoints:
- Private-IP-Name: IP
Private-Port-Name: PORT
Private-Port: 8080
Public-Port-Name: PROXY_PORT
Protocols:
- http
- ws
Options:
primary: true
Mappings:
- Frontend: ''
Backend: ''
Options:
websocket: true
- Frontend: "/health"
Backend: ''
Options:
health: true
Start Hook (inside bin/control or in .openshift/action_hooks)
RESPONSE=$(curl -o /dev/null --silent --head --write-out '%{http_code}\n' "http://${OPENSHIFT_APP_DNS}:80")
echo ${RESPONSE} > ${OPENSHIFT_DIY_LOG_DIR}/checkserver.log
echo ${RESPONSE}
if [ "${RESPONSE}" -eq "503" ]
then
nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/startfromscratch.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
else
nohup ${OPENSHIFT_REPO_DIR}/diy/serverexec ${OPENSHIFT_REPO_DIR}/diy/secondorfollowinggear.conf > ${OPENSHIFT_DIY_LOG_DIR}/server.log 2>&1 &
fi
Stop Hook (inside bin/control or in .openshift/action_hooks)
kill `ps -ef | grep serverexec | grep -v grep | awk '{ print $2 }'` > /dev/null 2>&1
exit 0
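A slightly more defensive version of the same stop hook, assuming pkill is available on the gear (a sketch, not the cartridge's required form):
# stop every serverexec process owned by this gear's user; ignore "no process found"
pkill -u "$(id -un)" -f serverexec || true
exit 0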
The helpful questions for new developers:
Avoiding a killer sanity check
Is there a way of configuring the app via manifest.yml to avoid these collisions? Or, vice versa, is there a small tweak to the default HAProxy configuration so that the app will run on appname-appdomain.rhcloud.com:80/ without returning 503 errors?
Setting up more convenient access to the app
My SHOUTcast example, as hinted at by the error, works as long as I'm streaming to it first. What additional parts of the manifest and the HAProxy config would let a user connect directly (from an external URL) to the first gear's port 80, as opposed to port-forwarding into the app all the time?
Making sure the app starts and stops as if it weren't scaled
Lastly, many non-scaled applications have a quick and easy script to start up and shut down, because OpenShift seems to account for the fact that the app only has the first gear running. How would a stop action hook be adjusted to run through and stop all the gears? What would have to be added to the start action hook to get the first gear back online with all of its components (not just HAProxy)?
I have a problem reloading HAProxy using this command:
haproxy -D -f gateway.cfg -p /var/run/haproxy.pid -D -sf $(cat /var/run/haproxy.pid)
The error result
[ALERT] 169/001728 (3844) : Starting frontend proxy: cannot bind socket
I have tried adding user root or Administrator in the config, but to no avail. The file permissions according to ls -la are Administrator none. This makes me think HAProxy does not completely support Windows, and it makes me wonder how the -sf/-st options work. (I tried on a Unix system and it works correctly.) The HAProxy config is shown below:
global
daemon
maxconn 1024
pidfile /var/run/haproxy.pid
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option redispatch
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend proxy
bind *:80
default_backend servers
backend servers
balance roundrobin
option httpchk GET /
option forwardfor
option httpclose
stats enable
stats refresh 10s
stats hide-version
stats uri /admin?stats
stats auth admin:admin
stats realm Haproxy\ Statistics
server svr0 127.0.0.1 check inter 5000
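For reference on the -sf question: on a Unix system the reload sequence works like this (a sketch with conventional paths); the new process binds its listeners first, then tells the old PIDs to finish their existing connections and exit:
haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid
haproxy -D -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)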
HAProxy generally does not support Windows, even under Cygwin. HAProxy contains very specific optimisations for Linux and a variety of UNIX systems which make it very hard to be able to run it on Windows.
Even if you somehow made it run, you would get abysmal performance and would never have a stable or even moderately fast system. It just doesn't make sense to run HAProxy on Windows and deal with various emulation layers when you get great performance even out of a sub-1-watt ARM box running Linux.
You can run most HAProxy versions under Windows. Here is 1.4.24 compiled using Cygwin:
http://www.mediafire.com/download/7l4yg7fa5w185bo/haproxy.zip
You can use it for testing purposes, but you should avoid using it in production; it is only meant to let you develop under Windows with an easy transfer to Linux, for example.