I'm trying to get ElasticSearch running with Laradock. ES looks to be supported out of the box with Laradock.
Here's my docker command (run from <project root>/laradock/):
docker-compose up -d nginx postgres redis beanstalkd elasticsearch
However, if I run docker ps, the elasticsearch container isn't running.
Both ports 9200 and 9300 are not consumed:
lsof -i :9200
Not sure why the elasticsearch container doesn't persist; it seems to just close itself.
Output of docker ps -a after running docker-compose up ...
http://pastebin.com/raw/ymfvLPLT
Condensed version:
IMAGE STATUS PORTS
laradock_nginx Up 36 seconds 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp
laradock_elasticsearch Exited (137) 34 seconds ago
laradock_beanstalkd Up 37 seconds 0.0.0.0:11300->11300/tcp
laradock_php-fpm Up 38 seconds 9000/tcp
laradock_workspace Up 39 seconds 0.0.0.0:2222->22/tcp
tianon/true Exited (0) 41 seconds ago
laradock_postgres Up 41 seconds 0.0.0.0:5432->5432/tcp
laradock_redis Up 40 seconds 0.0.0.0:6379->6379/tcp
Output of docker events after running docker-compose up ...
http://pastebin.com/cE9bjs6i
Check the logs first:
docker logs laradock_elasticsearch_1
(or another name of elasticsearch container)
In my case it was
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
I found the solution here;
namely, I ran the following on my Ubuntu machine:
sudo sysctl -w vm.max_map_count=262144
I don't think the problem is related to Laradock, since Elasticsearch is supposed to run on its own. I would first check the memory:
open Docker Dashboard -> Settings -> Resources -> Advanced and increase the memory.
Check your machine's memory; Elasticsearch won't run if there is not enough memory on your machine.
or:
open your docker-compose.yml file
increase mem_limit (e.g. mem_limit: 1g), then rebuild:
docker-compose up -d --build elasticsearch
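For reference, a minimal sketch of roughly where mem_limit sits in laradock's elasticsearch service (the build path and ports follow laradock's defaults, but your file may differ):
elasticsearch:
  build: ./elasticsearch
  mem_limit: 1g          # raise this if the container exits with code 137
  ports:
    - "9200:9200"
    - "9300:9300"
Exit status 137 (as in the docker ps -a output above) typically means the container was killed for using too much memory.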
If it's still not working, remove all the images, update Laradock to the latest version, and set it up fresh.
Related
I have two containers that were built with the command docker-compose up --build -d.
All containers build normally and stay up, but when I leave the machine, they only stay up for about two hours before dropping again.
These containers run an API written in the PHP Laravel framework and an nginx reverse proxy.
docker ps shows the container was created 46 hours ago but is only "Up 2 seconds".
When I start the application and leave the machine where Docker is installed, it keeps running for at most two hours. If I then access the machine via SSH and open the application, it is running again without me needing to run docker-compose up.
What do I have to do to keep these containers up, as in a production environment?
There is a flag that can help you when a container goes down or stops:
sudo docker run --restart unless-stopped --name <name you want to use> <name of your image>
(don't include the <> in your command)
After doing this, any time that container goes down, Docker will restart it for you automatically.
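Note that docker run creates a new container from an image; if your container already exists, you can apply the same policy in place with docker update:
sudo docker update --restart unless-stopped <name of your container>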
I think this trick is really useful when you have multiple containers running, and helpful when you want to update the server packages too.
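Since the containers in the question are managed with docker-compose, the same policy can also be declared per service in docker-compose.yml; a minimal sketch with illustrative service names:
version: "3"
services:
  api:
    build: ./api              # hypothetical build context for the Laravel API
    restart: unless-stopped   # same policy, applied declaratively
  nginx:
    image: nginx
    restart: unless-stopped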
Very new to Docker, following this tutorial: https://medium.com/thecodefountain/develop-a-spring-boot-and-mysql-application-and-run-in-docker-end-to-end-15b7cdf3a2ba
I followed all the instructions (my application is called accessing-data-mysql), but I think I should have two containers running: one for mysql and one for the application. When running docker container ls, I only see the mysql container listed. Below I am creating a docker container from my application's image and linking it to the running mysql container.
PS C:\projects\project1> docker run -d -p 8089:8089 --name accessing-data-mysql --link mysql-standalone:mysql accessing-data-mysql
82f499c6897d1f6bd2eeaabe4aa25ae786508146929a7039785e4ca37d691435
PS C:\projects\project1> docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62029a53b9d4 mysql "docker-entrypoint.s…" 16 minutes ago Up 16 minutes 3306/tcp, 33060/tcp mysql-standalone
My Dockerfile:
FROM openjdk:12
ADD target/user-mysql.jar user-mysql.jar
EXPOSE 8089
ENTRYPOINT ["java", "-jar", "user-mysql.jar"]
When connecting via browser to localhost:8089, I get a connection refused error. I'm not even sure if the service is running.
Below is the result of running docker ps -a and docker logs:
PS C:\projects\project1> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
82f499c6897d accessing-data-mysql "java -jar accessing…" 50 minutes ago Exited (0) 50 minutes ago accessing-data-mysql
62029a53b9d4 mysql "docker-entrypoint.s…" About an hour ago Up About an hour 3306/tcp, 33060/tcp mysql-standalone
PS C:\projects\project1> docker logs accessing-data-mysql
Hibernate ORM core version 5.4.27.Final
PS C:\projects\project1>
EDIT:
When I run locally, directly from IDEA, I see the error: No such host is known (mysql-standalone), which is the MySQL host I configured to connect to the dockerized MySQL. As soon as I change the MySQL URL to localhost:3306, it can connect. Does this mean that somehow the MySQL docker instance is not accepting connections?
Your container is not up and running; you need to see it as Up when you do docker ps. You should see something like this:
~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
62029a53b9d4 mysql "docker-entrypoint.s…" 16 minutes ago Up 16 minutes 3306/tcp, 33060/tcp mysql-standalone
XXXXXXXXXXXX openjdk "java" XX minutes ago Up xx minutes 8089/tcp accessing-data-mysql
If you do not see the second container, it is not running. Find out why by checking its logs with docker logs accessing-data-mysql.
Also, consider creating a docker-compose file for both containers and establishing a separate network. This is not required (your example can work without it), but it makes things much easier to manage and troubleshoot.
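A minimal sketch of such a docker-compose.yml, assuming the image names from the question (the database credential is illustrative):
version: "3"
services:
  mysql-standalone:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: secret   # illustrative; use the credentials your app expects
  accessing-data-mysql:
    image: accessing-data-mysql
    ports:
      - "8089:8089"
    depends_on:
      - mysql-standalone
On the compose network, the app reaches the database at the hostname mysql-standalone. That also explains your EDIT: that hostname only resolves inside Docker's network, not from your host or from IDEA, where you have to go through localhost and a published port.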
I'm trying to set up a Kubernetes cluster with multiple masters and an external etcd cluster. I followed these steps as described on kubernetes.io. I was able to create the static manifest pod files on all 3 hosts in the /etc/kubernetes/manifests folder after executing Step 7.
After that, when I executed the command sudo kubeadm init, the initialization failed because of kubelet errors. I also verified the journalctl logs; the error points to a misconfiguration of the cgroup driver, similar to this SO link.
I tried what was suggested in the SO link above but was not able to resolve it.
Please help me in resolving this issue.
For the installation of Docker, kubeadm, kubectl, and kubelet, I followed the kubernetes.io site only.
Environment:
Cloud: AWS
EC2 instance OS: Ubuntu 18.04
Docker version: 18.09.7
Thanks
After searching a few links and doing a few trials, I was able to resolve this issue.
As given in the container runtime setup, the Docker cgroup driver is systemd, but the default cgroup driver of the kubelet is cgroupfs. Since the kubelet cannot identify the cgroup driver automatically (as noted in the kubernetes.io docs), we have to provide the cgroup driver explicitly when running the kubelet, like below:
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --cgroup-driver=systemd --address=127.0.0.1 --pod-manifest-path=/etc/kubernetes/manifests
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet
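To confirm which cgroup driver Docker itself is using (so that the kubelet flag matches it), you can check docker info first:
docker info | grep -i "cgroup driver"
# prints e.g.: Cgroup Driver: systemd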
Moreover, there is no need to run sudo kubeadm init: since we are providing --pod-manifest-path to the kubelet, it runs etcd as a static Pod.
For debugging, the kubelet logs can be checked using the command below:
journalctl -u kubelet -r
Hope it helps. Thanks.
I'm starting out using Docker on macOS, and get stuck when trying to complete part 4 of the Get Started guide. I have created two extra virtual machines (myvm1 and myvm2), set myvm1 as swarm manager, and myvm2 as a worker.
I have then deployed a stack with 5 Flask web servers using the docker-compose.yml from part 3 of the tutorial. The processes seem to start fine, and are distributed between the two machines, but I am not able to reach them from the host using a browser.
How should I configure the port forwarding/network to be able to reach the web servers in the swarm from the host of the virtual machines running the docker container?
The following is a list of commands that I have run, some with resulting output.
$ docker-machine create --driver virtualbox myvm1
$ docker-machine create --driver virtualbox myvm2
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 - virtualbox Running tcp://192.168.99.100:2376 v18.09.0
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.09.0
$ docker-machine ssh myvm1 "docker swarm init --advertise-addr 192.168.99.100"
$ docker-machine ssh myvm2 "docker swarm join --token <my-token-inserted-here> 192.168.99.100:2377"
$ eval $(docker-machine env myvm1)
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
myvm1 * virtualbox Running tcp://192.168.99.100:2376 v18.09.0
myvm2 - virtualbox Running tcp://192.168.99.101:2376 v18.09.0
$ docker stack deploy -c docker-compose.yml getstartedlab
$ docker stack ps getstartedlab
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
it9asz4zpdmi getstartedlab_web.1 mochr/test_repo:friendly_hello myvm2 Running Preparing 18 seconds ago
645gvtnde7zz getstartedlab_web.2 mochr/test_repo:friendly_hello myvm1 Running Preparing 18 seconds ago
fpq6cvcf3e0e getstartedlab_web.3 mochr/test_repo:friendly_hello myvm2 Running Preparing 18 seconds ago
plkpximnpobf getstartedlab_web.4 mochr/test_repo:friendly_hello myvm1 Running Preparing 18 seconds ago
gr2p8a0asatb getstartedlab_web.5 mochr/test_repo:friendly_hello myvm2 Running Preparing 18 seconds ago
The docker-compose.yml:
version: "3"
services:
web:
image: mochr/test_repo:friendly_hello
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4000:80"
networks:
- webnet
networks:
webnet:
It looks like this is a known problem with the current version of boot2docker: https://github.com/docker/machine/issues/4608
The workaround is either to use a swarm based on machines that do not require boot2docker (e.g. AWS, DigitalOcean, etc.), wait until a newer version of boot2docker is released, or use an earlier version of boot2docker, as described in that link. To use an earlier version:
export VIRTUALBOX_BOOT2DOCKER_URL=https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
before creating your virtual machines with docker-machine. (Remove your existing virtual machines first, then set that export, then run docker-machine create --driver virtualbox myvm1.)
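Putting those steps together, the sequence looks roughly like this (VM names taken from the question):
docker-machine rm -f myvm1 myvm2
export VIRTUALBOX_BOOT2DOCKER_URL=https://github.com/boot2docker/boot2docker/releases/download/v18.06.1-ce/boot2docker.iso
docker-machine create --driver virtualbox myvm1
docker-machine create --driver virtualbox myvm2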
Then, you should be able to bring up your stack and access your containers at either 192.168.99.100:4000 or 192.168.99.101:4000 (or whatever IP addresses are revealed by docker-machine ls)
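A quick check from the host once the stack is up (the IP comes from the docker-machine ls output above):
curl http://192.168.99.100:4000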
I'm trying to use the elk-docker image (https://elk-docker.readthedocs.io/) with Docker Compose. The .yml file looks like this:
elk:
  image: sebp/elk
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
When I run the command: sudo docker-compose up, the console shows:
* Starting Elasticsearch Server
sysctl: setting key "vm.max_map_count": Read-only file system
...fail!
waiting for Elasticsearch to be up (1/30)
waiting for Elasticsearch to be up (2/30)
...
waiting for Elasticsearch to be up (30/30)
Couln't start Elasticsearch. Exiting.
Elasticsearch log follows below.
cat: /var/log/elasticsearch/elasticsearch.log: No such file or directory
docker_elk_1 exited with code 1
How can I resolve the problem?
You can do that in two ways.
Temporarily set max_map_count:
sudo sysctl -w vm.max_map_count=262144
but this will only last until you restart your system.
Permanently:
on your host machine, edit /etc/sysctl.conf:
vi /etc/sysctl.conf
add the entry
vm.max_map_count=262144
and restart.
You probably need to set vm.max_map_count in /etc/sysctl.conf on the host itself, so that Elasticsearch does not attempt to do that from inside the container. If you don't know the desired value, try doubling the current setting and keep going until Elasticsearch starts successfully. Documentation recommends at least 262144.
In the Docker host terminal (Docker CLI), run these commands:
docker-machine ssh
sudo sysctl -w vm.max_map_count=262144
exit
Go inside your docker container:
# docker exec -it <container_id> /bin/bash
You can view your current max_map_count value:
# more /proc/sys/vm/max_map_count
Temporarily set the max_map_count value (a container/host restart will not persist it):
# sysctl -w vm.max_map_count=262144
Permanently set the value:
1. # vi /etc/sysctl.conf
vm.max_map_count=262144
2. # vi /etc/rc.local
# put the parameter inside the rc.local file
echo 262144 > /proc/sys/vm/max_map_count
You have to set the vm.max_map_count variable in /etc/sysctl.conf to at least 262144.
After that, you can reload the settings with sysctl --system.
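For example, to apply and verify the change on the host:
sudo sysctl --system        # reload all sysctl configuration files
sysctl vm.max_map_count     # should now print: vm.max_map_count = 262144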