Couldn't connect to Docker daemon on Mac OS X - macos

I would like to run a multi-container application using docker-compose on Mac OS X El Capitan (v10.11.2).
However, the $ docker-compose up command complains that it can't connect to the Docker daemon:
ERROR: Couldn't connect to Docker daemon - you might need to run docker-machine start default.
Only after executing $ eval "$(docker-machine env default)" does the docker-compose command work.
Why is this and how can I overcome this extra step?

Update for Docker versions that come with Docker.app
The Docker experience on macOS has improved since this answer was posted:
The only prerequisite is now for Docker.app to be running. Note that starting it on demand takes a while, because the underlying Linux VM must be started.
Any shell then has access to Docker functionality.
By default, Docker.app is launched at login time (you can change that via its preferences).
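Once Docker.app is running, any shell can verify that the daemon is reachable; a minimal sanity check (assuming the docker CLI is on your PATH) is:
# Prints both client and server details; if the server section is missing,
# the daemon is not (yet) reachable.
docker version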
If you instead prefer starting and stopping Docker on demand from the command line, here are bash scripts that do that, docker-start and docker-stop; place them anywhere in your $PATH.
When docker-start launches Docker.app, it waits until Docker has finished starting up and is ready.
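As a minimal installation sketch (assuming ~/bin is a directory on your $PATH and the two scripts are in the current directory):
# Copy both scripts to a directory on your PATH and make them executable.
install -m 755 docker-start docker-stop ~/bin/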
docker-start:
#!/usr/bin/env bash

case $1 in
  -h|--help)
    echo $'usage: docker-start\n\nStarts Docker (Docker.app) on macOS and waits until the Docker environment is initialized.'
    exit 0
    ;;
esac

(( $# )) && { echo "ARGUMENT ERROR: Unexpected argument(s) specified. Use -h for help." >&2; exit 2; }
[[ $(uname) == 'Darwin' ]] || { echo "This function only runs on macOS." >&2; exit 2; }

echo "-- Starting Docker.app, if necessary..."
open -g -a Docker.app || exit

# Wait for the server to start up, if applicable.
i=0
while ! docker system info &>/dev/null; do
  (( i++ == 0 )) && printf %s '-- Waiting for Docker to finish starting up...' || printf '.'
  sleep 1
done
(( i )) && printf '\n'

echo "-- Docker is ready."
docker-stop:
#!/usr/bin/env bash

case $1 in
  -h|--help)
    echo $'usage: docker-stop\n\nStops Docker (Docker.app) on macOS.'
    exit 0
    ;;
esac

(( $# )) && { echo "ARGUMENT ERROR: Unexpected argument(s) specified. Use -h for help." >&2; exit 2; }
[[ $(uname) == 'Darwin' ]] || { echo "This function only runs on macOS." >&2; exit 2; }

echo "-- Quitting Docker.app, if running..."
osascript - <<'EOF' || exit
  tell application "Docker"
    if it is running then quit it
  end tell
EOF

echo "-- Docker is stopped."
echo "Caveat: Restarting it too quickly can cause errors."
Original, obsolete answer:
Kevan Ahlquist's helpful answer shows what commands to add to your Bash profile (~/.bash_profile) to automatically initialize Docker on opening an interactive shell.
Note that you can always initialize Docker in a new shell tab/window by opening the application /Applications/Docker/Docker Quickstart Terminal.app (e.g., via Spotlight).
From an existing shell, you can invoke it as open -a 'Docker Quickstart Terminal.app' (which also opens a new shell tab).
What this answer offers is a convenient way to start Docker in the current shell.
Adding the Bash shell functions below - docker-start and docker-stop - improves on Kevan's approach in the following respects:
You can run docker-start on demand, without the overhead of starting the VM on opening the shell (once the Docker VM is running, initialization is much faster, but still takes a noticeable amount of time).
(Of course, you can still opt to invoke docker-start right from your profile.)
docker-stop allows stopping Docker and cleaning up the environment variables on demand.
The functions ensure that Docker's error messages are not suppressed, and they pass Docker error exit codes through.
Additional status information is provided.
You may pass a VM name as a parameter; default is default.
Example:
$ docker-start
-- Starting Docker VM 'default' (`docker-machine start default`; this will take a while)...
Starting "default"...
(default) Check network to re-create if needed...
(default) Waiting for an IP...
Machine "default" was started.
Waiting for SSH to be available...
Detecting the provisioner...
Started machines may have new IP addresses. You may need to re-run the `docker-machine env` command.
-- Setting DOCKER_* environment variables (`eval "$(docker-machine env default)"`)...
DOCKER_CERT_PATH="/Users/jdoe/.docker/machine/machines/default"
DOCKER_HOST="tcp://192.168.99.100:2376"
DOCKER_MACHINE_NAME="default"
DOCKER_TLS_VERIFY="1"
-- Docker VM 'default' is running.
$ docker-stop
-- Stopping Docker VM 'default' (`docker-machine stop default`)...
Stopping "default"...
Machine "default" was stopped.
-- Unsetting DOCKER_* environment variables (DOCKER_CERT_PATH, DOCKER_HOST, DOCKER_MACHINE_NAME, DOCKER_TLS_VERIFY)...
-- Docker VM 'default' is stopped.
Shell functions for on-demand starting and stopping of Docker (place them in, e.g., ~/.bash_profile for global availability in your interactive shells).
Note: The functions work in bash, ksh, and zsh, but in ksh you have to rename them so as not to include a '-' in the function names.
function docker-start {
  typeset vm=${1:-default} sts

  case $vm in
    -h|--help)
      echo $'usage: docker-start [<vm>]\n\nEnsures that the specified/default Docker VM is started\nand the environment is initialized.'
      return 0
      ;;
  esac

  sts=$(docker-machine status "$vm") || return
  [[ $sts == 'Running' ]] && echo "(Docker VM '$vm' is already running.)" || {
    echo "-- Starting Docker VM '$vm' (\`docker-machine start "$vm"\`; this will take a while)...";
    docker-machine start "$vm" || return
  }

  echo "-- Setting DOCKER_* environment variables (\`eval \"\$(docker-machine env "$vm")\"\`)..."
  # Note: If the machine hasn't fully finished starting up yet from a
  # previously launched-but-not-waited-for-completion `docker-machine status`,
  # the following may output error messages; alas, without signaling failure
  # via the exit code. Simply rerun this function to retry.
  eval "$(docker-machine env "$vm")" || return
  export | grep -o 'DOCKER_.*'

  echo "-- Docker VM '$vm' is running."
}

function docker-stop {
  typeset vm=${1:-default} sts envVarNames fndx

  case $vm in
    -h|--help)
      echo $'usage: docker-stop [<vm>]\n\nEnsures that the specified/default Docker VM is stopped\nand the environment is cleaned up.'
      return 0
      ;;
  esac

  sts=$(docker-machine status "$vm") || return
  [[ $sts == 'Running' ]] && {
    echo "-- Stopping Docker VM '$vm' (\`docker-machine stop "$vm"\`)...";
    docker-machine stop "$vm" || return
  } || echo "(Docker VM '$vm' is not running.)"

  [[ -n $BASH_VERSION ]] && fndx=3 || fndx=1 # Bash prefixes defs. with 'declare -x '
  envVarNames=( $(export | awk -v fndx="$fndx" '$fndx ~ /^DOCKER_/ { sub(/=.*/,"", $fndx); print $fndx }') )
  if [[ -n $envVarNames ]]; then
    echo "-- Unsetting DOCKER_* environment variables ($(echo "${envVarNames[@]}" | sed 's/ /, /g'))..."
    unset "${envVarNames[@]}"
  else
    echo "(No DOCKER_* environment variables to unset.)"
  fi

  echo "-- Docker VM '$vm' is stopped."
}

I have the following in my ~/.bash_profile so I don't have to run the env command every time:
docker-machine start default 2>/dev/null # Hide output if machine is already running
eval `docker-machine env default`
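A slightly more defensive sketch of the same idea (still assuming the default machine name) only starts the VM when it isn't already reported as running:
# Start the VM only if docker-machine doesn't already report it as Running.
if [ "$(docker-machine status default 2>/dev/null)" != "Running" ]; then
  docker-machine start default
fi
eval "$(docker-machine env default)"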

In the Quickstart Terminal, restarting the "default" machine solved my problem:
docker-machine restart default
eval $(docker-machine env default)
Then I was able to bring up my containers with docker-compose up -d --build.

In my case, stopping and removing all Docker containers helped (Docker version 1.13.0-rc4):
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
After this "docker-compose up" run without Error "ERROR: Couldn't connect to Docker daemon. You might need to start Docker for Mac."
Perhaps in some cases this Error-message is only caused by another errors, i.e. memory space problems.
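If disk space turns out to be the underlying problem, one hedged follow-up (available since Docker 1.13) is to let Docker reclaim unused data:
# Removes stopped containers, dangling images, and unused networks;
# review the confirmation prompt before answering.
docker system prune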

I've written a Homebrew tap, here: https://github.com/nejckorasa/mac-docker-go
It includes a script to start/restart Docker daemon.
Usage: dckr [options]
Options:
-k | --kill Kill Docker daemon
-s | --start Start Docker daemon
-r | --restart Restart Docker daemon
-ka | --killall Kill all running Docker containers
-h Display help
Defaults to restart if no options are present

Related

Linux script run with run-this-one doesn't work with docker

I'm experiencing an issue in which I run a command in a cronjob and want to make sure that it's not already being executed. I achieve that by running it as run-one [command] (man-page).
If I want to cancel the already running command and force the new command to run, I run as run-this-one [command].
At least that is what I expected, but if the command runs a docker container, the already-running process only appears to be terminated (it isn't): its terminal shows Terminated, yet it keeps printing the output of the command still running inside the container (although the commands after the container finishes are not executed). In this case, the command launched with run-this-one is not executed either (not expected).
Example:
/path/to/file.sh
#!/bin/bash
set -eou pipefail
echo "sleep started..." >&2
docker run --rm alpine /bin/sh -c 'echo "sleep started inside..." && sleep 5 && echo "sleep ended inside..."'
echo "sleep ended..." >&2
If I run sudo run-one /path/to/file.sh in a terminal window, and then run sudo run-one /path/to/file.sh in another terminal (before the previous command finishes), this second command is not executed, as expected, and the first command ends successfully.
Terminal1:
user@host:/path$ sudo run-one /path/to/file.sh
sleep started...
sleep started inside...
sleep ended inside...
sleep ended...
user@host:/path$
Terminal2:
user@host:/path$ sudo run-one /path/to/file.sh
user@host:/path$
But if I run sudo run-one /path/to/file.sh in a terminal window and then, in another terminal (before the previous command finishes), run sudo run-this-one /path/to/file.sh, this second command is not executed, which is not expected. The first command shows Terminated in its terminal, followed by the user@host:/path$ prompt, but the output of the command still running inside the container (the one created from the 1st terminal) keeps being shown.
Terminal1:
user@host:/path$ sudo run-one /path/to/file.sh
sleep started...
sleep started inside...
Terminated
user@host:/path$ sleep ended inside...
# terminal doesn't show new input from the keyboard, but I can run commands after
Terminal2:
user@host:/path$ sudo run-this-one /path/to/file.sh
user@host:/path$
It works if the file is changed to:
/path/to/file.sh
#!/bin/bash
set -eou pipefail
echo "sleep started..." >&2
sleep 5
echo "sleep ended..." >&2
The script file above, with docker, is just an example; in my case the command is different, but the problem is the same, and it occurs whether the container is run with or without -it.
Does anyone know why this is happening? Is there a (not overly complex and not too hackish) solution to this problem? I executed the above commands on Ubuntu 20.04 inside a VirtualBox machine (managed with Vagrant).
Update (2021-07-15)
Based on @ErikMD's comment and @DannyB's answer, I added a trap and a cleanup function to remove the container, as can be seen in the script below:
/path/to/test
#!/bin/bash
set -eou pipefail

trap 'echo "[error] ${BASH_SOURCE[0]}:$LINENO" >&2; exit 3;' ERR

RED='\033[0;31m'
NC='\033[0m' # No Color

function error {
  msg="$(date '+%F %T') - ${BASH_SOURCE[0]}:${BASH_LINENO[0]}: ${*}"
  >&2 echo -e "${RED}${msg}${NC}"
  exit 2
}

file="${BASH_SOURCE[0]}"
command="${1:-}"

if [ -z "$command" ]; then
  error "[error] no command entered"
fi

shift;

case "$command" in
  "cmd1")
    function cleanup {
      echo "cleaning $command..."
      sudo docker rm --force "test-container"
    }
    trap 'cleanup; exit 4;' ERR
    args=( "$file" "cmd:unique" )
    echo "$command: run-one ${args[*]}" >&2
    run-one "${args[@]}"
    ;;
  "cmd2")
    function cleanup {
      echo "cleaning $command..."
      sudo docker rm --force "test-container"
    }
    trap 'cleanup; exit 4;' ERR
    args=( "$file" "cmd:unique" )
    echo "$command: run-this-one ${args[*]}" >&2
    run-this-one "${args[@]}"
    ;;
  "cmd:unique")
    "$file" "cmd:container"
    ;;
  "cmd:container")
    echo "sleep started..." >&2
    sudo docker run --rm --name "test-container" alpine \
      /bin/sh -c 'echo "sleep started inside..." && sleep 5 && echo "sleep ended inside..."'
    echo "sleep ended..." >&2
    ;;
  *)
    echo -e "${RED}[error] invalid command: $command${NC}"
    exit 1
    ;;
esac
If I run /path/to/test cmd1 (run-one) and /path/to/test cmd2 (run-this-one) in another terminal, it works as expected (the cmd1 process is stopped and removes the container, and the cmd2 process runs successfully).
If I run /path/to/test cmd2 in 2 terminals, it also works as expected (the 1st cmd2 process is stopped and removes the container, and the 2nd cmd2 process runs successfully).
Not so good: in the 2 cases above, the 2nd process sometimes fails with an error before the 1st one has removed the container (this happens intermittently, probably due to a race condition).
And it gets worse: if I run /path/to/test cmd1 in 2 terminals, both commands fail, although the 1st cmd1 should run successfully (it fails because the 2nd cmd1 removes the container in the cleanup).
I tried to put the cleanup in the cmd:unique command instead (removing it from the other 2 places), so that it would be called only by the single running process and the problem above would be avoided, but oddly the cleanup is not called there, even though the trap is also defined there.
Just to simplify your question, I would use this command to reproduce the problem:
run-one docker run --rm -it alpine sleep 10
As can be seen, with either run-one or run-this-one, the behavior is definitely not the desired one.
Since the command creates a process managed by docker, I suspect that the run-one set of tools is not the right tool for the job, since docker containers should not be killed with pkill, but rather with docker kill.
One relatively easy solution is to embrace the way docker wants you to kill containers, and create your own short run-one style scripts that handle docker properly.
run-one-docker.sh
#!/usr/bin/env bash

if [[ "$#" -lt 2 ]]; then
  echo "Usage: ./run-one-docker.sh NAME COMMAND"
  echo "Example: ./run-one-docker.sh temp alpine sleep 10"
  exit 1
fi

name="$1"
command=("${@:2}")

container_is_running() {
  [ "$( docker container inspect -f '{{.State.Running}}' "$1" 2> /dev/null)" == "true" ]
}

if container_is_running "$name"; then
  echo "$name is already running, aborting"
  exit 1
else
  docker run --rm -it --name "$name" "${command[@]}"
fi
run-this-one-docker.sh
#!/usr/bin/env bash

if [[ "$#" -lt 2 ]]; then
  echo "Usage: ./run-this-one-docker.sh NAME COMMAND"
  echo "Example: ./run-this-one-docker.sh temp alpine sleep 10"
  exit 1
fi

name="$1"
command=("${@:2}")

container_is_running() {
  [ "$( docker container inspect -f '{{.State.Running}}' "$1" 2> /dev/null)" == "true" ]
}

if container_is_running "$name"; then
  echo "killing old $name"
  docker kill "$name" > /dev/null
fi

docker run --rm -it --name "$name" "${command[@]}"
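Hypothetical usage, mirroring the examples printed by the scripts' own usage messages:
# Refuses to start while a container named "temp" is already running:
./run-one-docker.sh temp alpine sleep 10
# Kills a running "temp" container first, then starts a fresh one:
./run-this-one-docker.sh temp alpine sleep 10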

SSH not exiting properly inside if statement in bash heredoc

So I am running this script to check whether a java server is up on a remote host by sshing into it. If it is down, I am trying to exit and run another script locally. However, after the exit command, it is still on the remote host.
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
echo "java server stopped running"
# want to exit ssh
exit
# after here when i check it is still in ssh
# I want to run another script locally in the same directory as the current script
./other_script.sh
else
echo "java server up"
fi;
EOF
The exit terminates the ssh session, so execution never reaches the other_script.sh line in the HEREDOC. Besides, everything inside the HEREDOC runs on the remote host, so that line could not run the script locally anyway. It is better to move this step outside the HEREDOC and act on the exit status of the HEREDOC/ssh, like so:
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
echo "java server stopped running"
exit 7 # Set the exit status to a number that isn't standard in case ssh fails
else
echo "java server up"
fi;
EOF
if [[ $? -eq 7 ]]
then
./other_script.sh
fi
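A minimal alternative sketch that avoids the positional $? check by testing the ssh command directly (the identity-file option from the question is omitted here; add it back as needed):
# Branch locally on the exit status of the remote check.
# Note: a failed ssh connection also takes the else branch.
if ssh ec2-user@"$DNS" 'lsof -i | grep -q java'; then
  echo "java server up"
else
  echo "java server stopped running"
  ./other_script.sh   # runs locally, in the current directory
fi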

Starting multiple services using shell script in Dockerfile

I am creating a Dockerfile to install and start the WebLogic 12c services using startup scripts at "docker run" time. I am passing a shell script to the CMD instruction, which executes the startWebLogic.sh and startNodeManager.sh scripts. But when I logged in to the container, only the first script, startWebLogic.sh, had been started; the second one had not been started at all, which is obvious from the docker logs.
The same script, executed manually inside the container, starts both services. What is the right instruction for running the script so that it starts multiple processes in the container without the container exiting?
What am I missing in this script and in the Dockerfile? I know a container is meant to run only one process, but, even if it is a bit dirty, how can I start multiple services for an application like WebLogic, which has a name server, node manager, managed server, plus the creation of managed domains and machines? The managed server can only be started when the WebLogic name server is running.
Script: startscript.sh
#!/bin/bash
# Start the first process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start my_first_process: $status"
exit $status
fi
# Start the second process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start my_second_process: $status"
exit $status
fi
while sleep 60; do
ps aux |grep "Name=adminserver" |grep -q -v grep
PROCESS_1_STATUS=$?
ps aux |grep node |grep -q -v grep
PROCESS_2_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
Truncated Dockerfile:
RUN unzip $WLS_PKG
RUN $JAVA_HOME/bin/java -Xmx1024m -jar /u01/app/oracle/$WLS_JAR -silent -responseFile /u01/app/oracle/wls.rsp -invPtrLoc /u01/app/oracle/oraInst.loc > install.log
RUN rm -f $WLS_PKG
RUN . $WLS_HOME/server/bin/setWLSEnv.sh && java weblogic.version
RUN java weblogic.WLST -skipWLSModuleScanning create_basedomain.py
WORKDIR /u01/app/oracle
CMD ./startscript.sh
docker build and run commands:
docker build -f Dockerfile-weblogic --tag="weblogic12c:startweb" /var/dprojects
docker run -d -it weblogic12c:startweb
docker exec -it 6313c4caccd3 bash
Please use supervisord for running multiple services in a docker container. It will make the whole process more robust and reliable.
Run supervisord -n as your CMD command and configure all your services in /etc/supervisord.conf.
Sample conf would look like:
[program:WebLogic]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
stderr_logfile = /var/log/supervisord/WebLogic-stderr.log
stdout_logfile = /var/log/supervisord/WebLogic-stdout.log
autorestart=unexpected
[program:NodeManager]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
stderr_logfile = /var/log/supervisord/NodeManager-stderr.log
stdout_logfile = /var/log/supervisord/NodeManager-stdout.log
autorestart=unexpected
It will handle all the things you are trying to do with a shell script.
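A quick way to confirm that both programs are being managed is to query supervisord from a shell inside the container (a hedged sketch; assumes supervisorctl is installed alongside supervisord and reads the same conf):
# Lists each [program:...] entry with its state (RUNNING, EXITED, FATAL, ...).
supervisorctl -c /etc/supervisord.conf status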
Hope it helps!

Docker Check if DB is Running

entrypoint.sh contains various cqlsh commands that require Cassandra. Without something like script.sh, cqlsh commands fail because Cassandra doesn't have enough time to start. When I execute the following locally, everything appears to work properly. However, when I run via Docker, script.sh never finishes. In other words, $status never changes from 1 to 0.
Dockerfile
FROM cassandra
RUN apt-get update && apt-get install -y netcat
RUN mkdir /dir
ADD ./scripts /dir/scripts
RUN /bin/bash -c 'service cassandra start'
RUN /bin/bash -c '/dir/scripts/script.sh'
RUN /bin/bash -c '/dir/scripts/entrypoint.sh'
script.sh
#!/bin/bash
set -e
cmd="$#"
status=$(nc -z localhost 9042; echo $?)
echo $status
while [ $status != 0 ]
do
sleep 3s
status=$(nc -z localhost 9042; echo $?)
echo $status
done
exec $cmd
Alternatively, I could do something like until cqlsh -e 'some code'; do ..., as noted here for psql, but that doesn't appear to work for me. I'm wondering how best to approach the problem.
You're misusing the RUN instruction in your Dockerfile. It executes at image build time and is meant for making filesystem changes in your image, not for starting services; any process it starts is gone once that build step finishes. That's why $status never updates: Cassandra isn't actually running when script.sh polls it.
You should add service cassandra start and /dir/scripts/entrypoint.sh to your script.sh file, and make that the CMD that's executed by default:
Dockerfile
CMD ["/bin/bash", "-c", "/dir/scripts/script.sh"]
script.sh
#!/bin/bash
set -e
# NOTE: I removed your `cmd` processing in favor of invoking entrypoint.sh
# directly.
# Start Cassandra before waiting for it to boot.
service cassandra start
status=$(nc -z localhost 9042; echo $?)
echo $status
while [ $status != 0 ]
do
sleep 3s
status=$(nc -z localhost 9042; echo $?)
echo $status
done
exec /bin/bash -c /dir/scripts/entrypoint.sh
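If you prefer the until cqlsh approach mentioned in the question, a minimal sketch of the same wait loop (assuming cqlsh can reach Cassandra on localhost with default settings) would be:
# Start Cassandra, then poll with cqlsh itself until it accepts connections.
service cassandra start
until cqlsh -e 'DESCRIBE KEYSPACES' > /dev/null 2>&1; do
  echo "Cassandra not ready yet; retrying in 3s..."
  sleep 3
done
exec /bin/bash -c /dir/scripts/entrypoint.sh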

rc.d start does not terminate?

So I wrote an Arch Linux rc.d script for the mongod daemon (following an example), but when I do:
sudo rc.d start mongod
it just gets stuck on:
:: Starting /usr/bin/mongod [BUSY]
and never transitions to "DONE" phase. Any tips?
Here is my script:
#!/bin/bash
# import predefined functions
. /etc/rc.conf
. /etc/rc.d/functions
# Point to the binary
DAEMON=/usr/bin/mongod
# Get the ARGS from the conf
. /etc/conf.d/crond
# Function to get the process id
PID=$(get_pid $DAEMON)
case "$1" in
start)
stat_busy "Starting $DAEMON"
# Check the PID exists - and if it does (returns 0) - do not run
[ -z "$PID" ] && $DAEMON $ARGS &> /dev/null
if [ $? = 0 ]; then
add_daemon $DAEMON
stat_done
else
stat_fail
exit 1
fi
;;
stop)
stat_busy "Stopping $DAEMON"
kill -HUP $PID &>/dev/null
rm_daemon $DAEMON
stat_done
;;
restart)
$0 stop
sleep 1
$0 start
;;
*)
echo "usage: $0 {start|stop|restart}"
esac
I've looked at how apache does it, but I can't figure out what they are doing that's different. Here's a piece of their httpd script:
case "$1" in
start)
stat_busy "Starting Apache Web Server"
[ ! -d /var/run/httpd ] && install -d /var/run/httpd
if $APACHECTL start >/dev/null ; then
add_daemon $daemon_name
stat_done
else
stat_fail
exit 1
fi
;;
For one thing, you are passing an $ARGS variable that is never actually defined. You will probably want to either pass some configuration options, or the location of a mongodb.conf file using the -f or --config option, to inform the daemon of the location of your database, log file, IP bindings, etc.
The mongod defaults assume that your database location is /data/db/. If this does not exist, or the daemon does not have permissions to that location, then the init script will fail.
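As a hedged sketch, an /etc/conf.d/mongodb file (sourced by the rc.d script in place of /etc/conf.d/crond) could define ARGS along these lines:
# Hypothetical /etc/conf.d/mongodb; adjust the config path to your setup.
ARGS="--config /etc/mongodb.conf --fork"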
You should probably also run the daemon with a user account other than yourself or root (the default pacman package creates a user named mongodb), and give this user read/write access to the data path and log file.
[ -z "$PID" ] && /bin/su mongodb -c "/usr/bin/mongod --config /etc/mongodb.conf --fork" > /dev/null
I would suggest referring to the mongodb init script provided in the Arch Community package, and comparing that to what you have here. Or, install MongoDB using pacman, which sets all of this up for you.
If all else fails, add some 'echo' commands inside of your if and else blocks to track down exactly where the init script is hanging, check mongodb's logs, and report back to us.
