I built an image with Python installed and a Python application. The application is a Hello, World! program that just prints "Hello, World!" to the screen. Dockerfile:
FROM python:2-onbuild
CMD ["python", "./helloworld.py"]
In the console I execute:
docker run xxx/zzz
I can see the Hello, World! output. Now I am trying to execute the same application as a task from ECS. I have already pushed the image to Docker Hub.
How can I see the output Hello, World!? Is there a way to see that my container runs correctly?
docker logs <container id> will show you all the output of the container run. If you're running it on ECS, you'll probably need to set DOCKER_HOST=tcp://ip:port for the host that ran the container.
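A rough sketch of that flow (the IP, key path, and container ID are placeholders, and port 2375 only applies if the daemon on that host is configured to listen on unencrypted TCP):
export DOCKER_HOST=tcp://<ecs-instance-ip>:2375   # point the docker CLI at the remote daemon
docker ps -a                                      # find the container that ran your task
docker logs <container id>                        # print its output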
To view the logs of a Docker container in real time, use the following command:
docker logs -f <CONTAINER>
The -f or --follow option shows live log output. It also works if the container has already stopped: it fetches the logs the container produced before exiting.
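For example, to follow the log live but only print the last 100 lines of history first (--tail is a standard docker logs option):
docker logs -f --tail 100 <CONTAINER>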
Instead of just tracing the logs, it may be a better idea to enter the container with:
docker exec -it CONTAINER_ID /bin/sh
and investigate your process from inside.
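A minimal sketch of what that investigation might look like, assuming the image ships a shell and the usual procps tools:
docker exec -it CONTAINER_ID /bin/sh
# inside the container:
ps aux                  # list the running processes
cat /proc/1/cmdline     # show what PID 1 (the main process) was started with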
You can log in to your container instance and, for example, run docker ps there.
This guide describes how to connect to your container instance:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/troubleshooting.html#instance-connect
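Roughly, and assuming the default user of the ECS-optimized AMI and a placeholder key path:
ssh -i ~/.ssh/my-key.pem ec2-user@<container-instance-ip>
docker ps -a                 # list the containers ECS started on this instance
docker logs <container id>   # view your task's output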
You can use basic output redirection to a file.
For whatever command you have running in your Dockerfile, put >> /root/file.txt at the end of the command.
So...
RUN ifconfig >> /root/file.txt
RUN curl google.com >> /root/file.txt
Then all you need to do is log in to the container and run cat /root/file.txt to see exactly what was on screen. Whether you can also copy the file from the container to the host at the end of the Dockerfile, I don't know, but maybe.
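For what it's worth, once the container exists you can pull the file out from the host side with docker cp:
docker cp <container id>:/root/file.txt ./file.txt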
Related
When I run or start a Docker container, it will not stay running.
docker start will just return the name of whatever container I gave it, but won't actually do anything. docker run (e.g. $ docker run -p 8080:80 --name hello -d hello-world) will create it, but it will exit immediately.
If I run docker ps after one of these, it will show nothing listed as currently running.
If I run docker ps -a, it will show all of my containers and show the one that I just attempted to run having exited a few seconds ago.
Is this common, and how do I get my containers to stay running? I am trying to learn how to use Docker and it has been one of the worst experiences. Thank you for any help or suggestions.
Docker containers are generally used to run applications/processes in an isolated environment.
When you run the hello-world image, it creates a container whose only purpose is to print a message to standard output. Once that process has run, the container is done with its work. That is why you see nothing when you run docker ps.
In order to keep a container running, you need to have a process inside that container that will run (for example: a server, database, application etc.)
Try creating a container from the mysql image, then check the running containers.
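A minimal sketch, using the official mysql image and a throwaway password:
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
docker ps    # the container stays up because the mysqld server process keeps running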
In your command you specify the -d flag (a.k.a. --detach), which means "Run container in background and print container ID" (from the Docker docs). See more discussion about this here: Docker container will automatically stop after "docker run -d"
docker run -p 8080:80 --name hello -d hello-world
If you run it without the -d flag, it should run in the foreground and send output to your terminal
docker run -p 8080:80 --name hello hello-world
You don't see it in docker ps (without -a) because the container just executes the hello-world script and exits. If the container started a long-running process, you would find it in docker ps. To verify this, you can try running one of the nginx demo containers (e.g. nginx-hello), which serve up a 'hello world'/demo page.
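For example (assuming nginxdemos/hello is the demo image meant above):
docker run -d -p 8080:80 nginxdemos/hello
docker ps    # now shows a container that stays running, because nginx keeps serving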
To find out what's wrong with your container, use the docker logs <your container name> command.
Then you can sort out what went wrong with your container.
Is this common and how do I get my containers to stay running?
What happens when you start a Docker container?
By default, it executes the command/the entrypoint specified in the Dockerfile image.
Generally that command or the entrypoint is a script or a program located in the image.
When that script/program exits, the container exits too. That's all.
To keep a container alive, the script/program has to stay running.
You start a container from a hello image; the "hello" container says "hello" and exits.
That may be a script as simple as :
#!/bin/sh
echo "hello"
So it is expected to finish, and the container exits with it.
Run a database or a web server and you will see a different behavior: the script/program keeps running until you stop it, so the container also stays running until you stop it.
To experiment, you can run a container with an endless command. Note that --entrypoint has to come before the image name, and the hello-world image itself does not contain tail, so use a fuller image such as ubuntu:
docker run -p 8080:80 --name hello -d --entrypoint tail ubuntu -f /dev/null
You will see that the container stays running.
A docker container exits when its main process finishes. The hello-world main process just prints some text and exits, so container exits too.
You can run this command directly to see its text:
docker run hello-world
If you want a running container, maybe you can try a nginx demo:
docker run --name nginx-demo -p 8080:80 -d nginx
then you can visit http://localhost:8080 using your web browser.
I'm very new to docker.
Also I'm using Docker for Windows (i.e. the image and container are for the Windows OS).
I'm trying to get a list of all the folders and subfolders to resolve another issue I'm having. I read several posts and blogs, and it seems like I should be able to run
docker exec -it <container id> dir
to get the info, as it is supposed to allow me to run commands against the container.
I even ran
docker exec -it f83eb1533b67 help
which gave me a list of commands (because nobody tells you what the acceptable 'commands' are...), and dir is listed. However, I get the following message when I run the dir command:
PS P:\docker\tmp\SqlServerSetup> `docker exec -it f83eb1533b67 dir`
container f83eb1533b671b4462b8a1562da7343185b2dd27e94ff360e0230969d432ec37 encountered an error during CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2)
[Event Detail: Provider: 00000000-0000-0000-0000-000000000000] extra info: {"CommandLine":"dir","WorkingDirectory":"C:\\","Environment":{"ACCEPT_EULA":"Y","attach_dbs":"[]","sa_password":"Pass1.4DBAs","sa_password_path":"C:\\ProgramData\\Docker\\secrets\\sa-password"},"EmulateConsole":true,"CreateStdInPipe":true,"CreateStdOutPipe":true,"ConsoleSize":[0,0]}
PS P:\docker\tmp\SqlServerSetup>
Please note: I don't want to persist a volume. Seems like that option is for people that are trying to reuse data.
UPDATE:
This is the statement that I'm using to create the container:
docker run -p 1433:1433 -e sa_password=Pass1.4DBAs -e ACCEPT_EULA=Y -p 11433:1433 --name sqlTraining --cap-add SYS_PTRACE -d microsoft/mssql-server-windows-developer
It works fine. Container is created, but I want to view the filesystem within that container.
For Windows containers, prefix the command with the command shell (cmd) and the /c parameter. For example:
docker exec <container id> cmd /c dir
This will execute the dir command on the specified container and terminate.
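With the container ID from the question, that would be:
docker exec f83eb1533b67 cmd /c dir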
Try running:
docker exec -it <container id> sh
to start the interactive shell console. This should help you with debugging.
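Note that a Windows container may not have sh at all; in that case an interactive cmd or PowerShell session is the equivalent (assuming PowerShell is present in the image):
docker exec -it <container id> cmd
docker exec -it <container id> powershell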
The answers from this question do not work.
The Docker container always exits before I can attach, or it won't accept the -t flag. I could list all of the commands I've tried, but it's a combination of start, exec, and attach with various -it flags and /bin/bash.
How do I start an existing container into bash? Why is this so difficult? Is this an "improper" use of Docker?
EDITS:
I created the container with docker run ubuntu. The information about the container: 60b93bda690f ubuntu "/bin/bash" About an hour ago Exited (0) 50 minutes ago ecstatic_euclid
First of all, a container is not a virtual machine. A container is an isolated environment for running a process. The life cycle of the container is bound to the process running inside it: when the process exits, the container exits too, and the isolated environment is gone. "Attaching to" or "entering" a container really means going inside the isolated environment of the running process, so if your process has exited, your container has exited as well, and there is no container for you to attach to or enter. The docker attach and docker exec commands therefore only target running containers.
Which process gets started when you docker run an image is configured in its Dockerfile and baked into the image. Take the ubuntu image as an example: if you run docker inspect ubuntu, you'll find the following config in the output:
"Cmd": ["/bin/bash"]
which means the process started when you run docker run ubuntu is /bin/bash. But you're not in interactive mode and no tty is allocated, so the process exits immediately and the container exits with it. That's why you have no way to enter the container again.
To start a container and enter bash, just try:
docker run -it ubuntu
Then you'll be brought into the container shell. If you open another terminal and docker ps, you'll find the container is running and you can docker attach to it or docker exec -it <container_id> bash to enter it again.
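A quick sketch of that two-terminal flow:
docker run -it ubuntu                   # terminal 1: starts bash inside the container
docker ps                               # terminal 2: the container shows up as running
docker exec -it <container_id> bash     # terminal 2: opens a second shell in it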
You can also refer to this link for more info.
Here is a very simple Dockerfile with instructions as comments... build and launch it to spin up a running container you can exec into:
FROM ubuntu:20.04
ENV TERM linux
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y curl   # example package; install whatever you actually need
CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_ubuntu . # creates image stens_ubuntu
#
# docker run -d stens_ubuntu sleep infinity # launches container
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_ubuntu | cut -d' ' -f1 ) bash # login to running container
# docker exec -ti 3cea1993ed28 bash # login to running container using sample containerId
#
A container will exit normally when it has no work to do, so if you give it no work it will exit immediately upon launch. Typically the last command of your Dockerfile executes some flavor of server, which stays alive thanks to an internal event loop and in doing so keeps its enclosing container alive. Short of that, you can pass a server executable that has been installed into the container as the final parameter of your call to
docker run -d my-image-name my-server-executable
I am trying to run a container and modify certain files in it. I am trying to do this using a script. If I use:
docker run -i -t <container> <image>, it is giving me
STDERR: cannot enable tty mode on non tty input
If I use:
docker run -d <container> <image> bash, the container is not starting.
Is there anyway to do this?
Thanks
Run the docker image in background using:
docker run -d <image>:<version>
Check running docker containers using:
docker ps
If there is only one container running, you can use the command below to attach to it and use bash to browse files/directories inside the container:
docker exec -it $(docker ps -q) bash
You can then modify/edit any file you want and restart the container.
To stop a running container:
docker stop $(docker ps -q)
To run a stopped container:
docker start -ia $(docker ps -lq)
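Putting it together, a typical edit-and-restart session might look like this (docker ps -lq picks the most recently created container):
docker exec -it $(docker ps -q) bash    # open a shell in the only running container
# ... edit the files you need inside the container ...
exit
docker restart $(docker ps -lq)         # restart it so the running process picks up the changes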
So to start off, the -i -t flags are for an interactive tty mode, for interacting with the container. If you are invoking this in a script, then it's likely that this won't work as you expect.
This is not really the way containers are meant to be used. If it is a permanent change, you should be rebuilding the image and using that for the container.
However, if you want to make changes to files that are reflected in the container, you could consider using volumes to mount directories from the host into the container. This would look something like:
docker run -v /some/host/dir:/some/container/dir -d <image>
At this point anything you change within /some/host/dir will be within the container at /some/container/dir. You can then make your changes with a script on the host, without having to invoke the docker cli.
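For example, with the mount above in place (the paths are the same placeholders as in the command, and app.conf is just an illustrative file name), a change made on the host is immediately visible inside the container:
echo "new setting" > /some/host/dir/app.conf                    # edit on the host
docker exec <container id> cat /some/container/dir/app.conf     # the container sees the change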
I want to ssh or bash into a running Docker container. Please see this example:
$ sudo docker run -d webserver
webserver is a clean image based on ubuntu:14.04
$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
665b4a1e17b6 webserver:latest /bin/bash ... ... 22/tcp, 80/tcp loving_heisenberg
Now I want to get something like this (go into the running container):
$ sudo docker run -t -i webserver (or maybe 665b4a1e17b6 instead)
$ root#665b4a1e17b6:/#
Previously I used Vagrant so I want to get behavior similar to vagrant ssh. Please, could anyone help me?
After the release of Docker version 1.3, the correct way to get a shell or other process on a running container is using the docker exec command. For example, you would run the following to get a shell on a running container:
docker exec -it myContainer /bin/bash
You can find more information in the documentation.
The answer is the docker attach command.
For information see: https://askubuntu.com/a/507009/159189
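For example, with the container from the question: docker attach 665b4a1e17b6. Keep in mind that attach connects you to the container's main process, so exiting that shell stops the container; for a throwaway shell, docker exec (as in the answer above) is usually the safer option.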