Currently, my project's testing setup is twofold: for day-to-day development, I run testcafe through foreman on macOS (to take advantage of my personal .env file), and on the CI server (BitBucket), I run testcafe through the testcafe/testcafe docker image.
However, not using the same environment during development and CI is suboptimal, so I figured using docker(-compose) in both scenarios would be the best way to go. After reading testcafe issue 1880 and PR 2574, I concluded my command for development should be something like:
docker run -v /Users/bert/Development/m4e/ui_factory/test/tests:/test -p 1337:1337 -p 1338:1338 -it testcafe/testcafe -- remote /test --hostname localhost
but I seem unable to connect Safari to http://localhost:1337 in this case:
Safari can't open the page "172.17.0.2:1337/browser/connect/ryD70k" because Safari can't connect to the server "172.17.0.2"
Does anyone have an idea how to tackle this?
Please delete the unnecessary " -- " in the following part of the command:
testcafe/testcafe -- remote
Here is a help topic that describes how to use the TestCafe Docker image:
Using TestCafe Docker Image
As @Marion pointed out, the culprit is the -- in the command. I used it to ensure the arguments of the command were clearly separated from the docker arguments. It is not simply 'unnecessary'; it is wrong.
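So the working command is simply the one from my question with the -- removed:
docker run -v /Users/bert/Development/m4e/ui_factory/test/tests:/test -p 1337:1337 -p 1338:1338 -it testcafe/testcafe remote /test --hostname localhost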
With the license change for Docker Desktop on Windows, I'm looking for an alternative. Podman + WSL2 seems to do the trick for me. Except for Testcontainers in my Quarkus tests.
I'm able to run my tests within WSL2 by starting podman system service in WSL2 (podman system service -t 0 tcp:localhost:8880) and setting the DOCKER_HOST env var (DOCKER_HOST=tcp://localhost:8880).
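Spelled out, the minimal WSL2-side setup looks like this (port 8880 is just the one I picked):
# inside WSL2: expose the Podman API over TCP, with no timeout
podman system service -t 0 tcp:localhost:8880
# in the shell that runs the tests: point Docker clients at that endpoint
export DOCKER_HOST=tcp://localhost:8880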
Now this works, but it isn't really what I need, since at my company we develop in VSCode, IntelliJ and Eclipse. I'd like to be able to run the tests from within those IDEs. Is there any way to pass the podman uri (from WSL) to my IDE in Windows while running Quarkus tests?
If anyone knows of any other docker desktop alternatives that work with Testcontainers, that would be awesome as well. I have tried Rancher Desktop, but it gets stuck and the tests eventually time out.
You have to install the podman-remote package on your Windows host machine, then configure it to use tcp://WSL2_IP:8880 (see the podman documentation) and finally make an alias for the program: docker -> podman.exe.
Now you are able to run docker commands as usual: docker ps, docker run, etc. But that does not mean all tools will work out of the box; you have to tune them.
For example, for Testcontainers you have to set environment variables on the host machine:
PowerShell
[System.Environment]::SetEnvironmentVariable("DOCKER_HOST", "tcp://WSL2_IP:8880", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_CHECKS_DISABLE", "True", [System.EnvironmentVariableTarget]::User)
[System.Environment]::SetEnvironmentVariable("TESTCONTAINERS_RYUK_DISABLED", "True", [System.EnvironmentVariableTarget]::User)
P.S. Docker used to set all variables of that kind for you, but from now on you have to do it yourself.
We ran the testcontainers-java tests using various solutions for Docker.
I don't know if running in WSL changes a lot compared to the Windows-only setup.
In general, Testcontainers doesn't rely on CLI commands only and works best with compatible Docker environments. Based on the findings in that experiment, you can try minikube.
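If you try minikube, a minimal sketch of pointing Docker clients (and thus Testcontainers) at its daemon from a shell, assuming minikube is already running with a Docker runtime:
# export DOCKER_HOST and the related TLS variables into the current shell
eval $(minikube docker-env)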
Take IntelliJ for example: you can set the DOCKER_HOST env var via "Run/Debug Configurations" and it works perfectly.
I'm new to Docker. I have been trying to deploy a Linux container (with Windows as the host) with a Google Cloud image inside, using Docker. I'm able to do everything well: at the end the server is running perfectly, but when I want to check the server, using localhost in the browser, I get a blank page.
This is the Dockerfile:
# base image with the Google Cloud SDK preinstalled
FROM google/cloud-sdk
# make the SDK tools available on the PATH
ENV PATH /usr/lib/google-cloud-sdk/bin:$PATH
# working directory inside the image (created if it does not exist)
WORKDIR docker_folder
# copy the app sources into the image
COPY local_folder/ .
# debug step: print the working directory during the build
RUN pwd
# document the port the dev server listens on
EXPOSE 8080
# start the App Engine dev server on the copied app
CMD ["java_dev_appserver.sh", "."]
This is the command I'm using to build my image (in the Windows command prompt):
docker build --tag serverdeploy .
This is the command I'm using to run my container:
docker run -p 8080:8080 serverdeploy
This is the stack trace I got when I ran the server, where I can see that the server is running.
I did some research and it looks like Docker had a problem with ports when you use a Linux container on Windows (I'm not sure whether it's already solved or not). I've already tried all the possible solutions I found out there (even replacing 'localhost' with all the IPs I get when I run ipconfig in the cmd) but I still get the same error.
And, as a last hope, I need your help to understand what I'm doing wrong, or whether I'm missing something.
You are running your service bound to localhost, which means no remote connections are accepted (the same goes for binding to 127.0.0.1). And for your container, the host is a remote connection.
Change binding to 0.0.0.0 (which I guess is default) and enjoy.
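If the script forwards flags to the dev server, the change could look like this in your Dockerfile (the --address flag name is an assumption; check your server's options):
# bind to all interfaces so the host's port mapping can reach the server
CMD ["java_dev_appserver.sh", "--address=0.0.0.0", "."]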
Btw sharing your java_dev_appserver.sh would be helpful for answering the question.
I want to build a cassandra cluster with docker. The documentation already tells you how to do this, so that is not the problem I have.
However, I am currently using Docker on Windows 10, and it obviously cannot execute the nested command in docker run --name some-cassandra2 -d -e CASSANDRA_SEEDS="$(docker inspect --format='{{ .NetworkSettings.IPAddress }}' some-cassandra)" cassandra:tag, which results in an empty seed list for the container.
How can I nest a command like this on Windows or, if this is not possible, what is a workaround?
I managed to fix it thanks to a docker-compose.yml by Jason Giedymin. It should work in v1 as well as v2 of docker-compose. By doing it this way you just let docker do the linking from the get-go and tell the cassandras about other seeds with the environment variable the container already gives you.
The sleep 30 part is pretty smart as well, as it makes sure that the second container doesn't try to connect to a container that isn't fully up yet.
One thing I would recommend, though, is using external_links instead of links. This way other containers don't rely on all of the cassandra containers being up to start/work. Anything else would defeat the purpose of a distributed database.
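For illustration, a minimal sketch of such a compose file (v2 syntax; the image tag, service names and startup command are my assumptions, not Jason Giedymin's exact file):
version: '2'
services:
  cassandra1:
    image: cassandra:3.11
  cassandra2:
    image: cassandra:3.11
    environment:
      # seed the second node with the first one, addressed by service name
      - CASSANDRA_SEEDS=cassandra1
    depends_on:
      - cassandra1
    # wait for the first node to come up before starting the second one
    command: bash -c 'sleep 30; exec /docker-entrypoint.sh cassandra -f'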
I still don't know how to nest Windows cmd commands into each other, so I would still be thankful for some tips.
I am running the latest Docker Toolbox, using the latest Oracle VirtualBox, with Windows 7 as the host OS.
I am trying to enable non-TLS access to the Docker remote API, so I can use the Postman REST client running on Windows to hit the Docker API running on the docker-machine in VirtualBox. I found that if the Docker configuration included -H tcp://0.0.0.0:2375, that would do the trick, exposing the API on port 2375 of the docker machine, but for the life of me I can't find where this configuration is stored and can be changed.
I did docker-machine ssh from the Toolbox CLI, and then went and poked around the /etc/init.d/docker file, but no changes to the file survive docker-machine restart.
I was able to find answer to this question for Ubuntu and OSX, but not for Windows.
@CarlosRafaelRamirez mentioned the right place, but I will add a few details and provide step-by-step instructions, because Windows devs are often not fluent in the Linux ecosystem.
Disclaimer: the following steps make it possible to hit the Docker Remote API from the Windows host, but please keep in mind two things:
This should not be done in production as it makes the Docker machine very insecure.
The current solution disables most of docker-machine and all docker CLI functionality. docker-machine ssh remains operational, forcing one to SSH into the docker machine to access docker commands.
Solution
Now, here are the steps necessary to switch the Docker API to the non-TLS port. (The Docker machine name is assumed to be "default". If your machine has a different name, you will need to specify it in the commands below.)
Start "Docker Quickstart Terminal". It starts Bash shell and is the place where all following commands will be run. Run docker-machine ip command and note the IP address of the docker host machine. Then do
docker-machine ssh
cd /var/lib/boot2docker
sudo vi profile This starts the "vi" editor in elevated-privileges mode, required for editing the "profile" file where the Docker host settings are. (If as a Windows user you are not familiar with vi, here's a super-basic crash course on it. When the file is open in vi, vi is not in editing mode. Press "i" to start edit mode. Now you can make changes. After you have made all the changes, hit Esc and then ZZ to save the changes and exit vi. If you need to exit vi without saving changes, after Esc please type :q! and hit Enter. ":" turns on vi's command mode, and "q!" means exit without saving. Detailed vi command info is here.)
Using vi, change DOCKER_HOST to be DOCKER_HOST='-H tcp://0.0.0.0:2375', and set DOCKER_TLS=no. Save changes as described above.
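After the edit, the relevant lines of the profile file should read:
DOCKER_HOST='-H tcp://0.0.0.0:2375'
DOCKER_TLS=no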
exit to leave SSH session.
docker-machine restart
After the docker machine has restarted, you should be able to hit the Docker API URL, like http://dockerMachineIp:2375/containers/json?all=1, and get valid JSON back.
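For example, a quick check from the Bash shell (192.168.99.101 stands in for the IP you noted earlier):
curl "http://192.168.99.101:2375/containers/json?all=1"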
This is the end of the steps required to achieve the main goal.
However, if at this point you try to run docker-machine config or docker images, you will see an error message indicating that the docker CLI client is trying to reach Docker through the old port/TLS settings, which is understandable. What I did not expect, though, is that even after I followed all the Getting Started directions, and ran export DOCKER_HOST=tcp://192.168.99.101:2375 and export DOCKER_TLS_VERIFY=0, resulting in
$ env | grep DOCKER
DOCKER_HOST=tcp://192.168.99.101:2375
DOCKER_MACHINE_NAME=default
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
the result was the same:
$ docker-machine env
Error checking TLS connection: Error checking and/or regenerating the certs: There was an error validating certificates for host
"192.168.99.101:2376"
If you see a problem with how I changed environment variables to point Docker CLI to the new Docker host address, please comment.
To work around this problem, use the docker-machine ssh command and run your docker commands after that.
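For example (machine name "default", as assumed above):
docker-machine ssh default
docker ps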
I encountered the same problem and thanks to @VladH made it work without changing any internal Docker profile properties. All you have to do is correctly define the Windows local env variables (or configure the maven plugin properties, if you use the io.fabric8 docker-maven-plugin).
Note that port 2375 is used for non-TLS connections, and 2376 only for TLS connections.
DOCKER_HOST=tcp://192.168.99.100:2376
DOCKER_TLS_VERIFY=0
DOCKER_TOOLBOX_INSTALL_PATH=C:\Program Files\Docker Toolbox
DOCKER_CERT_PATH=C:\Users\USERNAME\.docker\machine\machines\default
I am looking for best practices for front-end development on OSX with docker and I have found a number of projects on github. Here they are:
docker-osx-dev
boot2docker-xhyve
coreos-xhyve
docker-unison
hodor
The fact is I need two-way syncing of files from the host system to the virtual container and vice versa via a mounted (synced) folder, and IO performance should be like native. Therefore I don't consider shared-folder filesystems like vboxsf and vmhgfs. It's also necessary to have some build tools (gulp etc.) with a working watcher within the shared folder.
What do you think about xhyve (with NFS) instead of VirtualBox? For those who have tried unison, what performance does docker provide with it?
Finally, I have a special task: I want to run app.js via nodejs from the host against the container ENV, if that is possible. In other words, I have to add an ENV variable for the PATH to nodejs (within the virtual container) to my ~/.bash_profile. Is there any chance to pass NODE_PATH through from host to container at all?
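(As far as I understand, docker run -e can forward a variable from the host environment when no value is given; a sketch, with a placeholder image name:
# forward the host's NODE_PATH value into the container
docker run -it -e NODE_PATH my_node_image node app.js
but I don't know whether that covers my case.)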
Thanks.
Not sure if "best practice" is asking for opinions (which is against SO policy), note that this also heavily depends on your tools chain.
I'm not a fan of boot2docker as it works to date (although it may improve, and it may be the best approach in the long term as it is the official approach maintained by the docker team).
EDIT: boot2docker was discontinued and replaced by Docker Machine which does pretty much the same thing but in a more generic way, allowing you to manage Docker daemons locally, in LAN or in the cloud.
As for me, I'm on Windows, but I face the same (even more) difficulties as OSX devs. As I'm using Hyper-V, boot2docker (VirtualBox) can't run, so I have to roll my own. Also, the last time I tried boot2docker it ran TinyCoreLinux, which is another Linux distribution I'd have to learn, while my focus is CoreOS in the cloud, so I'd rather just focus on CoreOS.
The target for setting up your dev environment is as follows:
Have ssh access with mounting rights to a docker host (either in VM or on LAN): this is CoreOS on Hyper-V for me.
Have a native docker client & export DOCKER_HOST=<ip or hostname here>
mount /mnt/from/host working directory into your docker host for live reload: this works through mount.cifs on CoreOS with a systemd unit for me.
Make a dev.Dockerfile for your dev requirements: if you're a node developer, start from the node image, npm install gulp/browserify/.. whatever you need as a base image for your projects (see the sketch after this list) & docker build -f dev.Dockerfile -t my_dev_container .
docker run -it -v /mnt/from/host/:/src/app/ my_dev_container
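A minimal sketch of such a dev.Dockerfile (the exact packages depend on your project; gulp and browserify are just the examples mentioned above):
# dev.Dockerfile: throwaway development image
FROM node
# global build tooling for the dev loop
RUN npm install -g gulp browserify
# the host directory gets mounted here at run time
WORKDIR /src/app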
You are now in a terminal with a fully isolated environment which can be put under source control & replicated between project members and has full live reload abilities.
Drawbacks: if you rely on a REPL or IntelliSense from your IDE, you'll have to have an IDE that can use the remote server. Or you have to run your IDE within the dev container (cloud9, or use an X server).
Of course if you live in a terminal and are fluent in vim, you are good to go.