Programmatically run cypress tests on docker - cypress

I am currently running cypress tests using:
await cypress.run({config inserted here})
Wondering if there is a way to spin up one of Cypress's Docker containers and then point the Cypress tests there using the statement above. The suggestions online are to run the tests from the command line, but I'm hoping to still be able to use cypress.run() and perhaps pass in an option that tells Cypress to run the tests in a container?

Cypress Docker containers invoke cypress run by default. To change that, you'll need to override the container entrypoint to invoke node instead of cypress, and then pass the script file that invokes Cypress via the Module API (cypress.run()) as the container command. You could do this via the command line, but it's a bit long because of all the options you'll need to pass:
# Assumes you're running this from the Cypress project directory on the host machine
$ docker run -it --rm -v $PWD:/tests -w /tests --entrypoint "/usr/local/bin/node" cypress/included:3.2.0 "myScript.js"
Warning: you're mounting in your npm dependencies with this method, which might cause problems if any of them were compiled on a different OS, such as Windows.
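For reference, here's a minimal sketch of what myScript.js could contain; the config values are placeholders rather than anything from the question:

// myScript.js - drives Cypress through the Module API instead of the CLI
const cypress = require('cypress')

cypress
  .run({
    // placeholder config; pass whatever you already give cypress.run()
    config: { baseUrl: 'http://localhost:3000' },
  })
  .then((results) => {
    // surface test failures as a non-zero container exit code
    process.exit(results.totalFailed > 0 ? 1 : 0)
  })
  .catch((err) => {
    console.error(err)
    process.exit(1)
  })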

Related

Cypress: run some of the tests in parallel

I have done a lot of research on running some tests in parallel and some in series, and still don't have any great options for doing that. We'll have 4 virtual machines for this. Each of those VMs has its own Docker setup for the application, with its own database as well. I have a few tests which need to run on the same machine.
I'm thinking: can tags or something similar be configured so that tests with specified tags run on VM 1, for example?
I found a solution in a plugin named @cypress/grep (link below). With it you can add tags to tests like this:
it('works', { tags: '@smoke' }, () => ...)
Run all tests tagged @smoke:
$ npx cypress run --env grepTags=@smoke
Run all tests except those tagged @smoke:
$ npx cypress run --env grepTags=-@smoke
Install instructions and usage: https://www.npmjs.com/package/@cypress/grep
The tags work the same way as Robot Framework tags.
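For completeness, here's a sketch of how the plugin gets registered, based on the package's install instructions (double-check against the README linked above):

// in the Cypress support file, e.g. cypress/support/e2e.js
const registerCypressGrep = require('@cypress/grep')
registerCypressGrep()

// in cypress.config.js, register the plugin part so grepTags takes effect
const { defineConfig } = require('cypress')
module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      require('@cypress/grep/src/plugin')(config)
      return config
    },
  },
})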

How to specify an alternative main class in Spring Boot using bootBuildImage and Paketo

When calling the Spring Boot plugin's bootBuildImage task in Gradle, a Docker image is created using Paketo buildpacks. It starts the main class specified in the springBoot block. Below you can find an excerpt of the build.gradle file.
springBoot {
    mainClass = 'MyMainApp'
}
bootBuildImage {
    imageName = "$docker_repo/${project.name}"
}
When calling docker run, docker will run a container starting MyMainApp.
However, I want to run an alternative main class, using the same docker image. I tried the following:
specifying -Dloader.main=MyOtherApp as the cmd in docker run
specifying -Dloader.main=MyOtherApp in the JAVA_TOOL_OPTIONS environment variable
specifying LOADER_MAIN=MyOtherApp as an environment variable
None of those options start MyOtherApp.
An image created by Buildpacks provides some helpful tools for launching your application. While this is nice, overriding the default start command isn't as easy as just specifying a new command to docker run.
All of the facilities provided by Buildpacks for starting up various processes in an image are described in the docs.
I'm guessing a bit here, but it sounds like you want to run your own custom process (not the process detected by the buildpack), so look for the section on running custom processes.
You can even override the buildpack-defined process types:
docker run --rm --entrypoint launcher -it multi-process-app bash
docker run --rm --entrypoint launcher -it multi-process-app echo hello "$WORLD" # $WORLD is evaluated on the host machine
docker run --rm --entrypoint launcher -it multi-process-app echo hello '$WORLD' # $WORLD is evaluated in the container after profile scripts are sourced
Java should be on the path, so you can run java -Dloader.main=MyOtherApp org.springframework.boot.loader.PropertiesLauncher.
https://www.baeldung.com/spring-boot-main-class#using-cli
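Putting those two pieces together, a sketch of the full command (the image name multi-process-app follows the docs' example above; substitute your own):

docker run --rm --entrypoint launcher -it multi-process-app \
  java -Dloader.main=MyOtherApp org.springframework.boot.loader.PropertiesLauncher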
Alternatively, you could change your app to use PropertiesLauncher by default and rebuild your image. The buildpack just pulls the launcher for the start command out of the MANIFEST.MF file. You need to use PropertiesLauncher, though, as that is what supports loader.main. See https://stackoverflow.com/a/66367534/1585136.
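Following the linked answer, a sketch of how that might look in build.gradle (verify the exact launcher class name against your Spring Boot version):

bootJar {
    manifest {
        attributes 'Main-Class': 'org.springframework.boot.loader.PropertiesLauncher'
    }
}

With that manifest in place, setting -Dloader.main=MyOtherApp (for example via JAVA_TOOL_OPTIONS) should select the class to launch.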

Docker Build/Deploy using Bash Script

I have a deploy script that I am trying to use on my server for CD, but I am running into issues writing the bash script to complete some of the required steps, such as running npm and the migration commands.
How would I go about getting into the container's bash from this script, running the commands below, and then exiting so the script can finish bringing up the changes?
Here is the script I am trying to automate:
cd /Project
docker-compose -f docker-compose.prod.yml down
git pull
docker-compose -f docker-compose.prod.yml build
# all good until here because it opens bash and does not allow more commands to run
docker-compose -f docker-compose.prod.yml run --rm web bash
npm install # should be run inside of web bash
python manage.py migrate_all # should be run inside of web bash
exit # should be run inside of web bash
# back out of web bash
docker-compose -f docker-compose.prod.yml up -d
Typically a Docker image is self-contained, and knows how to start itself up without any user intervention. With some limited exceptions, you shouldn't ever need to docker-compose run interactive shells to do post-deploy setup, and docker exec should be reserved for emergency debugging.
You're doing two things in this script.
The first is to install Node packages. These should be encapsulated in your image; your Dockerfile will almost always look something like
FROM node
WORKDIR /app
COPY package*.json .
RUN npm ci # <--- this line
COPY . .
CMD ["node", "index.js"]
Since the dependencies are in your image, you don't need to re-install them when the image starts up. Conversely, if you change your package.json file, re-running docker-compose build will re-run the npm install step and you'll get a clean package tree.
(There's a somewhat common setup that puts the node_modules directory into an anonymous volume, and overwrites the image's code with a bind mount. If you update your image, it will get the old node_modules directory from the anonymous volume and ignore the image updates. Delete these volumes: and use the code that's built into the image.)
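That setup usually looks something like this in docker-compose.yml (a sketch; the service name and paths are hypothetical):

services:
  web:
    build: .
    volumes:
      - .:/app              # bind mount overwrites the code built into the image
      - /app/node_modules   # anonymous volume keeps serving the old node_modules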
Database migrations are a little trickier since you can't run them during the image build phase. There are two good approaches to this. One is to always have the container run migrations on startup. You can use an entrypoint script like:
#!/bin/sh
python manage.py migrate_all
exec "$@"
Make this script executable and set it as the image's ENTRYPOINT, leaving the CMD as the command that actually starts the application. On every container startup it will run migrations and then run the main container command, whatever that may be.
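A hypothetical Dockerfile excerpt wiring that up (the script name entrypoint.sh and the CMD here are illustrative, not from the question):

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# the entrypoint runs migrations, then receives CMD as its "$@"
ENTRYPOINT ["/entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]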
This approach doesn't necessarily work well if you have multiple replicas of the container (especially in a cluster environment like Docker Swarm or Kubernetes) or if you ever need to downgrade. In these cases it might make more sense to manually run migrations by hand. You can do that separately from the main container lifecycle with
docker-compose run web \
python manage.py migrate_all
Finally, in terms of the lifecycle you describe, Docker images are immutable: this means that it's safe to rebuild new images while the old ones are running. A minimum-downtime approach to the upgrade sequence you describe might look like:
git pull
# Build new images (includes `npm install`)
docker-compose build
# Run migrations (if required)
docker-compose run web python manage.py migrate_all
# Restart all containers
docker-compose up --force-recreate

How to run node.js with gulp inside Docker and find the right bashrc

I am new to Docker and I have some problems.
My goal:
use the Dockerfile to create a Docker container and stay inside the container / not drop out of it
run a local Docker container
install "gulp" from the package.json
install gulp globally in the container
copy my files into the Docker container
execute "gulp --version" and the default gulp task, and stay inside the terminal.
Here is my setup:
Dockerfile
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --global gulp-cli
RUN npm install
COPY . .
EXPOSE 3000
CMD ["gulp --version" , "gulp"] or? [gulp --version , gulp]
package.json
{
  "name": "docker-test",
  "version": "1.0.0",
  "description": "Testing Docker",
  "main": "index.js",
  "scripts": {
    "test": "test"
  },
  "author": "",
  "license": "",
  "devDependencies": {
    "gulp": "^4.0.0"
  }
}
gulpfile.js
function defaultTask(cb) {
  console.log("Default task for gulp 4.0 for docker")
  cb();
}
exports.default = defaultTask
docker-compose.yml (I don't think we need this for my question, but I will post it anyway since I am not exactly sure whether it could cause problems)
version: '3'
services:
  html:
    container_name: gulp-docker-test
    restart: always
    build: .
    ports:
      - '80:3000'
My problems right now:
First of all, I am really confused about the Docker workflow.
Do I understand it correctly that if I run:
docker build . --tag gulp-docker-test
I will create a new docker-container on my computer with the content of the Dockerfile?
If I need to update anything inside it, do I have to run it again so the container is updated?
If I use:
docker start gulp-docker-test
it will start the container? What if I change anything inside it? Will it still be there after a restart of the container? Or is it gone because it is only a temporary image?
Besides that, if I try to run it I get this error:
ERROR: for gulp-docker-test Cannot start service html: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"gulp --version\": executable file not found in $PATH": unknown
ERROR: for html Cannot start service html: OCI runtime create failed: container_linux.go:344: starting container process caused "exec: \"gulp --version\": executable file not found in $PATH": unknown
I tried those things: executing it with exec, removing the quotes inside the CMD of the Dockerfile, but I think I have some basic knowledge missing. I don't understand how to boot this container into a shell so Docker knows the $PATH.
Thank you for your help in advance
Edit:
I found out how to run Docker with a shell:
docker run -it --entrypoint bash gulp-docker-test3
root@8a27dc3a9c85:/usr/src/app# gulp -v
[15:01:53] CLI version 2.0.1
[15:01:53] Local version 4.0.0
root@8a27dc3a9c85:/usr/src/app# gulp
[15:02:38] Using gulpfile /usr/src/app/gulpfile.js
[15:02:38] Starting 'default'...
Default task for gulp 4.0 for docker
[15:02:38] Finished 'default' after 4.28 ms
root@8a27dc3a9c85:/usr/src/app#
It looks like it should work if I can set bash as the default in the Dockerfile.
If I run docker build I will create a new container?
It will execute the contents of the Dockerfile, and create a new image.
You need to docker run (or, rarely, docker create) the image to create a container from it. When you update the Dockerfile or your application source, you need to repeat the docker build step, docker stop && docker rm the existing container, and docker run a new one. Your docker-compose.yml fragment encapsulates this, but note that Docker Compose will delete and recreate a container when it's appropriate.
If I use docker start gulp-docker-test...
It will start a container with that name. That's a separate namespace from the image namespace. The container has to already exist and be stopped (usually from an explicit docker stop command). This is a slightly unusual state to be in.
CMD ["gulp --version" , "gulp"]
This looks for a binary named gulp --version, and runs it with a single parameter gulp. Since you probably don't have a single file named /usr/local/bin/gulp --version (with the spaces and "version" as part of the filename) you get an error.
You only get one CMD in a Dockerfile. (Or one ENTRYPOINT instead, but I tend to find CMD preferable except in a couple of extremely specific cases.) Each "word" you'd type in a shell becomes a separate "word" in the syntax. So you could, for instance, write
CMD ["gulp", "--version"]
Alternatively, if you leave off the brackets, Docker will wrap the CMD text in sh -c ..., so something closer to what you actually wrote is
CMD gulp --version && gulp
In practice you'd usually run build tools like Gulp as part of building the image, and use the CMD to actually start your application.
RUN gulp
CMD ["npm", "start"]
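Applied to the Dockerfile in the question, that could look like the sketch below; it assumes package.json defines a "start" script for your app, which isn't shown in the question:

FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --global gulp-cli
RUN npm install
COPY . .
RUN gulp                 # run the build tool while the image is being built
EXPOSE 3000
CMD ["npm", "start"]     # start the actual application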
First
I will create a new docker-container on my computer with the content of the Dockerfile? If I need to update anything inside it, do I have to run it again so the container is updated?
docker build creates an image (like an ISO). To create a container, you have to start/run this image. A container is a running image, which can differ from the original because you can modify the file system inside while it runs. When you stop and remove the container, all changes are lost. Standard Docker practice is not to store data in containers: if a container produces something valuable, it should be stored outside (consider volumes for that).
Second
CMD ["gulp --version" , "gulp"]
This is incorrect. The JSON notation requires you to put each argument in a separate array element. This is correct:
CMD ["gulp", "--version"]
Conclusion
You create an image with
docker build -t my-image .
You start it (create container) with
docker run --name=my-image-instance my-image
If you need to control the running container, you can use the friendly name my-image-instance or, if you didn't provide one, the container's ID.
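A typical update cycle with these names would then be:

docker build -t my-image .
docker stop my-image-instance
docker rm my-image-instance
docker run --name=my-image-instance my-image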

How to run a docker command in Jenkins Build Execute Shell

I'm new to Jenkins and I have been searching around, but I couldn't find what I was looking for.
I'd like to know how to run a docker command in Jenkins (Build - Execute Shell):
Example: docker run hello-world
I have set Docker Installation to "Install latest from docker.io" in Jenkins' Configure System and have also installed several Docker plugins. However, it still didn't work.
Can anyone help me point out what else I should check or set?
John
One of the following plugins should work fine:
CloudBees Docker Custom Build Environment Plugin
CloudBees Docker Pipeline Plugin
I normally run my builds on slave nodes that have docker pre-installed.
I came across another generic solution. Since I'm no expert at creating a Jenkins plugin out of this, here are the manual steps:
Create/change your Jenkins container (I use Portainer) with the environment variable DOCKER_HOST=tcp://192.168.1.50 (when using the unix socket protocol you also have to mount your Docker socket) and append :/var/jenkins_home/bin to the existing PATH variable
On your Docker host, copy the docker binary into the Jenkins container: docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/
Restart the Jenkins container
Now you can use the docker command from any script or command line. The changes will persist across image updates, since /var/jenkins_home is typically kept in a volume.
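After that, a Build - Execute Shell step can call docker directly, for example:

# sanity check that the client reaches the daemon at DOCKER_HOST
docker version
docker run --rm hello-world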
