Bitbucket Pipeline: Store .pdf artifact after pdfLaTeX

I am trying to use Bitbucket's Pipelines feature for a LaTeX git repository. I just want to build my .tex file and store the resulting .pdf artifact in the repository's Downloads folder. I found some helpful guides here and a similar answer on SO.
This is my pipeline.yml
options:
  docker: true

image: kaspersoerensen/latex-docker

pipelines:
  default:
    - step:
        script:
          - pdflatex --shell-escape TEST.tex # Build once
          #- bibtex TEST # Build bibtex
          - pdflatex -shell-escape TEST.tex # Build again
          - pdflatex -shell-escape TEST.tex # And last time
          - curl -X POST --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"TEST.pdf"
To set up curl I used this official guideline from Atlassian.
Everything seems to be okay and the build is successful without any errors.
Problem: My repository download folder does not contain any artifacts.
EDIT
Build output:
Build setup
4s
pdflatex --shell-escape TEST.tex
1s
pdflatex -shell-escape TEST.tex
1s
pdflatex -shell-escape TEST.tex
1s
curl -X POST --user "${BB_AUTH_STRING}" "https://api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"TEST.pdf"
<1s
Build teardown
Docker output:
time="2019-12-20T14:56:23.511294820Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
time="2019-12-20T14:56:23.511451739Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
time="2019-12-20T14:56:23.544238141Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/165536.165536/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2019-12-20T14:56:23.562813661Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
time="2019-12-20T14:56:23.563473734Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/165536.165536/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
time="2019-12-20T14:56:23.563610398Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/165536.165536/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
time="2019-12-20T14:56:23.563626718Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
time="2019-12-20T14:56:23.563636848Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/165536.165536/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
time="2019-12-20T14:56:23.695604523Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nbridge 167936 1 br_netfilter\nstp 16384 1 bridge\nllc 16384 2 bridge,stp\nip: can't find device 'br_netfilter'\nbr_netfilter 24576 0 \nbridge 167936 1 br_netfilter\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
time="2019-12-20T14:56:23.706219675Z" level=warning msg="Running modprobe nf_nat failed with message: `ip: can't find device 'nf_nat'\nnf_nat_ipv6 16384 1 ip6table_nat\nnf_nat_ipv4 16384 2 ipt_MASQUERADE,iptable_nat\nnf_nat 32768 3 nf_nat_ipv6,xt_nat,nf_nat_ipv4\nnf_conntrack 139264 8 nf_nat_ipv6,nf_conntrack_netlink,xt_nat,xt_conntrack,ip_vs,ipt_MASQUERADE,nf_nat_ipv4,nf_nat\nlibcrc32c 16384 3 ip_vs,nf_nat,nf_conntrack\nmodprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1"
time="2019-12-20T14:56:23.716309547Z" level=warning msg="Running modprobe xt_conntrack failed with message: `ip: can't find device 'xt_conntrack'\nxt_conntrack 16384 42 \nnf_conntrack 139264 8 nf_nat_ipv6,nf_conntrack_netlink,xt_nat,xt_conntrack,ip_vs,ipt_MASQUERADE,nf_nat_ipv4,nf_nat\nmodprobe: can't change directory to '/lib/modules': No such file or directory`, error: exit status 1"

I'd recommend using the bitbucket-upload-file pipe instead of sending API requests with curl to upload files to Bitbucket. The pipe will provide better feedback if something is wrong. Here is an example of how you could do that:
options:
  docker: true

image: kaspersoerensen/latex-docker

pipelines:
  default:
    - step:
        name: Build the pdf
        script:
          - pdflatex --shell-escape TEST.tex
        artifacts:
          - TEST.pdf
    - step:
        name: Upload
        script:
          - pipe: atlassian/bitbucket-upload-file:0.1.2
            variables:
              BITBUCKET_USERNAME: $BITBUCKET_USERNAME
              BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
              FILENAME: 'TEST.pdf'
Note that I'm using two separate steps to build and upload, which is considered a good way to structure your pipelines.

Thanks to everyone who spent time answering my question.
I have now found a working solution:
In general, follow this nicely written guide!
Be sure that you use two-factor authentication!
BB_AUTH_STRING must be YOUR_USERNAME:YOUR_APP_PASSWORD (don't use the App Password label! This was my fault.)
Use the following .yml code:
image: dme26/latex-builder:latest

pipelines:
  default:
    - step:
        script:
          - pdflatex TEST.tex; pdflatex TEST.tex
          - ls
          - curl -X POST "https://${BB_AUTH_STRING}@api.bitbucket.org/2.0/repositories/${BITBUCKET_REPO_OWNER}/${BITBUCKET_REPO_SLUG}/downloads" --form files=@"TEST.pdf"


ERROR: (gcloud.auth.activate-service-account) The .json key file is not in a valid format -- via impersonate-service-account

Is it possible to use short-lived credentials, with docker-compose, to run a bash scripted gcloud command?
Related posts that I attempted to use, but they are 5+ years old and I've been led to believe that the gcloud auth command has changed since then:
ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid
gcloud auth activate-service-account [ERROR] Please ensure provided key file is valid
Setup
There is a lot going on, but I've attempted to abbreviate to the relevant parts.
Makefile
auth: ## commands for short lived auth
	@gcloud config set project ${GCP_PROJECT}
	@gcloud auth application-default login --impersonate-service-account="inst-dataflow-svc@${GCP_PROJECT}.iam.gserviceaccount.com"
	@gcloud auth configure-docker $(REGION)-docker.pkg.dev

gcloud-flex-build: ## build & push base docker image
	docker-compose build gcloud-build-flex-local
	docker-compose run gcloud-build-flex-local
docker-compose.yaml
version: '3.4'

services:
  gcloud-build-flex-local:
    build:
      dockerfile: docker/gcloud-build-flex-template.dockerfile
      context: .
    image: us-central1-docker.pkg.dev/gcp-project/dataflow-docker-registry/local-build/pubsub-to-gbq-build-flex-template
    volumes:
      - type: bind
        source: ${HOME}/.config/gcloud/
        target: /tmp
docker/gcloud-build-flex-template.dockerfile
FROM gcr.io/google.com/cloudsdktool/cloud-sdk:408.0.1
COPY docker/scripts/gcloud-build-flex-template.sh /app/gcloud-build-flex-template.sh
COPY dataflow/pubsub-to-gbq/pubsub-to-gbq-metadata /app/pubsub-to-gbq-metadata
WORKDIR /app
ENTRYPOINT "/app/gcloud-build-flex-template.sh"
/app/gcloud-build-flex-template.sh
#!/bin/bash
set -euo pipefail
SERVICE_ACCOUNT_EMAIL=inst-dataflow-svc@gcp-project.iam.gserviceaccount.com
GCP_PROJECT=gcp-project
export GOOGLE_APPLICATION_CREDENTIALS=/tmp/application_default_credentials.json
# debugging
echo $GOOGLE_APPLICATION_CREDENTIALS
ls -lah /tmp/
cat $GOOGLE_APPLICATION_CREDENTIALS
gcloud auth activate-service-account $SERVICE_ACCOUNT_EMAIL --project=$GCP_PROJECT --key-file=$GOOGLE_APPLICATION_CREDENTIALS
Execution
make auth
make gcloud-flex-build
Error
ERROR: (gcloud.auth.activate-service-account) The .json key file is not in a valid format.
make: *** [gcloud-flex-build] Error 1
stdout (abbreviated)
docker-compose build gcloud-build-flex-local
[+] Building 0.4s (9/9) FINISHED
...
docker-compose run gcloud-build-flex-local
drwxr-xr-x 17 root root 544 Dec 30 10:36 .
drwxr-xr-x 1 root root 4.0K Dec 30 10:40 ..
-rw------- 1 root root 591 Dec 30 10:36 application_default_credentials.json
{
  "delegates": [],
  "service_account_impersonation_url": "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/inst-dataflow-svc@gcp-project.iam.gserviceaccount.com:generateAccessToken",
  "source_credentials": {
    "client_id": "alphanumeric string .apps.googleusercontent.com",
    "client_secret": "alphanumeric string",
    "refresh_token": "alphanumeric string",
    "type": "authorized_user"
  },
  "type": "impersonated_service_account"
}
I can make it work via docker run by spoofing the credentials to include only the "source_credentials" object, passed in as a volume, but this same trick doesn't seem to work with docker-compose running a script inside a container...
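For reference, the workaround I mean is roughly the sketch below; the temp file name and the jq filter are mine, not part of the setup above:

# Keep only the "source_credentials" object so gcloud sees a plain authorized_user credential
jq '.source_credentials' ${HOME}/.config/gcloud/application_default_credentials.json > /tmp/source_creds.json

# Mount it and point ADC at it; note this authenticates as me, not as the impersonated service account
docker run --rm \
  -v /tmp/source_creds.json:/tmp/application_default_credentials.json \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/application_default_credentials.json \
  gcr.io/google.com/cloudsdktool/cloud-sdk:408.0.1 \
  gcloud auth application-default print-access-token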
There is a similar type of configuration mentioned in this document. This involves three major steps:
1. Create short-lived credentials for your service account and download your service account keys (see the sketch after this list).
2. Create the configuration files for bringing your Docker environment up. Use the above credential files to grant the required permissions.
3. Once you have all the configuration files in place, use your docker-compose commands to bring your environment up.
Follow this documentation for more details.
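As a rough illustration of the first step, here is one way to mint a short-lived token via impersonation and hand it to the container. The service account and compose service names are taken from the question; passing the token via CLOUDSDK_AUTH_ACCESS_TOKEN is my assumption (supported by recent gcloud versions), not something prescribed by the linked document:

# Mint a short-lived access token for the service account (requires the Service Account Token Creator role)
SA_EMAIL="inst-dataflow-svc@gcp-project.iam.gserviceaccount.com"
TOKEN=$(gcloud auth print-access-token --impersonate-service-account="$SA_EMAIL")

# Hand the token to gcloud inside the container instead of mounting a key file
docker-compose run -e CLOUDSDK_AUTH_ACCESS_TOKEN="$TOKEN" gcloud-build-flex-local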

Failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB) in quic golang appengine

I am using Google Cloud App Engine to deploy my quic-go server, but I am getting the error:
failed to sufficiently increase receive buffer size (was: 208 kiB, wanted: 2048 kiB, got: 416 kiB).
I am using an app.yaml file to build a Dockerfile, which is as follows:
FROM golang:1.18.3
RUN mkdir /app
ADD . /app
WORKDIR /app
RUN apt-get update && apt-get install -y ffmpeg
CMD sudo --sysctl net.core.rmem_default=15000000
CMD sudo --sysctl net.core.rmem_max=15000000
RUN go build -x server.go
ENV GCS_BUCKETNAME xyz
ENV AI_CLIENT_SSL_CERT /path to cert
ENV AI_CLIENT_SSL_KEY /path to key
ENV GCP_BUCKET_SERVICE_ACCOUNT_CREDS /path to google cloud service account credential
CMD [ "./server" ]
This is my app.yaml
runtime: custom
env: flex

env_variables:
  GCS_BUCKETNAME: "xyz"
  AI_CLIENT_SSL_CERT: "./path to cert"
  AI_CLIENT_SSL_KEY: "./path to key"
  GCP_BUCKET_SERVICE_ACCOUNT_CREDS: "./path to google cloud credential.json file"

service: streaming-app

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 20
  cpu_utilization:
    target_utilization: 0.85
  target_concurrent_requests: 100
Any sort of help will be appreciated.
sysctl is an OS-level config that doesn't fit App Engine's principal use case, and App Engine does not currently have any way of configuring the underlying sysctl settings. I believe Google Kubernetes Engine may be a better fit for running that server, as App Engine environments have a limited set of configurable settings.
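For context, the warning means quic-go asked the kernel for a roughly 2 MiB UDP receive buffer and was capped well below that. On a host where you do control the kernel (a plain VM, a GKE node, or a privileged container), the usual remedy is along these lines; the numbers simply reuse the values from the question's Dockerfile:

# Raise the UDP receive buffer limits so quic-go can allocate the buffer it asks for
sudo sysctl -w net.core.rmem_max=15000000
sudo sysctl -w net.core.rmem_default=15000000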
can you tell me the scenarios when this file is not present in the kernel?
I'm not sure about the scenarios, as I have little experience with the kernel. That seems like a different question from the original post; you can raise a new Stack Overflow question about it.

go application build with bazel can't link when running inside container

I am trying to containerize my application build; however, when running the build (which uses Bazel with bazel-gazelle) inside a container, I get this error:
$ bazel run --spawn_strategy=local //:gazelle --verbose_failures
INFO: Analyzed target //:gazelle (0 packages loaded, 0 targets configured).
INFO: Found 1 target...
ERROR: /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/BUILD.bazel:43:15: GoToolchainBinary external/go_sdk/builder [for host] failed: (Exit 1): go failed: error executing command
(cd /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/execroot/__main__ && \
exec env - \
GOROOT_FINAL=GOROOT \
external/go_sdk/bin/go tool link -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a)
# Configuration: e0f1106e28100863b4221c55fca6feb935acec078da5376e291cf644e275dae5
# Execution platform: @local_config_platform//:host
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument
Target //:gazelle failed to build
INFO: Elapsed time: 2.302s, Critical Path: 0.35s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully
FAILED: Build did NOT complete successfully
I tried to run it standalone:
$ /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/bin/go tool link -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument
and still had no success.
I've never had this kind of link problem before, and the linker doesn't provide much more information. I tried to install every package I could think of and still had no luck.
For context:
Running Ubuntu 20.04 LTS
Docker 20.10.9
Bazel 4.2.2
Rules GO v0.31.0
Bazel Gazelle v0.25.0
I also tried to run it under strace, though I don't think I'm skilled enough to extract meaningful information from the tool's output.
EDIT
For more context:
$ /home/workstation/.cache/bazel/_bazel_workstation/fb227af4c7b6aa39cc5b15d7fd9b737a/external/go_sdk/bin/go tool link -v -o bazel-out/host/bin/external/go_sdk/builder bazel-out/host/bin/external/go_sdk/builder.a
HEADER = -H5 -T0x401000 -R0x1000
searching for runtime.a in /opt/go/pkg/linux_amd64/runtime.a
/opt/go/pkg/tool/linux_amd64/link: mapping output file failed: invalid argument

Unable to get heroku started on cloud9 ide

I am running the command curl https://cli-assets.heroku.com/install-ubuntu.sh | sh, which throws the following error:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 1232 100 1232 0 0 5133 0 --:--:-- --:--:-- --:--:-- 5112
This script requires superuser access to install apt packages.
You will be prompted for your password by sudo.
+ dpkg -s apt-transport-https
+ echo ''
sh: line 4: /etc/apt/sources.list.d/heroku.list: No such file or directory
I also ran sudo snap install --classic heroku which returned sudo: snap: command not found.
Then I ran sudo apt install snapd which returned the following -
apt: invalid flag: install
Usage: apt <apt and javac options> <source files>
where apt options include:
-classpath <path> Specify where to find user class files and annotation processor factories
-cp <path> Specify where to find user class files and annotation processor factories
-d <path> Specify where to place processor and javac generated class files
-s <path> Specify where to place processor generated source files
-source <release> Provide source compatibility with specified release
-version Version information
-help Print a synopsis of standard options; use javac -help for more options
-X Print a synopsis of nonstandard options
-J<flag> Pass <flag> directly to the runtime system
-A[key[=value]] Options to pass to annotation processors
-nocompile Do not compile source files to class files
-print Print out textual representation of specified types
-factorypath <path> Specify where to find annotation processor factories
-factory <class> Name of AnnotationProcessorFactory to use; bypasses default discovery process
See javac -help for information on javac options.
warning: The apt tool and its associated API are planned to be
removed in the next major JDK release. These features have been
superseded by javac and the standardized annotation processing API,
javax.annotation.processing and javax.lang.model. Users are
recommended to migrate to the annotation processing features of
javac; see the javac man page for more information.
Finally, I ran wget 0- wget https://toolbelt.heroku.com/install-ubuntu.sh | sh which returned
--2020-09-10 18:15:53-- http://0-/
Resolving 0- (0-)... failed: Name or service not known.
wget: unable to resolve host address ‘0-’
--2020-09-10 18:15:54-- http://wget/
Resolving wget (wget)... failed: Name or service not known.
wget: unable to resolve host address ‘wget’
--2020-09-10 18:15:54-- https://toolbelt.heroku.com/install-ubuntu.sh
Resolving toolbelt.heroku.com (toolbelt.heroku.com)... 54.164.74.108, 107.23.162.152, 34.194.108.77, ...
Connecting to toolbelt.heroku.com (toolbelt.heroku.com)|54.164.74.108|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 719 [text/plain]
Saving to: ‘install-ubuntu.sh’
install-ubuntu.sh 100%[=============================================================>] 719 --.-KB/s in 0s
2020-09-10 18:15:54 (105 MB/s) - ‘install-ubuntu.sh’ saved [719/719]
FINISHED --2020-09-10 18:15:54--
Total wall clock time: 0.4s
Downloaded: 1 files, 719 in 0s (105 MB/s)
So then, I ran bash install-ubuntu.sh which returned -
This script requires superuser access to install apt packages.
You will be prompted for your password by sudo.
sh: line 3: /etc/apt/sources.list.d/heroku.list: No such file or directory
sh: line 6: apt-key: command not found
--2020-09-10 18:16:51-- https://toolbelt.heroku.com/apt/release.key
Resolving toolbelt.heroku.com (toolbelt.heroku.com)... 54.145.36.98, 54.164.74.108, 107.23.162.152, ...
Connecting to toolbelt.heroku.com (toolbelt.heroku.com)|54.145.36.98|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1737 (1.7K) [application/octet-stream]
Saving to: ‘STDOUT’
- 0%[ ] 0 --.-KB/s in 0s
Cannot write to ‘-’ (Success).
sh: line 9: apt-get: command not found
sh: line 12: apt-get: command not found
I am taking an online course on Upskill and am on video #125. Please advise how to progress.
Thanks for your time and help.
I have a gist I use specifically for setting up Rails on Cloud9 (though I haven't updated it for Rails 6 yet). I recommend you read through it twice before you try to follow it. You may need to go out of order in some cases, as the exact approach depends on whether you are cloning an existing repo or building a new app.
https://gist.github.com/MyklClason/791d6b14606bc56e72eba2995aab8e76
You probably don't need snap.
Also, here are some useful bash aliases for Cloud9:
https://gist.github.com/MyklClason/d71a39ace28b9ec9f0ad
As for your actual issue: the Heroku Toolbelt is obsolete; use this instead:
wget -qO- https://cli-assets.heroku.com/install-ubuntu.sh | sh
Also, it's often best to just look online and check how to install the Heroku CLI (or anything, really) for your OS using the official documentation, though that may not work for older setups. However, Heroku is something where you basically have to use the newest version, otherwise you are going to run into problems.
If you didn't figure this out...
nvm i v8
Followed by...
npm install -g heroku
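Whichever install route works for you, a quick sanity check that the CLI is actually on your PATH (not part of the course material, just a check):

heroku --version    # should print something like heroku/7.x.x linux-x64 node-v12.x.x
heroku login -i     # interactive login; plain `heroku login` tries to open a browser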

How to analyze image build that fails silently in CI tool?

My Docker image is failing to build in GitLab CI, and it fails silently without giving any errors to work with. I can build the image locally with no problem whatsoever, so the problem is in the CI environment: something that is not obvious causes the build to fail. After doing some research I've learned that the best thing to do is SSH into the CI server and "poke around" to find out what's happening. In particular, I've learned that I can get a log of the last layer built before the failure to get insight into why it might be failing. However, GitLab doesn't support a direct SSH connection into the CI server; it only supports fixed SSH commands executed towards the server from the build environment (.gitlab-ci.yml), which isn't very helpful because I need SSH to access the build layers of the image.
What are my other options as to how can I debug / analyze an image during build in CI ?
Any feedback much appreciated.
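For reference, inspecting the last built layer usually looks something like the sketch below when you can reach the Docker daemon that ran the build. The layer ID is whatever the last "---> <id>" line in the build output shows (2d94e8e8fb6c in the job log further down), and forcing the legacy builder is only needed because BuildKit does not print per-layer image IDs:

# Rebuild with the legacy builder so every layer ID is printed
DOCKER_BUILDKIT=0 docker build -f Dockerfile-prod -t client-debug .

# Start a shell in the last successfully built layer and poke around
docker run --rm -it 2d94e8e8fb6c sh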
Dockerfile:
###########
# BUILDER #
###########
# base image
FROM node:11.12.0-alpine as builder
# set working directory
WORKDIR /usr/src/app
RUN apk add --no-cache --virtual .gyp python make g++
# install app dependencies
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
COPY package-lock.json /usr/src/app/package-lock.json
RUN npm install --no-optional
RUN npm install react-scripts@2.1.8 -g --silent --no-optional
# set environment variables
ARG REACT_APP_USERS_SERVICE_URL
ENV REACT_APP_USERS_SERVICE_URL $REACT_APP_USERS_SERVICE_URL
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
# create build
COPY . /usr/src/app
RUN npm run build
#########
# FINAL #
#########
# base image
FROM nginx:1.15.9-alpine
# update nginx conf
RUN rm -rf /etc/nginx/conf.d
COPY conf /etc/nginx
# copy static files
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
# expose port
EXPOSE 80
# run nginx
CMD ["nginx", "-g", "daemon off;"]
.gitlab-ci.yml file:
...
...
after_script:
  - bash ./docker-push.sh
  - docker-compose down
docker-push.sh script that builds the image for pushing into ECR on AWS:
echo "building the client image ..."
docker -D build $CLIENT_REPO -t $CLIENT:$COMMIT -f Dockerfile-prod --build-arg REACT_APP_USERS_SERVICE_URL="" # this line is failing
if [ $? -ne 0 ]; then
  echo "Failure. Exiting now..."
  exit 1
fi
docker -D tag $CLIENT:$COMMIT $REPO/$CLIENT:$TAG
docker -D push $REPO/$CLIENT:$TAG
docker build $USERS_REPO -t $USERS:$COMMIT -f Dockerfile-$DOCKER_ENV
docker tag $USERS:$COMMIT $REPO/$USERS:$TAG
docker push $REPO/$USERS:$TAG
docker build $USERS_DB_REPO -t $USERS_DB:$COMMIT -f Dockerfile
docker tag $USERS_DB:$COMMIT $REPO/$USERS_DB:$TAG
docker push $REPO/$USERS_DB:$TAG
docker build $SWAGGER_REPO -t $SWAGGER:$COMMIT -f Dockerfile-$DOCKER_ENV
docker tag $SWAGGER:$COMMIT $REPO/$SWAGGER:$TAG
docker push $REPO/$SWAGGER:$TAG
job log from gitlab ci (relevant part only):
Login Succeeded
building the client image ...
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: .dockerignore"
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: Dockerfile"
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: Dockerfile-prod"
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: Dockerfile-stage"
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: .dockerignore"
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: Dockerfile-prod"
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: Dockerfile"
time="2020-04-14T08:54:23Z" level=debug msg="Skipping excluded path: Dockerfile-stage"
Step 1/25 : FROM node:11.12.0-alpine as builder
---> 09084e4ff58d
Step 2/25 : WORKDIR /usr/src/app
---> Using cache
---> 9c6639a8a785
Step 3/25 : RUN apk add --no-cache --virtual .gyp python make g++
---> Using cache
---> 0d5320ee514b
Step 4/25 : ENV PATH /usr/src/app/node_modules/.bin:$PATH
---> Using cache
---> c041f8c64b34
Step 5/25 : COPY package.json /usr/src/app/package.json
---> 02d18d67a517
Step 6/25 : COPY package-lock.json /usr/src/app/package-lock.json
---> 2d94e8e8fb6c
Step 7/25 : RUN npm install --no-optional
---> Running in 59660215041e
> cypress@4.1.0 postinstall /usr/src/app/node_modules/cypress
> node index.js --exec install
Installing Cypress (version: 4.1.0)
[08:55:20] Downloading Cypress [started]
[08:55:20] Downloading Cypress 0% 0s [title changed]
[08:55:20] Downloading Cypress 2% 5s [title changed]
...
...
[08:55:39] Unzipping Cypress 9% 167s [title changed]
[08:55:39] Unzipping Cypress 100% 0s [title changed]
[08:55:39] Unzipped Cypress [title changed]
[08:55:39] Unzipped Cypress [completed]
[08:55:39] Finishing Installation [started]
[08:55:40] Finished Installation /root/.cache/Cypress/4.1.0 [title changed]
[08:55:40] Finished Installation /root/.cache/Cypress/4.1.0 [completed]
You can now open Cypress by running: node_modules/.bin/cypress open
https://on.cypress.io/installing-cypress
added 2034 packages from 768 contributors and audited 38602 packages in 77.201s
found 1073 vulnerabilities (1058 low, 14 moderate, 1 high)
run `npm audit fix` to fix them, or `npm audit` for details
Saving cache
00:02
Uploading artifacts for successful job
00:02
Job succeeded
