Bitbucket Pipeline doesn't show the full output of Yarn build - yarnpkg

I have a bitbucket pipeline with a yarn build command. The problem is that the output always misses the last lines:
+ yarn build
yarn run v1.22.19
$ react-scripts build
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
It is supposed to have more lines after File sizes after gzip:.
My bitbucket-pipelines.yml:
image: node:latest

pipelines:
  branches:
    staging:
      - step:
          name: Build
          caches:
            - node
          script:
            - yarn install
            - yarn build
The biggest issue is when yarn build fails: because I don't see the last lines, I can't tell why it failed. When this happens, I have to run yarn build manually on my local machine to see the full logs, which is not a great solution.
When I run the same yarn build command on my machine, I get the full logs:
❯ yarn build
yarn run v1.22.19
$ react-scripts build
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
383.56 kB build/static/js/main.a23388a2.js
13.39 kB build/static/css/main.06459f93.css
1.79 kB build/static/js/787.44808443.chunk.js
The project was built assuming it is hosted at /.
You can control this with the homepage field in your package.json.
The build folder is ready to be deployed.
You may serve it with a static server:
yarn global add serve
serve -s build
Find out more about deployment here:
https://cra.link/deployment
Done in 19.59s.
I found that the output is truncated on many other commands, not just yarn build. I created a simple echo "Hello, World!" step that is executed remotely on a server via atlassian/ssh-run, and even the "Hello, World!" message was truncated.

Solution: refresh the page.
tl;dr:
This is a three-year-old bug (and counting) in the Bitbucket interface:
https://jira.atlassian.com/browse/BCLOUD-18574
It seems their interface doesn't stream the results properly in real time, but the logs are saved correctly and can be retrieved by reloading the page.

Related

Share a file between two workflows in CircleCI

In our repo, build and deploy are two different workflows.
In build we call lerna to check for changed packages and save its output to a file that is persisted to the current workspace.
check_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - attach_workspace:
        at: ~/project
    - run:
        command: npx lerna changed > changed.tmp
    - persist_to_workspace:
        root: ./
        paths:
          - changed.tmp
I'd like to pass the exact same file from build workflow to deploy workflow and access it in another job. How do I do that?
read_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - attach_workspace:
        at: ~/project
    - run:
        command: |
          echo 'Reading changed.tmp file'
          cat changed.tmp
According to this blog post
Unlike caching, workspaces are not shared between runs as they no
longer exist once a workflow is complete
it feels that caching would be the only option.
But according to the CircleCI documentation, my case doesn't fit their cache definition:
Use the cache to store data that makes your job faster, but, in the
case of a cache miss or zero cache restore, the job still runs
successfully. For example, you might cache NPM package directories
(known as node_modules).
I think you can totally use caching here. Make sure you choose your key template(s) wisely.
The caveat to keep in mind is that (unlike the job level, where you can use the requires key) there's no native way to execute workflows sequentially, although you could consider using an orb for that; for example the roopakv/swissknife orb.
So you'll need to make sure that the job (the one that needs the file) in the deploy workflow doesn't reach its restore_cache step until the save_cache in the other job has happened.
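For illustration, here is a minimal sketch of that approach, reusing the job names from the question; the key template and the specific steps are my own choices, not something from the original config. Pinning the key to the revision means the deploy workflow restores exactly what the build workflow saved for that commit:
check_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - run:
        command: npx lerna changed > changed.tmp
    - save_cache:
        # key pinned to the commit so the deploy workflow restores this exact file
        key: lerna-changed-{{ .Revision }}
        paths:
          - changed.tmp

read_changes:
  working_directory: ~/project
  executor: node
  steps:
    - checkout
    - restore_cache:
        keys:
          - lerna-changed-{{ .Revision }}
    - run:
        command: |
          echo 'Reading changed.tmp file'
          cat changed.tmp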

Cypress binary is missing and Gitlab CI pipeline

I'm trying to integrate Cypress testing into a GitLab pipeline.
I've tried about 10 different configurations, which all fail. I've included what I think are the relevant portions of the gitlab.yml file, as well as the screenshot of the error on GitLab.
Thanks for any help
variables:
  GIT_SUBMODULE_STRATEGY: recursive

cache:
  paths:
    - src/ui/node_modules/
    - /root/.cache/Cypress/   # added this, also have tried src/ui/cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    - yarn run runCypressHeadless
Each job gets its own separate environment. Therefore, you need to install your dependencies in each job. Add your yarn install command to the ui_test job.
The reason why your cache: did not restore to the job from the previous stage is because caches are per job by default (e.g. caches are restored from previous pipelines that ran the same job). If you want subsequent jobs in the same pipeline to use the cache, set the cache:key: to something like $CI_COMMIT_SHA or use cache:key:files: to use a file key, like your lockfile(s).
Also, you can only cache paths in the workspace. So you won't be able to cache/restore /root/.cache/... -- instead you should change the cache location to somewhere in the workspace.
For additional reference, see: caching in GitLab CI and caching NodeJS dependencies.
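Putting those points together, a rough sketch of an adjusted config might look like the following; the lockfile path and the Cypress cache location are assumptions based on the layout above, not a confirmed fix:
variables:
  GIT_SUBMODULE_STRATEGY: recursive
  # keep the Cypress binary cache inside the workspace so it can be cached
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/src/ui/.cache/Cypress"

cache:
  key:
    files:
      - src/ui/yarn.lock
  paths:
    - src/ui/node_modules/
    - src/ui/.cache/Cypress/

build_ui:
  image: node:16.14.2
  stage: build
  script:
    - cd src/ui
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn

ui_test:
  image: node:16.14.2
  stage: test
  needs: [build_ui]
  script:
    - cd src/ui
    # install again here: each job runs in its own environment
    - yarn install --pure-lockfile --prefer-offline --cache-folder .yarn
    - yarn run runCypressHeadless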

CircleCI setup with Cypress and React-testing-library

I would like to use CircleCi to run my Cypress and react-testing-library tests because I want to test my react app.
On local env I would run (which work fine):
yarn run test to execute my react-testing-library tests
yarn cypress run to execute Cypress test
Now, I have found resources on how to write the CircleCI config.yaml, but nothing has worked. For reference: link1, link2, link3, link4, link5.
Some of the tests failed due to: error cypress@7.1.0: The engine "node" is incompatible with this module. Expected version ">=12.0.0". Got "10.24.1", or wrong caching, or something else. After 20 runs I am clueless; can someone help me, please?
As I was browsing resources, I thought this should work for Cypress tests, but it did not:
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
workflows:
  build:
    jobs:
      - cypress/install:
          build: yarn run build   # run a custom app build step
          yarn: true
      - cypress/run:
          requires:
            - cypress/install
          parallel: true          # split all specs across machines
          parallelism: 4          # use 4 CircleCI machines to finish quickly
          yarn: true
          group: 'all tests'      # name this group "all tests" on the dashboard
          start: yarn start       # start server before running tests
For those who search for this issue later, here is how I overcame the errors:
error cypress@7.1.0: The engine "node" is incompatible with this module. Expected version ">=12.0.0". Got "10.24.1" - by not using the orb and instead using workflows -> jobs -> steps
fsevents not accessible from jest-haste-map - by using yarn instead of npm
Lastly, some of your errors may come from your app (at least in my case a React app) reading configuration from a .env file that is not committed to GitHub, so it is not checked out into the CircleCI Docker container, and the app therefore fails during the tests.
The working solution that I am using is:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: cypress/base:12
        environment:
          # this enables colors in the output
          TERM: xterm
    working_directory: ~/portalo
    steps:
      - checkout
      - run:
          name: Install project dependencies
          command: yarn install --frozen-lockfile
      - run:
          name: Compile and start development server on port 3000
          command: yarn startOnPort3000Linux
          background: true
      - run:
          name: Wait for development server to start
          command: 'yarn wait-on http://localhost:3000'
      - run:
          name: Run routing tests with react-testing-library via yarn test
          command: 'yarn test ~/portalo/src/tests/react-testing-library/routing.test.tsx'
      - run:
          name: Run e2e tests with Cypress via cypress run
          command: $(yarn bin)/cypress run
workflows:
  version: 2.1
  build_and_test:
    jobs:
      - run_tests
Note: wait-on had to be added; in my case, via yarn add wait-on.
Note 2: All steps have to be in a single job so that all installed packages are present. This could be tweaked by using save/restore cache.
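As a sketch of the save/restore cache tweak mentioned in Note 2 (the key template and cached paths are assumptions, not part of the original config), the install step could be wrapped like this:
steps:
  - checkout
  - restore_cache:
      keys:
        - deps-{{ checksum "yarn.lock" }}
  - run:
      name: Install project dependencies
      command: yarn install --frozen-lockfile
  - save_cache:
      key: deps-{{ checksum "yarn.lock" }}
      paths:
        - node_modules
        # Cypress downloads its binary here by default
        - ~/.cache/Cypress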

Gitlab CI : how to cache node_modules from a prebuilt image?

The situation is this:
I'm running Cypress tests in GitLab CI (launched by vue-cli). To speed up the execution, I built a Docker image that contains the necessary dependencies.
How can I cache node_modules from the prebuilt image to use it in the test job?
Currently I'm using an awful (but working) solution:
testsE2e:
  image: path/to/prebuiltImg
  stage: tests
  script:
    - ln -s /node_modules/ /builds/path/to/prebuiltImg/node_modules
    - yarn test:e2e
    - yarn test:e2e:report
But I think there must be a cleaner way using the Gitlab CI cache.
I've been testing:
cacheE2eDeps:
  image: path/to/prebuiltImg
  stage: dependencies
  cache:
    key: e2eDeps
    paths:
      - node_modules/
  script:
    - find / -name node_modules   # check that node_modules files are there
    - echo "Caching e2e test dependencies"

testsE2e:
  image: path/to/prebuiltImg
  stage: tests
  cache:
    key: e2eDeps
  script:
    - yarn test:e2e
    - yarn test:e2e:report
But the job cacheE2eDeps displays a "WARNING: node_modules/: no matching files" error.
How can I do this successfully? The GitLab documentation doesn't really talk about caching from a prebuilt image...
The Dockerfile used to build the image:
FROM cypress/browsers:node13.8.0-chrome81-ff75
COPY . .
RUN yarn install
There is no documentation for caching data from prebuilt images, because it's simply not done. The dependencies are already available in the image, so why cache them in the first place? It would only lead to unnecessary data duplication.
Also, you seem to operate under the impression that cache should be used to share data between jobs, but its primary use case is sharing data between different runs of the same job. Sharing data between jobs should be done using artifacts.
In your case you can use cache instead of prebuilt image, like so:
variables:
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

testsE2e:
  image: cypress/browsers:node13.8.0-chrome81-ff75
  stage: tests
  cache:
    key: "e2eDeps"
    paths:
      - node_modules/
      - cache/Cypress/
  script:
    - yarn install
    - yarn test:e2e
    - yarn test:e2e:report
The first time the above job is run, it’ll install dependencies from scratch, but the next time it’ll fetch them from the runner cache. The caveat is that unless all runners that run this job share cache, each time you run it on a new runner it’ll install the dependencies from scratch.
Here’s the documentation about using yarn with GitLab CI.
Edit:
To elaborate on using cache vs artifacts: artifacts are meant both for storing job output (e.g. to manually download it later) and for passing the results of one job to another job in a subsequent stage, while cache is meant to speed up job execution by preserving files that the job would otherwise need to download from the internet. See the GitLab documentation for details.
The contents of the node_modules directory obviously fit into the second category.
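For completeness, here is a minimal sketch of the artifacts approach mentioned above; the job names and paths are made up purely for illustration:
build:
  stage: build
  script:
    - yarn install
    - yarn build
  artifacts:
    paths:
      - dist/

test:
  stage: test
  script:
    # artifacts from jobs in earlier stages are downloaded automatically
    - ls dist/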

AWS Code Pipeline deploy results in a 404

I am trying to create a pipeline for an existing application. It is a React/Java Spring Boot application. It usually gets bundled into a single war file and uploaded to Elastic Beanstalk. I created my CodeBuild project, and when I run it manually it generates a war file that I can then upload to Elastic Beanstalk, and everything works correctly. The buildspec for that is below:
version: 0.2
phases:
  install:
    commands:
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn -Pprod package -X
  post_build:
    commands:
      - echo Build completed on `date`
      - mv target/cogcincinnati-0.0.1-SNAPSHOT.war cogcincinnati-0.0.1-SNAPSHOT.war
artifacts:
  files:
    - cogcincinnati-0.0.1-SNAPSHOT.war
When I run this build step in my pipeline, it generates a zip file that gets dropped onto S3. My deploy step takes that build artifact and sends it to Elastic Beanstalk. Elastic Beanstalk does not give me any errors, but when I navigate to my URL, I get a 404.
I have tried uploading the zip directly to Elastic Beanstalk and I get the same result. I have unzipped the file and it does appear to have all of my project files.
When I look at the server logs, I do not see any errors. I don't understand why CodeBuild appears to generate a war file when I run it manually, but a zip when executed in CodePipeline.
Change the artifact war file name to ROOT.war; this will resolve your problem. Your application is actually deployed successfully, but under a different path. This is built-in Tomcat behavior: naming the war ROOT.war makes it serve the application at '/'.
So the updated buildspec.yml will be:
version: 0.2
phases:
  install:
    commands:
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn -Pprod package -X
  post_build:
    commands:
      - echo Build completed on `date`
      - mv target/cogcincinnati-0.0.1-SNAPSHOT.war ROOT.war
artifacts:
  files:
    - ROOT.war
It seems your application is failing; you should review the logs from the Beanstalk environment, especially:
"tomcat8/catalina.out"
"tomcat8/catalina.[date].log"
[1] Viewing logs from Amazon EC2 instances in your Elastic Beanstalk environment - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html
For more details about using Tomcat platform on EB environment, you can refer to this document:
- Using the Elastic Beanstalk Tomcat platform - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform.html
About the folder structuring in your project, please refer to this document:
- Structuring your project folder - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform-directorystructure.html
Try adding discard-paths: yes at the end of the buildspec.yml file. That will help you resolve the path error.
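For reference, discard-paths is a setting under the artifacts section of buildspec.yml; a minimal sketch based on the buildspec above would be:
artifacts:
  files:
    - ROOT.war
  discard-paths: yes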
