Jekyll & Wercker - How to deploy a subdirectory - ftp

Problem: I am having difficulty deploying the Jekyll build folder to an FTP server via Wercker.
I've been using Wercker for continuous integration of a Jekyll site I'm working on. Using the configuration below, the build steps (jekyll build and jekyll doctor) appear to work as intended.
My deploy step should upload the _site folder to my FTP server. I'm currently using duleorlovic's ftp-deploy Wercker step, but it uploads the entire directory instead of just the build folder.
Since Jekyll builds the site into the /_site folder, how can I limit my upload to just that folder?
Thanks.
Current wercker.yml as follows:
# Wercker Configuration
# continuous delivery platform that helps software developers
# build and deploy their applications and microservices
box: ruby
build:
  steps:
    # Install dependencies
    - bundle-install
    # Execute jekyll doctor command to validate the
    # site against a list of known issues.
    - script:
        name: jekyll doctor
        code: bundle exec jekyll doctor
    - script:
        name: jekyll build
        code: bundle exec jekyll build --trace
deploy:
  steps:
    - duleorlovic/ftp-deploy:
        destination: $FTP_SERVER
        username: $FTP_USERNAME
        password: $FTP_PASSWORD
        timeout: 15
        remote-file: remote.txt

Solved the question.
Apparently Wercker offers an environment variable called $WERCKER_OUTPUT_DIR. This directory is the folder that gets piped to the deploy step when the build step passes. If nothing is written there, the deploy step just uses the root directory (i.e., not your build folder).
The working wercker.yml contains the jekyll build step as follows:
- script:
    name: jekyll build
    code: bundle exec jekyll build --trace --destination "$WERCKER_OUTPUT_DIR"
I wasn't able to find much in the Wercker docs on the matter, since they seem to be in transition between versions, but I found the solution in an example of how to use Wercker.
You can see the example of using the Output Directory in their guide How Wercker Works.

According to the Wercker docs, passing the cwd: argument to a step changes its working directory:
deploy:
  steps:
    - duleorlovic/ftp-deploy:
        cwd: _site/
        destination: $FTP_SERVER
        username: $FTP_USERNAME
        password: $FTP_PASSWORD
        timeout: 15
        remote-file: remote.txt

Related

Bitbucket Pipeline doesn't show the full output of Yarn build

I have a bitbucket pipeline with a yarn build command. The problem is that the output always misses the last lines:
+ yarn build
yarn run v1.22.19
$ react-scripts build
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
It is supposed to have more lines after File sizes after gzip:.
My bitbucket-pipelines.yml:
image: node:latest

pipelines:
  branches:
    staging:
      - step:
          name: Build
          caches:
            - node
          script:
            - yarn install
            - yarn build
The biggest issue is when yarn build errors. Because I don't see the last lines, I can't tell why it failed. When this happens, I have to manually run yarn build on my local machine to see the full logs, which is not the best solution.
When I run the same yarn build command on my machine, I get the full logs:
❯ yarn build
yarn run v1.22.19
$ react-scripts build
Creating an optimized production build...
Compiled successfully.
File sizes after gzip:
383.56 kB build/static/js/main.a23388a2.js
13.39 kB build/static/css/main.06459f93.css
1.79 kB build/static/js/787.44808443.chunk.js
The project was built assuming it is hosted at /.
You can control this with the homepage field in your package.json.
The build folder is ready to be deployed.
You may serve it with a static server:
yarn global add serve
serve -s build
Find out more about deployment here:
https://cra.link/deployment
Done in 19.59s.
I found that the output is truncated on many other commands, not just yarn build. I created a simple echo "Hello, World!" that is executed remotely on a server via atlassian/ssh-run, and the "Hello, World!" message was truncated.
Solution: refresh the page.
tl;dr;
This is a three-year-old bug (and counting) in the Bitbucket interface:
https://jira.atlassian.com/browse/BCLOUD-18574
It seems their interface doesn't properly stream the results in real time, but the logs are saved correctly and can be retrieved by reloading the page.

CircleCI setup with Cypress and React-testing-library

I would like to use CircleCi to run my Cypress and react-testing-library tests because I want to test my react app.
On local env I would run (which work fine):
yarn run test to execute my react-testing-library tests
yarn cypress run to execute Cypress test
Now, I have found resources on how to write the CircleCI config.yaml, but nothing has worked. For reference: link1, link2, link3, link4, link5
Some of the tests failed due to error cypress@7.1.0: The engine "node" is incompatible with this module. Expected version ">=12.0.0". Got "10.24.1", wrong caching, or something else. After 20 runs I am clueless; can someone help me, please?
As I was browsing resources I thought this should work for Cypress tests but it did not.
version: 2.1
orbs:
  cypress: cypress-io/cypress@1
workflows:
  build:
    jobs:
      - cypress/install:
          build: yarn run build # run a custom app build step
          yarn: true
      - cypress/run:
          requires:
            - cypress/install
          parallel: true # split all specs across machines
          parallelism: 4 # use 4 CircleCI machines to finish quickly
          yarn: true
          group: 'all tests' # name this group "all tests" on the dashboard
          start: yarn start # start server before running tests
For those who search for this issue later, here is how I overcame the errors:
error cypress@7.1.0: The engine "node" is incompatible with this module. Expected version ">=12.0.0". Got "10.24.1" — by not using the orb and instead using workflow -> jobs -> steps
fsevents not accessible from jest-haste-map — by using yarn instead of npm
Lastly, some of your errors may come from your app (in my case a React app) reading configuration from a .env file that is not uploaded to GitHub, and therefore is not checked out into the CircleCI Docker container, so the app will not work during the tests.
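One hedged way to handle the .env point (a sketch, assuming the values are stored as CircleCI project environment variables; REACT_APP_API_URL is a hypothetical variable name) is to recreate the file in a step before running the tests:

```yaml
- run:
    name: Recreate .env from CircleCI project environment variables
    command: |
      # REACT_APP_API_URL is a placeholder; substitute your app's actual variables
      echo "REACT_APP_API_URL=$REACT_APP_API_URL" > .env
```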
The working solution that I am using is:
version: 2.1
jobs:
  run_tests:
    docker:
      - image: cypress/base:12
        environment:
          # this enables colors in the output
          TERM: xterm
    working_directory: ~/portalo
    steps:
      - checkout
      - run:
          name: Install project dependencies
          command: yarn install --frozen-lockfile
      - run:
          name: Compile and start development server on port 3000
          command: yarn startOnPort3000Linux
          background: true
      - run:
          name: Wait for development server to start
          command: 'yarn wait-on http://localhost:3000'
      - run:
          name: Run routing tests with react-testing-library via yarn test
          command: 'yarn test ~/portalo/src/tests/react-testing-library/routing.test.tsx'
      - run:
          name: Run e2e tests with Cypress via cypress run
          command: $(yarn bin)/cypress run
workflows:
  version: 2.1
  build_and_test:
    jobs:
      - run_tests
Note: wait-on had to be added, in my case with yarn add wait-on.
Note 2: All steps have to be in a single job so that all installed packages are present. This could be tweaked by using save/restore cache.

AWS Code Pipeline deploy results in a 404

I am trying to create a pipeline for an existing application. It is a React/Java Spring Boot application. It usually gets bundled into a single war file and uploaded to Elastic Beanstalk. I created my CodeBuild project, and when I run it manually it generates a war file that I can then upload to Elastic Beanstalk, and everything works correctly. The buildspec for that is below:
version: 0.2

phases:
  install:
    commands:
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn -Pprod package -X
  post_build:
    commands:
      - echo Build completed on `date`
      - mv target/cogcincinnati-0.0.1-SNAPSHOT.war cogcincinnati-0.0.1-SNAPSHOT.war
artifacts:
  files:
    - cogcincinnati-0.0.1-SNAPSHOT.war
When I run this build step in my pipeline, it generates a zip file that gets dropped onto S3. My deploy step takes that build artifact and sends it to Elastic Beanstalk. Elastic Beanstalk does not give me any errors, but when I navigate to my URL, I get a 404.
I have tried uploading the zip directly to Elastic Beanstalk and I get the same result. I have unzipped the file and it does appear to have all of my project files.
When I look at the server logs, I do not see any errors. I don't understand why CodeBuild appears to generate a war file when I run it manually, but a zip when executed in CodePipeline.
Change the artifact war file name to ROOT.war; this will resolve your problem. Your application is actually deployed successfully, but on a different path. This is built-in Tomcat behavior: by naming the war ROOT, the application runs at '/'.
So the updated buildspec.yml will be:
version: 0.2

phases:
  install:
    commands:
      - echo Nothing to do in the install phase...
  pre_build:
    commands:
      - echo Nothing to do in the pre_build phase...
  build:
    commands:
      - echo Build started on `date`
      - mvn -Pprod package -X
  post_build:
    commands:
      - echo Build completed on `date`
      - mv target/cogcincinnati-0.0.1-SNAPSHOT.war ROOT.war
artifacts:
  files:
    - ROOT.war
Seems your application is failing; you should review the logs from the Beanstalk environment, especially:
"tomcat8/catalina.out"
"tomcat8/catalina.[date].log"
[1] Viewing logs from Amazon EC2 instances in your Elastic Beanstalk environment - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.logging.html
For more details about using Tomcat platform on EB environment, you can refer to this document:
- Using the Elastic Beanstalk Tomcat platform - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform.html
About the folder structuring in your project, please refer to this document:
- Structuring your project folder - https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/java-tomcat-platform-directorystructure.html
Try adding discard-paths: yes in the artifacts section of the buildspec.yml file. That will help you resolve the path error.
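As a sketch, discard-paths goes in the artifacts section (the file name here is taken from the ROOT.war buildspec above):

```yaml
artifacts:
  files:
    - ROOT.war
  # strip directory prefixes so the war sits at the root of the artifact zip
  discard-paths: yes
```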

How to deploy Spring boot application with GitLab serverless?

I configured a Google Cloud demo project and created a cluster for it in the GitLab Serverless settings for a Hello World Spring Boot application. The only information I can find on deploying applications is https://docs.gitlab.com/ee/user/project/clusters/serverless/#deploying-serverless-applications, which might explain how to deploy a Ruby application only. I'm not sure about that, because none of the variables used in the script are explained, and the hint
Note: You can reference the sample Knative Ruby App to get started.
is somewhat confusing, because I'm not familiar with building Ruby applications, so it won't get me started.
Following the instruction to put
stages:
  - build
  - deploy

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  only:
    - master
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE

deploy:
  stage: deploy
  image: gcr.io/triggermesh/tm@sha256:e3ee74db94d215bd297738d93577481f3e4db38013326c90d57f873df7ab41d5
  only:
    - master
  environment: production
  script:
    - echo "$CI_REGISTRY_IMAGE"
    - tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
in .gitlab-ci.yml causes the deploy stage to fail due to
$ tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
2019/02/09 11:08:09 stat /root/.kube/config: no such file or directory, falling back to in-cluster configuration
2019/02/09 11:08:09 Can't read config file
ERROR: Job failed: exit code 1
My Dockerfile which allows to build locally looks as follows:
FROM maven:3-jdk-11
COPY . .
RUN mvn --batch-mode --update-snapshots install
EXPOSE 8080
CMD java -jar target/hello-world-0.1-SNAPSHOT.jar
(the version in the filename doesn't make sense for further deployment, but that's a follow-up problem).
The reason is a mismatch between the environment value specified in .gitlab-ci.yml and the GitLab Kubernetes configuration; see https://docs.gitlab.com/ee/user/project/clusters/#troubleshooting-missing-kubeconfig-or-kube_token for details.
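A minimal sketch of the fix, assuming the cluster in GitLab is configured with an environment scope of production (the actual scope name is whatever you set in the cluster settings): the environment value of the deploy job must match that scope, otherwise the job does not receive the cluster credentials:

```yaml
deploy:
  stage: deploy
  # must match the environment scope of the Kubernetes cluster in GitLab,
  # otherwise KUBECONFIG is not injected and tm falls back to in-cluster config
  environment: production
```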

CircleCI 2.0 Workflow - Deploy not working

I'm trying to set up a workflow in CircleCI for my React project.
What I want to achieve is to get a job to build the stuff and another one to deploy the master branch to Firebase hosting.
This is what I have so far after several configurations:
witmy: &witmy
  docker:
    - image: circleci/node:7.10

version: 2
jobs:
  build:
    <<: *witmy
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            - v1-dependencies-
      - run: yarn install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run:
          name: Build app in production mode
          command: |
            yarn build
      - persist_to_workspace:
          root: .
  deploy:
    <<: *witmy
    steps:
      - attach_workspace:
          at: .
      - run:
          name: Deploy Master to Firebase
          command: ./node_modules/.bin/firebase deploy --token=MY_TOKEN
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
The build job always success, but with the deploy I have this error:
#!/bin/bash -eo pipefail
./node_modules/.bin/firebase deploy --token=MYTOKEN
/bin/bash: ./node_modules/.bin/firebase: No such file or directory
Exited with code 1
So, what I understand is that the deploy job is not running in the same place the build was, right?
I'm not sure how to fix that. I've read some examples they provide and tried several things, but it doesn't work. I've also read the documentation but I think it's not very clear how to configure everything... maybe I'm too dumb.
I hope you guys can help me out on this one.
Cheers!!
EDITED TO ADD MY CURRENT CONFIG USING WORKSPACES
I've added Workspaces... but I'm still not able to get it working; after a loooot of tries I'm getting this error:
Persisting to Workspace
The specified paths did not match any files in /home/circleci/project
And also it's a real pain to commit and push to CircleCI every single change to the config file when I want to test it... :/
Thanks!
disclaimer: I'm a CircleCI Developer Advocate
Each job runs in its own Docker container (or VM), so nothing in node_modules from your build job exists in your deploy job. There are two ways to solve this:
Install Firebase, and anything else you might need, on the fly, just like you do in the build job.
Utilize CircleCI Workspaces to carry over your node_modules directory from the build job to the deploy job.
In my opinion, option 2 is likely your best bet because it's more efficient.
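A sketch of option 2, based on the config in the question: persist_to_workspace needs a paths: list (its absence would also explain the "specified paths did not match any files" error from the edit), and the deploy job attaches the workspace before calling the local firebase binary:

```yaml
build:
  steps:
    # ... checkout, restore_cache, yarn install, yarn build ...
    - persist_to_workspace:
        root: .
        paths:
          - node_modules
          - build
deploy:
  steps:
    - attach_workspace:
        at: .
    - run:
        name: Deploy Master to Firebase
        command: ./node_modules/.bin/firebase deploy --token=MY_TOKEN
```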
