CodeceptJS Puppeteer Test in Bitbucket Pipelines - continuous-integration

The tests run after the deployment step of the Angular apps in the pipeline. The problem I'm facing is that the deployment step builds a Docker image and pushes it to the Docker registry, and Azure then picks up the image and applies the new changes, so it takes time until the changes are actually live. However, the test starts and visits the website immediately after the deployment step in the pipeline, so it is effectively testing the old version of the website because the new changes are not live yet. Is there a way to test the app within the pipeline itself, for example by serving the app inside the pipeline, or some other way?
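One option, if you want the suite to exercise a copy served inside the pipeline rather than the deployed site, is to build the Angular app in a test step, serve the production bundle locally in the background, and point CodeceptJS at that local URL. The sketch below assumes a Node image with the Chromium dependencies Puppeteer needs, an npm "build" script, and a codecept.conf that reads its base URL from a BASE_URL environment variable - all of those names are placeholders to adapt to your project:

# bitbucket-pipelines.yml (sketch)
image: node:18        # use an image with Chromium system dependencies preinstalled for Puppeteer

pipelines:
  branches:
    master:
      - step:
          name: Build and push Docker image      # existing deployment step stays as-is
          script:
            - echo "docker build / push to the registry goes here"
      - step:
          name: E2E tests against a locally served build
          script:
            - npm ci
            - npm run build                          # production bundle, e.g. into ./dist
            - npx http-server ./dist -p 4200 &       # serve the bundle in the background
            - npx wait-on http://localhost:4200      # block until the local server responds
            - BASE_URL=http://localhost:4200 npx codeceptjs run   # run the suite against the local copy

Because every script line of a step runs in the same container, the backgrounded http-server stays alive for the wait-on and codeceptjs commands that follow it.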

Related

How to deploy an app to pipelines in Heroku?

I followed the documentation here https://devcenter.heroku.com/articles/pipelines and set up a pipeline with two stages.
The above link only talks about how to promote an app without specifying how to deploy; this section specifically (we can read beyond this point and see there is no instruction on how to deploy):
Pipelines let you define how your deployed code flows from one environment to the next. For example, you can deploy code to your staging app (which builds it into a slug) and later promote that same slug to production. This promotion flow ensures that production contains the exact same code that you tested in staging, and it's also much faster than rebuilding the slug.
So I used the same command we use for deploying non-pipeline-based apps (as instructed here):
docker push registry.heroku.com/ultimate-hello-world-cra-stage/web
This appears to be the wrong approach. You can see in the snapshot above that the STAGING app did get deployed, but when I try to promote the app using the command heroku pipelines:promote --app ultimate-hello-world-cra-stage, I get this error: Error: Pipeline promotions are not supported on apps pushed via Heroku.
What is the correct way to deploy to a pipeline and where can I find it in the docs?

How to prevent GitLab CI/CD from deleting the whole build

I'm currently having a frustrating issue.
I have GitLab CI set up on a VPS server, and it is working completely fine; my pipelines run without a problem.
The issue comes when I have to re-run a pipeline. Each time, GitLab deletes the whole folder where the build lives and builds it again before deploying. My problem is that I have an "uploads" folder that stores all user-uploaded content, and every time I re-run a pipeline everything in this folder gets deleted, and I obviously need this content because it is the whole purpose of the app.
I have tried the GitLab CI cache with no luck. I have also tried creating a new folder that isn't in the repository; it deletes that too.
Running my first job looks like so:
Job
As you can see, there are a lot of lines that say "Removing ...".
In order to persist a folder of local files across CI pipelines, the best approach is to use Docker data persistence: you can delete everything from the last build while keeping the local files your application needs between builds, and still retain the ability to start from scratch every time you start a new pipeline. Docker gives you two options:
Bind-mount volumes
Volumes managed by Docker
GitLab's CI/CD Documentation provides a short briefing on how to persist storage between jobs when using Docker to build your applications.
I'd also like to point out that if you're using GitLab Runner through SSH, they explicitly state they do not support caching between builds when using this functionality. Even when using the standard Shell executor, they highly discourage saving data to the builds folder, so it can be argued that the best-practice approach is to use a bind-mount volume to your host and isolate the application from the user-uploaded data.
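As a rough illustration of that bind-mount approach, the deploy job can start the application container with the uploads directory mounted from a host path that lives outside the builds folder, so re-running a pipeline can wipe the checkout freely without touching the uploaded files. The image name, container name and paths below are made up for the sketch:

# .gitlab-ci.yml (sketch)
deploy:
  stage: deploy
  script:
    - docker build -t myapp:latest .
    - docker rm -f myapp || true
    # user uploads live on the host at /srv/myapp/uploads, outside the CI builds folder,
    # and are only bind-mounted into the container at run time
    - docker run -d --name myapp -v /srv/myapp/uploads:/var/www/html/public/uploads myapp:latest

The same idea works with a Docker-managed named volume (e.g. -v myapp-uploads:/var/www/html/public/uploads) if you prefer the second option from the list above.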

How should I deploy the result of the CI/CD pipeline on my production server

I have a GitLab CI/CD pipeline which builds, tests and pushes my project's container to the GitLab container registry successfully. Now I am wondering how I can automate the deployment stage too. Currently I do it manually: after each successful pipeline I SSH to my server and run several commands to pull the latest images from the GitLab.com container registry and then run them. I would like to make this step automated as well, but I don't know how.
Actually I have seen some examples of opening an SSH session from the CI/CD pipeline, but it doesn't feel secure enough. So I am wondering: is there a better way to do this, or do I actually have to do it that way?
Note that I am using gitlab.com, so the GitLab server is not installed on my machine and I can't share assets between them directly.
There are many ways to achieve this, depending on your setup, other requirements, scale etc.
I'll just give you two options.
I. Kubernetes
create a cluster (i.e. the control plane) somewhere
add your cluster to GitLab (GitLab can now even create the cluster for you in AWS and GCP, check this page)
attach your target machine as a worker node to the cluster
create Kubernetes YAML files / a Helm chart for your application (a minimal manifest is sketched after this list) and deploy via the usual means, e.g. kubectl apply -f ... or helm install ..., or rely on Auto DevOps to do this step for you
This is quite complex, but it is sort of the "right" way of doing things.
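For reference, a minimal manifest of the kind mentioned in the last step might look like the sketch below; the names and the image path (pointing at the GitLab container registry) are placeholders:

# deployment.yaml (sketch) - applied with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.gitlab.com/mygroup/myproject:latest   # placeholder image path
          ports:
            - containerPort: 8080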
II. Private GitLab runner
go to Settings > CI/CD > Runners of your GitLab project or group
obtain the registration token
install your own GitLab runner right on the target machine and register it on the GitLab server using the registration token, see example
give the runner some specific tag
use that tag in your .gitlab-ci.yml file, see documentation
then the deployment process is just a local docker pull ... and docker run ... of your image (sketched below)
This is a lot simpler, but it is the "wrong" way, as you are mixing CI/CD infrastructure with the target environment.
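A deploy job for this second option could look roughly like this; the tag, container name and ports are illustrative, while the $CI_* variables are GitLab's predefined ones:

# .gitlab-ci.yml (sketch) - the job runs only on the runner installed on the target machine
deploy:
  stage: deploy
  tags:
    - production-server        # the specific tag given to the private runner
  only:
    - master
  script:
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE:latest"
    - docker rm -f myapp || true
    - docker run -d --name myapp -p 80:8080 "$CI_REGISTRY_IMAGE:latest"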

Best practise/way to deploy Laravel + Vue SPA application to AWS

I have 2 repositories residing in Bitbucket - Backend (a Laravel app as the API and entry point) and Frontend (the main application front-end, a VueJS app). My goal is to set up continuous deployment so that whenever something is pushed to the master branch (or another branch I select) in either of the repos, it triggers a process that builds the whole app and gets it onto the AWS EC2 server.
I have considered/tried the following:
AWS CodePipeline and/or CodeDeploy. This looked like a great option since the servers are in AWS as well. However, there is no support for Bitbucket out of the box, so it would have to go to Bitbucket Pipeline -> AWS Lambda -> AWS S3 -> AWS CodePipeline/CodeDeploy -> AWS EC2. This seems like a very lengthy journey and I am not sure if that's a good practice whatsoever.
Using Laravel Forge to deploy the Laravel app, and adding additional steps to build the VueJS app. This seemed like a very basic solution; however, the build process seems to fail there, as it just takes a long time and crashes with no errors (whereas I can run the exact same process on my local machine or on a different server hosted elsewhere). I am not sure if this is an issue with the way the server is provisioned, the way Forge runs the deployment script, or whether the server is simply too weak to handle it.
My main question is: what are the best practices for deploying an app made of such components? I have read many tutorials/articles about deploying a NodeJS app or a Laravel app, but haven't found good information about a scenario like this.
Would it be better to build the front-end app locally and version-control the built JS files? Or should I create a Pipeline in Bitbucket that builds the app and then deploys it? Or is it best to version-control and deploy only the source files and leave the whole build process as the last step of the deployment, done by the server that hosts the app itself? There are also some articles suggesting hosting the whole front-end app in an S3 bucket - would that be bad practice as well?
Appreciate any help and resources that would help!
From the sounds of things, you have two types of deployments you might want to run.
Laravel API: If you're using Laravel Forge already, then this is a great way to go about deploying your Laravel app; it takes care of most of the process and makes server management easy.
Vue.js App: There are a few things you can do here. I personally prefer using a provider like Vercel or Netlify, which let you deploy your static sites/frontends for free or at low cost. You can write custom build steps, but they have great presets that should work out of the box.
If you really want to keep everything on AWS, then look into how to host static sites on AWS (typically an S3 bucket, optionally fronted by CloudFront).
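If you do go the S3 route for the Vue frontend, a Bitbucket Pipelines sketch for that repo could look something like the following. The bucket name, region and pipe version are placeholders, and the pipe variables should be double-checked against the atlassian/aws-s3-deploy documentation for the version you pin:

# bitbucket-pipelines.yml (sketch, Vue repo)
image: node:18

pipelines:
  branches:
    master:
      - step:
          name: Build and deploy frontend to S3
          script:
            - npm ci
            - npm run build                            # outputs the static site into ./dist
            - pipe: atlassian/aws-s3-deploy:1.1.0      # pin whatever version is current
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID          # stored as secured repository variables
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: eu-west-1                  # placeholder region
                S3_BUCKET: my-frontend-bucket                  # placeholder bucket name
                LOCAL_PATH: dist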

gcloud automatic redeployment Golang app

I have a Golang app running on Google Cloud App Engine that I can update manually with "gcloud app deploy", but I cannot figure out how to schedule automatic redeployments. I'm assuming I have to use cron.yaml, but then I'm confused about what URL to use. Basically it's just a web app with one main index.html page with changing content, and I would like to schedule automatic redeployments... how do I go about that?
If you want to automatically re-deploy your app when the code changes, you need what's called CI/CD (Continuous integration/deployment). What a CI does is, for each new commit to your repository, check out the new code and run a test script. If all the tests pass (or if you don't have any tests at all), the CI server can then deploy your code to App Engine, all automatically.
One free (for open-source projects) CI provider is Travis CI. To configure it, you need to make an account with Travis, and a file called .travis.yml in the root of your repository. To set up App Engine deploys, you can follow this guide to set up a service account and add the encrypted file to your repo. It will run a gcloud app deploy from a container on their servers, whenever you push code to a certain branch (master by default) in your repo.
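A .travis.yml along the lines that guide describes might look roughly like this; the project ID and keyfile name are placeholders, and the exact provider options are worth checking against the current Travis CI docs:

# .travis.yml (sketch)
language: go
go:
  - 1.x

script:
  - go test ./...                         # run the test suite on every push

deploy:
  provider: gae
  project: my-gcp-project-id              # placeholder GCP project ID
  keyfile: service-account.json           # decrypted service-account key from the guide's setup
  on:
    branch: master                        # only deploy commits that land on master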
Another option, which avoids setting up CI at all, is to simply change your app to generate the dynamic parts of the page when it gets requested. Reading the documentation for html/template would point you in the right direction.
