Advice for Continuous Integration / Deployment - Laravel

I've got a Docker based PHP project. PHP framework is Laravel.
The project is set up in GitLab and I use Jenkins for CI/CD.
When I merge into the master branch, a new build is triggered in Jenkins. I clone the repo, run unit tests, and so on.
Once completed, I build a new Docker image with the latest codebase inside and push this image up to the Docker registry.
My jenkinsfile then calls a script on the production server that pulls down the latest docker image and stops / starts the running container.
I set up an Nginx proxy/load balancer so users do not see any downtime during the stopping and starting of containers.
This workflow works very well but I have one issue:
The storage folder in Laravel gets wiped when I do a new deployment, so any files uploaded by users are lost.
How do I overcome this?
I've recently started working on a new version of the project that sends all file uploads to DigitalOcean Spaces, but I've found this to be very slow.
I'm assuming S3 will be the same.
All suggestions are welcome.
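For reference, the production-side script is essentially a pull-and-restart; a stripped-down sketch (registry, image, and container names are placeholders, not my actual values) looks like this:

# Pull the freshly pushed image and swap the running container.
docker pull registry.example.com/myapp:latest
docker stop myapp || true
docker rm myapp || true
docker run -d --name myapp -p 8080:80 registry.example.com/myapp:latest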

My solution was to map a volume in the container to the host when starting the Docker container.
I also had to set permissions, but now I have persistence across deployments.
No need for S3 or Spaces.
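A minimal sketch of what that looks like (the host path, image, and container names are placeholders):

# Bind-mount the Laravel storage folder so uploads survive redeployments.
docker run -d --name myapp \
  -v /srv/laravel/storage:/var/www/html/storage \
  -p 8080:80 \
  registry.example.com/myapp:latest

# Laravel needs write access to the mounted directory; in the official
# PHP images the web server runs as www-data.
docker exec myapp chown -R www-data:www-data /var/www/html/storage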

Related

How to migrate from certificate-based Kubernetes integration to the GitLab agent for Kubernetes

I have had some integration pipelines working for more than a year without any problems, and I realized that support for the certificate-based Kubernetes integration ends this month, February 2023. So I have to migrate to something called the GitLab agent, which apparently appeared only recently and which I had not noticed. I have created the agent in my Kubernetes cluster without problems, following the GitLab documentation, but I have a small problem.
How can I tell my CI/CD workflow to stop using the old certificate-based integration and start using my new GitLab agent to authenticate/authorize/integrate with my Kubernetes cluster?
I have followed these instructions https://docs.gitlab.com/ee/user/infrastructure/clusters/migrate_to_gitlab_agent.html
But the part I'm not quite clear on is what additional instructions I should add to my .gitlab-ci.yml file to tell it to use the GitLab agent.
I have already created the GitLab agent and its config.yaml, and GitLab says it is connected to my GKE cluster.
I tried adding a block to my config.yaml like this:
ci_access:
  projects:
    - id: path/to/project
The pipeline still ran without problems, but I am fairly sure that is only because it is still connected the old way. Is there any way to confirm that it is actually going through the GitLab agent?
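From the migration guide, my understanding is that the job has to select the kubecontext that the agent exposes to CI jobs; something like the sketch below (the agent project path and agent name are placeholders for my real values). Is that the right approach?

deploy:
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]
  script:
    # The agent exposes a kubecontext named <agent-config-project-path>:<agent-name>
    - kubectl config use-context path/to/agent-project:my-agent
    # If this works with the certificate-based cluster integration removed,
    # the job is going through the agent.
    - kubectl get pods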

CI/CD involving EC2

Our code is divided into modules and stored on a local Git server. The various modules are built and uploaded to ECR.
Question: currently, I can execute a deployment on a certain EC2 instance. What would be the preferred way for my local Jenkins server to run the deploy actions on that EC2 instance?
Note: I've worked with SSM in the past and came away with a BAD impression!!
Thx - Albert

Set up CI jobs on a VM server instead of in a Docker image

GitLab CI is highly integrated with Docker, but in some cases an application needs to interact with another app that cannot be deployed in Docker,
so I want my jobs (in gitlab-ci.yml) to run on a Linux VM server.
How can I set that up in GitLab? I searched many websites but didn't find the answer.
Thank you.
You can use different executors with GitLab. For your case, you should set up GitLab Runner on the VM as a shell executor and register it (providing it with the registration token obtained from the repository).
https://docs.gitlab.com/runner/install/linux-repository.html
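Roughly, the registration and the matching job configuration look like this (the URL, registration token, and tag name are placeholders for your own values):

# On the Linux VM: install gitlab-runner, then register it as a shell executor.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor "shell" \
  --description "linux-vm-runner" \
  --tag-list "vm"

# In .gitlab-ci.yml, route the jobs that need the VM to that runner via its tag:
my-job:
  tags:
    - vm
  script:
    - ./run-tests.sh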

How do developers typically use Docker with a Java Maven project and AWS EC2?

I have a single Java application. We developed the application in Eclipse. It is a Maven project. We already have a system for launching our application to AWS EC2. It works but is rudimentary and we would like to learn about the more common and modern approaches other teams use to launch their Java Maven apps to EC2. We have heard of Docker and I researched the tool yesterday. I understand the basics of building an image, tagging it and pushing to either Docker Hub or Amazon's ECS service. I have also read through a few tutorials describing how to pull a Docker image into an EC2 instance. However, I don't know if this is what we are trying to do, given that I am a bit confused about the role Docker can play in our situation to help make our dev ops more robust and efficient.
Currently, we are building our Maven app in Eclipse. When the build completes, we run a second Java file that uses the AWS SDK for Java to:
launch an EC2 instance,
copy the .jar artifact from the build into this instance,
add the instance to a load balancer, and
test the app.
My understanding of how we can use Docker is as follows. We would Dockerize our application and push it to an online repository according to the steps in this video.
Then we would create an EC2 instance and pull the Docker image into this new instance according to the steps in this tutorial.
If this is the typical flow, then what is the purpose of using Docker here? What is the added benefit, when we are currently ...
creating the instance,
deploying the app directly to the instance and also
testing the running app
all using a simple single Java file and functions from the AWS SDK for Java?
@GNG, what are your objectives for containerization?
Amazon ECS is the best option if you want to operate only in an AWS environment.
Docker is effective in hybrid environments, i.e., across physical servers and VMs.
The Docker image is a portable, complete executable of your application: it delivers your jar, but it can also include property files, static resources, etc. You package everything you need and deploy it to AWS, but you could also decide to deploy the same image on other platforms (or locally).
Another benefit is that the image contains the whole runtime (OS, JDK), so you don't rely on what AWS provides, which also ensures isolation from the underlying infrastructure.
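To make the flow concrete, the container path usually looks something like this (the account ID, region, and image name below are placeholders):

# Build the jar, bake it into an image, and push it to ECR.
mvn package
docker build -t myapp:1.0 .   # the Dockerfile copies target/*.jar and sets the entrypoint
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag myapp:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0

# On the EC2 instance (or any other Docker host):
docker pull 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0
docker run -d -p 80:8080 123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.0

The same image that runs on EC2 can be run locally or on any other Docker host, which is the portability benefit described above.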

How do I run my application code (PHP) across my various Amazon EC2 instances?

I've been trying to get to grips with Amazon's AWS services for a client. As is evidenced by the very n00bish question(s) I'm about to ask, I'm having a little trouble wrapping my head around some very basic things:
a) I've played around with a few instances and managed to get LAMP working just fine. The problem I'm having is that the code I place in /var/www doesn't seem to be shared across those machines. What do I have to do to achieve this? I was thinking of a shared EBS volume and changing Apache's document root?
b) Furthermore, what is the best way to upload code and assets to an EBS/S3 volume? Should I set up an instance to handle FTP to the aforementioned shared volume?
c) Finally, I have a basic plan for the setup that I wanted to run by someone who actually knows what they are talking about:
DNS pointing to Load Balancer (AWS Elastic Beanstalk)
Load Balancer managing multiple AWS EC2 instances.
EC2 instances sharing code from a single EBS store.
An RDS instance to handle database queries.
Cloud Front to serve assets directly to the user.
Thanks,
Rich.
Edit: My solution, for anyone who comes across this on Google.
Please note that my setup is not finished yet, and the bash scripts I'm providing in this explanation are probably not very good; even though I'm very comfortable with the command line, I have no experience of scripting in bash. However, this should at least show you how my setup works in theory.
All AMIs are Ubuntu Maverick i386 from Alestic.
I have two AMI snapshots:

Master

  Users:
    git - Very limited access; runs git-shell, so it cannot be used for an interactive SSH session, but it hosts a Git repository which can be pushed to or pulled from.
    ubuntu - Default SSH account, used to administer the server and deploy code.

  Services:
    Simple Git repository hosting via SSH.
    Apache and PHP; databases are hosted on Amazon RDS.

Slave

  Services:
    Apache and PHP; databases are hosted on Amazon RDS.
Right now (this will change), this is how I deploy code to my servers:

1. Merge changes into the master branch on my local machine.
2. Stop all slave instances.
3. Use Git to push the master branch to the master server.
4. Log in as the ubuntu user via SSH on the master server and run a script which does the following:
   - Exports (git archive) the code from the local repository to a folder.
   - Compresses the folder and uploads a backup of the code to S3 with a timestamp attached to the file name.
   - Replaces the code in /var/www/ with the exported folder and sets appropriate permissions.
   - Removes the exported folder from the home directory but leaves the compressed file containing the latest code intact.
5. Start all slave instances. On startup they run a script which does the following:
   - Apache does not start until it is triggered by this script.
   - Uses scp (secure copy) to copy the latest compressed code from the master to /tmp/www.
   - Extracts the code, replaces /var/www/, and sets appropriate permissions.
   - Starts Apache.
I would provide code examples, but they are very incomplete and I need more time. I also want all my assets (css/js/img) to be automatically pushed to S3 so they can be distributed to clients via CloudFront.
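As a very rough idea, the master-side script boils down to something like the sketch below (the repository path, deploy directory, and bucket name are placeholders, it is untested, and it assumes the AWS CLI is installed and configured):

#!/bin/bash
set -e
STAMP=$(date +%Y%m%d%H%M%S)
DEPLOY_DIR=/home/ubuntu/deploy

# 1. Export the latest code from the bare Git repository.
mkdir -p "$DEPLOY_DIR"
git --git-dir=/home/git/site.git archive --format=tar master | tar -x -C "$DEPLOY_DIR"

# 2. Compress it and keep a timestamped backup in S3.
tar -czf "/home/ubuntu/code-$STAMP.tar.gz" -C "$DEPLOY_DIR" .
aws s3 cp "/home/ubuntu/code-$STAMP.tar.gz" s3://my-code-backups/

# 3. Replace the web root and fix permissions.
rsync -a --delete "$DEPLOY_DIR"/ /var/www/
sudo chown -R www-data:www-data /var/www

# 4. Remove the exported folder but keep the compressed archive.
rm -rf "$DEPLOY_DIR"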
EBS is like a hard drive you can attach to one instance, basically a 1:1 mapping. S3 is the only shared storage offering in AWS; otherwise you would need to set up an NFS server or similar.
What you can do is put all your PHP files on S3 and then sync them down to a new instance when you start it.
I would recommend bundling a custom AMI with everything you need installed (Apache, PHP, etc.) and setting up a cron job to sync the PHP files from S3 to your document root. Your workflow would be: upload files to S3, let the server's cron job sync the files.
The rest of your setup seems pretty standard.
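For example, the sync can be as simple as the following (the bucket name and paths are placeholders, and it assumes the AWS CLI is baked into the AMI):

# Pull the latest PHP files from S3 into the document root.
aws s3 sync s3://my-app-code/current /var/www --delete

# Run it periodically via cron (crontab -e), e.g. every minute:
* * * * * aws s3 sync s3://my-app-code/current /var/www --delete >> /var/log/s3-sync.log 2>&1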
