Modify a service inside GitLab CI - elasticsearch

I'm attempting to set up GitLab CI and I have some integration tests that run against elasticsearch. I'd like to install elasticsearch using the official docker image, so:
services:
- elasticsearch:2.2.2
But I want the mapper-attachments plugin. I've had no luck installing it from a command in the before_script section, because the elasticsearch files don't seem to exist in the environment that before_script runs in. How can I modify the elasticsearch image that the runner has installed?

You should create your own custom elasticsearch container.
You could adapt the following Dockerfile:
FROM elasticsearch:2.3
MAINTAINER Your Name <you@example.com>
RUN /usr/share/elasticsearch/bin/plugin install analysis-phonetic
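If you'd rather not use an automated build, a rough sketch of building and pushing the image manually with the standard Docker CLI (yourname/elasticsearch is a placeholder):

docker build -t yourname/elasticsearch:2.3 .
docker login
docker push yourname/elasticsearch:2.3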
You can find my prebuilt image on Docker Hub.
Here are detailed steps:
Register at https://hub.docker.com and link your GitHub account with it
Create a new repo at GitHub, e.g. "elasticsearch-docker"
Create a Dockerfile which inherits FROM elasticsearch and installs your plugins (see this example)
Create an Automated Build at Docker Hub from this GitHub repo (in my case: https://hub.docker.com/r/tmaier/elasticsearch/)
Configure the Build Settings at Docker Hub
I added two tags: one "latest" and one matching the elasticsearch release I am using.
I also linked the elasticsearch repository, so that mine gets rebuilt whenever there is a new elasticsearch release
Check that the container gets built successfully by Docker Hub
Over at GitLab CI, change the service to point to your new Docker image. In my example, I would use tmaier/elasticsearch:latest
See your integration tests passing
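For reference, a minimal sketch of the updated .gitlab-ci.yml, assuming the custom image was published as tmaier/elasticsearch:latest:

services:
- tmaier/elasticsearch:latest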

Related

Is there any alternative to Jib for a Golang project, which creates the Docker image without using a Dockerfile?

I want to create the Docker image of a Golang project using cloudbuild.yaml, but without using a Dockerfile.
Is there any tool available for Golang, as an alternative to Jib, that creates the Docker image without a Dockerfile?
You can check the CNCF project Cloud Native Buildpacks: https://buildpacks.io/
Buildpacks are broadly similar to Jib.
Here is the GitHub link for the Go buildpack: https://github.com/paketo-buildpacks/go
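For example, a hypothetical cloudbuild.yaml step that runs the pack CLI (the pack builder image, the buildpacks builder and the app name are my assumptions, not from the question):

steps:
# run the pack CLI inside a Cloud Build step; no Dockerfile needed
- name: 'gcr.io/k8s-skaffold/pack'
  entrypoint: 'pack'
  args: ['build', 'gcr.io/$PROJECT_ID/my-go-app', '--builder', 'gcr.io/buildpacks/builder', '--publish']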

fabric8: how does helm goal work?

We're using the fabric8 maven plugin to build and deploy our maven projects into kubernetes.
I can't quite figure out how to use the fabric8:helm goal.
I've tried to get some details about what exactly it does, but I still don't quite get it:
$ mvn help:describe -Dgoal=helm -DgroupId=io.fabric8 -DartifactId=fabric8-maven-plugin -Ddetail
And this is the output:
fabric8:helm
Description: Generates a Helm chart for the kubernetes resources
Implementation: io.fabric8.maven.plugin.mojo.build.HelmMojo
Language: java
Bound to phase: pre-integration-test
Available parameters:
helm
(no description available)
kubernetesManifest (Default:
${basedir}/target/classes/META-INF/fabric8/kubernetes.yml)
User property: fabric8.kubernetesManifest
The generated kubernetes YAML file
kubernetesTemplate (Default:
${basedir}/target/classes/META-INF/fabric8/k8s-template.yml)
User property: fabric8.kubernetesTemplate
The generated kubernetes YAML file
skip (Default: false)
User property: fabric8.skip
(no description available)
...
Inside our projects we have our artifacts inside src/main/fabric8. The content of this folder is:
tree src/main/fabric8
src/main/fabric8
├── forum-configmap.yaml
├── forum-deployment.yaml
├── forum-route.yaml
└── forum-service.yaml
These files are only related to kubernetes.
I've not been able to find any snippet out there about:
What kind of files do I need to add to my project? Helm files?
What exactly is the output of this goal?
Just to play around with this, I grabbed a basic Spring Boot project with the Web dependency and a RestController, created with the Spring Initializr. The fabric8 plugin docs say to run the resource goal first, so I went to the base directory of my project and ran mvn -B io.fabric8:fabric8-maven-plugin:3.5.41:resource. That generated kubernetes descriptors for my project under /target/classes/META-INF/fabric8/.
So then I ran mvn -B io.fabric8:fabric8-maven-plugin:3.5.41:resource io.fabric8:fabric8-maven-plugin:3.5.41:helm. At first I got an error that:
target/classes/META-INF/fabric8/k8s-template does not exist so cannot make chart <project_name>. Probably you need run 'mvn fabric8:resource' before.
But the descriptors did exist under /target/classes/META-INF/fabric8/kubernetes/ so I just renamed that directory to k8s-template and ran again. Then it created a helm chart for me in the /target/fabric8/helm/kubernetes/ directory.
So I followed the docs literally and ran helm install target/fabric8/helm/kubernetes/. That complained there was no Chart.yaml. I realised then that I had followed the doc too literally and needed to run helm install target/fabric8/helm/kubernetes/<project_name>. That did indeed create a helm release and install my project to kubernetes. It didn't start, as I hadn't created any docker image. It seems to default to an image name of <groupId>/<artifactId>:<version/snapshot-number>. Presumably that would've been there if I'd also run the build and push goals and had a docker registry accessible to my kubernetes.
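Put together, the sequence from that experiment looked roughly like this (<project_name> is a placeholder, and the mv is the directory-rename workaround described above):

mvn -B io.fabric8:fabric8-maven-plugin:3.5.41:resource
# rename the generated descriptor directory to the one the helm goal expects
mv target/classes/META-INF/fabric8/kubernetes target/classes/META-INF/fabric8/k8s-template
mvn -B io.fabric8:fabric8-maven-plugin:3.5.41:helm
helm install target/fabric8/helm/kubernetes/<project_name>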
So in short, the helm goal generates a basic helm chart. I believe you'd need to customise this chart manually if you have an app that needs to access shared resources with urls or credentials being injected (e.g. for a database, message broker or authentication system), or if your app exposes multiple ports, or if you need initContainers or custom startup parameters. Presumably you are trying to customise these generated resources and are doing so by putting files in your src/main/fabric8/. If it's the k8s files that you're trying to feed through, then I guess they'd have to go in src/main/fabric8/kubernetes/ in order to feed through into the expected /target/ directory, and also be named <project-name>-deployment.yml and <project-name>-svc.yml.
I guess the generated chart is at least a starting point, and presumably the experience can be a bit smoother than my experimenting was if you add the plugin to the pom and do all the setup, rather than running individual goals.
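For what it's worth, a minimal sketch of wiring the plugin into the pom instead of invoking goals by hand (the version matches the one used above; the exact goal list is my assumption):

<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
  <executions>
    <execution>
      <goals>
        <!-- generate k8s descriptors, build the image, then package the chart -->
        <goal>resource</goal>
        <goal>build</goal>
        <goal>helm</goal>
      </goals>
    </execution>
  </executions>
</plugin>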

Using Maven artifact in Google Cloud Docker build

I have a Google Cloud Container Builder build with the following steps:
gcr.io/cloud-builders/mvn to run a mvn clean package command
gcr.io/cloud-builders/docker to create a docker image
My Docker image includes Tomcat and will run it.
Both these steps work fine independently.
How can I copy the artifacts built by step 1 into the correct folder of my docker container? I need to move either the built wars or specific lib files from step 1 to the tomcat dir in my docker container.
Echoing out the /workspace and /root dir in my Dockerfile doesn't show the artifacts. I think I'm misunderstanding this relationship.
Thanks!
Edit:
I ended up changing the Dockerfile to set the WORKDIR to /workspace
and
COPY /{files built by maven} {target}
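For illustration, a minimal Dockerfile along those lines (the tomcat base image and war name are my assumptions; the key point is that both build steps share the /workspace directory, which is also the Docker build context, so target/ from the mvn step is visible to COPY):

FROM tomcat:8
# target/ was populated by the preceding mvn build step in the shared workspace
COPY target/my-app.war /usr/local/tomcat/webapps/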
The working directory is a persistent volume mounted in the builder containers, by default under /workspace. You can find more details in the documentation here: https://cloud.google.com/container-builder/docs/build-config#dir
I am not sure what is happening in your case, but there is an example with a Maven step and a Docker build step in the documentation of the gcr.io/cloud-builders/mvn builder: https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/mvn/examples/spring_boot. I suggest you compare it with your files.
If that does not help, could you share your Dockerfile and cloudbuild.yaml? Please make sure you remove any sensitive information first.
Also, you can inspect the working directory by running the build locally https://cloud.google.com/container-builder/docs/build-debug-locally#preserve_intermediary_artifacts
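For comparison, a minimal cloudbuild.yaml with the two steps described in the question (the image name is a placeholder):

steps:
# step 1: build the war; output lands in /workspace/target/
- name: 'gcr.io/cloud-builders/mvn'
  args: ['clean', 'package']
# step 2: docker build runs with /workspace as its context, so it can COPY from target/
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-tomcat-app', '.']
images:
- 'gcr.io/$PROJECT_ID/my-tomcat-app'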

AutoDeploy a Laravel app from GitHub branch to AWS EC2 or Elastic Beanstalk

I'm trying to auto-deploy a Laravel app from a GitHub branch to AWS EC2 or Elastic Beanstalk (preferred), but I haven't found the right solution. One of the tutorials I have followed is the one below. Does anyone have a solution for this?
Thank you in advance!
https://aws.amazon.com/blogs/devops/building-continuous-deployment-on-aws-with-aws-codepipeline-jenkins-and-aws-elastic-beanstalk/
You can do this with the following steps:
Set up Jenkins with the GitHub plugin
Install the AWS Elastic Beanstalk CLI
Create an IAM user with Elastic Beanstalk deployment privileges and add its access keys to the AWS CLI (if Jenkins runs inside an EC2 instance, instead of creating a user you can create a role with the required permissions and attach it to the EC2 instance)
In the Jenkins project, clone the branch, go to the project directory and execute eb deploy in a shell script to deploy it to Elastic Beanstalk. (You can automate this with a build trigger when new code is pushed to the branch.)
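That shell step might look roughly like this (the application, platform and environment names are placeholders; eb init only needs to run once to write the .elasticbeanstalk config):

# hypothetical Jenkins "Execute shell" build step
eb init my-laravel-app -p php --region us-east-1
eb deploy my-env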
Alternatively, there are other approaches, for example:
Blue/green deployment with Elastic Beanstalk
Deploying a Git branch to a specific environment
Using AWS CodeStar to set up the deployment from templates (internally it uses AWS CodePipeline, CodeDeploy, etc.)
An alternative to using eb deploy is to use the Jenkins AWS Beanstalk Publisher plugin https://wiki.jenkins.io/display/JENKINS/AWS+Beanstalk+Publisher+Plugin
This can be installed by going to Manage Jenkins > Manage Plugins and searching for AWS Beanstalk Publisher. The Root Object is the zip file of the project that you want to deploy to EB. The Build Steps can execute a step to zip up the files that are in your repo.
You will still need to fill in the Source Control Management section of the Jenkins job configuration. This must contain the URL of your GitHub repo and the credentials used to access it.
Add an Execute Shell build step that zips up the files from the repo that you want to deploy to EB. For example, zip -r myfiles.zip * will zip up all the files within your GitHub repo.
Use the AWS Beanstalk Publisher Plugin and specify myfiles.zip as the value of the Root Object (File / Directory).

TeamCity and Docker integration

Has anyone used TeamCity's artifacts in a new build for Docker? What I'd like to automate is taking the artifacts produced by TeamCity and then creating a new Docker image with those artifacts. I couldn't really find any tutorials online. I saw that Docker could integrate with Bitbucket and GitHub, but I wasn't really sure if that was the same thing. My base image should have mono and a few other things installed. Installing mono is not part of my source, so I wasn't sure if the GitHub integration would work.
Docker can copy an artifact from a remote URL (https://docs.docker.com/reference/builder/#add) and TeamCity exposes URL patterns that you can use to download build artifacts from outside TeamCity (https://confluence.jetbrains.com/display/TCD9/Patterns+For+Accessing+Build+Artifacts). If you combine those two, you can create a Dockerfile that builds a new image with the given artifact.
Like this:
ADD http://localhost:8111/guestAuth/repository/download/BuildName/latest.lastSuccessful/artifactName.war /opt/wildfly/standalone/deployments/
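To tie this to the question's mono-based image, a hypothetical Dockerfile might look like this (the host, build name and artifact name are mine, not from the question):

FROM mono:latest
# fetch the latest successful artifact from TeamCity; guest auth must be enabled
ADD http://teamcity:8111/guestAuth/repository/download/MyBuild/latest.lastSuccessful/MyApp.exe /opt/myapp/MyApp.exe
CMD ["mono", "/opt/myapp/MyApp.exe"]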
I have never worked with TeamCity, but in general this should be possible. You should first create a base image with everything you need; let's call it "crystal/base".
Then in your TeamCity setup, generate your artifact.
In the same directory as the artifact, add a Dockerfile with the following:
FROM crystal/base
ADD artifactFile /var/location_inside_container/artifactFile
CMD ["commandToUseArtifact.sh"]
Lastly, build your new Docker image with
docker build -t crystal/dependent .
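Assuming the build succeeds, you can sanity-check the image locally before wiring it into your TeamCity pipeline:

docker run --rm crystal/dependent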
