I have a Spring Boot app that is deployed to AWS Elastic Beanstalk via Bitbucket Pipelines.
In bitbucket-pipelines.yml, I first build the app.jar file, which contains 2 script files. Then, in the script section, I deploy the jar file to Elastic Beanstalk as follows:
script:
  - pipe: atlassian/aws-elasticbeanstalk-deploy:0.6.7
    variables:
      AWS_ACCESS_KEY_ID: '$AWS_ACCESS_KEY_ID'
      AWS_SECRET_ACCESS_KEY: '$AWS_ACCESS_KEY_SECRET'
      AWS_DEFAULT_REGION: 'xxxxxx'
      APPLICATION_NAME: 'xxxxx'
      ENVIRONMENT_NAME: 'xxxxx'
      ZIP_FILE: 'myapp-$version.jar'
      S3_BUCKET: '$S3_BUCKET'
      VERSION_LABEL: 'myapp-${BITBUCKET_BUILD_NUMBER}'
      COMMAND: 'all'
This works fine; the app gets deployed. Now I want to run the 2 scripts that I put inside the jar file after the deployment. Basically, these scripts do some tasks just before the app is ready to be used (right after a successful deployment). Where do I define how to run those 2 scripts after the deployment to Elastic Beanstalk?
I know bitbucket-pipelines.yml can have an after-script section. But that runs before the Elastic Beanstalk deployment completes, and it does not happen inside the EC2 instance; it probably runs on the Bitbucket build server.
I also noticed that Elastic Beanstalk has post-deploy hooks. How do I declare those scripts in bitbucket-pipelines.yml? Or where do I put those scripts inside the myapp.jar file? Inside myapp.jar, I have BOOT-INF, META-INF, and org/springframework content. I can copy those scripts into BOOT-INF/classes via the Maven build.
Has anyone done this using Spring Boot, Maven, Bitbucket Pipelines, AWS Elastic Beanstalk, etc.?
The application source bundle I create with Maven is the Spring Boot executable jar file (myapp.jar). Should I create an archive wrapping myapp.jar that also includes the .platform/hooks/postdeploy/scripts... etc.?
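For what it's worth, on Amazon Linux 2 platforms Elastic Beanstalk executes files placed under .platform/hooks/postdeploy/ on the EC2 instance after each successful deployment. A minimal sketch of wrapping the jar in such a zip bundle (the file names and the stand-in jar are assumptions for illustration):

```shell
# Sketch: wrap the jar in a zip source bundle so Elastic Beanstalk (Amazon Linux 2
# platforms) runs the hook scripts on the EC2 instance after each deployment.
mkdir -p bundle/.platform/hooks/postdeploy
printf 'jar-bytes' > bundle/myapp.jar        # stand-in for your real target/myapp.jar
cat > bundle/.platform/hooks/postdeploy/01_tasks.sh <<'EOF'
#!/bin/bash
# Runs on the instance right after a successful deployment
echo "running post-deploy tasks"
EOF
chmod +x bundle/.platform/hooks/postdeploy/01_tasks.sh   # hooks must be executable
(cd bundle && zip -qr ../myapp-bundle.zip .)
unzip -l myapp-bundle.zip
```

The pipe's ZIP_FILE variable would then point at myapp-bundle.zip instead of the bare jar.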
I have one repo, sunixi, and it has two projects: sun-angular (Angular) and sun-admin (Spring Boot). In sun-admin I build sun-angular via Maven executions and move the dist output into resources/static of the sun-admin project, and after that I build sun-admin. On the local environment it works fine, but how can I do the same in a Heroku deployment?
structure of repo
sunixi
---sun-angular
---sun-admin
In sun-admin I'm setting workingDirectory to ../sun-angular, but while deploying to Heroku I'm getting:
Cannot run program "npm" (in directory "/tmp/build_02607c07/sun-angular"): error=2, No such file or directory
If you are using a monorepository strategy, one option is to use the subdir buildpack to select the path of your folder: https://github.com/timanovsky/subdir-heroku-buildpack
If you want to deploy the backend and frontend in the same Heroku app, the project folder must contain an npm package. Otherwise, it will not work.
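With that buildpack, the setup could look like this (a sketch; the app name is an assumption, and the subdir buildpack must come first in the chain):

```shell
# Hypothetical app name; the subdir buildpack must be added before the language buildpack
heroku buildpacks:add https://github.com/timanovsky/subdir-heroku-buildpack --app sun-app
heroku buildpacks:add heroku/java --app sun-app
# Tell the subdir buildpack which folder to treat as the project root
heroku config:set PROJECT_PATH=sun-admin --app sun-app
```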
I want to deploy a Spring Boot application on an OpenShift cluster and monitor it with Elastic APM, i.e. with the Elastic APM Java agent.
I managed to deploy in a project an Elasticsearch instance, a Kibana instance and an apm-server.
Next to that, I also managed to deploy my Spring Boot application. For this I used the web console: I imported my project from GitLab and chose the Java 8 image builder. However, with this method, I didn't find a way to launch my application with the elastic-apm-agent Java agent attached.
Locally, I run this command to start my application:
mvn package && java -javaagent:elastic-apm-agent/elastic-apm-agent-1.26.0.jar \
-Delastic.apm.service_name=ms-salarie \
-Delastic.apm.server_urls=http://localhost:8200 \
-Delastic.apm.secret_token= \
-Delastic.apm.environment=development \
-Delastic.apm.application_packages=com.leanerp.salarie \
-Delastic.apm.config_file=elastic-apm-agent/elasticapm.properties \
-jar target/salarie-1.1.3-SNAPSHOT.jar
Is there a way to override the command launched by the container of my application? Or another solution allowing me to use the elastic-apm-agent?
I am a newbie on OpenShift, so I don't fully understand all the concepts.
OK, so the answer was adding this environment variable:
JAVA_OPTS_APPEND=-javaagent:{{path_to_elastic_apm_agent}}
This variable makes the image launch your Java application with the extra options appended.
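On OpenShift this can be set on the deployment without rebuilding the image, for example (a sketch; the resource name and agent path are assumptions):

```shell
# Hypothetical deployment name and agent location inside the image
oc set env deployment/ms-salarie \
  JAVA_OPTS_APPEND="-javaagent:/deployments/elastic-apm-agent.jar -Delastic.apm.server_urls=http://apm-server:8200"
```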
The Java agent can be configured in multiple ways, one of which is command-line system properties. Others include packaging an elasticapm.properties resource file or setting environment variables.
Check out the docs. Small excerpt:
Properties file: The elasticapm.properties file is located in the same folder as the agent jar, or provided through the config_file option. Supports dynamic config.
Environment variables: All configuration keys are in uppercase and prefixed with ELASTIC_APM_.
Different option sources have different priority and precedence.
To attach the agent to a running JVM process (from within your application), you can use the API to self-attach.
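For example, the same settings as the -D flags in the question, expressed as environment variables (values mirror the local command above):

```shell
# Agent configuration via environment variables instead of -D system properties
export ELASTIC_APM_SERVICE_NAME=ms-salarie
export ELASTIC_APM_SERVER_URLS=http://localhost:8200
export ELASTIC_APM_ENVIRONMENT=development
export ELASTIC_APM_APPLICATION_PACKAGES=com.leanerp.salarie
env | grep '^ELASTIC_APM_'
# Then start the app with only the -javaagent flag:
# java -javaagent:elastic-apm-agent/elastic-apm-agent-1.26.0.jar -jar target/salarie-1.1.3-SNAPSHOT.jar
```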
We're using the fabric8 Maven plugin to build and deploy our Maven projects to Kubernetes.
I can't quite figure out how to use the fabric8:helm goal.
I've tried to get some details about what exactly it does, but I still don't get it:
$ mvn help:describe -Dgoal=helm -DgroupId=io.fabric8 -DartifactId=fabric8-maven-plugin -Ddetail
And this is the output:
fabric8:helm
Description: Generates a Helm chart for the kubernetes resources
Implementation: io.fabric8.maven.plugin.mojo.build.HelmMojo
Language: java
Bound to phase: pre-integration-test
Available parameters:
helm
(no description available)
kubernetesManifest (Default:
${basedir}/target/classes/META-INF/fabric8/kubernetes.yml)
User property: fabric8.kubernetesManifest
The generated kubernetes YAML file
kubernetesTemplate (Default:
${basedir}/target/classes/META-INF/fabric8/k8s-template.yml)
User property: fabric8.kubernetesTemplate
The generated kubernetes YAML file
skip (Default: false)
User property: fabric8.skip
(no description available)
...
Inside our projects we have our artifacts inside src/main/fabric8. The content of this folder is:
tree src/main/fabric8
src/main/fabric8
├── forum-configmap.yaml
├── forum-deployment.yaml
├── forum-route.yaml
└── forum-service.yaml
These files are only related to Kubernetes.
I've not been able to find any snippet anywhere about:
What kind of files do I need to add to my project? Helm files?
What exactly is the output of this goal?
Just to play around with this I grabbed a basic spring boot project with Web dependency and a RestController created with the spring initializr. The fabric8 plugin docs say to run the resource goal first so I went to the base directory of my project and ran mvn -B io.fabric8:fabric8-maven-plugin:3.5.41:resource. That generated kubernetes descriptors for my project under /target/classes/META-INF/fabric8/.
So then I ran mvn -B io.fabric8:fabric8-maven-plugin:3.5.41:resource io.fabric8:fabric8-maven-plugin:3.5.41:helm. At first I got an error that:
target/classes/META-INF/fabric8/k8s-template does not exist so cannot make chart <project_name>. Probably you need run 'mvn fabric8:resource' before.
But the descriptors did exist under /target/classes/META-INF/fabric8/kubernetes/ so I just renamed that directory to k8s-template and ran again. Then it created a helm chart for me in the /target/fabric8/helm/kubernetes/ directory.
So I followed the docs literally and ran helm install target/fabric8/helm/kubernetes/. That complained there was no Chart.yaml. I then realised that I had followed the docs too literally and needed to run helm install target/fabric8/helm/kubernetes/<project_name>. That did indeed create a Helm release and install my project to Kubernetes. It didn't start, as I hadn't created any Docker image. It seems to default to an image name of <groupId>/<artifactId>:<version/snapshot-number>. Presumably that would have been there if I'd also run the build and push goals and had a Docker registry accessible to my Kubernetes cluster.
So, in short, the helm goal generates a basic Helm chart. I believe you'd need to customise this chart manually if you have an app that needs to access shared resources with URLs or credentials being injected (e.g. for a database, message broker, or authentication system), if your app exposes multiple ports, or if you need initContainers or custom startup parameters. Presumably you are trying to customize these generated resources and are doing so by putting files in your src/main/fabric8/. If it's the k8s files that you're trying to feed through, then I guess they'd have to go in src/main/fabric8/kubernetes/ in order to feed through into the expected target/ directory, and also be named <project-name>-deployment.yml and <project-name>-svc.yml.
I guess the generated chart is at least a starting-point and presumably the experience can be a bit smoother than my experimenting was if you add all the plugin to the pom and do all the setup rather than running individual goals.
I'm trying to auto-deploy a Laravel app from a GitHub branch to AWS EC2 or Elastic Beanstalk (preferred), but I haven't found the right solution. One of the tutorials I have followed is the one below. Does anyone have a solution for this?
Thank you in advance!
https://aws.amazon.com/blogs/devops/building-continuous-deployment-on-aws-with-aws-codepipeline-jenkins-and-aws-elastic-beanstalk/
You can do this with the following steps:
Set up Jenkins with the GitHub plugin
Install the AWS Elastic Beanstalk CLI
Create an IAM user with Elastic Beanstalk deployment privileges and add the access keys to the AWS CLI (if Jenkins runs inside an EC2 instance, instead of creating a user, you can create a role with the required permissions and attach it to the EC2 instance)
In the Jenkins project, clone the branch, go to the project directory, and execute eb deploy in a shell script to deploy it to Elastic Beanstalk. (You can automate this with a build trigger when new code is pushed to the branch.)
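The shell step in the last item might look like this (a sketch; the application name, environment name, and platform are assumptions, and it needs working AWS credentials):

```shell
# Hypothetical Jenkins "Execute shell" build step
cd "$WORKSPACE"
# One-time interactive setup can be replaced by flags:
eb init my-laravel-app --region us-east-1 --platform "PHP 7.4 running on 64bit Amazon Linux 2"
eb deploy my-laravel-env
```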
Alternatively, there are other approaches, for example:
Blue/green deployment with Elastic Beanstalk
Deploying a Git branch to a specific environment
Using AWS CodeStar to set up the deployment using templates (internally it will use AWS CodePipeline, CodeDeploy, etc.)
An alternative to using eb deploy is to use the Jenkins AWS Beanstalk Publisher plugin https://wiki.jenkins.io/display/JENKINS/AWS+Beanstalk+Publisher+Plugin
This can be installed by going to Manage Jenkins > Manage Plugins > Search for AWS Beanstalk Publisher. The root object is the zip file of the project that you want to deploy to EB. The Build Steps can execute a step to zip up the files that are in your repo.
You will still need to fill in the Source Control Management section of the Jenkins Job configuration. This must contain the URL of your GitHub repo, and the credentials used to access them.
Add an Execute shell build step that zips up the files from the repo that you want to deploy to EB. For example, zip -r myfiles.zip * will zip up all the files within your GitHub repo.
Then use the AWS Beanstalk Publisher plugin and specify myfiles.zip as the value of the Root Object (File / Directory).
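To keep repository metadata out of the bundle, the zip step can exclude patterns with -x. A runnable sketch (the scratch files below are only there to demonstrate the command):

```shell
# Throwaway repo layout just to demonstrate the zip step
mkdir -p demo-repo/src demo-repo/.git
printf 'app code' > demo-repo/src/index.php
printf 'ref: refs/heads/main' > demo-repo/.git/HEAD
# Zip everything except .git; zipping "." (not "*") also picks up other dotfiles
(cd demo-repo && zip -qr ../myfiles.zip . -x "*.git*")
unzip -l myfiles.zip
```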
I am trying to deploy my static application to Cloud Foundry using the cf-gradle-plugin.
In my Gradle file I have defined the buildpack:
buildpack = "https://github.com/cloudfoundry/staticfile-buildpack.git#v1.1.0"
and the files to deploy:
file = file('.')
I would like to specify which files should not be pushed to CF. I tried to do it by putting a .cfignore file in the root directory, but it does not work. Does anybody know how to filter the files that are deployed to CF when using the cf-gradle-plugin and the staticfile buildpack?
There isn't a way to filter files with the v1 version of the Gradle plugin. As you probably saw from your issue on the project, .cfignore support should arrive in v2.
For now, you could use Gradle to package the contents of your project into a zip archive, using the sophisticated filtering rules that Gradle supports, then specify the zip file in the Cloud Foundry Gradle plugin.
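That workaround could look roughly like this (a sketch in the Gradle Groovy DSL; the task name, include list, and archive name are assumptions, and the push task would need to depend on the zip task):

```
// Hypothetical Zip task that packages only the files you want pushed
task cfBundle(type: Zip) {
    from projectDir
    include '*.html', 'css/**', 'js/**', 'images/**', 'Staticfile'
    archiveName 'static-site.zip'
    destinationDir file("$buildDir/cf")
}

cloudfoundry {
    buildpack = 'https://github.com/cloudfoundry/staticfile-buildpack.git#v1.1.0'
    // Point the plugin at the filtered archive instead of the whole project dir
    file = file("$buildDir/cf/static-site.zip")
}
```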