To conserve resources (and costs), I would like to put more than one WAR file (representing different apps) on the same EC2 Beanstalk instance.
I would then like app A to map to myapp.elasticbeanstalk.com/applA using warA, and app B to map to myapp.elasticbeanstalk.com/applB using warB.
But the console only lets you upload a single WAR per instance.
1) So I understand that it's not possible with the current interface. Am I right?
2) Still, is it possible to achieve this via "non-standard" means: uploading warA via the interface and copying / updating warB to /tomcat6/webapps via ssh, ftp, etc.?
3) With (2), my concern is that warB will be lost each time the Beanstalk health checker decides to terminate the instance (after successive failed checks, for example) and start a new one. I would then have to make warB part of the customized AMI used by app A and create a new version of that AMI each time I update warB.
Please, help me
regards
didier
You are correct! You cannot (yet) have multiple WARs in Beanstalk.
The Amazon forum answer is here:
https://forums.aws.amazon.com/thread.jspa?messageID=219284
There is a workaround, though it uses plain EC2 rather than Beanstalk:
https://forums.aws.amazon.com/thread.jspa?messageID=229121
http://blog.jetztgrad.net/2011/02/how-to-customize-an-amazon-elastic-beanstalk-instance/
Shameless plug: while not directly related, I've made a plugin for Maven 2 to automate Beanstalk deployments, and Elastic MapReduce as well. Check out http://beanstalker.ingenieux.com.br/
This is an old question, but it took me some time to find a more up-to-date answer, so I thought I'd share my findings.
Multiple WAR deployment is now supported natively by Elastic Beanstalk (and has been for some time).
Simply create a new zip file with each of your WAR files inside it. If you want one of them to be available at the root context, name it ROOT.war, just as you would if you were deploying to Tomcat manually.
Your zip file structure should look like this:
MyApplication.zip
├── .ebextensions
├── foo.war
├── bar.war
└── ROOT.war
Full details can be found in the Elastic Beanstalk documentation.
The .ebextensions folder is optional and can contain configuration files that customize the resources deployed to your environment. See Elastic Beanstalk Environment Configuration for information on using configuration files.
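For illustration only, assembling such a bundle from the command line might look roughly like this (the WAR names match the example tree above and the zip name is arbitrary; omit .ebextensions if you have no customizations):
# run from the folder containing the built WARs and the optional .ebextensions folder
zip -r MyApplication.zip ROOT.war foo.war bar.war .ebextensions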
There is another hack that lets you boot an arbitrary JAR by installing Java and using a Node.js boot script:
http://docs.ingenieux.com.br/project/beanstalker/using-arbitrary-platforms.html
Hope it helps
Related
I have a basic Spring Boot application that I need to deploy in OpenShift. The OpenShift setup contains an application.yml/application.properties in a var/config directory. After the application is deployed in OpenShift, I need to read the properties/yml file from that directory in my application. Is there any process for doing this?
How are you building your container? Are you using S2I? Building the container yourself with Docker/Podman? And what's "var/config" relative to? When you say "the OpenShift contains", what do you mean?
In short, an application running in an OpenShift container has access to whatever files you add to the container. There is effectively no such thing as an "OpenShift directory": you have access to what is in the container (and what is mounted to it). The whole point of containers is that you are limited to that.
So your question probably boils down to "how do I add a config file into my container?". That will, however, depend on where the file is. Most tools for building containers will grab the standard places you'd have a config file, but if you have it somewhere non-standard you will have to copy it into the container yourself.
See this Spring Boot guide for how to use a Dockerfile to do it yourself. See here for using S2I to assemble the image (what you'd do if you were using the OpenShift developer console to pull your source directly). In general S2I tends to do what is needed automatically, but if your config is somewhere odd you may have to write an assemble script.
This Dockerfile doc on copying files into a container might also be helpful.
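As a rough sketch of the Dockerfile route (every name and path below is a placeholder, not something taken from your project), copying the config into the image and pointing Spring Boot at it could look like this:
# hypothetical Dockerfile sketch; jar name and config paths are placeholders
FROM eclipse-temurin:17-jre
COPY target/myapp.jar /app/myapp.jar
# copy the externalized properties file into the image so the app can read it at runtime
COPY var/config/application.properties /var/config/application.properties
# spring.config.additional-location is a standard Spring Boot property for extra config locations
ENTRYPOINT ["java", "-jar", "/app/myapp.jar", "--spring.config.additional-location=file:/var/config/"]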
We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Creating a ConfigMap
Installing a Service.yaml
Installing a Deployment.yaml
Installing an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions and I hope to get an answer here before absorbing the entire doc along with Go and SPRIG and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resource/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration... The explanation there is not great, but the idea behind it is to build one container and deploy that same container in any environment without having to modify it, plus to have the ability to change the configuration without creating a new release (the latter cannot be done if the config is baked into the container). This allows you, for example, to change a DB connection pool size without a release (or any other config parameter). It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally, you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties), and have the parameters that change between environments in helm environment variables. I know you mentioned you don't want to do this, but this is considered a good practice nowadays (might be considered otherwise in the future...).
I also think it's important to be pragmatic, if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and probably using the suggestion I gave to change a command line parameter to pick the config file works well. At the same time, keep in mind the 12 factor-app approach in case you find out you do need it in the future.
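As a rough sketch of that suggestion (all names below are made up, not taken from your chart): keep the per-environment values in separate values files and surface them as environment variables in the Deployment template.
# values-dev.yaml (hypothetical)
config:
  dbPoolSize: "5"
  logLevel: "DEBUG"

# templates/deployment.yaml (excerpt from the container spec)
        env:
          - name: DB_POOL_SIZE
            value: {{ .Values.config.dbPoolSize | quote }}
          - name: LOG_LEVEL
            value: {{ .Values.config.logLevel | quote }}
You would then deploy with something like helm install my-release ./deploy -f values-dev.yaml, swapping in the values file for the target environment.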
In AWS Elastic Beanstalk, there is a wizard flow for deploying Node.js apps. When I get to the step for "upload your own" application source, it describes their three requirements in generic terms: zip file, less than 500 MB, no parent folder.
But they stop there. No specifics.
I dropped out to bash and ran...
ng build --prod
...and now have a dist folder. So... what do I include in my zip file, and at what folder level? I have tried just /dist, and also /myapp/dist, which included all the other loose files in /myapp but no other subfolders such as /src. I have looked all over the web, but don't see what should be a fairly simple tutorial on zipping up an Application Source Bundle for AWS EC2.
What should be included in the zip file for upload?
The cardinal sin in my question above was attempting to run my Angular 5 app in AWS by using their choice of Node.js as my server platform. Here is what I learned (with some help from folks like Albert Haff): Angular 5 uses Node (ng serve) to simulate a webserver while you code. However, even though there is a supported --prod flag, it's not to be used in production! It's really easy (and tempting) to select Node.js as the environment when deploying your Angular 5 app via Beanstalk -- but don't do it!
from within your Angular 5 project folder, run ng build --prod (and consider adding --aot)
if you can, optionally compress the build output, e.g. from the project folder: for i in dist/*; do brotli $i; done
from within the /dist folder, zip up ALL the contents, including subfolders (a consolidated command sketch follows below).
go to Beanstalk, and as you create a webserver environment, select Tomcat (or any other plain old webserver, but DON'T pick Node.js even though it's on the list!).
on Application Code, select to Upload Your Own, and browse to the .zip file you created in step 3 above.
click Create Environment and in a few minutes your Angular 5 app will be serving up on the internet.
Now, from here you will likely need to connect up your domain name. Use Route 53 for that.
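For reference, steps 1-3 above might look roughly like this on the command line (brotli must be installed separately, and the bundle name is arbitrary):
ng build --prod                          # step 1: production build (consider --aot as well)
for i in dist/*; do brotli $i; done      # step 2 (optional): compress the build output
cd dist
zip -r ../myapp-bundle.zip .             # step 3: zip ALL of dist's contents, including subfolders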
I have been following this guide:
https://deliciousbrains.com/scaling-laravel-using-aws-elastic-beanstalk-part-3-setting-elastic-beanstalk/
However I am stuck at this point.
Not in terms of something not working, but in terms of how it should be done properly. Which app should I deploy?
Is it the development app that is tested and deployed? Do I create another instance in AWS that will be used only to deploy finished apps? What is the pattern to follow?
At the moment I have a local development server which runs on my PC, and also one development EC2 instance on AWS. Do I need more than that on top of Elastic Beanstalk?
Please advise me! Thanks!
The following pattern is the one that best fits your need. You're not just looking for a pattern, but an architecture. I'll try to help you with the information you provided.
First it is important that you really understand what Beanstalk is and how it works. See: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/dg/Welcome.html
Answering your question: applications are typically deployed to Beanstalk for scalable production use, but nothing prevents you from setting up development environments for testing, too.
You do not need to create an instance to deploy; you can deploy from your own local machine using the console, CLI, or API. Look:
Console: https://sa-east-1.console.aws.amazon.com/elasticbeanstalk/home
EB Cli: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/dg/eb-cli3.html
API: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/api/Welcome.html
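For example, a minimal EB CLI workflow run from your local project folder might look roughly like this (application, environment, platform, and region names are placeholders):
eb init my-laravel-app --platform php --region sa-east-1   # set up the local project for Beanstalk
eb create my-laravel-env                                   # create a new Beanstalk environment
eb deploy                                                  # deploy the current project to that environment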
Having said that, here is a scenario that is useful in several cases:
You create a Beanstalk application from the console or CLI and configure the integration with AWS CodeCommit. CodeCommit saves you from having to upload the whole project on each deploy.
You create an EC2 instance to perform the deployment. This instance holds a Git repository of your project along with the Beanstalk environment settings (environment variables, for example), and deploys to Beanstalk using CodeCommit.
This scenario is very useful for a team working on a Beanstalk project, because you can use the deployment instance to hide sensitive details and standardize deployment patterns.
I'm trying to do some clustering testing and I am setting up multiple RabbitMQ services on a single Windows machine. I am able to set the environment variables RABBITMQ_NODENAME, RABBITMQ_SERVICENAME, and RABBITMQ_NODE_PORT then run RabbitMQ-Service Install to have a new RabbitMQ service installed under a different name.
My question is regarding the configuration file. Based on what I read on the RabbitMQ site, the configuration file defaults to the %AppData%\RabbitMQ directory.
I'm just having trouble trying to understand how it should be setup so I can have 3 instances of the service running with their own configuration.
Do I run the installation under a different local or domain account so it gets placed under a different %AppData%\RabbitMQ directory or can I add a directive to the service to look in a particular directory for the configuration file for that particular service?
Also, how does RABBITMQ_BASE come into play? Is that only for data and log files or does that also apply to the configuration file? I'm not sure if once I have the service setup with BASE defined as a specific path I can place a new rabbitmq.config under the root of that path.
Please confirm and provide any additional assistance. Thank you in advance!
For now I'm testing on Windows, but I plan on converting to Linux once I have this all working correctly and understood. Unfortunately, I've inherited the current environment and it's already installed and running on Windows servers. They just want me to set up clustering for it, so I'm trying to simulate the cluster on my workstation.
Nevermind, I found out what I needed. The environment variable RABBITMQ_CONFIG_FILE can be used to override the location of the default config file.
http://www.rabbitmq.com/relocate.html
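For example, installing a second node as its own Windows service might look roughly like this in a command prompt (node name, port, service name, and paths are placeholders):
set RABBITMQ_NODENAME=rabbit2
set RABBITMQ_NODE_PORT=5673
set RABBITMQ_SERVICENAME=RabbitMQ2
set RABBITMQ_BASE=C:\RabbitMQ\rabbit2
REM RABBITMQ_CONFIG_FILE points at rabbitmq.config without the .config extension (classic config format)
set RABBITMQ_CONFIG_FILE=C:\RabbitMQ\rabbit2\rabbitmq
rabbitmq-service install
rabbitmq-service start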
You can run multiple RabbitMQ instances on 1 machine without clustering. You just need to change the ports and the node name in rabbitmq-defaults, rabbitmq-env and config files. If you want them as a service you can just create them from the already configured instances.
HERE is a detailed guide on how to do that. It's pretty easy and straightforward.
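As a small sketch of the config-file side of that, a classic rabbitmq.config for a second instance might override the listener port like this (the port number is just an example):
%% classic Erlang-term config format used by rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [5673]}
  ]}
].
Combined with a distinct node name and service name, that is enough to keep the instances from colliding on the same machine.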