I'm using the Beanstalk Maven Plugin, v1.3.5, to deploy WAR files onto my Elastic Beanstalk instances. All was well until I recently started a new AWS account, which seems to have forced me onto some new policies (my previous account was about 5 years old). Now I can no longer deploy to S3.
Validating my security setup succeeds, so all is well there:
mvn br.com.ingenieux:beanstalk-maven-plugin:1.1.1:show-security-credentials
but running:
mvn -X beanstalk:upload-source-bundle
spits out
The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
I've grepped the plugin code, and there is reference to AWS4-HMAC-SHA256 in there, but I don't see how to 'turn it on'.
Does anyone have beanstalk:upload-source-bundle working in the newer AWS S3 environments?
Yes, it works using the 1.4.0-SNAPSHOT. The main developer says this will be turned into a release soon.
See https://groups.google.com/forum/#!topic/beanstalker-users/c6GeuA3NA6U for discussion on this topic.
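Until the release lands, you can invoke the snapshot goal explicitly. This is a hypothetical sketch: it assumes the 1.4.0-SNAPSHOT is resolvable from a snapshot repository configured in your pom.xml (check the mailing-list thread above for the actual repository details):

```shell
# Hypothetical invocation of the snapshot build; requires a snapshot
# repository for br.com.ingenieux to be configured in your pom.xml.
mvn br.com.ingenieux:beanstalk-maven-plugin:1.4.0-SNAPSHOT:upload-source-bundle
```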
I have spent a week trying to set up Search Guard and OpenShift in a Docker container and I am completely stuck...
I am working on a project where I plan to have clients, each given access to only their own indices. X-Pack and Search Guard Enterprise work perfectly; unfortunately, until I have any clients I cannot pay yearly fees of several thousand dollars.
I tried to set up Search Guard, turn off enterprise mode, and then install the openshift-elasticsearch-plugin.
If I install both, then after much tuning I get an error that you cannot enable functionality in OpenShift that is already enabled by Search Guard.
When I install only the openshift-elasticsearch-plugin and configure all settings, it fails with "Failed authentication for null".
Here is the repository https://github.com/SvitlanaShepitsena/Lana
I have a small issue (somehow sleep does not work), so in order to start the cluster you need:
docker-compose up
docker ps
docker exec -it [container-id] /bin/bash
./sgadmin.sh
After a week of work I am desperate and beg for help :-).
The openshift-elasticsearch-plugin is designed to add specific features to the OpenShift logging stack. Among other things, it provides dynamic ACLs for users based on their OpenShift permissions. I would suggest containerizing an Elasticsearch image and adding the Search Guard plugins directly. Alternatively, versions of Elasticsearch later than the one the plugin is designed for (2.4.4) can use X-Pack, which provides similar security.
It comes preinstalled (https://hub.docker.com/r/elastic/elasticsearch) and can be configured as described at https://www.elastic.co/guide/en/elasticsearch/reference/current/docker.html.
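As a concrete starting point, here is a minimal sketch of running the official image with X-Pack security enabled. The image tag and password are placeholders; adjust them to the version you actually need:

```shell
# Hypothetical single-node run of the official Elasticsearch image
# with X-Pack security turned on; tag and password are placeholders.
docker run -p 9200:9200 \
  -e "discovery.type=single-node" \
  -e "xpack.security.enabled=true" \
  -e "ELASTIC_PASSWORD=changeme" \
  docker.elastic.co/elasticsearch/elasticsearch:6.8.23
```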
I have been following this guide:
https://deliciousbrains.com/scaling-laravel-using-aws-elastic-beanstalk-part-3-setting-elastic-beanstalk/
However I am stuck at this point.
Not in terms of something not working, but in terms of how it should be done properly. Which app should I deploy?
Is it the development app that is tested and deployed? Do I create another instance in AWS that is used only to deploy finished apps? What pattern should I follow?
At the moment I have a local development server running on my PC, and also one development EC2 instance on AWS. Do I need more than that on top of Elastic Beanstalk?
Please advise! Thanks!
You're not just looking for a pattern, but for an architecture. I'll try to help with the information you provided.
First, it is important that you really understand what Beanstalk is and how it works. See: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/dg/Welcome.html
To answer your question: applications are typically deployed to Beanstalk for scalable production, but nothing prevents you from setting up development environments for testing, too.
You do not need to create an instance to deploy; you can deploy from your own local machine using the console, the CLI, or the API:
Console: https://sa-east-1.console.aws.amazon.com/elasticbeanstalk/home
EB Cli: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/dg/eb-cli3.html
API: http://docs.aws.amazon.com/en/elasticbeanstalk/latest/api/Welcome.html
Having said that, here is a scenario that is useful in many cases:
You create a Beanstalk application from the console or CLI and configure integration with AWS CodeCommit. CodeCommit saves you from having to upload the whole project on every deploy.
You create an EC2 instance to perform deployments. This instance holds a git repository of your project along with the Beanstalk environment settings (environment variables, for example), and deploys to Beanstalk via CodeCommit.
This scenario is very useful for team projects because the deployment instance lets you hide sensitive details and standardize your deployment process.
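To sketch the no-extra-instance path, deploying straight from a local machine with the EB CLI looks roughly like this (application name, environment name, region, and platform are placeholders, not from the question):

```shell
# Hypothetical EB CLI workflow from a local checkout;
# names, region, and platform are placeholders.
eb init my-laravel-app --region us-east-1 --platform php
eb create my-dev-env          # one-time environment creation
eb deploy                     # zip the project and deploy it
```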
This is my setup:
Bitbucket Repo of HTML docs.
Elastic Beanstalk Environment
EC2 c3 Instance (8GB Elastic Block Store attached)
So I connect Deploybot to Elastic Beanstalk successfully and deploy. The path is the default.
Success, so it seems.
output Creating application archive for Elastic Beanstalk.
output Application archive created.
output Uploading application archive to S3 bucket deploybot-elastic-beanstalk-mysite-997c5d66.
output Application archive uploaded.
output Creating application version mysite-fbba70de-5e4736.
output Application version created.
output Updating environment e-txarhpt4wp with new version.
output Environment was updated and will now be refreshing.
But no... where are the files?
I drop in with FileZilla (SFTP) and cannot find them anywhere on the server.
Moreover, my path is actually:
var/www/vhosts/my.site.yay/html/
If I change the path in the Deploybot environment settings, the repo never successfully deploys; instead all I get is 'bypassed' with every single git push, which indicates to me that Deploybot is not actually connecting to anything and thus constantly sees 'no changes'.
Anyone got any clues?
I have spent several hours searching prior to this post, and there is almost nothing written about using Deploybot with AWS besides the official Deploybot documentation.
Thanks in advance to those with potential answers.
Doh!
I had set up my EC2 instance to use Nginx instead of Apache, deleting Apache (httpd).
While writing this question I looked more closely at the Deploybot log and traced that Deploybot pushes a .zip to an S3 bucket it creates, then triggers an Elastic Beanstalk environment refresh using whatever built-in webhook there must be.
Whatever that webhook uses looks for Apache (httpd), so the whole thing fails, as revealed in the Beanstalk environment event logs:
ERROR
[Instance: i-0dc350d4f82711400] Command failed on instance. Return code: 1 Output: (TRUNCATED)...tory # rb_sysopen - /etc/httpd/conf/httpd.conf (Errno::ENOENT) from /opt/elasticbeanstalk/support/php_apache_env:81:in update_apache_env_vars' from /opt/elasticbeanstalk/support/php_apache_env:125:in' Using configuration value for DocumentRoot:. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/05_configure_php.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
So I switched Deploybot to SFTP with a public key and it works. (I'm not clever enough to edit or write my own Beanstalk environment-refresh webhook yet.)
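For reference, the flow the log describes (zip to S3, new application version, environment update) can be reproduced by hand with the AWS CLI, which is handy for finding the failing step. The bucket, application name, and version label are placeholders; only the environment ID comes from the log above:

```shell
# Hypothetical manual equivalent of the traced Beanstalk deploy:
# upload a zip to S3, register it as an application version, update the env.
aws s3 cp site.zip s3://my-deploy-bucket/site.zip
aws elasticbeanstalk create-application-version \
  --application-name mysite \
  --version-label v-manual-1 \
  --source-bundle S3Bucket=my-deploy-bucket,S3Key=site.zip
aws elasticbeanstalk update-environment \
  --environment-id e-txarhpt4wp \
  --version-label v-manual-1
```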
We're hosting on EC2. I've read this article on provisioning Tentacles. Is there a script that will then tell that provisioned server to grab the latest packages (from the latest release of the environment it's provisioned for)?
Skip actions are step-related; however, I've just traced the POST request and there's a field SpecificMachineIds, so you CAN deploy to a specific machine.
It feels a bit smelly, but you'd have to get the new ID of the machine from the API and then use it in your deployment request.
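To illustrate, creating a deployment against the Octopus REST API with that field might look like this. The server URL, release/environment/machine IDs, and API key are all placeholders; the SpecificMachineIds field name is the one observed in the traced request:

```shell
# Hypothetical deployment request targeting one machine;
# SpecificMachineIds is the field seen in the traced POST.
curl -X POST "https://octopus.example.com/api/deployments" \
  -H "X-Octopus-ApiKey: $OCTOPUS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "ReleaseId": "Releases-123",
        "EnvironmentId": "Environments-1",
        "SpecificMachineIds": ["Machines-42"]
      }'
```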
EDIT
A quick Google search for SpecificMachineIds turned up this, which is probably what you need:
Octopus Deploy Support Question
I have been trying hard to deploy a Java application through Cloud Foundry onto an Amazon EC2 instance using the console, as described in this Cloud Foundry screencast:
http://classic.cloudfoundry.com/screencasts.html
but I keep failing to reach the console where I can deploy by choosing a WAR from the local file system, as explained in the video.
What is the correct URL where I can find a console similar to the one shown in the video?
And I have already tried signing up at http://classic.cloudfoundry.com, but I have not received an approval or acknowledgement mail from the team even after 4 days.
Has anyone faced a similar issue recently?
If so, please help!
The Cloud Foundry Classic service (at http://classic.cloudfoundry.com) is no longer accepting new accounts.
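For anyone landing here now: current Cloud Foundry distributions generally deploy from the command line rather than through a console upload like the one in the screencast. A rough sketch with the cf CLI, where the API endpoint, app name, and WAR path are placeholders for whatever provider you use:

```shell
# Hypothetical push of a WAR with the modern cf CLI;
# endpoint and names are placeholders.
cf login -a https://api.example-cf-provider.com
cf push my-java-app -p target/my-app.war
```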