I have revised an existing deployment with a new version of the component.
However, the deployment failed on a core device because there was a local deployment with a different version that was used for testing.
I have removed that local deployment and now I want to rerun the deployment on that device.
Is that possible? Or do I need to revise the existing deployment (keeping everything the same, so not really a revision)?
For Greengrass v2 there is no such thing as a 'restart' deployment (not sure about Greengrass v1). If a deployment fails, read the error messages in the log files, fix the issue, revise the deployment with the correct parameters and deploy again, or create a new deployment.
Here you can find common deployment issues:
https://docs.aws.amazon.com/greengrass/v2/developerguide/troubleshooting.html#greengrass-core-deployment-issues
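If you prefer the CLI, a revision can be created that way too. A minimal sketch, with a placeholder core device ARN and component name:

# Inspect past deployments for the target (placeholder ARN)
$ aws greengrassv2 list-deployments \
    --target-arn arn:aws:iot:us-east-1:123456789012:thing/MyGreengrassCore

# Creating a new deployment for the same target supersedes the previous one,
# which effectively re-runs the deployment on that device
$ aws greengrassv2 create-deployment \
    --target-arn arn:aws:iot:us-east-1:123456789012:thing/MyGreengrassCore \
    --components '{"com.example.MyComponent":{"componentVersion":"1.0.1"}}'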
I have just started developing a Golang app and have deployed it on Google App Engine. But when I try to connect my local server to the Cloud SQL instance through the proxy, I can only connect via TCP.
However, when connecting to the same Cloud SQL instance from App Engine, I can only connect via a Unix socket.
To cope with this, I have made changes to my local environment handler file so that it can adapt to both the local and GCloud configs, but I'm not sure how I can skip the update on just this file for GCloud. To be clear, I don't want App Engine to delete this file; I just want the CLI to avoid uploading the new version of the handler file.
I use this command for deploying: gcloud app deploy
Currently, I deploy directly to App Engine instead of pushing it through VCS. Also, if there is a way to detect whether the app is running on App Engine, that would be really great.
TIA
Got it. In case anyone gets stuck in such a situation, you can make use of the environment variables set in GCloud App Engine. Although the documentation lists the environment variables, I would still recommend checking the environment variables in the Cloud Console.
Documentation link for Go 1.12+ Runtime env:
https://cloud.google.com/appengine/docs/standard/go/runtime
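As for detecting whether the app is running on App Engine: the Go 1.12+ standard runtime sets GAE_ENV to "standard", so a check along these lines works (a minimal sketch; the printed messages are just placeholders for your own connection logic):

package main

import (
	"fmt"
	"os"
)

// onAppEngine reports whether the process is running on the App Engine
// standard environment: the Go 1.12+ runtime sets GAE_ENV to "standard".
func onAppEngine() bool {
	return os.Getenv("GAE_ENV") == "standard"
}

func main() {
	if onAppEngine() {
		fmt.Println("use the Unix socket to reach Cloud SQL")
	} else {
		fmt.Println("use TCP via the local Cloud SQL proxy")
	}
}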
I have a single repository on GitHub that hosts my lambda functions. I would like to be able to deploy new versions whenever new logic is pushed to master.
I did a lot of research and found a few different approaches, but nothing really clear. I would like to know what others feel would be the best way to go about this, and maybe some detail (if possible) on how that pipeline is set up.
Thanks
Welcome to StackOverflow. You can improve your question by reading this page.
You can set up a CI/CD pipeline using CircleCI with its GitHub integration (it is an online service, so you don't need to maintain anything yourself, like a Jenkins server, for example).
Upon every commit to your repository, a CircleCI build will be triggered. Once the build process is over, you can run sls deploy or sam deploy, use Terraform, or even create a script to upload the .zip file from your GitHub repo to an S3 bucket and then, within your script, invoke the create-function command. There's an example of how to deploy serverless applications using CircleCI along with the Serverless Framework here.
Other options include Travis CI, AWS CodeDeploy, or even maintaining your own CI/CD server. The same logic applies to all of these tools, though: commit -> build -> deploy (using one of the tools you've chosen).
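To illustrate the "script" option above, a minimal sketch with the plain AWS CLI (bucket, function name, role, handler and runtime are all placeholders):

# Upload the packaged code to S3
$ aws s3 cp function.zip s3://my-deploy-bucket/function.zip

# First deployment: create the function (placeholder role/handler/runtime)
$ aws lambda create-function \
    --function-name my-function \
    --runtime nodejs18.x \
    --role arn:aws:iam::123456789012:role/my-lambda-role \
    --handler index.handler \
    --code S3Bucket=my-deploy-bucket,S3Key=function.zip

# Subsequent deployments: just point the function at the new code
$ aws lambda update-function-code \
    --function-name my-function \
    --s3-bucket my-deploy-bucket \
    --s3-key function.zip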
EDIT: After @Matt's answer, it clicked that the OP never mentioned the Serverless Framework (I somehow thought they were already using it, so I had pointed them to tutorials using the Serverless Framework). I then decided to update my answer with a few other options for serverless deployment.
I know that this isn't exactly what you asked for, but I use the Serverless Framework (https://serverless.com) for deployment and I love it. I don't do my deployments when I push to my repo. Instead, I push to my repo after I've deployed. I like this flow because a deployment can fail due to so many things, while pushing to GitHub is much less likely to fail. This way, I prevent code that failed to deploy from reaching my master branch.
I don't know if you're familiar with the framework, but it is super simple. The website describes the simple steps to create and deploy a function like this:
# Step 1. Install serverless globally
$ npm install serverless -g

# Step 2. Create a serverless function
$ serverless create --template hello-world

# Step 3. Deploy to cloud provider
$ serverless deploy

# Your function is deployed!
$ http://xyz.amazonaws.com/hello-world
There are also a number of plugins you can use to integrate easily with custom domains on APIGateway, prune older versions of lambda functions that might be filling up your limits, etc...
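For example, the (third-party) prune plugin can be installed straight from the CLI; a sketch:

# Installs the plugin via npm and registers it in serverless.yml
$ serverless plugin install --name serverless-prune-plugin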
Overall, I've found it to be the easiest way to manage and deploy my lambdas. Hope it helps!
Given that you're using AWS Lambda, you may want to consider CodePipeline to automate your release process. SAM (https://docs.aws.amazon.com/lambda/latest/dg/serverless_app.html) may also be interesting.
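For reference, the basic SAM CLI flow looks roughly like this (assuming a template.yaml in the repo; --guided prompts for the stack name, region and artifact bucket):

# Build the functions defined in template.yaml
$ sam build

# Package and deploy via CloudFormation
$ sam deploy --guided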
I too had the same problem. I wanted to manage 12 lambdas with one git repository, and I solved it by introducing Travis CI. Travis CI saved time and is really useful in many ways. You can check the logs whenever you want, and you can share the logs with anyone by sharing the URL. Sample documentation of all the steps can be found here. You can go through it. 👍
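The deploy step in such a multi-lambda repo can be as simple as a loop over the function directories. A rough sketch (the directory and function names are hypothetical):

# deploy.sh - zip each function directory and update the matching lambda
for fn in lambda-a lambda-b lambda-c; do
  (cd "$fn" && zip -r "../$fn.zip" .)
  aws lambda update-function-code --function-name "$fn" --zip-file "fileb://$fn.zip"
done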
I am trying to do a PoC on how to achieve continuous integration and deployment using VSTS.
I have been successful with the build process, i.e. VSTS pulls the code (an ASP.NET-based application) and builds it. The build completes successfully.
Now, after the build is done, I want to deploy the application and run my Maven-based Selenium test cases, written in Java, against the application. This is the part where I am stuck: in the deployment step, it is not able to put the artifacts on the remote path that I specify.
Can anyone please provide some pointers on how to achieve the deployment on a remote machine and then run the Java-based test cases against this application?
Any pointers would be greatly appreciated.
OK, here is the complete scenario:
1. I have the ASP.NET code in the cloud, in my VSTS
2. I have been able to add a build step and create the artifacts successfully
3. Now I have an IIS server where I want to deploy these artifacts, and the server is not accessible from the public network and is behind a firewall.
Hence I am looking for any task that would help me achieve this. I am not sure of the complications that might arise due to the firewall, and hence I am trying out different methods to understand the complete big picture.
I received a reply here suggesting the WinRM tasks. I used those, but they give error 53 and cannot connect to the server I am trying to deploy the code to.
To deploy an ASP.NET-based application, you can use the IIS Web App Deployment step/task to deploy to your server, or deploy to an Azure web site by using the Azure App Service Deploy step/task.
To run the Java tests, there is a Maven step/task.
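The Maven step/task just runs a Maven goal, so the equivalent of what it would execute is something like this (the baseUrl property is hypothetical; it depends on how your tests read the target URL):

# Run the Selenium suite against the freshly deployed site
$ mvn clean test -DbaseUrl=http://your-iis-server/your-app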
I'm trying to deploy an existing build with the "LabDefaultTemplate.11.xaml".
My problem is that the build times out as soon as I use an existing build. Here are the last steps, including details and the timeout exception:
See http://i.stack.imgur.com/po1i6.png
I have two different servers. The first has TFS 2013 with the Build Service, Controller and Agent installed on it. The second is intended for testing and has a Test Controller and Agent on it (configured as a Standard Environment in MS Test Manager).
Build Service Account is a Domain-Admin
Build Connection to TFS goes with a TFS-Admin
Test Controller Service Login Account is a Local-Admin (mirrored on the Build-Server) and earlier tried with the Domain-Admin
Test Controller TFS-Connection also with a TFS-Admin
Test Controller Lab Service Account is not used, earlier also tried with the Domain-Admin
When I set the build to use the latest TFS build, it runs into the timeout.
But when I set the path to use a build from a specific location (the build directory on the build server), it all works fine.
The difference between a working build and the timeout described above can be seen in this picture: http://i.stack.imgur.com/gPM07.png
Does anyone have an idea where I'm going wrong?
The problem was that my previous builds had only "partially succeeded" because they had some failing unit tests. The setting to use the latest build only considers fully successful builds, so the latest build that was actually picked up had no drop folder configured.
I received no information about that until I saw it in the logs. My mistake was never checking which build was really being used as the latest.
I'm using Jenkins to produce cspkg files using MSBuild. It stores the build results in Azure blob storage. Then I use the management portal to deploy them.
The biggest drawbacks I see are:
1. Deployments can be accidentally deleted easily.
2. There is no straightforward* way to check which version the cloud service has.
Is there a better way to manage deployments?
It's definitely not the best experience, is it?
The approach I tend to use is as follows:
Build the deployment package and add the version number to the package filename (taken from AssemblyInfo.cs) e.g. MyCloudService-1.2.0.0.cspkg - this should be trivial using msbuild.
Push the package to Cloud Storage.
Perform the deployment of the package from Storage, with the Deployment Label '[CLOUD SERVICE NAME]-[VERSION] # [DATE & TIME]' e.g. 'MyCloudService-1.2.0.0 # 10-09-2015 16:30'
Check the deployment package into a 'Packages' directory in source control.
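The "push and deploy" steps above can also be scripted. A sketch using the classic Azure Service Management PowerShell cmdlets (service name, paths and label are placeholders):

# Deploy the versioned package with a descriptive label
New-AzureDeployment -ServiceName "MyCloudService" `
    -Package ".\MyCloudService-1.2.0.0.cspkg" `
    -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    -Slot Production `
    -Label "MyCloudService-1.2.0.0 # 10-09-2015 16:30"

# Later, read the label back to see which version is live
(Get-AzureDeployment -ServiceName "MyCloudService" -Slot Production).Label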
If you need to identify the version of the package deployed to the cloud service, you can see the Deployment Label on the Azure Management Portal:
'Old' Portal (manage.windowsazure.com):
'New' Portal (portal.azure.com):