At the moment I have a working Jenkins declarative pipeline set up as a parameterized build, and a git pre-push hook set up to trigger it.
I would like to migrate to a multi-branch setup, where the feature/<someCoolFeature> and development branches are configured to deploy to one server, test is configured to deploy to a different server, and production to the production server.
I am comfortable with triggering builds and defining parameters for which branches are built when, but all the documentation, questions and blog posts skip over a central thing:
How do I configure the deployment variables for the different servers?
As I see it, since you don't have switch and if control structures in declarative pipelines, I have to create a library which looks up which BRANCH_NAME I am building and sets the environment variables, or chooses the corresponding application-<env>.yml, based on that. But there must be a best practice somewhere that I am missing.
Though discouraged for the sake of the simplicity and purpose of declarative pipelines, you can still use both switch and if control structures within them:
Before the pipeline {...} block:
// switch or if statements here
pipeline {
stages {
...
}
}
Inside a stage by wrapping them within a script {} block:
stage('prepare-env') {
steps {
script {
// switch or if statements here
}
}
}
See https://jenkins.io/doc/book/pipeline/syntax/#script.
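For instance, here is a minimal sketch of the second approach, switching on BRANCH_NAME inside a script block; the server names and the DEPLOY_TARGET variable are hypothetical examples, not a prescribed convention:
pipeline {
    agent any
    stages {
        stage('prepare-env') {
            steps {
                script {
                    // Choose the deployment target based on the branch being built.
                    // Server names here are made-up examples.
                    switch (env.BRANCH_NAME) {
                        case 'test':
                            env.DEPLOY_TARGET = 'test-server.example.com'
                            break
                        case 'production':
                            env.DEPLOY_TARGET = 'prod-server.example.com'
                            break
                        default:
                            // development and feature/* branches
                            env.DEPLOY_TARGET = 'dev-server.example.com'
                            break
                    }
                }
            }
        }
        stage('deploy') {
            steps {
                echo "Deploying ${env.BRANCH_NAME} to ${env.DEPLOY_TARGET}"
            }
        }
    }
}
The same switch could equally live before the pipeline {...} block, or drive which application-<env>.yml gets picked up instead of an environment variable.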
I am dealing with an infrastructure setup and trying to figure out how to deploy just a single lambda from a CI/CD pipeline.
Let's say a repo has 20 lambdas and you make a change to one single lambda; instead of deploying all of them, I just want to deploy the changed one to cut down the deployment time.
I've got an idea of checking the diff from git to figure out which ones have changed, and deploying only that part of the functionality, but it surely doesn't seem like the right way to do it. I believe there is a more proper way.
I am using Terraform for now (moving to Serverless Framework). I know that Terraform and Serverless Framework hold state in an S3 bucket. However, in my case, when I run it through pipelines, even though there is a Terraform state and no change to that state, it still deploys the whole thing as far as I have realised (I might be wrong). I just want to clear my mind and see how people do this with their pipelines.
Since you seem to be asking about both Terraform and Serverless Framework here, I'm assuming you're looking for a general answer rather than specifically how this would be solved with a particular tool.
One way to solve this problem is to decouple your build process from your deploy process by adding a version selection mechanism in between. This just means that somewhere in your system you have a value that can be written by your build process and read by your deploy process which indicates what is the "current" artifact for each of your Lambda functions.
When your build process completes successfully, it can write the information about the artifact it built into the appropriate location, and then trigger your deployment process. Your deployment process will then read the artifact information and use it to decide what to deploy.
If you have made no changes to the current artifact metadata for a particular function then the deploy process can see that and not do anything. If a particular artifact is flawed in some way and you only notice once it's deployed, you can potentially set the artifact metadata back to the previous one and re-run the deployment process to roll back. If you choose a data store that retains historical versions, you'll also have a log of changes to the current artifact, which might be useful for understanding the circumstances that led to an incident.
Without getting into specifics it's hard to say more about this. For Terraform in particular, the artifact metadata store ought to be something that Terraform can read using a data source. To show a real example I'm going to just arbitrarily choose AWS SSM Parameter Store as a location for that artifact metadata store:
data "aws_ssm_parameter" "foo" {
name = "FooFunctionArtifact"
}
locals {
# For this example, we'll assume that the stored parameter is a JSON
# string shaped like this:
# {
# "s3_bucket": "awesomecorp-app-artifacts"
# "s3_key": "/awesomeapp/v1.2.0/function.zip"
# }
foo_artifact = jsondecode(data.aws_ssm_parameter.foo)
}
resource "aws_lambda_function" "foo" {
function_name = "foo"
s3_bucket = local.foo_artifact.s3_bucket
s3_key = local.foo_artifact.s3_key
# etc, etc
}
The technical details of this will vary a lot depending on your technology choices. If you don't use Terraform then you'll either use a feature similar to data sources in your other tool or you'd write some wrapper glue code that can itself retrieve the necessary information and pass it into the tool as an argument.
The main thing, regardless of technology choices, is that there is an explicit record somewhere of the latest artifact for each function, which is updated by your build step and read by your deploy step. This pattern can apply to other artifact types too, such as AMIs for EC2, Docker images, etc.
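To sketch the write side of this pattern (assuming the same SSM parameter as above; the bucket and key values are made-up examples), the build job's final step could look something like:
# Record the artifact that was just built as the "current" one for the
# foo function; bucket and key values are hypothetical examples.
aws ssm put-parameter \
  --name "FooFunctionArtifact" \
  --type "String" \
  --overwrite \
  --value '{"s3_bucket": "awesomecorp-app-artifacts", "s3_key": "/awesomeapp/v1.2.1/function.zip"}'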
It seems you have added the tags terraform, serverless-framework (I call it sls), and aws-lambda, so any of them could work for you.
terraform - Terraform itself will take care of the differences and only update the lambdas that need it. But it is not lambda-friendly if you need to install related packages.
serverless framework (sls) - it is good for managing lambda functions, but as a side effect, it has to be managed together with API Gateway. I am not sure if the sls team has fixed this issue or not; that needs confirmation.
SLS will take care of installing related packages.
The bad part is that sls can't diff the resources to be deployed against what is planned.
cloudformation - that is AWS's own Infrastructure as Code (IaC) tool for managing AWS resources; you should be fine using it to manage lambda resources. You will get the same issue as with Terraform: you have to install the related packages before deploying the stack.
The bad part is that cfn (CloudFormation) doesn't have a diff feature either. Furthermore, it doesn't have proper tooling around its AWS CLI commands; you have to use other tools, such as shell scripting, Ansible, or even Terraform, to manage CloudFormation template updates.
aws cdk - the newest way is to use aws-cdk. It does have a diff feature, cdk diff, which is mostly suitable for your current job, but it is a very new project and a lot of features are still waiting to be developed.
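For illustration, the cdk workflow for a single function's stack might look like this; the stack name FooLambdaStack is a hypothetical example:
# Show what would change in just this stack, without touching anything
cdk diff FooLambdaStack
# Deploy only that stack, leaving the others alone
cdk deploy FooLambdaStack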
You can take these points and weigh them against your team's skill set. Always choose the tools that you and your team are most confident with.
Using this code:
resource "aws_api_gateway_deployment" "example_deployment" {
depends_on = [
"aws_api_gateway_method.example_api_method",
"aws_api_gateway_integration.example_api_method_integration"
]
rest_api_id = "${aws_api_gateway_rest_api.example_api.id}"
stage_name = "${var.stage_name}"
}
I can deploy API Gateway changes to whatever stage I specify. However, this will override any existing stages. That is, if I first deploy to a stage called 'dev', then deploy to 'prod', it will erase 'dev'.
How can I achieve multi-stage deployments? So that I can first deploy to dev or staging, and if it all looks good, then deploy to the prod stage.
After some research, we ended up taking a different tack. Based on articles like this and this, we split our Terraform configuration into folders per stage, as sketched below. So if you want to deploy dev, you run terraform inside the dev folder. To avoid code duplication, you use modules. It seems to be working well, and allows us to deploy different versions of the API.
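For illustration, a minimal sketch of that layout; the module name, the paths, and the stage_name variable are hypothetical examples:
# Layout:
#   environments/dev/main.tf
#   environments/prod/main.tf
#   modules/api/          # shared module holding the API Gateway resources

# environments/dev/main.tf
module "api" {
  source     = "../../modules/api"
  stage_name = "dev"
}
Because each folder is applied separately, dev and prod each get their own state, so deploying one stage can no longer erase the other.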
I'm not sure if this is the best approach to my structure, but I have a repository titled lambda, which has the following structure:
lambda/
lambda_func_one/
lambda_func_two/
...
lambda_func_n/
Each lambda function is not necessarily in the same language. For example, lambda_func_one is in Python, while lambda_func_two is in Node.
Is it achievable to have continuous deployment/integration for all of these lambda functions? Alternatively, I could make each of them its own repo, but it's nice to be able to run git pull and see all the changes the team made to their respective lambda functions.
With a single repo set up this way, a change in one function triggers the deployment pipeline for the other functions as well. If you still want to keep them in one repo, you can have a different branch for each function and set up CI/CD for those branches, along the lines of the sketch below.
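A minimal sketch of such a per-branch deploy step; the CI_BRANCH_NAME variable and deploy.sh script are hypothetical examples:
# Hypothetical CI step: each function lives on a branch named after its
# directory, so a push deploys only the matching function.
FUNC_DIR="lambda/${CI_BRANCH_NAME}"    # CI_BRANCH_NAME is a made-up CI variable

if [ -d "$FUNC_DIR" ]; then
  (cd "$FUNC_DIR" && ./deploy.sh)      # hypothetical per-function deploy script
else
  echo "No function directory matches branch ${CI_BRANCH_NAME}; skipping deploy."
fi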
We have multiple layers of our products split into different build configurations for continuous integration. For the sake of this question, let's just say we have a "Front-End CI" build, and a "API CI" build. The VCS roots are configured to pull in all branches, and triggered to run upon checkin, as should be expected for CI.
Now I have my User Acceptance project, where I use CloudFormation to dynamically spin up servers to which I deploy. I have snapshot dependencies set up for the CI builds mentioned above, and everything works as expected for the default branches on each of the VCS roots and dependencies. I expect that a feature branch for the front-end may not necessarily require a branch off the default for the API, and the current way I have it set up accounts for that as well.
That's where I begin to have issues. If I have to branch both the front-end and the API, I cannot get TeamCity to do what I want in this regard. My question is this: how do I tell TeamCity to run a UA build using branch "A" from the Front-End CI build config and branch "B" from the API CI build config, where "A" and "B" can be any arbitrary branches? Currently, all branches from both snapshots are shown when I look at the UA build config.
If I run api-branch, it will always use the default branch from the Front-end CI snapshot. Same for any branch on the front-end snapshot. I cannot seem to find a way to specify this in the configuration or when starting a build.
I'm up for just about anything to address this, including build configs that are just cloned off of each other to specify branches the way they're meant to, but I'm just not seeing how I can do that either. Thanks!
Create a TeamCity template target which monitors both the front-end and API repositories and can trigger on changes. This should be one target (and not two different targets). Parameterize the branch names so that actual targets have to supply the branch names.
I would suggest creating a mapping of the frontend:api branches in a datastore (file, DB, NoSQL). Then dynamically create TeamCity targets (through the REST API) for each new/modified combination and explicitly set the branch names, along the lines of the sketch below. Once the targets are created, they will automatically run whenever there are any changes.
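As a rough illustration of driving this through the REST API (the server URL, build configuration ID, parameter names, and branches are all hypothetical examples), queueing a UA build with an explicit branch combination could look like:
# Queue the UA build, pinning the two snapshot branches via configuration
# parameters that the template exposes. All names here are made up.
curl -u user:password \
  -H "Content-Type: application/xml" \
  -X POST "https://teamcity.example.com/app/rest/buildQueue" \
  -d '<build branchName="feature/A">
        <buildType id="UserAcceptance_Ua"/>
        <properties>
          <property name="frontend.branch" value="feature/A"/>
          <property name="api.branch" value="feature/B"/>
        </properties>
      </build>'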
I'm investigating Puppet as our future deployment and provisioning tool in our shop, but now I'm stuck at how to make a clever Continuous Integration/Delivery tool chain with deployment through Puppet.
In any of our environments (dev, test, qa, demo, prod) we have a range of components. We need to be able to deploy each component separately and possibly even concurrently.
I'd like a way to initiate (through script) a deploy of a single component package (=Puppet module) and gather the output and success status of that.
Simply waiting for a scheduled agent pull, or running 'puppet agent --test' on each node in the environment, isn't good enough, because it may pick up other pending changes (I don't know whether another component is also in the process of being deployed).
In my tool chain I would like the deployment output and status from component A and component B to be recorded separately and not mixed up.
So my question is: Can I use puppet to deploy one single named package (module) at a time?
And if not, where did I take a wrong turn when I drove down this path?
I realise a master-less Puppet set-up with modules and manifests replicated to each node perhaps could do it, but IMHO a master-less Puppet set-up kind of defeats the purpose of Puppet.
PS: I think what I'm trying to achieve is called 'Directed Orchestration' in Damon Edwards' very enlightening video at Integrating DevOps tools into a Service Delivery Platform (at timestamp around 22:30).
So my question is: Can I use puppet to deploy one single named package (module) at a time?
Yes, you can, via puppet apply. First you need to create a module directory and a module that will contain your manifests, e.g.:
/scratch/user/puppet/local/   # This is your modulepath for local deployment

# The following contains the manifests for a module named "localmod":
/scratch/user/puppet/local/localmod/manifests/init.pp

# Example content of init.pp
class localmod {
  notify { "I am in the local module...": }
}
On that local machine you can test this module via puppet apply:
puppet apply -v --modulepath=/scratch/user/puppet/local -e "include localmod"
echo $? # Get the exit status of the above command
I watched the video at the point you mentioned. There are two types of automation you can do.
Application build/deploy automation, which can be achieved via maven/ant (build) and ant/capistrano/chrome/bash/msdeploy (deploy), or as termed on that slide, an "Installer".
System/Infrastructure automation can be achieved via Chef/Puppet/CFEngine.
This question seems to be ... "How do I do an application build using Puppet (implied as a system automation tool)?"
So quite simply, oval tool in round hole. (I didn't say square)
At my company, we use Jenkins and the Build Pipeline Integration plugin to build massive multi component projects. As an example, a Java app will use ant in a build job, the next chained job will be a "deploy to dev" job which uses Capistrano to deploy the application, then the next job in the chain is "Configure Dev" which calls Chef to update the system configurations in the DEV environment. Chef is used to configure the application. Each of these jobs can be set to run automatically and sequentially.
a master-less Puppet set-up kind of defeats the purpose of Puppet.
Only if you discount:
- the rich DSL Puppet has to offer
- the many peer-reviewed community modules
Otherwise, something like this gives you remote directed orchestration.
# update manifests etc. (version control is the source of truth)
ssh user@host git pull

# run puppet
ssh user@host sudo puppet apply