Troposphere is a mature Python project for provisioning AWS resources. AWS CDK, by contrast, is still in developer preview.
cdk diff helps with state maintenance before cdk deploy. I am not sure how Troposphere helps us with state maintenance, beyond generating a CloudFormation template...
A talk at an AWS event mentioned that Troposphere has no built-in abstractions, whereas AWS CDK has good abstractions.
CDK enables CI/CD easily, whereas Troposphere needs extra automation to upload the CloudFormation templates it generates.
We need to decide on one of them to provision our infrastructure on AWS.
Considering state maintenance, software best practices, code maintenance, supported resource types, and CI/CD (end-to-end automation) for provisioning: what are the advantages of AWS CDK vs. Troposphere vs. Stacker? We are open to any programming language.
Troposphere covers only part of what CDK does; a more apt comparison would be stacker (https://stacker.readthedocs.io/en/latest/, which uses troposphere -- I'm a maintainer of both) vs. CDK. Let me know if you have any other questions!
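To make the difference in scope concrete, here is a minimal sketch of the same S3 bucket in both tools. The troposphere half uses the library's real API; the CDK half assumes the v1-style Python bindings (the developer-preview API differed slightly), and all names are illustrative. Troposphere stops at emitting a template you still have to upload and deploy yourself, while CDK owns the whole lifecycle via cdk diff and cdk deploy:

```python
# --- Troposphere: build a CloudFormation template, nothing more ---
from troposphere import Template
from troposphere.s3 import Bucket

t = Template()
t.add_resource(Bucket("ArtifactBucket"))
print(t.to_json())  # plain CloudFormation JSON; deploying it is your problem

# --- AWS CDK (v1-style Python bindings, assumed): app + stack ---
from aws_cdk import core
from aws_cdk import aws_s3 as s3

class ArtifactStack(core.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        # One construct call; CDK synthesizes the CloudFormation behind it,
        # and `cdk diff` / `cdk deploy` then work against the resulting app.
        s3.Bucket(self, "ArtifactBucket", versioned=True)

app = core.App()
ArtifactStack(app, "artifact-stack")
app.synth()
```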
Both GitHub Actions and Bitbucket Pipelines seem to fill similar functions at a surface level. Is it trivial to migrate the YAML for Actions into a Pipeline, or do they operate fundamentally differently?
For example: running something simple like SuperLinter (used on GitHub Actions) on Bitbucket Pipelines.
I've searched for examples or explanations of the migration process, but with little success so far; perhaps they're just not compatible, or I am missing something. This is my first time using Bitbucket over GitHub. Any resources and tips are welcome.
They are absolutely unrelated CI systems and there is no straightforward migration path from one to another.
Both systems define their pipelines in YAML, just like GitLab CI, but the only thing that carries over is your YAML knowledge itself (syntax and anchors).
As CI systems, both will start some kind of agent to run a list of instructions (a script), so you can probably reuse most of the ideas in your scripts. But the execution environment is very different, so be ready to write plenty of tweaks, as Benjamin commented.
E.g. forget about SuperLinter. Instead, Bitbucket Pipelines has a concept of pipes, which serve a similar purpose but are implemented in a rather different way.
Another key difference: GHA runs on VMs, and you configure whatever you need with "setup" actions. BBP runs on Docker containers that should already include most of the runtime and tooling you need, since "setup pipes" cannot exist. So you will end up installing tooling on every run (via apt, yum, apk, wget...) rather than maintaining and keeping updated a crazy number of images with tooling and language runtimes: https://stackoverflow.com/a/72959639/11715259
I've been looking through the site and have found some information on this topic, but most of it is old and possibly outdated.
For example: Continuous Integration tools
We are: a SaaS product with a microservice architecture (200+ services).
We have: We currently build through Bamboo, and we use Nexus as an artifact manager with proper versioning. We deploy those artifacts with Bamboo to many different machines. For our frontend deployment we build the code through Continua and use AWS CodeDeploy to handle the deployment. We use Bitbucket and Jira for development. We did a POC with Bitbucket Pipelines, but it lacked proper version management as well as proper environment management; setting up 10 servers for every repository manually is just something we don't want to do.
We want: Since Bamboo is EOL next year, and since there are many alternatives with different levels of complexity, we are currently unsure which tools are most suited to our needs. We currently run everything on dedicated Linux machines, but we want to switch to Docker containers on AWS in the near future. Support for running gulp scripts etc. would be great, since that could help us move from Continua and Bamboo to one single solution.
Setting up Bamboo has been a struggle in the past due to difficulties with the software itself, so a nice balance between features and complexity would be best. Does anybody have experience with one or more of the options out there? Some that come to mind are CircleCI, TeamCity, GitLab, Jenkins, and AWS CodePipeline.
Many thanks,
Kenny
Bamboo doesn't reach EOL next year; rather, Atlassian is forcing a switch from perpetual licenses to Data Center licenses that must be renewed every year. You can get discounted prices when switching from Server to Data Center licenses. See details at https://www.atlassian.com/licensing/data-center
I would propose Kraken CI. It is open source and can work on-premise as well as in the cloud. In the cloud it supports AWS and Azure, and it can autoscale depending on the number of tasks.
If you are interested, please contact me.
Is there an abstraction for defining continuous integration pipelines which can then produce individual config files for particular CI providers? This would be especially handy for projects intended to serve as boilerplate.
Currently I find myself needing to manually write and keep in sync both a .gitlab-ci.yml and a .github/workflows/ci.yml.
This is an interesting question. Unless you can abstract all your CI logic into shell scripts, from what I can see there would be a lot of periodic porting between the different CI providers.
Also, each CI provider has its own ideology of the perfect build pipeline, as well as its own predefined setup.
That being said, I would love to see utility tools that help me migrate the scripts and converge my CI setup into the GitHub Actions world.
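For illustration, the "single definition, multiple outputs" idea is easy to prototype yourself. Here is a minimal sketch in Python (PyYAML assumed; the step schema and all job names are made up, and a real mapping would need far more provider-specific detail) that renders one pipeline description into both files mentioned above:

```python
# Render one provider-neutral pipeline description into both a GitLab CI
# file and a GitHub Actions workflow. Sketch only: names are illustrative.
import os
import yaml  # pip install pyyaml

# Provider-neutral description: ordered steps with shell commands.
PIPELINE = [
    {"name": "lint", "script": ["pip install flake8", "flake8 ."]},
    {"name": "test", "script": ["pip install -e .", "pytest"]},
]

def to_gitlab(pipeline):
    # GitLab CI: one top-level job per step, each with a script list.
    return {step["name"]: {"script": step["script"]} for step in pipeline}

def to_github(pipeline):
    # GitHub Actions: one job per step, each command as a `run:` entry.
    jobs = {}
    for step in pipeline:
        jobs[step["name"]] = {
            "runs-on": "ubuntu-latest",
            "steps": [{"uses": "actions/checkout@v2"}]
                     + [{"run": cmd} for cmd in step["script"]],
        }
    return {"name": "ci", "on": "push", "jobs": jobs}

os.makedirs(".github/workflows", exist_ok=True)
with open(".gitlab-ci.yml", "w") as f:
    yaml.safe_dump(to_gitlab(PIPELINE), f, sort_keys=False)
with open(".github/workflows/ci.yml", "w") as f:
    yaml.safe_dump(to_github(PIPELINE), f, sort_keys=False)
```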
I am training models using MLflow on Databricks and outputting the final models to S3. Then I use Seldon Core to package and deploy the models to AWS EKS.
I am looking for a tool that bridges the gap by taking the model from S3, packaging it into a Docker container, and using the Seldon Core K8s template to push it to AWS EKS.
I believe the tool that seems to fit the job is Kubeflow Pipelines. Other contenders are Jenkins, GitLab, and Travis CI.
Is Kubeflow the right tool for the job, and what are the pros and cons of Kubeflow vs. the others? Has anyone already done the research, or maybe even built this pipeline?
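For concreteness, the bridge described above maps fairly directly onto a Kubeflow pipeline. A minimal sketch, assuming the v1 kfp SDK that matched the Kubeflow 0.x era (the images, commands, and manifest names are placeholders, not a working Seldon setup):

```python
# Sketch: fetch a model from S3, bake it into an image, then apply a
# SeldonDeployment to the cluster. Placeholder images and commands.
import kfp.dsl as dsl
import kfp.compiler as compiler

@dsl.pipeline(
    name="s3-to-seldon",
    description="Package an S3 model and deploy it via Seldon Core",
)
def s3_to_seldon(model_uri: str = "s3://my-bucket/model"):
    # Step 1: build and push a Docker image wrapping the model
    # (hypothetical builder image and script).
    build = dsl.ContainerOp(
        name="build-image",
        image="my-registry/model-builder:latest",
        command=["python", "build.py"],
        arguments=["--model-uri", model_uri],
    )
    # Step 2: apply the SeldonDeployment manifest to EKS.
    deploy = dsl.ContainerOp(
        name="deploy-seldon",
        image="bitnami/kubectl:latest",
        command=["kubectl", "apply", "-f", "seldon-deployment.yaml"],
    )
    deploy.after(build)  # enforce ordering between the two steps

if __name__ == "__main__":
    # Compile to an archive that can be uploaded to Kubeflow Pipelines.
    compiler.Compiler().compile(s3_to_seldon, "s3_to_seldon.tar.gz")
```

The same two steps can be expressed as jobs in any of the other contenders (Jenkins, GitLab, Travis CI), which is why they are interchangeable for this use case.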
GitLab actually does exactly what Kubeflow Pipelines does out of the box, with YAML similar to CircleCI's or Travis CI's. I ended up using it as an alternative to Kubeflow Pipelines.
Regarding Kubeflow...
After experimenting with Kubeflow at versions 0.5 and 0.6, our feeling was that it is still quite unstable. Installation never went smoothly, neither into Minikube (local K8s) nor into AWS EKS. For Minikube, the install scripts from the documentation are broken, and you will find many people having issues and editing the install scripts by hand (which is what I had to do to get it to install properly). On EKS we were not able to install 0.5 and had to install a much older version. Kubeflow wants to manage worker nodes in a particular manner that our security policies do not allow; only in an older version can you override that option.
Kubeflow is also switching to Kustomize, and that is not stable yet, so if you use it now you will be using ksonnet, which is no longer supported, and you will be learning a tool that you will throw out the window sooner or later.
All in all, you should wait for version 1.0, but GitLab does an awesome job as an alternative to Kubeflow Pipelines.
Hope this helps others who have the same thoughts.
I'm using GCE in my project. For infrastructure releases I have used both Terraform and Puppet. Each of them has advantages and disadvantages, but both fall behind Google Deployment Manager in picking up new functionality, and of course, if I use only Google (not multi-cloud), then native tools are the best solution for me. Since Puppet is a declarative tool, it allows me to declare my infrastructure at any time, and any manual change gets reset.
I am trying to write a script that will reset all the manual changes I made in my project. Of course, manual changes are not best practice, but sometimes they are the fastest way to apply a hotfix (e.g. changing the minimum instance count of a group at a critical moment).
Is there a way to achieve the same functionality in Google Deployment Manager?
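For reference, Deployment Manager does let you express the drift-prone field declaratively, including via a Python template. A minimal sketch (the resource name, zone, and property names are placeholders) that `gcloud deployment-manager deployments update` would re-apply:

```python
# igm.py -- a Deployment Manager Python template (sketch).
# Declares an instance group manager whose size is pinned in config,
# i.e. the field a manual hotfix would otherwise change by hand.
def GenerateConfig(context):
    """Called by Deployment Manager; returns the resources to enforce."""
    return {
        'resources': [{
            'name': 'web-igm',
            'type': 'compute.v1.instanceGroupManager',
            'properties': {
                'zone': context.properties['zone'],
                'baseInstanceName': 'web',
                'instanceTemplate': context.properties['instanceTemplate'],
                # Declared size; a critical-moment hotfix would edit this
                # in config rather than in the console.
                'targetSize': context.properties['minInstances'],
            },
        }],
    }
```

One caveat, as far as I can tell: Deployment Manager diffs a new manifest against its own previous manifest rather than against live state, so re-running an update with an unchanged config is not guaranteed to revert out-of-band edits the way a Puppet agent run would.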