Deploying a database into different environments with application-specific data

I have a requirement to deploy my database into different environments, with application-specific data for each target environment.
For instance, we are integrating with third-party services that have different credentials and other parameters specific to the target environment, such as credentials and endpoints for the test and prod environments.
Currently, we use post-deploy DB scripts targeting those environments and store them in our Git repository.
I have read and heard that we should not commit such sensitive information to repositories because of the risk of it being hacked or stolen.
Can someone suggest a better way (best practices) of handling this requirement in a CI/CD pipeline?
We are using TeamCity and Octopus in our CI/CD process.
Appreciate your valuable advice and feedback in this regard.
Also feel free to share the best CI/CD practices to overcome such concerns in your development process.
Thanks,
RSF

I would approach this by keeping data like usernames/passwords as sensitive variables per environment in Octopus.
To deploy, I'd use sqlpackage.exe, passing in the dacpac and the variables (/v:variable1=value1).
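A minimal sketch of such a deploy step (the variable and file names are only examples; the #{...} placeholders are Octopus variable substitutions, and the sensitive values themselves live in Octopus, not in Git):

    # Octopus "Run a Script" step (PowerShell), publishing the dacpac with per-environment values
    & sqlpackage.exe /Action:Publish `
        /SourceFile:"MyDatabase.dacpac" `
        /TargetConnectionString:"#{DatabaseConnectionString}" `
        /v:ThirdPartyEndpoint="#{ThirdPartyEndpoint}" `
        /v:ThirdPartyApiKey="#{ThirdPartyApiKey}"

The post-deploy script inside the dacpac can then reference $(ThirdPartyEndpoint) and $(ThirdPartyApiKey) as SQLCMD variables instead of hard-coding environment-specific values.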

Related

Using both Apache Airflow and GitHub Actions for dbt

I'm trying to get my head around whether replacing our current CI/CD tool with GitHub Actions would be a pro or a con.
My team is currently using GitHub and TeamCity for CI/CD. I find this limiting, as it's a separate team that configures the different stages. We use Airflow to schedule our dbt production runs, so we don't use dbt Cloud.
I'm wondering if there's any benefit to replacing TeamCity with GitHub Actions, as we could write the configuration of the stages (compile, run, test, lint) ourselves. I'm interested in implementing Slim CI, which I don't believe is possible with TeamCity.
Would anyone be able to help with the pros and cons of using GitHub Actions alongside Airflow? Is it possible? I'm struggling to find any documentation on using GitHub Actions whilst running dbt production on Airflow.
Right now, my source of information comes from this: https://towardsdatascience.com/how-to-deploy-dbt-to-production-using-github-action-778bf6a1dff6
It mentions "Well, it depends. If you don’t have Airflow running in productions already, you will probably not need it now. There are more simple/elegant solutions than this (dbt Cloud, GitHub Actions, GitLab CI). Also, this approach shares many disadvantages with using a compute instance, such as waste of resources and no easy way for CI/CD."
My main goal is to get rid of TeamCity and implement Slim CI with GitHub Actions.
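For illustration, this is roughly the kind of pull-request workflow I have in mind (a minimal sketch; the adapter, Python version and the availability of a production manifest for state comparison are all assumptions):

    # .github/workflows/dbt-slim-ci.yml (sketch)
    name: dbt-slim-ci
    on: pull_request

    jobs:
      dbt-ci:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-python@v5
            with:
              python-version: "3.11"
          - run: pip install dbt-core dbt-snowflake   # adapter is an assumption
          - run: dbt deps
          - run: dbt compile
          # Slim CI: build only models changed relative to the production state,
          # deferring unchanged upstream models to production.
          - run: dbt build --select state:modified+ --defer --state ./prod-artifacts

Production runs would still be orchestrated by Airflow; the workflow above would only cover CI on pull requests.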
Thank you in advance!

Why do some Jelastic providers block the Export Environment option

According to the Jelastic documentation, it is possible to export the environment configuration and download it so it can be restored with another provider.
However, I have tried with two Jelastic providers and they both have disabled the option for exporting private data.
So exporting/downloading/uploading/importing an environment is not possible.
i.e. I was expecting a process similar to the cPanel backup/restore tool.
In fact, looking at the deployment process differently gives you a way to get rid of the model where data or configuration is kept on the platform. Try to think about it using a CI/CD approach instead. Jelastic provides a platform where what you created lives wherever you develop it (a VCS such as Git, for example), and it runs on a specific, pre-configured stack that acts as a layer and can be installed (copied) onto Jelastic. You don't need to keep the data somewhere in the cloud, because you have it locally (that is, in your VCS) and make your changes there. Then you just run a 'pull' (manually or automatically) against whichever environment you are targeting (test, staging, production).
Moreover, you can describe any environment type as code and create it before deploying the data.
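As a rough illustration only (this follows the spirit of the Jelastic Packaging Standard linked below; the node type, field names and values are assumptions rather than an exact manifest):

    # environment-as-code sketch in a JPS-style manifest
    type: install
    name: my-app-environment
    nodes:
      - nodeType: tomcat8        # assumed stack
        cloudlets: 8
    onInstall:
      deploy:
        archive: https://example.com/builds/app.war   # placeholder artifact URL
        name: app.war
        context: ROOT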
Please find the articles describing each case:
Deployment Guide
Jelastic Packaging Standard for CI/CD Automation
In case you would like to handle the databases' backups, check this article:
Scheduling Database Backups
An additional FTP add-on makes it easier to take copies for each instance:
FTP/FTPS Support in Jelastic

How to manage production, test and development environments with serverless framework

I am planning to build an enterprise application using AWS Lambda and the Serverless Framework.
I want to separate the dev, test and prod environments, and I am planning to use AWS Parameter Store for this.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start, the production configuration should not be obtained.
It should be obtained only when the serverless function has been successfully deployed to AWS Lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework you have to set up the stage. This value can be passed as a parameter when executing sls commands.
If you keep your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo that only very specific people have access to, and then reference it in your serverless.yml. Ex:
custom: ${file(./config/${opt:stage, 'dev'}.json)} and then in your config folder you create the prod.json file, pointing to the real one in the new repo you created. Note: this would make your project harder to maintain.
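A minimal sketch of that layout (the runtime, key names and file paths are only examples):

    # serverless.yml (sketch): pick the config file by stage, defaulting to dev
    custom: ${file(./config/${opt:stage, 'dev'}.json)}

    provider:
      name: aws
      runtime: nodejs18.x
      stage: ${opt:stage, 'dev'}
      environment:
        THIRD_PARTY_ENDPOINT: ${self:custom.thirdPartyEndpoint}

    # config/dev.json would then contain {"thirdPartyEndpoint": "https://sandbox.example.com"},
    # while config/prod.json only references the real file kept in the restricted repo.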
Considering you don't want your developers to execute your production environment locally, you can use the global variable that serverless-offline sets (IS_OFFLINE) to block the execution. You could also simply ask them not to do so.
Here is what should be a good practice and solution based on your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access levels. When a developer tries to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: In this case the person/group with access to the production VPC will have to do the deploy.
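To make the VPC suggestion concrete, a minimal sketch of the relevant serverless.yml section (the IDs are placeholders):

    # serverless.yml (sketch): attach the functions to the production VPC, so only
    # principals with access to that VPC's resources can actually use them
    provider:
      name: aws
      vpc:
        securityGroupIds:
          - sg-0123456789abcdef0      # placeholder
        subnetIds:
          - subnet-0123456789abcdef0  # placeholder
          - subnet-0fedcba9876543210  # placeholder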
If this answer does not suffice, could you please clarify which type of resource(s) are sensitive in your Serverless project? I am taking for granted that it is the DB, as that is the most common scenario.

TeamCity to deploy artifacts to external server

I've tried taking a look on Google for how this can be done but I thought I'd post a question anyway to see what the best practice is for doing this nowadays.
We are trying to set up a TeamCity build to deploy to a client's environment. Basically, we're generating an artifacts zip file, and the plan is to (somehow) deploy this to the client's UAT, Staging and Live servers (which are password protected). When the build is run, it executes a NAnt script.
From our network in the office we are able to remote into the UAT box, but we can only get to the Staging and Live servers whilst on the UAT box.
What is the best way of doing this? Are there any useful resources I can look at to help me move forward?
You can try the Deployer plugin developed by the TeamCity team. It offers SMB/FTP/SSH deploy options as well as an SSH Exec option.

How can I use TeamCity to do Production releases safely?

We currently use TeamCity to build a deployment artifact, then a further TeamCity task takes that artifact and deploys it to our development and testing servers on demand.
We can store the passwords and other secret data in properties files that we can check into source control, as these are all internal servers and the developers have full access to them.
However for release to Production (and our final test layer) there are secret passwords and configuration that we don't want checked into the normal source control, or to have development be able to discover the passwords. So to do 'real' deployments we have to hand the artifact over to another team and they maintain a properties file with the production values.
What methods exist to store these secrets and allow TeamCity to run a deploy without ever leaking the secrets out?
(Note: I am one of the devs and it is not a trust issue... I don't want to have the ability to find out prod passwords, so I can never accidentally know them and do some horrific damage!)
Probably what you need here is to create a separate project with a narrower scope of permissions (for example, allow only certain people to edit build configurations). In this project, create a build configuration responsible for deployment. In this configuration, you can define a typed parameter of type 'password' to store the password for the production environment.
Another option is to use the Deployer plugin, especially its ability to deploy over SSH with private key authentication.
If you are OK with using a third-party solution, consider something like CloudMunch, which can help you perform release management with these secure parameters collected at deploy time and encrypted post-deployment.
Disclaimer: I work with CloudMunch
You can do two things.
Use a TeamCity project to deploy artifacts for production only. Make this accessible only to ops members.
TeamCity also supports running agents under different user IDs. You can create a new user ID that has access to the production "secrets" (passwords and configuration). Use this ID to run the targets in the first step.
