Why do some Jelastic providers block the Export Environment option?

According to the Jelastic documentation, it is possible to export an environment's configuration and download it so it can be restored at another provider.
However, I have tried with two Jelastic providers and both have disabled the option for exporting private data.
So exporting/downloading/uploading/importing an environment is not possible.
In other words, I was expecting a process similar to cPanel's backup/restore tool.

In fact, another view of the deployment process lets you get rid of the model of keeping data and configuration on the platform. Try to think a bit differently, using a CI/CD approach. Jelastic provides a platform where the code you create lives wherever you develop it (in a VCS such as Git) and is deployed on top of a specific, pre-configured stack layer in Jelastic. You don't need to keep the data somewhere in the cloud, because you have it locally (that is, within a VCS) and make your changes there. Then you just run a 'pull' (manually or automatically) on whichever environment (test, staging, production) you are targeting.
Moreover, you can treat any type of environment as code and create it before deploying the data.
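To illustrate the environment-as-code idea, here is a minimal JPS manifest sketch; the stack, cloudlet count, and archive URL are illustrative assumptions, not values from the question:

# creates an environment and deploys a build artifact onto a pre-configured stack
type: install
name: my-app-environment
nodes:
  - nodeType: tomcat8       # assumed stack layer
    cloudlets: 8            # assumed resource limit
onInstall:
  deploy:
    archive: https://example.com/builds/app.war   # artifact built from your VCS
    name: app.war
    context: ROOT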
Please find the articles describing each case:
Deployment Guide
Jelastic Packaging Standard for CI/CD Automation
In case you would like to handle database backups, check this article:
Scheduling Database Backups
An additional FTP add-on can make creating copies easier for each instance:
FTP/FTPS Support in Jelastic

Related

Using Helm For Deploying Spring Boot Microservice to K8s

We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each MS is built as a Docker image and deployed manually using the following steps (sketched as kubectl commands below), and it works fine:
Create a ConfigMap
Install a Service.yaml
Install a Deployment.yaml
Install an Ingress.yaml
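For reference, that manual sequence corresponds roughly to the following kubectl commands; the resource file names and the ConfigMap source file are assumptions:

kubectl create configmap my-ms-config --from-file=application.properties
kubectl apply -f service.yaml
kubectl apply -f deployment.yaml
kubectl apply -f ingress.yaml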
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc along with Go and Sprig and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files, one for each of our 5 environments. These properties files are in a simple multi-line key=value format, with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I generated a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the chart's ./deploy directory.
From Helm, I'd like to reference these two files from within my configmap.yaml's data: section.
./src/main/resources/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Posting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that same container to any environment without having to modify it, plus to have the ability to change the configuration without creating a new release (the latter cannot be done if the config is baked into the container). This allows you, for example, to change a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the containers running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and put the parameters that change between environments in Helm environment variables. I know you mentioned you don't want to do this, but it is considered good practice nowadays (it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to keep the configuration outside of the container, then don't, and the suggestion I gave of changing a command-line parameter to pick the config file will probably work well. At the same time, keep the 12-factor approach in mind in case you find out you do need it in the future.
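For what it's worth, Helm itself can inline a plain properties file into a ConfigMap unchanged via .Files.Get, but that function can only read files located inside the chart directory, so the files from src/main/resources would first have to be copied into the chart (by the build, for instance). A minimal templates/configmap.yaml sketch, assuming the files were copied to a config/ folder inside ./deploy:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  application.properties: |-
{{ .Files.Get "config/application.properties" | indent 4 }}
  logback.xml: |-
{{ .Files.Get "config/logback.xml" | indent 4 }}

The files keep their key=value format; Helm just pastes their contents into the ConfigMap values.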

Node (Maven) to deploy the application to several environments

On Jelastic, I created a node for building an application (Maven). There are several identical environments (NGINX + Spring Boot); the difference is in the database each binds to and the configured SSL.
The task is to ensure that after the application (*.jar) is built, it is deployed to all of these environments at the same time. How can this be implemented?
When editing a project, it is possible to specify only one environment; multi-selection is not provided.
We suggest creating a few environments that use one repository branch, and running updates via the API (https://docs.jelastic.com/api/#!/api/environment.Vcs-method-Update) after pushing the whole code to VCS.
It's possible to use CloudScripting technology to attach custom logic to the onAfterBuildProject event and deploy the project to additional environments after the build is complete. Please check this JPS as an example of the code syntax. Most likely you will need to use the DeployProject API method.
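As a rough illustration of that approach, a CloudScripting add-on could look something like the sketch below. The onAfterBuildProject event and DeployProject method come from the answer above; the api-call syntax, namespace, and target environment names are assumptions to check against the linked JPS example:

type: update
name: multi-env-deploy
onAfterBuildProject:
  # hypothetical: push the freshly built project to additional environments
  - api: environment.deployment.DeployProject   # assumed namespace for DeployProject
    envName: myapp-staging                      # assumed target environment
  - api: environment.deployment.DeployProject
    envName: myapp-production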

How to manage production, test and development environments with serverless framework

I am planning to build an enterprise application using AWS Lambda and the Serverless framework.
I want to separate the dev, test, and prod environments, and I am planning to use AWS Parameter Store for it.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start, the production configuration should not be obtained.
It should be obtainable only when the serverless function has been successfully deployed to AWS Lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless framework, you have to set up the stage. This value can be passed as a parameter when executing sls commands.
If you keep your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo where only very specific people have access, and then make a reference to it in your serverless.yml. Ex:
custom: ${file(./config/${opt:stage, 'dev'}.json)} — then in your config folder you create the prod.json file, but it just points to the real one in the new repo you created. Note: this will make your project harder to maintain.
Considering you don't want your developers to execute your production environment locally, you can use the global variable that serverless-offline sets (IS_OFFLINE) to block the execution. You could also simply ask them not to do so.
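A minimal serverless.yml sketch tying the stage and the per-stage config file together; the service name, runtime, and the dbTable key are assumptions:

service: my-service
provider:
  name: aws
  runtime: nodejs18.x
  stage: ${opt:stage, 'dev'}                        # stage comes from the CLI
custom: ${file(./config/${opt:stage, 'dev'}.json)}  # per-stage config file
functions:
  api:
    handler: handler.main
    environment:
      DB_TABLE: ${self:custom.dbTable}              # assumed key in the stage's JSON file

Running sls deploy --stage production then resolves ./config/production.json, which in the setup above would just point at the real file in the restricted repo.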
Here is what should be good practice and a solution based on your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access rights. When your developers try to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: in this case, the person/group with access to the production VPC will have to do the deploy.
If this answer does not suffice, could you please clarify which resource(s) are sensitive across your Serverless project? I am taking for granted it is the DB, as that is the most common scenario.

How can I have separate APIs for staging and production environments on Heroku?

I was just checking on how pipelines work in Heroku. I want the staging and production apps to be the same except that they should access different API endpoints.
How could I achieve that?
Heroku encourages getting configuration from the environment:
A single app always runs in multiple environments, including at least on your development machine and in production on Heroku. An open-source app might be deployed to hundreds of different environments.
Although these environments might all run the same code, they usually have environment-specific configurations. For example, an app’s staging and production environments might use different Amazon S3 buckets, meaning they also need different credentials for those buckets.
An app’s environment-specific configuration should be stored in environment variables (not in the app’s source code). This lets you modify each environment’s configuration in isolation, and prevents secure credentials from being stored in version control. Learn more about storing config in the environment.
On a traditional host or when working locally, you often set environment variables in your .bashrc file. On Heroku, you use config vars.
In this instance you might use an environment variable called API_BASE that gets set to the base URL of your staging API on your staging instance and to the base URL of your production API in production.
Exactly how you read those values depends on the technology you're using, but if you look for "environment variables" in your language's documentation you should be able to get started.
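As a concrete sketch (the app names and URLs are made up), the variable can be set once per app with the standard Heroku CLI:

# staging app talks to the staging API
heroku config:set API_BASE=https://api.staging.example.com --app myapp-staging
# production app talks to the production API
heroku config:set API_BASE=https://api.example.com --app myapp-production

Your code then reads API_BASE from the environment (process.env.API_BASE in Node.js, for example) and never hard-codes either URL.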

How can I use TeamCity to do Production releases safely?

We currently use TeamCity to build a deployment artifact, then a further TeamCity task takes that artifact and deploys it to our development and testing servers on demand.
We can store the passwords and other secret data in properties files that we check into source control, as these are all internal servers and the developers have full access to them.
However, for releases to production (and our final test layer) there are secret passwords and configuration that we don't want checked into normal source control, or discoverable by developers. So to do 'real' deployments we have to hand the artifact over to another team, and they maintain a properties file with the production values.
What methods exist to store these secrets and allow TeamCity to run a deploy without ever leaking the secrets out?
(Note: I am one of the devs, and it is not a trust issue... I don't want to have the ability to find out prod passwords so I can never accidentally know them and do some horrific damage!)
Probably what you need here is to create a separate project with a narrower scope of permissions (for example, allow only certain people to edit build configurations). In this project, create a build configuration responsible for deployment. In this configuration, you can define a typed parameter of type 'password' to store the password to the production environment.
Another option is to use the Deployer plugin, especially its ability to deploy over SSH with private-key authentication.
If you are OK with using a third-party solution, consider something like CloudMunch, which can help you perform release-management functions with these secure parameters collected at deploy time and encrypted post-deployment.
Disclaimer: I work with CloudMunch
You can do two things.
Use a TeamCity project to deploy artifacts for production only. This will be accessible only to ops members.
TeamCity also supports running agents under different user IDs. You can create a new user ID which has access to the production "secrets" (passwords and configuration). Use this ID to run the targets from the first step.
