How do I change a parameter value in a parameter file dynamically in Informatica PowerCenter - shell

Parameter files have a variable $$envFile that has been hardcoded as DEV, UAT or PROD ($$envFile = DEV for the Dev environment, $$envFile = UAT for the QA environment, etc.). We need to make this dynamic so that when a parameter file is copied to the QA or prod environment, the value is automatically updated to match that environment.
How can this be achieved via Informatica PowerCenter or a shell script?

This is not really a question about Informatica. What you are describing is deployment to different environments, which should be handled by a CI/CD tool - NOT Informatica.
Set up a Jenkins job, a GitHub Action, an Azure Pipeline - or whatever is available in your organization - and have it rewrite the parameter file for the target environment as part of the deployment.
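As a rough sketch, such a step could be a small shell script; the argument names, the file path and the exact $$envFile format are assumptions you would adapt to your own parameter files:

#!/bin/sh
# Hypothetical CI/CD step: rewrite $$envFile in a parameter file that was
# copied to a new environment.
# Usage: set_env.sh <ENV_NAME> <PARAM_FILE>   e.g. set_env.sh UAT wf_load.param
ENV_NAME="$1"      # target environment: DEV, UAT or PROD
PARAM_FILE="$2"    # path to the copied parameter file

# Replace whatever value currently follows "$$envFile =" with the target value.
# Assumes GNU sed for in-place editing (-i).
sed -i 's/^\$\$envFile[[:space:]]*=.*/$$envFile = '"${ENV_NAME}"'/' "$PARAM_FILE"

The CI/CD job would call this with the environment it is deploying to, so the same parameter file can be promoted from DEV to UAT to PROD without manual edits.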

Related

How to manage production, test and development environments with serverless framework

I am planning to build an enterprise application using AWS Lambda and the Serverless Framework.
I want to separate the dev, test and prod environments, and I am planning to use AWS Parameter Store for it.
I don't want my production environment configuration to be exposed to developers. If a developer runs the command serverless offline -s production start, the production configuration should not be obtained.
It should be obtained only when the serverless function has been successfully deployed to AWS Lambda.
Here are a few considerations based on your question:
To have different environments with the Serverless Framework you have to set the stage. This value can be passed as a parameter when executing sls commands (see the sketch after this list).
If you are keeping your code in a repo, the developers will have access to all the configurations. If this is really important, you could keep the production configuration in a different repo where only very specific people have access to it, and then reference it in your serverless.yml. Ex:
custom: ${file(./config/${opt:stage, 'dev'}.json)} and then in your config folder you create the prod.json file, but pointing to the real one in the new repo you created. Note: this would make your project harder to maintain.
Considering you don't want your developers to run your production environment locally, you can use the global variable set by serverless offline to block the execution. You could also simply ask them not to do so.
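As a rough illustration of the stage parameter (service and stage names here are just placeholders), the same service is deployed or run locally against a specific stage like this:

# Deploy the same service to different environments by changing the stage
serverless deploy --stage dev
serverless deploy --stage production

# Run locally against a given stage (what the question wants to prevent for production)
serverless offline --stage dev start

Combined with the ${file(./config/${opt:stage, 'dev'}.json)} reference above, the --stage value decides which JSON file under config/ is loaded.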
Here is what should be a good practice and solution based on your problem:
Considering you have a production environment you want to isolate from a given group in your company, you should create VPCs and configure access to their resources accordingly.
Then you create users with different access. When a developer tries to execute code that accesses a resource (DynamoDB, for example) in a VPC they don't have access to, they will be blocked.
Use aws configure to define which user will execute the sls command.
Your development team will still have access to your configuration file.
Note: in this case the person/group with access to the production VPC will have to do the deploy.
If the answer does not suffice, could you please clarify which type of resource(s) are sensitive across your Serverless project? I am taking for granted it is the DB, as it is the most common scenario.

How can I have separate APIs for staging and production environments on Heroku?

I was just checking on how pipelines work in Heroku. I want the staging and production apps to be the same except that they should access different API endpoints.
How could I achieve that?
Heroku encourages getting configuration from the environment:
A single app always runs in multiple environments, including at least on your development machine and in production on Heroku. An open-source app might be deployed to hundreds of different environments.
Although these environments might all run the same code, they usually have environment-specific configurations. For example, an app’s staging and production environments might use different Amazon S3 buckets, meaning they also need different credentials for those buckets.
An app’s environment-specific configuration should be stored in environment variables (not in the app’s source code). This lets you modify each environment’s configuration in isolation, and prevents secure credentials from being stored in version control. Learn more about storing config in the environment.
On a traditional host or when working locally, you often set environment variables in your .bashrc file. On Heroku, you use config vars.
In this instance you might use an environment variable called API_BASE that gets set to the base URL of your staging API on your staging instance and to the base URL of your production API in production.
Exactly how you read those values depends on the technology you're using, but if you look for "environment variables" in your language's documentation you should be able to get started.
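For example, with the Heroku CLI you could set a different API_BASE on each app in the pipeline; the app names and URLs below are placeholders:

# Set the config var separately on the staging and production apps
heroku config:set API_BASE=https://api-staging.example.com --app myapp-staging
heroku config:set API_BASE=https://api.example.com --app myapp-production

# Inspect what an app currently has
heroku config --app myapp-staging

Your application code then reads API_BASE from the environment at runtime, so the same slug promoted through the pipeline behaves correctly in each stage.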

LUIS: connect to staging environment using the LuisClient

Is it possible to choose between the staging and production environments when I create a LuisClient instance from the Microsoft.Cognitive.LUIS NuGet package?
As you can see on the LUIS portal, the difference between calls to staging vs production is a staging=true value in the query string, for example for a project in Europe:
https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/myAppId?subscription-key=mySubscriptionKey&staging=true&verbose=true&timezoneOffset=0&q=
This variable is not available in Microsoft.Cognitive.LUIS.
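Until the package exposes that option, one possible workaround is to call the endpoint directly with staging=true, as in the URL above; the app ID, subscription key and region are the placeholders from the question, and the utterance is just an example:

# Query the staging slot of the LUIS app directly over HTTP
curl "https://westeurope.api.cognitive.microsoft.com/luis/v2.0/apps/myAppId?subscription-key=mySubscriptionKey&staging=true&verbose=true&timezoneOffset=0&q=turn%20on%20the%20lights"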

Pass machine name into variable string in Octopus Deploy

I have a web server and an app server, which are set up as two separate roles that I can deploy to in Octopus Deploy, named "My-Web-Server" and "My-App-Server" respectively.
I have a variable which is a file share path on my app server. This is an app setting in the web config in my web project, and I want to transform this setting as part of the deployment.
The machine name of the app server will be different depending on the environment that I am deploying to, therefore I want to pass the machine name into the variable by referencing the app server role name, something like:
\\$OctopusParameters["My-App-Server.Machine.Name"]\MyShareName
Is this possible? Otherwise I will have to create a variable for each environment with the machine name explicitly set.
You can define Octopus Variables that reuse other Octopus variables, e.g. Name = MyVariable, Value = Something#{OctopusMachineName}. This was introduced in 1.2.2. The only other way around this (to avoid defining a variable per environment) is by mapping drives to the network shares - then the share names become constant across all environments.

Concurrent Environments in AppHarbor

How can I host concurrent environments of a single application on AppHarbor?
We are currently in development so a single environment of DEBUG is sufficient, with builds triggered from the develop branch.
Soon we will need a PROD environment, triggered by changes to the master branch (or perhaps manually).
How can I configure AppHarbor such that both the DEBUG & PROD environments are accessible at any given time for a single application?
With hostnames such as:
http://debug-myapp.apphb.com
http://prod-myapp.apphb.com
For now you will have to create two applications, one for your debug environment and one for your production environment. You can also set up the tracking branch for each application. Here is a link where we describe how to manage environments.
