Unable to access Jenkins web pages after setting the HOME environment variable - Windows

I am struggling to get Jenkins to work with SSH, and after looking at a number of questions and answers, the solution seems to involve setting the Windows environment variable HOME.
When I set this environment variable and restart Jenkins, it starts properly, but I can't access it via the URL:
http://localhost:8080
Once I get rid of this variable and restart Jenkins, it works fine again.
I am not sure why this variable is wreaking havoc, or how others have managed to set it and get things to work.
The outcome is the same when I remove the Windows environment variable and replace it with the system property inside Jenkins.
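For reference, a hedged sketch of the machine-wide variant described above (the path is illustrative, and the service name may differ on your install):
:: set HOME for the whole machine, then restart the Jenkins service
setx HOME "C:\Users\jenkins" /M
net stop jenkins
net start jenkins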
Appreciate any suggestions/advice.
Thanks

Related

What is the proper way to use Flyway environment variables in Gradle

At the moment I have two databases, one local and one dev. I configured the local one using config files and a specific Gradle task, and that runs fine.
However, since I'm trying to simulate what would happen in my pipeline, how can I set things up so that the task reads from the environment variables on my machine? So far this is what I have.
I checked the environment variable by running
echo $FLYWAY_URL
This returned
jdbc:postgresql://localhost:5432/postgres
This means the variable exists. Next, I set up my task like this in Gradle:
def jdbcDevUrl = System.getenv('FLYWAY_URL') // read the JDBC URL from the environment

task migrateDev(type: FlywayMigrateTask) {
    url = jdbcDevUrl
    user = 'myUsr2'
    password = 'mySecretPwd2'
    locations = ['filesystem:doc/flyway/migrations']
}
However, this does not work at all. I have also tried running without setting the url property here, hoping it would be picked up automatically, but that does not work either.
Also using a config file with
flyway.url=${FLYWAY_URL}
does not work either. I'm using the Community Edition.
All I get is
Unable to connect to the database. Configure the url, user and password!
Any help would be highly appreciated.
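A minimal sketch of one commonly suggested pattern for this setup (hedged: the FLYWAY_USER and FLYWAY_PASSWORD names simply mirror the FLYWAY_URL convention, and the fallback values are illustrative, not confirmed plugin behaviour):
// fall back to local defaults when the pipeline variables are absent
def jdbcUrl = System.getenv('FLYWAY_URL') ?: 'jdbc:postgresql://localhost:5432/postgres'
def jdbcUser = System.getenv('FLYWAY_USER') ?: 'myUsr2'
def jdbcPassword = System.getenv('FLYWAY_PASSWORD') ?: 'mySecretPwd2'

task migrateDev(type: FlywayMigrateTask) {
    url = jdbcUrl
    user = jdbcUser
    password = jdbcPassword
    locations = ['filesystem:doc/flyway/migrations']
}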

Issue setting Jenkins environment variables on EC2-Fleet

We are having issues setting the Jenkins environment variables on our dynamic EC2-Fleet.
We already have a fixed master (Linux) and a fixed Windows slave, but wanted to add slaves dynamically when the load on the system becomes heavy.
For this we created a Spot Instance Request in AWS that spins up Linux machines from an AMI, and we control this via the EC2-fleet-plugin in Jenkins.
Before this EC2-fleet can be of any help, our jobs must be able to run on its nodes.
Most of our jobs use Jenkinsfiles and need certain environment variables to be set, but the EC2-fleet-plugin does not offer a way to set environment variables (https://issues.jenkins-ci.org/browse/JENKINS-36544).
As suggested on that ticket (JENKINS-36544), we tried to set the environment variables in "System Configuration" for the dynamic EC2 slaves and to set the environment variables for the other nodes in the "Node Configuration", overriding the "System Configuration", or so we thought.
This would work if this bug didn't exist: https://issues.jenkins-ci.org/browse/JENKINS-44425. Because of this bug, the "System Configuration" overrides the "Node Configuration" instead of vice versa, so we can't use this approach, as the existing nodes would no longer have the correct environment variables.
As a last resort we tried to set the environment variables on the dynamic EC2 slaves by creating an /etc/profile.d/jenkinsvars.sh on the AMI used by the Spot Instance Request.
Such a script is automatically run at login, system-wide (https://help.ubuntu.com/community/EnvironmentVariables#A.2Fetc.2Fprofile.d.2F.2A.sh).
In addition, we attempted to set them in /home/ubuntu/.profile on the AMI, singling out the ubuntu user, which is the user running the Jenkins agent (https://help.ubuntu.com/community/EnvironmentVariables#A.2BAH4-.2F.profile).
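For concreteness, a sketch of the kind of script placed in /etc/profile.d (the variable names are hypothetical):
# /etc/profile.d/jenkinsvars.sh - sourced by login shells system-wide
export BUILD_REGION=eu-west-1
export ARTIFACT_REPO=https://artifacts.example.com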
But it appears Jenkins does not use these environment variables, only its own...
A way that works is to adapt the jobs to load a Groovy file that is part of the AMI and sets the environment variables we need, but that would mean changing almost all the jobs we have, as well as all the Jenkinsfiles included in our repositories (Bitbucket project).
We would like to avoid this.
Try the following strategy:
Leverage User Data to run a shell script when the Spot Instance launches. This is the primary way recommended by Amazon and the plugin authors.
Instead of exporting the variables into the environment, have the user data script save them to /var/tmp, /etc/profile, or the Parameter Store (refer to the answers in this SO question). If you want the values encrypted, use the Parameter Store with KMS; if not, any of the others will do. Choose whichever answer best fits your needs.
Alter your Jenkins job to pause until your user data script has completed (refer to the plugin's documentation).
Change your Jenkins job to pick up the variables from the location you chose in step 2, as sketched below.
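A hedged sketch of steps 2 and 4, assuming the /var/tmp option (the file path and variable names are placeholders):
#!/bin/bash
# EC2 user data: persist the variables where any later process can read them
cat > /var/tmp/jenkinsvars.sh <<'EOF'
export BUILD_REGION=eu-west-1
export ARTIFACT_REPO=https://artifacts.example.com
EOF
chmod 0644 /var/tmp/jenkinsvars.sh
The job then sources that file before any command that needs the variables, e.g. sh '. /var/tmp/jenkinsvars.sh && ./build.sh' in a Jenkinsfile.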
Try restarting the server environment.
Just saying.
So we can't use this as the existing nodes would not have the correct environment variables anymore.
Update your existing nodes to load the environment variables when they are provisioned/started, then remove the variables from the System configuration and add them to the Node configuration.
You could also try setting the Slave command prefix field to ENV_VAR1=val1 ENV_VAR2=val2, although I haven't tried that.
Thirdly, you can try putting your variables directly into /etc/profile, which should always be loaded no matter which user logs in.
However, the easiest by far is to make all of your drones/agents exactly the same and set your environment variables in whatever scripts you run to build your projects. Use Docker to pull dependencies onto the agents as needed during the job and to set up specific environments for your applications, as sketched below. This greatly simplifies the maintenance and configuration of your agents.
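A hedged sketch of that pattern in a declarative Jenkinsfile (the image name is illustrative; assumes the Docker Pipeline plugin is installed):
pipeline {
    // any identical agent can run this; the build environment lives in the image
    agent { docker { image 'maven:3.8-openjdk-11' } }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B verify'
            }
        }
    }
}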
The Jenkins version and the EC2 plugin version are missing from the question, but according to the description in this merged pull request, the bug should be fixed now: https://github.com/jenkinsci/ec2-plugin/pull/440#issuecomment-597160730
Jenkins version: the change works on both <=2.204 and >=2.205
EC2 plugin version: >=ec2-1.50
JENKINS-36544 - Fix Node Properties on Jenkins 2.205+ (#440) #jhansche
From the Pull Request description:
Navigate to the cloud configuration screen (this moves to a new page >=2.205)
Click "Add a new cloud"
Click "Amazon EC2"
Under the "AMIs" section, click "Add"
At the bottom of the AMI block, expand "Advanced"
Expect to see the "Node Properties" section at the bottom of the block
The Node Properties section contains the Environment variables entry.

How do you reference defined variables in a SQL Server Database Project?

I've read many questions on this such as:
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/da4bdb11-fe42-49db-bb8d-288dd1bb72a2/sqlcmd-vars-in-create-table-script?forum=ssdt
and
How to run different pre and post SSDT publish scripts depending on the deploy profile
What I'm trying to achieve is a way of defining a set of scripts based on the environment being deployed to. The idea is that the environment is passed in as a SQLCMD variable by the Azure DevOps pipeline into a variable called $(ServerName), which I've set up in the SQL Server database project under properties with a default of 'DEV'.
This is then used in the post deployment script like this:
:r .\PostDeploymentScripts\$(ServerName)\index.sql
This should therefore pick up the correct index.sql file based on the $(ServerName) variable. When I tested this by publishing, entering 'QA' for the $(ServerName) variable, and generating the script, it still contained the 'DEV' scripts, even though the top of the generated script showed the variable had been set correctly.
How do I get the post deployment script to reference the $(ServerName) variable correctly so I can dynamically set the correct reference path?
Contrary to this nice post (https://stackoverflow.com/a/54482350/11035005), it appears that the :r directive is evaluated at compile time and inlined into the DACPAC before the XML publish profiles are even evaluated, so this is not possible as described.
The values used are the defaults or locals from the build configuration and can only be controlled from there.
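A hedged sketch of the workaround this constraint usually forces: include every environment's script at build time and branch at runtime on the SQLCMD variable (the paths follow the question's layout; note that GO batch separators inside the included files would break the IF blocks):
-- every file is inlined into the DACPAC at build time;
-- the IF decides which inlined block actually runs at deploy time
IF '$(ServerName)' = 'DEV'
BEGIN
    :r .\PostDeploymentScripts\DEV\index.sql
END
IF '$(ServerName)' = 'QA'
BEGIN
    :r .\PostDeploymentScripts\QA\index.sql
END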

How can I add Environment Variables in IDEA before running a test?

I have a Test::Unit run configuration, and at runtime I would like a prompt in which I enter an environment (e.g. platform, staging, or production) so that the test runs in the specified environment. The related links for all environments are in environments.yml in the codebase.
Right now, if I manually add an environment variable called DOMAIN and set its value to 'platform', the test runs in the specified environment.
To achieve this, I have tried the following so far:
Created a shell script which sets the DOMAIN env variable (sketched below)
In "Before launch" in IDEA, I added an external tool
That tool calls the script created in step 1. A prompt appears that takes in the environment to run the test, and that environment is passed as a parameter to the shell script.
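For concreteness, a sketch of the kind of script from step 1 (hypothetical):
#!/bin/bash
# receives the chosen environment as the first argument and exports it
export DOMAIN="$1"
# note: an exported variable only affects this script's own process and its
# children, not the IDEA process that launched the script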
The problem is, IDEA does not pick up the change. Is there an easy way of doing this?

Windows - Private hosts file for a certain environment

I have an application running on a dev server and connecting to a dev-db host running an Oracle instance.
Now I'm deploying the application on a prod/prod-db machine.
Since the dev-db URL is hardcoded inside the Java code, the just-copied binaries still point to dev-db. As a quick workaround I added a line to the Windows hosts file on prod so that dev-db now points to the prod-db IP address. It works, but I'm not very satisfied with this global-scope solution.
I was wondering if there is a way to make a hosts file "private" to a certain environment, i.e. only valid in the scope of my running application.
No, there's no way to do this, and it's a bad approach anyway.
You should instead fix the real problem, which is the hardcoding of the address inside your Java code. Put such things in a properties file, and use a different properties file for production.
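A minimal sketch of that approach (the file name and key are illustrative):
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Properties;

public class DbConfig {
    public static void main(String[] args) throws Exception {
        // ship a different db.properties per environment instead of editing hosts files
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(Paths.get("db.properties"))) {
            props.load(in);
        }
        // e.g. db.url=jdbc:oracle:thin:@prod-db:1521/ORCL
        System.out.println(props.getProperty("db.url"));
    }
}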
