Jenkins withEnv in pipeline doesn't set environment variables on Windows

I want to be able to change environment variables on Windows in a Jenkins pipeline, and only in that pipeline. How do I do it?
The environment variables are set in the system as:
XXX_DEV_DATA_DIR = E:\tools\jenkins\workspace\data\XXX-IDM
XXX_DEV_LIBS_DIR = E:\tools\jenkins\workspace\dev\libs
I tried the withEnv step, but it has no effect:
node
{
    withEnv(["XXX_DEV_LIBS_DIR=E:\\tools\\jenkins\\workspace\\dev\\libs', 'XXX_DEV_DATA_DIR=E:\\tools\\jenkins\\workspace\\data\\XXX-IDM-Testing"])
    {
        dir('E:\\tools\\jenkins\\workspace\\samples\\GetXXXSettings\\bin\\x64\\Release')
        {
            bat 'GetXXXSettings.exe'
        }
    }
}
The GetXXXSettings.exe application:
class Program
{
    static void Main(string[] args)
    {
        var data = Environment.GetEnvironmentVariable("XXX_DEV_DATA_DIR");
        var libs = Environment.GetEnvironmentVariable("XXX_DEV_LIBS_DIR");
        Console.WriteLine("XXX ENVIRONMENT VARIABLES");
        Console.WriteLine();
        Console.WriteLine($"XXX_DEV_DATA_DIR = {data}");
        Console.WriteLine($"XXX_DEV_LIBS_DIR = {libs}");
        Console.WriteLine();
        Console.WriteLine("END");
    }
}
The result is as follows:
XXX ENVIRONMENT VARIABLES
XXX_DEV_DATA_DIR = E:\tools\jenkins\workspace\data\XXX-IDM
XXX_DEV_LIBS_DIR = E:\tools\jenkins\workspace\dev\libs', 'XXX_DEV_DATA_DIR=E:\tools\jenkins\workspace\data\XXX-IDM-Testing
END
The environment variable XXX_DEV_DATA_DIR is unchanged, and I'm not sure what is happening to XXX_DEV_LIBS_DIR.

It looks like the issue is the quoting in your withEnv list: because the opening and closing quote characters are mismatched, the list contains a single string, so XXX_DEV_LIBS_DIR is set to everything after the first '=' (which matches your output) and XXX_DEV_DATA_DIR is never overridden. Try updating it to:
node
{
    withEnv(['XXX_DEV_LIBS_DIR=E:\\tools\\jenkins\\workspace\\dev\\libs', 'XXX_DEV_DATA_DIR=E:\\tools\\jenkins\\workspace\\data\\XXX-IDM-Testing'])
    {
        dir('E:\\tools\\jenkins\\workspace\\samples\\GetXXXSettings\\bin\\x64\\Release')
        {
            bat 'GetXXXSettings.exe'
        }
    }
}

Related

Trying to parse options with the following code (Apache Commons CLI 1.4), but it never gets into the if block

I have the following code in JDeveloper and am trying to parse the command-line option, but I can't seem to figure it out.
package project1;

import org.apache.commons.cli.*;

public class cmdParser
{
    public static void main(String[] args)
    {
        try
        {
            Options options = new Options();
            options.addOption("t", false, "display current time");
            CommandLineParser parser = new DefaultParser();
            CommandLine cmd = parser.parse(options, args);
            if (cmd.hasOption("t"))
            {
                String optionT = cmd.getOptionValue("t");
                System.out.println("Option t" + optionT);
            }
            else
            {
                System.out.println("Can't get the option");
            }
        }
        catch (ParseException exp)
        {
            System.out.println("Error:" + exp.getMessage());
        }
    }
}
Output (the original post shows a screenshot): the program never enters the if branch and prints "Can't get the option".
How do you expect to get the option if you don't pass such an option? I'm not sure how it is done in JDeveloper, but from the command line:
java cmdParser -t "my test option"
Furthermore, you should use options.addOption("t", true, "display current time"); if you want to pass a value to the option. If the second parameter is false, the option is just a flag with no argument.
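For reference, here is a minimal sketch of the corrected parsing with the option declared to take a value (the class name CmdParserFixed is just illustrative, not from the original code):
import org.apache.commons.cli.*;

public class CmdParserFixed
{
    public static void main(String[] args) throws ParseException
    {
        Options options = new Options();
        // true = the option takes an argument, so the value passed after -t can be read back
        options.addOption("t", true, "display current time");
        CommandLine cmd = new DefaultParser().parse(options, args);
        if (cmd.hasOption("t"))
        {
            System.out.println("Option t = " + cmd.getOptionValue("t"));
        }
        else
        {
            System.out.println("Can't get the option");
        }
    }
}
Invoked as java CmdParserFixed -t "my test option" it prints the value; invoked without -t it falls into the else branch.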

Is there a way to Alias a dead network path to a local directory in Windows 7?

I have a bunch of old batch scripts that I may need to revive, and they contain hundreds of references to a specific, now-dead network path. Is there a way to alias \\myNetWorkPath.com\SomeFolder\SomeFolder2 to a specific local Windows 7 directory?
For example, \\myNetWorkPath.com\SomeFolder\SomeFolder2 alias to C:\SomeFolder2.
Again, \\myNetWorkPath.com\SomeFolder\SomeFolder2 is a dead (not working anymore) network path.
Please let me know if that doesn’t make any sense.
Thanks!
Following up on my "pick a language and write a quick and dirty application that will change your code base" comment, here's a bit of C# that could get you going.
static void Main(string[] args)
{
    // foreach file you drop onto the compiled EXE
    foreach (string item in args)
    {
        if (System.IO.File.Exists(item)) // if the file path actually exists
        {
            ChangeThePath(item);
        }
    }
}

private static void ChangeThePath(string incomingFilePath)
{
    string backupCopy = incomingFilePath + ".bck";
    System.IO.File.Copy(incomingFilePath, backupCopy); // make a backup
    string newPath = "c:\\This\\New\\Path\\Is\\Much\\Better";
    string oldPath = "c:\\Than\\This\\Deprecated\\One";
    using (System.IO.StreamWriter sw = new System.IO.StreamWriter(incomingFilePath))
    {
        using (System.IO.StreamReader sr = new System.IO.StreamReader(backupCopy))
        {
            string currentLine = string.Empty;
            while ((currentLine = sr.ReadLine()) != null)
            {
                sw.WriteLine(currentLine.Replace(oldPath, newPath));
            }
        }
    }
}

Jenkins declarative pipeline job - how to distribute parallel steps across slaves?

I am running a Declarative Pipeline where one of the steps runs a (very long) integration test. I'm trying to split my test into several smaller ones and run them in parallel over several nodes. I have 8 of these smaller tests and I have 8 nodes (under a label), so I'd like to have each test run on a separate node. Unfortunately, two tests — when run on the same node — interfere with each other, and so both fail.
I need to first get the list of available nodes and then run the smaller tests in parallel, one on each node; if there are not enough nodes, some of the smaller tests need to wait until a node is free.
However, what happens is that when asking for a node by label, two of the smaller tests usually get the same node, and so both fail. The nodes are configured to run up to 3 executors (otherwise the whole system halts), so I can't change that.
My current configuration for the smaller test is:
stage('Integration Tests') {
    when {
        expression { params.TESTS_INTEGRATION }
    }
    parallel {
        stage('Test1') {
            agent { node { label 'my_builder' } }
            steps {
                script {
                    def shell_script = getShellScript("Test1")
                    sh "${shell_script}"
                }
            }
        }
I am able to get the list of available slaves from a label like this:
pipeline {
    stages {
        // ... other stages here ...
        stage('NodeList') {
            steps {
                script {
                    def nodes = getNodeNames('my_builder')
                    free_nodes = []
                    for (def element = 0; element < nodes.size(); element++) {
                        usenode = nodes[element]
                        try {
                            // Give it 5 seconds to run the nodetest function
                            timeout(time: 5, unit: 'SECONDS') {
                                node(usenode) {
                                    nodetest()
                                    free_nodes += usenode
                                }
                            }
                        } catch (err) {
                        }
                    }
                    println free_nodes
                }
            }
        }
Where
def getNodeNames(String label) {
    def lgroup = Jenkins.instance.getLabel(label)
    def nodes = lgroup.getNodes()
    def result = []
    if (nodes.size() > 0) {
        for (def element = 0; element < nodes.size(); element++) {
            result += nodes[element].getNodeName()
        }
    }
    return result
}

def nodetest() {
    sh('echo alive on \$(hostname)')
}
How can I get the node name programmatically out of the free_nodes array and direct the stage to use that?
I've figured it out, so for the people from the future:
It turns out you can run a Scripted Pipeline inside a Declarative Pipeline, like this:
pipeline {
    // ...
    stages {
        stage('SomeStage') {
            steps {
                script {
                    // ... your scripted pipeline here
                }
            }
        }
    }
}
The script can do anything, and that includes... running a pipeline!
Here is the script:
script {
    def builders = [:]
    def nodes = getNodeNames('my_label')
    // let's find the free nodes
    String[] free_nodes = []
    for (def element = 0; element < nodes.size(); element++) {
        usenode = nodes[element]
        try {
            // Give it 5 seconds to run the nodetest function
            timeout(time: 5, unit: 'SECONDS') {
                node(usenode) {
                    nodetest()
                    free_nodes += usenode
                }
            }
        } catch (err) {
            // do nothing
        }
    }
    println free_nodes
    def tests = params.TESTS_LIST.split(',')
    for (int i = 0; i < tests.length; i++) {
        // select the test to run
        def the_test = tests[i]
        // select on which node to run it
        def the_node = free_nodes[i % free_nodes.length]
        // here comes the scripted pipeline: prepare steps
        builders[the_test] = {
            // run on the selected node
            node(the_node) {
                // lock the resource with the name of the node so two tests can't run there at the same time
                lock(the_node) {
                    // name the stage
                    stage(the_test) {
                        println "Running on ${NODE_NAME}"
                        def shell_script = getShellScript("${the_test}")
                        sh "${shell_script}"
                    }
                }
            }
        }
    }
    // run the steps in parallel
    parallel builders
}

Spark on Windows - What exactly is winutils and why do we need it?

I'm curious! To my knowledge, HDFS needs datanode processes to run, and this is why it only works on servers. Spark can run locally, though, but it needs winutils.exe, which is a component of Hadoop. What exactly does it do? How is it that I cannot run Hadoop on Windows, but I can run Spark, which is built on Hadoop?
I know of at least one use: it is for running shell commands on Windows. You can find it in org.apache.hadoop.util.Shell; other modules depend on this class and use its methods, for example the getGetPermissionCommand() method:
static final String WINUTILS_EXE = "winutils.exe";
...
static {
    IOException ioe = null;
    String path = null;
    File file = null;
    // invariant: either there's a valid file and path,
    // or there is a cached IO exception.
    if (WINDOWS) {
        try {
            file = getQualifiedBin(WINUTILS_EXE);
            path = file.getCanonicalPath();
            ioe = null;
        } catch (IOException e) {
            LOG.warn("Did not find {}: {}", WINUTILS_EXE, e);
            // stack trace comes at debug level
            LOG.debug("Failed to find " + WINUTILS_EXE, e);
            file = null;
            path = null;
            ioe = e;
        }
    } else {
        // on a non-windows system, the invariant is kept
        // by adding an explicit exception.
        ioe = new FileNotFoundException(E_NOT_A_WINDOWS_SYSTEM);
    }
    WINUTILS_PATH = path;
    WINUTILS_FILE = file;
    WINUTILS = path;
    WINUTILS_FAILURE = ioe;
}
...
public static String getWinUtilsPath() {
    if (WINUTILS_FAILURE == null) {
        return WINUTILS_PATH;
    } else {
        throw new RuntimeException(WINUTILS_FAILURE.toString(),
            WINUTILS_FAILURE);
    }
}
...
public static String[] getGetPermissionCommand() {
    return (WINDOWS) ? new String[] { getWinUtilsPath(), "ls", "-F" }
        : new String[] { "/bin/ls", "-ld" };
}
Max's answer covers the actual place where winutils is referenced; let me give a brief background on why it is needed on Windows.
From Hadoop's Confluence page itself:
Hadoop requires native libraries on Windows to work properly -that
includes accessing the file:// filesystem, where Hadoop uses some
Windows APIs to implement posix-like file access permissions.
This is implemented in HADOOP.DLL and WINUTILS.EXE.
In particular, %HADOOP_HOME%\BIN\WINUTILS.EXE must be locatable
And I think you should be able to run both Spark and Hadoop on Windows.
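As an illustration only (not from the question), a common way to make this work locally is to download winutils.exe into a bin subfolder of some directory and point Hadoop at that directory before anything Hadoop- or Spark-related is initialized; the C:\hadoop path below is just an assumed location:
public class WinutilsSetup {
    public static void main(String[] args) {
        // Hadoop's Shell class resolves %HADOOP_HOME%\bin\winutils.exe via the
        // hadoop.home.dir system property (or the HADOOP_HOME environment variable).
        System.setProperty("hadoop.home.dir", "C:\\hadoop");

        // ... create the SparkSession / Hadoop FileSystem after this point ...
    }
}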

Java 8 - error when compiling a lambda expression

public class GrammarValidityTest {
    private String[] dataPaths = new String[] {"data/", "freebase/", "tables/", "regex/"};

    @Test(groups = {"grammar"})
    public void readGrammars() {
        try {
            List<String> successes = new ArrayList<>(), failures = new ArrayList<>();
            for (String dataPath : dataPaths) {
                // Files.walk(Paths.get(dataPath)).forEach(filePath -> {
                    try {
                        if (filePath.toString().toLowerCase().endsWith(".grammar")) {
                            Grammar test = new Grammar();
                            LogInfo.logs("Reading grammar file: %s", filePath.toString());
                            test.read(filePath.toString());
                            LogInfo.logs("Finished reading", filePath.toString());
                            successes.add(filePath.toString());
                        }
                    }
                    catch (Exception ex) {
                        failures.add(filePath.toString());
                    }
                });
            }
            LogInfo.begin_track("Following grammar tests passed:");
            for (String path : successes)
                LogInfo.logs("%s", path);
            LogInfo.end_track();
            LogInfo.begin_track("Following grammar tests failed:");
            for (String path : failures)
                LogInfo.logs("%s", path);
            LogInfo.end_track();
            assertEquals(0, failures.size());
        }
        catch (Exception ex) {
            LogInfo.logs(ex.toString());
        }
    }
}
The line beginning with // is the one that produces the error: "illegal start of expression", starting at the '>' sign.
I do not program much in Java. I just downloaded code from a fairly popular project that is supposed to run, but I got this error. Any help/fixes/explanation would be appreciated.
Run javac -version and verify that you are actually using the compiler from JDK 8; it's possible that even if your java points to the 1.8 release, your javac has a different version.
If you are using Eclipse, remember to set the source type for your project to 1.8.
Edit:
Since you are using ant, verify that your JAVA_HOME environment variable points to your jdk1.8 directory.
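If it helps, a tiny throwaway class (hypothetical, not part of the project) can show which JVM the build actually runs under, which you can compare against the output of javac -version:
public class VersionCheck {
    public static void main(String[] args) {
        // Prints the runtime that executes the build.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}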
