Can I add a file to a *.tar via Jenkins pipelines without using untar? - jenkins-pipeline

I want to add a test.txt file to a specific folder inside result.tar, which has the structure given below:
--delivery
+-files
+-scripts
+-sgbd
Is there a way to do this without untarring the archive in a Jenkins declarative pipeline?
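For reference, a minimal sketch of one way this could work, assuming GNU tar is available on the agent, result.tar is uncompressed (tar cannot append to a compressed archive), and the target folder inside the archive is delivery/files; the commands would run inside an sh step of the declarative pipeline:
mkdir -p delivery/files
cp test.txt delivery/files/
# --append (-r) adds new members at the end of the archive without extracting it
tar --append --file=result.tar delivery/files/test.txt
The appended member keeps the relative path it has on disk, which is why delivery/files is recreated locally first.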

Related

Create a Git Alias specific to one project

I have a Python project that is compiled with PyInstaller. The build usually chains a bunch of operations with && so it doesn't just compile, it does several other things as well. In short, one big command.
I can use an alias in Git Bash to reduce it to a single word, fine. But I want to know whether there is some way to distribute a bash file with that alias along with my project.
I mean, having something like alias.sh in my repo that contains the alias I want, but that only exists inside that repo, so whenever I open a terminal in that repo the alias is already available. And if someone forks or clones my project, they get the alias too, thanks to that file.
No, it's not possible.
But you can create a script file alias.sh that sets up the alias and instruct users of your repo, via the README file, to source it once in their terminal session when they need the alias: . ./alias.sh
The file alias.sh might contain:
#!/bin/sh
alias youralias='your | long | command'
Once the file has been sourced with . ./alias.sh, you can simply run youralias as a standalone command.

sed command is not working in Jenkins pipeline

There is a Jenkins pipeline that, when I call it with a particular path, e.g. ./UI/specs/**/start.specs.js, runs all the files inside the folder, but I want to execute only the single file start.specs.js.
I think sed is not properly replacing the existing entry (./UI/specs/**/start.specs.js) inside xcv.conf.js.
sed -ri "/\\.\\/specs\\//s~\\./specs/\\*\\*/\\*\\.spec\\.js~${params.TEST}~" xcv.conf.js
Please help me figure out how to run a single file on Jenkins using a parameter I specify, and whether there is an issue with this sed that prevents it from picking up the file I pass in the TEST parameter when running the build.
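As a debugging sketch rather than a confirmed fix, it can help to run the substitution by hand on the agent and check whether the pattern actually matches anything in xcv.conf.js; the pattern below just mirrors the one in the posted command and the replacement is an example value, so both need adjusting to the file's real contents:
# show what the conf file actually contains before touching it
grep -n 'specs' xcv.conf.js
# same substitution as in the pipeline, with a literal example replacement
sed -ri 's~\./specs/\*\*/\*\.spec\.js~./UI/specs/start.specs.js~' xcv.conf.js
If grep shows a path like ./UI/specs/**/*.specs.js (note specs rather than spec, and the UI/ prefix), the posted pattern would never match, which would explain why nothing gets replaced.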

Access Jenkins workspace files in pipeline script

I'm quite new to Jenkins, so apologies if the question is not detailed enough, but I swear I've done my own searching first.
I have a pipeline script that needs to process files that have been pulled from a SCM (git) in a previous step.
One of the parameters passed to the pipeline is a folder where all these files reside. There may be subfolders contained in this folder and I need to process those as well.
So, for example, I may pass a parameter ./my-folder to the pipeline and my-folder may contain the following:
./my-folder/file1.json
./my-folder/file2.json
./my-folder/subfolder/file3.json
The my-folder directory will be part of the repository cloned during the build phase.
While I was developing my Groovy script locally I was doing something similar to this:
def f = new File(folder)
but this doesn't work in Jenkins given the code is running on the master while the folder is on a different node.
After extensive research I now know there are two ways to read files in Jenkins:
Use readFile. This would be OK, but I haven't found an easy way to scan an entire folder and its subfolders to load all the files.
Use FilePath. This would be my preferred way since it's more OO, but I haven't found a way to create an instance of this class. All the approaches I've seen while searching the internet refer to the build variable, which, for reasons I don't entirely understand, is not defined in the script; in fact I'm getting groovy.lang.MissingPropertyException: No such property: build for class: WorkflowScript
I hope the question makes sense; otherwise I'd be happy to add more details.
Thanks,
Nico
I've managed to scan the content of a folder using the following approach:
sh "find ${base-folder} -name *.* > files.txt"
def files = readFile "files.txt"
and then loop through the lines in files.txt to open each file.
The problem is that this works only for text files; I'm still unable to open a binary file (e.g. a zip file) using readFile.
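One hedged sketch of a way around the binary-file limitation: do the binary handling in the sh step itself on the agent and keep readFile for text only. The folder name and the use of unzip here are assumptions based on the example above:
# unpack any zip files on the agent itself, instead of
# pulling their bytes through readFile on the master
find my-folder -type f -name '*.zip' | while read -r f; do
  unzip -o "$f" -d "extracted/$(basename "$f" .zip)"
done
(The findFiles step from the Pipeline Utility Steps plugin is another common way to enumerate workspace files, including binaries, without shelling out to find.)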

Installer for a .bin file that will run on Ubuntu

I have a .bin file that will comprise 3 files:
1. tar.gz file
2. .zip file
3. install.sh file
For now the install.sh file is empty. I am trying to write a shell script that can extract the .zip file and copy the tar.gz file to a specific location when the *.bin file is executed on an Ubuntu machine. There is a Jenkins job that will pull in these 3 files to create the *.bin file.
My question is: how do I access the tar.gz and .zip files from my shell script?
There are two general tricks that I'm aware of for this sort of thing.
The first is to use a file format that will ignore invalid data and find the correct file contents automatically (I believe zip is one such format/tool).
When this is the case you just run the tool on the packed/concatenated file and let the tool do its job.
For formats and tools where that doesn't work or isn't possible, the general trick is to embed markers in the concatenated file so that the original script ignores the payload data but can operate on itself to "extract" the embedded data, allowing the other tool to work on the extracted contents.
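To illustrate that marker trick (all file names here are made up for the example): the .bin starts with a small shell script, then a marker line, then the raw bytes of a tar archive holding the three files; the script uses the marker to find where its own text ends and the payload begins:
#!/bin/sh
# installer.bin -- everything after the __PAYLOAD__ line is a tar archive
# containing the .zip, the .tar.gz and install.sh (names are illustrative)
PAYLOAD_START=$(awk '/^__PAYLOAD__$/ { print NR + 1; exit }' "$0")
tail -n +"$PAYLOAD_START" "$0" | tar -xf -
unzip payload.zip                  # extract the .zip
cp payload.tar.gz /opt/target/     # copy the tar.gz to a specific location
exit 0
__PAYLOAD__
The Jenkins job would then build the .bin by concatenating the script with the payload, e.g. cat installer.sh payload.tar > installer.bin.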

Add multiple files to distributed cache in HIVE

I currently have an issue adding a folder's contents to Hive's distributed cache. I can successfully add multiple files to the distributed cache in Hive using:
ADD FILE /folder/file1.ext;
ADD FILE /folder/file2.ext;
ADD FILE /folder/file3.ext;
etc.
I also see that there is an ADD FILES (plural) option, which to my mind means you could specify a directory like ADD FILES /folder/; and everything in the folder would get included (this works with the Hadoop Streaming -files option). But this does not work with Hive; right now I have to explicitly add each file.
Am I doing this wrong? Is there a way to add a whole folder's contents to the distributed cache?
P.S. I tried wildcards (ADD FILE /folder/* and ADD FILES /folder/*) but that fails too.
Edit:
As of Hive 0.11 this is now supported, so:
ADD FILE /folder
now works.
What I am doing is passing the folder location to the Hive script as a parameter, like so:
$ hive -f my-query.hql -hiveconf folder=/folder
and in the my-query.hql file:
ADD FILE ${hiveconf:folder}
Nice and tidy now!
ADD FILE doesn't support directories, but as a workaround you can zip the files and then add the archive to the distributed cache (ADD ARCHIVE my.zip). When the job is running, the content of the archive will be unpacked into the local job directory on the slave nodes (see the mapred.job.classpath.archives property).
If the number of files you want to pass is relatively small and you don't want to deal with archives, you can also write a small script that prepares the ADD FILE commands for all the files in a given directory, e.g.:
#!/bin/bash
# list.sh
if [ ! "$1" ]; then
    echo "Directory is missing!"
    exit 1
fi
ls -d "$1"/* | while read f; do echo "ADD FILE $f;"; done
Then invoke it from the Hive shell and execute the generated output:
!/home/user/list.sh /path/to/files
Well, in my case, I had to move a folder with child folders and files in it.
I used ADD ARCHIVE xxx.gz, which added the file but did not explode (unzip) it on the slave machines.
Instead, ADD FILE <folder_name_without_trailing_slash> actually copies the whole folder recursively to the slaves.
Courtesy: the comments helped with debugging.
Hope this helps!
