I'm quite new to Jenkins so apologies if the question is not detailed enough but I swear I've done my own searching first.
I have a pipeline script that needs to process files that have been pulled from an SCM (Git) in a previous step.
One of the parameters passed to the pipeline is a folder where all these files reside. There may be subfolders contained in this folder and I need to process those as well.
So, for example, I may pass a parameter ./my-folder to the pipeline and my-folder may contain the following:
./my-folder/file1.json
./my-folder/file2.json
./my-folder/subfolder/file3.json
The my-folder directory will be part of the repository cloned during the build phase.
While I was developing my Groovy script locally I was doing something similar to this:
def f = new File(folder)
but this doesn't work in Jenkins given the code is running on the master while the folder is on a different node.
After extensive research I now know that there are two ways to read files in Jenkins:
Use readFile. This would be OK, but I haven't found an easy way to scan an entire folder and its subfolders to load all the files.
Use FilePath. This would be my preferred way since it's more OO, but I haven't found a way to create an instance of this class. All the approaches I've seen while searching the internet refer to the build variable which, I'm not entirely sure why, is not defined in the script. In fact I'm getting groovy.lang.MissingPropertyException: No such property: build for class: WorkflowScript
I hope the question makes sense otherwise I'd be happy to add more details.
Thanks,
Nico
I've managed to scan the content of a folder using the following approach:
sh "find ${base-folder} -name *.* > files.txt"
def files = readFile "files.txt"
and then loop through the lines in files.txt to open each file.
The problem is that this works only for text files; I'm still unable to open a binary file (e.g. a zip file) using readFile.
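One workaround, if the per-file processing can itself be expressed as shell commands, is to keep all of it on the agent inside a single sh step, so readFile and its text-only limitation are not involved at all. A rough sketch, assuming the folder arrives as a pipeline parameter named baseFolder (adjust to your real parameter name):

sh """
    # Runs on the agent where the workspace lives, so binary files are fine here.
    find '${params.baseFolder}' -type f | while IFS= read -r f; do
        echo "processing \$f"    # replace with the real per-file commands, e.g. unzip -l "\$f"
    done
"""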
I have a /.cust_dev_cmds/ directory on my MBP machine that is part of a parent sysadmin-work directory that uses a git repo here. I would like to be able to:
Not have to use a for loop in my .bash_profile to source all the *.sh files.
Add the directory to PATH with an export line in the .bash_profile instead.
# from my .bash_profile
export PATH="/Users/<my-usr-name>/Public/sharable-dev-scripts:$PATH"
The directory does show up in echo $PATH, but when I try to invoke one of the functions from the scripts I have created (functions that worked when sourced directly in the .bash_profile in a loop, as in point #1 above), like this:
# create a directory with a builtin command
mkdir test-dir
# use one of my custom ones to create a simple readme.md within that directory:
mkr !!:1
# I am getting >>> mkr: command not found
Use whatever type of link is appropriate so I don't end up with a duplicated directory structure on the machine.
Any good explanations to read up on here without using $10 words would be great.
Define a means to test that the link works and works through PATH. It would also be nice if something like declare -F were available to confirm that the scripts within the directory are in fact becoming callable functions in the shell.
Is there a command anyone knows of to do this?
Step this up a notch for a shared network directory. I have created a shared directory through Apple > System Preferences > Sharing, and turned on the ability to share this directory in the Public folder.
Is there a tutorial that can outline this with something like VirtualBox and an Ubuntu guest that is accessing the commands from the MBP shared directory?
I have already achieved point #1, so the question really begins with #2 (I mention it only so no one suggests it). I have read a bit on links, but the way most of the articles I come across describe them is difficult to wrap my head around, especially when it comes to adding this functionality to PATH. I believe the answer may revolve around how links are followed, but it may be better to back up and punt: dig back into linking first, then export my directory appropriately without a link, and eventually get the proper resolution to this situation.
The last thought on links before I try a few hacks on my own: do I need to only add a link to the Public directory and somehow place a flag to look at all the directories within /Public, or is it better to drill all the way down to /Public/shared-directory/.cust_dev_cmds? Any direction would be greatly appreciated. My goal is to be able to have a few custom command directories for various tasks, and eventually have them available across networks/instances.
When you want all the functions that you wrote in files in /.cust_dev_cmds/, the normal way is to source all the files.
When you want to avoid a loop, you can use
utildir="$HOME/.cust_dev_cmds"   # I think the path is relative to your home
source <(cat "${utildir}"/*)
When you want the functions to be found via PATH, you should make a separate (executable) file for each function.
Before:
# cat talk
ask() { echo "How are you?"; }
answer() { echo "Fine, thank you"; }
After:
# cat ask
echo "How are you?"
# cat answer
echo "Fine, thank you"
When you want all users to use the same set of functions, consider a master script that sources all the scripts (the master file can use user-dependent settings like HOME or VERSION):
# cat /Public/shared-directory/setup_functions
utildir="$HOME/.cust_dev_cmds"   # I think the path is relative to your home
source <(cat "${utildir}"/*)
source some_other_file
Now each user only needs to source one file.
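For example, each user's ~/.bash_profile would then contain just:

source /Public/shared-directory/setup_functions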
We're sharing SYMLINKD files on our git project. It almost works, except git modifies our SYMLINKD files to SYMLINK files when pulled on another machine.
To be clear, on the original machine, symlink is created using the command:
mklink /D Annotations ..\..\submodules\Annotations\Assets
On the original machine, the dir cmd displays:
25/04/2018 09:52 <SYMLINKD> Annotations [..\..\submodules\Annotations\Assets]
After cloning, on the receiving machine, we get
27/04/2018 10:52 <SYMLINK> Annotations [..\..\submodules\Annotations\Assets]
As you might guess, a file-type symlink pointing at a directory [..\..\submodules\Annotations\Assets] does not work correctly.
To fix this problem we either need to:
Prevent git from modifying our symlink types.
Fix our symlinks with batch script triggered on a githook
We're going with 2, since we do not want to require all users to use a modified version of git.
My limited knowledge of batch scripting is impeding me. So far, I have looked into simply modifying the attrib of the file, using the info here:
How to get attributes of a file using batch file and https://superuser.com/questions/653951/how-to-remove-read-only-attribute-recursively-on-windows-7.
Can anyone suggest what attrib commands I need to modify the symlink?
Alternatively, I realise I can delete and recreate the symlink, but how do I get the target directory for the existing symlink short of using the dir command and parsing the path from the output?
I think it's https://github.com/git-for-windows/git/issues/1646.
To be more clear: your question appears to be a manifestation of the XY problem. The Git instance used to clone/fetch the project appears to handle symbolic links to directories incorrectly, creating symbolic links pointing to files instead. So it appears to be a bug in GfW, and instead of digging into it you've invented a workaround and are asking how to make it work.
So I'd rather try to help the GfW maintainers and whoever reported #1646 to fix the problem. If you need a stop-gap solution, I'd say the proper way would be to go another route and script several calls to git ls-tree to figure out what the directory symlinks are (they'd have a special set of permission bits; you may start here).
So you would traverse all the tree objects of the HEAD commit, recursively, figure out which symlinks point at directories, and then fix up the matching entries in the work tree by deleting them and recreating them with mklink /D or whatever creates the correct sort of symlink.
Unfortunately, I'm afraid trying to script this with the lame scripting facilities of cmd.exe would be an exercise in futility. I'd take a more "real" programming language (PowerShell, for example; and since you're probably a Windows shop, even .NET would be OK).
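For what it's worth, here is a rough sketch of that approach in Git Bash rather than cmd.exe. It assumes it is run from the repository root, that the affected paths contain no spaces, and that mklink is allowed to create symlinks (elevated prompt or Developer Mode):

#!/bin/bash
# List every symlink entry recorded in HEAD (mode 120000); for each one whose
# stored target is a directory, replace the checked-out file-type link with a
# proper directory symlink created by cmd.exe's mklink /D.
git ls-tree -r HEAD | awk '$1 == "120000" { print $4 }' | while IFS= read -r link; do
    target=$(git show "HEAD:$link")    # the stored target, relative to the link's directory
    if [ -d "$(dirname "$link")/$target" ]; then
        rm -f "$link"
        (cd "$(dirname "$link")" &&
         cmd //c mklink /D "$(basename "$link")" "${target//\//\\}")
    fi
done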
We have the following requirement. Can anyone please help me write a bat script? If you already have a script for this requirement, it would be very much appreciated.
1. An input text file will be uploaded manually to the server path, e.g. E:\usr\sap\python\input.txt
2. An SAP job will create output.txt in the same server path, e.g. E:\usr\sap\python\output.txt
3. Once output.txt is generated, it should be copied to an archive folder, e.g. E:\usr\sap\archive\output.txt
4. Once it is moved to the archive folder with a time stamp, the script should delete all the files in the path E:\usr\sap\python
How about this. Rather than give you the direct programming, let me guide you with a basic flow. A very simple script can do this....
Check if the file exists in the output folder.....
a. If it exists....delete the file you want
b. If it is not there.....restart the script
This is better done with something like Visual Basic and may not be entirely possible with a batch script.
I have a project, where I'm forced to use ftp as a means of deploying the files to the live server.
I'm developing on linux, so I hacked together a bash script that makes a backup of the ftp server's contents,
deletes all the files on the ftp, and uploads all the fresh files from the mercurial repository.
(while also taking care of user-uploaded files and folders, making post-deploy changes, etc.)
It's working well, but the project is starting to get big enough to make the deployment process too long.
I'd like to modify the script to look up which files have changed, and only deploy the modified files. (the backup is fine atm as it is)
I'm using mercurial as a VCS, so my idea is to somehow request the changed files between two revisions from it, iterate over the changed files,
and upload each modified file, and delete each removed file.
I can use hg log -vr rev1:rev2, and from the output, I can carve out the changed files with grep/sed/etc.
Two problems:
I have heard the horror stories that parsing the output of ls leads to insanity, so my guess is that the same applies here: if I try to parse the output of hg log, the values will undergo word-splitting and all kinds of transformations.
hg log doesn't tell me whether a file was modified, added, or deleted. Differentiating between modified and deleted files is the minimum I need.
So, what would be the correct way to do this? I'm using yafc as an ftp client, in case it's needed, but willing to switch.
You could use a custom style that does the parsing for you.
hg log --rev rev1:rev2 --style mystyle
Then pipe it to sort -u to get a unique list of files. The file "mystyle" would look like this:
changeset = '{file_mods}{file_adds}\n'
file_mod = '{file_mod}\n'
file_add = '{file_add}\n'
The file_mods and file_adds templates list the files modified or added in each changeset. There are similar file_dels and file_del templates for deleted files.
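Putting it together, the invocation would then be something like this (assuming mystyle is saved in the current directory):

hg log --rev rev1:rev2 --style ./mystyle | sort -u > changed_files.txt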
Alternatively, you could use hg status -ma --rev rev1-1:rev2, which adds an M or an A before modified/added files. You need to pass a different revision range, one less than rev1, since the status is computed against that baseline. Removed files are similar: you need the -r flag, and an R is added before each removed file.
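A rough sketch of how a deployment loop could consume that output, assuming numeric revision numbers and leaving the actual FTP commands as placeholders for yafc or whatever client you settle on:

#!/bin/bash
# Usage: ./deploy.sh REV1 REV2   (REV1-1 is used as the baseline)
rev1=$1
rev2=$2

hg status --rev "$((rev1 - 1)):$rev2" -mar | while IFS= read -r line; do
    state=${line%% *}    # first column: M, A or R
    file=${line#* }      # the path, which may contain spaces
    case "$state" in
        M|A) echo "upload: $file" ;;    # replace with your ftp upload command
        R)   echo "delete: $file" ;;    # replace with your ftp delete command
    esac
done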
Right now I'm using a few scripts to generate files that I'm including as resources in Xcode. The thing is, I'm running the script, then deleting the files from the project, then adding them back into the project. There must be a way to automate this last step, so that the script can generate the files and automatically add them into the Xcode project for me.
I'm using bash but any language examples would help.
Thanks,
Andrew
I had a similar need as Andrew. I needed to be able to include resources from a script without knowing those resources ahead of time. Here's the solution I came up with:
Add a new Run Script build phase after “Copy Bundle Resources” that contains the following command:
find -L ${SRCROOT}/SomeDerivedResources \
-type f -not -name ".*" \
-not -name "`basename ${INFOPLIST_FILE}`" \
| xargs -t -I {} \
cp {} ${CONFIGURATION_BUILD_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/
Looks scary, but let’s break it down:
find -L ${SRCROOT}/SomeDerivedResources
This crawls the directory SomeDerivedResources in our source root (-L tells it to follow symbolic links)
-type f
Only include regular files
-not -name ".*"
Ignore files starting with a dot
-not -name "`basename ${INFOPLIST_FILE}`"
In my case, my Info plists live in my SomeDerivedResources directory so we need to exclude that file from being copied to our product
| xargs -t -I {}
Pipe the results of find into xargs with -t (echo resulting commands to stderr so they show up in our build log), -I (run the command once for each input file) and use {} as our argument placeholder
cp {} ${CONFIGURATION_BUILD_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/
Lastly, copy each found file (denoted by {}) to our product’s resource directory.
I realized when typing this that using an rsync setup instead of cp could prevent us from copying resources each time you build. If your resources are very large it might be worth looking in to.
(Also, a folder reference wouldn't work for my needs for a few reasons. One, my icons are in my DerivedResources directory, and having them in a subdirectory in the bundle seems not to work. Also, I ideally wanted to be able to use [UIImage imageNamed:@"MyAwesomeHappyImage.png"] and -pathForResource:ofType: (and some of my files are nested further inside my DerivedResources directory). If your needs don't contain those constraints, I highly suggest you go the folder reference route.)
This can be done by adding a new build phase to your application.
In your Xcode project browser, find the target for your application, and expand it to show all of the build phases.
Add a new "run script" build phase to your target. The easiest way is to right-click on the target and choose "Add/New Build Phase/New Run Script Build Phase"
Adding the new build phase should bring up an inspector window. In this window, you can enter the entire shell script, or simply a command line to run the script.
Here's the gold: At the bottom of the inspector window you can specify input files and output files. Specifying input files sets up dependencies automatically (the shell script will only be executed if some of the input files have been modified). Specifying output files automatically propagates the dependencies to those files. When your shell script is run, Xcode knows that it needs to deal with those files that the shell script has modified.
Be sure to drag your new build phase up to the top of the list of phases as shown in the screenshot below. The order will be important if you need those resource files to be included in the bundle.
Save, build, commit to the repository, ask for a raise, get some fresh air and have a nice day! :)
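As a concrete illustration (the script name and generated file below are hypothetical, not from the question), the phase body might be a single line, with the same paths repeated in the Input Files / Output Files lists of the inspector:

# Run Script phase body (sketch)
"${SRCROOT}/Scripts/generate_resources.sh" "${DERIVED_FILE_DIR}/Generated.strings"

# Input Files  (the phase reruns only when these change):
#   $(SRCROOT)/Scripts/generate_resources.sh
# Output Files (later phases pick up the dependency on these):
#   $(DERIVED_FILE_DIR)/Generated.strings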
For those with a large number of files, to avoid having to recopy (or recheck) each file, as suggested by @Ben Cochran (thanks a lot for the great script), this is how to do it with rsync:
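A sketch of what that phase could look like, adapted from the find/cp phase above (same placeholder directory names; rsync -av --delete copies only files that actually changed and removes stale ones):

rsync -av --delete \
    --exclude=".*" \
    --exclude="`basename ${INFOPLIST_FILE}`" \
    "${SRCROOT}/SomeDerivedResources/" \
    "${CONFIGURATION_BUILD_DIR}/${UNLOCALIZED_RESOURCES_FOLDER_PATH}/"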
Basically, the files just need to be copied into the main bundle
In that case, just add a folder reference to the project: create a folder in your project folder, then drag it into your project's "Resources" group (in the "Files & Groups" list), and in the sheet that appears select the "Create Folder References for any added Folder" radio button. Xcode will then copy the folder and all of its contents into the target bundle at build time.
Just an additional note: if you use this method to add image subfolders, you'll have to prefix the image name with the subfolder name to use [UIImage imageNamed:]. For example, if you have an image named "Rendezvous.png" in a subfolder named "MyImages":
// this won't work
UIImage *image = [UIImage imageNamed:@"Rendezvous"];
if (image) {
    NSLog(@"Found Rendezvous!");
} else {
    NSLog(@"Didn't find Rendezvous.");
}

// but this will!
image = [UIImage imageNamed:@"MyImages/Rendezvous"];
if (image) {
    NSLog(@"Found MyImages/Rendezvous!");
} else {
    NSLog(@"Didn't find MyImages/Rendezvous.");
}
If you already have the files somewhere on your system, relative to your source root, you can always add a "Copy Files" phase. This will allow you to specify a directory where your resources should be copied from.
You can combine this with the Build Script phase answer provided to you already. For instance, run a script to check out your assets from Subversion into a subdirectory of your project, and then follow that up with a Copy Files phase that copies from "$(SRCROOT)/Assets".
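For instance, the Run Script phase could be as small as this (the repository URL is purely illustrative), with a Copy Files phase configured afterwards to copy from "$(SRCROOT)/Assets" into the Resources destination:

# Run Script phase: fetch or update the assets before the Copy Files phase runs
svn checkout "https://svn.example.com/project/assets" "${SRCROOT}/Assets"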
I know it's a bit late, but I just came across this article explaining how to do something that sounds like what you're looking for.
I found myself in a similar situation using Ionic Capacitor. What I was expecting was to have to include the files in the "Copy Bundle Resources" build phase. What I found is that Ionic already packs in some inclusions for you, and if you slip your files in alongside these folders they get included as well.
Do you see the App folder inclusion? It is our entry point.
To include files in it, I add a script that does something like this:
cp -Rf ./includes/yourfolder/ ./ios/App/App/
I managed to solve the issue
"Code object is not signed at all"
that can be encountered during build upload to iTunes Connect in this way:
I did not include the script in the Copy Bundle Resources phase.
So the script (in this case a Python file) is executed during the build (it does what it has to do), but it is not included in the app bundle.
How to do it?
Open Build Phases, go to the Copy Bundle Resources section, select the file and remove it with (-).