How to iterate through different subdirectories using a for loop

I am trying to implement a Python linter using pylint. I get the score of each Python file along with suggestions for improving it, but I also want the GitHub Actions job to terminate if the pylint score is below 7.0; currently the job does not fail. I have found a way to fail the build, but it only works for one directory level: if a subdirectory contains a Python file, that file is not linted.
for file in */*.py; do pylint --disable=E0401 "$file" --fail-under=7.0; done
This is the for loop I have used, but if there is a nested directory containing another Python file, I have to write a second for loop to lint it, which would look like this:
for file in */*/*.py; do pylint --disable=E0401 "$file" --fail-under=7.0; done
Is there a way for the loop to lint all the files, no matter how deeply they are nested? If a developer adds a new directory, the current approach breaks. I have tried the find command, but it does not fail the GitHub Actions workflow when the pylint score of a file is less than 7.0.

The snippet below works fine and finds .py files even in subdirectories. This is actually working for me:
- name: Checks if pylint score is above 7.0
  run: |
    for file in $(find . -name '*.py')
    do
      pylint --disable=E0401,W0611 "$file" --fail-under=7.0;
    done
For a detailed explanation refer to: Lint python files in subdirectories using Github Workflow
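If you want the whole job to fail whenever any file scores below the threshold, a minimal sketch along these lines might work; the status accumulator and the null-delimited find are my additions, not part of the answer above, and it assumes pylint 2.5+ (for --fail-under) and bash:

#!/bin/bash
# Sketch: lint every .py file recursively and fail the job if any file
# scores below the threshold, not just the last one checked in the loop.
status=0
while IFS= read -r -d '' file; do
  pylint --disable=E0401,W0611 --fail-under=7.0 "$file" || status=1
done < <(find . -name '*.py' -print0)
exit $status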

Related

Git Pre-Commit Hook: Unable to Run `dartfmt` (command not found) (Windows)

Ideally, I would like dartfmt to format my code on every commit, and I think git hooks are ideal for this. So far, I've tried the code found in this link, but with no success, even though it appears on multiple other websites; maybe it's outdated.
In the end, I think nothing much more complicated than this should work in most cases (inside the .git/hooks/pre-commit file):
#!/bin/bash
dartfmt -w . # or maybe `flutter format .` for Flutter
The errors I get are:
For dartfmt -w .: dartfmt: command not found
For flutter format .: find: ‘> bin [’: No such file or directory
Both of those commands do work if placed directly in the terminal.
To make dartfmt work, try running which dartfmt manually to get the path to the executable, and then use that absolute path when calling it in the script.
If which isn't able to find it, and assuming you know the complete path to the directory where dartfmt is located, try adding that directory to PATH in the script:
#!/bin/bash
PATH="/path/to/dart-sdk/bin:$PATH"
export PATH
Also, I'd suggest taking a moment to double-check what git will use as the working directory when it calls these hook scripts. Using . may behave unexpectedly if the CWD isn't what you expect. See this post.
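Putting both suggestions together, a pre-commit hook might look roughly like this sketch; the SDK path is a placeholder, and the cd to the repository root is an assumption about where you want . to resolve:

#!/bin/bash
# Sketch of .git/hooks/pre-commit; adjust the SDK path for your machine.
PATH="/path/to/dart-sdk/bin:$PATH"
export PATH
# Run from the repository root so `.` covers the whole project.
cd "$(git rev-parse --show-toplevel)" || exit 1
dartfmt -w .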
To format your Dart code regularly, you can follow one of the two approaches below:
Preferred way:
In IntelliJ IDEA, go to Settings -> Languages & Frameworks -> Flutter and select the Format code on save option.
This will format your code every few seconds. This is preferred because you can customize your personal formatting settings, such as the maximum line length.
Alternatively
From the official website: run dartfmt -w bin lib to format your code from the command line.
Add the dartfmt location to PATH, like this:
export PATH="/xxx/flutter/bin/cache/dart-sdk/bin:$PATH"

Sourcing source files using a bash script

Usually I source all the macros I have for the jobs run on a remote machine using this command:
macros=$\my_directory
But I see someone uses a different way to get all the macros for submitting the jobs in a remote machine. He uses this command:
macros=$(dirname $(readlink -f $BASH_SOURCE))
Now I want to know what advantage dirname has over giving the specific macro location. It would be great if you could explain how sourcing the macros with dirname works.
By using dirname you get the directory where the script is located, so it is easy to source other files that sit next to your script without worrying about specifying the correct path each time the script bundle is relocated.
For instance, if your script contains source $macros/some_script.sh, it will not break when the bundle is moved to /usr/local/bin/ or /bin/ or anywhere else.
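As a minimal sketch of that idea (assuming GNU readlink -f; some_script.sh is the example file from above):

#!/bin/bash
# Resolve the directory this script lives in, following symlinks
# (readlink -f is the GNU form; macOS may need a different approach).
macros=$(dirname "$(readlink -f "$BASH_SOURCE")")
# Source a sibling file relative to the script, not the caller's CWD.
source "$macros/some_script.sh"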
Regarding $BASH_SOURCE see: https://stackoverflow.com/a/35006505/2146346

Access Jenkins workspace files in pipeline script

I'm quite new to Jenkins so apologies if the question is not detailed enough but I swear I've done my own searching first.
I have a pipeline script that needs to process files that have been pulled from a SCM (git) in a previous step.
One of the parameters passed to the pipeline is a folder where all these files reside. There may be subfolders contained in this folder and I need to process those as well.
So, for example, I may pass a parameter ./my-folder to the pipeline and my-folder may contain the following:
./my-folder/file1.json
./my-folder/file2.json
./my-folder/subfolder/file3.json
The my-folder directory will be part of the repository cloned during the build phase.
While I was developing my Groovy script locally I was doing something similar to this:
def f = new File(folder)
but this doesn't work in Jenkins given the code is running on the master while the folder is on a different node.
After extensive research I now know that there are two ways to read files in Jenkins.
Use readFile. This would be OK, but I haven't found an easy way to scan an entire folder and its subfolders to load all the files.
Use FilePath. This would be my preferred way since it's more OO, but I haven't found a way to create an instance of this class. All the approaches I've seen while searching on the internet refer to the build variable which, I'm not entirely sure why, is not defined in the script. In fact I'm getting groovy.lang.MissingPropertyException: No such property: build for class: WorkflowScript
I hope the question makes sense otherwise I'd be happy to add more details.
Thanks,
Nico
I've managed to scan the content of a folder using the following approach:
sh "find ${base-folder} -name *.* > files.txt"
def files = readFile "files.txt"
and then loop through the lines in files.txt to open each file.
The problem is that this works only for text files. I'm still unable to open a binary file (e.g. a zip file) using readFile.
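If you stay with the shell-plus-readFile approach, a slightly more defensive version of the find step might look like this sketch; ./my-folder is the example folder from the question, and restricting the search to regular files is my addition:

# Shell part of the `sh` step: list only regular files (any extension) so
# directories and odd names don't end up in files.txt.
find ./my-folder -type f > files.txt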

Shell build script with ember

I am attempting to write a build script to be used with Facebook watchman and my ember-cli application.
My build script is:
#!/bin/sh
cd ..
ember build
cd ..
cp ./ember-app/dist/index.html ./slim-app/app/templates/app.php
cp -r ./ember-app/dist/assets/ ./slim-app/public/assets/
And my watchman command is:
watchman -- trigger $PWD/ember-app/app 'ember-build' '**' -- sh $PWD/build.sh
Watchman triggers and finds my script fine, but when I look at the log I get an error saying ember cannot be found. I'm not really sure why, because when I run sh build.sh everything works fine.
Is there any way I could do something like which ember to determine the path to ember and use it directly? I know I can just do which ember and copy and paste that path into the script but I really don't want to do that because I want the build script to work no matter which version of node/nvm I am using.
I'm also open to suggestions to a better way of doing this.
Sounds like a PATH problem. When watchman is first started it captures your PATH environment variable, except on OS X at the moment, due to a bug in our launchd integration.
https://github.com/facebook/watchman/issues/68 has some suggestions for an awkward workaround.
Another possibility is to simply put a line in your build script to set the PATH:
# Add the path to ember in here somewhere
PATH=/usr/local/bin:$PATH
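If you'd rather not hard-code the path at all, a hedged alternative is to load nvm inside the script and resolve ember at run time; the nvm location below is its default install path and may differ on your machine:

#!/bin/bash
# Sketch: locate ember at run time instead of hard-coding a path.
# Loading nvm first keeps this working across node versions; adjust or
# remove the nvm lines if you don't use nvm.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
EMBER="$(command -v ember)" || { echo "ember not found on PATH" >&2; exit 1; }
"$EMBER" build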

Xcode run script, is it possible to run only if a file changes?

I want to execute the script only when a certain file changes. Is this possible inside a run script phase? I don't see anything in the docs.
Thank you so much.
I would recommend reading http://indiestack.com/2014/12/speeding-up-custom-script-phases/ which is a great example of using this little-known Build Phases gem.
In the Input Files of the Run Script phase, add the file to depend on. For example:
$(TARGET_BUILD_DIR)/$(PRODUCT_NAME).framework
In Output Files, add the file which the script will generate:
./SomeFile.txt
In script:
touch SomeFile.txt
Now the script will only run when SomeFile.txt is missing or older than the input file.
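A minimal sketch of what the Run Script body could look like under that setup; SCRIPT_INPUT_FILE_0 and SCRIPT_OUTPUT_FILE_0 are the environment variables Xcode provides for the declared Input/Output Files entries, and the real work is left as a placeholder:

#!/bin/bash
# Xcode runs this phase only when the declared output is missing or older
# than the declared input. Do the real work, then refresh the output so
# the phase is skipped until the input changes again.
set -e
# ... generate whatever you need from "$SCRIPT_INPUT_FILE_0" here ...
touch "$SCRIPT_OUTPUT_FILE_0"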
