Ansible script module 'creates:' not generating file - ansible

I am running the script module in a role on a windows machine.
I am attempting to use the "args: creates:" parameter. The script runs, but the file that 'creates' is supposed to generate never gets created, and when I run the playbook again the script runs a second time.
I've tried changing the file name and directory, and I've tried using an environment variable to designate HOME as the root directory, but the file never gets generated.
---
- name: run script
  script: ./files/script.ps1 PARAMETERS
  args:
    creates: script_has_been_run.txt

Q: "The script runs but the file that 'creates' is supposed to generate never gets created."
A: It's the responsibility of the script to create the file. Quoting from the script module documentation:
creates: A filename on the remote node; when it already exists, this step will not be run.
The purpose of the creates parameter is to make the task idempotent, i.e. run the script only if the file does not exist. Once the file has been created by the script, the task will be skipped on subsequent runs.
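For example, a minimal sketch (the marker path C:\Temp\script_has_been_run.txt is illustrative, not from the original post): make the last step of files/script.ps1 create the marker file itself, e.g.

New-Item -Path 'C:\Temp\script_has_been_run.txt' -ItemType File -Force | Out-Null

and point creates: at the same absolute path, so the second playbook run skips the task:

- name: run script
  script: ./files/script.ps1 PARAMETERS
  args:
    creates: C:\Temp\script_has_been_run.txt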

Related

Airflow Bash Operator: Not able to see the output

I am a newbie to Airflow and am trying to create a simple task that executes a bash file (whose job is just to create a directory). I have given the full path of the bash file to be executed (with a space at the end) in bash_command. However, upon triggering the DAG from the UI, I see no errors in the log, but also no folder created with the name specified in the bash file.
Can someone please help me fix the issue?
When the BashOperator executes, Airflow creates a temporary directory as the working directory and executes the bash command there. When the execution finishes, the temporary directory is deleted.
To keep the directory created by the bash command, you can either
specify an absolute path outside of the working directory, or
change your working directory to a place outside of the temporary directory (see the second sketch below).
I am creating a test directory in the Airflow home directory.
p = BashOperator(
    task_id='create_dir',
    bash_command='pwd; mkdir $AIRFLOW_HOME/test; ls -al',
)
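And, as a sketch of the second option, you could change the working directory explicitly and create a relative path from there (the /tmp/airflow_test location is illustrative):

p2 = BashOperator(
    task_id='create_dir_in_fixed_cwd',
    bash_command='cd /tmp && mkdir -p airflow_test && ls -al',
)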

Append a parameter to a command in a file and run the appended command

I have the following command in a file called $stat_val_result_command.
I want to add the -Xms1g parameter at the end of the file so that it should look like this:
<my command in the file> -Xms1g
However, I want to run this command after appending to it. I am running this in a workflow system called Nextflow. I tried many things, including the following, but it does not work. Check the script section, which runs in Bash by default:
process statisticalValidation {

    input:
    file stat_val_result_command from validation_results_command.flatten()

    output:
    file "*_${params.ticket}_statistical_validation.txt" into validation_results

    script:
    """
    echo " -Xms1g" >> $stat_val_result_command && ```cat $stat_val_result_command```
    """
}
It's best to avoid appending to or otherwise manipulating input files localized in the workdir, as these can be (and by default are) symbolic links to the original files.
In your case, consider instead exporting the JAVA_TOOL_OPTIONS environment variable. This might or might not work for you, but it might give you some ideas if you have control over how the scripts are being generated:
export JAVA_TOOL_OPTIONS="-Xms1g"
bash "${stat_val_result_command}"
Also, it's generally better to avoid localizing and running scripts like this. It might be unavoidable, but usually there are better options. For example, third-party scripts, like your Bash script, could be handled more simply:
Grant the execute permission to these files and copy them into a folder named bin/ in the root directory of your project repository. Nextflow will automatically add this folder to the PATH environment variable, and the scripts will automatically be accessible in your pipeline without the need to specify an absolute path to invoke them.
This of course assumes you can control and parameterize the process that creates your Bash scripts.
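As a rough sketch of that alternative, assuming the generated command can instead live in an executable script saved as bin/statistical_validation.sh and that it accepts a hypothetical --ticket option (both names are made up for illustration):

process statisticalValidation {

    output:
    file "*_${params.ticket}_statistical_validation.txt" into validation_results

    script:
    """
    export JAVA_TOOL_OPTIONS="-Xms1g"
    statistical_validation.sh --ticket "${params.ticket}"
    """
}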

How can I access a file location inside a Docker container from a shell script running outside it?

I am writing a shell script to check whether a file exists and is non-zero in size. The script itself is basic, but the challenge is that I want to check files contained inside a Docker container. How can I access a file location inside the container from a script running outside it?
I am using an array which takes file locations as values, e.g.:
array=(/u01/FDT/FDT_Inbox/MAINFRAME_FILES/DC_NETWORK_CONFIG/sample.txt /u01/FDT/FDT_Inbox/MAINFRAME_FILES/DC_NETWORK_CONFIG/abc.txt)
and a for loop for every index i of the array to check if the file exists.
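A minimal sketch of one way to run such a check from outside the container, using docker exec to run the file test inside it (the container name my_container is an assumption, and test -s checks "exists and is non-empty"):

#!/bin/bash
container=my_container

array=(/u01/FDT/FDT_Inbox/MAINFRAME_FILES/DC_NETWORK_CONFIG/sample.txt
       /u01/FDT/FDT_Inbox/MAINFRAME_FILES/DC_NETWORK_CONFIG/abc.txt)

for f in "${array[@]}"; do
    # run the test inside the container rather than on the host filesystem
    if docker exec "$container" test -s "$f"; then
        echo "$f exists and is non-empty"
    else
        echo "$f is missing or empty"
    fi
done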

Unable to Pass option using getopts to oozie shell Action

I created a shell script and am passing arguments to it using getopts, like this:
sh my_code.sh -F"file_name"
where my_code.sh is my unix script name and file_name is the file I am passing to my script using getopts.
This works fine when I invoke the script from the command line.
I want to invoke the same script using Oozie, but I am not sure how to do it.
I tried passing the argument to the "exec" tag as well as the "file" tag in the XML.
When I tried passing the argument in the exec tag, it threw a Java NullPointerException.
exec TAG
<exec>my_code.sh -F file_name</exec>
file TAG
<file>$/user/oozie/my_code.sh#$my_code.sh -F file_name</file>
When I tried passing the argument in the file tag, I got the error "No such File or directory"; it was searching for file_name in the /yarn/hadoop directory.
Can anyone please suggest how I can achieve this using Oozie?
You need to create a lib/ folder as part of your workflow; Oozie will upload the script from there as part of its process. This directory should also be uploaded to the oozie.wf.application.path location.
The reason this is required is that Oozie will run on an arbitrary YARN node. Pretend you had a hundred-node cluster: you would otherwise have to ensure that every single server had the /user/oozie/my_code.sh file available (which of course is hard to track). When the file is placed on HDFS, every node can download it locally.
So if you put the script in the lib directory next to the workflow XML that you submit, you can reference the script by name directly rather than using the # syntax.
Then, you'll want to use the <argument> XML tags for the options, as described in the Shell action documentation:
https://oozie.apache.org/docs/4.3.1/DG_ShellActionExtension.html
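As a hedged sketch of what the shell action might then look like (the action name, property placeholders, and transitions below are illustrative, not from the original post):

<action name="run-my-code">
    <shell xmlns="uri:oozie:shell-action:0.2">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <exec>my_code.sh</exec>
        <argument>-F</argument>
        <argument>file_name</argument>
    </shell>
    <ok to="end"/>
    <error to="fail"/>
</action>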
I have created the lib/ folder and uploaded it to the oozie.wf.application.path location.
I am now able to pass files to my shell action.

What kind of components are expected to be saved in ansible tmp folder during execution of a task?

I am investigating an issue with Ansible and wonder whether a registered variable was saved in the Ansible tmp folder, because I also suspect that this temporary directory is being removed by the async_wrapper module during execution (the environment is based on Ansible 2.2, and there is a known issue with the async_wrapper module).
Therefore I would like to know what kind of items are expected to be saved in the Ansible tmp folder, such as .ansible/tmp/ansible-tmp-xxx..., during execution of a task. Then at least it would be possible to make some further estimations.
Use:
export ANSIBLE_KEEP_REMOTE_FILES=1
This will retain the files that Ansible copies to .ansible/tmp/ansible-tmp-xxx... and runs on the destination host.
Set the env variable before running the playbook with -vvv. This will output the paths used to store the scripts on the destination host.
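For example (the inventory and playbook names are placeholders):

ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook -i inventory site.yml -vvv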
After the playbook has completed, SSH onto the destination host and take a look at the files.
The files are most likely on a path like:
/home/user/.ansible/tmp/..../modulename
The easiest way to view/test them is to explode them and then execute them.
python /home/user/.ansible/tmp/..../modulename explode
This will create a subdirectory containing the module, arguments and ansible wrapper.
python /home/user/.ansible/tmp/..../modulename execute
This will run the exploded files.
You will be able to see from this exactly what is being saved where. It's also possible to edit the module and test to see what changes are made to the /tmp folder.
