run .sh script via Jenkins to execute aws command error - bash

My problem is that I'm trying to execute a shell script via Jenkins to copy files created by MSBuild to AWS S3.
When I add a new build step "Execute Shell" and set it to run the script with the command sh publishS3.sh, nothing happens and the files don't appear in the S3 bucket.
My Jenkins runs on a local Windows server.
When I execute the shell script by typing sh publishS3.sh in the Jenkins local directory, everything works: the files are copied successfully to the S3 bucket. But if I try to do it from Jenkins, nothing happens. My publishS3.sh script is:
#!/bin/bash
aws s3 cp Com.VistaDraft.Common.dll s3://download.vistadraft.com/MVP
I tried to check what output I get after execution by appending > output.txt to the command, but Jenkins generated an empty file. If I do the same locally, I get a message that the files were copied successfully to S3. I set the shell executable path in Jenkins to C:\Program Files\Git\git-bash.exe, and I'm using git-bash.exe locally too. Does anyone know where the problem is? Please suggest.

You could try adding -ex to the first line of the script so you can see what it's doing, which eases debugging:
#!/bin/bash -ex
# rest of script
Make sure the aws tool is in the PATH of the environment where Jenkins runs your script. It might help if you specify the full path to the command.
You could put which aws in your script to see what's going on.
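For example, a minimal debugging version of publishS3.sh could look like the sketch below; the absolute CLI path is an assumption for a typical AWS CLI v2 install on Windows, so substitute whatever which aws reports on your machine:
#!/bin/bash -ex
# Show which aws binary (if any) is visible to the Jenkins environment.
which aws || echo "aws not found in PATH: $PATH"
# Call the AWS CLI by absolute path (assumed location, adjust as needed).
"/c/Program Files/Amazon/AWSCLIV2/aws" s3 cp Com.VistaDraft.Common.dll s3://download.vistadraft.com/MVP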

Related

sshpass: No such file or directory

The command below works if I put it inside a script (test.sh) and execute it directly on the specific machine.
sshpass -p $HOST_PWD sftp testuser@host <<!
cd parent
mkdir test
bye
!
But when I try to run it in Jenkins (either the script below directly, or by invoking the test.sh file at the specific path) with "Execute shell script on remote host using ssh", it fails with
sshpass: Failed to run command: No such file or directory
I have installed sshpass, lftp and rsync on the remote machine.
Issue:
I have added export $HOST_PWD in the .bashrc of the specific machine as well as on Jenkins, but it is not finding it.
The script is placed on the specific machine; if I execute it directly on that machine, it works, even with $HOST_PWD. But it does not work if we invoke it from Jenkins, either through the script or directly via "Execute shell script on remote host using ssh".
Working with changes:
If I put the password in directly instead of $HOST_PWD, it works.
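A likely cause: the remote ssh step runs a non-interactive shell, which does not read ~/.bashrc, so HOST_PWD expands to nothing. The unquoted expansion then makes sshpass take sftp as the password and try to execute testuser@host as a command, which is exactly the "Failed to run command: No such file or directory" error. A minimal sketch of a workaround, assuming the export line in ~/.bashrc sits before any interactive-shell guard:
# Load the profile explicitly, since non-interactive shells skip it,
# then quote the variable so an empty value fails loudly instead of
# silently shifting the arguments.
source ~/.bashrc
sshpass -p "$HOST_PWD" sftp testuser@host <<!
cd parent
mkdir test
bye
!
A cleaner long-term fix would be to inject the password through a Jenkins credentials binding rather than a dotfile.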

How to echo Jenkins Workspace in Cygwin

I am trying to build a job from a Jenkins pipeline. My pipeline calls a shell script which lives on a Windows server and runs in a Cygwin terminal. The question is: how do I use the Jenkins workspace in my shell script in Cygwin? The commands I tried below are not working; can someone please advise? Thanks.
In Jenkins Pipeline:
sh "/home/test.sh $WORKSPACE"
In Cygwin:
#!/cygdrive/d/cygwin64/bin/bash --login
WORKSPACE=$1
echo "$WORKSPACE"
The output from the above command is
D:Jenkinsworkspacey_test123_feature_test
but the actual workspace is (multibranch pipeline)
Running on win01 in D:\Jenkins\workspace\y_test123_feature_test
I am not sure whether Cygwin needs a different command to retrieve the workspace; as you can see, the output above does not contain the backslashes between the path components.
The workspace can be accessed using the environment variable inside the pipeline.
In Jenkins Pipeline:
// Get workspace
// This will give you the workspace of the agent that is available in the current stage
def Workspace = env.WORKSPACE
// Escape the backslashes so they survive the sh step; note that
// replace() returns a new string, so the result must be assigned back
Workspace = Workspace.replace("\\", "\\\\")
println(Workspace)
sh "/home/test.sh ${Workspace}"
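Alternatively, the conversion can be done on the Cygwin side with cygpath, which maps a Windows path to a POSIX one. A sketch of test.sh under that assumption; note the pipeline must quote the argument (for example sh "/home/test.sh '${WORKSPACE}'") so the backslashes survive long enough for the script to see them:
#!/cygdrive/d/cygwin64/bin/bash --login
# Convert the Windows-style path (D:\Jenkins\...) passed by the pipeline
# into a Cygwin POSIX path (/cygdrive/d/Jenkins/...).
WORKSPACE=$(cygpath -u "$1")
echo "$WORKSPACE"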

In Azure pipeline linux commands getting truncated at the end of the script file

I am running an Azure DevOps pipeline to install and configure ELK. There is a shell script which executes all the commands to install ELK and configure it using curl commands. But the last 4-5 commands at the end of the file are not executed, and I can see the truncated script lines in the log.
2020-06-12T08:56:10.2856017Z > echo "Registering the azure reposi
2020-06-12T08:56:10.2856671Z > curl -X PUT -uadmin:"***" "https://10.XXX.X
2020-06-12T08:56:10.2856981Z > echo "Setting up snapshot backup poli
2020-06-12T08:56:10.2857523Z > curl -X PUT -uadmin:"***" https:
2020-06-12T08:56:10.2857807Z > echo "Finished configuring Kibana"
I have also swapped these scripts with other scripts above them which were executed successfully, but then the scripts which had been successful start getting truncated instead. I am not sure what I am doing wrong. Kindly help. Thanks in advance.
I still don't have the answer, but I was able to resolve this issue by moving all the curl commands into a bash script file and executing that.
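For reference, a sketch of that workaround; the script name, the $ES_URL endpoint, the repository name, and the $ADMIN_PWD credential are placeholders rather than values from the original pipeline:
#!/bin/bash
# configure-elk.sh: keep the curl calls in a file of their own so the
# pipeline step only executes this script instead of inlining commands.
set -euo pipefail
echo "Registering the azure repository"
curl -X PUT -u admin:"$ADMIN_PWD" "$ES_URL/_snapshot/azure_repo"
echo "Finished configuring Kibana"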

How to execute gcloud command in bash script from crontab -e

I am trying to execute some gcloud commands in a bash script from crontab. The script executes successfully from a command shell but not from the cron job.
I have tried with:
Setting the full path to gcloud, like:
/etc/bash_completion.d/gcloud
/home/Arturo/.config/gcloud
/usr/bin/gcloud
/usr/lib/google-cloud-sdk/bin/gcloud
Setting at the beginning of the script:
/bin/bash -l
Setting in the crontab:
51 21 30 5 6 CLOUDSDK_PYTHON=/usr/bin/python2.7;
/home/myuser/folder1/myscript.sh param1 param2 param3 -f >>
/home/myuser/folder1/mylog.txt
Setting inside the script:
export CLOUDSDK_PYTHON=/usr/bin/python2.7
Setting inside the script:
sudo ln -s /home/myuser/google-cloud-sdk/bin/gcloud /usr/bin/gcloud
Version Ubuntu 18.04.3 LTS
command to execute: gcloud config set project myproject
but nothing is working; maybe I am doing something wrong. I hope you can help me.
You need to set your user in your crontab for it to run the gcloud command. As is well explained in this other post, you need to modify your crontab to fetch the data from your Cloud SDK for the execution to occur properly - it doesn't seem that you have made this configuration.
Another option that I would recommend you try is using Cloud Scheduler to run your gcloud commands. This way, you can use gcloud for your cron jobs in a more integrated and easy way. You can find more information about this option here: Creating and configuring cron jobs
Let me know if the information helped you!
I found my error. The problem was only in the command "gcloud dns record-sets transaction start"; the other commands were executing successfully but logging nothing, which made me think they were not executing at all. This command creates a temp file, e.g. transaction.yaml, and that file could not be created in the default path for gcloud (snap/bin), but the log simply doesn't report anything! I had to specify the path and name for that file with the flag --transaction-file=mytransaction.yaml. Thanks for your support and ideas.
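For example, a hedged version of the failing command; the zone name and file location are placeholders:
# Write the transaction file to an explicitly writable location instead
# of relying on gcloud's default working directory (read-only under snap).
gcloud dns record-sets transaction start --zone=myzone \
    --transaction-file=/home/myuser/mytransaction.yaml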
I have run into the same issue before. I fixed it by forcing the profile to load in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
gcloud config set project myprojectecho
echo "Project set to myprojectecho."
I hope this can help others in the future with similar issues, as this also helped me when trying to set GKE nodes from 0-4 on a schedule.
Adding the line below to the shell script fixed my issue:
# Execute user profile
source /root/.bash_profile
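Equivalently, the crontab entry itself can request a login shell so the profile files are read before gcloud runs; a sketch reusing the schedule and paths from the question:
# -l makes bash a login shell, so /etc/profile and ~/.bash_profile are
# sourced before the script starts.
51 21 30 5 6 /bin/bash -lc 'CLOUDSDK_PYTHON=/usr/bin/python2.7 /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt 2>&1'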

unrecognized arguments when executing script via crontab

I have my crontab set up as follows (this is inside a docker container).
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/bin/bash
5 * * * * bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
The /usr/local/bin/process-logs is designed to expose some MongoDB logs using mtools to a simple web server.
The problematic part of the script is fairly simple. raw_name is archive_name without the file extension.
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
mlogvis --no-browser "${raw_name}"
If I manually run the command as specified in the crontab config above
bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
It all works as expected (this is the expected output from mlogvis)
...
copying /usr/local/lib/python3.5/dist-packages/mtools/data/index.html to /some/path/mongod.log-20190313-1552456862.html
...
When the script gets triggered via crontab, it throws the following error:
usage: mlogvis [-h] [--version] [--no-progressbar] [--no-browser] [--out OUT]
[--line-max LINE_MAX]
mlogvis: error: unrecognized arguments: mongod.log-20190313-1552460462
The mlogvis command that caused the above error (with actual values, not parameters):
mlogvis --no-browser "mongod.log-20190313-1552460462"
Again if I run this command myself it all works as expected.
mlogvis: http://blog.rueckstiess.com/mtools/mlogvis.html
I don't believe this to be an issue with the file not having the correct permissions or not existing, as mlogvis produces a different error in those conditions. I've also tested removing the '-' characters from the file name, thinking it might be trying to parse the parts as arguments, but it made no difference.
I know the cron execution environment isn't the same as that of the user I tested the script as. I've set the PATH to be the same as that user's, and when the container starts up I execute env >> /etc/environment so all the environment variables are properly set.
Does anyone know of a way to debug this, or has anyone encountered something similar? All other components of the script are functioning; only mlogvis, which is core to the purpose of this job, fails.
Summary of what I've tried as a fix:
Set the environment and PATH for cron execution to be the same as the user I tested the script as
Replaced '-' in the file name(s) to see if it was parsing the parts as arguments
Hardcoded a filename with full permissions to see if it was permissions-related
Manually ran the script -> this works
Manually ran the mlogvis command in isolation -> this works
Try loading /home/user/.bash_profile before executing the script and try again. I suspect that you have a missing PATH entry or some other environment variable that is not set.
source /home/user/.bash_profile
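One way to pin down which variable actually differs is to capture the environment cron gives the job and diff it against an interactive shell; a minimal debugging sketch:
# Temporary crontab entry: dump cron's environment once a minute.
* * * * * env | sort > /tmp/cron-env.txt
# Then compare it against your interactive shell on the same machine:
#   diff /tmp/cron-env.txt <(env | sort)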
Please post your complete script, because when executing via crontab you have to be sure your raw_name variable was properly created. As it seems to depend on archive_name, posting some more context can help us to help you.
In any case, if you are using bash, you can try something like:
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
# here you have to be sure that archive_name is correct
raw_name_2=${archive_name%%.*}
mlogvis --no-browser "${raw_name_2}"
It is not going to solve your issue, but it will probably take you closer to the right path.
