Cleaning up file based variables. ERROR: Job failed: exit code 1 - bash

I am using GitLab CI to automate build and release. In the last job, I want to upload the artifacts to a remote server using lftp.
$(pwd)/publish/ is the path to the artifacts generated in the previous job, and all variables are declared in GitLab under Settings --> CI/CD.
This is the job's YAML:
upload-job:
  stage: upload
  image: mwienk/docker-lftp:latest
  tags:
    - dotnet
  only:
    - master
  script:
    - lftp -e "open $HOST; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete $(pwd)/publish/ wwwroot/; bye"
Note that lftp transfers my files; however, I'm not sure all of them are transferred.
I added echo "All files Transfered." after the lftp command, but it never runs.
There are no errors or warnings in the pipeline log, but the job ends with the error from the title: Cleaning up file based variables. ERROR: Job failed: exit code 1.
I don't know what it refers to. Has anyone faced this error and found a solution?

Finally, I solved the problem by changing some of the lftp command's parameters.
The key to troubleshooting ERROR: Job failed: exit code 1 is to run commands with their verbose parameters so they return enough log output to let you pinpoint the problem. Another important point is knowing how to debug shell scripts, whether bash, PowerShell, or something else.
Also, running and testing commands directly in a shell is helpful.
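As a concrete illustration, here is a minimal sketch of that kind of tracing (set -x and set -e are plain bash options, nothing specific to this pipeline; the lftp line and the echo are the ones from the job above):
#!/bin/bash
set -x   # print every command before it runs, so the job log shows exactly where things stop
set -e   # fail fast on the first non-zero exit code instead of continuing silently
lftp -e "open $HOST; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete $(pwd)/publish/ wwwroot/; bye"
echo "All files Transfered."   # only reached if lftp exits with status 0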
The following links are helpful to troubleshoot command-line scripts:
How to debug a bash script?
5 Simple Steps On How To Debug a Bash Shell Script
Use the PowerShell Debugger to Troubleshoot Scripts
For lftp logging:
How to enable lftp protocol logging?
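For example, a small sketch of turning up lftp's own logging (the -d switch enables lftp's debug output; the debug level shown is only an example):
# -d turns on lftp's debug mode; "debug 3" inside -e sets a moderate protocol-logging level
lftp -d -e "debug 3; open $HOST; user $FTP_USERNAME $FTP_PASSWORD; mirror --reverse --verbose --delete $(pwd)/publish/ wwwroot/; bye"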

Related

gitlab CI/CD xxx.tmp/TRIGGER_PAYLOAD: No such file or directory

On my Windows computer, I use git bash to register and start gitlab-runner. gitlab-runner.exe is stored in the C:\Runner directory. I open a git-bash terminal and cd /c/Runner to register and run GitLab Runner.
If I use the CI/CD >> Schedules UI to start pipelines, everything works fine. But when I use a trigger command like the following, the pipeline job fails.
curl -X POST \
-F token=xxxxxxx \
-F ref=develop \
http://git.xxxxxxxxx/trigger/pipeline
The error message (the "No such file or directory" error from the title) points at "/c/Runner/C:/Runner/builds...", which seems to be a wrong path. Does anyone know how to fix it? Thank you very much.
BTW: for some reason, I have to use a bash terminal on Windows to start gitlab-runner.

In Azure pipeline linux commands getting truncated at the end of the script file

I am running an Azure DevOps pipeline to install and configure ELK. There is a shell script that executes all the commands to install ELK and configure it using curl. But the last 4-5 commands at the end of the file are not executed, and I can see the truncated commands in the log:
2020-06-12T08:56:10.2856017Z > echo "Registering the azure reposi
2020-06-12T08:56:10.2856671Z > curl -X PUT -uadmin:"***" "https://10.XXX.X
2020-06-12T08:56:10.2856981Z > echo "Setting up snapshot backup poli
2020-06-12T08:56:10.2857523Z > curl -X PUT -uadmin:"***" https:
2020-06-12T08:56:10.2857807Z > echo "Finished configuring Kibana"
I have also swapped these commands with other commands above them that executed successfully, but then the commands that had been successful start getting truncated instead. I am not sure what I am doing wrong. Kindly help. Thanks in advance.
I still don't have the answer, but I was able to resolve this issue by moving all the curl commands into a separate bash script file and executing it, as sketched below.
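For reference, a hedged sketch of that workaround (the file name configure-elk.sh is made up; the point is just that the tail-end commands live in their own file and run as a single command from the pipeline step):
#!/bin/bash
# configure-elk.sh (hypothetical name): move the trailing echo/curl configuration
# commands out of the inline pipeline script into this file, unchanged.
set -euo pipefail   # fail fast and flag unset variables instead of silently dropping commands
# ... the original curl -X PUT / echo lines go here ...
echo "Finished configuring Kibana"
The pipeline step then only needs to invoke it with a single command: bash configure-elk.sh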

How to execute gcloud command in bash script from crontab -e

I am trying to execute some gcloud commands in a bash script from crontab. The script executes successfully from a command shell but not from the cron job.
I have tried the following:
Setting the full path to gcloud, like:
/etc/bash_completion.d/gcloud
/home/Arturo/.config/gcloud
/usr/bin/gcloud
/usr/lib/google-cloud-sdk/bin/gcloud
Setting this at the beginning of the script:
/bin/bash -l
Setting in the crontab:
51 21 30 5 6 CLOUDSDK_PYTHON=/usr/bin/python2.7;
/home/myuser/folder1/myscript.sh param1 param2 param3 -f >>
/home/myuser/folder1/mylog.txt
Setting inside the script:
export CLOUDSDK_PYTHON=/usr/bin/python2.7
Setting inside the script:
sudo ln -s /home/myuser/google-cloud-sdk/bin/gcloud /usr/bin/gcloud
Version: Ubuntu 18.04.3 LTS
Command to execute: gcloud config set project myproject
But nothing is working; maybe I am doing something wrong. I hope you can help me.
You need to set your user in your crontab for it to run the gcloud command. As explained in this other post here, you need to modify your crontab so that it can find your Cloud SDK installation for the execution to work properly; it doesn't seem that you have made this configuration.
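For illustration, a minimal user-crontab sketch along those lines (this takes the simpler route of exporting PATH and CLOUDSDK_PYTHON at the top of the crontab; the SDK path is taken from the symlink attempt above and should be adjusted to wherever gcloud actually lives):
# Hypothetical crontab sketch: make the Cloud SDK and its Python visible to cron jobs
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/myuser/google-cloud-sdk/bin
CLOUDSDK_PYTHON=/usr/bin/python2.7
51 21 30 5 6 /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt 2>&1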
Another option I would recommend trying out is using Cloud Scheduler to run your gcloud commands. This way, you can run gcloud for your cron jobs in a more integrated and easier way. You can find more information about this option here: Creating and configuring cron jobs
Let me know if the information helped you!
I found my error. The problem was only with the command "gcloud dns record-sets transaction start"; the other commands were executing successfully but logging nothing, which made me think the remaining commands were not being executed. This command creates a temp file, e.g. transaction.yaml, and that file could not be created in the default path for gcloud (snap/bin), but the log simply didn't write anything! I had to specify the path and name for that file with the flag --transaction-file=mytransaction.yaml. Thanks for your support and ideas.
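A minimal sketch of the working form, assuming a managed zone named myzone (the zone name is made up; --transaction-file is the flag that fixed it):
gcloud dns record-sets transaction start --zone=myzone --transaction-file=mytransaction.yaml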
I have run into the same issue before. I fixed it by forcing the profile to load in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
gcloud config set project myprojectecho
echo "Project set to myprojectecho."
I hope this can help others in the future with similar issues, as this also helped me when trying to set GKE nodes from 0-4 on a schedule.
Adding the line below to the shell script fixed my issue:
#Execute user profile
source /root/.bash_profile

unrecognized arguments when executing script via crontab

I have my crontab set up as follows (this is inside a docker container).
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/bin/bash
5 * * * * bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
The /usr/local/bin/process-logs is designed to expose some MongoDB logs using mtools to a simple web server.
The problematic part of the script is fairly simple. raw_name is archive_name without the file extension.
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
mlogvis --no-browser "${raw_name}"
If I manually run the command as specified in the crontab config above
bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
It all works as expected (this is the expected output from mlogvis)
...
copying /usr/local/lib/python3.5/dist-packages/mtools/data/index.html to /some/path/mongod.log-20190313-1552456862.html
...
When the script gets triggered via crontab it throws the following error
usage: mlogvis [-h] [--version] [--no-progressbar] [--no-browser] [--out OUT]
[--line-max LINE_MAX]
mlogvis: error: unrecognized arguments: mongod.log-20190313-1552460462
The mlogvis command that caused the above error (with actual values rather than parameters):
mlogvis --no-browser "mongod.log-20190313-1552460462"
Again if I run this command myself it all works as expected.
mlogvis: http://blog.rueckstiess.com/mtools/mlogvis.html
I don't believe this to be an issue with the file not having correct permissions or not existing, as mlogvis produces a different error in those conditions. I've also tested removing the '-' characters from the file name, thinking it might be trying to parse them as arguments, but it made no difference.
I know cron execution doesn't have the same execution environment as the user I tested the script as. I've set the PATH to be the same as the user's, and when the container starts up I execute env >> /etc/environment so all the environment vars are properly set.
Does anyone know of a way to debug this, or has anyone encountered something similar? All other components of the script are functioning except mlogvis, which is core to the purpose of this job.
Summary of what I've tried as a fix:
Set environment and PATH for cron execution to be the same as the user I tested the script as
Replace - in file name(s) to see if it was parsing the parts as arguments
Hardcode a filename with full permissions to see if it was permissions related
Manually run the script -> this works
Manually run the mlogvis command in isolation -> this works
Try to load /home/user/.bash_profile before executing the script and try again. I suspect that you have a missing PATH or some other environment variable that is not set.
source /home/user/.bash_profile
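For example, a sketch of wiring that into the cron entry from the question (same paths as above; bash -c simply lets the profile load before the script runs):
# Load the user's profile first so PATH and other env vars match an interactive shell
5 * * * * bash -c 'source /home/user/.bash_profile && bash /usr/local/bin/process-logs' > /proc/1/fd/1 2>/proc/1/fd/2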
Please post your complete script, because while executing via crontab, you have to be sure your raw_name variable was properly created. As it seems to depend on archive_name, posting some more context can help us to help you.
In any case, if you are using bash, you can try something like:
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
# here you have to be sure that archive_name is correct
raw_name_2="${archive_name%.*}"   # strip only the final extension (e.g. ".gz"); %%.* would cut at the first dot
mlogvis --no-browser "${raw_name_2}"
It is not going to solve your issue, but probably will take you closer to the right path.

run .sh script via Jenkins to execute aws command error

My problem is that I am trying to execute a shell script via Jenkins to copy files created by msbuild to AWS S3.
I added a new build step "Execute Shell" and set it to run the shell script with the command sh publishS3.sh, but nothing happens and the files don't appear in the S3 bucket.
My Jenkins runs on a local Windows server.
When I execute the shell script by typing sh publishS3.sh in the Jenkins local directory, everything is OK and the files are copied successfully to the S3 bucket, but when I run it from Jenkins nothing happens. My publishS3.sh script is:
#!/bin/bash
aws s3 cp Com.VistaDraft.Common.dll s3://download.vistadraft.com/MVP
I tried to check which output I receive after execution by adding > output.txt to the end of the command, but Jenkins generates an empty file. If I do the same locally, I receive a message that the files were copied successfully to S3. I set the shell path in Jenkins to C:\Program Files\Git\git-bash.exe and use git-bash.exe locally too. Maybe someone knows where the problem is? Please suggest.
You could try adding -ex to the first line of the script so you can see what it's doing and ease the debugging:
#!/bin/bash -ex
# rest of script
Make sure the aws tool is in the PATH of the environment where Jenkins runs your script. It might help if you specify the full path to the command.
You could put which aws in your script to see what's going on.
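For instance, a hedged sketch combining both suggestions (the /usr/local/bin/aws path is only an example of a full path; check what which aws prints on your Jenkins node):
#!/bin/bash -ex
# -e stops at the first error, -x prints each command so the Jenkins console shows what actually ran
which aws || echo "aws not found in Jenkins' PATH"
# Calling the CLI by an absolute path (example path, adjust to your install) avoids relying on PATH
/usr/local/bin/aws s3 cp Com.VistaDraft.Common.dll s3://download.vistadraft.com/MVP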
