CircleCI permission denied running bash script

I have a circle.yml file like so:
dependencies:
  override:
    - meteor || curl https://install.meteor.com | /bin/sh
deployment:
  production:
    branch: "master"
    commands:
      - ./deploy.sh
When I push to Github, I get the error:
/home/ubuntu/myproject/deploy.sh returned exit code 126
bash: line 1: /home/ubuntu/myproject/deploy.sh: Permission denied
Action failed: /home/ubuntu/myproject/deploy.sh
When I run the commands that live inside deploy.sh outside of the file (under commands) everything runs fine.
Everything in the circle.yml file seems to be in line with the examples in the CircleCI docs. What am I doing wrong?

Several possible problems:
deploy.sh might not be marked as executable (chmod +x deploy.sh would fix this)
The first line of deploy.sh might not be a runnable shell...
If the first doesn't work, can we please see the contents of deploy.sh?
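For the first two points, a quick sanity check might look like this (a minimal sketch, assuming deploy.sh sits at the repository root):
head -n 1 deploy.sh   # should print a shebang such as #!/bin/bash
chmod +x deploy.sh    # mark the script as executable
./deploy.sh           # should now run without "Permission denied"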

I was having the same issue. I added sh to the front of my commands section to get it to work.
deployment:
  production:
    branch: "master"
    commands:
      - sh ./deploy.sh
Hopefully that fix saves everyone some time going forward.

Assuming you have already checked it in, use this command to flag it as executable to git:
git update-index --chmod=+x script.sh
reference:
https://www.pixelninja.me/make-script-committed-to-git-executable/
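Since git update-index --chmod=+x only changes the index, the new mode still has to be committed and pushed; a typical sequence (the commit message is just an example) might be:
git update-index --chmod=+x deploy.sh
git commit -m "Mark deploy.sh as executable"
git push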

As @palfrey says, the script is probably not marked as executable, and sometimes it seems to be marked wrong on deployment even when you have previously run chmod +x on the script on your local machine. (Why? I don't know. If someone does, please enlighten me!)
Here is a general command to ensure your scripts are always marked as executable. It assumes they are all located in a /home/ubuntu/${CIRCLE_PROJECT_REPONAME}/scripts directory and all have a .sh extension. If your directories differ, edit it to use your directories instead.
Since all my scripts called by circle.yml source a shared script (shared.sh) at the top, I add the following code to shared.sh, which ensures all scripts are marked as executable:
SCRIPTS="/home/ubuntu/${CIRCLE_PROJECT_REPONAME}/scripts"
find "${SCRIPTS}" | grep "\.sh$" | xargs chmod +x
Works like a charm. :-)
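If any script path ever contains spaces, a slightly more defensive variant of the same idea (just a sketch, using find's own filtering instead of grep and xargs) would be:
# make every *.sh file under the scripts directory executable, space-safe
find "${SCRIPTS}" -type f -name '*.sh' -exec chmod +x {} +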

Related

execute aws command in script with sudo

I am running a bash script with sudo and have tried the command below, but I am getting the error shown further down when using aws cp. I think the problem is that the script is looking for the config in /root, which does not exist. However, doesn't the -E flag preserve the original location? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried to `export` the profile name and `source` the path to the `config`.
You can run the command as the original user, like:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash; using source will cause the script to run in the same shell as your open terminal window, which keeps the same environment (such as the user). Though honestly, @Philippe's answer is the better, more correct one.
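Another option, if preserving the environment isn't enough, is to point the AWS CLI at the original user's config explicitly. AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE are standard AWS CLI environment variables; /home/myuser is only a placeholder for the real home directory:
# run the script as root while reading the invoking user's AWS config/credentials
sudo env AWS_CONFIG_FILE=/home/myuser/.aws/config \
         AWS_SHARED_CREDENTIALS_FILE=/home/myuser/.aws/credentials \
         bash /path/to/.sh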

Cleaning up file based variables. ERROR: Job failed: exit code 1

I am using gitlab-ci to automate build and release. In the last job, I want to upload the artifacts to a remote server using lftp.
The $(pwd)/publish/ path contains the artifacts that were generated in the previous job, and all variables are declared under the GitLab Settings --> CI / CD.
This is the job's YAML code:
upload-job:
  stage: upload
  image: mwienk/docker-lftp:latest
  tags:
    - dotnet
  only:
    - master
  script:
    - lftp -e "open $HOST; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete $(pwd)/publish/ wwwroot/; bye"
Note that lftp transfers my files; however, I'm not sure all of them are transferred.
I added echo "All files Transfered." after the lftp command, but it never runs.
There is no error or warning in the pipeline log, but I get the error from the title:
Cleaning up file based variables. ERROR: Job failed: exit code 1
I don't know what it is for. Has anyone faced this error and found a solution?
Finally, I solved the problem by making some changes to the lftp command parameters.
The key to troubleshooting ERROR: Job failed: exit code 1 is to use commands with a verbose parameter so they return enough log output to work with. Another important point is knowing how to debug shell scripts, whether bash, PowerShell, or something else.
Also, you can run and test the commands directly in a shell, which is helpful.
The following links are helpful to troubleshoot command-line scripts:
How to debug a bash script?
5 Simple Steps On How To Debug a Bash Shell Script
Use the PowerShell Debugger to Troubleshoot Scripts
For lftp logging:
How to enable lftp protocol logging?
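As a sketch of the logging advice above, lftp's -d flag turns on debug output, and the cmd:fail-exit setting makes lftp stop with a non-zero exit code on the first failing command (everything else here is taken from the job in the question):
lftp -d -e "set cmd:fail-exit true; open $HOST; user $FTP_USERNAME $FTP_PASSWORD; mirror -X .* -X .*/ --reverse --verbose --delete $(pwd)/publish/ wwwroot/; bye"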

Not getting the output from shell script in Gitlab CI

I have set up a GitLab runner on my Windows machine. I have a build.sh script in which I just echo "Hello world". I have these lines in my gitlab-ci.yml file:
build:
  stage: build
  script:
    - ./build.sh
The runner executes this job but does not print the output of the echo command from the build.sh file. But if I change the extension to .bat, it works and shows me the output. The gitlab-runner is set up with the shell executor. What could the reason be? Or am I missing something?
GitLab will show output for anything that ends up written to STDOUT or STDERR. It's hard to say what's happening without seeing your whole script, but I imagine you're somehow not actually echoing to STDOUT, which is why the output isn't ending up in the CI log.
To test this I created a test project on GitLab.com. One difference in my test is that my CI YAML script command was sh build.sh, because the script wasn't executable and therefore couldn't be executed with ./build.sh.
build.sh file:
#!/bin/bash
echo "This is output from build.sh"
.gitlab-ci.yml file:
build:
  stage: build
  script:
    - sh build.sh
The build output:
Running with gitlab-runner 12.3.0-rc1 (afb9fab4)
on docker-auto-scale 72989761
...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/dblessing/ci-output-test/.git/
Created fresh repository.
Checking out cfe8a4ee as master...
Skipping Git submodules setup
$ sh build.sh
This is output from build.sh
Job succeeded

Docker unable to start an interactive shell if the image has an entry script

My custom-made image ends with
ENTRYPOINT [ "/bin/bash", "-c", "/home/tool/entry_script.sh" ]
This is absolutely needed because, at runtime, the first thing the user must do is update an already cloned GitHub project, and users will often forget to do it.
But then, when I try to launch it using
docker run -it --rm my_image /bin/bash
I can see that the ENTRYPOINT script is being executed, but then the container exits.
I expect to have /bin/bash being executed and the shell to remain in interactive mode, due to -it flags.
What am I doing wrong?
UPDATE: here is my entry script
#!/bin/bash
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
Actually I don't get any kind of errors at runtime.
When you set an entry point in a Docker container, it is the only thing it will run; it's the one and only process that matters (PID 1). Once your entry_script.sh script finishes running and returns an exit code, Docker thinks the container has done what it needed to do and exits, since the only process inside it has exited.
If you want to launch a shell inside the container, you can modify your entry point script like so:
#!/bin/bash
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
/bin/bash "$@"
This starts a shell after the repo update has been done. The container will now exit when the user quits the shell.
The -i and -t flags will make sure the session gives you stdin/stdout and will allocate a pseudo-TTY for you, but they will not automatically run bash for you. Some containers don't even have bash in them.
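A common alternative pattern (just a sketch, not what the original image uses) is to declare the ENTRYPOINT without the -c wrapper, e.g. ENTRYPOINT ["/home/tool/entry_script.sh"] together with CMD ["/bin/bash"], and to end the script with exec so whatever command was passed replaces the script as PID 1:
#!/bin/bash
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
# exec replaces this script with the command passed in (CMD or the docker run arguments),
# so the container keeps running for as long as that command does
exec "$@"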
I think the original question and answer are pretty good (thank you!). However, I had the exact same problem and the provided solution did not work for me, and I wasted a lot of time figuring out what I was doing wrong. So I came up with a solution that should work all the time, in case it saves time for others. In my Docker entry point I'm sourcing a shell script file from the Intel compiler, and the received parameters "$@" are somewhat changed by the source command; then, when ending the script with /bin/bash "$@", the original parameters are gone. Here is my updated version, which should be safer for all use cases:
#!/bin/bash
# Save original parameters
allparams=("$@")
echo "UPDATING GIT REPO";
cd /home/tool/cloned_github_tools_root
git pull
git submodule init
git submodule update
echo "Entrypoint ended";
# Forward initial parameters
/bin/bash "${allparams[@]}"

How to determine whether a script has previously run using Ansible?

I'm using Ansible to deploy (Git clone, run the install script) a framework to a server. The install step means running the install.sh script like this:
- name: Install Foo Framework
  shell: ./install.sh
  args:
    chdir: ~/foo
How can I determine whether I have executed this step in a previous run of Ansible? I want to add a when condition to this step that only executes if the install.sh script hasn't been run previously.
The install.sh script does a couple of things (replacing some files in the user's home directory), but it's not obvious whether the script was run before from just taking a look at the files. The ~/foo.sh file might have existed before, it's not clear whether it was replaced by the install script or was there before.
Is there a way in Ansible to store a value on the server that lets me determine whether this particular task has been executed before? Or should I just create a marker file in the user's home directory (e.g. ~/foo-installed) that I check in later invocations of the playbook?
I suggest using the script module instead. This module has a creates parameter:
a filename, when it already exists, this step will not be run. (added in Ansible 1.5)
So your script could then simply touch a file, which would prevent execution of the script in subsequent calls.
Here's how I solved it in the end. The pointer to using the creates option helped:
- name: Install Foo Framework
  shell: ./install.sh && touch ~/foo_installed
  args:
    chdir: ~/foo
    creates: ~/foo_installed
Using this approach, the ~/foo_installed file is only created when the install script finishes without an error.
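For intuition, the creates option makes the task behave roughly like this shell check (an illustration only, not what Ansible literally runs):
# skip the whole step if the marker file already exists
cd ~/foo
[ -e ~/foo_installed ] || { ./install.sh && touch ~/foo_installed; }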
