Capistrano 3 deploy failed messages - exit status 1 (failed) - ruby

When I deploy my Symfony2 app using Capistrano with symfony gem I get various errors such as
Running /usr/bin/env [ -L /var/www/releases/20151014090151/app/config/parameters.yml ] as ubuntu@ec2-00-000-000-000.eu-west-1.compute.amazonaws.com
Command: [ -L /var/www/releases/20151014090151/app/config/parameters.yml ]
Finished in 0.038 seconds with exit status 1 (failed)
and I get the same for
-f /var/www/releases/20151014120425/app/config/parameters.yml
-L /var/www/releases/20151014090151/web/.htaccess
-L /var/www/releases/20151014090151/web/robots.txt
-L /var/www/releases/20151014090151/app/logs
-d /var/www/releases/20151014120425/app/logs
SYMFONY_ENV=prod /usr/bin/env ln -s /var/www/shared/app/logs /var/www/releases/20151014120425/app/logs
-L /var/www/releases/20151014120425/web/uploads
-d /var/www/releases/20151014120425/web/uploads
-L /var/www/releases/20151014120425/src/Helios/CoreBundle/Resources/translations
-d /var/www/releases/20151014120425/src/Helios/CoreBundle/Resources/translations
-L /var/www/releases/20151014120425/app/spool
-d /var/www/releases/20151014120425/app/spool
-d /var/www/releases/20151014120425/app/cache
I am not sure what is failing, or what the various flags -f, -L and -d mean.
The deploy completes, but it still shows these failed messages. Can someone please advise what they mean and how to fix them?
Thanks

The flags are shell file test operators: -f tests for a regular file, -d for a directory, and -L for a symbolic link.
When Capistrano says a command failed, it just means the command returned a non-zero status code. In the case of file test operators, Capistrano is typically checking whether something exists before creating it, so that "failure" is fed into a conditional which then creates the file/folder/symlink. This is normal, albeit confusing. If a genuinely critical command fails, Capistrano will stop the deployment and display an error message.
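To see the operators in action (the paths below are just illustrative ones that exist on most Linux systems):

```shell
# The [ ... ] bracket command is the test(1) builtin; each flag probes one
# property, and the exit status is the answer: 0 = yes, non-zero = no.
[ -f /etc/hosts ]        && echo "-f: regular file exists (exit status 0)"
[ -d /etc ]              && echo "-d: directory exists (exit status 0)"
[ -L /nonexistent-link ] || echo "-L: not a symlink (exit status 1, i.e. 'failed')"
```

That exit status 1 is exactly what Capistrano reports as "failed" in the log above.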

Related

Not getting a build failure status even when the build run is not successful (cloud-build remote builder)

Cloud-build is not showing build failure status
I created my own remote-builder, which scps all files from /workspace to my instance and runs the build using gcloud compute ssh -- COMMAND
remote-builder
#!/bin/bash
USERNAME=${USERNAME:-admin}
REMOTE_WORKSPACE=${REMOTE_WORKSPACE:-/home/${USERNAME}/workspace/}
GCLOUD=${GCLOUD:-gcloud}
KEYNAME=builder-key
ssh-keygen -t rsa -N "" -f ${KEYNAME} -C ${USERNAME} || true
chmod 400 ${KEYNAME}*
cat > ssh-keys <<EOF
${USERNAME}:$(cat ${KEYNAME}.pub)
EOF
${GCLOUD} compute scp --compress --recurse \
  $(pwd)/ ${USERNAME}@${INSTANCE_NAME}:${REMOTE_WORKSPACE} \
  --ssh-key-file=${KEYNAME}
${GCLOUD} compute ssh --ssh-key-file=${KEYNAME} \
  ${USERNAME}@${INSTANCE_NAME} -- ${COMMAND}
below is the example of the code to run build(cloudbuild.yaml)
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND="docker build -t [image_name]:[tagname] -f Dockerfile ."
During the docker build (inside the Dockerfile) it fails and shows errors in the log, but the status shows SUCCESS.
Can anyone help me resolve this?
Thanks in advance.
try adding
|| exit 1
at the end of your docker command... alternatively, you might just need to change the entrypoint to 'bash' and run the script manually
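The reason || exit 1 helps: without set -e, a bash script's exit status is that of its last command, so an earlier failed docker build is silently swallowed. A minimal sketch, with false standing in for the failing build:

```shell
#!/bin/bash
# Without 'set -e', the script exits with the status of the LAST command.
false          # stand-in for a failing 'docker build'
echo "cleanup" # succeeds, so the script exits 0 and Cloud Build reports SUCCESS
# Appending '|| exit 1' to the build command (or adding 'set -e' at the top)
# makes the failure propagate instead.
```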
To confirm -- the first part was the run-on.sh script, and the second part was your cloudbuild.yaml right? I assume you trigger the build manually via UI and/or REST API?
I wrote all the docker commands in a bash script and added the error-handling code below:
handle_error() {
  echo "FAILED: line $1, exit code $2"
  exit 1
}
trap 'handle_error $LINENO $?' ERR
It works!

I need help parsing HTML with grep [duplicate]

It works ok as a single tool:
curl "someURL"
curl -o - "someURL"
but it doesn't work in a pipeline:
curl "someURL" | tr -d '\n'
curl -o - "someURL" | tr -d '\n'
it returns:
(23) Failed writing body
What is the problem with piping the cURL output? How can I buffer the whole cURL output and then handle it?
This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page.
In curl "url" | grep -qs foo, as soon as grep has what it wants it will close the read stream from curl. cURL doesn't expect this and emits the "Failed writing body" error.
A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.
E.g.
curl "url" | tac | tac | grep -qs foo
tac is a simple Unix program that reads the entire input page and reverses the line order (hence we run it twice). Because it has to read the whole input to find the last line, it will not output anything to grep until cURL is finished. Grep will still close the read stream when it has what it's looking for, but it will only affect tac, which doesn't emit an error.
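The same mechanism can be reproduced without curl; here head stands in for grep -qs:

```shell
# 'head -n 1' closes the read end of the pipe after one line; the writer
# (seq) then gets SIGPIPE/EPIPE mid-stream, which is the same condition
# curl reports as "(23) Failed writing body".
seq 1 1000000 | head -n 1
# prints: 1
```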
For completeness and future searches:
It's a matter of how cURL manages its buffer: the -N option disables output buffering.
Example:
curl -s -N "URL" | grep -q Welcome
Another possibility, if using the -o (output file) option - the destination directory does not exist.
eg. if you have -o /tmp/download/abc.txt and /tmp/download does not exist.
Hence, ensure any required directories exist beforehand, or use the --create-dirs option together with -o.
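A small illustration using a file:// URL so it works offline (the destination paths are made up for the example, and this assumes a curl build with the file protocol enabled):

```shell
# Without --create-dirs, writing to a path whose directory does not exist
# fails with "curl: (23) Failed writing body"; with it, curl creates the
# missing directories first.
curl -s --create-dirs -o /tmp/curl-demo/sub/hosts.copy file:///etc/hosts
ls /tmp/curl-demo/sub/hosts.copy
```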
The server ran out of disk space, in my case.
Check for it with df -k .
I was alerted to the lack of disk space when I tried piping through tac twice, as described in one of the other answers: https://stackoverflow.com/a/28879552/336694. It showed me the error message write error: No space left on device.
You can do this instead of using -o option:
curl [url] > [file]
In my case it was an encoding problem; iconv solves it:
curl 'http://www.multitran.ru/c/m.exe?CL=1&s=hello&l1=1' | iconv -f windows-1251 | tr -dc '[:print:]' | ...
If you are trying something similar like source <( curl -sS $url ) and getting the (23) Failed writing body error, it is because sourcing a process substitution doesn't work in bash 3.2 (the default for macOS).
Instead, you can use this workaround.
source /dev/stdin <<<"$( curl -sS $url )"
Trying the command with sudo worked for me. For example:
sudo curl -O -k 'https url here'
note: -O (this is capital o, not zero) & -k for https url.
I had the same error but for a different reason. In my case I had a tmpfs partition with only 1 GB of space, and I was downloading a big file that eventually filled all the memory on that partition, producing the same error.
I encountered the same problem when doing:
curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | apt-key add -
The above query needs to be executed using root privileges.
Writing it in following way solved the issue for me:
curl -L https://packagecloud.io/golang-migrate/migrate/gpgkey | sudo apt-key add -
If you write sudo before curl, you will get the Failed writing body error.
For me, it was a permission issue. docker run is called with a user profile, but root is the user inside the container. The solution was to make curl write to /tmp, since that has write permission for all users, not just root.
I used the -o option.
-o /tmp/file_to_download
In my case, I was doing:
curl <blabla> | jq | grep <blibli>
With jq . it worked: curl <blabla> | jq . | grep <blibli>
I encountered this error message while trying to install varnish cache on ubuntu. The google search landed me here for the error (23) Failed writing body, hence posting a solution that worked for me.
The bug is encountered while running the command as root curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -
the solution is to run apt-key add as a non-root user:
curl -L https://packagecloud.io/varnishcache/varnish5/gpgkey | apt-key add -
The explanation here by @Kaworu is great: https://stackoverflow.com/a/28879552/198219
This happens when a piped program (e.g. grep) closes the read pipe before the previous program is finished writing the whole page. cURL doesn't expect this and emits the "Failed writing body" error.
A workaround is to pipe the stream through an intermediary program that always reads the whole page before feeding it to the next program.
I believe the more correct implementation would be to use sponge, as already suggested by @nisetama in the comments:
curl "url" | sponge | grep -qs foo
I got this error trying to use jq when I didn't have jq installed. So... make sure jq is installed if you're trying to use it.
In Bash and zsh (and perhaps other shells), you can use process substitution (Bash/zsh) to create a file on the fly, and then use that as input to the next process in the pipeline chain.
For example, I was trying to parse JSON output from cURL using jq and less, but was getting the Failed writing body error.
# Note: this does NOT work
curl https://gitlab.com/api/v4/projects/ | jq | less
When I rewrote it using process substitution, it worked!
# this works!
jq "" <(curl https://gitlab.com/api/v4/projects/) | less
Note: jq uses its 2nd argument to specify an input file
Bonus: If you're using jq like me and want to keep the colorized output in less, use the following command line instead:
jq -C "" <(curl https://gitlab.com/api/v4/projects/) | less -r
(Thanks to @Kaworu for their explanation of why Failed writing body was occurring. However, their solution of using tac twice didn't work for me. I also wanted to find a solution that would scale better for large files and avoid the other issues noted in the comments to that answer.)
I was getting curl: (23) Failed writing body. Later I noticed that I did not have sufficient space to download an rpm package via curl, and that was the reason for the issue. I freed up some space and the issue was resolved.
I had the same error because of my own typo, a single | where I meant ||:
# fails for the reasons mentioned above: echo does not read stdin, so the pipe closes immediately
curl -I -fail https://www.google.com | echo $?
curl: (23) Failed writing body
# success
curl -I -fail https://www.google.com || echo $?
I added the -s flag and it did the job, e.g.: curl -o- -s https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash

Install script curl'ed from github:

I have the following script hosted on Github:
https://rawgit.com/oresoftware/quicklock/master/install.sh
the contents of that file are:
#!/usr/bin/env bash
set -e;
cd "$HOME"
mkdir -p "$HOME/.quicklock/locks"
curl https://rawgit.com/oresoftware/quicklock/master/install.sh > "$HOME/.quicklock/ql.sh"
echo "To complete installation of 'quicklock' add the following line to your .bash_profile file:";
echo ". \"$HOME/.quicklock/ql.sh\"";
I download and run this script with:
curl -o- https://rawgit.com/oresoftware/quicklock/master/install.sh | bash
but I get this error:
bash: line 1: Moved: command not found
That error is killing me, I cannot figure out what is causing it. I tried curl with both the -o- option and without.
The URL for rawgit has changed; the error itself comes from bash trying to execute the "Moved Permanently" body that curl downloaded.
Change rawgit.com to raw.githubusercontent.com.
Another option is to add -L to have curl follow the redirect link.
I figured this out by changing bash to bash -x. Here is the output:
curl -o- https://rawgit.com/oresoftware/quicklock/master/install.sh | bash -x
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 107 100 107 0 0 400 0 --:--:-- --:--:-- --:--:-- 402
+(:1): Moved Permanently. Redirecting to https://raw.githubusercontent.com/oresoftware/quicklock/master/install.sh
bash: line 1: Moved: command not found
@xxfelixxx is pretty much right.
This was sort of a nightmare, but there appears to be a redirect even when using raw.githubusercontent.com;
the only thing that worked with curl was to use:
curl -o- https://raw.githubusercontent.com/oresoftware/quicklock/master/install.sh | bash
For scripts that require arguments, you can pass _ as a placeholder for the script name, followed by the arguments. For example, for an example.sh that expects --help:
curl -L https://raw.githubusercontent.com/<USER>/<NAME>/<BRANCH>/example.sh | bash -s _ --help

Login via curl fails inside bash script, same curl succeeds on command line

I'm running this login via curl in my bash script. I want to make sure I can login before executing the rest of the script, where I actually log in and store the cookie in a cookie jar and then execute another curl in the API thousands of times. I don't want to run all that if I've failed to login.
Problem is, the basic login returns 401 when it runs inside the script. But when I run the exact same curl command on the command line, it returns 200!
basic_login_curl="curl -w %{http_code} -s -o /dev/null -X POST -d \"username=$username&password=$password\" $endpoint/login"
echo $basic_login_curl
outcome=`$basic_login_curl`
echo $outcome
if [ "$outcome" == "401" ]; then
echo "Failed login. Please try again."; exit 1;
fi
This outputs:
curl -w %{http_code} -s -o /dev/null -X POST -d "username=bdunn&password=xxxxxx" http://stage.mysite.it:9301/login
401
Failed login. Please try again.
Copied the output and ran it on the cmd line:
$ curl -w %{http_code} -s -o /dev/null -X POST -d "username=bdunn&password=xxxxxx" http://stage.mysite.it:9301/login
200$
Any ideas? LMK if there's more from the code you need to see.
ETA: Please note: the issue is not that it doesn't match 401; it's that running the same curl login command inside the script fails to authenticate, whereas it succeeds when I run it on the actual command line.
Most of the issues reside in how you are quoting/not quoting variables and the subshell execution. Setting up your command like the following is what I would recommend:
basic_login_curl=$(curl -w "%{http_code}" -s -o /dev/null -X POST -d "username=$username&password=$password" "$endpoint/login")
The rest basically involves quoting everything properly:
basic_login_curl=$(curl -w "%{http_code}" -s -o /dev/null -X POST -d "username=$username&password=$password" "$endpoint/login")
# echo "$basic_login_curl" # not needed since what follows repeats it.
outcome="$basic_login_curl"
echo "$outcome"
if [ "$outcome" = "401" ]; then
echo "Failed login. Please try again."; exit 1;
fi
Running the script through shellcheck.net can be helpful in resolving issues like this as well.
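Why storing the command in a string misbehaves: on unquoted expansion the shell only word-splits, it does not re-parse quotes, so the escaped quotes end up sent literally as part of the POST data. A sketch with printf standing in for curl:

```shell
cmd='printf %s "a b"'
$cmd          # word-split only: printf gets the args  %s  "a  b"  -> prints "ab"
eval "$cmd"   # re-parsed as shell code: printf gets  %s  'a b'    -> prints a b
```

This is why the recommendation is to run the command directly inside $( ... ) rather than building it up in a string first.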

Not able to send psadmin output to logs

I'm making a script to restart an instance and it works without any log file but it gives the following error when I try to log the output of psadmin:
java.lang.NullPointerException
at com.peoplesoft.pt.psadmin.ui.Progress.<init>(Progress.java:135)
at com.peoplesoft.pt.psadmin.ui.Progress.getInstance(Progress.java:123)
at com.peoplesoft.pt.psadmin.pia.DomainBootHandler.BootWlsServer(DomainBootHandler.java:84)
at com.peoplesoft.pt.psadmin.pia.DomainBootHandler.run(DomainBootHandler.java:62)
at com.peoplesoft.pt.psadmin.pia.PIAAdminCmdLine.startDomain(PIAAdminCmdLine.java:270)
at com.peoplesoft.pt.psadmin.pia.PIAAdminCmdLine.run(PIAAdminCmdLine.java:481)
at com.peoplesoft.pt.psadmin.PSAdmin.runSwitched(PSAdmin.java:170)
at com.peoplesoft.pt.psadmin.PSAdmin.main(PSAdmin.java:232)
The following works (with no log):
export ORAENV_ASK=NO
export ORACLE_SID=PSCNV
.oraenv
export TUXDIR=/m001/Oracle/Middleware/tuxedo12.1.1.0
. /m001/pt854/psconfig.sh
. $TUXDIR/tux.env
export PS_CFG_HOME=$PS_HOME
$PS_HOME/appserv/psadmin -w shutdown -d PSCNV
$PS_HOME/appserv/psadmin -w start -d PSCNV
$PS_HOME/appserv/psadmin -w status -d PSCNV
Changing the psadmin invocations like so causes the error:
LOGFILE=/home/psoft/scripts/pscnv_webserv_stopNstart.log
test() {
$PS_HOME/appserv/psadmin -w shutdown -d PSCNV
$PS_HOME/appserv/psadmin -w start -d PSCNV
$PS_HOME/appserv/psadmin -w status -d PSCNV
}
test >> ${LOGFILE}
I also tried redirecting the output of each call individually and saw the same error.
I'm interested in any feedback on this question as well. I tried writing a cross-platform Java program to bounce multiple app and web servers, and it seems that psadmin.jar holds stdout exclusively while psadmin runs.
I want to evaluate the output of psadmin/psadmin.jar to see if there are trappable errors that require killing the process at the OS level.
Hopefully there is a way to share stdout, but I have not found one yet...
This solved it for me: nohup script -q -c "psadmin -w start -d peoplesoft"
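Why script helps: it runs the command under a pseudo-terminal, so a program that only writes when attached to a tty (as psadmin's Java UI appears to) still produces output, which script captures to a file. A sketch with echo standing in for the real psadmin invocation (util-linux script syntax; macOS/BSD script takes its arguments in a different order):

```shell
# -q suppresses script's own start/done banner; -c runs the given command;
# the final argument is the file the captured terminal output is written to.
script -q -c "echo booting PSCNV" /tmp/psadmin_demo.log
grep "booting PSCNV" /tmp/psadmin_demo.log
```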
