Kubernetes postStart lifecycle hook exited with 7 - bash

I have a simple bash script to execute at postStart, but I get an error that is not informative at all:
Exec lifecycle hook ([/bin/bash -c sleep 30;/xcom/scripts/sidecar_postStart.sh]) for Container "perfcibuddy" in Pod "xcomapp-597fb859c5-6r4g2_ns(412852d1-5eea-11ea-b641-0a31ddb9a71e)" failed - error: command '/bin/bash -c sleep 120;/xcom/scripts/sidecar_postStart.sh' exited with 7: , message: ""
The sleep is there because I got a tip that there might be a race condition: the script might not be in place yet at the moment Kubernetes calls the hook.
And if I log into the container I can execute the script from the shell without any problem.
The script is just doing a simple curl call (IP obviously sanitized):
# ----------------------------------------------------------------------------
# Script to perform postStart lifecycle hook triggered actions in container
# ----------------------------------------------------------------------------
# -------------------------------------------[ get token from Kiam server ]---
role_name=$( curl -s http://1.1.1.1/latest/meta-data/iam/security-credentials/ )
curl -s http://1.1.1.1/latest/meta-data/iam/security-credentials/${role_name}
I tried numerous forms of setting the command in the template (everything in quotes, with && instead of ;); this is the current one:
exec:
  command: [/bin/bash, -c, "sleep 120;/xcom/scripts/sidecar_postStart.sh"]
What could be the problem here?

Curl exit code 7 generally means "failed to connect", so your IP is probably wrong or the Kiam agent is not set up correctly.
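If it is a startup-timing issue rather than a wrong address, a retry loop inside the hook script makes the failure mode visible instead of just bubbling up curl's exit code. A minimal sketch, assuming the sanitized metadata IP from the question and arbitrary retry and timeout values:

#!/bin/bash
# Hypothetical variant of sidecar_postStart.sh: retry the Kiam metadata
# endpoint and log the curl exit code so a failed hook is easier to diagnose.
metadata_url="http://1.1.1.1/latest/meta-data/iam/security-credentials/"  # sanitized IP
for attempt in 1 2 3 4 5; do
    role_name=$(curl -s --fail --connect-timeout 5 "$metadata_url")
    rc=$?
    if [ "$rc" -eq 0 ] && [ -n "$role_name" ]; then
        # Fetch the credentials for the discovered role and exit with curl's status.
        curl -s --fail "${metadata_url}${role_name}"
        exit $?
    fi
    echo "attempt ${attempt}: curl exited with ${rc}, retrying" >&2
    sleep 5
done
echo "giving up: metadata endpoint unreachable" >&2
exit 1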

Related

How can we output the error in Jenkins when using the SSH Publishers plugin

I have a shell script on a remote server:
mcccc 1
$ sh -x demo.sh
+ mcccc 1
demo.sh: line 4: mcccc: command not found
In the Jenkins project configure page, in the "Exec command" box of the SSH Publishers settings section, I have:
sh -x demo.sh
But in the Jenkins console output, I only get the error below:
ERROR: Exception when publishing, exception message [Exec exit status not zero. Status [127]]
Finished: UNSTABLE
How can I get the "demo.sh: line 4: mcccc: command not found" message?
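One workaround (an assumption, not something from this thread) is to have the Exec command capture the remote script's output itself, so the "command not found" line can be inspected even when the plugin only reports the exit status:

# Hypothetical contents of the "Exec command" box: keep the remote output
# in a log file and preserve the script's exit status for Jenkins.
sh -x demo.sh > /tmp/demo.log 2>&1
status=$?
cat /tmp/demo.log
exit $status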

Error handling for aws cli in bash doesn't cause it to exit

I have a bash script running a bunch of stuff along with an aws list-objects command. Recently we noticed that stderr would contain an error, but the script does not exit with a non-zero code. I hoped that set -e would handle any command failures, but it doesn't seem to do the trick.
Script looks like this:
#!/bin/bash
set -e
# do stuff
aws s3api list-objects --bucket xyz --prefix xyx --output text --query >> files.txt
# do stuff
Error in Stderr :
An error occurred (SlowDown) when calling the ListObjects operation (reached max retries: 4): Please reduce your request rate.
Objective:
I want the bash script to fail and exit when it encounters a problem with the aws cli commands. I can add an explicit check on ($? != 0), but I am wondering if there is a better way to do this.
For me, this did the trick:
set -e -o pipefail
The link from @codeforrester says:
set -o pipefail is a workaround that returns the exit code of the first failed process
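A minimal sketch of the combined pattern (the bucket, prefix, and --query expression are placeholders, not taken from the original script):

#!/bin/bash
set -e -o pipefail
# With pipefail, a SlowDown error from the aws call aborts the script even
# when the call feeds another command in a pipeline.
aws s3api list-objects --bucket my-bucket --prefix my-prefix \
    --output text --query 'Contents[].Key' | sort >> files.txt
# For a single, non-piped command, an explicit guard is an alternative:
if ! aws s3api list-objects --bucket my-bucket --prefix my-prefix \
        --output text --query 'Contents[].Key' >> files.txt; then
    echo "list-objects failed" >&2
    exit 1
fi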

How to use wait in a MAKEFILE when I use it through NMAKE on Windows

I'm not familiar with makefiles and am trying to figure out how to wait for 2 seconds between destroy and deploy.
It looks like NMAKE has very limited resources on the internet, and the one suggestion I found, sleep 2, throws 'sleep' is not recognized as an internal or external command,
operable program or batch file.
I'm working on Windows, not Linux.
REGISTRY=registry.cengiz.dev
IMAGE=cengiz.geocode.host
TAG=latest
MARATHON=http://mesos.cengiz.dev/v2/apps/geocode
PAYLOAD=Marathon_geocode.json
.PHONY: deploy
push:
	docker push $(REGISTRY)/$(IMAGE):$(TAG)
destroy: push
	curl -X DELETE $(MARATHON)
	echo Waiting
	sleep 2
deploy: destroy
	curl -X PUT -H "Content-Type: application/json" $(MARATHON) -d@$(PAYLOAD)
Try the timeout command:
timeout 3
Note that I intentionally wrote 3 to ensure that two full seconds pass (instead of 2: the current second will pass and then only one more). See more about it here.
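Applied to the makefile above, the destroy target might look like this (a sketch; /t and /nobreak are standard switches of the Windows timeout command, with /nobreak keeping a stray key press from cutting the wait short):

destroy: push
	curl -X DELETE $(MARATHON)
	echo Waiting
	timeout /t 3 /nobreak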

Bash script to run perforce4 on startup with -C1 argument via daemon calls daemon config instead of launching P4 server with -C1 arg

So, I've got this bit of a script (shown below) trying to launch a P4 server from init.d.
When I run this, daemon parses "p4start" and outputs:
daemon --user=perforce -d -C1
Unfortunately, daemon has decided to parse -C1 as one of its own options instead of an argument to pass to p4d, so I get this error:
Invalid --config option argument 1: no such file or directory
Is there a good way to get around this?
Thanks!
p4start="p4d -d -C1"
p4stop="p4 admin stop"
p4user=perforce
case "$1" in
start)
log_action_begin_msg "Starting Perforce Server"
daemon --user=$p4user $p4start;
;;
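One thing worth trying (an assumption, not a confirmed fix for this daemon build) is the common getopt convention of ending option parsing with --, so everything after it is handed to the command untouched:

# Hypothetical: "--" stops daemon from treating -C1 as its own --config
# option; check daemon --help to confirm your version supports the separator.
daemon --user=$p4user -- $p4start;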

Execute bash script from url via cron

I am trying to get a cloud server (built from an image I have saved) to execute a script from a URL upon startup, but the script is not executing properly.
I used one of the answers from Execute bash script from URL to configure a curl script, and am executing that script via the @reboot directive in crontab (Ubuntu 14.04). My setup looks like this:
The script contains these commands:
user@cloud-server-01:~$ cat startup.sh
#! /bin/sh
/usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt | bash /dev/stdin
I call the script via crontab:
user@cloud-server-01:~$ crontab -l
@reboot /home/user/startup.sh > startup.log 2>&1 &
If I manually execute the script from the command line using exactly the same command, it works fine. However, executing by crontab on startup, it seems to hang, and I see the following processes running:
user@cloud-server-01:~$ ps ux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
user 1287 0.0 0.1 4444 632 ? S 19:17 0:00 /bin/sh /home/user/startup.sh
user 1290 0.0 0.7 89536 3536 ? S 19:17 0:00 /usr/bin/curl -s http://192.168.100.59/user/startup.sh.txt
user 1291 0.0 0.2 12632 1196 ? S 19:17 0:00 bash /dev/stdin
Am I missing something obvious about why the cron execution isn't giving me the same results as the command line?
EDIT:
Thanks Olof for the redirect on my troubleshooting. In fact, curl is executing, and if I wait long enough (several minutes) it appears to operate as desired. I suspect the problem is that the network interface and/or URL is not available when curl is initially called, and while it may poll for a connection, it probably backs off its polling interval. So the question now becomes, "How do I check whether I have a connection to this URL before calling curl?"
This is not a bash problem; your curl command is still running so bash is still running, waiting for curl to close the pipe that the bash shell is reading from.
To troubleshoot your curl invocation I would run it first without piping to bash to check that I get the output I expected.
The hint in Olof's answer got me there, but I'm posting the full result here for completeness:
Because of a cloud provider's script which takes 20-40 seconds following reboot, my desired connection IP wasn't available when cron first executed the script. It would either time out or connect only after a significant delay. I have modified my connection script to poll the connection until it is available before calling curl:
#! /bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOST_IP=192.168.100.59

check_online() {
    IS_ONLINE=$(netcat -z -w 5 $HOST_IP 80 && echo 1 || echo 0)
}

# Initial check to see if we're online
check_online

# Loop while we're not online.
while [ $IS_ONLINE -eq 0 ]; do
    # We're offline. Sleep for a bit, then check again
    sleep 5
    check_online
done

# Run remote script
bash <(curl -s http://${HOST_IP}/user/startup.sh.txt)
