Using `test` to check the existence of a variable - bootloader

I am having some trouble using the test command in the U-Boot that shipped with the BeagleBone Black. I do not believe this is a BeagleBone-specific issue.
if test -n $uenvcmd;
then
echo Running uenvcmd ...;
run uenvcmd;
fi;
In the above case I see:
test - minimal test like /bin/sh
Usage:
test [args..]
The variable $uenvcmd contains a script. However, if the content of the variable is anything other than a script, things work fine. Am I missing something? Is there a way around this problem?
For Carl Norum's Comment:
U-Boot# test -n $uenvcmd
U-Boot# test -n $bootcmd
test - minimal test like /bin/sh
Usage:
test [args..]
U-Boot# test -n "$bootcmd"
test - minimal test like /bin/sh
Usage:
test [args..]
U-Boot#
@CarlNorum - Added the output. test -n $uenvcmd seems to work now; I have changed the content of the variable since then. It looks like the problem is content dependent: $bootcmd contains a long script and it does not work.
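For what it's worth, the usage message usually means test received arguments it could not parse: U-Boot's hush shell splits an unquoted multi-word variable such as $bootcmd into many words before test ever sees it. A hedged workaround, assuming a U-Boot build recent enough to provide the env exists subcommand (not all BeagleBone builds have it), is to test for the variable's existence directly instead of testing its value:
if env exists uenvcmd; then
echo Running uenvcmd ...;
run uenvcmd;
fi;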

Related

Dot (source) command doesn't work in script, but works in terminal

Script file simplified for experimenting:
#!/bin/sh
if test -f /home/vl/docker-test/envvars; then . /home/vl/docker-test/envvars; fi
envvars file content:
export APACHE_RUN_USER=www-data
Nothing happens after running the script: no output, no error.
Checking whether env contains the variable from envvars - no, it doesn't:
$ env | grep -i apache
output is empty.
But:
$ if test -f /home/vl/docker-test/envvars; then . /home/vl/docker-test/envvars; fi
$ env | grep -i apache
APACHE_RUN_USER=www-data
What am I doing wrong in my script?
. applies within the current running environment. In the first case, that's the script (and goes away when the script is done). In the second case, it's the shell. If you want the script to influence the shell it runs in, then you need to . it into the shell, not run it as a script.
So if your script is bring-in-vars, you are currently doing something like this (running it as a child):
./bring-in-vars
And you need to be doing this (sourcing it into the current shell):
. ./bring-in-vars
Children cannot, by design, modify their parents.
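A quick illustration using the paths from the question (the script filename here is assumed):
# run as a child process: the exported variable dies with the script
sh /home/vl/docker-test/test.sh
env | grep -i apache        # prints nothing
# source it into the current shell instead: the variable persists
. /home/vl/docker-test/test.sh
env | grep -i apache        # APACHE_RUN_USER=www-data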

Bash script runs but main commands produce no output and are not executed

I'm setting up a cron job that runs a bash script containing the below:
#!/bin/bash
NUM_CONTAINERS=$(docker ps -q | wc -l)
if [ $NUM_CONTAINERS -lt 40 ]
then
echo "Time: $(date). Restart containers."
cd /opt
pwd
sudo docker kill $(docker ps -q)
docker-compose up -d
echo "Completed."
else
echo Nothing to do
fi
The output is appended to a log file:
>> cron.log
However the output in the cron file only shows:
Time: Sun Aug 15 10:50:01 UTC 2021. Restart containers.
/opt
Completed.
Both commands do not seem to execute, as I don't see any change in my containers either.
These two non-working commands work fine in a standalone .sh script without the condition, though.
What am I doing wrong?
The user running the cron job has sudo privileges, and we can see the second echo printing.
Lots of times, things that work outside of cron don't work within cron because the environment is not set up in the same way.
You should generally capture standard output and error, to see if something is going wrong.
For example, use >> cron.log 2>&1 in your crontab file, this will capture both.
There's at least the possibility that docker is not in your path or, even if it is, the docker commands are not working for some other reason (that you're not seeing since you only capture standard output).
Capturing standard error should help out with that, if it is indeed the issue.
As an aside, I tend to use full path names inside cron scripts, or set up very limited environments at the start to ensure everything works correctly (once I've established why it's not working correctly).
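Concretely, a crontab entry along these lines (the script path and schedule are made up for illustration) captures both streams:
*/5 * * * * /opt/restart-containers.sh >> /home/user/cron.log 2>&1
And pinning PATH at the top of the script removes the most common cron surprise, docker and docker-compose not being found:
#!/bin/bash
# cron runs with a minimal environment; spell out where docker and docker-compose live
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin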

Nagios custom plugin calling python Openstack Swift client

I want to check with Nagios whether my server can connect to an OpenStack Swift container. I wrote a simple script that uses the Swift Python client to get the stat of the container.
The script looks like this:
#!/bin/bash
set -e
STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
STATE_DEPENDENT=4
if ! which /usr/bin/swift >/dev/null 2>&1
then
echo "Swift command not found"
exit $STATE_UNKNOWN
fi
my_swift="/usr/bin/swift -V 2.0 -A http://my-swift-domain.com:5000/v2.0/ --insecure --os-username my-user-name --os-password my-password --os-tenant-name tenant-name stat container"
output=`$my_swift | grep Objects | sed 's/Objects:\s*\([0-9]*\).*/\1/'`
if [ "$output" -eq "$output" ] 2>/dev/null
then
echo "successfully connected to swift. Number of objects in container $output";
exit $STATE_OK
else
echo "Number of container objects is not correct";
exit $STATE_CRITICAL
fi
The script has the right permissions and Nagios is able to run it properly. The script itself, called from bash, works and returns something like:
successfully connected to swift. Number of objects in container 4973123
But it doesn't work when I run it via NRPE. I checked by running /usr/lib64/nagios/plugins/check_nrpe -H 127.0.0.1 -c check_swift
I just get: Number of container objects is not correct
After debugging I'm pretty sure that the command
output=`$my_swift | grep Objects | sed 's/Objects:\s*\([0-9]*\).*/\1/'`
is not even called.
I tried putting swift --version there just to see if it would give me some output, and it does. That made me think there is something wrong with the parameters, but I really don't know what, because the command itself works perfectly fine when called from a shell.
Any help appreciated :)
Try changing the first line to this:
#!/usr/bin/env bash
Turns out it was SELinux (on CentOS) blocking execution of the command because of a wrong file context: I had copied the file from my home directory to Nagios' plugins directory.
restorecon check_swift_container -v helped
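For anyone hitting the same thing, a quick way to inspect and fix the context (the plugin path below is assumed from the question):
# show the SELinux context the copied file ended up with
ls -Z /usr/lib64/nagios/plugins/check_swift_container
# reset it to the default context for files in that directory
restorecon -v /usr/lib64/nagios/plugins/check_swift_container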

How do I get etcd values into my systemd service on CoreOS?

I have two services A and B.
A sets a value in etcd as it's being started, say the public IP address which it gets from an environment file:
ExecStartPost=/usr/bin/etcdctl set /A_ADDR $COREOS_PUBLIC_IPV4
B needs that value as it starts up, as well as its own IP address. So something like this would be nice:
ExecStart=/usr/bin/docker run -e MY_ADDR=$COREOS_PUBLIC_IPV4 -e A_ADDR=$ETCD_A_ADDR mikedewar/B
but that's obviously not possible as etcd variables don't present as systemd environment variables like that. Instead I can do some sort of /usr/bin/bash -c 'run stuff' in my ExecStart but it's awkward especially as I need systemd to expand $COREOS_PUBLIC_IPV4 and my new bash shell to expand $(etcdctl get /A_ADDR). It also reeks of code smell and makes me think I'm missing something important.
Can someone tell me the "right" way of getting values from etcd into my ExecStart declaration?
-- update
So I'm up and running with
ExecStart=/usr/bin/bash -c 'source /etc/environment && /usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
but it's pretty ugly. Still can't believe I'm not missing something..
I was struggling with the same thing until recently. After reading much of the CoreOS and systemd documentation, here is a slightly 'cleaner' version of what you're doing:
[Service]
EnvironmentFile=/etc/environment
ExecStart=/bin/sh -c '/usr/bin/docker run -e A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) -e MY_ADDR=$COREOS_PUBLIC_IPV4 mikedewar/B'
Additionally, I have adopted a pattern where my services depend on a systemd 'oneshot' service that computes some value and writes it into /etc/environment (see the sketch below). This allows you to keep more complex shell scripting out of the main service unit and place it in its own oneshot unit.
Here are the docs for EnvironmentFile: http://www.freedesktop.org/software/systemd/man/systemd.exec.html#EnvironmentFile=
Finally, a quick gotcha: you must use a shell invocation if you rely on shell features such as $(command) substitution in your ExecStart/Stop commands. systemd does not invoke a shell when executing the command you provide, so command substitutions will not be expanded (only simple ${VAR} references from the unit's environment are).
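A minimal sketch of the oneshot pattern mentioned above (the unit names and the etcd key are made up for illustration):
# a-env.service - runs once and persists its result for other units
[Unit]
Description=Write A_ADDR from etcd into /etc/environment
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/bin/sh -c 'echo ETCD_A_ADDR=$(/usr/bin/etcdctl get /A_ADDR) >> /etc/environment'

# b.service - the consuming unit
[Unit]
Requires=a-env.service
After=a-env.service
[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/docker run -e A_ADDR=${ETCD_A_ADDR} -e MY_ADDR=${COREOS_PUBLIC_IPV4} mikedewar/B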
I am currently using the following workaround:
I've created a script which extracts data from a particular etcd directory:
#!/bin/bash
# print a ' -e KEY=value' flag for every key stored under /my_dir
for entry in $(etcdctl ls /my_dir --recursive); do
echo ' -e '$(grep -o '[^/]*$' <<< "${entry}")=$(etcdctl get "${entry}")
done
Its output looks like the following:
-e DATABASE_URL=postgres://m:m#mi.cf.us-.rds.amazonaws.com:5432/m
-e WEB_CONCURRENCY=4
So then in my unit file I can place it like this:
/bin/sh -c '/usr/bin/docker run -p 9000:9000 $(/home/core/envs.sh) me/myapp -D FOREGROUND'
It's not the most elegant way, and I'd love to know how to improve it, but placing that for loop as a one-liner requires lots of escaping.
Can your container read directly from etcd as it starts, over the docker0 bridge IP, instead of having the values passed in? That would also allow you to do more complex logic on the response, parse JSON if that is what you store as the etcd value, and so on.
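For instance, something along these lines from the container's entrypoint (this assumes the old default docker0 bridge address 172.17.42.1 and etcd's legacy v2 client port 4001; adjust both for your setup):
# fetch /A_ADDR from the host's etcd over the bridge and pull the value out of the JSON response
A_ADDR=$(curl -s http://172.17.42.1:4001/v2/keys/A_ADDR | sed 's/.*"value":"\([^"]*\)".*/\1/')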

Simple Bash Script Error and Advice - Saving Environment Variables in Linux

I am working on a project that is hosted on Heroku. The app is hard coded to use Amazon S3 and looks for the keys in environment variables. This is what I wrote after looking at some examples, and I am not sure why it's not working.
echo $1
if [ "$1" != "unset" ]; then
echo "set"
export AMAZON_ACCESS_KEY_ID=XXXXXXXXXXXX
export AMAZON_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
export S3_BUCKET_NAME=XXXXXXXXX
else
echo "unset"
export AMAZON_ACCESS_KEY_ID=''
export AMAZON_SECRET_ACCESS_KEY=''
export S3_BUCKET_NAME=''
fi
When running the script it goes into the set branch, but when I inspect the variable afterwards, echo $AMAZON_ACCESS_KEY_ID prints an empty string.
I am not sure what is causing the issue. I would be interested in:
A fix for this.
A way to extract Heroku config variables and add them to the environment in an easier way.
You need to source the script, not run it as a child. If you run the script directly, its environment disappears when it ends. Sourcing the script causes it to be executed in the current environment. See help source for more information.
Example:
$ VAR=old_value
$ cat script.sh
#!/bin/bash
export VAR=new_value
$ ./script.sh
$ echo $VAR
old_value
$ source script.sh
$ echo $VAR
new_value
Scripts executed with source don't need to be executable nor do they need the "shebang" line (#!/bin/bash) because they are not run as separate processes. In fact, it is probably a good idea to not make them executable in order to avoid them being run as commands, since that won't work as expected.
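As for the second point, pulling Heroku config variables into the local environment more easily: one hedged option, assuming the Heroku CLI is installed and you are logged in (the app name below is a placeholder), is to let heroku config emit shell-style assignments and evaluate them in the current shell:
# heroku config -s prints KEY=value lines; prefix each with 'export' and evaluate them here
eval "$(heroku config -s --app your-app-name | sed 's/^/export /')"
Values containing spaces or shell metacharacters may need extra quoting, so treat this as a starting point rather than a hardened solution.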
