Reading lines from a text file with a Bash script - bash

I want to read lines from a text file. Each line looks like this:
$SSH_PRIVATE_FILE="address"
I would like to read and evaluate each line so that it assigns a value to the already defined SSH_PRIVATE_FILE.
The following is the Dockerfile's contents:
ARG SSH_PRIVATE_FILE
COPY build-params build-params
RUN while IFS='' read -r line || [ -n "$line" ]; do\
echo "Text read from file: $line";\
eval `$line`;\
done < "build-params"
RUN echo $SSH_PRIVATE_FILE
UPDATED
But it returns an error: /bin/sh: 1: $SSH_PRIVATE_FILE="~/.ssh/id_rsa": not found

Bourne-type shells have a built-in mechanism to read the contents of a text file and evaluate each line, the . directive. (GNU bash has the same functionality under the name source, but this is not part of the POSIX shell standard and some very-light-weight shells in Docker base images don’t support it.) At a shell level, what you’ve written is equivalent to
. ./build-params
However, each Dockerfile RUN line runs a separate container with a separate shell with a clean shell environment, so this turns out to be a pretty bad way to set environment variables. The Dockerfile ENV directive works better.
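As a minimal sketch of the difference (the base image here is just a placeholder, not part of your setup):
# Placeholder base image, for illustration only
FROM alpine:3.18
# Each RUN line gets its own shell, so this variable does not survive the line
RUN export SSH_PRIVATE_FILE=/ssh/id_rsa
# This prints an empty value
RUN echo "SSH_PRIVATE_FILE is: $SSH_PRIVATE_FILE"
# ENV persists for every later build step and in the final image
ENV SSH_PRIVATE_FILE=/ssh/id_rsa
# This prints /ssh/id_rsa
RUN echo "SSH_PRIVATE_FILE is: $SSH_PRIVATE_FILE"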
Furthermore, since you’re writing the Dockerfile, you have complete control over the filesystem layout inside the Dockerfile, and you don’t really need the locations of things inside the Docker container to be parametrizable. In the case of things like credentials, you’d use the docker run -v option to inject things into the container. If I needed a setting like this, I might make my Dockerfile say
ENV SSH_PRIVATE_FILE=/ssh/id_rsa
and then actually launch the container as
docker run -v $HOME/.ssh:/ssh ...
and not make this a build-time option at all.

Just a wild guess, but I'd try putting a space before every \ at the end of each line.

Related

Use content of file as part of a Bash command

I want to use the content of a file.txt as part of a bash command.
Suppose that the bash command with its options that I want to execute is:
my_command -a first value --b_long_version_option="second value" -c third_value
but the first two options (-a and --b_long_version_option) are very verbose, so instead of inserting them directly on the command line (or in the bash script) I wrote them in a file.txt like this:
-a first value \
--b_long_version_option="second value"
Now I expect to call the command "my_command" with the following syntax (where "path_to/file.txt" is the path to file.txt, expressed in relative or absolute form):
my_command "$(cat path_to/file.txt)" -c third_value
This, however, is not the right syntax, as my command breaks and complains.
How should I write the new version of the command and/or the file.txt so that it is equivalent to its native bash usage?
Thanks in advance!
The quotes are preserving the newlines. Take them off.
You also don't need the cat unless you're running an old bash parser.
my_command $(<path_to/file.txt) -c third_value
You'll need to take the backslashes at the ends of lines out.
Be careful doing things like this, though. It's probably better to just put the whole command in the file, rather than just pieces of it. If you really just want arguments, maybe define them a little more carefully in an array, source the file and then apply them, like this:
in file:
myArgs=( "-a" "first value"
"--b_long_version_option=second value"
)
Note the quoting. Then run with
. file
my_command "${myArgs[#]" -c third_value
e.g.,
$: printf "[%s] " "${myArgs[@]}" -c=foo
[-a] [first value] [--b_long_version_option=second value] [-c=foo]
I haven't seen an example of exactly what you're trying, but there are simpler ways to achieve your goal.
Bash Alias
ll, for example, is a bash alias for ls -al. It is usually defined in .bash_profile or .bashrc as follows:
alias ll='ls -al'
So, what you can do is to set another alias for your shorthand command.
alias mycmd='mycommand -a first value --b_long_version_option="second value"'
Then you can use it as follows:
mycmd -c third_value
Config file
You can also define a mycommand.json or mycommand.ini file for default arguments. Then your software needs to check for the config file and assign arguments from it.
Using a config file is a more advanced solution. You can define multiple config files; for example, you can put a default config file in /etc/mycommand/config.ini and, when running in different directories, check whether a local ${cwd}/mycommand.ini exists. You can even add a --config-file argument to your command.
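As a rough sketch (the file names and the simple KEY=VALUE format here are assumptions, not a standard), reading such a config file from a shell script could look like this:
#!/bin/sh
# Assumed default config location; prefer a local file if one exists
CONFIG=/etc/mycommand/config.ini
[ -f ./mycommand.ini ] && CONFIG=./mycommand.ini

# Read KEY=VALUE pairs, skipping blank lines and # comments
while IFS='=' read -r key value; do
    case "$key" in
        ''|\#*) continue ;;
    esac
    echo "config: $key = $value"
done < "$CONFIG"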
Using an alias is more convenient for small tasks, or for things that won't change much. If your command's behavior should differ from project to project, then a config file is the better solution.

How do you use file lists (.xcfilelist) within Xcode 10 script build phases?

Starting with Xcode 10, build script phases can use file lists (.xcfilelist) for input and output instead of specifying input/output files directly. Those files seem to support comments (the WWDC sample showed comment lines at the top), blank lines (also in the sample), and otherwise expect one file path per line. If these files contain build settings (e.g. $(SRCROOT)), these are expanded prior to calling the script, just as they would have been expanded if the file path had been given directly as an input/output file.
This sounds like a great feature but how would you use these file lists in your actual script?
When specifying the files directly, you had the shell variables SCRIPT_INPUT_FILE_COUNT and SCRIPT_OUTPUT_FILE_COUNT and then one variable for each input/output file, named SCRIPT_INPUT_FILE_# and SCRIPT_OUTPUT_FILE_#, where # is an up-counting number. Assuming that you have an equal number of input and output files, this script would print them all:
#!/bin/sh
: $((i=0))
while [ $i -lt "$SCRIPT_INPUT_FILE_COUNT" ]
do
    eval fileIn="\$SCRIPT_INPUT_FILE_${i}"
    eval fileOut="\$SCRIPT_OUTPUT_FILE_${i}"
    echo "$fileIn --> $fileOut"
    : $((i=i+1))
done
This is a clean, POSIX-compatible shell script. Yes, you could make it even nicer by requiring bash, but the code above should work with every sh-compatible shell (which is what it promises by using #!/bin/sh rather than #!/bin/bash).
But when using file lists, SCRIPT_INPUT_FILE_COUNT is 0. Instead you get SCRIPT_INPUT_FILE_LIST_COUNT and SCRIPT_OUTPUT_FILE_LIST_COUNT, and the variables SCRIPT_INPUT_FILE_LIST_# and SCRIPT_OUTPUT_FILE_LIST_#, containing the paths to the pre-processed file lists, where all comments and blank lines have been stripped and all build settings have already been expanded.
Now, how would I go about using these file lists in my script? How would the tiny sample script above produce the same output using file lists in Xcode? I'm not really good at shell scripting and I'm looking for a clean solution that doesn't require any other script interpreter but sh.
This will dynamically construct the SCRIPT_INPUT_FILE_LIST_0, SCRIPT_INPUT_FILE_LIST_1, etc. values and access them from the environment vars passed to the script by Xcode. Swap out the echo "${file_path}" line if you want to do something other than printing each of the lines from the xcfilelist(s).
#!/usr/bin/env bash
for index in $(seq "$SCRIPT_INPUT_FILE_LIST_COUNT"); do
    # 1 => `SCRIPT_INPUT_FILE_LIST_0`
    filelist=SCRIPT_INPUT_FILE_LIST_$((index-1))
    # `SCRIPT_INPUT_FILE_LIST_0` => value in $SCRIPT_INPUT_FILE_LIST_0
    filelist_path=${!filelist}
    while read -r file_path; do
        echo "${file_path}"
    done < "${filelist_path}"
done
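If you also need the matching output paths, and assuming (as the original script does) that the first input list and the first output list have the same number of lines, you could read the two pre-processed lists in parallel on separate file descriptors. A sketch, still in plain sh:
#!/bin/sh
in_list=$SCRIPT_INPUT_FILE_LIST_0
out_list=$SCRIPT_OUTPUT_FILE_LIST_0
# Read one line from each list per iteration, via file descriptors 3 and 4
while read -r fileIn <&3 && read -r fileOut <&4; do
    echo "$fileIn --> $fileOut"
done 3< "$in_list" 4< "$out_list"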
I'm not aware of a way to get access to the file list itself inside the shell script. However, the idea of a file list is that you ideally have one no matter how many files there are. So I usually hardcode it to the same value I gave Xcode. It's a bit duplicated, but not a whole lot:
set -e
while read file; do
    EXPANDED=`eval echo "$file"`
    echo "do something with $EXPANDED"
done < "${SRCROOT}/path/to/files.xcfilelist"
As a side effect of this use of read, we strip whitespace and do some other light processing. If you are particular about how you want this to happen, see this SO answer.
I believe most (all?) build settings are exported into the script as environment variables. So by evaluating them with eval here we expand them.
Use of eval opens up the possibility that a malicious file list can execute code. Then again it's probably located in the same place as the build script you're executing so I'm not sure it's a very practical problem. Other shells have more secure ways of going about this, but I'm not aware of any for vanilla sh and default macOS.
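If the only build setting that appears in your list is $(SRCROOT), one way to avoid eval entirely is to substitute just that setting explicitly. This is a sketch under that assumption (it also assumes $SRCROOT contains no | character, since | is used as the sed delimiter):
set -e
while read file; do
    # Expand only $(SRCROOT); any other text is left untouched
    EXPANDED=$(printf '%s\n' "$file" | sed "s|\$(SRCROOT)|$SRCROOT|g")
    echo "do something with $EXPANDED"
done < "${SRCROOT}/path/to/files.xcfilelist"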

Why is bash using single and double quotes literally?

I have a situation in Bash I've never encountered before and don't know how to resolve. I installed bash on Alpine Linux (Docker Container) and for some reason environment variables with quotes translate literally.
MY_PATH="/home/my/path"
> cd $MY_PATH
Result
bash: cd: "/home/my/path": No such file or directory
> echo $MY_PATH
Result
"/home/my/path"
Now if you try it without quotes it works
MY_PATH=/home/my/path
> cd $MY_PATH
Result
bash-4.4# (path changed)
> echo $MY_PATH
Result
/home/my/path
I've never seen this before, as I expect bash to gobble up the outer quotes; I'm not even sure what to search for to resolve this.
To fully qualify the scenario let me point out that:
Using Docker with an Alpine (3.8) image
Installing Bash 4 on Alpine, which usually defaults to the ash shell
Update
This is starting to look like a Docker issue. I'm using env_file in Docker Compose to push environment variables to a container, and it looks like it's literally copying the quotes (" becomes \").
Thanks to @bishop's comment suggesting od -x.
container.env
#!/usr/bin/env bash
MY_PATH="/home/my/path"
Then inside the Alpine 3.8 container, running env shows:
MY_PATH="/home/my/path"
Update 2
Looks like there was a bug around this that was closed, but it apparently isn't fixed. Is it because I'm the only one in the universe still using Docker Toolbox?
https://docs.docker.com/compose/env-file/
These syntax rules apply to the .env file:
Compose expects each line in an env file to be in VAR=VAL format.
Lines beginning with # are processed as comments and ignored.
Blank lines are ignored.
There is no special handling of quotation marks. This means that they are part of the VAL.
In particular, the env file is not a shell script and not seen by bash (your #!/usr/bin/env bash line is treated as a comment and ignored).
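So the fix is to make container.env a plain list of VAR=VAL lines, with no shebang and no quotes, for example:
# container.env
MY_PATH=/home/my/path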

Properly formatting a crontab entry for a bash script

I'm having serious issues trying to get the proper format for my bash script to run successfully from crontab. The script runs successfully when invoked manually from the command line.
Here is the bash script in question (the actual parameters themselves [$1 & $2] have been manually placed in the script):
#!/bin/bash
# Usage: ./s3DeleteByDateVirginia "bucketname" "file type"
past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket $1 --query 'Contents[?LastModified<=`'$past'`][].{Key:Key}' | grep $2 | while read -r line
do
    fileName=`echo $line`
    aws s3api delete-object --bucket $1 --key "$fileName"
done;
The script is in this bash file: /home/ubuntu/s3DeleteByDateVirginiaSoco1
To set up the script I use: sudo crontab -e
Now I see people online saying you need to give it the proper PATH, which doesn't make much sense to me, especially because I'm seeing a number of variations of it online. It generally looks like SHELL=/bin/sh and PATH=/bin:/sbin:/usr/bin:/usr/sbin, but I don't know where to put it.
According to the syslog, the cron part is working, but the script itself doesn't execute:
In addition to this the script has all of the proper permissions to run.
All in all, I'm more confused than when I started, and I'm not finding much documentation on how crontab works.
Crontab in question:
Additional Edits based on user's suggestions:
Here's my polished script:
Here's the crontab line:
# m h dom mon dow command
PATH=/usr/local/bin:/usr/bin:/bin:/root/.local/bin/aws
33 20 * * * /home/ubuntu/s3DeleteByDateSoco1
Updated syslog:
OK, I see several problems here. First, you need to put this in the crontab file for the user you want the script to run as. If you want to run it under your own user account, use plain crontab -e instead of sudo crontab -e (with sudo, you edit the root user's crontab file).
Second, you need to use the correct path & name for the script; it looks like it's /home/ubuntu/s3DeleteByDateVirginiaSoco1, so that's what should be in the crontab entry. Don't add ".sh" if it's not actually part of the filename. It also looks like you tried adding "root" in front of the path; don't do that either, since crontab will try to execute "root" as a command, and it'll fail. bash -c doesn't hurt, but it doesn't help at all either, so don't use it.
Third, the PATH needs to be set appropriately for the executables you use in the script. By default, cron jobs execute with a PATH of just "/usr/bin:/bin", so when you use a command like aws, it'll look for it as /usr/bin/aws, not find it, look for it as /bin/aws, not find it, and give the error "aws: command not found" that you see in the last log entry. First, you need to find out where aws (and any other programs your script depends on) are; you can use which aws in your regular shell to find this out. Suppose it's /usr/local/bin/aws. Then you can either:
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to the crontab file, before the line that says to run your script.
Add a line like PATH=/usr/local/bin:/usr/bin:/bin (with maybe any other directories you think are appropriate) to your script file, before the lines that use aws.
In your script, use an explicit path every time you want to run aws (something like /usr/local/bin/aws s3api list-objects ...)
You can use any (or all) of the above, but you must use at least one or it won't be able to find the aws command (or anything else that isn't in the set of core commands that come with the OS).
Fourth, I don't see where $1 and $2 are supplied. You say they've been manually placed in the script, but I don't know what you mean by that. Since the script expects them as parameters, you need to specify them in the crontab file (i.e. the command in crontab should be something like /home/ubuntu/s3DeleteByDateVirginiaSoco1 bucketname pattern).
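Putting the second, third, and fourth points together, the crontab entry might look something like this (the bucket name and file pattern are placeholders, not your real values):
PATH=/usr/local/bin:/usr/bin:/bin
33 20 * * * /home/ubuntu/s3DeleteByDateVirginiaSoco1 my-bucket-name .log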
Fifth, the script itself doesn't follow good quoting conventions. In general, all variable references should be in double-quotes. For example, use grep "$2" instead of grep $2. Without the double-quotes, variables that contain spaces or certain shell metacharacters can cause weird parsing problems.
Finally, why do you do fileName=`echo $line`? This mostly just copies the value of $line into the variable fileName, but it can have those weird parsing problems I mentioned in the last point. If you want to copy a variable reliably, just use fileName="$line" (or fileName=$line -- this is one of the few cases where it's safe to leave the double-quotes off).
BTW, shellcheck.net is good at spotting common problems like bad quoting; I recommend running your scripts through it to see what it finds.
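For reference, here is a sketch of the script with the quoting and PATH advice applied (it assumes aws lives in /usr/local/bin; adjust to whatever which aws reports on your system):
#!/bin/bash
# Usage: ./s3DeleteByDateVirginiaSoco1 "bucketname" "file type"
# Make sure aws and friends can be found when run from cron
PATH=/usr/local/bin:/usr/bin:/bin

past=$(date +"%F" -d "60 days ago")
aws s3api list-objects --bucket "$1" \
    --query 'Contents[?LastModified<=`'"$past"'`][].{Key:Key}' |
  grep "$2" | while read -r line; do
    fileName="$line"
    aws s3api delete-object --bucket "$1" --key "$fileName"
done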

Why does this script work in the current directory but fail when placed in the path?

I wish to replace my failing memory with a very small shell script.
#!/bin/sh
if ! [ -a $1.sav ]; then
    mv $1 $1.sav
    cp $1.sav $1
fi
nano $1
is intended to save the original version of a script. If the original has been preserved before, it skips the move-and-copy-back (and I use move-and-copy-back to preserve the original timestamp).
This works as intended if, after I make it executable with chmod I launch it from within the directory where I am editing, e.g. with
./safe.sh filename
However, when I move it into /usr/bin and then I try to run it in a different directory (without the leading ./) it fails with:
-bash: /usr/bin/safe.sh: /bin/sh: bad interpreter: Text file busy
My question is, when I move this script into the path (verified by echo $PATH) why does it then fail?
D'oh? Inquiring minds want to know how to make this work.
The . command is not normally used to run standalone scripts, and that seems to be what is confusing you. . is more typically used interactively to add new bindings to your environment (e.g. defining shell functions). It is also used to similar effect within scripts (e.g. to load a script "library").
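For example, a typical use of . is loading a small function library into the current shell (the file and function names here are made up):
# lib.sh -- a tiny "library" of shell functions
greet() {
    echo "hello, $1"
}
and then, in a script or interactively:
#!/bin/sh
. ./lib.sh
greet world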
Once you mark the script executable (per the comments on your question), you should be able to run it equally well from the current directory (e.g. ./safe.sh filename) or from wherever it is in the path (e.g. safe.sh filename).
You may want to remove .sh from the name, to fit with the usual conventions of command names.
BTW: I note that you mistakenly capitalize If in the script.
The error bad interpreter: Text file busy occurs if the script is open for write (see this SE question and this SF question). Make sure you don't have it open (e.g. in an editor) when attempting to run it.
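One quick way to check, assuming lsof is available, is to list any process that still has the script open:
lsof /usr/bin/safe.sh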
