getting variable name in file name for bash [duplicate]

This question already has answers here:
When do we need curly braces around shell variables?
(7 answers)
Closed 4 years ago.
I want to rename my file from file.txt to file_4i.txt or file_5i.txt, according to the number I need, but when I use either of the commands below, the file name comes out as file_.txt and the value of m never appears. I wanted file_4i.txt, but $mi does not work either.
sudo sh -c "m=4 ; mv file.txt file_$mi.txt"
sudo sh -c "m=4 ; mv file.txt file_$m.txt"

Use single quotes so the variable doesn't expand early, and use {} so mi isn't interpreted as the variable name:
sudo sh -c 'm=4 ; mv file.txt file_${m}i.txt'
sudo sh -c 'm=4 ; mv file.txt file_$m.txt'
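To see why the braces matter, a quick demonstration (a hypothetical session; mi is deliberately left unset):
m=4
echo "file_$mi.txt"    # bash looks up a variable named "mi", which is unset -> file_.txt
echo "file_${m}i.txt"  # the braces end the variable name at "m" -> file_4i.txt
Note that in the original commands the double quotes also caused the expansion to happen in your interactive shell, where m was never set, which is why both attempts printed file_.txt.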

bash variable expansion in for loop sub-command [duplicate]

This question already has answers here:
How do I use variables in single quoted strings?
(8 answers)
Closed 4 years ago.
Trying to expand a for loop variable here does not succeed. I am trying to use the $i variable in the jsonpath expression in the for loop below:
for i in {0..9}; do
    echo $i
    kubectl exec -i -t "$(kubectl get pod -l "app=mdm-shard" -o jsonpath='{.items[{$i}].metadata.name}')" -- cat /proc/net/udp
done
I get:
0
error: error parsing jsonpath {.items[{$i}].metadata.name}, invalid array index {$i}
error: pod name must be specified
I tried a lot of combinations but can't find the one that is going to expand $i inside the query.
My bash version:
GNU bash, version 4.4.19(1)-release (x86_64-pc-linux-gnu)
Thank you Benjamin - yes, this worked:
for i in {0..9}; do
    echo $i
    kubectl exec -i -t "$(kubectl get pod -l "app=mdm-shard" -o jsonpath="{.items[$i].metadata.name}")" -- cat /proc/net/udp
done
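The single quotes were the culprit: they suppress parameter expansion entirely, so kubectl received the literal text {$i} as the array index. Double quotes let the shell substitute $i before kubectl parses the jsonpath. If you prefer to keep the jsonpath itself single-quoted, you can also splice the variable in from outside the quotes, e.g.:
kubectl exec -i -t "$(kubectl get pod -l "app=mdm-shard" -o jsonpath='{.items['"$i"'].metadata.name}')" -- cat /proc/net/udp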

Bash "newest directory" working differently from CL vs from Script [duplicate]

This question already has an answer here:
How do I expand commands within a bash alias?
(1 answer)
Closed 4 years ago.
I've been using cd "$(\ls -1dt ./*/ | head -n 1)" in some scripts to get into a new directory after creating it. I decided to put an alias in my bash_profile:
alias newest="cd $(\ls -1dt ./*/ | head -n 1)"
But when I run newest from the command line, it goes to a different directory, which happens to be the first one alphabetically, though I don't know whether that's why it's being chosen.
Pasting cd "$(\ls -1dt ./*/ | head -n 1)" directly into the command line works correctly. What's going on here?
Don't use ls -t in scripts at all -- see ParsingLs on why it's unreliable, and BashFAQ #3 on what to do instead. But ignoring that, the smallest fix for the immediate, narrow issue is to use a function:
newest() { cd "$(command ls -1dt ./*/ | head -n 1)"; }
Your alias ran its command substitution at definition time, not at invocation time. If you really want it to stay an alias, you could use single quotes on the outside to prevent that command substitution from happening early:
alias newest='cd "$(\ls -1dt ./*/ | head -n 1)"'
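The timing difference is easy to see with a throwaway alias (a hypothetical session; now_bad and now_good are made-up names):
alias now_bad="echo $(date)"    # $(date) runs right now, at definition time
alias now_good='echo $(date)'   # the text is stored literally; $(date) runs on every use
now_bad will print the same frozen timestamp forever, while now_good prints the current time on each call.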
What would a reliable, best-practice approach look like? Perhaps:
cdNewest() {
    local latest='' candidate
    set -- */
    [[ -d $1 ]] || return  # no directories exist, so the glob did not expand
    latest=$1; shift
    for candidate; do
        [[ $candidate -nt $latest ]] && latest=$candidate
    done
    cd -- "$latest"
}
...which, instead of running two external commands (ls and head), runs none at all, and also avoids command substitutions and pipelines altogether, both of which carry significant overhead.
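To use it, put the function in your bash_profile (or bashrc) and call it like any other command:
cdNewest    # changes into the most recently modified subdirectory of the current directory
The [[ $candidate -nt $latest ]] test compares modification times directly in the shell, which is what makes the external ls unnecessary.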

Bash script sudo and variables [duplicate]

This question already has an answer here:
Bash foreach loop works differently when executed from .sh file [duplicate]
(1 answer)
Closed 4 years ago.
Totally new to Bash here; I've avoided it like the plague for 10 years. Today there is no way around it. After a few hours of beating my head against the keyboard, I discovered that with sudo, the expansion in my command gets stripped out.
So I have something like
somescript.sh
for i in {1..5}
do
    filename=somefilenumber"$i".txt
    echo $filename
done
Now if I run it on the command line,
user@deb:~$ ./somescript.sh
I get the expected
somefilenumber1.txt
somefilenumber2.txt
somefilenumber3.txt
somefilenumber4.txt
somefilenumber5.txt
but if I run it with sudo, like
user@deb:~$ sudo ./somescript.sh
I'll get this
somefilenumber{1..5}.txt
This is a huge problem because I'm trying to cp files and rm files in a loop with the variable.
So here is the code with cp and rm
for i in {1..10}
do
    filename=somefilenumber"$i".txt
    echo $filename
    cp "$filename" "someotherfilename.txt"
    rm "$filename"
done
I end up getting
cp: cannot stat 'somefilenumber{1..5}.txt': No such file or directory
rm: cannot remove 'somefilenumber{1..5}.txt': No such file or directory
I need to run sudo also because of other programs that require it.
Is there any way around this?
Even if nothing else required sudo and I didn't use it, the rm command would prompt me for every single file, asking whether I'm sure I want to remove it. The whole point is not to be sitting here tied to the computer while it runs through hundreds of files.
You could try to replace {1..10} with seq 1 10:
for i in $(seq 1 10)
do
    filename=somefilenumber"$i".txt
    echo $filename
    cp "$filename" "someotherfilename.txt"
    rm "$filename"
done
Your problem sounds like something is wrong in the environment for root. Do you start the script with
#!/bin/bash
as its first line?
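That shebang line is very likely the whole story (an inference from the symptoms, assuming the script currently has no #!/bin/bash line): run as ./somescript.sh from an interactive bash, a shebang-less script is interpreted by bash itself, so {1..5} expands; run under sudo, the fallback interpreter is /bin/sh, which on Debian is dash and does not support brace expansion, so the literal somefilenumber{1..5}.txt comes through. With the shebang in place, both invocations behave the same:
#!/bin/bash
# Runs under bash whether invoked directly or via sudo,
# so brace expansion produces somefilenumber1.txt ... somefilenumber5.txt.
for i in {1..5}
do
    filename=somefilenumber"$i".txt
    echo "$filename"
done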

Bash while stops at first iteration [duplicate]

This question already has answers here:
Capturing output of find . -print0 into a bash array
(13 answers)
Closed 7 years ago.
I am currently writing a bash script that checks some data. What I have so far is:
#!/bin/bash
find "./" -mindepth 1 -maxdepth 1 -type d -print0 | while IFS= read -r -d '' file; do
folder=${file##*/}
echo "Checking ${folder} for sanity..."
./makeconfig ${folder} | while read -r line; do
title=`echo $line | awk -F' ' '{print $2}'`
echo $title
done
done
Now what it currently does is: search every directory in ./ and extract the folder's name (thus removing the ./ from the result of find), then hand it to a self-written tool, which outputs lines like this:
-t 1 -a 2
-t 3 -a 5
-t 7 -a 7
-t 9 -a 8
from which I gather the value behind -t via awk. This also works so far; the problem is that the outer while loop stops after the first iteration, thus checking only one folder. My guess is that the read commands of the inner and outer loops are colliding somehow. The tool makeconfig definitely always returns 0 (no error). I tried to debug it using sh -x script.sh, but it does not show me anything I can work with.
Can someone point me in the right direction as to what is going wrong here? If you need any further information, I can provide it. I've written a quick script that mimics my tool (just echoing some stuff) if you want to test the bash script; just make it executable via chmod +x:
echo "-t 3 -a 4"
echo "-t 6 -a 1"
echo "-t 9 -a 5"
Just put it next to the script in a folder and create some subfolders; that should be enough to make it work (as much as it does).
Thanks in advance!
EDIT: This is NOT a duplicate as claimed. The problem here is more the nested read commands than the -print0 (maybe that also has something to do with it, but not entirely).
A correction first: IFS= does not set the field separator to the null byte (\0); it sets IFS to the empty string, which disables word splitting. The NUL delimiter actually comes from read's -d '' option, so that part of the idiom is fine. If your directory names are guaranteed sane, dropping -print0 and the -d '' makes the loop easier to work with in bash, at the cost of robustness. Two other alternatives:
use xargs -0 to run a small shell script on each item found, with that item being the sole argument
use find's -exec to run the shell script on each item, as sketched below.
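A minimal sketch of the -exec variant (assuming makeconfig sits in the directory the command is started from; adjust the path if not):
find ./ -mindepth 1 -maxdepth 1 -type d -exec sh -c '
    for dir in "$@"; do
        folder=${dir##*/}
        echo "Checking $folder for sanity..."
        # makeconfig prints lines like "-t 1 -a 2"; the second field is the -t value
        ./makeconfig "$folder" | while read -r _ tval _; do
            echo "$tval"
        done
    done
' sh {} +
Because the directory names arrive as arguments rather than through stdin, there is no outer read at all, so nothing can collide with the inner loop.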

do command 2 when command 1 fails in bash [duplicate]

This question already has answers here:
How can I "try to do something and then detect if it fails" in bash?
(3 answers)
Closed 8 years ago.
I run this as part of an 'unlock' bash script, but it fails on the first command:
# Variables
CHUNK="/media/backup/obnam-home"
BIGNUM="17580577608458113855"
LOGTO="/home/boudiccas/logs/unlock.txt"
####################################################
sudo rm $CHUNK/chunklist/lock; sudo rm $CHUNK/$BIGNUM/lock; sudo rm $CHUNK/chunksums/lock; sudo rm $CHUNK/chunks/lock>>'$(date -R)' $LOGTO
How can I get it to continue onto the second, and further commands, even if 'x' command fails?
I think this is what you want:
# Variables
CHUNK="/media/backup/obnam-home"
BIGNUM="17580577608458113855"
LOGTO="/home/boudiccas/logs/unlock-$(date -R).txt"
####################################################
{
    sudo rm "$CHUNK/chunklist/lock"
    sudo rm "$CHUNK/$BIGNUM/lock"
    sudo rm "$CHUNK/chunksums/lock"
    sudo rm "$CHUNK/chunks/lock"
} 2>> "$LOGTO"
Each of the four rm commands will run, regardless of which ones succeed and which fail. Any error messages from all four are redirected (2>>, not >>) to the named file. I'm assuming you want the current timestamp in the file name, so I moved the call to date into the definition of LOGTO; note that $LOGTO must then be quoted at the redirection, because the output of date -R contains spaces.
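And for the literal question in the title, running command 2 only when command 1 fails is exactly what the shell's || operator does, e.g.:
sudo rm "$CHUNK/chunklist/lock" || echo "$(date -R): failed to remove chunklist lock" >> "$LOGTO"
The right-hand side runs only when the left-hand side exits nonzero.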
