Create directories with incrementing variables as part of directory name using Bash within for loop

I am trying to write a bash script which allows me to automate the creation of multiple directories with incrementing names.
For example, I am trying to create the directories named v0v1, v1v2, ... , v40v41.
I have tried using a 'for' loop, where I create a variable and set this to be equal to the current value of i+1 (where 'i' is the current loop iteration), but it is not working as expected.
I have managed to get the variable to increment (and have checked this using 'echo'), but I cannot get it to become part of the new directory name.
The code I have written is as follows:
for i in {0..40}; do let r=$((i+1)); mkdir v$iv$r; done
However, the directories produced have names only containing the first variable value (i.e. v0, v1, ..., v40), and do not include the 'v$r' at all.
Does anyone have any ideas on how to use two variables at once in the same filename?

printf 'r=%d ; mkdir v$((r-1))v${r} ;' $(seq 1 41) | sh
You don't need a shell loop at all: printf repeats its format string once per argument from seq, emitting one mkdir command per number, and sh executes the resulting stream.

for i in {0..40}; do let r=$((i+1)); mkdir v${i}v$r; done
When bash sees the expression v$iv$r, it substitutes in for variables iv and r. But, of course, there is no iv. So bash substitutes in the empty string. The solution is to use braces to assure that bash knows where the variable name ends. Thus: v${i}v$r.
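If you'd rather skip the helper variable entirely, the arithmetic can be inlined; a minimal sketch of the same loop:
for i in {0..40}; do mkdir "v${i}v$((i+1))"; done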

Related

Arithmetic in shell script (arithmetic in string)

I'm trying to write a simple script that creates five text files enumerated by a variable in a loop. Can anybody tell me how to make the arithmetic expression be evaluated? This doesn't seem to work:
touch ~/test$(($i+1)).txt
(I am aware that I could evaluate the expression in a separate statement or change the loop...)
Thanks in advance!
The correct answer would depend on the shell you're using. It looks a little like bash, but I don't want to make too many assumptions.
The command you list, touch ~/test$(($i+1)).txt, will correctly touch a file named with the value of $i+1; what it's not doing is changing the value of $i.
What it seems to me like you want to do is:
Find the largest value of n amongst the files named testn.txt where n is a number larger than 0
Increment that number to get m.
touch (or otherwise output) to a new file named testm.txt where m is the incremented number.
Using techniques listed here you could strip the parts of the filename to build the value you wanted.
Assume the following was in a file named "touchup.sh":
#!/bin/bash
# first param is the basename of the file (e.g. "~/test")
# second param is the extension of the file (e.g. ".txt")
# assume the files are named so that we can locate via $1*$2 (test*.txt)
largest=0
for candidate in $1*$2; do # glob directly instead of parsing ls output
intermed=${candidate#$1*}
final=${intermed%%$2}
# don't want to assume that the files are in any specific order by ls
if [[ $final -gt $largest ]]; then
largest=$final
fi
done
# Now, increment and output.
largest=$(($largest+1))
touch $1$largest$2
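Assuming the script is saved as touchup.sh and made executable, a call might look like this (the file names here are illustrative):
chmod +x touchup.sh
./touchup.sh ~/test .txt
If ~/test3.txt was the highest-numbered file, this creates ~/test4.txt.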

for loop in a bash script

I am completely new to bash script. I am trying to do something really basic before using it for my actual requirement. I have written a simple code, which should print test code as many times as the number of files in the folder.
My code:
for variable in `ls test_folder`; do
echo test code
done
"test_folder" is a folder which exist in the same directory where the bash.sh file lies.
PROBLEM: If there is one file, it prints a single time, but if there is more than one file, it prints a different count. For example, if there are 2 files in "test_folder", then test code gets printed 3 times.
Just use a shell pattern (aka glob):
for variable in test_folder/*; do
# ...
done
You will have to adjust your code to compensate for the fact that variable will contain something like test_folder/foo.txt instead of just foo.txt. Luckily, that's fairly easy; one approach is to start the loop body with
variable=${variable#test_folder/}
to strip the leading directory introduced by the glob.
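Putting the two pieces together, a minimal sketch of the adjusted loop (the echo is just the placeholder body from the question):
for variable in test_folder/*; do
variable=${variable#test_folder/}
echo test code
done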
Never loop over the output of ls! Because of word splitting, files with spaces in their names will be a problem. Sure, you could set IFS to $'\n', but files in UNIX can also have newlines in their names.
Use find instead:
find test_folder -maxdepth 1 -mindepth 1 -exec echo test \;
This should work:
cd "test_folder"
for variable in *; do
#your code here
done
cd ..
variable will contain only the file names

Using bash wildcards with prefix

I am trying to write a bash script that takes a variable number of file names as arguments.
The script is processing those files and creating a temporary file for each of those files.
To access the arguments in a loop I am using
for filename in $*
do
...
generate t_${filename}
done
After the loop is done, I want to do something like cat t_$* .
But it's not working. So, if the arguments are a b c, it is catting t_a, b and c.
I want to cat the files t_a, t_b and t_c.
Is there anyway to do this without having to save the list of names in another variable?
You can use the Parameter expansion:
cat "${#/#/t_}"
In the expansion, / means substitute and # anchors the (empty) pattern at the beginning, so t_ is prepended to each positional parameter.
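Putting it together, a minimal sketch (generate stands in for whatever command produces each temporary file, as in the question; note "$@" rather than $* so names with spaces survive):
#!/bin/bash
for filename in "$@"; do
generate "t_${filename}"
done
cat "${@/#/t_}"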

Open file in bash script

I've got a bash script accepting several files as input which are mixed with various script's options, for example:
bristat -p log1.log -m lo2.log log3.log -u
I created an array where I save all the indices at which files appear in the script's call, so in this case it would be an array of 3 elements where
arr_pos[0] = 2
arr_pos[1] = 4
arr_pos[2] = 5
Later in the script I must call "head" and "grep" on those files, and I tried this:
head -n 1 ${arr_pos[0]}
but I get this error at runtime:
head: cannot open `2' for reading: No such file or directory
I tried various parenthesis combinations, but I can't find which one is correct.
The problem here is that ${arr_pos[0]} stores the index at which you have the file name, not the file name itself -- so you can't simply head it. The array storing your arguments is given by $@.
A possible way to access the data you want is:
#! /bin/bash
# assumes the script was invoked with the arguments from the question:
# ./script -p log1.log -m lo2.log log3.log -u
declare -a arr_pos=(2 4 5)
echo ${@:${arr_pos[0]}:1}
Output:
log1.log
The expansion ${@:${arr_pos[0]}:1} means you're taking one value from the array $@: a slice starting at index ${arr_pos[0]} with length 1.
Another way to do so, as pointed out by @flaschenpost, is to eval the index preceded by $, so that you'd be accessing the array of arguments. Although it works very well, it may be risky depending on who is going to run your script -- as they may inject commands through the argument line.
Anyway, you should probably loop through the entire array of arguments at the beginning of the script, hashing the values you find, so that you won't be in trouble while trying to fetch each value later. You may loop using a for + case ... esac and store the values in arrays, as in the sketch below.
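A rough sketch of that idea, assuming the flags from the example call and using a plain indexed array for the file names:
#!/bin/bash
declare -a files
for arg in "$@"; do
case $arg in
-*) ;; # option flags: nothing to store in this sketch
*) files+=("$arg") ;; # anything else is treated as a file name
esac
done
head -n 1 "${files[0]}" # log1.log in the example call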
I think eval is what you need.
#!/bin/bash
arr_pos[0]=2;
arr_pos[1]=4;
arr_pos[2]=5;
eval "cat \$${arr_pos[1]}"
For me that works: with the example arguments, \$${arr_pos[1]} becomes $4, so the command expands to cat lo2.log.

Simple map for pipeline in shell script

I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and just call that for every file. The issue is that I am using awk to grab the paths at the beginning of each file, and it's turning out to be a lot of repetition.
My question is: is there a way to store key-value pairs in a file so that the shell can natively take a key and return its value? It needs to be an external file, because the pipeline uses many scripts, and keeping the map in one specific script would result in parameters being passed everywhere. Is there some little quirk I do not know of that performs a map function on an external file?
You can make a file of env var assignments and source that file as needed, i.e.
$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"
Inside your script you can source it with either . myEnvFile or the more verbose version of the same feature, source myEnvFile (assuming a bash shell), i.e.
$ cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to defined var
if [[ -d $path2 ]] ; then
cd $path2
else
echo "no pa4h2=$path2 found, can't continue" 1>&1
exit 1
fi
Based on how you've described your problem, this should work well and provide a one-stop shop for all of your variable settings.
IHTH
In bash, there's mapfile, but that reads the lines of a file into a numerically-indexed array. To read a whitespace-separated file into an associative array, I would
declare -A map
while read -r key value; do
map[$key]=$value
done < filename
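Given a file whose lines look like path2 /w/xy, the lookup afterwards is just:
echo "${map[path2]}" # prints /w/xy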
However, this sounds like an XY problem. Can you give us an example (in code) of what you're actually doing? When I see long pipelines of grep|awk|sed, there's usually a way to simplify. For example, is passing data by parameters better than passing via stdout|stdin?
In other words, I'm questioning your statement "I decided it would be better..."
