I am trying to write a bash script that takes a variable number of file names as arguments.
The script is processing those files and creating a temporary file for each of those files.
To access the arguments in a loop I am using
for filename in $*
do
...
generate t_${filename}
done
After the loop is done, I want to do something like cat t_$* .
But it's not working. So, if the arguments are a b c, it is catting t_a, b and c.
I want to cat the files t_a, t_b and t_c.
Is there any way to do this without having to save the list of names in another variable?
You can use parameter expansion:
cat "${@/#/t_}"
The first / means substitute, and # anchors the match at the beginning, so t_ is prepended to each positional parameter.
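For example, a minimal sketch of the whole pattern (touch stands in for your real generate step, and a b c are hypothetical arguments):

#!/bin/bash
# Called as: ./script.sh a b c
for filename in "$@"; do
    touch "t_$filename"      # placeholder for the real per-file processing
done
# "${@/#/t_}" expands to t_a t_b t_c, one word per argument.
cat "${@/#/t_}"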
I have x files: A, B, C... What I need to do is pass each of these files as the first command line argument to a python file and pass the others as the second command line argument until all files have been passed as $1 once. For example, on the first iteration A is $1 and B,C... is $2. On the second iteration, B is $1 and A,C... is $2. I've read about the shift command in shell but am not very sure if it will work in my case (I'm also relatively new to shell scripting). Also, is there a limit to the number of command line arguments I can pass to my python script? I would also like to create a variable to hold the list of file names before iterating through my files. Thank you!
Bash has arrays, and supports array slicing via ${array[@]:offset:length} syntax, where offset and length are optional. That's enough to get the job done.
#!/bin/bash
# Store the master list of file names in an array called $files.
files=("$#")
for ((i = 0; i < ${#files[#]}; ++i)); do
# Store the single item in $file and the rest in an array $others.
file=${files[i]}
others=("${files[#]:0:i}" "${files[#]:i+1}")
# Run command.py. Use ${others[*]} to concatenate all the file names into one long
# string, and override $IFS so they're joined with commas.
(IFS=','; command.py "${files[i]}" "${others[*]}")
done
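If the script above were called as, say, ./script.sh A B C (hypothetical file names), each iteration would effectively run:

command.py A "B,C"
command.py B "A,C"
command.py C "A,B"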
I am completely new to bash scripting. I am trying to do something really basic before using it for my actual requirement. I have written some simple code which should print "test code" as many times as there are files in the folder.
My code:
for variable in `ls test_folder`; do
echo test code
done
"test_folder" is a folder which exist in the same directory where the bash.sh file lies.
PROBLEM: If there is one file, it prints a single time, but if there is more than one file, the count is different. For example, if there are 2 files in "test_folder", "test code" gets printed 3 times.
Just use a shell pattern (aka glob):
for variable in test_folder/*; do
# ...
done
You will have to adjust your code to compensate for the fact that variable will contain something like test_folder/foo.txt instead of just foo.txt. Luckily, that's fairly easy; one approach is to start the loop body with
variable=${variable#test_folder/}
to strip the leading directory introduced by the glob.
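Putting it together, a minimal sketch (echo stands in for whatever your loop body really does):

for variable in test_folder/*; do
    variable=${variable#test_folder/}   # strip the leading "test_folder/"
    echo test code                      # $variable is now just e.g. foo.txt
done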
Never loop over the output of ls! Because of word splitting, files with spaces in their names will be a problem. Sure, you could set IFS to $'\n', but files on UNIX can also have newlines in their names.
Use find instead:
find test_folder -maxdepth 1 -mindepth 1 -exec echo test \;
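If you need the loop body to run in the current shell instead of via -exec, one common pattern (a sketch, assuming GNU find with -print0) is to read NUL-delimited names:

while IFS= read -r -d '' f; do
    echo test code    # "$f" is safe even with spaces or newlines in the name
done < <(find test_folder -maxdepth 1 -mindepth 1 -print0)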
This should work:
cd "test_folder"
for variable in *; do
#your code here
done
cd ..
variable will contain only the file names
I have a bash script that takes advantage of a local toolbox to perform an operation.
My question is fairly simple.
I have multiple files holding the same quantities but at different time steps. I would like to first untar them all and then use the toolbox to perform some manipulation, but I am not sure if I am on the right track.
=============================================
The file is as follows
INPUTS
fname = a very large number of files with the same name but different numbering
e.g wnd20121.grb
wnd20122.grb
.......
wnd2012100.grb
COMMANDS
> cdo -f nc copy fname ofile(s)
(If ofile(s) is the output file, how can I store it for subsequent use? Take the ofile (output file) from the command and use it / save it as input to the next step, producing a new, subsequently numbered output set ofile(s)2.)
>cdo merge ofile(s) ofile2
(then automatically take the ofile(s)2 files and feed them to the next command, and so on, always producing an array of new output files with a specific name I set but different numbering to distinguish them)
>cdo sellon ofile(s)2 ofile(s)3
------------------------------------
To make my question clearer: I would like to know how, through a bash script, I can instruct the terminal to "grab" multiple files that usually share the same name but differ in numbering (which distinguishes their recorded time steps),
e.g. file1 file2 ... filen
and then get multiple outputs, with every output corresponding to the number of the file it was converted from,
e.g. output1 output2 ... outputn
How can I set these names so that, the moment they are generated, they are stored for subsequent use in later commands of the script?
Your question isn't clear, but perhaps the following will help; it demonstrates how to use arrays as argument lists and how to parse command output into an array, line by line:
#!/usr/bin/env bash
# Create the array of input files using pathname expansion.
inFiles=(wnd*.grb)
# Pass the input-files array to another command and read its output
# - line by line - into a new array, `outFiles`.
# The example command here simply prepends 'out' to each element of the
# input-files array and outputs each (modified) element on its own line.
# Note: The assumption is that the filenames have no embedded newlines
# (which is usually true).
IFS=$'\n' read -r -d '' -a outFiles < \
<(printf "%s\n" "${inFiles[#]}" | sed s'/^/out-/')
# Note: If you use bash 4, you could use `readarray -t outFiles < <(...)` instead.
# Output the resulting array.
# This also demonstrates how to use an array as an argument list
# to pass to _any_ command.
printf "%s\n" "${outFiles[#]}"
I am trying to write a bash script which allows me to automate the creation of multiple directories with incrementing names.
For example, I am trying to create the directories named v0v1, v1v2, ... , v40v41.
I have tried using a 'for' loop, where I create a variable and set this to be equal to the current value of i+1 (where 'i' is the current loop iteration), but it is not working as expected.
I have managed to get the variable to increment (and have checked this using 'echo'), but I cannot get it to become part of the new directory name.
The code I have written is as follows:
for i in {0..40}; do let r=$((i+1)); mkdir v$iv$r; done
However, the directories produced have names only containing the first variable value (i.e. v0, v1, ..., v40), and do not include the 'v$r' at all.
Does anyone have any ideas how to use two variables at once in the same filename?
printf 'r=%d ; mkdir v$((r-1))v${r} ;' $(seq 1 41) |sh
You don't need a shell loop at all.
for i in {0..40}; do let r=$((i+1)); mkdir v${i}v$r; done
When bash sees the expression v$iv$r, it substitutes in for variables iv and r. But, of course, there is no iv. So bash substitutes in the empty string. The solution is to use braces to assure that bash knows where the variable name ends. Thus: v${i}v$r.
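A quick way to see the difference (with hypothetical values i=0 and r=1):

i=0; r=1
echo v$iv$r     # prints "v1"   -- $iv is unset, so it expands to nothing
echo v${i}v$r   # prints "v0v1" -- the braces end the first variable name at i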
I've got a bash script accepting several files as input which are mixed with various script's options, for example:
bristat -p log1.log -m lo2.log log3.log -u
I created an array where i save all the index where i can find files in the script's call, so in this case it would be an arrat of 3 elements where
arr_pos[0] = 2
arr_pos[1] = 4
arr_pos[2] = 5
Later in the script I must call "head" and "grep" on those files, and I tried it this way:
head -n 1 ${arr_pos[0]}
but I get this error at runtime:
head: cannot open `2' for reading: No such file or directory
I tried various parenthesis combinations, but I can't find which one is correct.
The problem here is that ${arr_pos[0]} stores the index at which you have the file name, not the file name itself -- so you can't simply head it. The array storing your arguments is given by $@.
A possible way to access the data you want is:
#! /bin/bash
declare -a arr_pos=(2 4 5)
echo ${@:${arr_pos[0]}:1}
Output:
log1.log
The expansion ${@:${arr_pos[0]}:1} means you're taking 1 element of the positional parameters $@, starting at index ${arr_pos[0]}.
Another way to do so, as pointed out by @flaschenpost, is to eval the index preceded by $, so that you'd be accessing the array of arguments. Although it works very well, it may be risky depending on who is going to run your script -- as they may add commands to the argument line.
Anyway, you should probably loop through the entire array of arguments at the beginning of the script, hashing the values you find, so that you won't be in trouble when trying to fetch each value later. You can loop using a for + case ... esac and store the values in associative arrays.
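A rough, simplified sketch of that idea (using a plain indexed array for the collected file names; the option letters come from your example call, and 'pattern' is just a placeholder):

#!/bin/bash
files=()
for arg in "$@"; do
    case $arg in
        -p|-m|-u) ;;              # known options: handle or record them as needed
        *) files+=("$arg") ;;     # anything else is treated as a file name
    esac
done
head -n 1 "${files[0]}"
grep 'pattern' "${files[@]}"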
I think eval is what you need.
#!/bin/bash
arr_pos[0]=2;
arr_pos[1]=4;
arr_pos[2]=5;
eval "cat \$${arr_pos[1]}"
For me that works.