I'm using Mac's Automator to run a bash script on files that are dropped onto a droplet. Inputs are gathered from Automator actions and passed to the bash script as arguments. The script works with a single input file, but I'm having trouble handling multiple files.
My ultimate goal is to accept several dropped video files, prompt for a number, and extract that number of frames from each video file (using FFmpeg).
I can loop through the inputs by using the special variable $@, like so:
for f in "$@"; do
    # perform an action
done
However, my script also prompts the user for text input that I don't want included in this loop.
I can access each input file individually by using $1, $2, etc. But I'd like to use a loop instead of referencing each file individually. Also, the quantity of input files is unpredictable and I'm not sure how to distinguish between input files and input text.
How can I loop through only the file inputs without including the text input?
Here is a description of my current workflow:
Get Specified Movies
one.mov
two.mov
Set Value of Variable (accepts input)
source_files
Ask For Text (ignores input)
Enter a Number:
Set Value of Variable (accepts input)
number
Get Value of Variable (ignores input)
source_files
Get Value of Variable (accepts input)
number
Run Shell Script (accepts input, pass "as arguments")
#!/bin/bash
for f in "$@"; do
    echo "$f"
done
OUTPUT:
/folder/one.mov
/folder/two.mov
8
I was hoping to have one variable set to the multiple inputs (so I could loop through it) and another variable set to the number, but it doesn't seem to work that way.
How can I loop through each input file without referencing the text input?
Just change the order of the args: Get Value of Variable number first, then Get Value of Variable source_files, so that the number will be the first parameter. Then:
echo "number: $1"
for f in "${@:2}"; do
    echo "file: $f"
done
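Tying this back to the original goal, here is a minimal sketch of the full Run Shell Script body; the ffmpeg call is an assumption on my part (-frames:v caps the number of frames written, and the "${f%.*}_%03d.png" output pattern is a placeholder you would adapt):
#!/bin/bash
count="$1"               # first argument: the number from Ask For Text
for f in "${@:2}"; do    # remaining arguments: the dropped movie files
    # extract the first $count frames of each movie as numbered PNGs
    ffmpeg -i "$f" -frames:v "$count" "${f%.*}_%03d.png"
done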
I'm trying to write a simple script that creates five text files, enumerated by a variable in a loop. Can anybody tell me how to make the arithmetic expression be evaluated? This doesn't seem to work:
touch ~/test$(($i+1)).txt
(I am aware that I could evaluate the expression in a separate statement or change the loop...)
Thanks in advance!
The correct answer would depend on the shell you're using. It looks a little like bash, but I don't want to make too many assumptions.
The command you list, touch ~/test$(($i+1)).txt, will correctly touch the file with whatever $i+1 is, but what it's not doing is changing the value of $i.
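If the goal is simply five enumerated files, note that a plain loop does evaluate the arithmetic as written; a minimal sketch, assuming bash:
for i in 0 1 2 3 4; do
    touch ~/test$((i+1)).txt
done
which creates test1.txt through test5.txt.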
What it seems like you want to do is:
Find the largest value of n amongst the files named testn.txt, where n is a number larger than 0.
Increment that number to get m.
touch (or otherwise create) a new file named testm.txt, where m is the incremented number.
Using parameter-expansion techniques you could strip parts of the filename to build the value you want.
Assume the following was in a file named "touchup.sh":
#!/bin/bash
# first param is the basename of the file (e.g. "~/test")
# second param is the extension of the file (e.g. ".txt")
# assume the files are named so that we can locate via $1*$2 (test*.txt)
largest=0
# iterate over the matching files directly (no need to parse ls output)
for candidate in $1*$2; do
    intermed=${candidate#$1*}
    final=${intermed%%$2}
    # don't want to assume that the files are in any specific order
    if [[ $final -gt $largest ]]; then
        largest=$final
    fi
done
# Now, increment and output.
largest=$((largest+1))
touch "$1$largest$2"
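Assuming the script is saved as touchup.sh and made executable, an illustrative invocation would be:
chmod +x touchup.sh
./touchup.sh ~/test .txt    # finds the highest-numbered ~/testN.txt and creates the next one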
I have a folder named datafolder which contains five csv files: aa.csv, ab.csv, ac.csv, ad.csv, ae.csv. Each csv file contains data from an excel sheet in the format: date, product type, name, address, etc., and I am only interested in the second column, which is named product. Basically, what I want to happen is for the jobmaster script to count the number of files in datafolder and then to start a map process for each individual file. I have the following scripts:
The jobmaster script runs without problems; however, once the map script starts, only the first echo (mapping $1) is displayed and the process is stuck, in what I guess is an infinite loop. When I run the ps command I expect to see five instances of map.sh running; however, there are none.
I suspect you missed an input redirection in map.sh:
file=$1
echo "mapping $file"
while IFS="," read -r value1 product remainder; do
# ...
done < "$file"
# ^^^^^ provide the standard input to from this file to `read`
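Without that redirection, read waits on the script's own standard input (the terminal, or nothing at all, depending on how map.sh is launched), which would produce exactly the hang described. And since only the second column is of interest, a simpler sketch for that part, assuming no quoted fields containing embedded commas:
cut -d, -f2 "$file"    # print just the product column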
I have x files: A, B, C... What I need to do is pass each of these files as the first command-line argument to a Python script, and pass the rest, joined together, as the second command-line argument, until every file has been passed as $1 once. For example, on the first iteration A is $1 and B,C... is $2. On the second iteration, B is $1 and A,C... is $2. I've read about the shift command in shell, but I'm not very sure it will work in my case (I'm also relatively new to shell scripting). Also, is there a limit to the number of command-line arguments I can pass to my Python script? I would also like to create a variable to hold the list of file names before iterating through my files. Thank you!
Bash has arrays, and supports array slicing via ${array[@]:offset:length} syntax, where offset and length are optional. That's enough to get the job done.
#!/bin/bash
# Store the master list of file names in an array called $files.
files=("$@")
for ((i = 0; i < ${#files[@]}; ++i)); do
    # Store the single item in $file and the rest in an array $others.
    file=${files[i]}
    others=("${files[@]:0:i}" "${files[@]:i+1}")
    # Run command.py. Use ${others[*]} to concatenate all the file names into
    # one long string, and override $IFS so they're joined with commas.
    (IFS=','; command.py "$file" "${others[*]}")
done
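As an illustration, if the script above were saved as rotate.sh (a name I'm assuming) and called with three files:
./rotate.sh A B C
# invokes: command.py A B,C
#          command.py B A,C
#          command.py C A,B
As for the limit on arguments: it is imposed by the operating system's ARG_MAX on the total length of the command line, not by bash or Python, so sixty-odd file names are nowhere near it.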
Basically, I need to execute a curl command multiple times and redirect the output to a .csv file; each time the command is executed, a term that is used in two separate places in the command changes. I have a list of these terms (arguments?) in a separate text file. Each time the command runs for a different term, the output needs to be appended to the file.
The command is basically:
curl "http://someURL/standardconditions+AND+(TERM_exact+OR+TERM_related)" > testfile.csv
So each time the command is run, TERM changes in both places (TERM_exact and TERM_related). As I mentioned, I have a text file with a list of all 60 or so terms. What I want is for the script to execute the command using the first term on the list, write the output to the specified .csv file, then repeat with the second term on the list, append that to the file, and so on until it has run for every single term.
I imagine there is a simple way to do this, I'm just not sure how.
Here's one way to do it. This assumes that listFile.csv is your list of 60 items, and that each line is a comma-separated pair of values (no commas allowed in the values!):
while IFS=, read -r exact related; do
    curl "http://someURL/standardconditions+AND+(TERM_${exact}+OR+TERM_${related})" >> testfile.csv
done < listFile.csv
It's not clear if you wanted one output file, or multiple.
You could replace the >> testfile.csv with >>testfile.${exact}_${related}.csv
to have separate files.
IHTH
You can set a variable to store TERM, use string concatenation to build a URL like "http://someURL/standardconditions+AND+(TERM_exact+OR+TERM_related)", and run a Python (or other language) script with a loop to handle the 60 terms.
I have a bash script that takes advantage of a local toolbox to perform an operation.
My question is fairly simple.
I have multiple files containing the same quantities but at different time steps. I would like to first untar them all and then use the toolbox to perform some manipulation, but I am not sure if I am on the right track.
=============================================
The file is as follows
INPUTS
fname = a very large number of files with the same name but different numbering
e.g wnd20121.grb
wnd20122.grb
.......
wnd2012100.grb
COMMANDS
> cdo -f nc copy fname ofile(s)
(If ofile(s) here is the output file, how can I store it for subsequent use? I want to take the ofile (output file) from the command and use/save it as input to the next, producing a new, subsequently numbered output set, ofile(s)2.)
>cdo merge ofile(s) ofile2
(then automatically take the ofile(s)2 set and feed it to the next command, and so on, always producing an array of new output files with a specific base name I set but different numbering to distinguish them)
>cdo sellon ofile(s)2 ofile(s)3
------------------------------------
To make my question clearer: I would like to know how, through a bash script, I can instruct the terminal to "grab" multiple files that share the same name but carry different numbering marking their recorded time steps,
e.g. file1 file2 ... filen
and then produce multiple outputs, with every output numbered to correspond to the file it was converted from,
e.g. output1 output2 ... outputn
How can I set these parameters so that the moment the outputs are generated they are stored for subsequent use in later commands of the script?
Your question isn't clear, but perhaps the following will help; it demonstrates how to use arrays as argument lists and how to parse command output into an array, line by line:
#!/usr/bin/env bash
# Create the array of input files using pathname expansion.
inFiles=(wnd*.grb)
# Pass the input-files array to another command and read its output
# - line by line - into a new array, `outFiles`.
# The example command here simply prepends 'out-' to each element of the
# input-files array and outputs each (modified) element on its own line.
# Note: The assumption is that the filenames have no embedded newlines
# (which is usually true).
IFS=$'\n' read -r -d '' -a outFiles < \
  <(printf "%s\n" "${inFiles[@]}" | sed 's/^/out-/')
# Note: If you use bash 4+, you could use `readarray -t outFiles < <(...)` instead.
# Output the resulting array.
# This also demonstrates how to use an array as an argument list
# to pass to _any_ command.
printf "%s\n" "${outFiles[@]}"
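Applied to the cdo chain from the question, the same array techniques let you store each output name the moment it is produced; a sketch, assuming an out- prefix and .nc suffix for the converted files (the naming scheme is my own placeholder, and only the cdo commands quoted in the question are used):
#!/usr/bin/env bash
inFiles=(wnd*.grb)
outFiles=()
for f in "${inFiles[@]}"; do
    o="out-${f%.grb}.nc"        # assumed naming: out-wnd20121.nc, ...
    cdo -f nc copy "$f" "$o"    # first command from the question
    outFiles+=("$o")            # store the output name for subsequent use
done
cdo merge "${outFiles[@]}" merged.nc    # second command from the question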