OSX shell command for opening filename from file content? [duplicate]

I'd like to know how to use the contents of a file as command line arguments, but am struggling with syntax.
Say I've got the following:
# cat > arglist
src/file1 dst/file1
src/file2 dst/file2
src/file3 dst/file3
How can I use the contents of each line in the arglist file as arguments to say, a cp command?

The -n option for xargs specifies how many arguments to use per command:
$ xargs -n2 < arglist echo cp
cp src/file1 dst/file1
cp src/file2 dst/file2
cp src/file3 dst/file3
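Once the dry run looks right, drop the echo to perform the actual copies:
$ xargs -n2 cp < arglist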

Using read (this does assume that any spaces in the filenames in arglist are backslash-escaped):
while read src dst; do cp "$src" "$dst"; done < arglist
If the arguments in the file are in the right order and the filenames contain no whitespace, this will also work:
while read args; do cp $args; done < arglist
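A more defensive sketch (assuming the names contain no spaces): read -r disables backslash processing and -- protects against names that start with a dash:
while read -r src dst; do cp -- "$src" "$dst"; done < arglist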

You can use a pipe (|) with a command that reads stdin:
cat file | xargs echo
or input redirection (<):
cat < file
or xargs (the trailing emacs becomes $0 for sh -c, and reading from /dev/tty keeps the editor interactive):
xargs sh -c 'emacs "$@" < /dev/tty' emacs
Then you may use awk to extract individual fields:
cat file | awk '{ print $1; }'
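Tying this back to the question, awk can pick fields and xargs can hand them to cp pairwise (a sketch, assuming the paths contain no whitespace):
awk '{ print $1, $2 }' arglist | xargs -n2 cp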
Hope this helps..

If your purpose is just to cp the files in the list:
$ awk '{cmd="cp "$0;system(cmd)}' file
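To preview the generated commands before running them, print instead of calling system (assuming file holds the arglist from the question):
$ awk '{cmd="cp "$0; print cmd}' file
cp src/file1 dst/file1
cp src/file2 dst/file2
cp src/file3 dst/file3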

Use a for loop with IFS (the Internal Field Separator) set to newline:
OldIFS=$IFS # Save IFS
IFS=$'\n' # Split on newlines so each line is one loop item
for EachLine in $(cat arglist); do
Command="cp $EachLine"
eval "$Command" # eval re-parses the line, so src and dst split into two arguments
done
IFS=$OldIFS # Restore IFS

Related

Pipe the output of basename to string substitution

I need the basename of a file that is given as an argument to a bash script. The basename should be stripped of its file extension.
Let's assume $1 = "/somefolder/andanotherfolder/myfile.txt", the desired output would be "myfile".
The current attempt creates an intermediate variable that I would like to avoid:
BASE=$(basename "$1")
NOEXT="${BASE%.*}"
My attempt to make this a one-liner would be piping the output of basename. However, I do not know how to pipe stdout to a string substitution.
EDIT: this needs to work for multiple file extensions with possibly differing lengths, hence the string substitution attempt as given above.
Why not Zoidberg?
Ehhmm... I meant: why not remove the extension before going for basename?
basename "${1%.*}"
Unless of course you have directory paths with dots, then you'll have to use basename before and remove the extension later:
echo $(basename "$1") | awk 'BEGIN { FS = "." }; { print $1 }'
The awk solution will remove anything after the first dot from the filename.
There's a regular expression based solution which uses sed to remove only the extension after last dot if it exists:
echo $(basename "$1") | sed 's/\(.*\)\..*/\1/'
This could even be improved if you're sure that you've got alphanumeric extensions of 3-4 characters (e.g. mp3, mpeg, jpg, txt, json...):
echo $(basename "$1") | sed 's/\(.*\)\.[[:alnum:]]\{3,4\}$/\1/'
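For example (hypothetical path):
$ set -- '/somefolder/myfile.json'
$ echo $(basename "$1") | sed 's/\(.*\)\.[[:alnum:]]\{3,4\}$/\1/'
myfile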
How about this?
NEXT="$(basename -- "${1%.*}")"
Testing:
set -- '/somefolder/andanotherfolder/myfile.txt'
NEXT="$(basename -- "${1%.*}")"
echo "$NEXT"
myfile
Alternatively:
set -- "${1%.*}"; NEXT="${1##*/}"
NOEXT="${1##*/}"; NOEXT="${NOEXT%.*}"
How about:
$ var='/somefolder/andanotherfolder/myfile.txt'
$ [[ $var =~ [^/]*$ ]] && echo "${BASH_REMATCH%.*}"
myfile

How to write a command line script that will loop through every line in a text file and append a string at the end of each? [duplicate]

How do I add a string after each line in a file using bash? Can it be done using the sed command, if so how?
If your sed allows in place editing via the -i parameter:
sed -e 's/$/string after each line/' -i filename
If not, you have to make a temporary file:
typeset TMP_FILE=$( mktemp )
touch "${TMP_FILE}"
cp -p filename "${TMP_FILE}"
sed -e 's/$/string after each line/' "${TMP_FILE}" > filename
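A quick check of the substitution (assuming GNU sed for the -i form):
$ printf 'one\ntwo\n' > filename
$ sed -e 's/$/ -- appended/' -i filename
$ cat filename
one -- appended
two -- appended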
I prefer echo. Using pure bash:
cat file | while read line; do echo "${line}${string}"; done
I prefer using awk.
If there is only one column, use $0, else replace it with the last column.
One way, which inserts a space (awk's OFS) between the line and the string:
awk '{print $0, "string to append after each line"}' file > new_file
or this, which concatenates with no separator:
awk '$0=$0"string to append after each line"' file > new_file
If you have it, the lam (laminate) utility can do it, for example:
$ lam filename -s "string after each line"
Pure POSIX shell and sponge:
suffix=foobar
while read -r l ; do printf '%s%s\n' "$l" "${suffix}" ; done < file |
sponge file
xargs and printf (note that xargs applies its own quote and backslash processing, so this only suits simple lines):
suffix=foobar
xargs -L 1 printf "%s${suffix}\n" < file | sponge file
Using join:
suffix=foobar
join file file -e "${suffix}" -o 1.1,2.99999 | sponge file
Shell tools using paste, yes, head & wc:
suffix=foobar
paste file <(yes "${suffix}" | head -$(wc -l < file) ) | sponge file
Note that paste inserts a Tab char before $suffix.
Of course sponge can be replaced with a temp file, afterwards mv'd over the original filename, as with some other answers...
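A minimal sketch of that temp-file variant (assuming the suffix contains no sed metacharacters):
suffix=foobar
tmp=$(mktemp)
sed "s/\$/${suffix}/" file > "$tmp" && mv "$tmp" file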
Just to add on, using the echo command to add a string at the end of each line in a file:
cat input-file | while read line; do echo "${line}string to add" >> output-file; done
The >> redirection appends each modified line to the output file.
Sed is a little ugly; you could do it more elegantly like so (note the for loop splits on whitespace, so this suits single-word lines):
hendry#i7 tmp$ cat foo
bar
candy
car
hendry#i7 tmp$ for i in `cat foo`; do echo ${i}bar; done
barbar
candybar
carbar

Turning a list of abs pathed files to a comma delimited string of files in bash

I have been working in bash and need to create a string argument. bash is newish for me, to the point that I don't know how to build a string in bash from a list.
# foo.txt is a list of absolute file names.
/foo/bar/a.txt
/foo/bar/b.txt
/delta/test/b.txt
should turn into: a.txt,b.txt,b.txt
OR: /foo/bar/a.txt,/foo/bar/b.txt,/delta/test/b.txt
code
s = ""
for file in $(cat foo.txt);
do
#what goes here? s += $file ?
done
myShellScript --script $s
I figure there is an easy way to do this.
with for loop:
for file in $(cat foo.txt);do echo -n "$file",;done|sed 's/,$/\n/g'
with tr:
cat foo.txt|tr '\n' ','|sed 's/,$/\n/g'
only sed:
sed ':a;N;$!ba;s/\n/,/g' foo.txt
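For comparison, paste can do the same newline-to-comma join in a single step:
paste -sd, foo.txt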
This seems to work:
#!/bin/bash
input="foo.txt"
while IFS= read -r var
do
basename "$var" >> tmp
done < "$input"
paste -d, -s tmp > result.txt
output: a.txt,b.txt,b.txt
basename gets you the file names you need and paste will put them in the order you seem to need.
The input field separator can be used with set to create split/join functionality:
# split the lines of foo.txt into positional parameters
IFS=$'\n'
set $(< foo.txt)
# join with commas
IFS=,
echo "$*"
For just the file names, add some sed:
IFS=$'\n'; set $(sed 's|.*/||' foo.txt); IFS=,; echo "$*"
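Expected output:
a.txt,b.txt,b.txt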

Grep not finding files when given them in a list

I have a file called file_names_list.txt which contains absolute file names, for example, the first line is:
~/Projects/project/src/files/file.mm
I run a script to grep each of these files,
for file in $(cat file_names_list.txt); do
echo "doing file: $file"
grep '[ \t]*if (.* = .*) {' $file | while read -r line ; do ...
and I get the output:
doing file: ~/Projects/project/src/files/file.mm
grep: ~/Projects/project/src/files/file.mm: No such file or directory
But if I go to the terminal and enter
grep '[ \t]*if (.* = .*) {' ~/Projects/project/src/files/file.mm
I get the proper grep output
What's the problem here? I'm out of ideas
The problem is with the ~ character. That character gets expanded to your home directory when you use it in bash but in this case, it is just another character stored in the variable $file. To see the difference, try this:
file='~'
echo $file
echo ~
So now you have to either recreate the file file_names_list.txt or try to fix it, e.g. with sed:
sed -i -e "s|^~/|$HOME/|" file_names_list.txt
Also note that it would be preferable to use a while loop instead of a for loop:
while IFS= read -r file; do
# write your code here
done < file_names_list.txt
You can use your script like this:
while IFS= read -r f; do
grep '[ \t]*if (.* = .*) {' "${f/#\~/$HOME}"
done < file_names_list.txt
Since tilde expansion is not applied to the contents of a variable, the bash expansion "${f/#\~/$HOME}" replaces a leading ~ with the value of $HOME on each line.
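A quick check of that expansion (hypothetical path; the printed prefix will be your actual home directory):
f='~/Projects/project/src/files/file.mm'
echo "${f/#\~/$HOME}"
# e.g. /home/you/Projects/project/src/files/file.mm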

Sed variable too long

I need to substitute a unique placeholder string, {FILES}, in a JSON file with the contents of a bash variable, ${FILES}, that holds thousands of paths:
sed -i "s|{FILES}|$FILES|" ./myFile.json
What would be the most elegant way to achieve that? The content of ${FILES} is the result of an "aws s3" command and would look like:
FILES="/file1.ipk, /file2.ipk, /subfolder1/file3.ipk, /subfolder2/file4.ipk, ..."
I can't think of a solution where xargs would help me.
The safest way is probably to let Bash itself expand the variable. You can create a Bash script containing a here document with the full contents of myFile.json, with the placeholder {FILES} replaced by a reference to the variable $FILES (not the contents itself). Execution of this script would generate the output you seek.
For example, if myFile.json would contain:
{foo: 1, bar: "{FILES}"}
then the script should be:
#!/bin/bash
cat << EOF
{foo: 1, bar: "$FILES"}
EOF
You can generate the script with a single sed command:
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json
Notice sed is doing two replacements; the first one (s/\$/\\$/g) to escape any dollar signs that might occur within the JSON data (replace every $ by \$). The second replaces {FILES} by $FILES; the literal text $FILES, not the contents of the variable.
Now we can combine everything into a single Bash one-liner that generates the script and immediately executes it by piping it to Bash:
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json | /bin/bash
Or even better, execute the script without spawning a subshell (useful if $FILES is set without export):
sed -e '1i#!/bin/bash\ncat << EOF' -e 's/\$/\\$/g;s/{FILES}/$FILES/' -e '$aEOF' myFile.json | source /dev/stdin
Output:
{foo: 1, bar: "/file1.ipk, /file2.ipk, /subfolder1/file3.ipk, /subfolder2/file4.ipk, ..."}
Maybe perl would have fewer limitations?
perl -pi -e "s#{FILES}#${FILES}#" ./myFile.json
(Note this still interpolates ${FILES} into the perl code via the shell, so it can break if the paths contain # or perl metacharacters.)
It's a little gross, but you can do it all within shell...
while read -r l
do
if ! echo "$l" | grep -q '{FILES}'
then
echo "$l"
else
echo "$l" | sed 's/{FILES}.*$//'
echo "$FILES"
echo "$l" | sed 's/^.*{FILES}//'
fi
done <./myfile.json >newfile.json
#mv newfile.json myfile.json
Obviously I'd leave the final line commented until you were confident it worked...
Maybe just don't do it? Can you just:
echo "var f = " > myFile2.json
echo "$FILES" >> myFile2.json
And reference myFile2.json from within your other json file? (You should put the global f variable into a namespace if this works for you.)
Instead of putting all those paths in an environment variable, put them in a file. Then read that file in perl:
foo.pl:
open X, "$ARGV[0]" or die "couldn't open";
shift;
$foo = <X>;    # the first line of the file holds the replacement text
chomp $foo;    # drop the trailing newline
while (<>) {
    s/\{FILES\}/$foo/;
    print;
}
Command to run:
aws s3 ... >/tmp/myfile.$$
perl foo.pl /tmp/myfile.$$ <myFile.json >newFile.json
Hopefully that will bypass the limitations on environment variable space and argument length by pulling all the processing into perl itself.
