I have a file with the following commands:
cat /some/dir/with/files/file1_name.tsv|awk -F "\\t" '{print $21$19$23$15}'
cat /some/dir/with/files/file2_name.tsv|awk -F "\\t" '{print $2$13$3$15}'
cat /some/dir/with/files/file3_name.tsv|awk -F "\\t" '{print $22$19$3$15}'
When I loop through the file to run the commands, I get the error below:
cat file | while read line; do $line; done
cat: invalid option -- 'F'
Try `cat --help' for more information.
You are not executing the command the way you intended. Since you are reading the file line by line (for unknown reasons), you could pass each line to the interpreter directly, as below:
#!/bin/bash
# ^^^^ for running under 'bash' shell
while IFS= read -r line
do
printf "%s" "$line" | bash
done <file
But this has the overhead of forking a new process for each line of the file. If the commands in the file are harmless and safe to run in one shot, you can simply do
bash file
and be done with it.
Also, when using awk, just pass the file name as an argument for each of the lines, avoiding the useless use of cat:
awk -F "\\t" '{print $21$19$23$15}' file1_name.tsv
You are expecting the pipe (|) symbol to act as you are accustomed to, but it doesn't. To help you understand, try this:
A="ls / | grep e" # Loads a variable with a command with pipe
$A # Does not work
eval "$A" # Works
When expanding a variable without using eval, expansion and word splitting occur after the shell has already parsed redirections and pipes, so your pipe symbol is seen as just a literal character.
Some options you have:
A) Avoid piping, by passing the file name as an argument
awk -F "\\t" '{print $21$19$23$15}' /some/dir/with/files/file1_name.tsv
B) Use eval, as sketched below after this list; I would suggest you research its potential security implications.
C) Put the arguments in the file and parse them, avoiding the use of eval, with something like:
# Assumes arguments separated by spaces
IFS=" " read -r -a arguments;
awk "${arguments[#]-}"
D) Implement the parsing of your data files in Bash instead of awk, and use your configuration file to specify the output without the need for expanding anything (e.g. by specifying the fields to print, separated by spaces); see the second sketch below.
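For option B, a minimal sketch applied to your original loop (only safe if you fully trust the contents of file):
while IFS= read -r line
do
    eval "$line"    # the line is re-parsed by the shell, so | now acts as a pipe
done < file
And a rough sketch of option D, assuming a hypothetical configuration format that lists each data file followed by the field numbers to print:
# hypothetical config line format: /path/to/file.tsv 21 19 23 15
while read -r datafile fields
do
    while IFS=$'\t' read -r -a cols
    do
        out=""
        for f in $fields
        do
            out+="${cols[f-1]}"    # awk fields are 1-based, bash arrays 0-based
        done
        printf '%s\n' "$out"
    done < "$datafile"
done < config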
The first three approaches involve some form of interpretation of outside data as code, and that comes with risks if the file used as input cannot be guaranteed safe. Approach C might be considered a bit better in that regard, but since the command being called is awk, an actual awk program is still being passed in, so an attacker (or careless user) with write access to your file can make your script do anything awk can do.
I'm trying to make an awk command which stores an entire config file as variables.
The config file is in the following form (keys never have spaces, but values may):
key=value
key2=value two
And my awk command is:
$(awk -F= '{printf "declare %s=\"%s\"\n", $1, $2}' $file)
Running this without the outer subshell $(...) results in the exact commands that I want being printed, so my question is less about awk, and more about how I can run the output of awk as commands.
The command evaluates to:
declare 'key="value"'
which is somewhat of a problem, since then the double quotes are stored with the value. Even worse is when a space is introduced, which results in:
declare 'key2="value' two"
Of course, I cannot simply remove the quotes, because then the multi-word values cause problems.
I've tried almost every solution I could find, such as set -f, eval, and system().
You don't need Awk for this; you can do it with the shell built-ins alone. Read the config file properly using input redirection:
#!/bin/bash
while IFS='=' read -r k v; do
declare "$k"="$v"
done < config_file
and source the file as
$ source script.sh
$ echo "$key"
value
$ echo "$key2"
value two
If source is not available explicitly, the POSIX way of doing it is simply
. ./script.sh
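Note that this distinction matters: running the script as bash script.sh would set the variables only in a child shell, where they are lost on exit, which is why sourcing is needed:
$ bash script.sh     # variables set in a child shell, gone afterwards
$ echo "$key"

$ . ./script.sh      # variables set in the current shell
$ echo "$key"
value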
I have 3 text files. I would like to read them, store them in different variables, later concatenate them using paste, and print them to the console.
I tried the following code but it threw an error saying
File not found
Here is my code
#!/bin/sh
value_1=`cat file_1.txt`
value_2=`cat file_2.txt`
value_3 = paste $value_1 $value_2
echo "$value_3"
paste expects its arguments to be the names of files, not the content of files. With bash, ksh, or zsh, there is a way around this. Replace:
paste $value_1 $value_2
with:
paste <(echo "$value_1") <(echo "$value_2")
<(...) is called process substitution. It makes the output from the command inside the parens look like a file.
Improvement
If we don't know the first character in the output, then printf is more reliable than echo:
paste <(printf "%s" "$value_1") <(printf "%s" "$value_2")
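To see why, consider a value that happens to look like an option to echo (a small demonstration with bash's builtin echo; other implementations vary):
$ var="-n"
$ echo "$var"           # echo takes -n as an option and prints nothing
$ printf "%s\n" "$var"  # printf prints it literally
-n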
Example
Let's use these two test files:
$ cat file1
1
2
$ cat file2
a
b
Now, let's read those files into variables and apply paste to those variables:
$ value_1=$(cat file1); value_2=$(cat file2)
$ paste <(printf "%s" "$value_1") <(printf "%s" "$value_2")
1 a
2 b
Or, saving the output in a variable:
$ value_3=$(paste <(printf "%s" "$value_1") <(printf "%s" "$value_2"))
$ echo "$value_3"
1 a
2 b
The third line should read
value_3=`paste file_1.txt file_2.txt`
You need the backticks, no space after value_3, and the file names as arguments rather than the variables.
The reason it is saying "File not found" is that value_3, followed by a space, is being interpreted as a command to run.
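You can see the effect for yourself (the exact wording of the message varies by shell):
$ value_3 = paste file_1.txt file_2.txt
sh: value_3: command not found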
#!/bin/sh
value_1=$(cat file_1.txt)
value_2=$(cat file_2.txt)
value_3=$(echo $value_1 $value_2 | paste) # or value_3="$value_1 $value_2"
echo "$value_3"
Note:
no spaces are allowed around = in shell assignments
The backquote ` is used in the old-style command
substitution, e.g.
foo=`command`
The foo=$(command) syntax is recommended instead. Backslash handling inside $() is less surprising, and $() is easier to nest.
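For example, nesting command substitutions with $() needs no escaping, whereas nested backquotes must be escaped:
dir=$(basename "$(pwd)")   # straightforward to read and nest
dir=`basename \`pwd\``     # the inner backquotes must be escaped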
Check http://mywiki.wooledge.org/BashFAQ/082
After searching online I was able to figure out how to read a file line by line:
while read p; do
echo $p
done < file.txt
But I would actually like to modify the line in the file.
For example:
while read p; do
if condition
then
echo $p | perl -i -pe 's/a/b/'
fi
done < file.txt
However this doesn't actually modify the file.
Update: A far better version of the bash code has been added. Thanks to Charles Duffy for comments.
Your Perl one-liner takes a line piped into it by echo $p |, getting its standard input that way. It doesn't do anything with the file itself, so the -i flag has no effect. The -p makes it print to the standard output stream. So that whole line, echo ..., doesn't touch the file.
You can redirect the output to a new file and then move that over file.txt. Here is a simple-minded example that appends each line to a new file. For better bash code see the update below.
while read p; do
if condition
then
echo $p | perl -pe 's/a/b/' >> temp_out.txt
else
echo $p >> temp_out.txt
fi
done < file.txt
mv temp_out.txt file.txt
We have to add the else branch so that unmodified lines are also appended. Note that in general we cannot replace just some lines; the whole file has to be rewritten.
If this is all the script does, you can do it with a very simple one-liner; see the end. If more work is done you can also put it all in a Perl script, but I take it that there may be other good reasons for a bash script.
Update: A much better version of the above. See read and echo under Builtins in the Bash manual.
Appending each line opens the file anew each time; there is no need for that.
Just redirect once at the end of the loop, much as is done in the terminal.
read uses the backslash for escaping and removes it from the input. Turn that off with -r.
Leading and trailing whitespace is removed as part of breaking the line into words. Suppress this by clearing the variable that controls which characters are used for splitting: IFS=.
The echo $p can do all kinds of unintended things. A formatted print is better, printf '%s\n' "$p", or at least echo "$p".
With this,
while IFS= read -r p; do
if condition
then
echo "$p" | perl -pe 's/a/b/'
else
echo "$p"
fi
done < file.txt > temp_out.txt
mv temp_out.txt file.txt
Finally, if the sole purpose of the Perl one-liner is to run a simple substitution, it is much better to do that in the shell itself than to set up a pipeline and run a whole new process for each line.
echo "${p//a/b}"
Thanks to Charles Duffy for raising all these points in comments.
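Putting that substitution into the improved loop, everything runs without spawning an external process per line (a sketch, with condition standing in for your actual test):
while IFS= read -r p; do
    if condition
    then
        printf '%s\n' "${p//a/b}"   # the substitution is done by the shell itself
    else
        printf '%s\n' "$p"
    fi
done < file.txt > temp_out.txt
mv temp_out.txt file.txt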
A few comments on Perl one-liners. See documentation at perlrun.
The command perl -e '...' executes any valid Perl code between the quotes. When we add the -n or -p switch it also reads standard input and executes that code on one line at a time, where -p additionally prints each line after it is processed. The standard input can be supplied from a file,
perl -pe '...' input.txt
in which case adding the -i flag will result in the file being changed in place. Or, the input can be piped into it, for example
echo "input text" | perl -pe '...'
in which case the processed line is printed to standard output. This can be redirected to a file, as in the answer above.
To make changes to a given file a line at a time you only need this on the command line
perl -i -pe 's/a/b/' file.txt
If there is more work to do then it may well be better to put it in a script, of course. In this case the one-liner can be a command in the bash script as well, replacing all that code above (unless some bash-specific functionality is preferred for processing lines).
I am trying to create a file using the following script (see below). While the script runs without errors (at least according to shellcheck), I cannot get the resulting file to have the correct name.
#!/bin/bash
# Set some variables
export site_path=~/Documents/Blog
drafts_path=~/Documents/Blog/_drafts
title="$title"
# Create the filename
title=$("$title" | "awk {print tolower($0)}")
filename="$title.markdown"
file_path="$drafts_path/$filename"
echo "File path: $file_path"
# Create the file, Add metadata fields
cat >"$file_path" <<EOL
---
title: \"$title\"
layout:
tags:
---
EOL
# Open the file in BBEdit
bbedit "$file_path"
exit 0
Very new to bash, so I'm not quite sure what I'm doing wrong...
The most glaring error is this:
title=$("$title" | "awk {print tolower($0)}")
It's wrong for several reasons:
This pipeline runs "$title" as a command -- meaning that it looks for a command named with the title of your blog post to run -- and pipes the output of that command (a command that presumably won't exist) to awk.
Using double-quotes around the entire awk command means the shell looks for a single command named something like awk {print tolower(-bash)} (if $0 evaluates to -bash, as it will in an interactive login shell; behavior will differ elsewhere).
Using double-quotes rather than single-quotes to protect your awk script means that the $0 gets evaluated by the shell rather than by awk.
A better alternative might look like:
title=$(awk '{print tolower($0)}' <<<"$title")
...or, to use simpler tools:
title=$(tr '[:upper:]' '[:lower:]' <<<"$title")
...or, to use bash 4.x built-in functionality:
title=${title,,}
Of course, all that assumes that title is set to start with. If you aren't passing it through your environment, you might want something like title=$1 rather than title="$title" earlier in your script.
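Putting the pieces together, the top of the script might look like this (a sketch; it assumes the title is passed as the first argument):
#!/bin/bash
export site_path=~/Documents/Blog   # kept from your original script
drafts_path=~/Documents/Blog/_drafts
title=$1                     # take the title from the first argument (assumption)
title=${title,,}             # lowercase it, bash 4.x and later
filename="$title.markdown"
file_path="$drafts_path/$filename"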
I'm working with Mac OS X's pbpaste command, which returns the clipboard's contents. I'd like to create a shell script that executes each line returned by pbpaste as a separate bash command. For example, let's say that the clipboard's contents consists of the following lines of text:
echo 1234 >~/a.txt
echo 5678 >~/b.txt
I would like a shell script that executes each of those lines, creating the two files a.txt and b.txt in my home folder. After a fair amount of searching and trial and error, I've gotten to the point where I'm able to assign individual lines of text to a variable in a while loop with the following construct:
pbpaste | egrep -o [^$]+ | while read l; do echo $l; done
which sends the following to standard out, as expected:
echo 1234 >~/a.txt
echo 5678 >~/b.txt
Instead of simply echoing each line of text, I then try to execute them with the following construct:
pbpaste | egrep -o [^$]+ | while read l; do $l; done
I thought that this would execute each line (thus creating two text files a.txt and b.txt in my home folder). Instead, the first term (echo) seems to be interpreted as the command, and the remaining terms (nnnn >~/...) seem to get lumped together as if they were a single parameter, resulting in the following being sent to standard out without any files being created:
1234 >~/a.txt
5678 >~/b.txt
I would be grateful for any help in understanding why my construct isn't working and what changes might get it to work.
[…] the remaining terms (nnnn >~/...) seem to get lumped together as if they were a single parameter, […]
Not exactly. The line actually gets split on whitespace (or whatever $IFS specifies), but the problem is that the redirection operator > cannot be taken from a shell variable. For example, this snippet:
gt='>'
echo $gt foo.txt
will print > foo.txt, rather than printing a newline to foo.txt.
And you'll have similar problems with various other shell metacharacters, such as quotation marks.
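For example, quotation marks stored in a variable are treated as literal text:
cmd='echo "hello world"'
$cmd            # prints: "hello world"   (the quotes come out literally)
eval "$cmd"     # prints: hello world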
What you need is the eval builtin, which takes a string, parses it as a shell command, and runs it:
pbpaste | egrep -o [^$]+ | while IFS= read -r LINE; do eval "$LINE"; done
(The IFS= and -r and the double-quotes around $LINE are all to prevent any other processing besides the processing performed by eval, so that e.g. whitespace inside quotation marks will be preserved.)
Another possibility, depending on the details of what you need, is simply to pipe the commands into a new instance of Bash:
pbpaste | egrep -o [^$]+ | bash
Edited to add: For that matter, it occurs to me that you can pass everything to eval in a single batch; just as you can (per your comment) write pbpaste | bash, you can also write eval "$(pbpaste)". That will support multiline while-loops and so on, while still running in the current shell (useful if you want it to be able to reference shell parameters, to set environment variables, etc., etc.).
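For instance, if the clipboard held a multi-line construct like the following (hypothetical contents), the batch form handles it, whereas a line-by-line loop would not:
# hypothetical clipboard contents:
#   for f in ~/a.txt ~/b.txt; do
#       cat "$f"
#   done
eval "$(pbpaste)"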