Reading the second (or subsequent) line in Korn Shell - ksh

My OS is AIX (7200-05-03-2136) and my Korn Shell version is ksh88 (Version M-11/16/88f), but I think my question doesn't depend on versions.
Consider a single-line output of a command. I can easily put this into a variable via "read":
command | read variable
Now, suppose the command has a two-line output. Is there a way to capture only the second line into a variable? It would be easy to use an external program, e.g.:
command | sed '1d' | read variable
But I would like to avoid that and find a pure shell solution. I have tried the following variations:
command | { read -r junk ; read -r variable }
command | { IFS=\n read junk ; read variable }
command | IFS='\n' read junk variable
But none of these work.

Assign everything to a variable first. Next you can print or assign the second line.
variable=$(echo "line 1
line 2")
echo "${variable#*$'\n'}"
# or
variable="${variable#*$'\n'}"

Can't manage to give two arguments from a file to a bash script: command not found

I'm new to bash scripting; it is interesting, but somehow I'm struggling with everything.
I have a file, separated by tabs ("\t"), with two pieces of information on each line: a string and a number.
I'd like to use both values from each line in a bash script in order to look them up in another file.
I'm not even there yet; I'm struggling to pass the two columns as two separate arguments to bash.
#/!bin/bash
FILE="${1}"
while read -r line
do
READ_ID_WH= "echo ${line} | cut -f 1"
POS_HOTSPOT= echo '${line} | cut -f 2'
echo "read id is : ${READ_ID_WH} with position ${POS_HOTSPOT}"
done < ${FILE}
and my file is:
ABCD\t1120
ABC\t1121
I'm launching my command with
./script.sh file_of_data.tsv
What I finally get is:
script.sh: line 8: echo ABCD 1120 | cut -f 1: command not found
I tried a lot of possibilities found by browsing SO, but I can't manage to split my line into two arguments to be used separately in my script :(
Hope you can help me :)
Best,
The quotes cause the shell to look for a command whose name is the string between the quotes.
Apparently you are looking for
while IFS=$'\t' read -r id hotspot; do
echo "read id is: $id with position $hotspot"
done <"$1"
You generally want to avoid capturing things into variables you only use once, but the syntax for that is
id=$(echo "$line" | cut -f1)
See also Correct Bash and shell script variable capitalization and When to wrap quotes around a shell variable?. You can never have whitespace on either side of the = assignment operator (or rather, incorrect whitespace changes the semantics to something you probably don't want).
You have a space after the equals sign on lines 5 and 6, so it thinks you are looking for an executable file named echo ABCD 1120 | cut -f 1 and asking to execute it while assigning the variable READ_ID_WH to the empty string, rather than assigning the string you want to the variable.
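To see the three cases side by side (a minimal illustration, not code from your script):
var=hello    # assignment: var is set to "hello"
var= hello   # runs the command "hello" with var="" in its environment (your case)
var =hello   # runs the command "var" with the argument "=hello"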

Execute a line from a script in the current shell environment

I'd like to execute a range of lines from a shell script in a running shell, using that shell's environment.
For example, in a script that sets a variable to something I don't want to type (say an API key or MAC address):
line
1 #!/bin/zsh
2
3 # Set the MAC address:
4 MAC="12-34-56-78-90-ab-cd"
...
...and in my shell, I'd like to grab line 4 above, run it, and have $MAC exist in that environment.
I've tried sed -n '4p' script.zsh | zsh but that doesn't affect the current shell I'm working in:
$ MAC="this is not a MAC address"
$ sed -n '4p' script.zsh | zsh
$ echo $MAC
this is not a MAC address
I could just copy and paste, but I'd like a solution I can use without touching my mouse - or when I don't have a mouse available.
You could combine your sed command with a process substitution and source it:
source <(sed -n '4p' script.zsh)
though you might want to use a pattern match for the print line in case the line numbers shift.
source <(sed -n '/^MAC=/p' script.zsh)

Trying to run a few awk commands stored in a file

I have a file with the commands below
cat /some/dir/with/files/file1_name.tsv|awk -F "\\t" '{print $21$19$23$15}'
cat /some/dir/with/files/file2_name.tsv|awk -F "\\t" '{print $2$13$3$15}'
cat /some/dir/with/files/file3_name.tsv|awk -F "\\t" '{print $22$19$3$15}'
When I loop through the file to run the commands, I get the error below
cat file | while read line; do $line; done
cat: invalid option -- 'F'
Try `cat --help' for more information.
The commands are not being executed the way you intend. Since you are reading the file line by line (for whatever reason), you could pass each line to the interpreter directly, as below:
#!/bin/bash
# ^^^^ for running under 'bash' shell
while IFS= read -r line
do
printf "%s" "$line" | bash
done <file
But this has the overhead of forking a new process for each line of the file. If the commands in the file are harmless and safe to run in one shot, you can simply do
bash file
and be done with it.
Also, when using awk, pass the file name directly to each invocation and avoid the useless cat:
awk -F "\\t" '{print $21$19$23$15}' file1_name.tsv
You are expecting the pipe (|) symbol to act as you are accustomed to, but it doesn't. To help you understand, try this:
A="ls / | grep e" # Loads a variable with a command with pipe
$A # Does not work
eval "$A" # Works
When expanding a variable without using eval, expansion and word splitting occur after the shell has already interpreted redirections and pipes, so your pipe symbol is seen as just a literal character.
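You can verify this by printing what the unquoted expansion actually produces; printf '%s\n' simply prints each argument it receives on its own line:
A="ls / | grep e"
printf '%s\n' $A   # prints ls, /, |, grep and e as five separate words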
Some options you have:
A) Avoid piping, by passing the file name as an argument
awk -F "\\t" '{print $21$19$23$15}' /some/dir/with/files/file1_name.tsv
B) Use eval as shown above, the potential security implications of which I would suggest you research.
C) Put only the arguments in the file and parse it, avoiding the use of eval, something like:
# Assumes one command's arguments per line, separated by spaces
while IFS=" " read -r -a arguments; do
    awk "${arguments[@]}"
done < file
D) Implement the parsing of your data files in Bash instead of awk, and use your configuration file to specify output without the need for expanding anything (e.g. by specifying fields to print separated by spaces).
The first three approaches involve some form of interpretation of outside data as code, and that comes with risks if the file used as input cannot be guaranteed safe. Approach C might be considered a bit better in that regard, but since the command being called is awk, an actual awk program is still passed along, so an attacker (or careless user) with write access to your file can make your script do anything awk can do.
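For completeness, a rough sketch of approach D, assuming a configuration file that lists a data file followed by 1-based field numbers, separated by spaces (the file names and format here are hypothetical):
#!/bin/bash
# config.txt line example: file1_name.tsv 21 19 23 15
while read -r datafile fields; do
    while IFS=$'\t' read -r -a cols; do
        out=""
        for f in $fields; do
            out+="${cols[f-1]}"   # 1-based field number -> 0-based array index
        done
        printf '%s\n' "$out"
    done < "$datafile"
done < config.txt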

How to execute lines of text on the clipboard as bash commands

I'm working with Mac OS X's pbpaste command, which returns the clipboard's contents. I'd like to create a shell script that executes each line returned by pbpaste as a separate bash command. For example, let's say that the clipboard's contents consists of the following lines of text:
echo 1234 >~/a.txt
echo 5678 >~/b.txt
I would like a shell script that executes each of those lines, creating the two files a.txt and b.txt in my home folder. After a fair amount of searching and trial and error, I've gotten to the point where I'm able to assign individual lines of text to a variable in a while loop with the following construct:
pbpaste | egrep -o [^$]+ | while read l; do echo $l; done
which sends the following to standard out, as expected:
echo 1234 >~/a.txt
echo 5678 >~/b.txt
Instead of simply echoing each line of text, I then try to execute them with the following construct:
pbpaste | egrep -o [^$]+ | while read l; do $l; done
I thought that this would execute each line (thus creating two text files a.txt and b.txt in my home folder). Instead, the first term (echo) seems to be interpreted as the command, and the remaining terms (nnnn >~/...) seem to get lumped together as if they were a single parameter, resulting in the following being sent to standard out without any files being created:
1234 >~/a.txt
5678 >~/b.txt
I would be grateful for any help in understanding why my construct isn't working and what changes might get it to work.
[…] the remaining terms (nnnn >~/...) seem to get lumped together as if they were a single parameter, […]
Not exactly. The line actually gets split on whitespace (or whatever $IFS specifies), but the problem is that the redirection operator > cannot be taken from a shell variable. For example, this snippet:
gt='>'
echo $gt foo.txt
will print > foo.txt, rather than printing a newline to foo.txt.
And you'll have similar problems with various other shell metacharacters, such as quotation marks.
What you need is the eval builtin, which takes a string, parses it as a shell command, and runs it:
pbpaste | egrep -o [^$]+ | while IFS= read -r LINE; do eval "$LINE"; done
(The IFS= and -r and the double-quotes around $LINE are all to prevent any other processing besides the processing performed by eval, so that e.g. whitespace inside quotation marks will be preserved.)
Another possibility, depending on the details of what you need, is simply to pipe the commands into a new instance of Bash:
pbpaste | egrep -o [^$]+ | bash
Edited to add: For that matter, it occurs to me that you can pass everything to eval in a single batch; just as you can (per your comment) write pbpaste | bash, you can also write eval "$(pbpaste)". That will support multiline while-loops and so on, while still running in the current shell (useful if you want it to be able to reference shell parameters, to set environment variables, etc., etc.).
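For instance, a multi-line construct that the line-by-line loop cannot handle runs fine this way (printf stands in for pbpaste here so the example can be tested anywhere):
eval "$(printf 'for f in 1234 5678; do\n  echo "$f"\ndone')"   # prints 1234, then 5678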

How do I iterate over each line in a file with Bash?

Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself with the same problem; this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while IFS= read -r line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE|while read line; do # 'line' is the variable name
echo "$line" # do something here
done
or, avoiding the useless cat (and the subshell the pipeline creates):
while read line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
So, some really good (possibly better) answers have been provided already. But given the phrasing of the original question, which asks for a Bash for loop, it amazed me that nobody mentioned a solution that changes the field separator IFS. It's a pure Bash solution, just like the accepted read answer:
old_IFS=$IFS
IFS=$'\n'   # $'\n' is a real newline; '\n' would be the two characters \ and n
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script and your question was ultimately not script-related).
Many of the other answers, including the accepted one, have multiple lines unnecessarily. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in how the $'...' syntax is handled in this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command and redirect standard input to your file, inside a function scope where IFS is set to $'\n', and use -E when using echo.
processFile() {              # Function scope to prevent overwriting IFS globally
    file="$1"                # Any file that exists
    local IFS=$'\n'          # Keeps leading spaces and tabs in each line intact
    while read -r line; do   # read exits non-zero at EOF; -r preserves backslashes
        echo -E "$line"      # -E prints backslashes literally instead of mangling them
    done < "$file"           # Input redirection lets us read the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use that to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file, where fd defaults to standard input (a.k.a. file descriptor 0). For a compound command such as a while loop, the redirection goes after the closing done keyword; unlike with simple commands, bash does not accept a redirection placed in front of the loop.
while <condition>; do
<body>
done < /path/to/file
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var..., where -r preserves the \ (backslash) character instead of treating it as an escape character, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input in, but we only need one to read an entire line. Along with this, to preserve any backslashes in any output from echo, you will likely need to use the -E flag to disable the interpretation of backslash escapes. If you have any indentation (spaces or tabs) you want to keep, you will need to temporarily change the IFS (Internal Field Separator) variable to just a newline, $'\n'; normally it is set to space, tab, and newline ($' \t\n').
main() {
    local IFS=$'\n'
    read -r line
    echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way that I know of to determine when you've finished reading a file with read: check the exit value of read. If the exit value of read is 0, then we successfully read a line; if it is 1 or higher, then we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
processFile() {
    # Could be any file you want, hardcoded or dynamic
    file="$1"
    local IFS=$'\n'
    while read -r line; do
        # Process line here
        echo -E "$line"
    done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
If I am executing a command and want to cut its output, but the output spans multiple lines, I find it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line, which can be easier to process.
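For instance (printf simulates a multi-line command here; the field choice is illustrative):
echo $(printf 'one\ntwo\nthree') | cut -d' ' -f2   # prints: two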
My opinion is that "cut" uses '\n' as its default delimiter.
If you want to use cut, I have two ways:
cut -d^M -f1 file_cut
I enter the ^M by typing Ctrl+V and then pressing Enter. Another way is
cut -c 1- file_cut
Does that help?
