When I copy multiple paragraphs of data like
line 1
line 2
line 3
to my clipboard on my Mac, I can access its elements via AppleScript:
on run {input, parameters}
set theClip to input as text
set value of variable "Empfänger" of front workflow to paragraph 1 of theClip
set value of variable "Betreff" of front workflow to paragraph 2 of theClip
set value of variable "Textkörper" of front workflow to paragraph 3 of theClip
end run
and write them to Automator variables. Can I do the same thing in a shell script? When I run
for f in "$#"
do
echo "$f"
done
it seems like everything is stored in $1.
Actually, I wouldn't mind using a configurable marker such as {NEXT} as the separator instead of paragraphs.
Thank you in advance!
Your example seems happy to treat a line as a paragraph, so I'll do the same. If you copy your three lines of sample data to your clipboard by selecting them below and pressing ⌘C:
line1
line2
line3
and you want to separate them into shell variables, as you say:
para1=$(pbpaste | sed -ne '1p')
and check:
echo "$para1"
line1
Likewise:
para2=$(pbpaste | sed -ne '2p')
para3=$(pbpaste | sed -ne '3p')
Or, if you mean you want the lines in an array:
paras=( $(pbpaste) )
echo ${paras[0]}
line1
echo ${paras[1]}
line2
Or, if you want to loop over the elements:
for p in "${paras[@]}" ; do echo "$p"; done
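And since you mentioned a configurable marker like {NEXT}: here is a rough sketch using plain parameter expansion (the separator and the variable names are only examples, and it assumes the clipboard contains exactly two markers):
sep='{NEXT}'
clip=$(pbpaste)
part1=${clip%%"$sep"*}   # everything before the first {NEXT}
rest=${clip#*"$sep"}     # everything after the first {NEXT}
part2=${rest%%"$sep"*}   # between the first and second {NEXT}
part3=${rest#*"$sep"}    # everything after the second {NEXT}
echo "$part1"; echo "$part2"; echo "$part3"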
Related
I have a simple bash script and I don't understand its output.
My script
#!/bin/bash
string=$(head -n 1 test.txt)
IFS=":"
read -r pathfile line <<< "$string"
echo "left"$line"right"
And my test.txt
filepath:file content
others lines
...
I get this output on the console:
rightfile content
The problem does not occur when the file has only 1 line.
I don't know why I don't get left, then the value, then right in the result.
Your input file has MSWin line ends (\x0D\x0A). Therefore, \x0D becomes part of $line, and when it is printed it moves the cursor back to the beginning of the line, so "right" overwrites the start of the output.
Run dos2unix or fromdos on the input file to fix it.
BTW, you don't need to quote left and right. Quoting the variable might be needed, though.
echo left"$line"right
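If you cannot convert the file, here is a sketch of stripping the carriage return inside the script itself (my own suggestion, not part of the original answer):
string=$(head -n 1 test.txt)
string=${string%$'\r'}   # drop a trailing carriage return, if present
IFS=":" read -r pathfile line <<< "$string"
echo left"$line"right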
I have a file that contains following information in a tab separated manner:
abscdfr 2 5678
bgbhjgy 7 8756
ptxfgst 5 6783
Let's call this file A; it contains 2000 lines.
I also have another file, B, written in Ruby,
that takes these values as command-line input:
f_id = ARGV[0]
lane = ARGV[1].to_i
sample_id = ARGV[2].to_i
puts " #{f_id}_#{lane}_#{sample_id}.bw"
I execute file B in Ruby by providing the information from file A:
./fileB.rb abscdfr 2 5678
I want to know how I can pass the values of file A as input to file B, one line after another.
If it were one value it would be easy, but I am confused by the three values.
Kindly help me write a wrapper around these two files, either in bash or Ruby.
Thank you
The following command will do the job in bash:
while read line; do ./fileB.rb $line; done < fileA
This reads each line into line. Then it runs ./fileB.rb $line for each line. $line gets replaced before the command line is evaluated, so each word in every line is passed as its own argument; it is important that there is no quoting like "$line" here. read reads from STDIN and would normally wait for user input, but with < fileA the content of fileA is redirected to STDIN so that read takes its input from there.
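If you prefer to name the three columns explicitly (assuming every line of fileA really has exactly three whitespace-separated fields), a variant of the same loop could be:
while read -r f_id lane sample_id; do
    ./fileB.rb "$f_id" "$lane" "$sample_id"
done < fileA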
You could use a little bash script to loop through each line in the file and output the contents as arguments to another script.
while read line; do
eval "./fileB.rb $line"
done < fileA
This will evaluate the line in the quotes as if you had typed it into the shell yourself.
You can also use a Ruby one-liner:
ruby -ne 'system( "./fileB.rb #{$_}" )' < fileA
Explanation:
-e allows us to specify the script on the command line.
-n tells Ruby to read the input (or input file) line by line, as in a while loop (somewhat like sed -n or awk).
$_ Ruby stores the current line in the $_ variable by default.
Here is my code
michal@argon:~$ cat test.txt
    1
    2
    3
    4
    5
michal@argon:~$ cat test.txt | while read line;do echo $line;done > new.txt
michal@argon:~$ cat new.txt
1
2
3
4
5
I don't know why the command echo $line filtered out the space characters; I want test.txt and new.txt to be exactly the same.
Please help, thanks.
Several issues with your code:
$parameter outside of " ". Don't.
read uses $IFS to split the input line into words; you have to disable this.
UUOC (useless use of cat).
To summarize:
while IFS= read line ; do echo "$line" ; done < test.txt > new.txt
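To verify that the copy really matches the original, a quick check (just a suggestion, not part of the original answer):
cmp test.txt new.txt && echo "files are identical"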
While the provided answers solve your task at hand, they do not explain why bash and echo "forgot" to print the spaces you have in your string. Let's first make a small example to show the problem. I simply run the commands in my shell; no real script is needed for this one:
mogul@linuxine:~$ echo          something
something
mogul@linuxine:~$ echo something
something
Two echo commands that both print something right at the beginning of the line, even though the first one had plenty of space between echo and something. And now with quoting:
mogul#linuxine:~$ echo " something"
something
Notice that here echo printed the leading spaces before something.
If we stuff the string into a variable, it works exactly the same:
mogul@linuxine:~$ str="          something"
mogul@linuxine:~$ echo $str
something
mogul@linuxine:~$ echo "$str"
          something
Why?
Because the shell, bash in your case, removes the space between arguments to commands before passing them on to the subprocess. By quoting the strings we tell bash that we mean this literally and that it should not mess with our strings.
This knowledge will become quite valuable if you are going to handle files with funny names, like "this is a file with blanks in its name.txt"
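A tiny sketch of that situation (the filename is just the example above):
touch "this is a file with blanks in its name.txt"
f="this is a file with blanks in its name.txt"
ls -l $f      # unquoted: bash splits this into nine words, so ls complains about missing files
ls -l "$f"    # quoted: ls receives the single filename and lists it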
Try this --
$ oldIFS="$IFS"; IFS=""; while read line;do echo $line >> new.txt ;done < test.txt; IFS="$oldIFS"
$ cat new.txt
1
2
3
4
5
var=ab
echo -n "$var"
Output: ab
var=abc
echo "$var"
Output: ababc
I want to delete the first ab and replace it by abc
How would I do that?
Regards, intelinside
Perhaps you are looking for something like:
var=ab
echo -n "$var"
var=abc
echo -e "\r$var"
This doesn't actually delete anything, but merely moves the cursor to the beginning of the line and overwrites. If the text being written is too short, the content of the previous write will still be visible. You can either write spaces over the old text (very easy and portable):
printf "\r%-${COLUMNS}s" "$var"
or use some terminal escape sequences to delete the old text (not portable):
echo -e "\r$var\033[K"
to move to the beginning of the line, write new text, and then delete from the cursor to the end of the line. (This may not work, depending on the terminal.)
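As a small illustration of the overwrite trick (a toy progress counter, not from the original question):
for i in 1 2 3 4 5; do
    printf "\rstep %d of 5" "$i"   # \r returns to column 0, then the new text overwrites the old
    sleep 1
done
printf "\n"   # finish with a newline so the prompt starts on a fresh line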
Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself with the same problem; this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE|while read line; do # 'line' is the variable name
echo "$line" # do something here
done
or (see comment):
while read line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
So, some really good (possibly better) answers have been provided already. But looking at the phrasing of the original question, which wanted to use a bash for-loop, it amazed me that nobody mentioned a solution that changes the field separator IFS. It's a pure bash solution, just like the accepted read line answer.
old_IFS=$IFS
IFS=$'\n'
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script and your question was ultimately not script-related).
Many of the other answers, including the accepted one, have multiple lines unnecessarily. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in how the $ character is handled in this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration over the lines of a file, as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command and redirect standard input to your file inside a function scope where IFS is set to \n and use -E when using echo.
processFile() { # Function scope to prevent overwriting IFS globally
file="$1" # Any file that exists
local IFS="\n" # Allows spaces and tabs
while read -r line; do # Read exits with 1 when done; -r allows \
echo -E "$line" # -E allows printing of \ instead of gibberish
done < "$file" # Input redirection allows us to read the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use that to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file where fd defaults to standard input (a.k.a file descriptor 0). We can place this before or after our while loop.
while <condition>; do
<body>
done < /path/to/file
# or the non-traditional way
</path/to/file while <condition>; do
<body>
done
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var... where -r preserves the \ (backslash) character, instead of using it as an escape sequence character, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input in but we only need one to read an entire line. Along with this, to preserve any backslashes in any output from echo you will likely need to use the -E flag to disable the interpretation of backslash escapes. If you have any indentation (spaces or tabs), you will need to temporarily change the IFS (Input Field Separators) variable to only "\n"; normally it is set to " \t\n".
main() {
local IFS="\n"
read -r line
echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way, that I know of, to determine when you've finished reading a file with read: check the exit value of read. If the exit value of read is 0 then we successfully read a line, if it is 1 or higher then we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
processFile() {
# Could be any file you want hardcoded or dynamic
file="$1"
local IFS="\n"
while read -r line; do
# Process line here
echo -E "$line"
done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
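As an aside (not from the original answer): on bash 4 and newer, mapfile (a.k.a. readarray) can slurp all lines of a file into an array in one go, which is another way to iterate without cut:
mapfile -t lines < /path/to/file   # -t strips the trailing newline from each line
for line in "${lines[@]}"; do
    printf '%s\n' "$line"
done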
If I am executing a command and want to cut its output, but it has multiple lines, I found it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line, which can be easier to process.
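For instance, with a hypothetical three-line output (printf is used here only to fabricate sample lines):
echo $(printf 'one\ntwo\nthree\n') | cut -d' ' -f2
two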
My opinion is that "cut" uses '\n' as its default delimiter.
If you want to use cut, I have two ways:
cut -d^M -f1 file_cut
I type ^M by pressing Ctrl+V and then Enter. Another way is
cut -c 1- file_cut
Does that help?