I am trying to create a script to break a file down into 24 files. The file "infoband.dat" contains the data of 24 bands that I want to plot, but rather than listing each band separately, it first lists the 1st point of every band, then the 2nd point of every band, and so on.
My script is supposed to read the file line by line, counting to 24 over and over until the end of the file. On the first iteration of the for loop it creates a file with the first line of each 24-line chunk, and it does that successfully. But the second iteration doesn't even start. What is breaking the loop?
#!/bin/sh
grep frequency band.yaml > infoband.dat
contadora=0
for i in {1..24} # loop to create the band file, 24 is the no. of bands
do
    contadora=$((contadora+1))
    contadorb=0
    contadorc=0
    while read line
    do
        contadorb=$((contadorb+1))
        if [ $contadorb -eq 25 ]
        then
            contadorb=1
            contadorc=$((contadorc+1))
        fi
        if [ $contadora -eq $contadorb ]
        then
            echo $contadora $contadorb $contadorc "$line" >> band_$contadora.dat
        fi
    done < infoband.dat
    echo "file of the band " $contadora "is finished"
done
Update: I got the code working using a different approach (the variable contadorc is useless, by the way):
#!/bin/sh
grep frequency band.yaml > infoband.dat
nband=24
contadorb=0
contadorc=0
while read line
do
    contadorb=$((contadorb+1))
    if [ $contadorb -eq $((nband+1)) ]
    then
        contadorb=1
        contadorc=$((contadorc+1))
    fi
    echo " "$contadorb" "$line" punto_q $contadorc" >> test_infoband.dat
done < infoband.dat
for i in `seq 1 $nband`
do
    echo $i $nband
    grep " $i " test_infoband.dat > banda_$i.dat
done
/bin/sh doesn't do brace expansion, so your loop only has one iteration, in which i is set to the literal string {1..24}. Either change the shebang to #!/bin/bash and/or run the script with bash, or use
for i in $(seq 1 24)
(assuming your system has the seq command, otherwise you may need to just hard-code the list, or use a while loop to explicitly increment and test the value of i).
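A minimal sketch of that last option, a plain-sh while loop with an explicit counter (the body is whatever your loop currently does):
i=1
while [ "$i" -le 24 ]
do
    # ... per-band work goes here ...
    i=$((i+1))
done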
Did you try using the command "split"?
split -l 24 infoband.dat
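Note that split -l 24 writes consecutive 24-line chunks (one file per q-point, given the layout described in the question), not one file per band. If the goal is really one file per band, i.e. every 24th line going to the same file, a single awk pass can de-interleave the data; a sketch, assuming 24 bands and the band_N.dat naming from the question:
awk '{ print > ("band_" ((NR - 1) % 24 + 1) ".dat") }' infoband.dat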
I have, for example, a user with a set of numbers. How can I make bash add them together?
Example: in one go the user enters
(How many numbers they enter is up to them and is unknown in advance.)
bash file 3 1 5 2 2 4
How can I make bash return 17 directly from that example?
I tried
#!/usr/bin/env sh
sum=0
while read number && [ -n "$number" ]; do
    sum=$((sum + ${number/#-}))
    echo "$sum"
done
But this is not clean, and it returns
$ bash file
3
3
1
4
5
9
2
11
2
13
4
17
I want the user to enter all their numbers in one go, rather than sitting there typing in more and more numbers.
Instead of having them execute the command like this:
bash file
1
3
4
etc
I want to do it in one go:
bash file 1 3 5 6
How?
You can loop through all script arguments and calculate the sum:
#!/usr/bin/env sh
sum=0
for i in "$#"; do
sum=$(( $sum + $i ))
done
echo $sum
Running with your example:
$ bash sum 3 1 5 2 2 4
17
I wrote some simple code in a bash script.
Code 1
for((i=1;i<=10;i++))
do
echo $i
i=$((i+1))
echo $i
i=$((i+2))
done
output of Code 1
1
2
5
6
9
10
Code 2
for i in {1..10}
do
echo $i
i=$((i+1))
echo $i
i=$((i+2))
done
output of Code 2
1
2
2
3
3
4
4
5
5
6
6
7
7
8
8
9
9
10
10
11
I'm just wondering: why are the outputs not the same?
Thanks in advance.
With in, the variable iterates over the list. You can change its value in the loop, but when the next iteration starts, the next value will be assigned to it, regardless of what value you assigned to it. (And I can't imagine any other behaviour: should the shell try to guess how far in the list you want to jump? What if the value is repeated, or not present in the list at all?)
With the C-style for, the variable is initialized, and at each iteration, its value is changed and the condition checked. There's no list of values, only the condition to end the loop.
These are not shorthand for each other: They're completely different code, and expected to behave in different ways.
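A minimal illustration of the contrast (the values are only for demonstration): in the C-style form, reassigning the variable inside the body does change where the loop goes next.
for ((i=1; i<=6; i++)); do
    echo $i
    i=$((i+1))    # together with the loop's i++ this skips ahead, printing 1 3 5
done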
{1..10} is just shorthand for 1 2 3 4 5 6 7 8 9 10; when you run for i in 1 2 3 4 5 6 7 8 9 10, you're explicitly assigning those exact values to i in turn, overwriting whatever else was previously there.
By contrast, for ((i=1; i<=10; i++)) is providing three separate statements: An initializer (i=1), telling it what to do to start the loop; a check (i<=10) to tell it how to determine whether the loop is finished; and an update (i++), telling it what to do between loop iterations. These are completely arbitrary commands, and you can put any arithmetic expression you want in these positions.
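For reference, a sketch of Code 2 with the brace expansion written out explicitly; it behaves exactly like the original:
for i in 1 2 3 4 5 6 7 8 9 10
do
    echo $i
    i=$((i+1))
    echo $i
    i=$((i+2))    # overwritten by the next list value at the top of the loop
done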
The for((i=1;i<=10;i++)) loop could be rewritten as follows:
i=1                         # for loop's 'i=1'
while [[ "${i}" -le 10 ]]   # for loop's 'i<=10'
do
    echo $i
    i=$((i+1))
    echo $i
    i=$((i+2))
    ((i++))                 # for loop's 'i++'
done
This generates ... you guessed it ...
1
2
5
6
9
10
Charles Duffy has already expanded the for i in {1..10} loop to show how that behaves.
This program is supposed to open 3 vim terminals with 3 files opened in each one. I have a startup.txt file in each subfolder with the names of the files that I want to open on startup (one name on each line).
Problems that need to be fixed:
How can I make the command on line 22 work correctly? It gives me the following errors when I try to run it (it does open the first file from each startup.txt, though):
Error:
/home/george/bin/project: line 17: vsp: command not found
/home/george/bin/project: line 19: vsp: command not found
/home/george/bin/project: line 17: vsp: command not found
/home/george/bin/project: line 19: vsp: command not found
Updated code:
1 #!/bin/bash
2
3 ADDR_FILE=~/Documents/CEDA #address to the file
4 ADDR_CEDA=$ADDR_FILE/ceda_lib #address for ceda_lib
5 ADDR_GENERAL=$ADDR_FILE/general #address for general
6 ADDR_CLI=$ADDR_FILE/CLI #address for CLI
7
8 function getFile
9 {
10 awk 'NR=='$1 $2/startup.txt
11 }
12
13 for i in $ADDR_CEDA $ADDR_GENERAL $ADDR_CLI; do
14 CMD_2=""
15 CMD_3=""
16 if [ $(getFile 2 $i) ]; then #checks if line 2 isn't blank
17 CMD_2=-c "vsp $i/$(getFile 2 $i)"
18 if [ $(getFile 3 $i) ]; then #checks if line 3 isn't blank
19 CMD_3=-c "vsp $i/$(getFile 3 $i)"
20 fi
21 fi
22 gnome-terminal -e "vim $i/$(getFile 1 $i) $CMD_2 $CMD_3"
23 done
Example startup.txt:
file1.cpp
file2.hpp
file3.hpp
Please note that I'm a beginner in bash scripting. Any general advice would be greatly appreciated.
I'm not sure what you are trying to do with vim exactly as I don't use that syntax, but here are some hints.
Your getFile() function can easily be replaced with a simple sed command. If a file contains:
line 1
line 2
line 3
Try running:
sed '2!d' file
and you will see it deletes all lines other than line 2. So, your vim command starts to look like this:
vim -c vsp $i/$(sed '2!d' $i/startup.txt)
If you want three xterms, you need to start three, and tell each one to execute a new vim. If you want them all running at once, you will need to background them with & at the end. That means your script is starting to look like this:
for i in "file1" "file2" "file3" ; do
xterm -e "vim -c vsp $i/$(sed '2!d' $i) ... " &
done
Working code:
#!/bin/bash

ADDR_FILE=~/Documents/CEDA #address to the file
ADDR_CEDA=$ADDR_FILE/ceda_lib #address for ceda_lib
ADDR_GENERAL=$ADDR_FILE/general #address for general
ADDR_CLI=$ADDR_FILE/CLI #address for CLI
ADDRESSES="$ADDR_CEDA $ADDR_GENERAL $ADDR_CLI"

if [ $1 ]; then #checks for parameters
    if [ $1 = "-f" ] || [ $1 = "--files" ]; then
        for i in $ADDRESSES; do
            nautilus $i &
        done
        exit 0 #end script
    fi
fi

function getFile
{
    awk 'NR=='$1 $2/startup.txt
}

for i in $ADDRESSES; do
    CMD_2=""
    CMD_3=""
    if [ $(getFile 2 $i) ]; then #checks if line 2 isn't blank
        CMD_2="-c \"vsp $i/$(getFile 2 $i)\""
        if [ $(getFile 3 $i) ]; then #checks if line 3 isn't blank
            CMD_3="-c \"vsp $i/$(getFile 3 $i)\""
        fi
    fi
    gnome-terminal -e "vim $i/$(getFile 1 $i) $CMD_2 $CMD_3"
done
I will add more to it over time but it does the bare minimum of what I wanted it to do :D Thanks for all your help guys!
PS: I've edited my question quite some times.
I have a string with 3000 elements (NOT in series) in bash,
sections='1 2 4 ... 3000'
I am trying to split this string into x chunks of length n. I want x to be typically between 3 and 10. The chunks may not all be of the same length.
Each chunk is the input to a job.
Looking at https://unix.stackexchange.com/questions/122499/bash-split-a-list-of-files
and using bash arrays, my first attempt looks like this:
#! /bin/bash
nArgs=10
nChunkSize=10
z="0 1 2 .. 1--"
zs=(${z// / })
echo ${zs[@]}
for i in $nArgs; do
echo "Creating argument: "$i
startItem=$i*$nChunkSize
zArg[$i] = ${zs[@]:($startItem:$chunkSize}
done
echo "Resulting args"
for i in $nArgs; do
echo "Argument"${zArgs[$1]}
done
The above is far from working, I'm afraid. Any pointers on the ${zs[@]:($startItem:$chunkSize} syntax?
For an input of 13 elements:
z='0 1 2 3 4 5 6 7 8 10 11 12 15'
nChunks=3
and nArgs=4
I would like to obtain an array zs with 3 elements, with content
zs[0] = '0 1 2 3'
zs[1] = '4 5 6 7'
zs[2] = '8 10 11 12 15'
Each zs will be used as arguments to subsequent jobs.
First note: This is a bad idea. It won't work reliably with arbitrary (non-numeric) contents, as bash doesn't have support for nested arrays.
output=( )
sections_str='1 2 4 5 6 7 8 9 10 11 12 13 14 15 16 3000'
batch_size=4
read -r -a sections <<<"$sections_str"
for ((i=0; i<${#sections[@]}; i+=batch_size)); do
    current_pieces=( "${sections[@]:i:batch_size}" )
    output+=( "${current_pieces[*]}" )
done
declare -p output # to view your output
Notes:
zs=( $z ) is buggy. For example, any * inside your list will be replaced with a list of filenames in the current directory. Use read -a to read into an array in a reliable way that doesn't depend on shell configuration other than IFS (which can be scoped to just that one line with IFS=' ' read -r -a).
${array[@]:start:count} expands to up to count items from your array, starting at position start.
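A possible usage sketch for the resulting output array, assuming a hypothetical run_job.sh that takes one chunk's numbers as separate arguments:
for chunk in "${output[@]}"; do
    ./run_job.sh $chunk   # hypothetical job script; $chunk deliberately unquoted so each number becomes its own argument
done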
I have 2 scripts, #1 and #2. Each works OK by itself. I want to read a 15-row file, row by row, and process it. Script #2 selects rows. Row 0 is indicated as firstline=0, lastline=1. Row 14 would be firstline=14, lastline=15. I see good results from echo. I want to do the same with script #1, but I can't get my head around nesting them correctly. Code below.
#!/bin/bash
# script 1
filename=slash
firstline=0
lastline=1
i=0
exec <${filename}
while read ; do
    i=$(( $i + 1 ))
    if [ "$i" -ge "${firstline}" ] ; then
        if [ "$i" -gt "${lastline}" ] ; then
            break
        else
            echo "${REPLY}" > slash1
            fold -w 21 -s slash1 > news1
            sleep 5
        fi
    fi
done

# script2
firstline=(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
lastline=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)
for ((i=0; i<${#firstline[@]}; i++))
do
    echo ${firstline[$i]} ${lastline[$i]};
done
Your question is very unclear, but perhaps you are simply looking for some simple function calls:
#!/bin/bash
script_1() {
    filename=slash
    firstline=$1
    lastline=$2
    i=0
    exec <${filename}
    while read ; do
        i=$(( $i + 1 ))
        if [ "$i" -ge "${firstline}" ] ; then
            if [ "$i" -gt "${lastline}" ] ; then
                break
            else
                echo "${REPLY}" > slash1
                fold -w 21 -s slash1 > news1
                sleep 5
            fi
        fi
    done
}

# script2
firstline=(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
lastline=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)
for ((i=0; i<${#firstline[@]}; i++))
do
    script_1 ${firstline[$i]} ${lastline[$i]};
done
Note that reading the file this way is extremely inefficient, and there are undoubtedly better ways to handle this, but I am trying to minimize the changes from your code.
Update: Based on your later comments, the following idiomatic Bash code that uses sed to extract the line of interest in each iteration solves your problem much more simply:
Note:
- If the input file does not change between loop iterations, and the input file is small enough (as it is in the case at hand), it's more efficient to buffer the file contents in a variable up front, as is demonstrated in the original answer below.
- As tripleee points out in a comment: If simply reading the input lines sequentially is sufficient (as opposed to extracting lines by specific line numbers), then a single, simple while read -r line; do ... # fold and output, then sleep ... done < "$filename" loop is enough (see the sketch after the code below).
# Determine the input filename.
filename='slash'
# Count its number of lines.
lineCount=$(wc -l < "$filename")
# Loop over the line numbers of the file.
for (( lineNum = 1; lineNum <= lineCount; ++lineNum )); do
    # Use `sed` to extract the line with the line number at hand,
    # reformat it, and output to the target file.
    fold -w 21 -s <(sed -n "$lineNum {p;q;}" "$filename") > 'news1'
    sleep 5
done
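For completeness, a sketch of the simpler sequential variant mentioned in the note above (assuming every line should be processed in order, without selecting by line number):
while read -r line; do
    printf '%s\n' "$line" | fold -w 21 -s > news1
    sleep 5
done < "$filename"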
A simplified version of what I think you're trying to achieve:
#!/bin/bash
# Split fields by newlines on input,
# and separate array items by newlines on output.
IFS=$'\n'
# Read all input lines up front, into array ${lines[@]}
# In terms of your code, you'd use
# read -d '' -ra lines < "$filename"
read -d '' -ra lines <<<$'line 1\nline 2\nline 3\nline 4\nline 5\nline 6\nline 7\nline 8\nline 9\nline 10\nline 11\nline 12\nline 13\nline 14\nline 15'
# Define the arrays specifying the line ranges to select.
firstline=(0 1 2 3 4 5 6 7 8 9 10 11 12 13 14)
lastline=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15)
# Loop over the ranges and select a range of lines in each iteration.
for ((i=0; i<${#firstline[@]}; i++)); do
    extractedLines="${lines[*]: ${firstline[i]}: 1 + ${lastline[i]} - ${firstline[i]}}"
    # Process the extracted lines.
    # In terms of your code, the `> slash1` and `fold ...` commands would go here.
    echo "$extractedLines"
    echo '------'
done
Note:
The name of the array variable filled with read -ra is lines; ${lines[@]} is Bash syntax for returning all array elements as separate words (${lines[*]} also refers to all elements, but with slightly different semantics), and this syntax is used in the comments to illustrate that lines is indeed an array variable. (Note that if you were to use simply $lines to reference the variable, you'd implicitly get only the item with index 0, which is the same as ${lines[0]}.)
<<<$'line 1\n...' uses a here-string (<<<) to read an ad-hoc sample document (expressed as an ANSI C-quoted string ($'...')) in the interest of making my example code self-contained.
As stated in the comment, you'd read from $filename instead:
read -d '' -ra lines <"$filename"
extractedLines="${lines[*]: ${firstline[i]}: 1 + ${lastline[i]} - ${firstline[i]}}" extracts the lines of interest; ${firstline[i]} references the current element (index i) from array ${firstline[#]}; since the last token in Bash's array-slicing syntax
(${lines[*]: <startIndex>: <elementCount>}) is the count of elements to return, we must perform a calculation to determine the count, which is what 1 + ${lastline[i]} - ${firstline[i]} does.
By virtue of using "${lines[*]...}" rather than "${lines[@]...}", the extracted array elements are joined by the first character in $IFS, which in our case is a newline ($'\n') (when extracting a single line, that doesn't really matter).
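A small standalone illustration of that joining behaviour, with IFS set to a newline as in the script above:
IFS=$'\n'
lines=(alpha beta gamma)
slice="${lines[*]:0:2}"   # "alpha" and "beta", joined by a newline
printf '%s\n' "$slice"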