Set a fixed numerical gap between alternate sorted numbers - bash

So I am working on a large text file made up of rows and rows of numbers;
below is just a short excerpt to help with the question, but it could even be a series of random incrementing numbers with all sorts of numerical gaps between the rows.
267
368
758
936
1248
1415
1739
1917
I am looking for a way to set a fixed numerical gap, such as 100, between every second pair of numbers, starting with the 2nd and 3rd numbers, whilst maintaining the numerical difference within each pair (this difference could be any number).
Such that if the numerical gap was set to 100 the above example would become:
267
368
# gap of 100
468
646
# gap of 100
746
913
# gap of 100
1013
1191
Would anybody know of a possible one-liner to do this in a terminal or a shell script? Thanks

A little bit clumsy, but for starters this would do, I guess. It reads the list of numbers from stdin (i.e. start with cat numbers.txt or the like and pipe it into the rest).
paste - - | {
read -r x m; echo $x; echo $m
while read -r x y; do echo $((m+=100)); echo $((m+=y-x)); done
}
267
368
468
646
746
913
1013
1191
Explanation: paste - - joins every two input lines, so each read pulls in a pair of numbers. The first pair is printed out unchanged; the subsequent pairs only serve as the base for calculating the difference within the pair, which is added onto a running variable that is also incremented by 100 on each iteration.
Here's a rewrite as a parametrized function, without the use of paste:
addgaps() {
read -r m; echo $m; read -r m; echo $m
while read -r x; read -r y; do echo $((m+=$1)); echo $((m+=y-x)); done;
}
cat numbers.txt | addgaps 100
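For comparison, here is a rough awk equivalent of the same idea (my own sketch, not part of the answer above); it assumes one number per line in numbers.txt and takes the gap via -v gap=...:
awk -v gap=100 '
    NR <= 2 { print; m = $1; next }                 # first pair passes through unchanged
    NR % 2  { prev = $1; m += gap; print m; next }  # first number of a later pair: add the gap
            { m += $1 - prev; print m }             # second number: preserve the pair difference
' numbers.txt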

Related

How to divide numbers stored as text into many parts in awk or maybe sed or other?

I need to divide my text file, which contains numbers from 29026 to 58050. This is a small fragment of my input file:
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 ...........................................................
................................................58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
I must create 225 index groups. Every group must have 129 numbers. So my output will look like
[ Lipid 1 ]
29026 29027 29028 29029 ...................................
...............
...........................29150 29151 29152 29153 29154
[ Lipid 2 ]
...
...
[ Lipid 225 ]
57921 57922 57923 57924 57925 57926......
.....
.......................
58044 58045 58046 58047 58048 58049 58050
Do you have any idea?
Edit
My text file
29026 29027 29028 29029 29030 29031 29032 29033 29034 29035 29036 29037 29038 29039 29040
29041 29042 29043 29044 29045 29046 29047 29048 29049 29050 29051 29052 29053 29054 29055
29056 29057 29058 29059 29060 29061 29062 29063 29064 29065 29066 29067 29068 29069 29070
29071 29072 29073 29074 29075 29076 29077 29078 29079 29080 29081 29082 29083 29084 29085
29086 29087 29088 29089 29090 29091 29092 29093 29094 29095 29096 29097 29098 29099 29100
29101 29102 29103 29104 29105 29106 29107 29108 29109 29110 29111 29112 29113 29114 29115
29116 29117 29118 29119 29120 29121 29122 29123 29124 29125 29126 29127 29128 29129 29130
29131 29132 29133 29134 29135 29136 29137 29138 29139 29140 29141 29142 29143 29144 29145
29146 29147 29148 29149 29150 29151 29152 29153 29154 29155 29156 29157 29158 29159 29160
29161 29162 29163 29164 29165 29166 29167 29168 29169 29170 29171 29172 29173 29174 29175
29176 29177 29178 29179 29180 29181 29182 29183 29184 29185 29186 29187 29188 29189 29190
29191 29192 29193 29194 29195 29196 29197 29198 29199 29200 29201 29202 29203 29204 29205
29206 29207 29208 29209 29210 29211 29212 29213 29214 29215 29216 29217 29218 29219 29220
29221 29222 29223 29224 29225 29226 29227 29228 29229 29230 29231 29232 29233 29234 29235
29236 29237 29238 29239 29240 29241 29242 29243 29244 29245 29246 29247 29248 29249 29250
29251 29252 29253 29254 29255 29256 29257 29258 29259 29260 29261 29262 29263 29264 29265
29266 29267 29268 29269 29270 29271 29272 29273 29274 29275 29276 29277 29278 29279 29280
29281 29282 29283 29284 29285 29286 29287 29288 29289 29290 29291 29292 29293 29294 29295
29296 29297 29298 29299 29300 29301 29302 29303 29304 29305 29306 29307 29308 29309 29310
29311 29312 29313 29314 29315 29316 29317 29318 29319 29320 29321 29322 29323 29324 29325
29326 29327 29328 29329 29330 29331 29332 29333 29334 29335 29336 29337 29338 29339 29340
29341 29342 29343 29344 29345 29346 29347 29348 29349 29350 29351 29352 29353 29354 29355
29356 29357 29358 29359 29360 29361 29362 29363 29364 29365 29366 29367 29368 29369 29370
29371 29372 29373 29374 29375 29376 29377 29378 29379 29380 29381 29382 29383 29384 29385
29386 29387 29388 29389 29390 29391 29392 29393 29394 29395 29396 29397 29398 29399 29400
29401 29402 29403 29404 29405 29406 29407 29408 29409 29410 29411 29412 29413 29414 29415
29416 29417 29418 29419 29420 29421 29422 29423 29424 29425 29426 29427 29428 29429 29430
here I have thousands of lines, but I will not paste all of this text
57736 57737 57738 57739 57740 57741 57742 57743 57744 57745 57746 57747 57748 57749 57750
57751 57752 57753 57754 57755 57756 57757 57758 57759 57760 57761 57762 57763 57764 57765
57766 57767 57768 57769 57770 57771 57772 57773 57774 57775 57776 57777 57778 57779 57780
57781 57782 57783 57784 57785 57786 57787 57788 57789 57790 57791 57792 57793 57794 57795
57796 57797 57798 57799 57800 57801 57802 57803 57804 57805 57806 57807 57808 57809 57810
57811 57812 57813 57814 57815 57816 57817 57818 57819 57820 57821 57822 57823 57824 57825
57826 57827 57828 57829 57830 57831 57832 57833 57834 57835 57836 57837 57838 57839 57840
57841 57842 57843 57844 57845 57846 57847 57848 57849 57850 57851 57852 57853 57854 57855
57856 57857 57858 57859 57860 57861 57862 57863 57864 57865 57866 57867 57868 57869 57870
57871 57872 57873 57874 57875 57876 57877 57878 57879 57880 57881 57882 57883 57884 57885
57886 57887 57888 57889 57890 57891 57892 57893 57894 57895 57896 57897 57898 57899 57900
57901 57902 57903 57904 57905 57906 57907 57908 57909 57910 57911 57912 57913 57914 57915
57916 57917 57918 57919 57920 57921 57922 57923 57924 57925 57926 57927 57928 57929 57930
57931 57932 57933 57934 57935 57936 57937 57938 57939 57940 57941 57942 57943 57944 57945
57946 57947 57948 57949 57950 57951 57952 57953 57954 57955 57956 57957 57958 57959 57960
57961 57962 57963 57964 57965 57966 57967 57968 57969 57970 57971 57972 57973 57974 57975
57976 57977 57978 57979 57980 57981 57982 57983 57984 57985 57986 57987 57988 57989 57990
57991 57992 57993 57994 57995 57996 57997 57998 57999 58000 58001 58002 58003 58004 58005
58006 58007 58008 58009 58010 58011 58012 58013 58014 58015 58016 58017 58018 58019 58020
58021 58022 58023 58024 58025 58026 58027 58028 58029 58030 58031 58032 58033 58034 58035
58036 58037 58038 58039 58040 58041 58042 58043 58044 58045 58046 58047 58048 58049 58050
Here is how I understood your problem:
The input is a text file in several lines, with fifteen numbers on each line, separated by spaces or tabs. Some lines (perhaps the last one) may have fewer than fifteen numbers. (In fact in the solution below it doesn't matter how many numbers are on each line.)
You must group the numbers into sets of 129 numbers each, in sequence. The last group may have less than 129 numbers, if the input cardinality is not an exact multiple of 129. In the solution below, it doesn't matter how many input numbers there are (and therefore how many groups there will be in the output).
For each group of 129 numbers, you must get a few lines in the output. First, a title or label that says [Lipid n] where n is the group number, and then the numbers in that group, shown fifteen per line (so, there will be eight full lines and a ninth line with only 9 numbers on it: 129 = 15 * 8 + 9).
Here's how you can do this. First let's start with a small example, and then we can look at what must be changed for a more general solution.
I will assume that your inputs can be arbitrary numbers of any length; of course, if they are consecutive numbers like you showed in your sample data, then the problem is trivial and completely uninteresting. So let's assume your numbers are in fact any numbers at all. (Not really; I wrote the solution for non-negative integers; but it can be re-written for "tokens" of non-blank characters separated by blanks.)
I start with the following input file:
$ cat lipid-inputs
124 150 178 111 143 177 116
154 194 139 183 132 180 133
185 142 101 159 122 184 151
120 188 161 136 113 189 170
We want to group the 28 input numbers into sets of ten numbers each, and present the output with (at most) seven numbers per line. So: There will be two full groups, and a third group with only eight member numbers (since we have only 28 inputs). The desired output looks like this:
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
Strategy: First write the input numbers one per line, so we can then arrange them ten per line (ten being the size of each group in the output). Then add line numbers (which will go into the label lines). Then edit the "line number" lines to add the "lipid" stuff, and break the data lines into shorter lines, showing seven tokens each (possibly fewer on the last line in each group).
Implementation: tr to break up the tokens one per line; paste reading repeatedly from standard input, ten stdin lines for each output line; then sed = to add the line numbers (on separate lines); and finally a standard sed for the final editing. The command looks like this:
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' - - - - - - - - - - |
> sed = | sed -E 's/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){7}) /\1\n/g'
The output is the one I showed already.
To generalize (so you can apply it to your problem): the number of tokens per line in the input file is irrelevant. To get 15 tokens per line in the output, change the hard-coded number 7 to 15 on the last line of the command shown above. To allocate 129 tokens per group, instead of 10, what needs to be changed is the paste command: I show it reading ten times from stdin, but you need 129. So it is better to build a string of 129 space-separated dashes with a simple command, rather than hard-coding it, and to use that string as the argument list for paste. I show how to do this for my example; you can adapt it to yours.
Define variables to hold your relevant values: how many tokens per lipid (129 in your case, 10 in mine) and how many tokens per line in the output (15 in your case, 7 in mine).
$ tokens_per_lipid=10
$ tokens_per_line=7
Then create a variable to hold the string - - - - [...] needed in the paste command. There are several ways to do this, here's just one:
$ paste_arg=$(yes '-' | head -n $tokens_per_lipid | tr '\n' ' ')
Let's check it:
$ echo $paste_arg
- - - - - - - - - -
OK, so let's re-write the command that does what you need. We must use double-quotes for the argument to sed to allow variable expansion.
$ tr -s ' ' '\n' < lipid-inputs | paste -d ' ' $paste_arg |
> sed = | sed -E "s/^[[:digit:]]+$/[Lipid &]/ ;
> s/(([[:blank:]]*[[:digit:]]+){$tokens_per_line}) /\1\n/g"
[Lipid 1]
124 150 178 111 143 177 116
154 194 139
[Lipid 2]
183 132 180 133 185 142 101
159 122 184
[Lipid 3]
151 120 188 161 136 113 189
170
I have no clue what you really are trying to do, but maybe this does what you want
< input sed -zE 's/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
It uses Sed to insert the [ Lipid # ] string (with some newlines) every 129 occurrences of [0-9]+[^0-9]+ (which is 1 or more digits followed by 1 or more non-digits); then it uses Awk to substitute # with increasing numbers starting from 1 (to do so, it treats ] as the record separator, so it can change # to the record number NR); finally it uses Sed again to remove the last line, which is the trailing record separator left over from the Awk processing.
I used Awk for inserting the increasing numbers as there's no easy way to do maths in Sed; I used Sed to break the file and insert text in between as requested as I find it easier than doing it in Awk.
If you need to have all numbers on one line in the output, you can do
< input sed -zE 's/[^0-9]+/ /g;s/(([0-9]+[^0-9]+){129})/[ Lipid # ]\n\1\n/g' | awk 'BEGIN { RS = ORS = "]" } { sub("#", NR) } 1' | sed '$d'
where I have just added s/[^0-9]+/ /g; to collapse whatever happens to be between numbers to a single whitespace.
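For completeness, here is a rough awk-only sketch of the same grouping (my own illustration, not taken from either answer above). It assumes whitespace-separated numbers in a file called input; per_group and per_line are just the two parameters discussed earlier:
awk -v per_group=129 -v per_line=15 '
{
    for (i = 1; i <= NF; i++) {
        if (n % per_group == 0)                     # start of a new group: print its label
            printf "[ Lipid %d ]\n", ++group
        printf "%s", $i
        n++
        printf "%s", (n % per_line == 0 || n % per_group == 0) ? "\n" : " "
    }
}
END { if (n % per_line && n % per_group) print "" } # terminate a partial last line
' input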

I'm trying to cut a string with a number of bytes. What is the problem with this for loop?

Sorry in advance for the beginner question, but I'm quite stuck and keen to learn.
I am trying to echo a string (in hex) and then cut a piece of it with the cut command. It looks like this:
for y in "${Offset}"; do
echo "${entry}" | cut -b 60-$y
done
Where echo ${Offset} results in
75 67 69 129 67 567 69
I would like each entry to be printed, and then cut from the 60th byte until the respective number in $Offset.
So the first entry would be cut 60-75.
However, I get an error:
cut: 67: No such file or directory
cut: 69: No such file or directory
cut: 129: No such file or directory
cut: 67: No such file or directory
cut: 567: No such file or directory
cut: 69: No such file or directory
I tried adding/removing parentheses around each variable but never got the right result.
Any help will be appreciated!
UPDATE: I updated the code with the changes from markp-fuso. However, this code still does not work as intended. I would like to print every entry based on its respective offset, but it goes wrong: it prints every entry seven times, once for each of the seven offsets. Any ideas on how to fix this?
#!/bin/bash
MESSAGES=$( sqlite3 -csv file.db 'SELECT quote(data) FROM messages' | tr -d "X'" )
for entry in ${MESSAGES}; do
Offset='75 67 69 129 67 567 69'
for y in $Offset; do
echo "${entry:59:(y-59)}"
done
done
echo ${MESSAGES}
This results in seven strings with a minimum length of 80 bytes and a maximum of 600.
My output should be:
String one: cut by first offset
String two: cut by second offset
and so on...
In order for the for loop to iterate over each space-separated "word" in $Offset, you need to get rid of the quotes, which make it read as a single value.
for y in ${Offset}; do
echo "${entry}" | cut -b 60-$y
done
To eliminate the sub-process that's going to be invoked due to the | cut ..., we could look at a comparable parameter expansion solution ...
Quick reminder on how to extract a substring from a variable:
${variable:start_position:length}
Keeping in mind that the first character in ${variable} is in position zero/0.
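For instance (a throwaway example, not taken from the question's data):
$ s=abcdef
$ echo "${s:1:3}"    # start at position 1 (the 2nd character), length 3
bcd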
Next, we need to convert each individual offset (y) into a 'length':
length=$((y-60+1))
Rolling these changes into your code (and removing the quotes from around ${Offset}) gives us:
for y in ${Offset}
do
start=$((60-1))
length=$((y-60+1))
echo "${entry:${start}:${length}}"
#echo "${entry:59:(y-59)}"
done
NOTE: You can also replace the start/length/echo with the single commented-out echo.
Using a smaller data set for demo purposes, and using 3 (instead of 60) as the start of our extraction:
# base-10 character position
# 1 2
# 123456789012345678901234567
$ entry='123456789ABCDEFGHIabcdefghi'
$ echo ${#entry} # length of entry?
27
$ Offset='5 8 10 13 20'
$ for y in ${Offset}
do
start=$((3-1))
length=$((y-3+1))
echo "${entry:${start}:${length}}"
done
345 # 3-5
345678 # 3-8
3456789A # 3-10
3456789ABCD # 3-13
3456789ABCDEFGHIab # 3-20
And consolidating the start/length/echo into a single echo:
$ for y in ${Offset}
do
echo "${entry:2:(y-2)}"
done
345 # 3-5
345678 # 3-8
3456789A # 3-10
3456789ABCD # 3-13
3456789ABCDEFGHIab # 3-20
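As for the UPDATE in the question (each entry should apparently be cut by its own offset rather than by all seven), one rough sketch would be to pair entries and offsets by index instead of nesting the loops. This is only an illustration reusing the variable names from the question; the Offsets array and the counter i are mine, and it is untested against the real database output:
MESSAGES=$( sqlite3 -csv file.db 'SELECT quote(data) FROM messages' | tr -d "X'" )
Offsets=(75 67 69 129 67 567 69)

i=0
for entry in ${MESSAGES}; do
    y=${Offsets[i]}               # the offset that belongs to this entry
    echo "${entry:59:(y-59)}"     # bytes 60 through $y, via parameter expansion
    i=$((i+1))
done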

Iterate Through List with Seq and Variable

I am attempting to loop through a list of integers starting out like so:
start=000
for i in $(seq -w $start 48 006);
However, when I try this code above, the loop seems to loop once and then quit.
What do I need to modify? (The leading zeroes need to stay)
Could you please try following.
start=0
diff=6
for i in $(seq $start $diff 48);
do
printf '%03d\n' $i
done
Output will be as follows.
000
006
012
018
024
030
036
042
048
Problem in OP's attempted code:
I believe you have given the seq arguments in the wrong order: it should be start point, then increment, then end point, i.e. seq start_point increment end_point. Since they are in the wrong order, it prints only one value in the loop.
In your attempt seq takes the starting point as 0 and is asked to run up to 6 in steps of 48, which is not possible, so it prints only the very first value, which is fair enough.
EDIT: As per @Cyrus sir's comment, adding a Bash builtin solution here without using seq.
for ((i=0; i<=48; i=i+6)); do printf '%03d\n' $i; done
seq takes a start, an increment-by, and a finish.
You've swapped the increment-by and the finish: seq -w $start 48 006 means start at zero and increment by 48 to finish at 6. The simple fix is seq -w $start 6 48. Note: 006 is not needed for the increment, just 6; the increment never appears in the output, and with -w seq pads the output numbers to a common width anyway (three digits here, taken from the 000 start value).
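A quick check of the corrected command (output from GNU seq; other implementations may pad differently):
$ start=000
$ seq -w "$start" 6 48
000
006
012
018
024
030
036
042
048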

check continuity of a number series using if-else

I have a file which contains numbers, say from 1 to 300, but the numbers are not continuous. A sample file looks like this:
042
043
044
045
078
198
199
200
201
202
203
212
213
214
215
238
239
240
241
242
256
257
258
Now I need to check the continuity of the number series and accordingly write out the output. For example the first 4 numbers are in series, so the output should be
042-045
Next, 078 is a lone number, so the output should be
078
for convenience it can be made to look like
078-078
Then 198 to 203 are continuous. So, next output should be
198-203
and so on. The final output should be like
042-045
078-078
198-203
212-215
238-242
256-258
I just need to know the first and last members of each continuous series and jump to the next series when a discontinuity is encountered; the output format can be manipulated. I am inclined to use an if statement and can think of a complicated thing like this:
num=`cat file | wc -l`
out1=`head -1 file`
for ((i=2;i<=$num;i++))
do
j=`echo $i-1 | bc`
var1=`cat file | awk 'NR='$j'{print}'`
var2=`cat file | awk 'NR='$i'{print}'`
var3=`echo $var2 - $var1 | bc`
if [ $var3 -gt 1 ]
then
out2=$var1
echo $out1-$out2
out1=$var2
fi
done
which works but is too lengthy. I am sure there is a shorter way of doing this.
I am also open to any other straightforward command (or a few commands) in shell or awk, or a few lines of Fortran code, that can do it.
Thanking you in anticipation.
This awk one-liner works for the given example:
awk 'p+1!=$1{printf "%s%s--",NR==1?"":p"\n",$1}{p=$1}END{print $1}' file
With your data as input, it gives:
042--045
078--078
198--203
212--215
238--242
256--258
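For readability, here is the same logic spelled out with comments (just an expanded form of the one-liner above, nothing new):
awk '
    p + 1 != $1 {                                    # gap in the sequence (also true on the first line)
        printf "%s%s--", NR == 1 ? "" : p "\n", $1   # close the previous range, open a new one
    }
    { p = $1 }                                       # remember the previous number
    END { print $1 }                                 # close the final range
' file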
Here is a simple program in Fortran:
program test
   implicit none
   integer :: first, last, uFile, i, stat
   open( file='numbers.txt', newunit=uFile, action='read', status='old' )
   read(uFile,*,iostat=stat) i
   if ( stat /= 0 ) stop
   first = i ; last = i
   do
      read(uFile,*,iostat=stat) i
      if ( stat /= 0 ) exit
      if ( i == last+1 ) then
         last = i
      else
         write(*,'(i3.3,a,i3.3)') first,'-',last   ! close the finished range
         first = i ; last = i                      ! start a new range
      endif
   enddo
   write(*,'(i3.3,a,i3.3)') first,'-',last         ! print the last open range
end program
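To try it, compile and run with any Fortran 2008 compiler (file name and compiler assumed here; the newunit= specifier needs Fortran 2008 support, e.g. a recent gfortran):
$ gfortran -o ranges test.f90
$ ./ranges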
The output is
042-045
078-078
198-203
212-215
238-242
256-258

Clean list of points depending on close range (+-5)

How can I clean a list of points in a variable, removing a point if it is
the same as an already-kept point, or
a close-by point (±5)?
Example: each line is one point with two coordinates:
points="808,112\n807,113\n809,113\n155,183\n832,572"
echo "$points"
#808,112
#807,113
#809,113
#155,183
#832,572
#196,652
I would like to ignore points within a range of ±5 counts. The result should be:
echo "$points_clean"
#808,112
#155,183
#832,572
#196,652
I thought about looping through the list, but I need help with how to check whether a point's coordinates already exist in the new list:
points_clean=$(for point in $points; do
x=$(echo "$point" | cut -d, -f1)
y=$(echo "$point" | cut -d, -f2)
# check if same or similar point coordinates already in $points_clean
echo "$x,$y"
done)
This seems to work with Bash 4.x (support for process substitution is needed):
#!/bin/bash
close=100
points="808,112\n807,113\n809,113\n155,183\n832,572"
echo -e "$points"
clean=()
distance()
{
echo $(( ($1 - $3) * ($1 - $3) + ($2 - $4) * ($2 - $4) ))
}
while read x1 y1
do
ok=1
for point in "${clean[#]}"
do
echo "compare $x1 $y1 with $point"
set -- $point
if [[ $(distance $x1 $y1 $1 $2) -le $close ]]
then
ok=0
break
fi
done
if [ $ok = 1 ]
then clean+=("$x1 $y1")
fi
done < <( echo -e "$points" | tr ',' ' ' | sort -u )
echo "Clean:"
printf "%s\n" "${clean[#]}" | tr ' ' ','
The sort -u is optional and may slow things down. Identical points count as "too close together" anyway, so the second instance of a given coordinate would be eliminated even without it.
Sample output:
808,112
807,113
809,113
155,183
832,572
compare 807 113 with 155 183
compare 808 112 with 155 183
compare 808 112 with 807 113
compare 809 113 with 155 183
compare 809 113 with 807 113
compare 832 572 with 155 183
compare 832 572 with 807 113
Clean:
155,183
807,113
832,572
The workaround for Bash 3.x (as found on Mac OS X 10.10.4, for example) is a tad painful; you need to send the output of the echo | tr | sort command to a file, then redirect the input of the pair of loops from that file (and clean up afterwards). Or you can put the pair of loops and the code that follows (the printf of the clean array) inside the scope of { …; } command grouping.
In response to the question 'what defines close?', wittich commented:
Let's say ±5 counts. Eg. 808(±5,) 112(±5). That's why the second and third point would be "cleaned".
OK. One way of looking at that would be to adjust the close value to 50 in my script (allowing a difference of 5² + 5²), but that rejects points connected by a line of length just over 7. You could revise the distance function to do ±5 on each coordinate; it takes a bit more work and maybe an auxiliary abs function, or you could return the square of the larger delta and compare that with 25 (5², of course). You can play with what the criterion should be to your heart's content.
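As a rough sketch of that last idea (square of the larger delta, compared against 25), the distance function could be revised along these lines; this is my own illustration, not tested beyond the sample data:
distance()
{
    local dx=$(( $1 > $3 ? $1 - $3 : $3 - $1 ))   # |x1 - x2|
    local dy=$(( $2 > $4 ? $2 - $4 : $4 - $2 ))   # |y1 - y2|
    echo $(( dx > dy ? dx * dx : dy * dy ))       # square of the larger delta
}
# ...and then compare the result with 25 (i.e. set close=25) in the main loop.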
Note that Bash shell arithmetic is integer arithmetic (only); you need Korn shell (ksh) or Z shell (zsh) to get real arithmetic in the shell, or you need to use bc or some other calculator.
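For instance, a throwaway bc example for a real-valued result:
$ echo "scale=3; 7/2" | bc
3.500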
