Shell script to convert degrees to cardinal direction

Is there any way to port the following python function to an ash-based shell script?
def degrees_to_cardinal(d):
    dirs = ["N", "NNE", "NE", "ENE", "E", "ESE", "SE", "SSE",
            "S", "SSW", "SW", "WSW", "W", "WNW", "NW", "NNW"]
    ix = int((d + 11.25)/22.5)
    return dirs[ix % 16]
Essentially what I need is to convert a value corresponding to a direction (0-360 degrees) to its corresponding 16-point cardinal direction: degrees -> cardinal direction.
I could use a series of if elif elif... but that seems clumsy and inefficient.

It must be said that bash has no real way of handling floating point numbers; shell arithmetic is integer-only.
Of course, to get around this you can scale everything up (e.g. work in fifths or tenths
of a degree instead of 0-360) and modify the rest of the script accordingly.
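As an illustration of that scaling trick, here is a minimal integer-only sketch of the Python function in POSIX sh (the function name is hypothetical; multiplying through by 4 turns the +11.25 offset and the 22.5 divisor into the whole numbers 45 and 90):
#!/bin/sh
# Integer form of int((d + 11.25)/22.5) % 16: (4*d + 45)/90 % 16
degrees_to_cardinal() {
    ix=$(( (4 * $1 + 45) / 90 % 16 ))
    set -- N NNE NE ENE E ESE SE SSE S SSW SW WSW W WNW NW NNW
    shift "$ix"
    echo "$1"
}
degrees_to_cardinal 237   # prints WSW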

Here's something that might suit your needs:
D=N..NNENE.ENEE..ESESE.SSES..SSWSW.WSWW..WNWNW.NNW
ix=$(( (4*d + 45) / 90 % 16 ))
n=${D:$((ix*3)):3}
echo "${n//.}"
Where "d" is the degree value (0-360). The index (4*d + 45)/90 is the integer form of the Python's int((d + 11.25)/22.5), and % 16 wraps values near 360 back around to N; the dots pad each name to three characters and are stripped at the end.

cat degs2dir
#!/bin/ash
awk -v deg="$1" '
BEGIN {
dirs[1]="N" ; dirs[2]="NNE" ; dirs[3]="NE" ; dirs[4]="ENE"
dirs[5]= "E" ; dirs[6]="ESE" ; dirs[7]="SE" ; dirs[8]="SSE"
dirs[9]="S" ; dirs[10]="SSW"; dirs[11]="SW"; dirs[12]="WSW"
dirs[13]="W" ; dirs[14]="WNW"; dirs[15]="NW"; dirs[16]="NNW"
}
END {
ix = int((deg + 11.25)/22.5) % 16   # % 16 wraps 348.75-360 back to N
print dirs[ix + 1]                  # dirs[] is 1-based, hence the +1
}' /dev/null
chmod 755 degs2dir
./degs2dir 237
#output
WSW
IHTH

Related

Bash array transformation like .map() in JavaScript

In JavaScript, the Array.map() function works like this:
const array1 = [1, 4, 9, 16];
const map1 = array1.map(x => x * 2);
console.log(map1);
// expected output: Array [2, 8, 18, 32]
I need a bash equivalent where I can take my array, manipulate its contents, then receive a new array with the manipulations.
array1=(1 4 9 16)
map1=# ????
echo ${map1[*]}
Soooooooo, just write the loop.
array1=(1 4 9 16)
map1=()
for i in "${array1[@]}"; do
map1+=("$((i * 2))")
done
echo "${map1[@]}"
Might be a good time to re-read an introduction to Bash arrays.
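If you prefer something closer to a one-liner, a command substitution around the loop also works (a sketch; fine for plain numbers, but word splitting makes it unsuitable for elements containing whitespace):
array1=(1 4 9 16)
map1=($(for i in "${array1[@]}"; do echo "$((i * 2))"; done))
echo "${map1[@]}"   # 2 8 18 32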
It is possible to implement an array_walk with a callback to perform an operation on each element this way:
#!/usr/bin/env bash
# Applies the user-defined callback function to each element of the array.
#
# @params
# $1: The array name to walk
# $2: The callback command or function name
array_walk() {
local -n __array=$1
local -- __callback=$2 __i
for __i in "${!__array[@]}"; do
"$__callback" "$1[$__i]"
done
}
x2() {
local -n __e=$1
__e=$((2 * __e))
}
array1=(1 4 9 16)
array_walk array1 x2
printf '%s\n' "${array1[*]}"

Parsing multiple instances of data

I am trying to parse multiple instances of data from a textfile. I can grep and grab one line and the lat/lon associated with that find, but I am having issues parsing multiple instances:
... CATEGORICAL ...
SLGT 33618675 34608681 35658642 36668567 38218542 41018363
41588227 41918045 41377903 40177805 38927813 37817869
36678030 35068154 33368262 33078321 32888462 33618675
SLGT 30440169 31710202 33010185 33730148 34010037 33999962
33709892 32869871 30979883 29539912 29430025 30440169
SLGT 41788755 41698893 42069059 42639132 43889124 44438960
44438757 43988717 43278708 42398720 41788755
MRGL 42897922 41907743 40147624 38837627 37637700 35897915
35028021 34038079 33118130 31998226 31698419 32078601
32818733 33848809 34758764 36998623 38588677 39458701
40178757 40608870 41069099 43549479 44499512 44809478
45259379 44989263 45109100 45718986 46478920 46758853
46738752 46398664 44768565 44308457 43198218
MRGL 29720174 31900221 33650181 34160154 34430032 34649931
34159800 32539784 31359767 29739808 29299723 28969581
28959440 99999999 26769674 26579796 26139874
TSTM 45077438 43177245 40597113 99999999 30488085 30248563
29588926 28739072 28569092 99999999 27138160 27578139
27908100 27848061 27518032 26968006 26338005 25698017
25338025 25088048 25058071 25238109 25578128 25888157
26218171 26578170 26988163 27138160 99999999 29200399
31910374 33520340 35190229 35450147 36109944 36399709
35779395 36399167 38559059 40189373 41729594 43029985
42820283 42860489 43580863 44121062 44521135 45281179
46271166 47561286 48251548 48671765 49051814 99999999
38810245 37660271 37120322 36950398 37090559 37380662
38090741 39410791 39980777 40930695 41380598 41370510
41190353 40840299 40220263 38810245
From: https://www.spc.noaa.gov/products/outlook/archive/2019/KWNSPTSDY1_201906241300.txt
Here is my code and results:
#!/bin/sh
sed -n '/^MRGL/,/^TSTM/p;/^TSTM/q' day1_status | sed '$ d' | sed -e 's/MRGL//g' > MRGL
while read line
do
count=1
ncols=$(echo $line | wc -w)
while [ $count -le $ncols ]
do
echo $line | cut -d' ' -f$count
((count++))
done
done < MRGL > MRGL_output.txt
cat MRGL_output.txt | sed ':a;s/\B[0-9]\{2\}\>/.&/;ta'| sed 's/./, -/6' > MRGL_final
Results:
one instance of MRGL and the lat/lon associated with that polygon
more MRGL
32947889 34137855 35307825 36147735 36327622 35797468
27107968 25518232 99999999 27088303 28418215 30208125
30618064
The lines above turned into one value per line:
more MRGL_output.txt
32947889
34137855
35307825
36147735
36327622
35797468
27107968
25518232
99999999
27088303
28418215
30208125
30618064
Final format that I need it in
more MRGL_final
32.94, -78.89
34.13, -78.55
35.30, -78.25
36.14, -77.35
36.32, -76.22
35.79, -74.68
27.10, -79.68
25.51, -82.32
99.99, -99.99
27.08, -83.03
28.41, -82.15
30.20, -81.25
30.61, -80.64
Just need to parse multiple instances that show up.
UPDATE for better explanation.
... CATEGORICAL ...
ENH 38298326 40108202 40518094 40357974 39907953 39017948
38038052 36148202 35848297 35888367 36618371 38298326
SLGT 30440169 31710202 33010185 33730148 34010037 33999962
33709892 32869871 30979883 29539912 29430025 30440169
SLGT 33548672 34408661 35918543 36858496 38648520 41018363
41588227 41918045 41377903 40177805 38927813 37817869
36678030 35068154 33368262 33078321 32888462 33548672
SLGT 41788755 41698893 42069059 42639132 43889124 44438960
44438757 43988717 43278708 42398720 41788755
MRGL 29720174 31900221 33650181 34160154 34430032 34649931
34159800 32539784 31359767 30059748 29299723 28969581
28959440 99999999 26769674 26579796 26139874
MRGL 42897922 41907743 40147624 38837627 37637700 35897915
35028021 34038079 33118130 31938225 30758424 30678620
30988709 34128741 36208583 37738554 39508601 40628878
41069099 43549479 44499512 44809478 45259379 44989263
45109100 45718986 46478920 46758853 46738752 46398664
44768565 44308457 43198218
TSTM 30488085 29978211 29408316 29068379 99999999 27138160
27578139 27908100 27848061 27518032 26968006 26338005
25698017 25338025 25088048 25058071 25238109 25578128
25888157 26218171 26578170 26988163 27138160 99999999
45427410 43217292 40247181 99999999 28650405 31910374
33520340 35190229 35450147 36109944 36399709 35779395
36769245 38319148 40189373 41219571 41299753 39959979
38220054 37320091 36560136 36070290 36100295 35840394
36790544 37150626 37880709 39110774 40120876 41150895
41600769 41890540 43070599 43580863 43390914 43401262
44171458 45521497 46131301 47181242 47561286 48251548
48671765 49371856
I want to take the data set above, grab the lat and long for each available risk (ENH, SLGT, MRGL, TSTM), and place them into this format:
"Enhanced Risk"
38.29, -83.26
40.10, -82.02
40.51, -80.94
40.35, -79.74
39.90, -79.53
39.01, -79.48
38.03, -80.52
36.14, -82.02
35.84, -82.97
35.88, -83.67
36.61, -83.71
38.29, -83.26
End:
"Slight Risk"
30.44, -101.69
31.71, -102.02
33.01, -101.85
33.73, -101.48
34.01, -100.37
33.99, -99.62
33.70, -98.92
32.86, -98.71
30.97, -98.83
29.53, -99.12
29.43, -100.25
30.44, -101.69
End:
"Slight Risk"
33.54, -86.72
34.40, -86.61
35.91, -85.43
36.85, -84.96
38.64, -85.20
41.01, -83.63
41.58, -82.27
41.91, -80.45
41.37, -79.03
40.17, -78.05
38.92, -78.13
37.81, -78.69
36.67, -80.30
35.06, -81.54
33.36, -82.62
33.07, -83.21
32.88, -84.62
33.54, -86.72
End:
"Slight Risk"
41.78, -87.55
41.69, -88.93
42.06, -90.59
42.63, -91.32
43.88, -91.24
44.43, -89.60
44.43, -87.57
43.98, -87.17
43.27, -87.08
42.39, -87.20
41.78, -87.55
End:
"Marginal Risk"
29.72, -101.74
31.90, -102.21
33.65, -101.81
34.16, -101.54
34.43, -100.32
34.64, -99.31
34.15, -98.00
32.53, -97.84
31.35, -97.67
30.05, -97.48
29.29, -97.23
28.96, -95.81
28.95, -94.40
26.76, -96.74
26.57, -97.96
26.13, -98.74
End:
Here's a little awk program which seems to work, although I'm not certain about some of the details. In particular, I don't know what the minimum value for longitude is; evidently, a value under the minimum has 100 added to it before the longitude is negated. So you'll have to change LON_THRESHOLD to what you consider the correct value.
I've tried to avoid the usual temptation to golf awk programs into a textual minimum, in the hopes that the way this program works is less obscure. But it's entirely possible that some awkisms snuck in anyway. I added a bit of explanation at the end.
BEGIN { risk["HIGH"] = "High Risk"
risk["ENH"] = "Enhanced Risk"
risk["SLGT"] = "Slight Risk"
risk["MRGL"] = "Marginal Risk"
LON_THRESHOLD = 30
END_STRING = "End:"
}
END { if (in_risk) print END_STRING }
in_risk && substr($0, 1, 1) != " " {
print END_STRING "\n" "\n"
in_risk = 0
}
$1 in risk { printf("\"%s\"\n", risk[$1])
in_risk = 2
}
in_risk { for (i = in_risk; i <= NF; ++i) {
lat = substr($i, 1, 4) / 100
lon = substr($i, 5, 4) / 100
if (lon < LON_THRESHOLD) lon += 100
printf "%5.2f, %.2f\n", lat, -lon
}
in_risk = 1
}
Save that program as, for example, noaa.awk, and then apply it with:
awk -f noaa.awk input.txt
By way of explanation:
Awk programs consist of a series of rules. Each rule has a predicate -- that is, an expression which evaluates to a true or false value -- and an action.
Awk processes each line from its input in turn, running through all of the rules and executing the actions of the ones whose predicates evaluate to a true value. Inside the action, you can use the $ operator to access individual fields in the input (by default, fields are separated with whitespace). $0 is the entire input line, and $n is field number n. Unlike bash/sh, $ is an operator and can be applied to an expression.
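For example, a tiny illustration of $ applied to an expression (not part of the program above):
echo "alpha beta gamma" | awk '{ print $NF, $(NF-1) }'   # prints: gamma beta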
BEGIN and END rules are special, in that they are not real predicates. BEGIN rules are executed exactly once, before any other processing; END rules are executed exactly once after all processing is finished. In this example, as is common, BEGIN is used to initialise reference data, while END is used for any necessary termination -- in this case, printing the final End: line.
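A quick demonstration of that ordering (illustrative only):
seq 3 | awk 'BEGIN { print "before" } { print } END { print "after" }'
# before, 1, 2, 3, after -- each on its own line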
In cases like this, where the desired action is really dependent on where we are in the file, it's necessary to build some kind of state machine, and I did that using the variable in_risk, which has three possible values:
0 or undefined: We're not currently in a block corresponding to a risk selector.
1: The current line, if it starts with a space, is part of a previously identified risk selector.
2: The current line has been detected as starting with a risk selector.
The reason for the difference between the last two values is that $1 in a line which starts with a risk selector is the risk selector, whereas in a line which starts with a space, $1 is actually the first number. So when we're iterating over the numbers in a line, we have to start with $2 for lines which start with a risk selector.
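A tiny illustration of that difference (made-up input; note the leading spaces on the second line):
printf 'MRGL 29720174\n' | awk '{ print $1 }'   # MRGL      -- the selector is $1
printf '   29720174\n'   | awk '{ print $1 }'   # 29720174  -- the first number is $1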
If you're just asking how to turn a file of lines of like AABBCCDD into lines like AA.BB, -CC.DD:
perl -nE '/^(..)(..)(..)(..)$/ && say "$1.$2, -$3.$4"' MRGL_output.txt
(There's almost certainly better ways to get from your original input to those lines, but I'm not really clear on what your posted code is doing or why)
I think this will process your original input correctly, but can't be sure because the numbers in your sample output don't match up with your sample input so I can't verify:
perl -anE 'if (/^MRGL/ .. /^TSTM/) { exit if /^TSTM/; push @nums, @F }
END { for (@nums) {
if (/^(..)(..)(..)(..)$/) { say "$1.$2, -$3.$4" }
}}' day1_status
Got GNU Awk?
awk -v RS='\\s+' '
/[A-Z]/ {p = /^MRGL$/? 1: 0; next}
p {print gensub(/(..)(..)(..)(..)/, "\\1.\\2, -\\3.\\4", "G")}
' file
-v RS='\\s+' - Use any amount of whitespace as the Record Separator
/[A-Z]/ {...} - On records with uppercase alphabetics, do
p = /^MRGL$/? 1: 0; next - Set flag if record is MRGL, else unset, but always skip any other rules.
p {print gensub(...)} - Print result of gensub if flag is set
/(...)/, "\\1", "G" - Capturing groups, Backreferences, Global substitution.

Executing a TCL script from line N

I have a TCL script that, say, has 30 lines of automation code which I am executing in the dc shell (Synopsys Design Compiler). I want to stop and exit the script at line 10, exit the dc shell, and bring it back up again after performing a manual review. However, this time, I want to run the script starting from line number 11, without having to execute the first 10 lines.
Instead of having two scripts, one which contains code till line number 10 and the other having the rest, I would like to make use of only one script and try to execute it from, let's say, line number N.
Something like:
source a.tcl -line 11
How can I do this?
If you have Tcl 8.6+ and if you consider re-modelling your script on top of a Tcl coroutine, you can realise this continuation behaviour in a few lines. This assumes that you run the script from an interactive Tcl shell (dc shell?).
# script.tcl
if {[info procs allSteps] eq ""} {
# We are not re-entering (continuing), so start all over.
proc allSteps {args} {
yield; # do not run when defining the coroutine;
puts 1
puts 2
puts 3
yield; # step out, once first sequence of steps (1-10) has been executed
puts 4
puts 5
puts 6
rename allSteps ""; # self-clean, once the remainder of steps (11-N) have run
}
coroutine nextSteps allSteps
}
nextSteps; # run coroutine
Pack your script into a proc body (allSteps).
Within the proc body: Place a yield to indicate the hold/ continuation point after your first steps (e.g., after the 10th step).
Create a coroutine nextSteps based on allSteps.
Protect the proc and coroutine definitions in a way that they do not cause a re-definition (when steps are pending)
Then, start your interactive shell and run source script.tcl:
% source script.tcl
1
2
3
Now, perform your manual review. Then, continue from within the same shell:
% source script.tcl
4
5
6
Note that you can run the overall 2-phased sequence any number of times (because of the self-cleanup of the coroutine proc: rename):
% source script.tcl
1
2
3
% source script.tcl
4
5
6
Again: All this assumes that you do not exit from the shell, and maintain your shell while performing your review. If you need to exit from the shell, for whatever reason (or you cannot run Tcl 8.6+), then Donal's suggestion is the way to go.
Update
If applicable in your case, you may improve the implementation by using an anonymous (lambda) proc. This simplifies the lifecycle management (avoiding re-definition, managing coroutine and proc, no need for a rename):
# script.tcl
if {[info commands nextSteps] eq ""} {
# We are not re-entering (continuing), so start all over.
coroutine nextSteps apply {args {
yield; # do not run when defining the coroutine;
puts 1
puts 2
puts 3
yield; # step out, once first sequence of steps (1-10) has been executed
puts 4
puts 5
puts 6
}}
}
nextSteps
The simplest way is to open the text file, parse it to get the first N commands (info complete is useful there), and then evaluate those (or the rest of the script). Doing this efficiently produces slightly different code when you're dropping the tail as opposed to when you're dropping the prefix.
proc ReadAllLines {filename} {
set f [open $filename]
set lines {}
# A little bit careful in case you're working with very large scripts
while {[gets $f line] >= 0} {
lappend lines $line
}
close $f
return $lines
}
proc SourceFirstN {filename n} {
set lines [ReadAllLines $filename]
set i 0
set script {}
foreach line $lines {
append script $line "\n"
if {[info complete $script] && [incr i] >= $n} {
break
}
}
info script $filename
unset lines
uplevel 1 $script
}
proc SourceTailN {filename n} {
set lines [ReadAllLines $filename]
set i 0
set script {}
for {set j 0} {$j < [llength $lines]} {incr j} {
set line [lindex $lines $j]
append script $line "\n"
if {[info complete $script]} {
if {[incr i] >= $n} {
info script $filename
set realScript [join [lrange $lines [incr j] end] "\n"]
unset lines script
return [uplevel 1 $realScript]
}
# Dump the prefix we don't need any more
set script {}
}
}
# If we get here, the script had fewer than n lines so there's nothing to do
}
Be aware that the kinds of files you're dealing with can get pretty large, and Tcl currently has some hard memory limits. On the other hand, if you can source the file at all, you're already within that limit…

Awk Calc Avg Rows Below Certain Line

I'm having trouble calculating an average of specific numbers in a column BELOW a specific text identifier using awk. I have two columns of data and I'm trying to start the average keying on a common identifier that repeats, which is 01/1991. So, awk should calc the average of all lines beginning with 01/1991, using that line and the next 21 lines (22 rows in total, one per year for 1991-2012). The desired output is an average for each TextID/Name entry over all the Januarys (01) for the years 1991-2012, shown below:
TextID/Name 1
Avg: 50.34
TextID/Name 2
Avg: 45.67
TextID/Name 3
Avg: 39.97
...
sample data:
TextID/Name 1
01/1991, 57.67
01/1992, 56.43
01/1993, 49.41
..
01/2012, 39.88
TextID/Name 2
01/1991, 45.66
01/1992, 34.77
01/1993, 56.21
..
01/2012, 42.11
TextID/Name 3
01/1991, 32.22
01/1992, 23.71
01/1993, 29.55
..
01/2012, 35.10
continues with the same data for TextID/Name 4
I'm getting an answer using this code shown below but the average is starting to calculate BEFORE the specific identifier line and not on and below that line (01/1991).
awk '$1="01/1991" {sum+=$2} (NR%22==0){avg=sum/22;print"Average: "avg;sum=0;next}' myfile
Thanks, and explanations of the solution are greatly appreciated! I have edited the original post with more description - thank you again.
If you look at your file, the first field is "01/1991," with a comma at the end, not "01/1991". Also, NR%22==0 will look at line numbers divisible by 22, not 22 lines after the point it thinks you care about.
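Both points are easy to check (illustrative):
echo "01/1991, 57.67" | awk '{ print $1 }'                         # 01/1991,  (note the comma)
echo "01/1991, 57.67" | awk '$1 == "01/1991," { print "matched" }' # matched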
You can do something like this instead:
awk '
BEGIN { l=-1; }
$1 == "01/1991," {
l=22;
s=0;
}
l > 0 { s+=$2; l--; }
l == 0 { print s/22; l--; }'
It has a counter l that it sets to the number of lines to count, then it sums up that number of lines.
You may want to consider simply summing all lines from one 01/1991 to the next though, which might be more robust.
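That could look something like this (a sketch; it assumes each block starts with a TextID/Name header and every data line carries its value in the second column, as in the sample):
awk '
/^TextID/ { if (n) printf "Avg: %.2f\n", sum / n   # close out the previous block
            print; sum = 0; n = 0; next }
          { sum += $2; n++ }                       # data line: "01/1991, 57.67"
END       { if (n) printf "Avg: %.2f\n", sum / n }
' myfile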
If you're allowed to use Perl instead of Awk, you could do:
#!/usr/bin/env perl
$have_started = 0;
$count = 0;
$sum = 0;
while (<>) {
    # Start summing at the first 01/1991 line
    $have_started = 1 if /^01\/1991,/;
    # Data lines look like "01/1991, 57.67"; grab the value after the comma
    if ($have_started && /^\d+\/\d+,\s+([\d.]+)/) {
        $count++;
        $sum += $1;
    }
}
print "Average of all values = " . $sum/$count . "\n";
Run it like so:
$ cat your-text-file.txt | ./above-perl-script.pl

Calculate sum of size notated figures?

I want to calculate the total size of all .mobi files from this
link (it's a good link by the way).
In my attempt to make this a learning experience, I have made a 'pipe' (let's call it a) that outputs all the sizes from that page, which look like:
189K
20M
549K
2.2M
1.9M
3.1M
2.5M
513K
260K
1.1M
2.8M
5.1M
3.7M
1.5M
5.6M
1.0M
5.6M
1.5M
4.9M
3.4M
810K
My target is to get the total size (ex: 50.50M, or 50000K) - sum of all these numbers.
My question is, how to calculate that target using a pipeline (a | some_other_commands). Answers using Python or any other language (preferably one-liners) are welcome. Thanks a lot.
For fun, a solution in shell:
a | sed -e 's/M$/ 1024 * +/' -e 's/K$/ +/' | dc -e '0' -f - -e 'p'
Perl one-liner:
a | perl -ne 's/^([\d.]+)M$/$1*1024/e;$sum+=$_; END{print $sum."K"}'
It assumes that all entries are in either kilobytes or megabytes, as in the OP's input.
Sigh, someone says “one-liner” and all my code-golf reflexes fire...
ruby -e 'puts $<.read.split.inject(0){ |m,e| m += e.to_f * { "M" => 1, "K" => 0.001 }[e[-1,1]]}.to_s+"M"'
or, with some shortcuts...
ruby -ne 'p @e=@e.to_f+$_.to_f*{"M"=>1,"K"=>0.001}[$_[-2,1]]'
Update: Heh, ok, hard to read. The OP asked for a "one liner". :-)
#!/usr/bin/env ruby
total = 0
while s = gets # get line
scalefactorMK = s.chomp[-1,1] # get the M or K
scalefactor = { 'M'=>1,'K'=>0.001 }[scalefactorMK] # get numeric scale
total += s.to_f * scalefactor # accumulate total
end
puts "%5.1fM" % [total]
if you have Ruby (1.9+)
require 'net/http'
url="http://hewgill.com/~greg/stackoverflow/ebooks/"
response = Net::HTTP.get_response( URI.parse(url) )
data=response.body
total=0
data.split("\n").each do |x|
if x=~/\.mobi/
size = x.split(/\s+/)[-1]
c = case size[-1]
when 'K' then 1024
when 'M' then 1024 * 1024
when 'G' then 1024 * 1024 * 1024
end
total += size.to_f * c
end
end
puts "Total size: %.2f MB" % ( total/(1024.0 * 1024.0) )
awk (assume files less than 1K don't substantially add to the total):
a | awk '/K/ {sum += $1/1024} /M/ {sum += $1} END {printf("%.2fM\n", sum)}'
