I am trying to convert all negative numbers to positive numbers and have so far come up with this:
echo "-32 45 -45 -72" | sed -re 's/\-([0-9])([0-9])\ /\1\2/p'
but it is not working as it outputs:
3245 -45 -72
I thought that by using \1\2 I would get the positive number back?
Where am I going wrong?
Why not just remove the -'s?
[root@vm ~]# echo "-32 45 -45 -72" | sed 's/-//g'
32 45 45 72
My first thought is to not use sed if you don't have to. awk understands that these are numbers and can convert them accordingly:
echo "-32 45 -45 -72" | awk -vRS=" " -vORS=" " '{ print ($1 < 0) ? ($1 * -1) : $1 }'
-vRS sets the "record separator" to a space, and -vORS sets the "output record separator" to a space. It then checks each value: if it's less than 0 it multiplies it by -1, otherwise it just prints the number as-is.
In my opinion, if you don't have to use sed, this is more "correct," since it treats numbers like numbers.
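If you would rather keep everything on one line and skip the record-separator settings, here is an equivalent sketch that loops over the fields instead (it prints 32 45 45 72):
echo "-32 45 -45 -72" | awk '{for (i = 1; i <= NF; i++) $i = ($i < 0 ? -$i : $i); print}'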
This might work for you:
echo "-32 45 -45 -72" | sed 's/-\([0-9]\+\)/\1/g'
The reasons why your regex is failing:
You're only doing a single substitution (no g).
Your replacement has no space at the end.
The last number has no space following it, so it will always fail to match.
This would work too but less elegantly (and only for 2 digit numbers):
echo "-32 45 -45 -72" | sed -rn 's/-([0-9])([0-9])(\s?)/\1\2\3/gp'
Of course for this example only:
echo "-32 45 -45 -72" | tr -d '-'
You are treating the numbers as a string of characters. It would be more appropriate to store the numbers in an array and use the built-in Shell Parameter Expansion to remove the minus sign:
[~] $ # Creating an array with an arbitrary name:
[~] $ array17=(-32 45 -45 -72)
[~] $ # Calling all elements of the array and removing the first minus sign:
[~] $ echo ${array17[*]/-}
32 45 45 72
[~] $
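For what it's worth, the same expansion also works on a single variable:
[~] $ n=-32
[~] $ echo ${n/-}
32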
Sorry in advance for the beginner question, but I'm quite stuck and keen to learn.
I am trying to echo a string (in hex) and then cut a piece of it with the cut command. It looks like this:
for y in "${Offset}"; do
echo "${entry}" | cut -b 60-$y
done
Where echo ${Offset} results in
75 67 69 129 67 567 69
I would like each entry to be printed, and then cut from the 60th byte until the respective number in $Offset.
So the first entry would be cut 60-75.
However, I get an error:
cut: 67: No such file or directory
cut: 69: No such file or directory
cut: 129: No such file or directory
cut: 67: No such file or directory
cut: 567: No such file or directory
cut: 69: No such file or directory
I tried adding/removing parentheses around each variable but never got the right result.
Any help will be appreciated!
UPDATE: I updated the code with the changes from markp-fuso. However, this code still does not work as intended. I would like to print every entry cut by its respective offset, but it goes wrong: it prints every entry seven times, once for each of the seven offsets. Any ideas on how to fix this?
#!/bin/bash
MESSAGES=$( sqlite3 -csv file.db 'SELECT quote(data) FROM messages' | tr -d "X'" )
for entry in ${MESSAGES}; do
Offset='75 67 69 129 67 567 69'
for y in $Offset; do
echo "${entry:59:(y-59)}"
done
done
echo ${MESSAGES}
Results in seven strings with a minimum length of 80 bytes and a maximum of 600.
My output should be:
String one: cut by first offset
String two: cut by second offset
and so on...
In order for the for loop to iterate over each space-separated "word" in $Offset, you need to get rid of the quotes, which make it read as a single word.
for y in ${Offset}; do
echo "${entry}" | cut -b 60-$y
done
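If it helps to see what the quotes change, here is a quick sketch with a made-up Offset value:
Offset='75 67 69'
for y in "${Offset}"; do echo "[$y]"; done   # one iteration: [75 67 69]
for y in ${Offset}; do echo "[$y]"; done     # three iterations: [75], [67], [69]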
To eliminate the sub-process that's going to be invoked due to the | cut ..., we could look at a comparable parameter expansion solution ...
Quick reminder on how to extract a substring from a variable:
${variable:start_position:length}
Keep in mind that the first character in ${variable} is at position zero (0).
Next, we need to convert each individual offset (y) into a 'length':
length=$((y-60+1))
Rolling these changes into your code (and removing the quotes from around ${Offset}) gives us:
for y in ${Offset}
do
start=$((60-1))
length=$((y-60+1))
echo "${entry:${start}:${length}}"
#echo "${entry:59:(y-59)}"
done
NOTE: You can also replace the start/length/echo with the single commented-out echo.
Using a smaller data set for demo purposes, and using 3 (instead of 60) as the start of our extraction:
# base-10 character position
# 1 2
# 123456789012345678901234567
$ entry='123456789ABCDEFGHIabcdefghi'
$ echo ${#entry} # length of entry?
27
$ Offset='5 8 10 13 20'
$ for y in ${Offset}
do
start=$((3-1))
length=$((y-3+1))
echo "${entry:${start}:${length}}"
done
345 # 3-5
345678 # 3-8
3456789A # 3-10
3456789ABCD # 3-13
3456789ABCDEFGHIab # 3-20
And consolidating the start/length/echo into a single echo:
$ for y in ${Offset}
do
echo "${entry:2:(y-2)}"
done
345 # 3-5
345678 # 3-8
3456789A # 3-10
3456789ABCD # 3-13
3456789ABCDEFGHIab # 3-20
Hi, is there any way to get the ASCII values of an alphanumeric string without reading a single character at a time?
For example, if I enter A, the output should be 65.
If I enter Onkar123#, how do I calculate the ASCII values of this string?
I also want the sum of the ASCII values produced by the above string.
Try using echo "test" | hexdump -e '16/1 "%02x " "\n"', replacing test with Onkar123# or anything else.
I don't know what kind of output you expect, nor why you care whether the string is processed one character at a time, or how you'd even know whether a given tool is going one character at a time (and how else COULD any tool be doing this anyway?). So I don't know if this is the kind of answer you're looking for, but maybe it will point you in a direction at least:
$ printf '%s' "Onkar123#" | awk -l ordchr -v RS='.{1}' '{print ord(RT)}'
79
110
107
97
114
49
50
51
35
The above uses GNU awk for ord() in the ordchr library.
Based on one of your comments, it sounds like this might be what you're looking for:
$ printf '%s' "Onkar123#" | awk -l ordchr -v RS='.{1}' '{s+=ord(RT)} END{print s+0}'
692
od
There's really no such thing as the ASCII value of a string. There is such a thing as the decimal (or octal, or hexadecimal) value of each ASCII character in a string, though.
Since you don't seem to have hexdump, try the od (octal dump) utility. I don't think I've ever seen a *nix system that didn't have od.
$ echo "Onkar123#" | od -An -t d1
79 110 107 97 114 49 50 51 35 10
I guess endianness might come into play. But od has a --endian argument for that.
awk
It's a lot harder in awk. I think you have to build a lookup table, then look up the decimal code for each character in the input. That means you still have to process one character at a time.
# output-decimal-ascii.awk -- write ASCII decimal codes for input
BEGIN {
# 128 for ASCII; 256 for extended ASCII
for(n = 0; n < 128; n++) {
ascii_table[sprintf("%c",n)] = n
}
}
{
split($0, arr, "")
for (i = 1; i <= length(arr); i++) {
printf("%d ", ascii_table[arr[i]])
}
print ""
}
$ echo "Onkar123#" | awk -f code/awk/output-decimal-ascii.awk
79 110 107 97 114 49 50 51 35
To sum the numbers use:
echo "test" | od -An -t d1 | xargs | sed "s/ /+/g" | bc
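For example, with the string from the question (note that echo appends a newline, so its value 10 ends up in the sum):
echo "Onkar123#" | od -An -t d1 | xargs | sed "s/ /+/g" | bc
702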
I have a little script to extract specific data and clean up the output a little. It seems overly messy and I'm wondering if the script can be trimmed down a bit.
The input file contains of pairs of lines -- names, followed by numbers.
Line pairs where the numeric value is not between 80 and 199 should be discarded.
Pairs may sometimes, but will not always, be preceded or followed by blank lines, which should be ignored.
Example input file:
al12t5682-heapmemusage-latest.log
38
al12t5683-heapmemusage-latest.log
88
al12t5684-heapmemusage-latest.log
100
al12t5685-heapmemusage-latest.log
0
al12t5686-heapmemusage-latest.log
91
Example/wanted output:
al12t5683 88
al12t5684 100
al12t5686 91
Current script:
grep --no-group-separator -PxB1 '([8,9][0-9]|[1][0-9][0-9])' inputfile.txt \
| sed 's/-heapmemusage-latest.log//' \
| awk '{$1=$1;printf("%s ",$0)};NR%2==0{print ""}'
Extra input example
al14672-heapmemusage-latest.log
38
al14671-heapmemusage-latest.log
5
g4t5534-heapmemusage-latest.log
100
al1t0000-heapmemusage-latest.log
0
al1t5535-heapmemusage-latest.log
al1t4676-heapmemusage-latest.log
127
al1t4674-heapmemusage-latest.log
53
A1t5540-heapmemusage-latest.log
54
G4t9981-heapmemusage-latest.log
45
al1c4678-heapmemusage-latest.log
81
B4t8830-heapmemusage-latest.log
76
a1t0091-heapmemusage-latest.log
88
al1t4684-heapmemusage-latest.log
91
Extra Example expected output:
g4t5534 100
al1t4676 127
al1c4678 81
a1t0091 88
al1t4684 91
another awk
$ awk -F- 'NR%2{p=$1; next} 80<=$1 && $1<=199 {print p,$1}' file
al12t5683 88
al12t5684 100
al12t5686 91
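In case the idiom is unfamiliar, the same program spelled out with comments (identical logic):
awk -F- '
NR % 2 { p = $1; next }                  # odd lines: remember the name part before the first "-"
80 <= $1 && $1 <= 199 { print p, $1 }    # even lines: print the stored name and the number if in range
' file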
UPDATE
For the empty-line record delimiter:
$ awk -v RS= '80<=$2 && $2<=199{sub(/-.*/,"",$1); print}' file
al12t5683 88
al12t5684 100
al12t5686 91
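A quick note on the RS= trick: an empty RS puts awk into paragraph mode, so each blank-line-separated block becomes one record and newline also acts as a field separator, which is why $1 is the name line and $2 the number line. A tiny illustration with made-up input:
printf 'name-a.log\n88\n\nname-b.log\n42\n' | awk -v RS= '{print NR": "$1" / "$2}'
1: name-a.log / 88
2: name-b.log / 42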
Consider implementing this in native bash, as in the following (which can be seen running with your sample input -- including sporadically-present blank lines -- at http://ideone.com/Qtfmrr):
#!/bin/bash
name=; number=
while IFS= read -r line; do
[[ $line ]] || continue # skip blank lines
[[ -z $name ]] && { name=$line; continue; } # first non-blank line becomes name
number=$line # second one becomes number
if (( number >= 80 && number < 200 )); then
name=${name%%-*} # prune everything after first "-"
printf '%s %s\n' "$name" "$number" # emit our output
fi
name=; number= # clear the variables
done <inputfile.txt
The above uses no external commands whatsoever -- so whereas it might be slower to run over large input than a well-implemented awk or perl script, it also has far shorter startup time since no interpreter other than the already-running shell is required.
See:
BashFAQ #1 - How can I read a file (data stream, variable) line-by-line (and/or field-by-field)?, describing the while read idiom.
BashFAQ #100 - How do I do string manipulations in bash?; or The Bash-Hackers' Wiki on parameter expansion, describing how name=${name%%-*} works.
The Bash-Hackers' Wiki on arithmetic expressions, describing the (( ... )) syntax used for numeric comparisons.
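As a quick illustration of the name=${name%%-*} expansion used above, with one of the names from the sample input:
name='al12t5683-heapmemusage-latest.log'
echo "${name%%-*}"
al12t5683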
perl -nle's/-.*//; $n=<>; print "$_ $n" if 80<=$n && $n<=199' inputfile.txt
With GNU sed:
sed -E '
N
/\n[8-9][0-9]$/bA
/\n1[0-9]{2}$/!d
:A
s/([^-]*).*\n([0-9]+$)/\1 \2/
' infile
So what I'm trying to do is this: I've been using keybr.com to sharpen my typing skills, and on this site you can "provide your own custom text." I've been taking chapters out of books to type so it's a little more interesting than just typing groups of letters. Now I also want to insert numbers into the text. Specifically, between each word I want something like "393", and random sets smaller and larger than that example.
So I have saved a chapter of a book into a file in my home folder. Now I just need a command that searches for spaces, inserts a group of numbers, and adds a space, so a sentence would look like this: The 293 dog 328 is 102 black. 334 The... etc.
I have looked up Linux commands through search engines and I've found out how to replace strings in text files with:
sed -i 's/original/new/g' file.txt
and how to generate random numbers with:
$ shuf -i MIN-MAX -n COUNT
I just cannot figure out how to put together a one-line command that will insert random numbers between each word. I'm still searching, so thanks to anyone who takes the time to read my problem.
Perl to the rescue!
perl -pe 's/ /" " . (100 + int rand 900) . " "/ge' < input.txt > output.txt
-p reads the input line by line; after reading a line, it runs the code and prints the line to the output
s/// is similar to the substitution you know from sed
/g means global, i.e. it substitutes as many times as possible
/e means the replacement part is a code to run. In this case, the code generates a random number (100-999).
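A sample run on the question's example sentence (the numbers will differ on every invocation):
echo 'The dog is black. The' | perl -pe 's/ /" " . (100 + int rand 900) . " "/ge'
The 293 dog 328 is 102 black. 334 The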
Given:
$ echo "$txt"
Here is some random words. Please
insert a number a space between each one.
Here is a simple awk to do that:
$ echo "$txt" | awk '{for (i=1;i<=NF;i++) printf "%s %d ", $i, rand()*100; print ""}'
Here 92 is 59 some 30 random 57 words. 74 Please 78
insert 43 a 33 number 77 a 10 space 78 between 83 each 76 one. 49
And here is roughly the same thing in pure Bash:
while read -r line; do
for word in $line; do
printf "%s %s " "$word" "$((1 + RANDOM % 100))"
done
echo
done < <(echo "$txt")
I have a question. I have a file with coordinates (TAB separated)
2 10
35 50
90 200
400 10000
...
I would like to subtract the second column of each line from the first column of the next line, i.e. calculate the distance. I would like a file with
25
40
200
...
How could I do that using awk?
Thank you very much in advance
Here is an awk one-liner that may help you:
kent$ awk 'a{print $1-a}{a=$2}' file
25
40
200
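The same one-liner spelled out with comments (identical behavior):
awk '
a { print $1 - a }    # from the second line on: current first column minus previous second column
  { a = $2 }          # remember the second column of this line for the next round
' file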
Here's a pure bash solution:
{
read _ ps
while read f s; do
echo $((f-ps))
((ps=s))
done
} < input_file
This only works if you have (small) integers, as it uses bash's arithmetic. If you want to deal with arbitrary sized integers or floats, you can use bc (with only one fork):
{
read _ ps
while read f s; do
printf '%s-%s\n' "$f" "$ps"
ps=$s
done
} < input_file | bc
Now I'll leave it to the others to give an awk answer!
Alright, since nobody wants to upvote my answer, here's a really funny solution that uses bash and bc:
a=( $(<input_file) )
printf -- '-(%s)+(%s);\n' "${a[@]:1:${#a[@]}-2}" | bc
or the same with dc (shorter but doesn't work with negative numbers):
a=( $(<input_file) )
printf '%s %sr-pc' "${a[@]:1:${#a[@]}-2}" | dc
using sed and ksh for evaluation
sed -n "
1x
1!H
$ !b
x
s/^ *[0-9]\{1,\} \(.*\) [0-9]\{1,\} *\n* *$/\1 /
s/\([0-9]\{1,\}\)\(\n\)\([0-9]\{1,\}\) /echo \$((\3 - \1))\2/g
s/\n *$//
w /tmp/Evaluate.me
"
. /tmp/Evaluate.me
rm /tmp/Evaluate.me