Fetch values from a particular file and display swapped values in the terminal

I have a file named input.txt which contains student data in StudentName|Class|SchoolName format:
Shriii|Fourth|ADCET
Chaitraliii|Fourth|ADCET
Shubhangi|Fourth|ADCET
Prathamesh|Third|RIT
I want to display these values in reverse order for a particular college. Example:
ADCET|Fourth|Shriii
ADCET|Fourth|Chaitraliii
I used grep 'ADCET$' input.txt, which gives this output:
Shriii|Fourth|ADCET
Chaitraliii|Fourth|ADCET
But I want it in reverse order. I also used grep 'ADCET$' input.txt | sort -r but didn't get the required output.

You may use either of the following sed or awk solutions:
grep 'ADCET$' input.txt | sed 's/^\([^|]*\)\(|.*|\)\([^|]*\)$/\3\2\1/'
grep 'ADCET$' input.txt | awk 'BEGIN {OFS=FS="|"} {temp=$NF;$NF=$1;$1=temp;}1'
awk details
BEGIN {OFS=FS="|"} - the field separator is set to | and the same char will be used for output
{temp=$NF;$NF=$1;$1=temp;}1:
temp=$NF; - the last field value is assigned to a temp variable
$NF=$1; - the last field is set to Field 1 value
$1=temp; - the value of Field 1 is set to temp
1 - a truthy pattern with no action block; it makes awk print the (now modified) record.
sed details
^ - start of the line
\([^|]*\) - Capturing group 1: any 0+ chars other than |
\(|.*|\) - Capturing group 2: |, then any 0+ chars and then a |
\([^|]*\) - Capturing group 3: any 0+ chars other than |
$ - end of line.
In the replacement, \3\2\1 are backreferences to the values captured into Groups 3, 2 and 1, written in swapped order.
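The grep filter can also be folded into the awk itself, so the whole job is one command. A minimal sketch, rebuilding the same input.txt as above:

```shell
# Build the sample file, then let awk both filter on the last field
# and swap it with the first one:
printf '%s\n' 'Shriii|Fourth|ADCET' 'Chaitraliii|Fourth|ADCET' \
              'Shubhangi|Fourth|ADCET' 'Prathamesh|Third|RIT' > input.txt
awk 'BEGIN{OFS=FS="|"} $NF=="ADCET"{t=$1; $1=$NF; $NF=t; print}' input.txt
# ADCET|Fourth|Shriii
# ADCET|Fourth|Chaitraliii
# ADCET|Fourth|Shubhangi
```

Because print appears inside the condition block, non-matching records (the RIT line) are silently skipped.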


Remove multiple file extensions when using GNU parallel and cat in bash

I have a csv file (separated by comma), which contains
file1a.extension.extension,file1b.extension.extension
file2a.extension.extension,file2b.extension.extension
The problem is that these files are named like file.extension.extension.
I'm trying to feed both columns to parallel while removing all extensions.
I tried some variations of:
cat /home/filepairs.csv | sed 's/\..*//' | parallel --colsep ',' echo column 1 = {1}.extension.extension column 2 = {2}
Which I expected to output
column 1 = file1a.extension.extension column 2 = file1b
column 1 = file2a.extension.extension column 2 = file2b
But outputs:
column 1 = file1a.extension.extension column 2 =
column 1 = file2a.extension.extension column 2 =
The sed command is working but is feeding only column 1 to parallel
As currently written the sed only prints one name per line:
$ sed 's/\..*//' filepairs.csv
file1a
file2a
Where:
\. matches on first literal period (.)
.* matches rest of line (ie, everything after the first literal period to the end of the line)
// says to remove everything from the first literal period to the end of the line
I'm guessing what you really want is two names per line ... one sed idea:
$ sed 's/\.[^,]*//g' filepairs.csv
file1a,file1b
file2a,file2b
Where:
\. matches on first literal period (.)
[^,]* matches on everything up to a comma (or end of line)
//g says to remove the literal period, everything afterwards (up to a comma or end of line), and the g says to do it repeatedly (in this case the replacement occurs twice)
NOTE: I don't have parallel on my system so unable to test that portion of OP's code
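A quick self-contained check of the sed alone on the sample rows (parallel isn't needed for this part):

```shell
# Strip everything from each literal period up to the next comma
# or end of line, in both columns:
printf '%s\n' 'file1a.extension.extension,file1b.extension.extension' \
              'file2a.extension.extension,file2b.extension.extension' \
  | sed 's/\.[^,]*//g'
# file1a,file1b
# file2a,file2b
```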
Use --plus:
$ cat filepairs.csv | parallel --plus --colsep , echo {1..} {2..}
file1a file1b
file2a file2b
If the input is CSV:
$ cat filepairs.csv | parallel --plus --csv echo {1..} {2..}
file1a file1b
file2a file2b

How to convert a line into camel case?

This picks all the text on a single line after a pattern match and converts it to camel case, using non-alphanumeric characters as separators and removing the spaces at the beginning and end of the resulting string. (1) This doesn't replace when there are 2 consecutive non-alphanumeric chars, e.g. "2, " in the example below. (2) Is there a way to do everything using a sed command instead of using grep, cut, sed and tr?
$ echo " hello
world
title: this is-the_test string with number 2, to-test CAMEL String
end! " | grep -o 'title:.*' | cut -f2 -d: | sed -r 's/([^[:alnum:]])([0-9a-zA-Z])/\U\2/g' | tr -d ' '
ThisIsTheTestStringWithNumber2,ToTestCAMELString
To answer your first question, change [^[:alnum:]] to [^[:alnum:]]+ to match one or more non-alnum chars.
You may combine all the commands into a GNU sed solution like
sed -En '/.*title: *(.*[[:alnum:]]).*/{s//\1/;s/([^[:alnum:]]+|^)([0-9a-zA-Z])/\U\2/gp}'
Details
-En - POSIX ERE syntax is on (E) and default line output suppressed with n
/.*title: *(.*[[:alnum:]]).*/ - matches a line having title: capturing all after it up to the last alnum char into Group 1 and matching the rest of the line
{s//\1/;s/([^[:alnum:]]+|^)([0-9a-zA-Z])/\U\2/gp} - if the line is matched,
s//\1/ - remove all but Group 1 pattern (received above)
s/([^[:alnum:]]+|^)([0-9a-zA-Z])/\U\2/ - match and capture start of string or 1+ non-alnum chars into Group 1 (with ([^[:alnum:]]+|^)) and then capture an alnum char into Group 2 (with ([0-9a-zA-Z])) and replace with uppercased Group 2 contents (with \U\2).
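A sketch of the combined command run on the original sample input (GNU sed, for the \U case conversion):

```shell
# Only the "title:" line survives the filter; everything after the
# match is camel-cased in one pass.
echo " hello
world
title: this is-the_test string with number 2, to-test CAMEL String
end! " \
  | sed -En '/.*title: *(.*[[:alnum:]]).*/{s//\1/;s/([^[:alnum:]]+|^)([0-9a-zA-Z])/\U\2/gp}'
# ThisIsTheTestStringWithNumber2ToTestCAMELString
```

Note that, because of the +, the comma after "2" is consumed along with the space, unlike in the original four-command pipeline.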

Allow only specific characters else null should transfer in Unix

The characters allowed in the 2nd column are 0 to 9, A to Z and the symbols "+" and "-". If the 2nd column contains only allowed characters, the complete record should be transferred; else null should be transferred in the 2nd column.
Input
- 1|89+
- 2|-AB
- 3|XY*
- 4|PR%
Output
- 1|89+
- 2|-AB
- 3|<null>
- 4|<null>
grep -E '^[a-zA-Z0-9\+\-\|]+$' file > file1
but the above code discards the complete record if no match is found. I need all records: if a match is found the record should transfer as-is, else null should transfer.
Use sed to replace everything after the pipe (zero or more characters in the class of digits, letters, plus or minus, followed by one character not in that class, then the rest of the string) with a pipe only. Note the pipe is matched unescaped, since GNU sed treats \| in a BRE as alternation:
sed 's/|[0-9a-zA-Z+-]*[^0-9a-zA-Z+-].*$/|/' file
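A self-contained check of the sed approach on the sample rows:

```shell
# Lines whose 2nd column holds only allowed characters pass through;
# any disallowed character empties the column after the pipe.
printf '%s\n' '1|89+' '2|-AB' '3|XY*' '4|PR%' \
  | sed 's/|[0-9a-zA-Z+-]*[^0-9a-zA-Z+-].*$/|/'
# 1|89+
# 2|-AB
# 3|
# 4|
```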
Using awk and character classes where supported:
$ awk 'BEGIN{FS=OFS="|"}$2~/[^[:alnum:]+-]/{$2=""}1' file
1|89+
2|-AB
3|
4|
Where not supported (such as mawk) use:
$ awk 'BEGIN{FS=OFS="|"}$2~/[^A-Za-z0-9+-]/{$2=""}1' file

How to add a constant number to all entries of a row in a text file in bash

I want to add or subtract a constant number from all entries of a row in a text file in Bash.
eg. my text file looks like:
21.018000 26.107000 51.489000 71.649000 123.523000 127.618000 132.642000 169.247000 173.276000 208.721000 260.032000 264.127000 320.610000 324.639000 339.709000 354.779000 385.084000
(it has only one row)
and I want to subtract value 18 from all columns and save it in a new file. What is the easiest way to do this in bash?
Thanks a lot!
Use simple awk like this:
awk '{for (i=1; i<=NF; i++) $i -= 18} 1' file >> $$.tmp && mv $$.tmp file
cat file
3.018 8.107 33.489 53.649 105.523 109.618 114.642 151.247 155.276 190.721 242.032 246.127 302.61 306.639 321.709 336.779 367.084
Taking advantage of awks RS and ORS variables we can do it like this:
awk 'BEGIN {ORS=RS=" "}{print $1 - 18 }' your_file > your_new_filename
It sets the record separator for input and output to space. This makes every field a record of its own and we have only to deal with $1.
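A minimal sketch of the record-separator trick; note that because ORS is a space, the output ends with a trailing space rather than a newline:

```shell
# Each space-separated number becomes its own record, so $1 is the
# whole record and the loop over fields disappears.
printf '21.018 26.107\n' | awk 'BEGIN{ORS=RS=" "}{print $1 - 18}'
```

If the trailing separator matters, pipe through something like sed 's/ $//' to clean it up.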
Give a try to this compact and funny version:
$ printf "%s 18-n[ ]P" $(cat text.file) | dc
dc is a reverse-polish desk calculator (hehehe).
printf generates one string per number. The first string is 21.018000 18-n[ ]P. Other strings follow, one per number.
21.018000 18: the values separated with spaces are pushed to the dc stack.
- Pops two values off, subtracts the first one popped from the second one popped, and pushes the result.
n Prints the value on the top of the stack, popping it off, and does not print a newline after.
[ ] add string (space) on top of the stack.
P Pops off the value on top of the stack. If it is a string, it is simply printed without a trailing newline.
The test with an additional sed to replace the useless last (space) char with a new line:
$ printf "%s 18-n[ ]P" $(cat text.file) | dc | sed "s/ $/\n/" > new.file
$ cat new.file
3.018000 8.107000 33.489000 53.649000 105.523000 109.618000 114.642000 151.247000 155.276000 190.721000 242.032000 246.127000 302.610000 306.639000 321.709000 336.779000 367.084000
----
For history a version with sed:
$ sed "s/\([1-9][0-9]*[.][0-9][0-9]*\)\{1,\}/\1 18-n[ ]P/g" text.file | dc
With Perl which will work on multiply rows:
perl -i -nlae '@F = map {$_ - 18} @F; print "@F"' num_file
# ^ ^^^^ ^
# | |||| Printing an array in quotes will join
# | |||| with spaces
# | |||Evaluate code instead of expecting filename.pl
# | ||Split input on spaces and store in @F
# | |Remove (chomp) newline and add newline after print
# | Read each line of specified file (num_file)
# Inplace edit, change original file, take backup with -i.bak

Parsing a file and building a query

I have a file which contains
CF=test1
HOST=kp10
USER=user1
PASSWORD=password1
CF=test2
HOST=kp11
USER=user2
PASSWORD=password2
I want to build a query by parsing the file, i.e. group every 4 lines and take out the values:
insert into x=test1 ,host=kp10,user=user1,password=password1
insert into x=test2 ,host=kp11,user=user2,password=password2
It's pretty easy:
$ cat file | paste -d, - - - - | tr '[[:upper:]]' '[[:lower:]]' | sed 's|.*|insert into &|'
insert into cf=test1,host=kp10,user=user1,password=password1
insert into cf=test2,host=kp11,user=user2,password=password2
Step by step:
cat passes the file's content to paste
paste joins every 4 lines together, using a comma as the separator
tr converts upper case to lower case
sed prepends the string 'insert into ' to each line
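The pipeline can be reproduced self-contained with a here-document (the four - arguments make paste read standard input four times in turn, which is what folds every 4 lines into one):

```shell
# Fold groups of 4 lines into one comma-separated line, lower-case it,
# and prepend the SQL-ish prefix:
paste -d, - - - - <<'EOF' | tr '[:upper:]' '[:lower:]' | sed 's|.*|insert into &|'
CF=test1
HOST=kp10
USER=user1
PASSWORD=password1
EOF
# insert into cf=test1,host=kp10,user=user1,password=password1
```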
