Bash: read a file and get the text between two values

Any idea how to read a file and get the text between two values?
Let's say I have a file with JSON info:
lastActiveTimes:{"707514313":1505584723,"100004389551456":1505591385},chatNotif:0}, plus a lot more JSON info.
And I want to get everything from Start: to End, i.e. read the file and return just
{"707514313":1505584723,"100004389551456":1505591385}
I'm using bash on OSX.

When the text between Start: and End spans more than one line, you can use
sed -n '/Start:/,/End/p' file | sed '1s/.*Start://; $s/End.*//'
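For instance, with a made-up two-line sample (the file name and contents are only for illustration):
$ printf 'junk Start:{"a":1,\n"b":2} End junk\n' > sample
$ sed -n '/Start:/,/End/p' sample | sed '1s/.*Start://; $s/End.*//'
{"a":1,
"b":2}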
UPDATE:
The question has changed; now the content can be anything, like
sed 's/.*\({[^}]*}\).*/\1/' file
or
grep -Eo "{[^}]*}" file
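For example, run against the sample line from the question, this yields the JSON object:
$ grep -Eo "{[^}]*}" file
{"707514313":1505584723,"100004389551456":1505591385}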

With GNU grep and Perl-compatible regular expressions (-P):
grep -Poz '(?<=Start:).*(\n)*.*(?= End)' file
Output:
aaaaaaaaaabbbbbbbb
ccccccc
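Output like that would come from an input file along these lines (an assumed sample; the question's actual file isn't shown):
$ printf 'Start:aaaaaaaaaabbbbbbbb\nccccccc End' > file
$ grep -Poz '(?<=Start:).*(\n)*.*(?= End)' file
aaaaaaaaaabbbbbbbb
ccccccc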

Combine multiple sed commands into one

I have a file example.txt, and I want to delete and replace fields in it.
The following commands work, but in a very messy way; unfortunately I'm a rookie with the sed command.
The commands I used:
sed 's/-I\.\.\/\.\.\/\.\./\n/g' example.txt > example.txt1
sed 's/-I/\n/g' example.txt1 > example.txt2
sed '/^[[:space:]]*$/d' example.txt2 > example.txt3
sed 's/\.\.\/\.\.\/\.\.//g' example.txt3 > example.txt
and then I'm deleting all the unnecessary files.
I'm trying to get the following result:
Common/Components/Component
Common/Components/Component1
Common/Components/Component2
Common/Components/Component3
Common/Components/Component4
Common/Components/Component5
Common/Components/Component6
Comp
App
The file looks like this:
-I../../../Common/Component -I../../../Common/Component1 -I../../../Common/Component2 -I../../../Common/Component3 -I../../../Common/Component4 -I../../../Common/Component5 -I../../../Common/Component6 -IComp -IApp ../../../
I want to know the best way to transform the input format into the output format with a standard text-processing tool, in a single call to sed or awk.
With your shown samples, please try the following awk code (written and tested with GNU awk):
awk -v RS='-I\\S+' 'RT{sub(/^-I.*Common\//,"Common/Components/",RT);sub(/^-I/,"",RT);print RT}' Input_file
output with samples will be as follows:
Common/Components/Component
Common/Components/Component1
Common/Components/Component2
Common/Components/Component3
Common/Components/Component4
Common/Components/Component5
Common/Components/Component6
Comp
App
Explanation: in GNU awk, set RS (the record separator) to -I\\S+, i.e. -I followed by everything up to the next space. In the main awk program, check that RT (the text that actually matched RS) is not null, substitute the leading -I up to Common/ with Common/Components/ in RT, then substitute any remaining leading -I with the empty string in RT, and print RT.
If you don't REALLY want the string /Components to be added in the middle of some output lines then this may be what you want, using any awk in any shell on every Unix box:
$ awk -v RS=' ' 'sub("^-I[./]*","")' file
Common/Component
Common/Component1
Common/Component2
Common/Component3
Common/Component4
Common/Component5
Common/Component6
Comp
App
That would fail if any of the paths in your input contained blanks but you don't show that as a possibility in your question so I assume it can't happen.
What about
sed -i 's/-I\.\.\/\.\.\/\.\./\n/g
s/-I/\n/g
/^[[:space:]]*$/d
s/\.\.\/\.\.\/\.\.//g' example.txt
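The same thing as a one-liner with -e expressions (a sketch; note that \n in the replacement text relies on GNU sed):
sed -i -e 's/-I\.\.\/\.\.\/\.\./\n/g' -e 's/-I/\n/g' -e '/^[[:space:]]*$/d' -e 's/\.\.\/\.\.\/\.\.//g' example.txt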

How to find lines containing the current date (in %d-%h-%Y format, assigned to a variable) in all files

I have a directory under which I have many access files like:
access
access87681
access98709
Now I am trying to grep all the lines containing the current date in the format +%d-%h-%Y.
I have written the following:
tm1=$(date '+%d-%h-%Y')
sed -n '/$tm1/p' $dir/access* > $loc/OUD_Req_Res_matrix_data
I am trying to grep all $dir/access* files for $tm1, which is the current date in the above format, and push the matching lines into the output file $loc/OUD_Req_Res_matrix_data.
The above code is not working. Please suggest a fix.
Your sed call fails because single quotes prevent the shell from expanding $tm1. With tm1=$(date '+%d-%h-%Y') and access files whose names contain no spaces, you can use:
sed -n "/${tm1}/p" $dir/access*
If the date format contained slashes, the sed command would break.
You can escape the slashes with
sed 's#/#\\/#g' <<< "$tm1"
Use this in your command:
sed -n "/$(sed 's#/#\\/#g' <<< "$tm1")/p" $dir/access*

How to read all text files with the head Linux command?

I can't read .txt files with cat, strings, or most other commands because they are not allowed. I need to read a file named flag.txt, but that name is also on the blacklist. So, is there any way to read *.txt using the head command? The head command is allowed.
blacklist=\
'flag\|<\|$\|"\|'"'"'\|'\
'cat\|tac\|*\|?\|less\|more\|pico\|nano\|edit\|hexdump\|xxd\|'\
'sed\|tail\|diff\|grep\|paste\|strings\|bas64\|sort\|uniq\|cut\|awk\|'\
'bzip\|gzip\|xz\|tar\|ar\|'\
'mv\|cp\|ln\|nl\|'\
'python\|perl\|sh\|cc\|g++\|php\|hd\|g++\|gcc\|curl\|tcp\|udp\|'\
'scp\|sftp\|wget\|nc\|netcat'
Thanks
Do you want an alternative to the command head *.txt? If so, ls/find and xargs will help, but they cannot single out .txt files; they will read every file in the directory.
ls -1 | xargs head
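A find-based variant of the same idea (assumed to be run from the directory containing the files; the same caveat about matching every file applies):
find . -maxdepth 1 -type f | xargs head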
You can use the backtick (`) in the following way:
head `ls -1`
Backticks have a special meaning: everything you type between backticks is evaluated (executed) by the shell before the main command runs.
So the command does the following:
`ls -1` - expands to the file names
head - shows the start of each file listed by ls -1
If you need a glob that matches flag.txt but can use neither * nor the string flag, you can use fl[a]g.txt instead. Then, to print the entire file using head, use -c and pass it the size of the file:
head -c $(stat -c '%s' fl[a]g.txt) fl[a]g.txt
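stat -c '%s' is GNU coreutils syntax; on BSD/macOS the size would come from stat -f '%z' instead (a sketch under that assumption):
head -c $(stat -f '%z' fl[a]g.txt) fl[a]g.txt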
Another approach would be to use the shell to read the file:
while IFS= read -r c; do echo "$c"; done < fl[a]g.txt
You could also just use paste:
paste fl[a]g.txt

How to extract a string at end of line after a specific word

I have different locations, but they all follow a pattern:
some_text/some_text/some_text/log/some_text.text
The locations don't all start with the same thing, and they don't have the same number of subdirectories, but I am only interested in what comes after log/. I would like to extract the .text part.
Edited question:
I have a lot of locations:
/s/h/r/t/log/b.p
/t/j/u/f/e/log/k.h
/f/j/a/w/g/h/log/m.l
Just to show that I don't know what they are: the user enters these locations, so I have no idea what the user will enter. The only thing I know is that the path always contains log/ followed by the name of the file.
I would like to extract the type of the file, whatever string comes after the dot.
The only thing I know is that the path always contains log/ followed by the name of the file. I would like to extract the type of the file, whatever string comes after the dot.
Based on this requirement, this line works:
grep -o '[^.]*$' file
for your example, it outputs:
text
You can use bash built-in string operations. The example below will extract everything after the last dot from the input string.
$ var="some_text/some_text/some_text/log/some_text.text"
$ echo "${var##*.}"
text
Alternatively, use sed:
$ sed 's/.*\.//' <<< "$var"
text
Not the cleanest way, but this will work:
sed -e "s/.*log\///" | sed -e "s/\..*//"
These are the sed patterns for it, anyway; I'm not sure if you have that string in a variable, or if you're reading from a file, etc.
You could also grab that text and keep it in sed's hold space for later substitution. It all depends on exactly what you are trying to do.
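For example, piping one of the sample paths through it (note this keeps the base name and strips the extension, rather than extracting the extension):
$ echo '/s/h/r/t/log/b.p' | sed -e "s/.*log\///" -e "s/\..*//"
b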
Using awk
awk -F'.' '{print $NF}' file
Using sed
sed 's/.*\.//' file
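Both print whatever follows the last dot. For example, on one of the sample paths:
$ echo '/s/h/r/t/log/b.p' | awk -F'.' '{print $NF}'
p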
Running from the root of this structure:
/s/h/r/t/log/b.p
/t/j/u/f/e/log/k.h
/f/j/a/w/g/h/log/m.l
This seems to work; you can skip the echo command if you really just want the file types with no record of where they came from.
$ for DIR in *; do
> echo -n "$DIR "
> find "$DIR" -path "*/log/*" -exec basename {} \; | sed 's/.*\.//'
> done
f l
s p
t h

Remove a line from a csv file (bash, sed)

I'm looking for a way to remove lines within multiple csv files, in bash, using sed, awk, or anything appropriate, where the line ends in 0.
So there are multiple csv files, their format is:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLElong,60,0
EXAMPLEcon,120,6
EXAMPLEdev,60,0
EXAMPLErandom,30,6
So the file will be amended to:
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
A problem which I can see arising is distinguishing between double digits that end in zero (like the 10 in EXAMPLEbar,30,10) and 0 itself.
So any ideas?
Using your file, something like this?
$ sed '/,0$/d' test.txt
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
For this particular problem, sed is perfect, as the others have pointed out. However, awk is more flexible, i.e. you can filter on an arbitrary column:
awk -F, '$3!=0' test.csv
This will print the entire line if column 3 is not 0.
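To illustrate that flexibility, here is the same idea filtering on the second column of the sample data instead:
$ awk -F, '$2!=60' test.csv
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6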
Use sed to remove only the lines ending with ",0":
sed '/,0$/d'
You can also use awk:
$ awk -F"," '$NF!=0' file
EXAMPLEfoo,60,6
EXAMPLEbar,30,10
EXAMPLEcon,120,6
EXAMPLErandom,30,6
This just says: check whether the last field is 0 and don't print the line if it is.
This variant also tolerates whitespace between the comma and the 0:
sed '/,[ \t]*0$/d' file
I would tend to sed, but there is an egrep (or grep -E) solution too:
egrep -v ",0$" example.csv
