Edit a particular string in a file based on another file - for-loop

Hello, I have a file called users. In that file I have a list of users, for example:
user1 user2 user3
Now I have another file called searches where there are lines containing a specific string of the form owner = user, for example:
owner = user1
random text
random text
owner = user15
random text
random text
owner = user2
So is it possible to find all the users based on the users file and rename those users to user@domain.com? For example:
owner = user1@domain.com
random text
random text
owner = user15
random text
random text
owner = user2@domain.com
Currently what I am doing is a long manual process like the one below:
awk '/owner = user1|owner = user2|owner = user3/{print $0 "@domain.com"; next}1' file
It actually does work, but for large user lists I have to spend a long time building this command.
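As a side note, that alternation can be generated from the users file instead of typed by hand; a minimal sketch, assuming the names sit whitespace-separated on one line and contain no regex metacharacters:
awk -v re="^owner = ($(tr -s ' \t' '|' < users))$" '$0 ~ re {print $0 "@domain.com"; next} 1' searches
Here tr turns the user list into user1|user2|user3, which awk then applies as a dynamic regex.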

Given:
$ head users owners
==> users <==
user1 user2 user3
==> owners <==
owner = user1
random text
random text
owner = user15
random text
random text
owner = user2
You can use this awk:
awk 'BEGIN{FS="[ \t]*=[ \t]*|[ \t]+"}
FNR==NR{for (i=1;i<=NF;i++) seen[$i]; next}
/^owner[ \t]*=/ && $2 in seen{sub($2, $2 "@domain.com")} 1' users owners
Prints:
owner = user1@domain.com
random text
random text
owner = user15
random text
random text
owner = user2@domain.com
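If you prefer sed, roughly the same effect can be had by generating one substitution command per user from the users file; a sketch, assuming GNU sed and user names free of regex metacharacters:
sed "$(tr -s ' \t' '\n' < users | sed 's|.*|s/^owner = &$/owner = &@domain.com/|')" owners
The inner sed turns each user name into a command like s/^owner = user1$/owner = user1@domain.com/; the anchored match is what leaves user15 alone.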

Related

Search for word in file then replace text in other file

I have two files. File1 contains a Username and Password like this:
[reader]
label = anylabel
protocol = cccam
device = some.url,13377
user = Username1
password = password1
password2 = password2
inactivitytimeout = 30
group = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
cccversion = 2.3.2
ccckeepalive = 1
and File2 contains a line like:
http://link:port/username/password/12345
Now I have this "code" to change the Username/Password in File2:
UsernameOLD=Username1
PasswordOLD=password1
UsernameNEW=Username2
PasswordNEW=password2
sed -i -e "s/\/$UsernameOLD\/$PasswordOLD/\/$UsernameNEW\/$PasswordNEW/" /etc/enigma2/file2.cfg
Now I have different Usernames which are always up to date in File1. I'm now searching for a solution to write the Username and the Password2 from File1 to variables and then set this new Username and Password in File2.
So, as a noob, the pseudocode should be something like:
find "username" & "password1" in file1
set "username" as $UsernameNEW and
"password1" as $PasswordNEW and
then just execute my sed command.
Can anyone assist? I guess I could use grep for this? But to be honest, I'm happy I got this sed command with variables to work :D
Here is something to get you started.
oscam.conf
[reader]
label = anylabel
protocol = cccam
device = some.url,13377
user = xxx1
password = password1
inactivitytimeout = 30
group = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
cccversion = 2.3.2
ccckeepalive = 1
[reader]
label = anylabel
protocol = cccam
device = test.url,13377
user = yyy1
password = password1
inactivitytimeout = 30
group = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
cccversion = 2.3.2
ccckeepalive = 1
For the password file I have changed the format a little, but it can be done with the original format as well.
passwd (format oldUser,newUser,oldPass,newPass)
xxx1,xxx2,passxxx1,passxxx2
yyy1,yyy2,passyyy1,passyyy2
Awk command
awk -F, 'FNR==NR {usr[$1]=$2;pass[$1]=$4;next} FNR!=NR{FS=" = "} /^user/ {t=$2;$2="= "usr[$2]} /^password/ {$2="= "pass[t]} 1' passwd oscam.reader
result
[reader]
label = anylabel
protocol = cccam
device = some.url,13377
user = xxx2
password = passxxx2
inactivitytimeout = 30
group = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
cccversion = 2.3.2
ccckeepalive = 1
[reader]
label = anylabel
protocol = cccam
device = test.url,13377
user = yyy2
password = passyyy2
inactivitytimeout = 30
group = 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20
cccversion = 2.3.2
ccckeepalive = 1
Quick & dirty -- loading the current credentials from File1 into the shell's positional parameters:
set -- $(grep -m1 -A1 '^user' File1)
sed -i -e "s#/${UsernameOLD}/${PasswordOLD}#/$3/$6#;T;q" /etc/enigma2/file2.cfg
How it works: grep spews out six space-separated items, which set turns into positional parameters $1 $2 $3 ... $6. We just need $3 and $6.
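If you'd rather use named variables than positional parameters, the same extraction can be done with awk; a sketch, assuming File1 keeps the key = value layout shown above and that it is the password2 entry you want:
UsernameNEW=$(awk -F' = ' '/^user /{print $2; exit}' File1)
PasswordNEW=$(awk -F' = ' '/^password2 /{print $2; exit}' File1)
sed -i "s#/${UsernameOLD}/${PasswordOLD}#/${UsernameNEW}/${PasswordNEW}#" /etc/enigma2/file2.cfg
The exit makes awk stop at the first match, which matters once File1 holds several [reader] blocks.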

Shell - sum of column on user

Basically, I have two columns. The first one stands for the users and the second one for the time they've spent on the server. So I'd like to sum up, for each user, how many minutes they spent on the server.
user1 21:03
user2 19:55
user3 20:09
user1 18:57
user1 19:09
user3 21:05
user4 19:57
Let's say I have this. I know how to split, but there's one problem. Whenever I do awk -F: '{print $1}' it prints the users and the first part of the time (the number before the :), and when I do awk -F: '{print $2}' it prints only the numbers after the :. After summing everything up, I'd like to get something like:
user1 59:09
user2 19:55
user3 41:14
user4 19:57
Here's a possible solution:
perl -ne '/^(\S+) (\d\d):(\d\d)$/ or next; $t{$1} += $2 * 60 + $3; END { printf "%s %02d:%02d\n", $_, $t{$_} / 60, $t{$_} % 60 for sort keys %t }'
Or with better formatting:
perl -ne '
    /^(\S+) (\d\d):(\d\d)$/ or next;
    $t{$1} += $2 * 60 + $3;
    END {
        printf "%s %02d:%02d\n", $_, $t{$_} / 60, $t{$_} % 60
            for sort keys %t;
    }
'
We loop over all input lines (-n). We make sure every line matches the pattern \S+ \d\d:\d\d (i.e. a sequence of 1 or more non-space characters, a space, two digits, a colon, two digits) or else we skip it.
We accumulate the number of seconds per user in the hash %t. The keys are the user names, the values are the numbers.
At the end we print the contents of %t in a nicely formatted way.
This is an awk solution:
awk '{a[$1]+=substr($2,1,2)*60+substr($2,4)} END {for(i in a) printf("%s %02d:%02d\n", i, a[i]/60, a[i]%60)}' 1.txt
user1 59:09
user2 19:55
user3 41:14
user4 19:57
First construct an array indexed by $1, whose value is the time converted to an integer via minutes * 60 + seconds (note that substr positions are 1-based, so substr($2,1,2) takes the first two characters):
{a[$1]+=substr($2,1,2)*60+substr($2,4)}
Then print the array in the desired format, which converts the integer back to mm:ss:
printf("%s %02d:%02d\n", i, a[i]/60, a[i]%60)
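Worked through for user1: 21:03, 18:57, and 19:09 become 1263, 1137, and 1149, which sum to 3549; 3549/60 = 59 remainder 9, hence 59:09.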
If you want to use awk (and assuming the duration is always hh:mm, though the field widths can be arbitrary), the following will do the trick:
{
    split($2, flds, ":")                 # Get hours and minutes.
    mins[$1] += flds[1] * 60 + flds[2]   # Add to initially zero array item.
}
END {
    for (key in mins) {                  # For each key in array.
        printf "%s %d:%02d\n",           # Output specific format.
            key,                         # Key, hours, and minutes.
            mins[key] / 60,
            mins[key] % 60
    }
}
That's the expanded, readable variant, the compressed one is shown in the following transcript, along with the output as expected:
pax> awk '{split($2,flds,":");mins[$1] += flds[1] * 60 + flds[2]}END{for(key in mins){printf "%s %d:%02d\n",key,mins[key]/60,mins[key]%60}}' testprog.in
user1 59:09
user2 19:55
user3 41:14
user4 19:57
Just keep in mind you haven't specified the input format for the case where a user entry exceeds 24 hours. If it just goes to something like 25:42, the script will work as is.
If it instead breaks out days (into something like 1:01:42 rather than 25:42), you'll need to adjust how the minutes are calculated. This can be done relatively easily (including handling minutes-only entries) by checking the flds array size, in the main body of the script (the non-END bit):
num = split($2, flds, ":")
if (num == 1) { add = flds[1] }
else if (num == 2) { add = flds[1] * 60 + flds[2] }
else { add = flds[1] * 1440 + flds[2] * 60 + flds[3] }
mins[$1] += add
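Dropped into the full program, that adjustment might look like this (still a sketch; it assumes the three-field form means days:hours:minutes, hence the factor of 1440):
awk '{
    num = split($2, flds, ":")                               # Count the ":"-separated fields.
    if (num == 1) { add = flds[1] }                          # mm only.
    else if (num == 2) { add = flds[1] * 60 + flds[2] }      # hh:mm.
    else { add = flds[1] * 1440 + flds[2] * 60 + flds[3] }   # d:hh:mm.
    mins[$1] += add
} END {
    for (key in mins)
        printf "%s %d:%02d\n", key, mins[key] / 60, mins[key] % 60
}' testprog.in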

Splitting a large text file into smaller files

I have a large text file and I want to split it into a few different smaller text files. Maybe someone has code for that?
Original file:
111
222
333
444
555
666
Then split it into 3 txt files:
File 1
111
222
File 2
333
444
File 3
555
666
If you want to split your original file into 3 pieces, without splitting lines, with the pieces going into file_01, file_02 and file_03, try this:
split --numeric-suffixes=1 -n l/3 original_file file_
With GNU awk:
awk 'NR%2!=0{print > ("File " ++c)} NR%2==0{print > ("File " c)}' original_file
or shorter:
awk 'NR%2!=0{++c} {print > ("File " c)}' file
% is the modulo operator. (The parentheses around the file name expression keep the string concatenation unambiguous across awk implementations.)
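For the fixed two-lines-per-piece case, plain split can do it too, without awk; a sketch (GNU coreutils; output names file_aa, file_ab, ...):
split -l 2 original_file file_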
edit: Question originally asked for a pythonic solution.
There are similar questions throughout the site, but here's a solution to your example:
# read ('r') the file ('text.txt'), and split at each line break ('\n')
textFile = open('text.txt', 'r').read().split('\n')
# set up a temporary array as a placeholder for the files (stored as strings),
# and a counter (i) as a pointer
temp = ['']
i = 0
# for each index and element in textFile
for ind, element in enumerate(textFile):
    # add the element to the placeholder
    temp[i] += element + '\n'
    # if the index is odd, and we are not at the end of the text file,
    # start a new string for the next file
    if ind % 2 and ind < len(textFile) - 1:
        temp.append('')
        i += 1
# go through each index and string of the temporary array
for ind, string in enumerate(temp):
    # write as a .txt file, named 'output' + the index of the array (output0, output1, etc.)
    with open('output' + str(ind) + '.txt', 'w') as output:
        output.write(string)

Extract text and evaluate in bash

I need some help getting a script up and running. Basically I have some data that comes from a command's output and I want to select some of it and evaluate it.
Example data is:
JSnow <jsnow@email.com> John Snow spotted 30/1/2015
BBaggins <bbaggins@email.com> Bilbo Baggins spotted 20/03/2015
Batman <batman@email.com> Batman spotted 09/09/2015
So far I have something along the lines of
# Define date to check
check=$(date -d "-90 days" "+%Y/%m/%d")
# Return user name
for user in $(command | awk '{print $1}')
do
# Return last logon date
$lastdate=(command | awk '{for(i=1;i<=NF;i++) if ($i==spotted) $(i+1)}')
# Evaluation date again current -90days
if $lastdate < $check; then
printf "$user not logged on for ages"
fi
done
I have a couple of problems, not least the fact that whilst I can get information from various places, I don't know how to pull it all together! I'm also guessing my date evaluation will be more complicated, but at this point that's another problem, and it's just there to give a better idea of my intentions. If anyone can explain the logical steps needed to achieve my goal, as well as propose a solution, that would be great. Thanks.
Every time you write a loop in shell just to manipulate text you have the wrong approach (see, for example, https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice). The general purpose text manipulation tool that comes on every UNIX installation is awk. This uses GNU awk for time functions:
$ cat tst.awk
BEGIN { check = systime() - (90 * 24 * 60 * 60) }
{
    user = $1
    date = gensub(/([0-9]+)\/([0-9]+)\/([0-9]+)/, "\\3 \\2 \\1 0 0 0", 1, $NF)
    secs = mktime(date)
    if (secs < check) {
        printf "%s not logged in for ages\n", user
    }
}
$ cat file
JSnow <jsnow@email.com> John Snow spotted 30/1/2015
BBaggins <bbaggins@email.com> Bilbo Baggins spotted 20/03/2015
Batman <batman@email.com> Batman spotted 09/09/2015
$ cat file | awk -f tst.awk
JSnow not logged in for ages
BBaggins not logged in for ages
Batman not logged in for ages
Replace cat file with command.
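The same thing can be run inline, without a separate script file (still GNU-awk-only, since systime(), mktime(), and gensub() are gawk extensions):
command | awk 'BEGIN { check = systime() - (90 * 24 * 60 * 60) }
{ secs = mktime(gensub(/([0-9]+)\/([0-9]+)\/([0-9]+)/, "\\3 \\2 \\1 0 0 0", 1, $NF))
  if (secs < check) printf "%s not logged in for ages\n", $1 }'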

Date sorting data and print only required date columns

Suppose I have data like this (the file is a text file):
Col1 Col2 Col3
user1 21:01:15 user1@gmail.com
user2 22:01:15 user2@gmail.com
user3 19:01:15 user3@gmail.com
user4 16:01:15 user4@gmail.com
What I want is to sort and print on the screen only the rows having a time between 19:01:15 and 22:01:15. Please help.
You can use the following awk:
$ awk '$2>"19:01:15" && $2<"22:01:15"' file
user1 21:01:15 user1@gmail.com
Note that it allows you to write the exact time range you prefer.
In case you want the time range to be inclusive ("less than or equal" and "greater than or equal"), do:
$ awk '$2>="19:01:15" && $2<="22:01:15"' file
user1 21:01:15 user1@gmail.com
user2 22:01:15 user2@gmail.com
user3 19:01:15 user3@gmail.com
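Since the question also mentions sorting, piping the result through sort on the time column covers that; a sketch:
awk '$2>="19:01:15" && $2<="22:01:15"' file | sort -k2,2
The zero-padded HH:MM:SS format is what makes both the string comparisons in awk and the lexical sort behave like time comparisons.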
