Loop bash shell script

I have a bash shell script that outputs iCal events using icalBuddy; it currently displays 2 events like:
Event1 Title
Event1 Date
Event2 Title
Event2 Date
I would like to have the script output like:
Event Title
Event Date
(wait 10 seconds) clear the Event Title and Event Date, then output the next Event Title and Event Date, (wait 10 seconds) then loop back to the first event and continue looping. I've tried running the command followed by sleep 10, then repeating the command with | head -n 4 | tail -n 2 appended, but then it only outputs the second event.
How can I do this? (my shell script is below) Thanks!
/usr/local/bin/icalBuddy -npn -nc -n -iep "title,datetime" -b "★ " -ps "| ★\n|" -po "title,datetime" -nrd -df "%a, %b %e" eventsToday+2 | cut -c 1-33

Unless I misunderstand you, this should do what you want:
while true
do
    clear
    command | pipeline | head -n 2
    sleep 10
    clear
    command | pipeline | head -n 4 | tail -n 2
    sleep 10
done
Where "command | pipeline" represents the icalBuddy | cut pipeline in your question.
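If the list of events may grow beyond two, a more general sketch (my own variation, assuming the pipeline always prints exactly two lines per event: title, then date) reads the output in pairs instead of hard-coding the head/tail offsets:
#!/bin/bash
# "command | pipeline" is again a placeholder for the icalBuddy | cut command above.
while true
do
    command | pipeline | while read -r title && read -r date
    do
        clear
        printf '%s\n%s\n' "$title" "$date"
        sleep 10
    done
done
Because the pairs are read inside the loop, adding a third or fourth event needs no change to the script.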

Bash script with a SQL query which returns multiple rows and 2 columns, and I want to loop through each row and store it in variables [duplicate]

I am working on a Bash script which has a SQL query in it, and the query returns multiple rows and 2 columns.
example output:
| db_name | used_% |
|---------|--------|
| row 1   | 29%    |
| row 2   | 45%    |
| row 3   | 60%    |
and so on
I want to loop through each row in the table and store db_name and its used_% as variables, and I want to run the script every 10 seconds.
I am at a point where my code is giving me the list of db_name and used_% values as a single variable.
My code:
#!/bin/bash
while :
do
    QUERY=$("select db_name, used_% from Table1;")
    DBNAME=$(echo "$QUERY" | cut -d"|" -f1)
    USED_PERCENT=$(echo "$QUERY" | cut -d"|" -f2)
    echo "PUTVAL \"$DBNAME/gauge-used_percent\" interval 10 N:${USED_PERCENT#"${USED_PERCENT%%[![:space:]]*}"}"
    sleep 10
done
I would really appreciate if someone can help me with the logic to loop over the db_names and its used_%.
Thanks!
I guess you are looking for something like
#!/bin/bash
sqlclient "select db_name, used_% from Table1;" |
    while read -r dbname used_pct; do
        echo "PUTVAL \"$dbname/gauge-used_percent\" interval 10 N:$used_pct"
    done
where sqlclient should be replaced with the command to actually run the SQL query, ideally without headers or other formatting in the output.
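Since you also want the script to repeat every 10 seconds, here is a sketch that wraps the same parsing loop in the while : / sleep 10 loop from your original script (sqlclient is still a placeholder, assumed to print two whitespace-separated columns and no header):
#!/bin/bash
while :
do
    # sqlclient stands in for whatever command actually runs the query.
    sqlclient "select db_name, used_% from Table1;" |
        while read -r dbname used_pct; do
            echo "PUTVAL \"$dbname/gauge-used_percent\" interval 10 N:$used_pct"
        done
    sleep 10
done
If the client prints |-delimited columns like the table above, put IFS='|' in front of read and trim the surrounding spaces from each field afterwards.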

How can I count and display only the words that are repeated more than once using unix commands?

I am trying to count and display only the words that are repeated more than once in a file. The basic idea is:
You are given a file with names and characters like commas, colons, slashes, etc.
Use the cut command to display only the first names in the file (other commands are also allowed).
Count and then display only the names repeated more than once.
I got to the point of counting and displaying all the names. However, I haven't found a way to count and display only those names repeated more than once.
Here is a section of the file:
user1:x:80:200:Mia,Spurs:/home/user1:/bin/bash
user2:x:80:200:Martha,Dalton:/home/user2:/bin/bash
user3:x:80:200:Lucy,Carlson:/home/user3:/bin/bash
user4:x:80:200:Carl,Bingo:/home/user4:/bin/bash
Here is what I have been able to do:
Daniel#Daniel-MacBook-Pro Files % cut -d ":" -f 5-5 file1 | cut -d "," -f 1-1 | sort -n | uniq -c
1 Mia
3 Martha
1 Lucy
1 Carl
1 Jessi
1 Joke
1 Jim
2 Race
1 Sem
1 Shirly
1 Susan
1 Tim
You can filter out the rows with count 1 with grep.
cut -d ":" -f 5 file1 | cut -d "," -f 1 | sort | uniq -c | grep -v '^ *1 '
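An equivalent filter (just an alternative sketch) is to let awk test uniq's count field, which avoids depending on the exact amount of leading padding:
cut -d ":" -f 5 file1 | cut -d "," -f 1 | sort | uniq -c | awk '$1 > 1'
If you only need the names and not the counts, replacing uniq -c with uniq -d prints just the lines that occur more than once.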

Print unique names of users logged on with finger

I'm trying to write a shell script that prints the full names of users logged on to a machine. The finger command gives me a list of users, but there are many duplicates. How can I loop through and print out only the unique ones?
Edit:
This is the format of what finger gives me:
xxxx XX of group XXX pts/59 1:00 Feb 13 16:38
xxxx XX of group XXX pts/71 1:11 Feb 13 16:27
xxxx XX of group XXX pts/105 1d Feb 12 15:22
xxxx YY of group YYY pts/102 2:19 Feb 13 14:13
xxxx ZZ of group ZZZ pts/42 2d Feb 7 12:11
I'm trying to extract the full name (i.e. whatever comes before 'of group' in column 2), so I would be using awk together with finger.
What you want is actually fairly difficult in a shell script. Here is, for example, my full output of finger(1):
Login Name TTY Idle Login Time Office Phone
martin Martin Tournoij *v0 1d Wed 14:11
martin Martin Tournoij pts/2 22 Wed 15:37
martin Martin Tournoij pts/5 41 Thu 23:16
martin Martin Tournoij pts/7 31 Thu 23:24
martin Martin Tournoij pts/8 Thu 23:29
You want the full name, but this may contain 1 space (as per my example), or it may just be 'Teller' (no space), or it may be 'Captain James T. Kirk' (3 spaces). So you can't just use the space as a delimiter. You could use the character position of 'TTY' in the header as an indicator, but that's not very elegant IMHO (especially in a shell script).
My solution is therefore slightly different: we get only the username from finger(1), then we get the full name from /etc/passwd:
#!/bin/sh
prev=""
for u in $(finger | tail +2 | cut -w -f1 | sort); do
    [ "$u" = "$prev" ] && continue
    # Anchor on "^$u:" so that e.g. "martin" does not also match "martina".
    echo "$u $(grep "^$u:" /etc/passwd | cut -d: -f5)"
    prev="$u"
done
Which gives me both the username and the full name:
martin Martin Tournoij
Obviously, you can also print just the real name (without the $u).
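A small variation on the same idea (just a sketch): let sort -u do the de-duplication and let awk look up the GECOS (full name) field with an exact match on the login field:
#!/bin/sh
for u in $(finger | tail +2 | cut -w -f1 | sort -u); do
    # $1 == user compares the login field exactly; $5 is the GECOS/full-name field.
    printf '%s %s\n' "$u" "$(awk -F: -v user="$u" '$1 == user { print $5 }' /etc/passwd)"
done
Like the original, this assumes a finger and cut -w that behave as on BSD/macOS.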
The sort and uniq coreutils commands can be used to remove duplicates.
finger | sort -u
This will remove all duplicate lines, but you will still see similar lines due to how verbose the finger command is. If you just want a list of usernames, you can filter the output further to be very specific.
finger | cut -d ' ' -f1 | sort -u
Now, you can take this one step further, and remove the "header/label" line printed out by the finger command.
finger | cut -d ' ' -f1 | sort -u | grep -iv login
Hope this helps.
Other possible solution:
finger | tail -n +2 | awk '{ print $1 }' | sort | uniq
tail -n +2 to omit the first line.
awk '{ print $1 }' to extract the first column.
sort to prepare input for uniq.
uniq removes duplicates.
If you want to iterate, use:
for user in $(finger | tail -n +2 | awk '{ print $1 }' | sort | uniq)
do
    echo "$user"
done
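As a minor simplification, sort -u can replace the separate sort and uniq steps:
finger | tail -n +2 | awk '{ print $1 }' | sort -u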
Could this be simpler?
No spaces or any other special characters to worry about!
finger -l | awk '/^Login/'
Edit: To remove the content after 'of group':
finger -l | awk '/^Login/' | sed 's/of group.*//g'
Output:
Login: xx Name: XX
Login: yy Name: YY
Login: zz Name: ZZ
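The sed step can also be folded into the awk call (just a sketch of an alternative), using sub() to drop everything from ' of group' onwards:
finger -l | awk '/^Login/ { sub(/ of group.*$/, ""); print }'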

List of last generated file on each day from 7 days list

I've a list of files in the following format:
Group_2012_01_06_041505.csv
Region_2012_01_06_041508.csv
Region_2012_01_06_070007.csv
XXXX_YYYY_MM_DD_HHMMSS.csv
What is the best way to compile a list of the last generated file for each day, per group, from the last 7 days?
Version that worked on HP-UX:
for d in 6 5 4 3 2 1 0
do
    DATES[d]=$(perl -e "use POSIX; print strftime '%Y_%m_%d', localtime(time - 86400 * $d);")
done
for group in `ls *.csv | cut -d_ -f1 | sort -u`
do
    CSV_FILES=$working_dir/*.csv
    if [ ! -f $CSV_FILES ]; then
        break # if no file exists do not attempt processing
    fi
    for d in "${DATES[@]}"
    do
        file_nm=$(ls ${group}_$d* 2>/dev/null | sort -r | head -1)
        if [ "$file_nm" != "" ]
        then
            : # Process "$file_nm" here
        fi
    done
done
You can explicitly iterate over the group/time combinations:
for d in {1..6}
do
    DATES[d]=`gdate +"%Y_%m_%d" -d "$d day ago"`
done
for group in `ls *csv | cut -d_ -f1 | sort -u`
do
    for d in "${DATES[@]}"
    do
        echo "$group $d: " `ls ${group}_$d* 2>/dev/null | sort -r | head -1`
    done
done
Which outputs the following for your example data set:
Group 2012_01_06: Group_2012_01_06_041505.csv
Group 2012_01_05:
Group 2012_01_04:
Group 2012_01_03:
Group 2012_01_02:
Group 2012_01_01:
Region 2012_01_06: Region_2012_01_06_070007.csv
Region 2012_01_05:
Region 2012_01_04:
Region 2012_01_03:
Region 2012_01_02:
Region 2012_01_01:
XXXX 2012_01_06:
XXXX 2012_01_05:
XXXX 2012_01_04:
XXXX 2012_01_03:
XXXX 2012_01_02:
XXXX 2012_01_01:
Note that Region_2012_01_06_041508.csv is not shown for Region 2012_01_06, as it is older than Region_2012_01_06_070007.csv.
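A different sketch of the same idea relies only on the sortable XXXX_YYYY_MM_DD_HHMMSS.csv naming instead of precomputed dates, and prints one line per group/day that actually has files (assuming group names contain no underscores):
#!/bin/bash
for group in $(ls *.csv | cut -d_ -f1 | sort -u)
do
    # Sorted filenames end with the newest file for each day, so keep the last
    # one seen per YYYY_MM_DD key.
    ls "${group}"_*.csv 2>/dev/null | sort |
        awk -F_ '{ key = $2 "_" $3 "_" $4; latest[key] = $0 }
                 END { for (k in latest) print latest[k] }' |
        sort
done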

Bash scripting: using sed and cut to output a specific format

I am working on a bash script using sed and cut that will take times input in various ways and output them in a specific format. Here is an example line:
timeinhour=$(cut -d" " -f2<<<"$line" | sed 's/p/ /' | sed 's/a/ /' | sed 's/am/ /' | sed 's/pm/ /' | sed 's/AM/ /' | sed 's/PM/ /' )
As you can see, I am just removing any trailing am or pm from a time entry that might be formatted in various ways, leaving only the numbers.
So I want this line to just spit out the hour of the day (timeinhour), i.e. "1000AM" becomes "10", as do "10a" and "10am".
The problem I am running into is the varying lengths of the time entries. If I tell sed or cut to remove the last two characters, "1000" will correctly output the hour I need ("10"), but using it on one that is already "10" obviously results in a blank output.
I have been experimenting with a line like this
sed 's/\(.*\)../\1/'
If anyone has any advice, I would appreciate it.
For example, this input:
1p
1032AM
419pm
1202a
would produce:
1
10
4
12
sed 's/[^0-9]//g;s/^[0-9]\{1,2\}$/&00/;s/^\(.*\)..$/\1/'
The steps:
1p -> 1 -> 100 -> 1
10a -> 10 -> 1000 -> 10
419pm -> 419 -> 419 -> 4
1202a -> 1202 -> 1202 -> 12
delete what is not a number
expand a 1- or 2-digit hour into 4-digit HHmm
ignore the last two characters (minutes)
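If you would rather avoid sed entirely, the same three steps can be done with bash parameter expansion; a sketch, assuming each value contains only digits plus an optional am/pm suffix:
#!/bin/bash
for t in 1p 1032AM 419pm 1202a
do
    digits=${t//[!0-9]/}                          # delete what is not a digit
    [ ${#digits} -le 2 ] && digits="${digits}00"  # expand bare hours into HHMM
    echo "${digits%??}"                           # drop the last two characters (minutes)
done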
Try:
timeinhour=$(cut -d" " -f2 <<<"$line" | sed 's/p/ /;s/a/ /;s/am/ /;s/pm/ /;s/AM/ /;s/PM/ /' | sed 's/\(.*\)../\1/') # Using your example.
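For completeness, a sketch that plugs the single sed expression from the answer above into the original timeinhour assignment ($line here is only a hypothetical example value):
line="meeting 419pm"   # hypothetical input; field 2 holds the time
timeinhour=$(cut -d" " -f2 <<<"$line" | sed 's/[^0-9]//g;s/^[0-9]\{1,2\}$/&00/;s/^\(.*\)..$/\1/')
echo "$timeinhour"     # prints 4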
