I adore taskwarrior; it seems to be the only task-management app that lets you dial in what you decide is most urgent: not just due today, or overdue, but a combination of values.
I want to put the top urgency task in a bunch of scripts and widgets (tmux, top bar etc), but this is the best I can do:
task next limit:1 | head -n 4 | tail -n 1
Which displays the whole line, due dates, cruft and all, like this:
1 2d H Make widgets 16.5
I know about task _get and DOM access, but I can't work out how to use it, or how to combine it with a filter.
How can I just display the description of the top task? Thanks!
Make a new report named 'desrc' by adding the following lines to your .taskrc file:
report.desrc.columns = description
report.desrc.labels = Description
report.desrc.sort = urgency-
and then ask for the desrc list:
task rc.verbose: limit:1 desrc
The empty rc.verbose: override removes headlines and everything else, so there is no need for head and tail.
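Since you mentioned tmux: the same command drops straight into a status line. A minimal sketch for .tmux.conf (the 30-second refresh interval is just an example):

set -g status-interval 30
set -g status-right '#(task rc.verbose: limit:1 desrc)'

tmux re-runs the #() command every status-interval seconds, so the widget stays current.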
You can use a one-line script to get the ID of the top task; this assumes that taskwarrior does not change its output format.
task _get $(task next limit:1 | tail -n +4 | head -n 1 | sed 's/^ //' | cut -d ' ' -f1).description
You could certainly use cut to pick out individual fields of that line. It's a fast coreutil.
Nothing against creating a new report though, just wanted to mention it.
Awk would also work:
$ task _get $(task next limit:1 | awk 'NR==4{print $1}').description
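If you want something safe to call from several widgets, a small wrapper script helps. This is a minimal sketch that assumes the desrc report from the earlier answer and simply prints nothing when the task list is empty:

#!/bin/sh
# print the description of the most urgent task, if any
desc=$(task rc.verbose: limit:1 desrc 2>/dev/null)
[ -n "$desc" ] && printf '%s\n' "$desc"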
I am a complete beginner at shell scripting and I am trying to iterate through a set of JSON files and extract a certain field from each. Each JSON file has a "country":"xxx" field. In each JSON file there are around 10k occurrences of this field, all with the same country name, so I only need the first occurrence, which I can get using "-m 1".
I tried to use grep for this but could not figure out how to extract the whole field, including the country name, from each file at the first occurrence.
for FILE in *.json; do
    grep -o -a -m 1 -h '"country":"' "$FILE"
done
I tried adding another pipe with the pattern below, but it did not work:
| egrep -o '^[^"]+'
Actual Output:
"country":"
"country":"
"country":"
Desired Output:
"country:"romania"
"country:"united kingdom"
"country:"tajikistan"
but I need the whole thing. Any help would be great. Thanks
There is one general answer to the question "I only want the first occurrence", and that answer is:
... | head -n 1
This means: whatever you do, take the head (the first lines); the -n switch lets you say how many lines you want (one in this case).
The same can be done for the last occurrence(s), but then you use tail instead of head (it also takes the -n switch).
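For example:

$ printf 'one\ntwo\nthree\n' | head -n 1
one
$ printf 'one\ntwo\nthree\n' | tail -n 1
three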
After trying many things, I found the pattern I was looking for:
grep -Po '"country":.*?[^\\]",' $FILE | head -n 1;
I primarily use Linux Mint 17.1 and I like to use command line to get things done.
At the moment, I am working on organising a whole lot of family pictures and making them easy to view via a browser.
I have a directory with lots of images.
As I was filling the directory I made sure to keep the first four letters of each filename unique to a specific topic, e.g. car_, hse_, chl_, etc.
The rest of the filename keeps it unique.
There are some 120 different prefixes and I would like to create a list of the unique prefixes.
I have tried 'ls -1 | uniq -d -w 4' and it works, but it gives me the first filename of each prefix.
I just want the prefixes.
Fyi, I will use this list to generate an HTML page as a kind of catalogue.
Summary:
Convert car_001,car_002,car_003,dog_001,dog_002
to
car_,dog_
try this
$ ls -1 | cut -c1-3 | sort -u
uses the first 3 chars of the file names (use cut -c1-4 if you want the trailing underscore included, as in your car_/dog_ example).
Try something like
ls -1 | cut -d'_' -f1 | sort -u
where cut splits each name at _ and takes the first field; sort -u then sorts the prefixes and removes duplicates in one step.
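Since you plan to turn the list into an HTML catalogue, here is a minimal sketch that emits one heading per prefix. It assumes the prefix is everything up to the first underscore and that the filenames contain no newlines:

ls -1 | cut -d'_' -f1 | sort -u | while read -r prefix; do
    printf '<h2>%s_</h2>\n' "$prefix"
done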
I need to write a script that will show the 10 most memory-consuming processes using awk and the top -b command. I'd like the results in two columns: the name of the process in the first and the amount of memory it is using in the second. I've done some research but couldn't find anything that works for me. This is my first contact with programming ever and I have no idea how to start. Could anyone help me? Every hint would be appreciated.
You can use:
top -ab -n1 | awk 'NR>17{exit} NR>7'
top options are:
-a - to sort by memory
-b - batch mode
-n1 - Make top stop after one iteration
The awk 'NR>17{exit} NR>7' part prints lines 8 through 17 (the first 7 lines being the summary top prints).
And the answer is:
ps aux | sort -nk 4 | tail
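To get the two-column layout you asked for (process name, then memory), you can reorder the fields with awk. A sketch, assuming the default ps aux column layout where %MEM is field 4 and the command is field 11:

ps aux | sort -rnk 4 | head | awk '{printf "%-20s %s%%\n", $11, $4}'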
So I want to automate a manual task using shell scripting, but I'm a little lost as to how to parse the output of a few commands. I would be able to do this in other languages without a problem, so I'll just explain what I'm going for in pseudocode and provide an example of the command output I'm trying to parse.
Example of output:
Chg 2167467 on 2012/02/13 by user1234#filename 'description of submission'
What I need to parse out is '2167467'. So what I want to do is split on spaces and take element 1 to use in another command. The output of my next command looks like this:
Change 2167463 by user1234#filename on 2012/02/13 18:10:15
description of submission
Affected files ...
... //filepath/dir1/dir2/dir3/filename#2298 edit
I need to parse out '//filepath/dir1/dir2/dir3/filename#2298' and use that in another command. Again, what I would do is remove the blank lines from the output, grab the 4th line, and split on space. From there I would grab the 1st element from the split and use it in my next command.
How can I do this in shell scripting? Examples or a point to some tutorials would be great.
It's not clear whether you want to use the result from the first command when processing the second command. If that is true, then:
targString=$( cmd1 | awk '{print $2}')
command2 | sed -n "/${targString}/{n;n;n;s#.*[/][/]#//#;p;}"
Your example data has 2 different Chg values in it (2167467 and 2167463), so if you just want to process these outputs in 2 different ways, it's even simpler:
cmd1 | awk '{print $2}'
cmd2 | sed -n '/Change/{n;n;n;s#.*[/][/]#//#;p;}'
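For example, against the sample line from the question:

$ echo "Chg 2167467 on 2012/02/13 by user1234#filename" | awk '{print $2}'
2167467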
I hope this helps.
I'm not 100% clear on your question, but I would use awk.
http://www.cyberciti.biz/faq/bash-scripting-using-awk/
Your first variable would look something like this
temp="Chg 2167467 on 2012/02/13 by user1234#filename 'description of submission'"
To get the number you want do this:
temp=$(echo "$temp" | cut -f2 -d" ")
Save the output of your second command to a file, something like this:
command $temp > file.txt
To get what you want from the file you can run this:
temp=$(tail -1 file.txt | cut -f2 -d" ")
rm file.txt
The last block of code takes the last line of the file and extracts the second space-delimited field (the depot path in your example).
I am trying to debug some errors in a live Merb app. There are a lot of lines of errors scrolling by, but I just need to see the first one. I can use grep to select these lines and print them, but it exits as soon as it reaches the end of the file.
What I would like to do is use grep like the Shift+F mode in less, where it will keep the file open and report new matching lines as they are written to the log.
- or -
Is there a way to do this directly with less that I don't know about?
try this
tail -f dev.log | grep '^ERROR:'
the -f option to tail tells it to wait for more data when it hits EOF.
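One caveat: if you pipe the matches on to yet another command, grep may block-buffer its output; GNU grep's --line-buffered flag keeps the lines flowing immediately. A sketch (errors-only.log is just an example destination):

tail -f dev.log | grep --line-buffered '^ERROR:' | tee errors-only.log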
Can't you do this with watch and tail?
watch -n 30 "grep '^ERROR:' dev.log | tail -n 30"