I want to run a mysql query under watch every N seconds, but would like to have the results at the bottom left of the terminal instead of the top left:
watch -n 120 "mysql_query | column -t"
Shows my results like so:
--------------------------
|xxxxxxxxxxx |
|xxxxxxxxxxx |
|xxxxxxxxxxx |
| |
| |
--------------------------
Whereas I would like to have them like so:
--------------------------
| |
| |
|xxxxxxxxxxx |
|xxxxxxxxxxx |
|xxxxxxxxxxx |
--------------------------
Suggestion?
I don't see a straightforward way to do this, but I managed to force it to work using the following approach. I haven't fully tested it, so I cannot guarantee that it will work in all situations.
Using this script:
#!/bin/bash
TERM_HEIGHT=`tput lines` # determine terminal height
WATCH_BANNER_HEIGHT=2 # account for the lines taken up by the header of "watch"
let VIS_LINES="TERM_HEIGHT - WATCH_BANNER_HEIGHT" # height of visible area
(yes " " | head -n $VIS_LINES; cat | head -n $VIS_LINES) | tail -n $VIS_LINES
Post-process the output of your command as it is called by watch, e.g. (assuming the script was saved as align_bottom, made executable, and stored somewhere within your $PATH):
watch -n 120 "mysql_query | column -t | align_bottom"
What the script does:
1. Determine the height (number of lines) of the terminal.
2. Calculate the visible area of the watch output.
3. Print blank lines to pad the output (pushing the output down).
4. Read in the output from stdin and trim it so we only show the top of the output if it extends beyond the screen. If you want to see the bottom of the output instead, simply remove the head command after cat.
5. tail the combined output of steps (3) and (4) so excess padding is removed and the final output fits snugly within watch.
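As a quick sanity check outside of watch, you can feed the script a few lines directly; on a 24-line terminal (an assumption just for this illustration, giving 22 visible lines) you would see 19 lines of padding followed by the input:

$ seq 3 | align_bottom

... 19 blank lines ...
1
2
3

The input ends up at the bottom of the visible area, which is exactly what watch then redraws on every interval.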
I have to admit this seems a little hackish, but hopefully it gets you closer to what you're trying to achieve.
Update:
It should also be possible to implement this as a function instead, so it can sit comfortably in your .bashrc.
function align_bottom() {
(( VIS = $(tput lines) - 2 )) # height of visible area
(yes " " | head -n $VIS; cat | head -n $VIS) | tail -n $VIS
}
typeset -fx align_bottom # !! make it callable from subshell
Usage would be the same:
watch -n 120 "mysql_query | column -t | align_bottom"
Note that watch runs the given command using sh -c; therefore, as Dennis pointed out in the comments, on systems that do not link /bin/sh to /bin/bash the function approach shown above will not work.
It is possible to make it work using:
watch -n 120 "mysql_query | column -t | bash -c align_bottom"
but for portability and usability, it's cleaner to simply use the shell script approach.
I don't know if watch can do that, but what I'd do is use another tool to have multiple terminals and resize the one in which watch is running according to my needs.
A couple of tools that can be useful for this are:
screen
byobu (screen with some enhancements)
terminator
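For example, with GNU screen the default key bindings are enough to carve out a small region at the bottom for watch (the 8-line height is just an illustration):

Ctrl-a S            # split the display horizontally
Ctrl-a Tab          # move focus to the new, lower region
Ctrl-a :resize 8    # shrink that region to 8 lines
Ctrl-a c            # open a window in it, then run:
watch -n 120 "mysql_query | column -t"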
I hope this helps.
I use the following function to quickly review items in my bash_aliases
showa () { /usr/bin/grep --color=always -i -a1 "$@" ~/.bash_aliases | grep -v '^\s*$' | less -FSRXc~ ; }
It works well except that for less than a pageful of matches, the first two matches spool off the top of the Terminal screen, leaving the final match as the first visible line, so I have to scroll up to see them every time. Which is a mild nuisance. It doesn't seem to do it when there are more than a full page of matches.
Ideally I want the first match to be at the top of the screen, not shoot off it.
Adding a pipe through more does not help, nor does any combination of alternative switches, e.g. -j on the less command. The terminal is a standard 40 rows, and no changes to Terminal settings appear to help either.
What might be a working solution? - grateful for help.
Located the issue. There was a spurious "1" on the -a switch.
This is the final solution, which also presents a few lines of context before each match.
showa () { /usr/bin/grep -B 5 --color=always -i -a "$@" ~/.bash_aliases | grep -v '^\s*$' | less -FSRXc~ ; }
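For example (gst is just a hypothetical alias name here):

showa gst

This prints every matching line plus up to five lines of context before it, strips blank lines, and only drops into less when the output no longer fits on one screen (that's the -F switch).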
I need to automate a PDF layout that is a variant of multiple pages per sheet. In this case, I don't need a simple two-pages-per-sheet solution; that's easy. I need to take handwritten notes side by side with the pages. So, here it goes:
Given a PDF, I'd like to print it with two pages per sheet, however, one page must be blank, like this:
+-------+-------+
| P.1 | white |
| | |
| | |
+-------+-------+
+-------+-------+
| P.2 | white |
| | |
| | |
+-------+-------+
etc.
Does anyone have an idea for a script that can automate this?
PS. I know how to do this in LaTeX, but I'd like to avoid the big gun...
If avoiding LaTeX does not mean avoiding usage of any tools that depend on it, then PDFJam (Debian package is texlive-extra-utils) could be of help, see q/a: Gluing (Imposition) PDF documents.
Otherwise you are probably better off with a little script that converts the .pdf pages to images and then merges each of them with a blank image; ImageMagick is able to do both of those things.
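A rough, untested sketch of that route (the file names are placeholders; pdftoppm comes from poppler-utils, identify/convert from ImageMagick, and rasterising of course loses the selectable text):

#!/bin/bash
# render each page of input.pdf as a 150 dpi PNG: page-1.png, page-2.png, ...
pdftoppm -r 150 -png input.pdf page
for f in page-*.png; do
    w=$(identify -format '%w' "$f")
    h=$(identify -format '%h' "$f")
    # extend the canvas to double width, keeping the page on the left half
    convert "$f" -gravity West -background white -extent "$((2*w))x${h}" "wide-$f"
done
# reassemble the widened pages into a single PDF
convert wide-page-*.png notes.pdf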
With Ubuntu:
# install packages
sudo apt-get install enscript ghostscript pdfjam pdftk
source="source.pdf"
output="output.pdf"
# create ps with one blank page
echo -n | enscript -p blank.ps
# convert the blank ps page to pdf
ps2pdf blank.ps blank.pdf
# get number of pages of $source
num=$(pdftk "$source" dump_data | grep -Po 'NumberOfPages: \K.*')
# create string with new page numbers
for ((i=1;i<=$num;i++)); do pages="$pages A$i-$i B1-1"; done
# create pdf with white pages
pdftk A="$source" B=blank.pdf cat $pages output tmp.pdf
# create pdf with two pages on one side
pdfjam tmp.pdf --nup 2x1 --landscape --outfile "$output"
# clean up
rm blank.ps blank.pdf tmp.pdf
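To see what the loop builds: for a three-page source, the $pages string expands to

A1-1 B1-1 A2-2 B1-1 A3-3 B1-1

so the pdftk cat step interleaves every source page with the blank page, and pdfjam then places each pair side by side on one landscape sheet.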
I have a solution which does not print exactly the layout which you want, but prints the page centered in the landscape sheet, like so:
+---+-------+----+
| | P.1 | |
| | | |
| | | |
+---+-------+----+
+---+-------+----+
| | P.2 | |
| | | |
| | | |
+---+-------+----+
If your goal is to create free space for hand annotations, this layout might be better, since it lets you write the annotations closer to the printed text.
The following script relies on pdfjam, which uses LaTeX under the hood. Adding a few more command-line arguments for pdfjam would probably get exactly what you are looking for.
#!/bin/bash
if [ "$#" -ne 1 ]; then
echo "usage: $0 PDF_filename..."
echo
echo "This script takes a PDF file as command line arguments,"
echo "and generates a new, landscape-formatted PDF file, where every "
echo "page has very large margins which may be useful for editorial notes"
echo
echo "Requires: pdfjam, which is installed by the apt-get package texlive-extra-utils"
exit 1
fi
command -v pdfjam >/dev/null 2>&1 || { echo >&2 "I require pdfjam but it's not installed. Do an apt install of texlive-extra-utils to get it on Ubuntu. Aborting."; exit 1; }
pdfjam --batch --nup 1x1 --suffix widemargin --landscape "$@"
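Untested, but since pdfjam passes unrecognised long options straight through to the underlying pdfpages package, adding an offset along these lines should push the content to the left half of the sheet and leave the right half free (the -7cm value is a guess and will need tuning for your paper size; the handling of the negative value may also need adjusting):

pdfjam --batch --nup 1x1 --suffix widemargin --landscape --offset '-7cm 0cm' "$@"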
I've changed my data source in a bash pipe from cat ${file} to cat file_${part_number}, because preprocessing was causing ${file} to be truncated at 2GB; splitting the output eliminated the preprocessing issues. However, while testing this change, I was unable to work out how to get Bash to keep behaving the same for some basic operations I was using to test the pipeline.
My original pipeline is:
cat giantfile.json | jq -c '.' | python postprocessor.py
With the original pipeline, if I'm testing changes to postprocessor.py or the preprocessor and I want to just test my changes with a couple of items from giantfile.json I can just use head and tail. Like so:
cat giantfile.json | head -n 2 - | jq -c '.' | python postprocessor.py
cat giantfile.json | tail -n 3 - | jq -c '.' | python postprocessor.py
The new pipeline that fixes the preprocessor issues is:
cat file_*.json | jq -c '.' | python postprocessor.py
This works fine, since every file gets output eventually. However, I don't want to wait 5-10 minutes for each test. I tried to test with the first 2 lines of input using head.
cat file_*.json | head -n 2 - | jq -c '.' | python postprocessor.py
Bash sits there working far longer than it should, so I try:
cat file_*.json | head -n 2 - | jq -c '.'
And my problem is clear. Bash is outputting the content of all the files as if head were not even there, because each file now has 1 line of data in it. I've never needed to do this with bash before, and I'm flummoxed.
Why does Bash behave this way, and how do I rewrite my little bash command pipeline to work the way it used to, allowing me to select the first/last n lines of data to work with for testing?
My guess is that when you split the json up into individual files, you managed to remove the newline character from the end of each file, with the consequence that the concatenated stream (cat file_*.json) is really only one line in total, because cat will not insert newlines between the files it is concatenating.
If the files were really one line each with a terminating newline character, piping through head -n 2 should work fine.
You can check this hypothesis with wc, since that utility counts newline characters rather than lines. If it reports that the files have 0 lines, then you need to fix your preprocessing.
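If that turns out to be the case, appending the missing newlines is enough to make head behave again. A minimal sketch, assuming GNU sed and that each file really holds exactly one JSON document:

# a file with a single unterminated line reports 0 here
wc -l file_*.json

# append a final newline to each file only if it is missing (GNU sed idiom)
sed -i -e '$a\' file_*.json

cat file_*.json | head -n 2 | jq -c '.' | python postprocessor.py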
Suppose I've got a giant command
echo "start string `complexcommand -with -many args | cut -d ' ' -moreargs | sed 's/you/get/g' | grep -v "the idea" | xargs echo` ending string" | program | less -S
It produces output of several hundred lines of many thousand characters in length.
less handles scrolling vertically quite well, as that's what it is used for most of the time, but scrolling left and right is very CPU taxing according to top and I am not aware of any "page-left" or "page-right" style commands to go faster.
So I'm hoping that something like zsh's built-in pager could handle this task faster, but I'm having trouble figuring out the command to use it, since it takes a file as input. Is there a way to make a one-liner use the pager rather than having to dump the output to a file first?
Or, if anybody has suggestions for better editors: I might try using vim next.
If you want to invoke zsh's pager, use some-complex-pipeline | zsh -c '< /dev/fd/0'. The /dev/fd/0 file is a device that represents the current process's standard input stream.
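A minimal way to try it on a pipeline, assuming a reasonably default zsh setup: the pager used here is whatever the READNULLCMD parameter names (typically more), and you can point it at less for this one invocation:

complexcommand -with -many args | program | READNULLCMD=less zsh -c '< /dev/fd/0'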
Are there versions (drop-in replacements) of the standard shell utilities that display (partial) results updated on the fly (perhaps to stderr)?
Say I want to do this:
du ~/* -s | sort -rn | head
At first absolutely nothing happens until du is done. I would however like to see partial results, i.e. I want sort to show the data it has already seen. This way I can quickly see if something is wrong with the output and correct it, like when running grep.
Same thing with this:
du ~/* -s | wc
I would like it to update on the fly.
Here is an ugly work-around showing kinda what I want. (But preferably it shouldn't unnecessarily consume the whole screen, like with du below.)
du ~/* -s > /tmp/duout | watch -n .1 sort -rn /tmp/duout
du ~/* -s > /tmp/duout | watch -n .1 wc /tmp/duout
However, I'd much prefer it if I could just do like:
du ~/* -s | isort -rn
There are lots of shell utilities that display active results. A good example would be the top program.
The trouble is, these kinds of tools do NOT lend themselves to the usual Linux methodology of input and output. Sort is meant to take an input stream, sort it, and output it. You can then take that output and do something else with it. If it output incremental versions, it would be useless for further processing.
If you have specific needs to see partial data, you will have to hack them together yourself. It's diametrically opposed to the normal workflow and a massive waste of computing resources. Such exercises are left up to the reader :)
If you have another specific utility and wonder if there would be an alternative display system, feel free to ask. As for the ones you mention, particularly sort, they don't exist. A live display of output in sort would slow results down by several orders of magnitude and nobody wants to watch output at the cost of waiting ten or a hundred or a thousand times longer for the final result.
You can insert tee /dev/tty into a pipe sequence to print intermediate results. tee duplicates stdin, sending output both to stdout and to any files specified on the command-line. You could use this trick to view du's output while simultaneously passing it to sort:
du ~/* -s | tee /dev/tty | sort -rn | head
The intermediate output will collide with sort's output. You could work around this with various shell tricks; for example, by sending sort's output to a pager:
du ~/* -s | tee /dev/tty | sort -rn | less
The problem is not the utilities, and it is not really the shell either: the shell already starts every process in a pipe chain at the same time, and the utilities all stream input just fine. Run a recursive grep to prove that to yourself.
In the normal case of sorting data, you have to read all the data before you can print the first line of output, right? And, as you mention, the same goes for du -s (the -s means summarize, so it has to collate all the data too). Take out the -s and you get unsummarized output right away.
So you're always going to have to wait for those sorts of things. The one thing you can do with your first example is to add a tee into the data stream
du ~/* -s | tee /dev/tty | sort -rn | head
or even
du ~/* -s | tee /dev/tty8 | sort -rn | tee /dev/tty12 | head
where tty8 and tty12 are separate terminal windows, and you have found the correct ttyN to substitute by using tty in the shell window.
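Note that on most modern terminal emulators the device will be a pseudo-terminal such as /dev/pts/3 rather than /dev/ttyN; either way, whatever path tty prints in the target window is the one to use (the /dev/pts/3 below is just an example):

# in the second terminal window:
tty
/dev/pts/3

# back in the first window:
du ~/* -s | tee /dev/pts/3 | sort -rn | head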
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.