How can I read the content of an xterm or terminal, only by knowing its device number?
Similar to moving the mouse over the text.
Redirecting or cloning the terminal output to a file would be an option too, as long as it can be done without interacting with the commands executed in that terminal.
So nothing like 'command > myfile'.
Or is the only way to solve this a screenshot with OCR, or simulating mouse moves and clicks?
Edit: I'm looking for a solution that reads the content regardless of its origin, e.g. 'echo "to tty" > /dev/pts/1'
The script command may work for you.
"Script makes a typescript of everything printed on your terminal. It is useful for students who need a hardcopy record of an interactive session as proof of an assignment, as the typescript file can be printed out later" - man script
You can even pass script as the command when invoking xterm with -e:
ubuntu@ubuntu:~$ xterm -e script
ubuntu@ubuntu:~$ # A new xterm is started. uname is run, then exit
ubuntu@ubuntu:~$ # The output is captured to a file called typescript, by default:
ubuntu@ubuntu:~$ cat typescript
Script started on Tue 19 Nov 2013 06:00:07 PM PST
ubuntu@ubuntu:~$ uname
Linux
ubuntu@ubuntu:~$ exit
exit
Script done on Tue 19 Nov 2013 06:00:13 PM PST
ubuntu@ubuntu:~$
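If you want the transcript written to a specific file, or appended across runs, script also accepts a file name argument and (in the util-linux version) an -a flag to append; a minimal sketch, assuming util-linux script and an arbitrary log path:
ubuntu@ubuntu:~$ xterm -e script -a /tmp/xterm-session.log
ubuntu@ubuntu:~$ cat /tmp/xterm-session.log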
I'm encountering a strange problem when redirecting STDOUT and STDERR. The following works as expected:
$ gvim --version > /tmp/version.out
$ ls -l /tmp/version.out
-rw-r--r--. 1 blah blah 3419 Jun 27 17:28 /tmp/version.out
There are 3419 characters in the output file and when I look at the file, it contains what I expect.
However, it does not work as expected when I do the following:
$ gvim --version > /tmp/version.out 2> /tmp/version.err
$ ls -latr /tmp/version.*
-rw-r--r--. 1 blah blah 0 Jun 27 17:29 /tmp/version.out
-rw-r--r--. 1 blah blah 0 Jun 27 17:29 /tmp/version.err
Notice that both the .out and the .err files are zero length this time. I tried this with an ls command and it works as expected:
$ ls . /ZZZ > /tmp/ls.out 2> /tmp/ls.err
$ ls -l /tmp/ls.*
-rw-r--r--. 1 blah blah 50 Jun 27 17:45 /tmp/ls.err
-rw-r--r--. 1 blah blah 33 Jun 27 17:45 /tmp/ls.out
Here, the STDERR gets redirected properly:
$ cat /tmp/ls.err
ls: cannot access /ZZZ: No such file or directory
I did an strace on gvim --version and confirmed that it's trying to write the version info to STDOUT (fd 1). It shouldn't matter either way though since I'm trying to capture both STDOUT and STDERR.
What's going on here?
Congratulations, you just found a bug in gvim!
The correct procedure is to file a new issue on GitHub.
You should first try some other variations of the bug, to make debugging easier for the developers.
For example, redirecting only STDERR also triggers the bug: no output is written. The command also returns success (0), which is obviously a bug.
$ gvim --version 2> /tmp/version.err
$ echo $?
0
From a quick look at the code, one might search for the bug somewhere in the version printing, or anywhere in the generic --version argument processing that is not handled by GTK.
To answer your question
What is going on?
It's a programming error made by the developers of gvim, and I would not recommend putting effort into finding its root cause unless you are experienced with the Vim codebase or want to learn how Vim works. In that case, your best bet is to fork the repo and, after fixing it, submit a pull request, so that everyone can benefit from your work.
The specific condition that triggers the bug seems to be when standard error is not a terminal. When I tried redirecting standard error to another terminal device, the --version message was still printed to standard output.
ek@Io:~$ gvim --version 2>/dev/null
ek@Io:~$ tty
/dev/pts/1
ek@Io:~$ who
ek tty7 2017-07-05 02:48 (:0)
ek pts/1 2017-07-10 09:10 (192.168.10.3)
ek pts/3 2017-07-10 09:23 (192.168.10.3)
ek@Io:~$ gvim --version 2>/dev/pts/3
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Nov 24 2016 16:44:48)
....
Apparently Vim calls isatty(2) to help figure out if it should use a terminal at all. From a comment in gui.c:
Return TRUE if still starting up and there is no place to enter text.
For GTK and X11 we check if stderr is not a tty, which means we were (probably) started from the desktop. Also check stdin, "vim >& file"
does allow typing on stdin.
But the rest of that comment (i.e., the part I didn't highlight), and instances of isatty(2) throughout multiple files, make it not immediately obvious what ought to be changed.
Running grep -RF 'isatty(2)' on the source tree revealed that this is used in gui.c, message.c, and main.c. (That doesn't mean the bug is in all those files, of course.) You might also look at all the instances of isatty in the source code.
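For reference, the searches referred to above might look like this (a sketch; run from the top of the Vim source tree, and the src/ path reflects how the upstream repository is laid out):
grep -RFn 'isatty(2)' .
grep -Fn 'isatty(' src/*.c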
In any case, as jirislav says, this ought to be reported as a bug--even if a full explanation of the problem is not yet apparent. I am able to reproduce this with the current upstream sources (commit 163095f).
I have about 3000 individual commands that I need to execute on a system via PuTTY. I am doing this by copying ~100 of the commands at a time and pasting them into a PuTTY SSH session. It works; however, the issue is that PuTTY does not process them serially and the output gets garbled.
Is there a way to make PuTTY process each command, wait for it to return, and then process the next? The Windows command prompt does this, and I'm thinking there is a way to do so with PuTTY.
Yes, I know I could put this in a bash script, but due to circumstance outside my control, this has to be done using SSH and in a manner that can be monitored as we go and logged.
I do this all the time. Put your commands in a ( ) block, which runs them in a subshell and executes everything within it serially. I'm running PuTTY on Windows and connecting to Linux and AIX servers. Try it.
(
Command1
Command2
Command3
)
In practice, I might have a huge batch of many hundreds of statements I want to run, sitting in Notepad++ or wherever. So I copy them to the clipboard, and then in PuTTY:
(
paste in your wad here
)
EDIT: If you want to log the output from each of your statements individually, you might do something like this:
(
Command1 > /home/jon/command1output.txt
Command2 > /home/jon/command2output.txt
Command3 > /home/jon/command3output.txt
)
or if you just want one big stream of output, you could interleave separators for easier reading later:
(
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "[`date`] Now running Command1 ..."
Command1
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "[`date`] Now running Command2 ..."
Command2
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
echo "[`date`] Now running Command3 ..."
Command3
)
EDIT2: Another variation using an inline function. All of it is paste-able into PuTTY, with perfect serial execution, logging as command1:output1, command2:output2, ..., and it can even drive SQL*Plus.
(
function geniusMagic() {
echo " "
echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
date
echo "RUNNING COMMAND:"
echo " "
echo "$*"
echo " "
echo "OUTPUT:"
echo " "
sh -c "$*"
}
geniusMagic df -m /home
geniusMagic 'printf $RANDOM | sed "s/0//g"'
geniusMagic 'echo "select count(*)
FROM all_tables;
" | sqlplus -s scott/tiger'
)
Sample output:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Wed Jun 25 17:41:19 EDT 2014
RUNNING COMMAND:
df -m /home
OUTPUT:
Filesystem MB blocks Free %Used Iused %Iused Mounted on
/dev/hd1 1024.00 508.49 51% 3164 3% /home
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Wed Jun 25 17:41:19 EDT 2014
RUNNING COMMAND:
printf $RANDOM | sed "s/0//g"
OUTPUT:
2767
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Wed Jun 25 17:41:19 EDT 2014
RUNNING COMMAND:
echo "select count(*)
FROM all_tables;
" | sqlplus -s scott/tiger
OUTPUT:
COUNT(*)
----------
48
Just an idea here: PuTTY comes with a command-line tool called Plink. You could write a script on your Windows machine that creates a connection to the remote server with Plink, then parses your list of commands one at a time and sends them.
This should look exactly the same to the remote server (which I assume is what's doing the logging), while letting you have a bit more control than copy-pasting blocks of commands.
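A minimal sketch of that approach (the host name, user name, and file names are placeholders; Plink's -m option reads the remote command, possibly several lines of it, from a local file):
plink -ssh user@hostname -m commands.txt > session.log 2>&1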
I'm not sure why you could not use Plink, but you could make a batch file with Notepad++.
plink <hostname> -l <login_name> -pw <password> <command 1>
plink <hostname> -l <login_name> -pw <password> <command 2>
plink <hostname> -l <login_name> -pw <password> <command 3>
...
plink <hostname> -l <login_name> -pw <password> <command 3000>
Run the batch file:
filename.bat > log.txt 2>&1
Notepad++: http://notepad-plus-plus.org/
Plink: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
Batch files: http://www.robvanderwoude.com/batchfiles.php
Display & Redirect Output: http://www.robvanderwoude.com/battech_redirection.php
Maybe the answer you are looking for is here.
Here is a copy of the answer that I think may be interesting for you:
// Wait for backup setting prompt
Repeat Until %D1% = 1
Activate Window: "DAYMISYS1.qdx.com - PuTTY"
Mouse Move Window 12, 11 <------- Moves mouse to upper left corner to activate menu options
Mouse Right Button Click
Delay 0.1 Seconds
Text Type: o <------- Activates Copy All to Clipboard command
Delay 0.2 Seconds
If Clipboard Contains "or select a number to change a setting:" <------- Look for text of prompt that I am waiting for
Repeat Exit <------- If found, exit loop and continue macro
End If
Delay 1 Seconds <------- If prompt is not found, continue loop
Repeat End
In the PuTTY I have, I just paste into it and it works.
Open Notepad
Type out your list of commands
Highlight the commands in Notepad
Ctrl + C (or right-click, Copy)
Click on your PuTTY window
Right-click once where you type your commands
You should see all of the commands inserted into your entry box
Hit Enter
Note: I used this to enter multiple lines into cin prompts from a C++ program compiled on Linux. I don't know if it will work directly in the terminal.
Environment: Recent Ubuntu, non-standard packages are OK as long as they are not too exotic.
I have a data processor bash script that processes data from stdin:
$ cat data | process_stdin.sh
I can change the script.
I have a legacy data producer system (that I can not change) that logs in to a machine via SSH and calls the script, piping it data. Pseudocode:
foo@producer $ cat data | ssh foo@processor ./process_stdin.sh
The legacy system launches ./process_stdin.sh a zillion times per day.
I would like to keep ./process_stdin.sh running indefinitely at processor machine, to get rid of process launch overhead. Legacy producer will call some kind of wrapper that will somehow pipe the data to the actual processor process.
Is there a robust, Unix-way approach to do what I want with minimal code? I do not want to change ./process_stdin.sh (much); the full rewrite is already scheduled, but, alas, not soon enough. And I can not change the data producer.
A (not so) dirty hack could be the following:
As foo on the processor machine, create a fifo and pipe a tail -f of it into process_stdin.sh, possibly in an infinite loop:
foo@processor:~$ mkfifo process_fifo
foo@processor:~$ while true; do tail -f process_fifo | process_stdin.sh; done
Don't worry, at this point process_stdin.sh is just waiting for some data to arrive on the fifo process_fifo. The infinite loop is just there in case something goes wrong, so that the pipeline is relaunched.
Then you can send your data thus:
foo@producer:~$ cat data | ssh foo@processor "cat > process_fifo"
Hope this will give you some ideas!
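If the legacy producer keeps invoking ./process_stdin.sh by that exact path, a thin wrapper could take its place and simply forward stdin into the fifo; a sketch, assuming the real script has been moved aside and the fifo is the one created above:
#!/bin/sh
# ./process_stdin.sh (wrapper): forward this invocation's stdin to the long-running pipeline
exec cat > process_fifo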
flock does the job.
The same command is invoked 3 times in quick succession, but each invocation waits until the lock is free.
# flock /var/run/mylock -c 'sleep 5 && date' &
[1] 21623
# flock /var/run/mylock -c 'sleep 5 && date' &
[2] 21626
# flock /var/run/mylock -c 'sleep 5 && date' &
[3] 21627
# Fri Jan 6 12:09:14 UTC 2017
Fri Jan 6 12:09:19 UTC 2017
Fri Jan 6 12:09:24 UTC 2017
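Applied to the setup from the question, that could look like this (a sketch; the lock file path is arbitrary):
foo@producer $ cat data | ssh foo@processor 'flock /tmp/process_stdin.lock ./process_stdin.sh'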
The following should print "hello" (or some reminder) on my Linux command line at 9:00 AM today:
$ at 9:00AM
warning: commands will be executed using /bin/sh
at> echo "hello"
at> <EOT>
However, at the specified time, nothing happens.
I have an empty /etc/at.deny and no /etc/at.allow file, so there shouldn't be any problems with permissions to use the command. Also, writing to a file at 9:00 AM works:
$ at 9:00AM
at> echo "hello" > /home/mart/hello.txt
at> <EOT>
$ cat /home/mart/hello.txt
hello
All jobs are shown as scheduled; I just can't get any output to the terminal window (I'm on CrunchBang Linux with Terminator). Why? Do I need to somehow specify the window for that output?
Thanks for any help!
at runs commands from a daemon (atd), which doesn't have access to your terminal. Thus, output from the script isn't going to appear in your terminal (unless you pipe to the right tty in your command).
Instead, it does as man at says:
The user will be mailed standard error and standard output from his commands, if any.
You may be able to access these reports using mail if your machine is suitably configured.
If you want to have at write to your terminal, you can try piping the output to write, which writes a message to a user's TTY, or to wall if you want to write to every terminal connected to the system.
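For example, an at job along these lines should print to your terminal (a sketch; "mart" and pts/1 are assumptions about your user name and the tty reported by tty in that window):
$ tty
/dev/pts/1
$ at 9:00AM
at> echo "hello" | write mart pts/1
at> <EOT>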
Okay, nneonneo's explanations led me to using wall, which sends a message to all users. So setting oneself reminders in a terminal window can be done like this:
$ at 9:00AM
warning: commands will be executed using /bin/sh
at> echo "hello" | wall
at> <EOT>
So I'm using /usr/bin/time to measure my program, and I'm doing multiple runs of the same program so I can gather results. The problem with doing multiple executions and using /usr/bin/time at the same time is that it'll print out that giant chunk of information multiple times, and I don't want to scroll, copy, and paste my results into a text file. I'd rather have the command line do it for me.
Originally, I thought the command was something like:
/usr/bin/time -v sudo ./programname >> timeoutput.txt
But as far as I know, >> is used for stdout, so it won't work in this case.
If you just want to append the standard error of time (which is the handle it uses for outputting the time information) to a file, you can use:
( time sleep 1 ) 2>>timeoutput.txt
The 2>>... bit redirects standard error rather than standard output and the () ensures that the redirection applies to time rather than the command you're running.
Of course, that won't stop any error output from the program you're timing from showing up in the file; if you want to guarantee that, you need something like:
( time ( sleep 1 2>/dev/null ) ) 2>>timeoutput.txt
This will ensure that no error output from the command trickles out to interfere with the error output of time.
In the above examples, I've used sleep 1 for the command but you should just replace that with whatever command you're trying to run.
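If you stay with the external /usr/bin/time from your question, the subshell isn't needed at all, since /usr/bin/time is an ordinary command whose report also goes to its own standard error; a sketch using the command from the question (note this also captures any error output from the timed program, as discussed above):
/usr/bin/time -v sudo ./programname 2>> timeoutput.txt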
Actually, time and /usr/bin/time may well be different. Some shells have a built-in time facility; note the following from my ksh on Red Hat:
/usr/bin/time date
Thu Oct 31 12:57:04 EDT 2013
0.00user 0.00system 0:00.00elapsed ?%CPU (0avgtext+0avgdata 2864maxresident)k
0inputs+0outputs (0major+227minor)pagefaults 0swaps
time date
Thu Oct 31 12:57:11 EDT 2013
real 0m0.00s
user 0m0.00s
sys 0m0.00s
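If you want to be sure which one you are running, giving the full path always gets the external binary, and in bash you can also bypass the time keyword explicitly; a small sketch:
/usr/bin/time date     # always the external binary
command time date      # bash: skips the keyword, finds the binary on PATH
\time date             # quoting the name also bypasses the keyword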