Delayed paging using `less` - bash

I have a script which generates a huge amount of data and dumps it to stdout. Using less it is easy to scroll through it:
./myscript | less
If myscript is running and providing data continuously, a recent version of less can update the view automatically (provided the stdout buffer is flushed). This simple Python script (pipe.py below) shows this case:
import time
import os
import sys

for i in range(1000):
    stream = sys.stdout.buffer
    msg = str(i) + "\n"
    stream.write(msg.encode())
    stream.flush()
    if (i + 0) % 100 == 0:
        time.sleep(5)
python3 pipe.py | less
What I want to achieve is that the script only provides the next page of data once the user has scrolled to the end in less (instead of the automatic update that the script above produces).
Is it possible to do this? (Note that the Python script is just an example; the question is much more about the shell part.)

Related

How to deal with shell commands that never stop

Here is the case:
There is an app called "termux" on Android which allows me to use a terminal on Android, and one of its add-ons exposes Android APIs like sensors, TTS engines, etc.
I wanted to make a script in Ruby using this app, specifically this API, but there is a catch.
The script:
require('json')
JSON.parse(%x'termux-sensor -s "BMI160 Gyro" -n 1')
-s = name, or part of the name, of the sensor
-n = number of times the command will run
returns me:
{
  "BMI160 Gyroscope" => {
    "values" => [
      -0.03...,
      0.00...,
      1.54...
    ]
  }
}
I didn't copy and paste the exact values, but that's not the point. The point is that this command takes almost a full second to load, but there is a way to "make it faster".
If I use the argument "-d" and don't use "-n", I can specify a delay in milliseconds between readings being sent to STDOUT. It still takes a full second to load, but once it has loaded, the delay works like a charm.
And since I didn't specify an "-n" number of times, it never stops, and there is the problem.
How can I retrieve the data continuously in Ruby?
I thought about using another thread so it won't block my program, but how can I tell Ruby to return the last X lines of STDOUT from a command that hasn't stopped and never will, given that %x'command' in Ruby waits for the command to return?
If I understood correctly, you need to connect to the stdout of a long-running process.
See if this works for your scenario, using IO.popen:
# By running this program, opening another terminal,
# and appending some data to data.txt, e.g.
#   $ date >> data.txt
# you will see it appear in this program's output.
io_obj = IO.popen('tail -f ./data.txt')
while !io_obj.eof?
  puts io_obj.readline
end
I found a built-in module called PTY that saved me. Its spawn method, plus some thread management, helped me keep a variable updated with the command's values each time the command output new bytes.
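A rough Python sketch of the same pattern described above (the answer itself uses Ruby's PTY.spawn; the helper name follow, the module-level latest_line variable, and the reuse of the tail -f ./data.txt command from the earlier snippet are purely illustrative):

import subprocess
import threading

latest_line = None

def follow(cmd):
    """Keep latest_line updated as a never-ending command produces output."""
    global latest_line
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:             # blocks only inside this background thread
        latest_line = line.rstrip("\n")  # main program reads latest_line whenever it wants

threading.Thread(target=follow, args=(["tail", "-f", "./data.txt"],), daemon=True).start()

The main program stays free to do other work and simply inspects latest_line when it needs the most recent reading.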

How to change Abaqus solver executable file name

I wish to compare two parallel runs of Abaqus simulations with a UMAT coded in Fortran. It seems that I am able to select the correct standard.exe associated with each run, but I won't always be this lucky. This prompted me to ask if there is a way to call the Abaqus job and change the name of standard.exe to something like standard1.exe to differentiate between the runs. I checked the Abaqus help but it doesn't seem like there is an option through the command line.
There is a lot of room for improvement for job/analysis submission in Abaqus...
Anyway, feel free to have a look at my GitHub repo; I am trying to fill in what's lacking in Abaqus when submitting jobs. Let me know if you have any questions.
Or you can use this code to identify the right process identifier (PID) of the job that you are running. You can then kill the process associated with this ID.
import psutil

processesList = psutil.pids()
jobname = ''
print('\n\nStart')
for proc in processesList:
    try:
        p = psutil.Process(proc)
        if (p.name() == 'standard.exe' or p.name() == 'explicit.exe' or p.name() == 'pre.exe' or p.name() == 'explicit_dp.exe'):
            i = 0
            jobCpus = '1'
            jobGpus = '0'
            sameJob = False
            print('\nPID: %s' % proc)
            for line in p.cmdline():
                if line == '-job':
                    if jobname == p.cmdline()[i + 1]:
                        sameJob = True
                    else:
                        sameJob = False
                    jobname = p.cmdline()[i + 1]
                    print('Job Name: %s' % jobname)
                elif line == '-indir':
                    jobdir = p.cmdline()[i + 1]
                    print('Job Dir: %s' % jobdir)
                elif line == '-cpus':
                    jobCpus = p.cmdline()[i + 1]
                    print('Cpus number: %s' % jobCpus)
                elif line == '-gpus':
                    jobGpus = p.cmdline()[i + 1]
                    print('Gpus number: %s' % jobGpus)
                i += 1
    except:
        pass
print('\nEnd\n\n')
In order to kill a process, you can use this command:
import os, signal
os.kill(int(pid), signal.SIGTERM)  # pid identified with the psutil scan above

Use Bash's select from within Python

The idea of the following was to use Bash's select from Python, i.e. use Bash's select to get input from the user, communicate with the Bash script to get the user's selection, and use it afterwards in the Python code. Please tell me if it is at least possible.
I have the following simple Bash script:
#!/bin/bash -x
function select_target {
    target_list=("Target1" "Target2" "Target3")
    PS3="Select Target: "
    select target in "${target_list[@]}"; do
        break
    done
    echo $target
}
select_target
It works standalone.
Now I tried to call it from Python like this:
import tempfile
import subprocess

select_target_sh_func = """
#!/bin/bash
function select_target {
    target_list=(%s)
    PS3="Select Target: "
    select target in "${target_list[@]}"; do
        break
    done
    echo $target
}
select_target
"""

target_list = ["Target1", "Target2", "Target3"]

with tempfile.NamedTemporaryFile() as temp:
    temp.write(select_target_sh_func % ' '.join(map(lambda s: '"%s"' % str(s), target_list)))
    subprocess.call(['chmod', '0777', temp.name])
    sh_proc = subprocess.Popen(["bash", temp.name], stdout=subprocess.PIPE)
    (output, err) = sh_proc.communicate()
    exit_code = sh_proc.wait()
    print output
It does nothing. No output, no selection.
I'm using macOS High Sierra, PyCharm and Python 2.7.
PS
After some reading and experimenting I ended up with the following:
with tempfile.NamedTemporaryFile() as temp:
    temp.write(select_target_sh_func % ' '.join(map(lambda s: '"%s"' % str(s), target_list)))
    temp.flush()
    # bash: /var/folders/jm/4j4mq_w52bx2l5qwg4gt44580000gn/T/tmp00laDV: Permission denied
    subprocess.call(['chmod', '0500', temp.name])
    sh_proc = subprocess.Popen(["bash", "-c", temp.name], stdout=subprocess.PIPE)
    (output, err) = sh_proc.communicate()
    exit_code = sh_proc.wait()
    print output
It behaves as I expected it would: the user is able to select the 'target' by just typing the number. My mistake was that I forgot to flush.
PPS
The solution works on macOS High Sierra; sadly it does not on Debian Jessie, which complains:
bash: /tmp/tmpdTv4hp: Text file busy
I believe it is because `with tempfile.NamedTemporaryFile()` keeps the temp file open, and this somehow prevents Bash from working with it. This renders the whole idea useless.
Python is sitting between your terminal or console and the (noninteractive!) Bash process you are starting. Furthermore, you are not directing the standard error pipe anywhere, so subprocess.communicate() cannot capture it (and if it could, you would not be able to see the script's menu, which select prints to standard error).
Running an interactive process programmatically is a nontrivial scenario; you'll want to look at pexpect, or just implement your own select command in Python. I suspect the latter is going to turn out to be the easiest solution (trivially so if you can find an existing library).
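For the second suggestion, here is a minimal sketch of a select-style menu implemented directly in Python (the helper name select_option is made up; Python 3 syntax, so on Python 2.7 input() would be raw_input()):

def select_option(options, prompt="Select Target: "):
    # Print a numbered menu, like Bash's select.
    for idx, opt in enumerate(options, start=1):
        print("%d) %s" % (idx, opt))
    # Re-prompt until a valid number is entered, as select does.
    while True:
        choice = input(prompt)
        if choice.isdigit() and 1 <= int(choice) <= len(options):
            return options[int(choice) - 1]

target = select_option(["Target1", "Target2", "Target3"])
print(target)

This keeps everything in a single process, so there is no subprocess, temp file, or pipe plumbing to get wrong.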

Python multiprocessing stdin input

All code was written and tested on Python 3.4, Windows 7.
I was designing a console app and needed to use stdin from the command line (Windows) to issue commands and change the operating mode of the program. The program depends on multiprocessing to spread CPU-bound loads across multiple processors.
I am using stdout to monitor the status and some basic return information, and stdin to issue commands to load different sub-processes based on the returned console information.
This is where I found a problem. I could not get the multiprocessing module to accept stdin input, but stdout was working just fine. I then found some help on Stack Overflow, tested it, and found that with the threading module this all works great, except that all output to stdout is paused until stdin is cycled, due to the GIL while stdin blocks.
I will say I have been successful with a workaround implemented with msvcrt.kbhit(). However, I can't help but wonder if there is some sort of bug in the multiprocessing module that keeps stdin from reading any data. I tried numerous approaches and nothing worked with multiprocessing. I even attempted to use queues, but I did not try pools or any other multiprocessing features.
I also did not try this on my Linux machine, since I was focused on getting it to work on Windows.
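For reference, a hedged sketch of the kind of msvcrt.kbhit() workaround mentioned above (Windows only; the poll_command helper and its line buffering are illustrative, not the author's actual code):

import msvcrt

_buf = []

def poll_command():
    """Return a complete command line once Enter has been pressed, otherwise None."""
    while msvcrt.kbhit():              # only read while keystrokes are actually waiting
        ch = msvcrt.getwche()          # read one character, echoing it to the console
        if ch in ("\r", "\n"):
            line = "".join(_buf)
            del _buf[:]
            return line
        _buf.append(ch)
    return None                        # no complete command yet; the caller keeps working

A main loop can call poll_command() between status updates, so writing to stdout never has to wait on a blocking stdin read.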
Here is simplified test code that does not function as intended (reminder: this was written on Python 3.4, Windows 7):
import sys
import time
from multiprocessing import Process

def function1():
    while True:
        print("Function 1")
        time.sleep(1.33)

def function2():
    while True:
        print("Function 2")
        c = sys.stdin.read(1)  # Does not appear to be waiting for read before continuing loop.
        sys.stdout.write(c)    # nothing in 'c'
        sys.stdout.write(".")  # checking to see if it works at all.
        print(str(c))          # trying something else, still nothing in 'c'
        time.sleep(1.66)

if __name__ == "__main__":
    p1 = Process(target=function1)
    p2 = Process(target=function2)
    p1.start()
    p2.start()
Hopefully someone can shed light on whether this is intended functionality, whether I implemented it incorrectly, or provide some other useful bit of information.
Thanks.
When you take a look at Python's implementation of multiprocessing.Process._bootstrap(), you will see this:
if sys.stdin is not None:
    try:
        sys.stdin.close()
        sys.stdin = open(os.devnull)
    except (OSError, ValueError):
        pass
You can also confirm this by using:
>>> import sys
>>> import multiprocessing
>>> def func():
... print(sys.stdin)
...
>>> p = multiprocessing.Process(target=func)
>>> p.start()
>>> <_io.TextIOWrapper name='/dev/null' mode='r' encoding='UTF-8'>
And reading from os.devnull immediately returns an empty result:
>>> import os
>>> f = open(os.devnull)
>>> f.read(1)
''
You can work around this by using open(0):
file is either a string or bytes object giving the pathname (absolute or relative to the current working directory) of the file to be opened or an integer file descriptor of the file to be wrapped. (If a file descriptor is given, it is closed when the returned I/O object is closed, unless closefd is set to False.)
And "0 file descriptor":
File descriptors are small integers corresponding to a file that has been opened by the current process. For example, standard input is usually file descriptor 0, standard output is 1, and standard error is 2:
>>> def func():
... sys.stdin = open(0)
... print(sys.stdin)
... c = sys.stdin.read(1)
... print('Got', c)
...
>>> multiprocessing.Process(target=func).start()
>>> <_io.TextIOWrapper name=0 mode='r' encoding='UTF-8'>
Got a

ipython notebook: how to parallelize external script

I'm trying to use parallel computing from the IPython parallel library, but I have little knowledge of it and I find the docs difficult to read for someone who knows nothing about parallel computing.
Funnily enough, all the tutorials I found just reuse the example from the docs, with the same explanation, which from my point of view is useless.
Basically, what I'd like to do is run a few scripts in the background so they are executed at the same time. In bash it would be something like:
for my_file in $(cat list_file); do
    python pgm.py "$my_file" &
done
But the bash interpreter of the IPython notebook doesn't handle background mode.
It seemed the solution was to use IPython's parallel library.
I tried:
from IPython.parallel import Client
rc = Client()
rc.block = True
dview = rc[:2] # I take only 2 engines
But then I'm stuck: I don't know how to run the same script or program two (or more) times at the same time.
Thanks.
One year later, I eventually managed to get what I wanted.
1) Create a function with what you want to do on the different CPUs. Here it is just calling a script from bash with the ! magic IPython command. I guess it would also work with the call() function.
def my_func(my_file):
    !python pgm.py {my_file}
Don't forget the {} when using !.
Note also that the path to my_file should be absolute, since the cluster engines run where you started the notebook (when doing jupyter notebook or ipython notebook), which is not necessarily where you are.
2) Start your IPython notebook cluster with the number of CPUs you want.
Wait a couple of seconds and execute the following cell:
from IPython import parallel
rc = parallel.Client()
view = rc.load_balanced_view()
3) Get a list of the files you want to process:
files = list_of_files
4) Map your function asynchronously over all your files onto the view of the engines you just created (not sure of the wording):
r = view.map_async(my_func, files)
While it's running you can do something else on the notebook (it runs in the "background"!). You can also call r.wait_interactive(), which interactively reports the number of files processed, the time spent so far, and the number of files left. This will prevent you from running other cells (but you can interrupt it).
And if you have more files than engines, no worries: they will be processed as soon as an engine finishes with one file.
Hope this will help others!
This tutorial might be of some help:
http://nbviewer.ipython.org/github/minrk/IPython-parallel-tutorial/blob/master/Index.ipynb
Note also that I still have IPython 2.3.1; I don't know if this has changed since Jupyter.
Edit: it still works with Jupyter; see here for differences and potential issues you may encounter.
Note that if you use external libraries in your function, you need to import them on the different engines with:
%px import numpy as np
or
%%px
import numpy as np
import pandas as pd
The same goes for variables and other functions; you need to push them to the engines' namespace:
rc[:].push(dict(
    foo=foo,
    bar=bar))
If you're trying to execute some external scripts in parallel, you don't need IPython's parallel functionality. Replicating bash's parallel execution can be achieved with the subprocess module as follows:
import subprocess

procs = []
for i in range(10):
    procs.append(subprocess.Popen(['ls', '/Users/shad/tmp/'], stdout=subprocess.PIPE))

results = []
for proc in procs:
    stdout, _ = proc.communicate()
    results.append(stdout)
Be wary that if your subprocess generates a lot of output, the process will block; the sketch after the example output below shows one way around that. If you print the output (results) you get:
print results
['file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n', 'file1\nfile2\n']
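One way around that blocking, sketched here on the assumption that you still want each command's full output in memory (the ls command and path are just the ones from the example above), is to drain each child's stdout in its own thread while the processes run:

import subprocess
import threading

def run_and_capture(cmd, results, index):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    stdout, _ = proc.communicate()      # reads continuously, so the pipe buffer never fills up
    results[index] = stdout

cmds = [['ls', '/Users/shad/tmp/']] * 10
results = [None] * len(cmds)
threads = [threading.Thread(target=run_and_capture, args=(cmd, results, i))
           for i, cmd in enumerate(cmds)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)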
