How does a read-eval-print loop (REPL) work? How is the state of previous commands stored? Does it recompile and run everything again each time?

How is this achieved?
Does it store the session in a file like test.py, which has
# test.py
def foo():
    print('foo was called')
foo()
and then runs $ python test.py?
I am asking about the internal workings of the REPL:
how is the state of previous commands stored?
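For intuition, here is a minimal sketch of the idea (not how CPython's REPL is literally implemented): each piece of input is compiled and executed once, immediately, against a single namespace dictionary that lives for the whole session. The state of previous commands is simply whatever names those commands bound in that namespace; nothing is written to a file and nothing is re-run.
# Minimal REPL sketch: one persistent namespace dict holds all state.
namespace = {}
while True:
    try:
        source = input('>>> ')
    except EOFError:
        break
    try:
        # 'single' mode mirrors interactive behaviour (expression results are printed).
        # A real REPL also buffers continuation lines for multi-line blocks,
        # which is what the standard library's code.InteractiveConsole does.
        code_obj = compile(source, '<stdin>', 'single')
        exec(code_obj, namespace)  # earlier definitions remain visible here
    except Exception as exc:
        print(f'{type(exc).__name__}: {exc}')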

Related

sendline() content isn't executed until 'interact()' is called (podman + bash)

I'm trying to use pexpect to test an application which spawns a podman container. I'm using sendline(), however its argument is just sent to the child but not executed, as if there were no 'return'.
Once I do child.interact(), all of the previously sent content is executed at once. But I cannot use interact() in my code.
Any idea what to change so that the child's bash process executes its input right after sendline()?
Using pexpect 4.8.0 from PyPI with python3-3.11.1-1.fc37.x86_64 on Fedora 37.
import pexpect
l = open('/tmp/pexpect_session.log', 'wb')
child = pexpect.spawn('podman run --rm -ti fedora bash', logfile=l)
# few seconds delay
child.sendline('echo hello world;')
print(child.buffer)
At this point child.buffer contains only b'' and the logfile has only the content I sent via sendline(), not the output of the command itself.
If I run child.interact() at this point the echo command is executed.
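For reference, pexpect only moves the child's output into child.buffer (and the read side of the logfile) when it actually reads from the pty, e.g. via expect() or read_nonblocking(); sendline() by itself only writes. Below is a sketch of the usual send-then-expect pattern; the prompt pattern '#' is an assumption for a root shell inside the container, and this may or may not resolve the podman-specific behaviour described above.
import pexpect

# Spawn the container shell and wait for a prompt before sending anything.
child = pexpect.spawn('podman run --rm -ti fedora bash', encoding='utf-8')
child.expect('#')                 # assumed root prompt inside the container

child.sendline('echo hello world')
child.expect('hello world')       # reading is what fills buffer/logfile
child.expect('#')                 # wait for the next prompt
print(child.before)               # everything read before the last match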

How to execute a bash pipeline using a Python wrapper?

Scenario:
I have a pipeline in a bash script and a list of processes along with their arguments. I want to run a Python script after the execution of each process (executable) in the pipeline, if the process is in my list.
(I use Python 2.7)
My proposed solution:
Use a Python wrapper script. I have replaced every executable in the pipeline with my custom Python script, which:
1) checks whether the process is in the list; if so, sets FLAG=True
2) executes the process with the original executable using subprocess.Popen(process.command, shell=True).communicate()
3) if FLAG==True, does something.
Problem:
With the current solution, when I run the processes using subprocess.Popen().communicate(), they execute separately, and the output of the inner process (child) never reaches the outer process (parent).
For example:
#!/bin/bash
Mean=`P1 $Image1 -M`
P2 "$Image2" $Mean -F
The value of Mean is not available when the second line executes.
The second line ends up executing as something like:
subprocess.Popen("P2 $Image2 \nP1 $Image1 -M -F", shell=True).communicate()
Therefore, it returns an error!
Is there a better way in Python to execute processes like this?
Please let me know if there is any other suggestion for this scenario (I'm a complete beginner in bash).
There's no need to use bash at all.
Assuming modern Python 3.x:
#!/usr/bin/env python
import subprocess
import sys

image1 = sys.argv[1]
image2 = sys.argv[2]

# capture_output=True with text=True returns the command's stdout as a string
p1 = subprocess.run(['P1', image1, '-M'], check=True, capture_output=True, text=True)
# pass the stripped mean value straight through as an argument to P2
p2 = subprocess.run(['P2', image2, p1.stdout.strip(), '-F'], check=True, capture_output=True, text=True)
print(p2.stdout)
Note that we pass p1.stdout.strip() where the mean value is needed in P2's arguments.
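Because text=True is set, the captured stdout is a str rather than bytes, and .strip() removes the trailing newline, so the second call plays the same role as the backquoted Mean=`P1 $Image1 -M` followed by P2 "$Image2" $Mean -F in the original bash script.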

How do I retrieve output from a bash script that is run from a Tcl script (modulefile script)?

In my home directory, I have subdirectories (CentOS, Ubuntu, etc.) for the specific nodes I have access to.
Each OS directory holds its own copy of programs, one of which is Python:
$HOME/{CentOS, Ubuntu, ...}/{python2,python3}
I am using Environment Modules so that when I ssh into a different computer (COMP), Python aliases are set for that specific COMP. For example:
COMP1 is CentOS
when I ssh into COMP1, "python3" should point to $HOME/Centos/python3/bin/python3
COMP2 is Ubuntu
when I ssh into COMP2 "python2" should point to $HOME/Ubuntu/python2/bin/python2
I can retrieve the OS name in bash using lsb_release -si, but I am working with modulefiles, which are written in Tcl, and I haven't found anything like lsb_release there. Can I have a bash script that outputs lsb_release -si when called from a Tcl script?
I tried doing this but no luck:
BASH SCRIPT:
#!/bin/bash
OS=$(lsb_release -si)
echo $OS
MODULEFILE SCRIPT:
#%Modulefile1.0
set OS [catch {exec bash /path/to/bash_file} output]
puts $OS
This doesn't do much.
Option A: export the variable in bash and access the environment variable in tcl.
#!/bin/bash
OS=$(lsb_release -si)
export OS
somescript.tcl
#!/usr/bin/tclsh
puts $::env(OS)
Option B: Use the platform package that comes with tcl.
#!/usr/bin/tclsh
package require platform
puts [platform::identify] ; # detailed OS-CPU
puts [platform::generic] ; # more generic OS-CPU
References: env platform
Your code doesn't look obviously wrong.
However, after the [catch {exec ...} output], the value you are looking for should be in the output variable; the OS variable will hold a code indicating, effectively, whether the bash script produced any output to stderr. Since you're not interested in that debugging output, which might be produced for reasons not easily under your control, you can probably do this:
catch {exec bash /path/to/bash_file 2>/dev/null} output
puts $output
Also make sure your bash script has an explicit exit at the end, so that it stops correctly. That's the default behaviour, but it's better to be explicit here since this is a (small) program.

How to execute a bunch of commands in one pipe using Python?

I have an issue with executing commands in Python.
The problem is:
Our company has bought commercial software that can be used through either a GUI or a command-line interface. I have been assigned the task of automating it as much as possible. First I thought about using the CLI instead of the GUI, but then I encountered a problem with executing multiple commands.
Now, I want to execute the CLI version of the software with arguments and then continue executing commands in its menu (I don't mean executing the script with arguments again; I want it so that, once the initial command has executed and the menu opens, I can run the software's own menu commands in the background). Then I want to redirect the output to a variable.
I know I must use subprocess with PIPE, but I haven't managed to get it working.
import subprocess
proc=subprocess.Popen('./Goldbackup -s -I -U', shell=True, stdout=subprocess.PIPE)
output=proc.communicate()[0]
proc_2 = subprocess.Popen('yes\r\n/dir/blabla/\r\nyes', shell=True, stdout=subprocess.PIPE)
# This one I want to execute inside the first subprocess
Set stdin=PIPE if you want to pass commands to a subprocess via its stdin:
#!/usr/bin/env python
from subprocess import Popen, PIPE
proc = Popen('./Goldbackup -s -I -U'.split(), stdin=PIPE, stdout=PIPE,
universal_newlines=True)
output = proc.communicate('yes\n/dir/blabla/\nyes')[0]
See Python - How do I pass a string into subprocess.Popen (using the stdin argument)?
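Note that communicate() writes all of the input in one go, closes stdin, and waits for the process to exit, so it is a one-shot exchange. If the menu genuinely needs step-by-step interaction, one option is to write to proc.stdin incrementally; this is only a sketch and assumes Goldbackup reads its menu answers line by line from stdin (interleaving reads and writes without deadlocking is the kind of job pexpect, mentioned above, is built for).
#!/usr/bin/env python
from subprocess import Popen, PIPE

proc = Popen('./Goldbackup -s -I -U'.split(), stdin=PIPE, stdout=PIPE,
             universal_newlines=True, bufsize=1)

# Answer the menu prompts one line at a time.
for answer in ('yes', '/dir/blabla/', 'yes'):
    proc.stdin.write(answer + '\n')
    proc.stdin.flush()

output, _ = proc.communicate()  # close stdin and collect the remaining output
print(output)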

Call a function with root's permissions in Ruby?

I know in shell script I can do this:
#!/bin/sh
foo() {
    rm -rf /
}
foo # fail
sudo foo # succeed
To implement this in Ruby, I currently use a separate script file to store the operations that need root privileges, and then call it from the main script with something like system('sudo', 'ruby', 'sudo_operations.rb', 'do_rm_rf_root').
It would be much better if I could invoke the function directly without separating it out. For example, I'm imagining something like this:
def sudo(&method_needs_root_privilege)
# ...
end
Then I can use that like:
sudo do
  puts ENV['UID'] # print 0
  system('rm -rf /') # successfully executed
end
Are there any gems that help, or any other ideas for implementing this?
Thanks in advance.
No. UID/GID only exist at the process level; functions cannot run as a different user (e.g., root) from the rest of a process.
While it is possible for a process to change its uid (using the set*id family of system calls), a process must already be running as root to do so.
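In practice that means the workaround the question already uses, keeping the privileged operations in a separate script and invoking it with sudo ruby sudo_operations.rb, is essentially the standard approach: the block has to run in another process that is started as root.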
