I'm trying to use pexpect to test an application which spawns a podman container. I'm using sendline(), however its arguments are just sent to the child but not executed, as if there were no 'return'.
Once I do child.interact(), the whole previously sent content is executed at once. But I cannot use interact() in my code.
Any idea what to change so that the child's bash process executes its input after sendline()?
Using pexpect 4.8.0 from PyPI with python3-3.11.1-1.fc37.x86_64 on Fedora 37.
import pexpect
l = open('/tmp/pexpect_session.log', 'wb')
child = pexpect.spawn('podman run --rm -ti fedora bash', logfile=l)
# few seconds delay
child.sendline('echo hello world;')
print(child.buffer)
At this point child.buffer contains only b'' and the logfile has only the content I sent via sendline(), not the output of the command itself.
If I run child.interact() at this point, the echo command is executed.
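For reference, a minimal sketch of the pattern I would expect to need, waiting for the prompt rather than sleeping (the '#' prompt pattern is an assumption about the fedora image's root prompt, not verified):
import pexpect

l = open('/tmp/pexpect_session.log', 'wb')
child = pexpect.spawn('podman run --rm -ti fedora bash', logfile=l)
# Wait for the shell prompt instead of a fixed delay; '#' is an
# assumed default root prompt
child.expect('#')
child.sendline('echo hello world')
# Wait for the command's output; note this may first match the echoed
# input line, since the pty echoes what is sent
child.expect('hello world')
print(child.before)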
I have a script, wrapper.sh, that takes a string as an argument.
wrapper.sh
#!/usr/bin/env bash
node ./index.js $1
Now if I pass the argument hello it runs fine, but if I pass hello&pwd then the full string is passed as an argument to the nodejs file, instead of just passing hello to nodejs and running pwd separately.
Example
./wrapper.sh "hello"
# nodejs gets argument hello : Expected
./wrapper.sh "hello&pwd"
# nodejs gets argument hello&pwd : Not Expected
# Required: only hello in nodejs, while pwd runs separately
I have tried a lot of solutions online but none seem to work, except eval and bash -c, which I don't want to use because the script doesn't wait for these commands to finish.
Edit
wrapper.sh is executed by third-party software, and the content of the script is dynamically configured by the user, so there's not much in my hands. The job of my module is just to set up the script properly so that it can be executed by the third-party software.
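As an aside on the mechanics: POSIX word splitting only breaks on whitespace, never on &, and text expanded from a parameter is never re-parsed as a control operator. Python's shlex follows the same splitting rules, which makes this easy to see (an illustrative aside, not part of the wrapper):
import shlex

# '&' is an ordinary character during word splitting, so the whole
# string stays a single word; only whitespace separates words
print(shlex.split('hello&pwd'))   # ['hello&pwd']
print(shlex.split('hello pwd'))   # ['hello', 'pwd']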
I'm setting up a unit test framework using ctest and cmake. The idea is to have the test command executed in a docker container, so that the test runs inside the container. That is the requirement.
The add_test looks like this:
add_test(test_name /bin/sh runner.sh test_cmd)
where runner.sh is the script that runs the container, and
test_cmd is the test command that runs in the container.
test_cmd looks like this:
/path/to/test/test_binary; CODE=$?; echo $CODE > /root/result.txt;
runner.sh has this code:
docker exec -t -i --user root $CONTAINERNAME bash -c "test_cmd"
runner.sh then tries to read /root/result.txt from the container.
runner.sh spawns a new container for each test; each test runs in its own container,
so there is no way they can interfere with one another when executed in parallel.
/root/result.txt is separate for each container.
When I run the tests like this:
make test ARGS="-j8"
for some specific tests /root/result.txt is not generated, hence reading that file fails (the docker exec for test_cmd has already returned).
And I cannot see the stdout of those tests in LastTest.log.
When I run the tests like this:
make test ARGS="-j1"
All tests pass. /root/result.txt is generated for every test, and I can see the output (stdout) of those tests.
The same failing behavior occurs for any j > 1.
Tests are not being timed out. I checked.
My guess is that, before
echo $CODE > /root/result.txt;
has run, I'm already trying to read the exit status from /root/result.txt. But then how does it pass with -j1? And in sh execution is sequential: until one command exits, it doesn't move ahead.
One interesting observation: when I try to do the same thing (docker exec, same command) from a python script using subprocess instead of bash, it works.
import subprocess

def executeViaSubprocess(cmd, doOutput=False):
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    stdout, stderr = p.communicate()   # stderr is None here, since it is not piped
    retCode = p.returncode
    return retCode, stdout
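One difference between the two paths that may matter (an observation, not a confirmed diagnosis): the subprocess version allocates no TTY and attaches no stdin, while the runner.sh line passes -t -i to docker exec. For comparison, a sketch of the same call from Python, with container_name and test_cmd standing in for the values runner.sh uses (hypothetical placeholders):
import subprocess

# Hypothetical placeholders for the values runner.sh uses
container_name = 'test_container'
test_cmd = '/path/to/test/test_binary; CODE=$?; echo $CODE > /root/result.txt;'

# Same docker exec as runner.sh, but without -t/-i: no TTY and no
# attached stdin, matching how subprocess invokes it
cmd = ['docker', 'exec', '--user', 'root', container_name, 'bash', '-c', test_cmd]
p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
stdout, _ = p.communicate()
print(p.returncode, stdout)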
Scenario:
I have a pipeline in a bash script and a list of processes along with their arguments. I want to run a python script after the execution of each process (executable) in the pipeline, if the process is in my list.
(I use Python 2.7)
My proposed solution:
Using a python wrapper script. I have replaced all executables in the pipeline with my custom python script, which:
1) checks whether the process is in the list; if so, sets FLAG=True
2) executes the process via the original executable using subprocess.Popen(process.command, shell=True).communicate()
3) if FLAG==True, does something.
Problem:
With the current solution, when I run the processes using subprocess.Popen().communicate(), the processes execute separately, and the output of the inner (child) process cannot be passed to the outer (parent) process.
For example:
#!/bin/bash
Mean=`P1 $Image1 -M`
P2 "$Image2" $Mean -F
The output value of Mean is not available during the second line's execution.
The second line will execute like:
subprocess.Popen("P2 $Image2 \nP1 $Image1 -M -F", shell=True).communicate()
Therefore, it returns an error!
Is there a better way in python to execute processes like this?
Please let me know if there is any other suggestion for this scenario (I'm a total beginner in bash).
There's no need to use bash at all.
Assuming modern Python 3.x:
#!/usr/bin/env python
import subprocess
import sys

image1 = sys.argv[1]
image2 = sys.argv[2]

# text=True decodes stdout, so .strip() yields a string (Python 3.7+)
p1 = subprocess.run(['P1', image1, '-M'], check=True, capture_output=True, text=True)
p2 = subprocess.run(['P2', image2, p1.stdout.strip(), '-F'], check=True, capture_output=True, text=True)
print(p2.stdout)
Note that we refer to p1.stdout.strip() where we need the mean value in P2's arguments.
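Since the question mentions Python 2.7, a rough equivalent that also runs under 2.7 (a sketch; subprocess.check_output is available from 2.7 onward and raises CalledProcessError on a non-zero exit status, similar to check=True above):
#!/usr/bin/env python
import subprocess
import sys

image1 = sys.argv[1]
image2 = sys.argv[2]

# Capture P1's stdout, strip the trailing newline, and pass it to P2
mean = subprocess.check_output(['P1', image1, '-M']).strip()
out = subprocess.check_output(['P2', image2, mean, '-F'])
print(out)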
I am building an image with a Dockerfile, pulling the latest Ubuntu and golang images.
After importing all the directories and building the executable with go build inside the image, I want to run the executable. For that reason, I tried using either ENTRYPOINT or CMD, so that the executable runs when the container starts.
The issue is that when I do that and run the container in either attached or detached mode, it keeps registering the Enter key repeatedly (and CPU usage goes crazy). I can understand this, because my script waits for a key to be registered and then for some input to terminate; but since the Enter key is immediately registered again, it prints a message and the same loop repeats.
When I build my image without executing the binary (no CMD or ENTRYPOINT), I can run the container (the binary is still built inside the image) with a bash terminal and execute the binary manually, and everything works as it should, without the Enter key registering all the time.
Any ideas why this might be happening?
Brief description of my Dockerfile:
# Import Images
FROM ubuntu:18.04
FROM golang:1.10
# Open ports
EXPOSE ...
# Copy dependencies to GOPATH in docker file
COPY github.com/user/dependencies /go/src/github.com/user/dependencies
...
# Set working directory and build executable
WORKDIR /go/src/github.com/user/app-folder
RUN go build
# Run the binary (or not)
CMD ["app_name"]
-----OR-----
CMD ["./app_name"]
-----OR-----
ENTRYPOINT app_name
-----OR-----
ENTRYPOINT /go/src/github.com/user/app-folder/app_name
In the end I tried all of these, one at a time; I just included them together here for display. The result was always the same. The output in the terminal is:
...
Are you sure you want to exit? y/n
running. press enter to stop.
Are you sure you want to exit? y/n
running. press enter to stop.
...
The go script is as follows:
// running flag is set to True and then it scans for a newline
for running {
fmt.Println("running. press enter to stop.")
fmt.Scanln()
fmt.Println("Are you sure you want to exit? y/n")
if models.ConfirmUserAction() {
running = false
close(models.DbBuffer)
}
}
and the models package that includes the ConfirmUserAction function:
//ConfirmUserAction waits (blocks) for user input, returns true if input was Y/y, else false.
func ConfirmUserAction() bool {
var confirm string
fmt.Scanln(&confirm)
if confirm == "y" || confirm == "Y" {
return true
}
return false
}
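A hedged aside on the likely mechanism (my assumption, not verified for this setup): if the container runs without stdin attached, reads on stdin return immediately instead of blocking, so the read loop spins. The same effect is easy to reproduce outside Go; a small Python demonstration:
import sys

# Run as: python3 demo.py < /dev/null
# With stdin closed, readline() returns '' immediately instead of
# blocking, so this loop would spin just like the fmt.Scanln() loop
while True:
    line = sys.stdin.readline()
    if line == '':   # EOF: no stdin attached
        print('stdin closed; a real loop would busy-spin here')
        break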
I found a way to bypass this issue by creating a shell script inside the container, which then runs the executable (maybe a bit hacky, but the Enter key is no longer registered all the time).
So now, instead of running the executable at the ENTRYPOINT in my Dockerfile, I run a shell script at the ENTRYPOINT, which simply contains something like this:
#! /bin/sh
sleep 1;
echo "Starting Metrics Server..."
./metrics_server
metrics_server is my compiled executable, and I set the working directory (WORKDIR) in my Dockerfile to be where the executable and the shell script are.
Something worth mentioning is that I had already imported the Ubuntu image in my Dockerfile (FROM ubuntu:18.04), as I need it anyway. I am saying this because it might not work without it (not entirely sure, I did not try it).
I have an issue with executing commands in python.
The problem is:
Our company has bought commercial software that can be used via either a GUI or a command line interface. I have been assigned the task of automating it as much as possible. First I thought about using the CLI instead of the GUI, but then I encountered a problem with executing multiple commands.
Now, I want to execute the CLI version of that software with arguments and then continue executing commands in its menu. (I don't mean executing the script with arguments again; I want it so that, once the initial command has executed and the software has opened its menu, I can execute the software's commands inside that menu in the background.) Then I want to redirect the output to a variable.
I know I must use subprocess with PIPE, but I didn't manage to get it working.
import subprocess

proc = subprocess.Popen('./Goldbackup -s -I -U', shell=True, stdout=subprocess.PIPE)
output = proc.communicate()[0]
proc_2 = subprocess.Popen('yes\r\n/dir/blabla/\r\nyes', shell=True, stdout=subprocess.PIPE)
# This is the one I want to execute inside the first subprocess
Set stdin=PIPE if you want to pass commands to a subprocess via its stdin:
#!/usr/bin/env python
from subprocess import Popen, PIPE
proc = Popen('./Goldbackup -s -I -U'.split(), stdin=PIPE, stdout=PIPE,
             universal_newlines=True)
output = proc.communicate('yes\n/dir/blabla/\nyes')[0]
See Python - How do I pass a string into subprocess.Popen (using the stdin argument)?
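If the menu needs answers at different points in time rather than all up front, a line-by-line variant of the same idea (still a sketch, with the same assumptions about Goldbackup's prompts):
from subprocess import Popen, PIPE

proc = Popen('./Goldbackup -s -I -U'.split(), stdin=PIPE, stdout=PIPE,
             universal_newlines=True)
# Answer each prompt as it appears; flush so the child sees it promptly
proc.stdin.write('yes\n')
proc.stdin.flush()
proc.stdin.write('/dir/blabla/\n')
proc.stdin.flush()
proc.stdin.write('yes\n')
proc.stdin.flush()
# Close stdin and collect the remaining output
output = proc.communicate()[0]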