pexpect times out before script ends - pexpect

I am using pexpect to connect to a remote server over ssh.
The following code works, but I have to use time.sleep to add delays, especially when I send a command that runs a script on the remote server.
The script takes up to a minute to run, and if I don't add a 60-second delay, my script ends prematurely.
The same issue occurs when I use sftp to download a file: if the file is large, it downloads only partially.
Is there a way to control this without using a delay?
#!/usr/bin/python3
import pexpect
import time

siteip = "131.235.111.111"
ssh_new_conn = 'Are you sure you want to continue connecting'  # first-connection host-key prompt
password = 'xxxxx'

child = pexpect.spawn('ssh admin@' + siteip)
time.sleep(1)
child.expect('admin@.* password:')
child.sendline(password)
time.sleep(2)
child.expect('admin@.*')
print('ssh to abcd - takes 60 seconds')
child.sendline('backuplog\r')
time.sleep(50)
child.sendline('pwd')

Many pexpect functions take an optional timeout= keyword argument, and the one you pass to spawn() sets the default for the whole session. For example:
child.expect('admin@.*', timeout=70)
You can pass timeout=None to disable the timeout entirely.
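Applied to the script above, each fixed sleep can be replaced by an expect() on the next prompt, with a timeout long enough for the command to finish. A minimal sketch (the prompt patterns are taken from the question and may need adjusting for your device):
import pexpect

child = pexpect.spawn('ssh admin@131.235.111.111', timeout=30)  # session-wide default
child.expect('admin@.* password:')
child.sendline('xxxxx')
child.expect('admin@.*')               # returns as soon as the prompt appears
child.sendline('backuplog')
child.expect('admin@.*', timeout=70)   # allow up to 70 s for backuplog to finish
child.sendline('pwd')
child.expect('admin@.*')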

Related

Let go tool pprof collect new data periodically

I'm using go pprof like this:
go tool pprof -no_browser -http=0.0.0.0:8081 http://localhost:6060/debug/pprof/profile?seconds=60
How can I ask pprof to fetch the profiling data periodically?
Here's a python script that uses wget to grab the data every hour, putting the output into a file whose name includes the timestamp.
Each file can be inspected by running
go tool pprof pprof_data_YYYY-MM-DD_HH
Here's the script:
import subprocess
import time
from datetime import datetime

while True:
    # Sleep until about one second past the top of the next hour.
    now = datetime.now()
    sleepTime = 3601 - (60 * now.minute + now.second + 1e-6 * now.microsecond)
    time.sleep(sleepTime)
    # Grab a 60-second profile and save it under a timestamped name.
    now = datetime.now()
    tag = f"{now.year}-{now.month:02d}-{now.day:02d}_{now.hour:02d}"
    subprocess.run(["wget", "-O", f"pprof_data_{tag}", "-nv", "-o", "/dev/null",
                    "http://localhost:6060/debug/pprof/profile?seconds=60"])
The 3601 makes wget run about one second after the top of the hour, avoiding the race where time.sleep returns just before the hour mark.
You could obviously write a similar script in bash or your favorite language.
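For instance, a variant of the download step that uses only Python's standard library instead of shelling out to wget might look like this (same URL and naming scheme as above):
import urllib.request

def fetch_profile(tag: str) -> None:
    # Download a 60-second CPU profile and save it under a timestamped name.
    url = "http://localhost:6060/debug/pprof/profile?seconds=60"
    with urllib.request.urlopen(url) as resp, open(f"pprof_data_{tag}", "wb") as out:
        out.write(resp.read())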

How do I read from a Jupyter IOPub socket?

I'm trying to learn more about the Jupyter wire protocol. I want to collect examples of the messages sent on the IOPub socket.
SETUP:
I start a Jupyter console in one terminal then go find the connection file. In my case the contents are as follows:
{
  "shell_port": 62690,
  "iopub_port": 62691,
  "stdin_port": 62692,
  "control_port": 62693,
  "hb_port": 62694,
  "ip": "127.0.0.1",
  "key": "9c6bbbfb-6ad699d44a15189c4f3d3371",
  "transport": "tcp",
  "signature_scheme": "hmac-sha256",
  "kernel_name": ""
}
I create a simple python script as follows:
import zmq

iopub_port = "62691"
ip = "127.0.0.1"
transport = "tcp"

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect(f"{transport}://{ip}:{iopub_port}")

while True:
    string = socket.recv()
    print(string)
I open a second terminal and execute the script as follows (it blocks, as expected):
python3 script.py
And then I switch back to the first terminal (with the Jupyter console running) and start executing code.
ISSUE: Nothing prints on the second terminal.
EXPECTED: Some Jupyter IO messages, or at least some sort of error.
Uh, help? Is my code fine and this is just an issue with my config, or is my code somehow broken?
From one of the owners of the Jupyter client repo:
ZMQ subscriber sockets need a subscription set before they'll receive
any messages. The subscription is a prefix of a valid message, and you
can set it to an empty bytes string to subscribe to all messages.
e.g. in my case I need to add
socket.setsockopt(zmq.SUBSCRIBE, b'')
before starting the while loop.
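Putting it together, the working version of the script is:
import zmq

iopub_port = "62691"
ip = "127.0.0.1"
transport = "tcp"

context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.setsockopt(zmq.SUBSCRIBE, b"")  # subscribe to all messages
socket.connect(f"{transport}://{ip}:{iopub_port}")

while True:
    print(socket.recv())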
Do you know if it's possible to tell from IOPub whether a process in a Jupyter notebook has finished or not?
I'm looking here (http://jupyterlab.github.io/jupyterlab/services/modules/kernelmessage.html) but it is not very clear.

How to properly make an API request in Python 3 with base64 encoding

I'm trying to run a bash script at server creation on vultr.com through the API, using Python 3.
I'm not sure what I'm doing wrong. The server starts up, but the script never runs.
The documentation states it has to be a base64-encoded string, so I suspect I'm doing something wrong with the encoding.
Any ideas?
import base64
import requests

key = 'redacted'
squid = '''#!/bin/bash
touch test'''
squid_encoded = base64.b64encode(squid.encode())

payload = {'DCID': 1, 'VPSPLANID': 29, 'OSID': 215, 'userdata': squid_encoded}
headers = {'API-Key': key}

def vult_api_call():
    p = requests.post('https://api.vultr.com/v1/server/create', data=payload, headers=headers)
    print(p.status_code)
    print(p.text)

vult_api_call()
The cloud-init userdata scripts can be tricky to troubleshoot. Looking at your squid script, it is missing the #cloud-config header. Vultr has a similar example in their docs:
Run a script after the system boots.
#cloud-config
bootcmd:
  - "/bin/echo sample > /root/my_file.txt"
Support for this can vary by operating system though. I would recommend using Vultr's startup script feature instead of cloud-init, as it works for every operating system. These are referenced in the Vultr API spec with SCRIPTID.
Also note that "bootcmd:" will run every time the instance boots. There is also "runcmd:", but I have seen compatibility issues with it on some Ubuntu distros on Vultr.
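For completeness, here is a sketch of the question's script adapted to send a cloud-config payload (IDs copied from the question; as noted above, cloud-init support depends on the chosen operating system):
import base64
import requests

key = 'redacted'
userdata = '''#cloud-config
bootcmd:
  - "/bin/echo sample > /root/my_file.txt"
'''
# b64encode returns bytes; decode to a plain string for the form payload
userdata_encoded = base64.b64encode(userdata.encode()).decode()

payload = {'DCID': 1, 'VPSPLANID': 29, 'OSID': 215, 'userdata': userdata_encoded}
headers = {'API-Key': key}
p = requests.post('https://api.vultr.com/v1/server/create', data=payload, headers=headers)
print(p.status_code, p.text)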

Errors in UDP sending in a sub-script (bash)

Using a Raspberry Pi running Debian, I have a script that parses the results of an iwlist scan and sends them via UDP to a Pure Data patch. This runs fine in GUI mode, but now I'm trying to automate the whole process in another script with the following:
pd-extended -nogui /home/pi/patch.pd & /home/pi/libOSC/scan.sh && fg
But when I run this new script, the UDP appears to only send the info to Pure Data once, and then the scanning continues but Pd does not receive the packet. Any help with this would be appreciated.
What happens when you run /home/pi/libOSC/scan.sh on its own? Does it send the results only once? Then maybe you need to do it differently, e.g. call that script from within Pd using the 'shell' or 'popen' objects, or implement a polling command via UDP that returns the values.
What does your scan.sh script look like?
You probably want to make it something like:

pdhost=localhost
pdport=9999

do_scan() {
    ## some code here that does the scan and prints the result to stdout
}

do_scan | while read line
do
    echo "${line};" | pdsend ${pdhost} ${pdport}
done

rather than the following:

do_scan | pdsend ${pdhost} ${pdport}

Synchronizing between multiple pexpect processes

I am writing an application that requires to ssh and telnet to a device at the same time.
The pseudo code goes something like this.
p1 = pexpect.spawn("ssh to the device")
p1.send("run some command")
p1.expect("..")

p2 = pexpect.spawn("telnet to same device")
p2.send("run a command that can be run only through telnet")
p2.expect("..")

p1.send("run some other command")
p1.expect("..")

p2.send("run another command that can be run only through telnet")
p2.expect("..")
As you can see, I need synchronization between the two pexpect children so the commands run one after the other.
I searched a lot but could not find any information on this.
Please help. Thanks.
