How to properly make an API request in Python 3 with base64 encoding - bash

I'm trying to run a bash script at server creation on vultr.com through the API using Python 3.
I'm not sure what I'm doing wrong. The server starts up but the script never runs.
The documentation states it has to be a base64 encoded string. I'm thinking I'm doing something wrong with the encoding.
Have any ideas?
import base64
import requests
key = 'redacted'
squid = '''#!/bin/bash
touch test'''
squid_encoded = base64.b64encode(squid.encode())
payload = {'DCID': 1, 'VPSPLANID': 29, 'OSID': 215, 'userdata': squid_encoded}
headers = {'API-Key': key}
def vult_api_call():
    p = requests.post('https://api.vultr.com/v1/server/create', data=payload, headers=headers)
    print(p.status_code)
    print(p.text)

vult_api_call()

The cloud-init userdata scripts can be tricky to troubleshoot. Looking at your squid script, it is missing the #cloud-config header. Vultr has a similar example in their docs:
Run a script after the system boots:
#cloud-config
bootcmd:
 - "/bin/echo sample > /root/my_file.txt"
Support for this can vary by operating system though. I would recommend using Vultr's startup script feature instead of cloud-init, as it works for every operating system. These are referenced in the Vultr API spec with SCRIPTID.
Also note that "bootcmd:" will run every time the instance boots. There is also "runcmd:", but I have seen compatibility issues with it on some Ubuntu distros on Vultr.
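If you stay with cloud-init userdata, the encoding side can be sketched like this (a minimal sketch; the script content is the bootcmd example above, and decoding the base64 bytes back to str keeps a plain string in the request payload):

```python
import base64

# Build the userdata with the #cloud-config header, then base64-encode it.
# b64encode returns bytes, so decode back to str before putting the value
# into the form payload that requests will send.
script = '#cloud-config\nbootcmd:\n - "/bin/echo sample > /root/my_file.txt"\n'
userdata = base64.b64encode(script.encode('utf-8')).decode('ascii')

# Sanity check: the receiving side can decode this back to the original script.
assert base64.b64decode(userdata).decode('utf-8') == script
```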


pexpect timed out before script ends

I am using pexpect to connect to a remote server using ssh.
The following code works but I have to use time.sleep to make a delay.
Especially when I am sending a command to run a script on the remote server.
The script will take up to a minute to run and if I don't use a 60 seconds delay, then the script will end prematurely.
The same issue occurs when I am using sftp to download a file: if the file is large, it downloads only partially.
Is there a way to control this without using a delay?
#!/usr/bin/python3
import pexpect
import time
from subprocess import call
siteip = "131.235.111.111"
ssh_new_conn = 'Are you sure you want to continue connecting'
password = 'xxxxx'
child = pexpect.spawn('ssh admin@' + siteip)
time.sleep(1)
child.expect('admin@.* password:')
child.sendline('xxxxx')
time.sleep(2)
child.expect('admin@.*')
print('ssh to abcd - takes 60 seconds')
child.sendline('backuplog\r')
time.sleep(50)
child.sendline('pwd')
Many pexpect functions take an optional timeout= keyword argument, and the one you pass to spawn() sets the default. For example:
child.expect('admin#',timeout=70)
You can use the value None to never time out.
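The underlying idea of expect() is to wait for a known pattern rather than sleep for a fixed time. A minimal local sketch of that pattern without SSH or pexpect (the command and marker are hypothetical): instead of sleeping 50-60 seconds, read output until a marker appears that the command prints when it finishes.

```python
import subprocess

# Run a "long" command that prints a marker when it completes, then read
# output until the marker appears instead of sleeping a fixed time.
# pexpect's expect() applies the same idea to the remote shell prompt.
proc = subprocess.Popen(
    ['bash', '-c', 'sleep 1; echo BACKUP_DONE'],
    stdout=subprocess.PIPE, text=True,
)
finished = False
for line in proc.stdout:
    if 'BACKUP_DONE' in line:
        finished = True  # the command has really completed
        break
proc.wait()
```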

boto3 - base64 encoded lifecycle configuration produces instance failure

I am trying to set up lifecycle configurations for SageMaker notebooks over the AWS API via boto3. The docs state that a base64-encoded string of the configuration has to be provided.
I am using the following code:
import base64
import boto3

with open('lifecycleconfig.sh', 'rb') as fp:
    file_content = fp.read()
config_string = base64.b64encode(file_content).decode('utf-8')

boto3.client('sagemaker').create_notebook_instance_lifecycle_config(
    NotebookInstanceLifecycleConfigName='mylifecycleconfig1',
    OnCreate=[
        {
            'Content': config_string
        },
    ],
)
With some lifecycleconfig.sh:
#!/bin/bash
set -e
This creates a lifecycle configuration which shows up in the web interface and whose content is seemingly identical to a config created by hand.
However Notebooks using the lifecycle config created via boto3 will not start and the log file will show error:
/home/ec2-user/SageMaker/create_script.sh: line 2: $'\r': command not found
/home/ec2-user/SageMaker/create_script.sh: line 3: set: -
set: usage: set [-abefhkmnptuvxBCHP] [-o option-name] [--] [arg ...]
Moreover, if I copy paste the content of the corrupted config and create a new config by hand, the new one will now also not start.
How do I have to encode a bash script for a working aws lifecycle configuration?
It turned out to be a Windows-specific problem concerning the difference between open(..., 'rb').read() and open(..., 'r').read().encode('utf-8').
On my Linux machine these two give the same result. On Windows, however, open(..., 'rb') yields \r\n for new lines, which apparently is accepted by Amazon's web interface but not by the Linux machine where the script gets deployed.
This is an OS-independent solution:
import base64

with open('lifecycleconfig.sh', 'r') as fp:
    file_content = fp.read()
config_string = base64.b64encode(file_content.encode('utf-8')).decode('utf-8')
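The failure mode can be reproduced without AWS (a minimal sketch): base64 faithfully preserves the Windows \r\n line endings, so the \r characters reach the Linux host and break bash.

```python
import base64

# What open(..., 'rb').read() yields on Windows, vs. the normalized text
# that the OS-independent approach produces.
windows_bytes = b'#!/bin/bash\r\nset -e\r\n'
clean_text = '#!/bin/bash\nset -e\n'

b64_windows = base64.b64encode(windows_bytes).decode('utf-8')
b64_clean = base64.b64encode(clean_text.encode('utf-8')).decode('utf-8')

# The carriage returns survive the base64 round trip, which is why the
# deployed script fails with "$'\r': command not found".
assert b'\r' in base64.b64decode(b64_windows)
assert b'\r' not in base64.b64decode(b64_clean)
```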

Pyro4 configuration doesn't change

I set the Pyro4 configuration like this at the start of my code:
Pyro4.config.THREADPOOL_SIZE = 1
Pyro4.config.THREADPOOL_SIZE_MIN = 1
To check it, I tried to run two clients at the same time; the second one is rejected with 'rejected: no free workers, increase server threadpool size'. It looks like the setting is working, but when I open the console to check the Pyro4 configuration using "python -m Pyro4.configuration", it returns:
THREADPOOL_SIZE = 40
THREADPOOL_SIZE_MIN = 4
Does someone know why?
When you run python -m Pyro4.configuration, it will simply print the default settings (influenced only by any environment variables you may have set). I'm not sure why you think that this should know about the settings you added in your own code.
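If you want the command-line check to agree with your settings, one option (an assumption based on Pyro4 reading PYRO_-prefixed environment variables at import time) is to export the values before Pyro4 is imported, so both your process and "python -m Pyro4.configuration" see them:

```python
import os

# Pyro4 picks up PYRO_*-prefixed environment variables when it is imported,
# so set them before the import. Variables exported in the shell are likewise
# visible to "python -m Pyro4.configuration".
os.environ['PYRO_THREADPOOL_SIZE'] = '1'
os.environ['PYRO_THREADPOOL_SIZE_MIN'] = '1'
# import Pyro4  # import only after the variables are set (requires Pyro4)
```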

Consuming function module with SAP Netweaver RFC SDK in Bash

I'm trying to make a request to a function in a SAP RFC server hosted at 10.123.231.123 with user myuser, password mypass, sysnr 00, client 076, language E. The name of the function is My_Function_Nm with params: string Alternative, string Date, string Name.
I use the command line:
/usr/sap/nwrfcsdk/bin/startrfc -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm
But it always shows me the help instructions.
I guess I'm missing the -E pathname=edifile option, because I don't know how to create an EDI file containing the parameter values for the specified function. Maybe someone can help me with how to create this file and how to correctly invoke startrfc to call this function?
Thanks in advance.
If you check the help text that the program shows, you should find the following passages:
RFC connection options:
[...]
-2 SNA mode on.
You must set this if you want to connect to R/2.
[...]
-3 R/3 mode on.
You must set this if you want to connect to R/3.
Apparently you forgot to specify -3:
/usr/sap/nwrfcsdk/bin/startrfc -3 -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm
You should use sapnwrfc.ini, which stores your connection parameters; it should be placed in the same directory as the client program.
A sample file for your app would look like this (note that 00 is the system number and 076 the client):
DEST=TST1
ASHOST=10.123.231.123
USER=myuser
PASSWD=mypass
SYSNR=00
CLIENT=076
RFC_TRACE=0
Documentation on using this file is here.
For calling the function you must create a Bash script, but it is better to use a Python script.

Perl Script to Monitor URL Using proxy credentials?

Please help with the following code; it is not working in our environment.
use LWP;
use strict;
my $url = 'http://google.com';
my $username = 'user';
my $password = 'mypassword';
my $browser = LWP::UserAgent->new('Mozilla');
$browser->credentials("172.18.124.11:80","something.co.in",$username=>$password);
$browser->timeout(10);
my $response=$browser->get($url);
print $response->content;
OUTPUT :
Can't connect to google.com:80 (timeout)
LWP::Protocol::http::Socket: connect: timeout at C:/Perl/lib/LWP/Protocol/http.pm line 51.
OS: Windows XP
Regards, Gaurav
Do you have an HTTP proxy at 172.18.124.11? I assume LWP is not using the proxy. You might want to use env_proxy => 1 with the new() call.
You also have a mod-perl2 tag in this question. If this code runs inside mod-perl2, it's possible that the http_proxy env variable is not visible to the code. You can check this eg. by printing $browser->proxy('http').
Or just set the proxy with $browser->proxy('http', '172.18.124.11');
Also, I assume you don't have use warnings enabled, because new() takes a hash, not just a string. It's a good idea to always enable warnings; that will save you lots of trouble.
