Why is DBI connect not responding or throwing an error? - oracle

I am trying to test an Oracle database access over a proxy with the following (working) script:
#!/usr/bin/perl
use DBI;
use strict;
use warnings qw(all);
print "Start ->\n";
my $driver = "Proxy";
my $host = "*******";
my $port = "2001";
my $database = "********";
my $driver2 = "Oracle";
my $userid = "******";
my $password = "*****";
my $dsn = "dbi:$driver:hostname=$host;port=$port;dsn=dbi:$driver2:$database";
print "Connect to database ->\n";
my $dbh = DBI->connect($dsn, $userid, $password) or die $DBI::errstr;
print "Read data ->\n";
my $sth = $dbh->prepare("SELECT SYSDATE FROM DUAL");
$sth->execute() or die $DBI::errstr;
print "Number of rows found :". $sth->rows;
while (my @row = $sth->fetchrow_array()) {
    my ($sysdate) = @row;
    print " System Time = $sysdate\n";
}
$sth->finish();
print "Disconnect ->\n";
$dbh = $dbh->disconnect or warn $dbh->errstr;
When I run this the only output is
Start ->
Connect to database ->
[blank line]
and then nothing else happens (waited for more than a minute) and I have to break the process with CTRL+C.
I did verify that the script in general is working by connecting to another host/proxy. So I assume there must be something special going on with the proxy I am currently testing.
My question is: what could cause the script to just hang without any error, and is there a way to get more information about the connection attempt (should the attempt already be logged on the target system at this point)?
EDIT
As suggested by the comment I ran the script via "DBI_TRACE=15 perl scriptname.pl" and got the following output:
Connect to database ->
-> DBI->connect(dbi:Proxy:hostname=*****;port=2001;dsn=dbi:Oracle:******, *****, ****)
-> DBI->install_driver(Proxy) for linux perl=5.008005 pid=31261 ruid=27003 euid=27003
install_driver: DBD::Proxy version 0.2004 loaded from /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm
New DBI::dr (for DBD::Proxy::dr, parent=, id=)
dbih_setup_handle(DBI::dr=HASH(0x9fdcf88)=>DBI::dr=HASH(0x9f96edc), DBD::Proxy::dr, 0, Null!)
dbih_make_com(Null!, 0, DBD::Proxy::dr, 112, 0) thr#0
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), Err, Null!) SCALAR(0x9fdf9d0) (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), State, Null!) SCALAR(0x9fdf9b8) (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), Errstr, Null!) SCALAR(0x9fdf9e8) (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), TraceLevel, Null!) 0 (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), FetchHashKeyName, Null!) 'NAME' (already defined)
>> STORE DISPATCH (DBI::dr=HASH(0x9fdcf88) rc1/1 #3 g0 ima41c pid#31261) at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 76
-> STORE in DBD::_::common for DBD::Proxy::dr (DBI::dr=HASH(0x9fdcf88)~0x9f96edc 'CompatMode' 1)
STORE DBI::dr=HASH(0x9f96edc) 'CompatMode' => 1
warn: 0 '' (err#0)
<- STORE= 1 at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 76
<- install_driver= DBI::dr=HASH(0x9fdcf88)
>> connect DISPATCH (DBI::dr=HASH(0x9fdcf88) rc2/3 #5 g0 ima1 pid#31261) at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBI.pm line 625
!! warn: 0 CLEARED by call to connect method
-> connect for DBD::Proxy::dr (DBI::dr=HASH(0x9fdcf88)~0x9f96edc 'hostname=****;port=2001;dsn=dbi:Oracle:*****' '*****' **** HASH(0x9f0f834))
Will try to find out what that actually means. :)
EDIT2
According to Why is there a "0 CLEARED by call to connect method" warning in DBI_TRACE? the "warn" lines aren't the culprit. So I am none the wiser after the trace?
EDIT3
Quite the adventure. I left the script idling in the background and after about 4 minutes it actually stopped and delivered more tracing goodness:
>> set_err DISPATCH (DBI::dr=HASH(0x8d1fedc) rc1/3 #4 g0 ima11 pid#32187) at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 89
1 -> set_err in DBD::_::common for DBD::Proxy::dr (DBI::dr=HASH(0x8d1fedc)~INNER 1 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
' ' ')
!! ERROR: 1 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
' (err#1)
1 <- set_err= undef at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 89
!! ERROR: 1 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
' (err#1)
<- connect= undef at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBI.pm line 625
-> $DBI::errstr (&) FETCH from lasth=HASH
>> DBD::Proxy::dr::errstr
<- $DBI::errstr= 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
'
DBI connect('hostname=*****;port=2001;dsn=dbi:Oracle:****','*****',...) failed: Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
[... more of the same and the final DESTROY ...]
So the target proxy/host seems to be offline or behind a firewall (would I get the same response in the latter case)?
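Since the trace ultimately reports a plain TCP "Connection timed out", a quick way to distinguish the two cases is a raw socket probe, independent of DBI. This is a supplementary sketch (not from the original post; host and port are placeholders):

```python
import socket

def port_reachable(host, port, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout.

    Returns False for both a refusal and a timeout: a fast False usually
    means the host is up but nothing listens on the port (connection
    refused), while a slow False suggests a firewall silently dropping
    packets - matching the minutes-long hang seen in the trace above.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. port_reachable("proxyhost.example", 2001)
```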

Related

Ren'Py - If Elif Dialogue Statement

My code:
if $ nice_dec4 = true:
    a "Last night, was fantastic, I.."
    a "Needed it."
elif $ mean_dec4 = true:
    a "Hey.."
    a "I was wondering if.."
    a "Nevermind."
    b "NO!"
    b "Could I.. stay over?"
    show sloane tired school
    a "Why would you lie to me..?"
    a "Why do you hit the mouse button like this isn't my life."
    a "Why don't you care.. anymore?"
    a "Come back soon,"
    a "{b}{i}Darling."
The traceback is as follows:
[code]
I'm sorry, but an uncaught exception occurred.
While running game code:
File "game/script.rpy", line 712, in script
if $ nice_dec4 = true:
SyntaxError: invalid syntax (script.rpy, line 712)
-- Full Traceback ------------------------------------------------------------
Full traceback:
File "game/script.rpy", line 712, in script
if $ nice_dec4 = true:
File "/Users/NAME/Desktop/renpy-8.0.1-sdk/renpy/ast.py", line 2115, in execute
if renpy.python.py_eval(condition):
File "/Users/NAME/Desktop/renpy-8.0.1-sdk/renpy/python.py", line 1081, in py_eval
code = py_compile(code, 'eval')
File "/Users/NAME/Desktop/renpy-8.0.1-sdk/renpy/python.py", line 1018, in py_compile
raise e
File "/Users/NAME/Desktop/renpy-8.0.1-sdk/renpy/python.py", line 970, in py_compile
raise orig_e
File "/Users/NAME/Desktop/renpy-8.0.1-sdk/renpy/python.py", line 963, in py_compile
tree = compile(source, filename, py_mode, ast.PyCF_ONLY_AST | flags, 1)
SyntaxError: invalid syntax (script.rpy, line 712)
macOS-10.16-x86_64-i386-64bit x86_64
Ren'Py 8.0.2.22081402
Me And Sloane 1.2
Sun Dec 4 15:33:04 2022
[/code]
I expected the dialogue to change based on a decision the player made earlier; all I got was this error.
When you use an if/elif statement in Ren'Py, you don't need the $ symbol. You also need == (comparison) rather than = (assignment), and Python booleans are capitalised:
if nice_dec4 == True:
    a "Last night, was fantastic, I.."
    a "Needed it."
elif mean_dec4 == True:
    a "Hey.."
    a "I was wondering if.."
The $ symbol is used to run a line of Python, for example to define a variable: $ var1 = True

Paho on_connect works if connection OK but is not called when error. Why?

Why does my paho-mqtt (1.5.1) on_connect callback fire when the connection succeeds, but never get called on an error? For testing I'm using Linux Lite 4.2 (based on Ubuntu 18.04 LTS) with Xfce, running in a VM (VirtualBox).
class subscribemqtt:
    .....
    def on_connect(self, client, userdata, flags, rc):
        print ("ZZZZZZZZZZZZZ in on_connect")
        connectErrs = {.........}
        self.connectRc = rc
        self.connectReason = connectErrs[str(rc)]
        print ("$$$$$$$$$$$$$", self.connectRc, self.connectReason)
        return

    def subscribe(self, arguments):
        ...........
        self.client = paho.Client(self.config['CLIENT_ID'])
        self.client.on_message = self.on_subscribe
        self.client.on_connect = self.on_connect
        print ("#############", self.on_connect)
        print ("XXXXXXXXXXXX calling self.client.connect(...)")
        self.client.connect(self.config['HOST'], self.config['PORT'])
        print ("YYYYYYYYYYYYY calling self.client.loop_start()")
        self.client.loop_start()
        print ("AAAAAAAAAAAAA", self.connected)
        while not self.connected:
            time.sleep(0.1)
        print ("BBBBBBBBBBBBB", self.connected, self.connectRc)
When all the parameters are correct, on_connect gets called:
############# <bound method subscribemqtt.on_connect of <__main__.subscribemqtt object at 0x7f32065a6ac8>>
XXXXXXXXXXXX calling self.client.connect(self.config['HOST'],self.config['PORT']
YYYYYYYYYYYYY calling self.client.loop_start()
AAAAAAAAAAAAA False
ZZZZZZZZZZZZZ in on_connect
$$$$$$$$$$$$$ 0 Connection successful
BBBBBBBBBBBBB True 0
When I set the host address to an invalid address (to create an error to test my error handling) I get:
subscribemqtt.subscribe:topic= Immersion Dummy
############# <bound method subscribemqtt.on_connect of <__main__.subscribemqtt object at 0x7ffb942ae0b8>>
XXXXXXXXXXXX calling self.client.connect(self.config['HOST'],self.config['PORT']
Traceback (most recent call last):
File "/home/linuxlite/Desktop/EMS/sendroutines/subscribemqtt.py", line 275, in <module>
(retcode, reason, topic) = subscribeObj.subscribe([None, topic])
File "/home/linuxlite/Desktop/EMS/sendroutines/subscribemqtt.py", line 191, in subscribe
self.client.connect(self.config['HOST'],self.config['PORT'])
File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 941, in connect
return self.reconnect()
File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 1075, in reconnect
sock = self._create_socket_connection()
File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 3546, in _create_socket_connection
return socket.create_connection(addr, source_address=source, timeout=self._keepalive)
File "/usr/lib/python3.6/socket.py", line 704, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
>>>
Thanks for reading.
Alan
PS. I just tried:
try:
self.client.connect(self.config['HOST'],self.config['PORT'])
except:
print ("**************** Error exception calling self.client.connect")
And that works but my understanding is that on_connect should be called for errors.
From the docs:
on_connect()
on_connect(client, userdata, flags, rc)
Called when the broker responds to our connection request.
The important part there is "when the broker responds". In the example you have shown, the hostname cannot be resolved, so the broker never responds because it is never actually contacted.
on_connect() will be called if the connection succeeds, or if it fails because the username/password is wrong or an unavailable protocol version was requested (e.g. requesting MQTTv5 from a broker that only supports v3).
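In other words, network-level failures (DNS lookup, connection refused) raise at the connect() call itself and never reach on_connect, while broker-level rejections arrive through on_connect's rc argument. A minimal sketch of handling the first kind (safe_connect is a hypothetical helper name; the client is anything with a paho-style connect method):

```python
import socket

def safe_connect(client, host, port=1883, keepalive=60):
    """Attempt client.connect(); network-level failures (DNS lookup,
    refused connection) raise here and never trigger on_connect, so
    they must be caught at the call site.

    Returns (ok, exception_or_None). socket.gaierror is a subclass of
    OSError; it is listed explicitly only for documentation.
    """
    try:
        client.connect(host, port, keepalive)
        return True, None
    except (socket.gaierror, OSError) as exc:
        return False, exc
```

Broker-side rejections (bad credentials, unsupported protocol version) still arrive via on_connect's rc parameter as before.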

FTP upload no data error

I'm getting a weird error that only appears when I try to run this code on a Linux server; it works perfectly on my local Mac. I'm not sure if anyone else has run into this issue and can give me insight.
import ftplib
import sys, os

files_to_drop = []
server = ""
username = ""
password = ""
ftp_connection = ftplib.FTP(server, username, password)
remote_path = "/incoming/"
ftp_connection.cwd(remote_path)
local_path = '/var/www/analytics/'

def deliver_via_ftp():
    fh = open(local_path + files_to_drop[0], 'rb')
    ftp_connection.storbinary('STOR {}'.format(files_to_drop[0]), fh)
    fh.close()
The error I'm receiving on the Linux machine is below:
*cmd* 'CWD /incoming/'
*put* 'CWD /incoming/\r\n'
*get* '250 Directory successfully changed.\n'
*resp* '250 Directory successfully changed.'
starting to drop files.
*cmd* 'TYPE I'
*put* 'TYPE I\r\n'
*get* '500 OOPS: vsf_sysutil_recv_peek: no data\n'
*resp* '500 OOPS: vsf_sysutil_recv_peek: no data'
Traceback (most recent call last):
File "manage_batches.py", line 258, in <module>
m.run()
File "manage_batches.py", line 253, in run
self.__deliver_via_ftp()
File "manage_batches.py", line 215, in __deliver_via_ftp
self.ftp_connection.storbinary('STOR {}'.format(file_name), fh)
File "/usr/lib/python3.5/ftplib.py", line 502, in storbinary
self.voidcmd('TYPE I')
File "/usr/lib/python3.5/ftplib.py", line 277, in voidcmd
return self.voidresp()
File "/usr/lib/python3.5/ftplib.py", line 250, in voidresp
resp = self.getresp()
File "/usr/lib/python3.5/ftplib.py", line 245, in getresp
raise error_perm(resp)
ftplib.error_perm: 500 OOPS: vsf_sysutil_recv_peek: no data
I found a thread here (https://github.com/mscdex/node-ftp/issues/50) regarding the problem, but it seems to point to repeatedly opening and closing the connection. I'm reusing the same connection for all files, and it works on my Mac.
Debug trace on Mac:
*cmd* 'CWD /incoming/'
*put* 'CWD /incoming/\r\n'
*get* '250 Directory successfully changed.\n'
*resp* '250 Directory successfully changed.'
*cmd* 'TYPE I'
*put* 'TYPE I\r\n'
*get* '200 Switching to Binary mode.\n'
*resp* '200 Switching to Binary mode.'
*cmd* 'PASV'
*put* 'PASV\r\n'
*get* '227 Entering Passive Mode (<hidden IP>,11,133).\n'
*resp* '227 Entering Passive Mode (<hidden IP>,11,133).'
*cmd* 'STOR 11062017_0812_AM.txt'
*put* 'STOR 11062017_0812_AM.txt\r\n'
*get* '150 Ok to send data.\n'
*resp* '150 Ok to send data.'
*get* '226 Transfer complete.\n'
*resp* '226 Transfer complete.'
file dropped /Users/f/Desktop/bucket/11062017_0812_AM.txt
Can someone point me in the right direction?
Thanks
EDIT: I cleaned up the code to remove the class structure and I added the debug output set to level 2.
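One defensive option, assuming the server is dropping the idle control connection (which the vsftpd "500 OOPS ... no data" message suggests), is to reconnect and retry when a stored command fails. This is a speculative sketch, not a confirmed fix; upload_with_retry and make_conn are hypothetical names:

```python
import ftplib

def upload_with_retry(make_conn, remote_dir, local_dir, filename, retries=1):
    """Upload one file; if the server has dropped the control connection
    (e.g. vsftpd's '500 OOPS ... no data'), reconnect and retry.

    make_conn is a zero-argument callable returning a logged-in
    ftplib.FTP instance. Returns the connection that succeeded, so the
    caller can keep reusing it for subsequent files.
    """
    conn = make_conn()
    conn.cwd(remote_dir)
    for attempt in range(retries + 1):
        try:
            with open(local_dir + filename, 'rb') as fh:
                conn.storbinary('STOR {}'.format(filename), fh)
            return conn
        except (ftplib.error_perm, ftplib.error_temp, EOFError):
            if attempt == retries:
                raise
            conn = make_conn()   # stale control connection: start fresh
            conn.cwd(remote_dir)
```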

How do I setup Airflow's email configuration to send an email on errors?

I'm trying to make an Airflow task intentionally fail and error out by passing in a Bash line (thisshouldnotrun) that doesn't work. Airflow is outputting the following:
[2017-06-15 17:44:17,869] {bash_operator.py:94} INFO - /tmp/airflowtmpLFTMX7/run_bashm2MEsS: line 7: thisshouldnotrun: command not found
[2017-06-15 17:44:17,869] {bash_operator.py:97} INFO - Command exited with return code 127
[2017-06-15 17:44:17,869] {models.py:1417} ERROR - Bash command failed
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/operators/bash_operator.py", line 100, in execute
raise AirflowException("Bash command failed")
AirflowException: Bash command failed
[2017-06-15 17:44:17,871] {models.py:1433} INFO - Marking task as UP_FOR_RETRY
[2017-06-15 17:44:17,878] {models.py:1462} ERROR - Bash command failed
Traceback (most recent call last):
File "/home/ubuntu/.local/bin/airflow", line 28, in <module>
args.func(args)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/bin/cli.py", line 585, in test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/operators/bash_operator.py", line 100, in execute
raise AirflowException("Bash command failed")
airflow.exceptions.AirflowException: Bash command failed
Will Airflow send an email for these kind of errors? If not, what would be the best way to send an email for these errors?
I'm not even sure if airflow.cfg is set up properly... Since the ultimate goal is to test the email alerting notification, I want to make sure airflow.cfg is set up properly. Here's the setup:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow_data_user
# smtp_password = password
smtp_port = 587
smtp_mail_from = airflow_data_user@domain.com
What is smtp_starttls? I can't find any info on it in the documentation or online. If 2-factor authentication is required for the email account, will that be an issue for Airflow?
Here's my Bash command:
task1_bash_command = """
export PATH=/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
export rundate=`TZ='America/Los_Angeles' date +%F -d "yesterday"`
export AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
/home/ubuntu/bin/snowsql -f //home/ubuntu/sql/script.sql 1> /home/ubuntu/logs/"$rundate"_dev.log 2> /home/ubuntu/logs/"$rundate"_error_dev.log
if [ -e /home/ubuntu/logs/"$rundate"_error_dev.log ]
then
    exit 64
fi
"""
And my task:
task1 = BashOperator(
    task_id = 'run_bash',
    bash_command = task1_bash_command,
    dag = dag,
    retries = 2,
    email_on_failure = True,
    email = 'username@domain.com')
smtp_starttls basically means "use STARTTLS (TLS)".
Set it to False and set smtp_ssl to True if you want to use SSL instead. You probably need smtp_user and smtp_password for either.
Airflow will not handle 2-step authentication. However, if you are using AWS you likely don't need it, as your SMTP (SES) credentials are different from your AWS credentials.
See here.
EDIT:
For airflow to send an email on failure, there are a couple things that need to be set on your task, email_on_failure and email.
See here for example:
def throw_error(**context):
    raise ValueError('Intentionally throwing an error to send an email.')

t1 = PythonOperator(task_id='throw_error_and_email',
                    python_callable=throw_error,
                    provide_context=True,
                    email_on_failure=True,
                    email='your.email@whatever.com',
                    dag=dag)
Use the link below for creating an Airflow DAG:
How to trigger daily DAG run at midnight local time instead of midnight UTC time
Approach 1:
You can set up SMTP locally and make it send email on job failures.
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
smtp_host = localhost
smtp_starttls = False
smtp_ssl = False
smtp_port = 25
smtp_mail_from = noreply@company.com
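To verify an [smtp] section like this outside of Airflow, you can push a one-off message through the same relay with the standard library. A sketch with placeholder addresses; build_test_message and send_test_mail are hypothetical helper names, not Airflow APIs:

```python
import smtplib
from email.message import EmailMessage

def build_test_message(sender, recipient):
    """Build a minimal test email; sender should match smtp_mail_from."""
    msg = EmailMessage()
    msg["Subject"] = "Airflow SMTP test"
    msg["From"] = sender
    msg["To"] = recipient
    msg.set_content("If you can read this, the SMTP relay works.")
    return msg

def send_test_mail(host="localhost", port=25,
                   sender="noreply@company.com",
                   recipient="you@company.com"):
    """Send through the same host/port that airflow.cfg's [smtp] uses."""
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.send_message(build_test_message(sender, recipient))
```

If this call fails, the problem is the relay, not Airflow's configuration.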
Approach 2: You can use Gmail to send email.
I have written an article to do this.
https://helptechcommunity.wordpress.com/2020/04/04/airflow-email-configuration/
If we have 2-factor authentication needed to view emails, will that be an issue here for Airflow?
You can use a Google app password to work around 2-factor authentication:
https://support.google.com/mail/answer/185833?hl=en-GB
Source - https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html

OSError: [Errno 1] Operation not permitted - Python

I have an automation tool that starts a Ruby on Rails server using the command line:
import os
from subprocess import Popen

devnull = open(os.devnull, 'r+')
self.webserver = Popen(server_cmd, shell=True, stdin=devnull, stdout=devnull,
                       stderr=devnull, close_fds=True, preexec_fn=os.setsid)
self.webserver holds the Popen object for the server, which runs as a separate process (the way it is supposed to be). So I have the stop_webserver method to kill the process and stop the Rails server:
def stop_webserver(self):
    """
    Stop the Rails server, if there is one running or it was created.
    Kill the process and all of its children, in order to avoid leaving
    zombie processes.
    """
    if self.webserver is None:
        self.log.info("Server isn't running. Nothing to do.")
        return
    if self.is_server_running():
        # os.killpg(self.webserver.pid, signal.SIGTERM)
        self.log.info("PID: %s" % self.webserver.pid)
        # self.log.info("PID: %s" % os.getpgid(self.webserver.pid))
        time.sleep(10)
        os.killpg(self.webserver.pid, signal.SIGKILL)
        self.webserver = None
        self.log.info("Server was stopped.")
However I'm always getting the following error:
Traceback (most recent call last):
File "unit/front_end_test.py", line 27, in runTest
frontend.stop_webserver()
File "front_end.py", line 184, in stop_webserver
os.killpg(self.webserver.pid, signal.SIGKILL)
OSError: [Errno 1] Operation not permitted
If I try to terminate the process using os.getpgid I get a different error:
Traceback (most recent call last):
File "unit/front_end_test.py", line 27, in runTest
frontend.stop_webserver()
File "front_end.py", line 182, in stop_webserver
self.log.info("PID: %s" % os.getpgid(self.webserver.pid))
OSError: [Errno 3] No such process
However, the server is still running at port 3001. The process was never killed. How can I properly terminate the process?
I solved the problem by falling back to an os.system() kill:
def stop_webserver(self):
    """
    Stop the Rails server, if there is one running or it was created.
    Kill the process and all of its children, in order to avoid leaving
    zombie processes.
    """
    if self.webserver is None:
        self.log.info("Server isn't running. Nothing to do.")
        return
    if self.is_server_running():
        try:
            # os.killpg(self.webserver.pid, signal.SIGKILL)
            os.killpg(self.webserver.pid, signal.SIGTERM)
            self.webserver = None
            self.log.info("Server was stopped.")
        except OSError, e:
            self.log.exception(e)
            self.log.info("PID: %s" % self.webserver.pid)
            os.system("sudo kill %s" % (self.webserver.pid, ))
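For reference, the same idea in Python 3 without shelling out to sudo: start the server in its own session and signal the whole process group, escalating from SIGTERM to SIGKILL. A sketch assuming the process was started by the same user (os.killpg fails with EPERM across users, which may be what the original error indicates); start_group and stop_group are illustrative names:

```python
import os
import signal
import subprocess
import time

def start_group(cmd):
    """Launch cmd in its own session/process group so the whole tree
    can be signalled at once (the Python 3 equivalent of
    preexec_fn=os.setsid)."""
    return subprocess.Popen(cmd, shell=True, start_new_session=True)

def stop_group(proc, grace=5.0):
    """SIGTERM the whole group, wait up to `grace` seconds, then SIGKILL."""
    if proc.poll() is not None:
        return                      # already exited
    pgid = os.getpgid(proc.pid)
    os.killpg(pgid, signal.SIGTERM)
    deadline = time.time() + grace
    while time.time() < deadline:
        if proc.poll() is not None:
            return                  # terminated gracefully
        time.sleep(0.1)
    os.killpg(pgid, signal.SIGKILL)  # escalate for survivors
```

Polling with proc.poll() also reaps the child, which avoids the "No such process" surprise from os.getpgid on an unreaped PID.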
