I am trying to connect GNU Radio to a Python script using the GR ZMQ REP/REQ blocks. GR is running on a Raspberry Pi 4 at the LAN address 192.168.1.25. The Python script runs on a separate computer, from which I can successfully ping 192.168.1.25. I am setting up the REQ-REP pairs on separate ports, 55555 and 55556.
Flow graph: [image not included]
Python script:
import pmt
import zmq

# create a REQ socket
req_address = 'tcp://192.168.1.25:55555'
req_context = zmq.Context()
req_sock = req_context.socket(zmq.REQ)
rc = req_sock.connect(req_address)

# create a REP socket
rep_address = 'tcp://192.168.1.25:55556'
rep_context = zmq.Context()
rep_sock = rep_context.socket(zmq.REP)
rc = rep_sock.connect(rep_address)

while True:
    data = req_sock.recv()
    print(data)
    rep_sock.send(b'1')
Running this code leads to the following error:
ZMQError: Operation cannot be accomplished in current state
The error is flagged at this line:
data = req_sock.recv()
Can you comment on the cause of the error? I know there is a strict REQ-REP, REQ-REP, ... alternation, but I cannot find my error.
Your current code has two problems:
You call req_sock.recv(), but then you call rep_sock.send(): that's not how a REQ/REP pair works. You only need to create one socket (the REQ socket); it connects to a remote REP socket.
When you create a REQ socket, you need to send a REQuest before you receive a REPly.
Additionally, you should only create a single ZMQ context, even if you have multiple sockets.
A functional version of your code might look like this:
import zmq

# create a REQ socket
ctx = zmq.Context()
req_sock = ctx.socket(zmq.REQ)

# connect to a remote REP sink
rep_address = 'tcp://192.168.1.25:55555'
rc = req_sock.connect(rep_address)

while True:
    req_sock.send(b'1')
    data = req_sock.recv()
    print(data)
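Note that the ZMQ REP sink streams raw samples, so each reply here is a binary buffer of complex floats rather than text. A minimal decoding sketch, assuming NumPy is available and the complex-typed sink from the config below:

import numpy as np

# each reply is a raw buffer of gr_complex samples (little-endian complex64)
samples = np.frombuffer(data, dtype=np.complex64)
print(samples[:4])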
I tested the above code against the following GNU Radio config:
options:
  parameters:
    author: ''
    catch_exceptions: 'True'
    category: '[GRC Hier Blocks]'
    cmake_opt: ''
    comment: ''
    copyright: ''
    description: ''
    gen_cmake: 'On'
    gen_linking: dynamic
    generate_options: qt_gui
    hier_block_src_path: '.:'
    id: example
    max_nouts: '0'
    output_language: python
    placement: (0,0)
    qt_qss_theme: ''
    realtime_scheduling: ''
    run: 'True'
    run_command: '{python} -u {filename}'
    run_options: prompt
    sizing_mode: fixed
    thread_safe_setters: ''
    title: Example
    window_size: (1000,1000)
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [8, 8]
    rotation: 0
    state: enabled

blocks:
- name: samp_rate
  id: variable
  parameters:
    comment: ''
    value: '32000'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 12]
    rotation: 0
    state: enabled
- name: analog_sig_source_x_0
  id: analog_sig_source_x
  parameters:
    affinity: ''
    alias: ''
    amp: '1'
    comment: ''
    freq: '1000'
    maxoutbuf: '0'
    minoutbuf: '0'
    offset: '0'
    phase: '0'
    samp_rate: samp_rate
    type: complex
    waveform: analog.GR_COS_WAVE
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [184, 292.0]
    rotation: 0
    state: true
- name: blocks_throttle_0
  id: blocks_throttle
  parameters:
    affinity: ''
    alias: ''
    comment: ''
    ignoretag: 'True'
    maxoutbuf: '0'
    minoutbuf: '0'
    samples_per_second: samp_rate
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [344, 140.0]
    rotation: 0
    state: true
- name: zeromq_rep_sink_0
  id: zeromq_rep_sink
  parameters:
    address: tcp://0.0.0.0:55555
    affinity: ''
    alias: ''
    comment: ''
    hwm: '-1'
    pass_tags: 'False'
    timeout: '100'
    type: complex
    vlen: '1'
  states:
    bus_sink: false
    bus_source: false
    bus_structure: null
    coordinate: [504, 216.0]
    rotation: 0
    state: true

connections:
- [analog_sig_source_x_0, '0', blocks_throttle_0, '0']
- [blocks_throttle_0, '0', zeromq_rep_sink_0, '0']

metadata:
  file_format: 1
I'm receiving multiple Erlang errors in my CouchDB 2.1.1 cluster (3 nodes, Windows); see the errors and node configuration below.
There are 3 nodes (10.0.7.4 - 10.0.7.6); an Azure application gateway is used as the load balancer.
Why do these errors appear? The system resources of the nodes are far from overloaded.
I would be thankful for any help - thanks in advance.
Errors:
rexi_server: from: couchdb@10.0.7.4(<0.14976.568>) mfa: fabric_rpc:changes/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream_last,2,[{file,"src/rexi.erl"},{line,224}]},{fabric_rpc,changes,4,[{file,"src/fabric_rpc.erl"},{line,86}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
rexi_server: from: couchdb@10.0.7.6(<13540.24597.655>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,308}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,642}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
rexi_server: from: couchdb@10.0.7.6(<13540.5991.623>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,308}]},{couch_mrview,map_fold,3,[{file,"src/couch_mrview.erl"},{line,511}]},{couch_btree,stream_kv_node2,8,[{file,"src/couch_btree.erl"},{line,848}]},{couch_btree,fold,4,[{file,"src/couch_btree.erl"},{line,222}]},{couch_db,enum_docs,5,[{file,"src/couch_db.erl"},{line,1450}]},{couch_mrview,all_docs_fold,4,[{file,"src/couch_mrview.erl"},{line,425}]}]
req_err(3206982071) unknown_error : normal [<<"mochiweb_request:recv/3 L180">>,<<"mochiweb_request:stream_unchunked_body/4 L540">>,<<"mochiweb_request:recv_body/2 L214">>,<<"chttpd:body/1 L636">>,<<"chttpd:json_body/1 L649">>,<<"chttpd:json_body_obj/1 L657">>,<<"chttpd_db:db_req/2 L386">>,<<"chttpd:process_request/1 L295">>]
System running to use fully qualified hostnames ** ** Hostname localhost is illegal
COMPACTION-ERRORS
Supervisor couch_secondary_services had child compaction_daemon started with couch_compaction_daemon:start_link() at <0.18509.478> exit with reason {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} in context child_terminated
CRASH REPORT Process couch_compaction_daemon (<0.18509.478>) with 0 neighbors exited with reason: {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} at gen_server:terminate/7(line:826) <= proc_lib:init_p_do_apply/3(line:240); initial_call: {couch_compaction_daemon,init,['Argument__1']}, ancestors: [couch_secondary_services,couch_sup,<0.200.0>], messages: [], links: [<0.12665.492>], dictionary: [], trap_exit: true, status: running, heap_size: 987, stack_size: 27, reductions: 3173
gen_server couch_compaction_daemon terminated with reason: {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} last msg: {'EXIT',<0.23195.476>,{timeout,{gen_server,call,[couch_server,get_server]}}} state: {state,<0.23195.476>,[]}
Error in process <0.16890.22> on node 'couchdb@10.0.7.4' with exit value: {{rexi_DOWN,{'couchdb@10.0.7.5',noproc}},[{mem3_rpc,rexi_call,2,[{file,"src/mem3_rpc.erl"},{line,269}]},{mem3_rep,calculate_start_seq,1,[{file,"src/mem3_rep.erl"},{line,194}]},{mem3_rep,repl,2,[{file,"src/mem3_rep.erl"},{line,175}]},{mem3_rep,go,1,[{file,"src/mem3_rep.erl"},{line,81}]},{mem3_sync,'-start_push_replication/1-fun-0-',2,[{file,"src/mem3_sync.erl"},{line,208}]}]}
#vm.args
-name couchdb@10.0.7.4
-setcookie monster
-kernel error_logger silent
-sasl sasl_error_logger false
+K true
+A 16
+Bd -noinput
+Q 134217727
local.ini
[fabric]
request_timeout = infinity
[couchdb]
max_dbs_open = 10000
os_process_timeout = 20000
uuid =
[chttpd]
port = 5984
bind_address = 0.0.0.0
[httpd]
socket_options = [{recbuf, 262144}, {sndbuf, 262144}, {nodelay, true}]
enable_cors = true
[couch_httpd_auth]
secret =
[daemons]
compaction_daemon={couch_compaction_daemon, start_link, []}
[compactions]
_default = [{db_fragmentation, "50%"}, {view_fragmentation, "50%"}, {from, "23:00"}, {to, "04:00"}]
[compaction_daemon]
check_interval = 300
min_file_size = 100000
[vendor]
name = COUCHCLUSTERNODE0X
[admins]
adminuser =
[cors]
methods = GET, PUT, POST, HEAD, DELETE
headers = accept, authorization, content-type, origin, referer
origins = *
credentials = true
[query_server_config]
os_process_limit = 2000
os_process_soft_limit = 1000
We ran the NPLB tests on both the x86-x11 and arm-linux platforms with Cobalt release 11.104700, and the SbSocketGetInterfaceAddressTest tests fail on both, so it seems to be an issue with NPLB itself. Can someone have a look?
[ FAILED ] SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDayDestination/1, where GetParam() = 1
[ FAILED ] SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceForDestination/1, where GetParam() = 1
[ FAILED ] SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceNotLoopback/1, where GetParam() = 1
1>SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDayDestination/1
../../starboard/nplb/socket_get_interface_address_test.cc:85: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, NULL)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:86: Failure
Value of: source.type == GetAddressType()
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:87: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, &netmask)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:93: Failure
Value of: GetAddressType()
Actual: 1
Expected: source.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:94: Failure
Value of: GetAddressType()
Actual: 1
Expected: netmask.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:95: Failure
Value of: 0
Expected: source.port
Which is: -16843010
2>SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceForDestination/1
[13672:19284243583:ERROR:socket_connect.cc(52)] SbSocketConnect: connect failed: 101
../../starboard/nplb/socket_get_interface_address_test.cc:128: Failure
Value of: source.type == GetAddressType()
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:132: Failure
Value of: GetAddressType()
Actual: 1
Expected: netmask.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:134: Failure
Expected: (0) != (SbMemoryCompare(source.address, invalid_address.address, (sizeof(source.address) / sizeof(source.address[0])))), actual: 0 vs 0
../../starboard/nplb/socket_get_interface_address_test.cc:136: Failure
Expected: (0) != (SbMemoryCompare(netmask.address, invalid_address.address, (sizeof(netmask.address) / sizeof(netmask.address[0])))), actual: 0 vs 0
3>SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceNotLoopback/1
../../starboard/nplb/socket_get_interface_address_test.cc:165: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, NULL)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:166: Failure
Value of: GetAddressType()
Actual: 1
Expected: source.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:167: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, &netmask)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:172: Failure
Expected: (0) != (SbMemoryCompare(netmask.address, invalid_address.address, (sizeof(netmask.address) / sizeof(netmask.address[0])))), actual: 0 vs 0
../../starboard/nplb/socket_get_interface_address_test.cc:174: Failure
Expected: (0) != (SbMemoryCompare(source.address, invalid_address.address, (sizeof(source.address) / sizeof(source.address[0])))), actual: 0 vs 0
It looks like a number of SbSocket implementations in your Starboard port are broken, and NPLB rightfully points that out.
For example, in order to pass SbSocketGetInterfaceAddressTest.SunnyDayDestination you need to follow the comment from the SbSocketGetInterfaceAddress declaration:
// If the destination address is 0.0.0.0, and its |type| is
// |kSbSocketAddressTypeIpv4|, then any IPv4 local interface that is up and not
// a loopback interface is a valid return value.
and
// Returns whether it was possible to determine the source address and the
// netmask (if non-NULL value is passed) to be used to connect to the
// destination. This function could fail if the destination is not reachable,
// if it an invalid address, etc.
In other words, the test expects out_source_address to be an IP address of the machine and the return value to be true.
Since you are seeing the same error on the linux_x86-x11 build, I suggest you verify that the POSIX function connect (used by the SbSocketConnect implementation on Linux) works well with the 0.0.0.0 IP address on your platform.
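For reference, a common way to check that behaviour independently of Starboard is the UDP-connect trick: connect() on a datagram socket transmits nothing, but it makes the kernel choose a route and a source address, which you can then read back with getsockname(). A minimal Python sketch of that technique (8.8.8.8 is just an illustrative reachable destination):

import socket

def get_interface_address(destination='8.8.8.8', port=9):
    # connect() on a UDP socket sends no packets; it only asks the kernel
    # to pick a route and a source address for the given destination
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((destination, port))
        return s.getsockname()[0]  # the source address the kernel chose
    finally:
        s.close()

print(get_interface_address())  # should print a non-loopback local IP

If this prints a loopback or nonsense address on your platform, the problem is below Starboard rather than in NPLB.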
I'm trying to make an Airflow task intentionally fail and error out by passing in a Bash line (thisshouldnotrun) that doesn't work. Airflow is outputting the following:
[2017-06-15 17:44:17,869] {bash_operator.py:94} INFO - /tmp/airflowtmpLFTMX7/run_bashm2MEsS: line 7: thisshouldnotrun: command not found
[2017-06-15 17:44:17,869] {bash_operator.py:97} INFO - Command exited with return code 127
[2017-06-15 17:44:17,869] {models.py:1417} ERROR - Bash command failed
Traceback (most recent call last):
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/operators/bash_operator.py", line 100, in execute
raise AirflowException("Bash command failed")
AirflowException: Bash command failed
[2017-06-15 17:44:17,871] {models.py:1433} INFO - Marking task as UP_FOR_RETRY
[2017-06-15 17:44:17,878] {models.py:1462} ERROR - Bash command failed
Traceback (most recent call last):
File "/home/ubuntu/.local/bin/airflow", line 28, in <module>
args.func(args)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/bin/cli.py", line 585, in test
ti.run(ignore_task_deps=True, ignore_ti_state=True, test_mode=True)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/utils/db.py", line 53, in wrapper
result = func(*args, **kwargs)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/models.py", line 1374, in run
result = task_copy.execute(context=context)
File "/home/ubuntu/.local/lib/python2.7/site-packages/airflow/operators/bash_operator.py", line 100, in execute
raise AirflowException("Bash command failed")
airflow.exceptions.AirflowException: Bash command failed
Will Airflow send an email for these kinds of errors? If not, what would be the best way to send an email for these errors?
I'm not even sure if airflow.cfg is set up properly... Since the ultimate goal is to test the email alerting notification, I want to make sure airflow.cfg is set up properly. Here's the setup:
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow_data_user
# smtp_password = password
smtp_port = 587
smtp_mail_from = airflow_data_user@domain.com
What is smtp_starttls? I can't find any info for it in the documentation or online. If we have 2-factor authentication needed to view emails, will that be an issue here for Airflow?
Here's my Bash command:
task1_bash_command = """
export PATH=/home/ubuntu/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
export rundate=`TZ='America/Los_Angeles' date +%F -d "yesterday"`
export AWS_CONFIG_FILE="/home/ubuntu/.aws/config"
/home/ubuntu/bin/snowsql -f //home/ubuntu/sql/script.sql 1> /home/ubuntu/logs/"$rundate"_dev.log 2> /home/ubuntu/logs/"$rundate"_error_dev.log
if [ -e /home/ubuntu/logs/"$rundate"_error_dev.log ]
then
    exit 64
fi
"""
And my task:
task1 = BashOperator(
task_id = 'run_bash',
bash_command = task1_bash_command,
dag = dag,
retries = 2,
email_on_failure = True,
email = 'username@domain.com')
smtp_starttls basically means "use TLS".
Set this to False and set smtp_ssl to True if you want to use SSL instead. You probably need smtp_user and smtp_password for either.
Airflow will not handle two-step authentication. However, if you are using AWS, you likely don't need it, as your SMTP (SES) credentials are different from your AWS credentials.
See here.
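For example, to use SSL instead, the [smtp] section might look like this (the host and credentials are placeholders, not values from your setup):

[smtp]
smtp_host = emailsmtpserver.region.amazonaws.com
smtp_starttls = False
smtp_ssl = True
smtp_port = 465
smtp_user = your_smtp_user
smtp_password = your_smtp_password
smtp_mail_from = airflow_data_user@domain.com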
EDIT:
For Airflow to send an email on failure, there are a couple of things that need to be set on your task: email_on_failure and email.
See here for example:
from airflow.operators.python_operator import PythonOperator

def throw_error(**context):
    raise ValueError('Intentionally throwing an error to send an email.')

t1 = PythonOperator(task_id='throw_error_and_email',
                    python_callable=throw_error,
                    provide_context=True,
                    email_on_failure=True,
                    email='your.email@whatever.com',
                    dag=dag)
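If you want the same alerting behaviour on every task, you can also set these once in the DAG's default_args instead of repeating them per task; a minimal sketch (the dates, addresses, and DAG id are illustrative):

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2017, 6, 1),
    'email': ['your.email@whatever.com'],  # illustrative address
    'email_on_failure': True,
    'email_on_retry': False,
    'retries': 2,
    'retry_delay': timedelta(minutes=5),
}

dag = DAG('example_dag', default_args=default_args, schedule_interval='@daily')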
Use the link below for creating an Airflow DAG:
How to trigger daily DAG run at midnight local time instead of midnight UTC time
Approach 1:
You can set up SMTP locally and make it send email on job failures.
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
smtp_host = localhost
smtp_starttls = False
smtp_ssl = False
smtp_port = 25
smtp_mail_from = noreply@company.com
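You can verify that the local relay accepts mail independently of Airflow with a few lines of smtplib (the addresses are placeholders):

import smtplib

sender = 'noreply@company.com'
recipient = 'you@company.com'
message = 'Subject: Airflow SMTP test\n\nIf you can read this, the relay works.'

# talk to the same host/port configured in the [smtp] section above
server = smtplib.SMTP('localhost', 25)
server.sendmail(sender, recipient, message)
server.quit()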
Approach 2: You can use Gmail to send email.
I have written an article on how to do this:
https://helptechcommunity.wordpress.com/2020/04/04/airflow-email-configuration/
If we have 2-factor authentication needed to view emails, will that be an issue here for Airflow?
You can use a Google app password to get around two-factor authentication:
https://support.google.com/mail/answer/185833?hl=en-GB
Source - https://docs.aws.amazon.com/mwaa/latest/userguide/configuring-env-variables.html
I am trying to test Oracle database access over a proxy with the following (working) script:
#!/usr/bin/perl
use DBI;
use strict;
use warnings qw(all);
print "Start ->\n";
my $driver = "Proxy";
my $host = "*******";
my $port = "2001";
my $database = "********";
my $driver2 = "Oracle";
my $userid = "******";
my $password = "*****";
my $dsn = "dbi:$driver:hostname=$host;port=$port;dsn=dbi:$driver2:$database";
print "Connect to database ->\n";
my $dbh = DBI->connect($dsn, $userid, $password) or die $DBI::errstr;
print "Read data ->\n";
my $sth = $dbh->prepare("SELECT SYSDATE FROM DUAL");
$sth->execute() or die $DBI::errstr;
print "Number of rows found :". $sth->rows;
while (my @row = $sth->fetchrow_array()) {
    my ($sysdate) = @row;
    print " System Time = $sysdate\n";
}
$sth->finish();
print "Disconnect ->\n";
$dbh = $dbh->disconnect or warn $dbh->errstr;
When I run this, the only output is
Start ->
Connect to database ->
[blank line]
and then nothing else happens (I waited for more than a minute), and I have to kill the process with CTRL+C.
I did verify that the script in general is working by connecting to another host/proxy. So I assume there must be something special going on with the proxy I am currently testing.
My question is: what could be the reason for the script to just halt without providing any error, and is there any way to get more information about the connection attempt (should the access attempt already be logged on the target system at this point)?
EDIT
As suggested in the comments, I ran the script via "DBI_TRACE=15 perl scriptname.pl" and got the following output:
Connect to database ->
-> DBI->connect(dbi:Proxy:hostname=*****;port=2001;dsn=dbi:Oracle:******, *****, ****)
-> DBI->install_driver(Proxy) for linux perl=5.008005 pid=31261 ruid=27003 euid=27003
install_driver: DBD::Proxy version 0.2004 loaded from /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm
New DBI::dr (for DBD::Proxy::dr, parent=, id=)
dbih_setup_handle(DBI::dr=HASH(0x9fdcf88)=>DBI::dr=HASH(0x9f96edc), DBD::Proxy::dr, 0, Null!)
dbih_make_com(Null!, 0, DBD::Proxy::dr, 112, 0) thr#0
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), Err, Null!) SCALAR(0x9fdf9d0) (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), State, Null!) SCALAR(0x9fdf9b8) (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), Errstr, Null!) SCALAR(0x9fdf9e8) (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), TraceLevel, Null!) 0 (already defined)
dbih_setup_attrib(DBI::dr=HASH(0x9f96edc), FetchHashKeyName, Null!) 'NAME' (already defined)
>> STORE DISPATCH (DBI::dr=HASH(0x9fdcf88) rc1/1 #3 g0 ima41c pid#31261) at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 76
-> STORE in DBD::_::common for DBD::Proxy::dr (DBI::dr=HASH(0x9fdcf88)~0x9f96edc 'CompatMode' 1)
STORE DBI::dr=HASH(0x9f96edc) 'CompatMode' => 1
warn: 0 '' (err#0)
<- STORE= 1 at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 76
<- install_driver= DBI::dr=HASH(0x9fdcf88)
>> connect DISPATCH (DBI::dr=HASH(0x9fdcf88) rc2/3 #5 g0 ima1 pid#31261) at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBI.pm line 625
!! warn: 0 CLEARED by call to connect method
-> connect for DBD::Proxy::dr (DBI::dr=HASH(0x9fdcf88)~0x9f96edc 'hostname=****;port=2001;dsn=dbi:Oracle:*****' '*****' **** HASH(0x9f0f834))
Will try to find out what that actually means. :)
EDIT2
According to Why is there a "0 CLEARED by call to connect method" warning in DBI_TRACE?, the "warn" lines aren't the culprit. So I am as wise as before the trace?
EDIT3
Quite the adventure. I let the script idle in the background, and after about 4 minutes it actually stopped and delivered more tracing goodness:
>> set_err DISPATCH (DBI::dr=HASH(0x8d1fedc) rc1/3 #4 g0 ima11 pid#32187) at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 89
1 -> set_err in DBD::_::common for DBD::Proxy::dr (DBI::dr=HASH(0x8d1fedc)~INNER 1 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
' ' ')
!! ERROR: 1 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
' (err#1)
1 <- set_err= undef at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBD/Proxy.pm line 89
!! ERROR: 1 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
' (err#1)
<- connect= undef at /opt/perl585/lib/site_perl/5.8.5/i686-linux/DBI.pm line 625
-> $DBI::errstr (&) FETCH from lasth=HASH
>> DBD::Proxy::dr::errstr
<- $DBI::errstr= 'Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
'
DBI connect('hostname=*****;port=2001;dsn=dbi:Oracle:****','*****',...) failed: Cannot log in to DBI::ProxyServer: Cannot connect: Connection timed out at /opt/perl585/lib/site_perl/5.8.5/RPC/PlClient.pm line 70. at /opt/perl585/lib/site_perl/5.8.5/Net/Daemon/Log.pm line 136.
[... more of the same and the final DESTROY ...]
So the target proxy/host seems to be just offline or behind a firewall (would I get the same response in the latter case)?
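For what it's worth: a connect timeout is typically what you see when a firewall silently drops packets, while a host that is simply down on a reachable subnet usually fails fast with "connection refused" or "no route to host". You can probe the proxy port's raw TCP reachability without involving DBI at all; a minimal sketch (host and port are placeholders):

import socket

host, port = 'proxyhost.example.com', 2001  # placeholder proxy address

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(5)
try:
    s.connect((host, port))
    print('TCP connect OK - the proxy port is reachable')
except socket.timeout:
    print('Timed out - packets are probably being dropped (firewall?)')
except socket.error as e:
    print('Failed fast: %s' % e)
finally:
    s.close()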