Cannot connect to WebSocket server using WebSocket4Net

I have mochiweb as a WebSocket server; connecting with JavaScript from the Chrome browser as the ws client went smoothly (open, send message, close). However, when I try to connect from C# using WebSocket4Net, I always get the error below from mochiweb.
=CRASH REPORT==== 30-Jan-2013::16:57:41 ===
crasher:
initial call: mochiweb_acceptor:init/3
pid: <0.228.0>
registered_name: []
exception error: no case clause matching {error,timeout}
in function mochiweb_http:websocket_init_with_origin_validated/4 (mochiweb_http.erl, line 292)
in call from mochiweb_http:headers_ws_upgrade/4 (mochiweb_http.erl, line 192)
ancestors: [cim_https,<0.166.0>]
messages: []
links: [<0.167.0>]
dictionary: []
trap_exit: false
status: running
heap_size: 1597
stack_size: 24
reductions: 1585
My C# snippet:
webSocketClient = new WebSocket("wss://localhost:8080/login");
webSocketClient.Error += new EventHandler<SuperSocket.ClientEngine.ErrorEventArgs>(webSocketClient_Error);
webSocketClient.AllowUnstrustedCertificate = true;
webSocketClient.Opened += new EventHandler(webSocketClient_Opened);
webSocketClient.Closed += new EventHandler(webSocketClient_Closed);
webSocketClient.MessageReceived += new EventHandler<MessageReceivedEventArgs>(webSocketClient_MessageReceived);
webSocketClient.Open();
Is there any parameter that I've missed? Any ideas on how to trace this?

Found the issue. Apparently, mochiweb only supports what WebSocket4Net calls Hybi00; there is no support for RFC 6455 yet.
Seems like now I have to patch my mochiweb.
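If you want to verify for yourself which handshake a server accepts, one option is to send a raw RFC 6455 upgrade request and inspect the reply. Below is a minimal Python sketch, assuming a plain-TCP endpoint (for the wss:// endpoint in the question you would first wrap the socket with ssl); host, port, and path are taken from the question:

import base64
import os
import socket

def probe_rfc6455(host="localhost", port=8080, path="/login"):
    # Build a minimal RFC 6455 (version 13) upgrade request.
    key = base64.b64encode(os.urandom(16)).decode()
    request = (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}:{port}\r\n"
        "Upgrade: websocket\r\n"
        "Connection: Upgrade\r\n"
        f"Sec-WebSocket-Key: {key}\r\n"
        "Sec-WebSocket-Version: 13\r\n"
        "\r\n"
    )
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request.encode())
        # An RFC 6455 server answers "HTTP/1.1 101 Switching Protocols";
        # a Hybi00-only server will reject or time out on this handshake.
        print(sock.recv(1024).decode(errors="replace"))

probe_rfc6455()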


Getting error for basic Eland example (loading index from a locally installed ELK docker container)

We installed ELK in Docker based on this example, like so:
docker run -d --name elasticsearchdb --net es-stack-network -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.8.13
docker run -d --name kibana-es-ui --net es-stack-network -e "ELASTICSEARCH_URL=http://elasticsearchdb:9200" -p 5601:5601 kibana:6.8.13
We then set up Elastic with the basic built-in data sets, including the flights dataset offered by default.
Then we tried using Eland to pull that data into a dataframe, and I think we're following the documentation correctly.
But with the code:
import eland as ed
index_name = 'flights'
ed_df = ed.DataFrame('localhost:9200', index_name)
we get this error:
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\elastic_transport\client_utils.py:198, in url_to_node_config(url)
192 raise ValueError(f"Could not parse URL {url!r}") from None
194 if any(
195 component in (None, "")
196 for component in (parsed_url.scheme, parsed_url.host, parsed_url.port)
197 ):
--> 198 raise ValueError(
199 "URL must include a 'scheme', 'host', and 'port' component (ie 'https://localhost:9200')"
200 )
202 headers = {}
203 if parsed_url.auth:
ValueError: URL must include a 'scheme', 'host', and 'port' component (ie 'https://localhost:9200')
So when we add http://, like so:
import eland as ed
index_name = 'flights'
ed_df = ed.DataFrame('http://localhost:9200', index_name)
We get this error:
File ~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.9_qbz5n2kfra8p0\LocalCache\local-packages\Python39\site-packages\elastic_transport\_node\_http_urllib3.py:199, in Urllib3HttpNode.perform_request(self, method, target, body, headers, request_timeout)
191 err = ConnectionError(str(e), errors=(e,))
192 self._log_request(
193 method=method,
194 target=target,
(...)
197 exception=err,
198 )
--> 199 raise err from None
201 meta = ApiResponseMeta(
202 node=self.config,
203 duration=duration,
(...)
206 headers=response_headers,
207 )
208 self._log_request(
209 method=method,
210 target=target,
(...)
214 response=data,
215 )
ConnectionError: Connection error caused by: ConnectionError(Connection error caused by: ProtocolError(('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))))
So I thought, well, maybe it's serving on HTTPS by default for some reason. Maybe unrelated, but in the logs I saw:
05T17:17:04.734Z", "log.level": "WARN", "message":"received plaintext http traffic on an https channel, closing connection Netty4HttpChannel{localAddress=/172.18.0.3:9200, remoteAddress=/172.18.0.1:59788}", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[21683dc12cff][transport_worker][T#14]","log.logger":"org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4HttpServerTransport","elasticsearch.cluster.uuid":"XuzqXMk_QgShA3L5HnfXgw","elasticsearch.node.id":"H1CsKboeTyaFFjk2-1nw2w","elasticsearch.node.name":"21683dc12cff","elasticsearch.cluster.name":"docker-cluster"}
So I tried replacing http with https and got this error:
TlsError: TLS error caused by: TlsError(TLS error caused by: SSLError([SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)))
So I looked up this error and found this thread, which says to do something like:
import ssl
from elasticsearch import Elasticsearch
from elasticsearch.connection import create_ssl_context

# use `cafile`, `capath`, or `cadata` to point at your CA or CAs
ssl_context = create_ssl_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE

es = Elasticsearch('localhost', ssl_context=ssl_context, timeout=60)
But this isn't helpful, because Eland handles the Elasticsearch instantiation internally; I'm not controlling that.
This is a very basic scenario, so I'm sure the solution must be much simpler than all this. What can I do to make this work?
For whoever is still struggling, the following worked for me with a local Elastic cluster running on Docker/docker-compose:
Following this guide, you'd have the http_ca.crt file locally after running:
docker cp es01:/usr/share/elasticsearch/config/certs/http_ca.crt .
You can use the http_ca.crt file when creating your es_client:
from elasticsearch import Elasticsearch

es_client = Elasticsearch(
    "https://localhost:9200",
    ca_certs="/path/to/http_ca.crt",
    basic_auth=("[elastic username]", "[elastic password]")
)
Then pass the es_client to Eland:
import eland as ed
df = ed.DataFrame(es_client=es_client, es_index_pattern="[Your index]")
df.head()
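If you just need to get unblocked on a disposable local cluster, a client that skips certificate verification should also work. A sketch, assuming the elasticsearch-py 8.x client (verify_certs=False disables TLS validation, so use it only for local testing):

import eland as ed
from elasticsearch import Elasticsearch

# Insecure: skips CA verification; acceptable only for a throwaway local cluster.
es_client = Elasticsearch(
    "https://localhost:9200",
    basic_auth=("[elastic username]", "[elastic password]"),
    verify_certs=False
)
df = ed.DataFrame(es_client=es_client, es_index_pattern="[Your index]")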

Paho on_connect works if the connection is OK but is not called on error. Why?

Why does my paho-mqtt (1.5.1) on_connect work when the connection is OK but not get called when there is an error? For testing I'm using Linux Lite 4.2, based on Ubuntu 18.04 LTS with Xfce, running in a VM (VirtualBox).
class subscribemqtt:
    .....
    def on_connect(self, client, userdata, flags, rc):
        print("ZZZZZZZZZZZZZ in on_connect")
        connectErrs = {.........}
        self.connectRc = rc
        self.connectReason = connectErrs[str(rc)]
        print("$$$$$$$$$$$$$", self.connectRc, self.connectReason)
        return

    def subscribe(self, arguments):
        ...........
        self.client = paho.Client(self.config['CLIENT_ID'])
        self.client.on_message = self.on_subscribe
        self.client.on_connect = self.on_connect
        print("#############", self.on_connect)
        print("XXXXXXXXXXXX calling self.client.connect(self.config['HOST'],self.config['PORT']")
        self.client.connect(self.config['HOST'], self.config['PORT'])
        print("YYYYYYYYYYYYY calling self.client.loop_start()")
        self.client.loop_start()
        print("AAAAAAAAAAAAA", self.connected)
        while not self.connected:
            time.sleep(0.1)
        print("BBBBBBBBBBBBB", self.connected, self.connectRc)
When all the parameters are correct, on_connect gets called:
############# <bound method subscribemqtt.on_connect of <__main__.subscribemqtt object at 0x7f32065a6ac8>>
XXXXXXXXXXXX calling self.client.connect(self.config['HOST'],self.config['PORT']
YYYYYYYYYYYYY calling self.client.loop_start()
AAAAAAAAAAAAA**ZZZZZZZZZZZZZ in on_connect**
False$$$$$$$$$$$$$
0 Connection successful
BBBBBBBBBBBBB True 0
When I set the host address to an invalid address (to create an error to test my error handling) I get:
subscribemqtt.subscribe:topic= Immersion Dummy
############# <bound method subscribemqtt.on_connect of <__main__.subscribemqtt object at 0x7ffb942ae0b8>>
XXXXXXXXXXXX calling self.client.connect(self.config['HOST'],self.config['PORT']
Traceback (most recent call last):
File "/home/linuxlite/Desktop/EMS/sendroutines/subscribemqtt.py", line 275, in <module>
(retcode, reason, topic) = subscribeObj.subscribe([None, topic])
File "/home/linuxlite/Desktop/EMS/sendroutines/subscribemqtt.py", line 191, in subscribe
self.client.connect(self.config['HOST'],self.config['PORT'])
File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 941, in connect
return self.reconnect()
File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 1075, in reconnect
sock = self._create_socket_connection()
File "/usr/local/lib/python3.6/dist-packages/paho/mqtt/client.py", line 3546, in _create_socket_connection
return socket.create_connection(addr, source_address=source, timeout=self._keepalive)
File "/usr/lib/python3.6/socket.py", line 704, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
>>>
Thanks for reading.
Alan
PS. I just tried:
try:
    self.client.connect(self.config['HOST'], self.config['PORT'])
except:
    print("**************** Error exception calling self.client.connect")
That works, but my understanding is that on_connect should be called for errors.
From the docs:
on_connect()
on_connect(client, userdata, flags, rc)
Called when the broker responds to our connection request.
The important part there is "when the broker responds". But in the example you have shown, the hostname provided cannot be resolved, so the broker never responds because it is never actually contacted.
on_connect() will be called if the connection succeeds, or if it fails because the username/password is wrong or an unavailable protocol version was requested (e.g. requesting MQTTv5 from a broker that only supports v3).
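In other words, failures split into two groups: network-level failures raise an exception from connect() itself, while broker-level refusals arrive through on_connect with a non-zero rc. A minimal sketch of handling both, assuming paho-mqtt 1.x (the broker address is a placeholder):

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Called only once the broker responds; rc != 0 means the broker
    # refused us (bad credentials, unsupported protocol version, ...).
    print("broker replied, rc =", rc, "-", mqtt.connack_string(rc))

client = mqtt.Client()
client.on_connect = on_connect
try:
    # DNS failures and refused connections raise here and never reach
    # on_connect, so they need their own handling.
    client.connect("broker.example.com", 1883)
except OSError as e:
    print("could not reach broker:", e)
else:
    client.loop_start()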

How to set up Karate browser capabilities acceptInsecureCerts:true for geckodriver [duplicate]

This question already has an answer here: How to fix - `ERROR com.intuit.karate - http request failed`
I tried the following to set up the capabilities of my geckodriver for my Karate tests.
I am using karate.version 0.9.6.
Here is the geckodriver (64-bit Windows): https://github.com/mozilla/geckodriver/releases/tag/v0.29.1
Firefox version 89.0.2 (64-bit)
* def session = { capabilities: { acceptInsecureCerts: true, browserName: 'firefox', proxy: { proxyType: 'manual', httpProxy: '127.0.0.1:8080' } } }
* configure driver = { type: 'geckodriver', showDriverLog: true, executable: 'driver/geckodriver.exe', webDriverSession: '#(session)' }
However, it is obviously not picking up my settings.
Here is my log:
1 > User-Agent: Apache-HttpClient/4.5.12 (Java/1.8.0_41)
{"capabilities":{"acceptInsecureCerts":true,"browserName":"firefox","proxy":{"proxyType":"manual","httpProxy":"127.0.0.1:8080"}}}
13:25:13.121 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - 1626121513121 mozrunner::runner INFO Running command: "C:\\Program Files\\Mozilla Firefox\\firefox.exe" "--marionette" "-foreground" "-no-remote" "-profile" "C:\\Users\\xxxxx\\AppData\\Local\\Temp\\rust_mozprofiledFOSxn"
13:25:16.428 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - 1626121516428 Marionette INFO Marionette enabled
13:25:20.065 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - console.warn: SearchSettings: "get: No settings file exists, new profile?" (new NotFoundError("Could not open the file at C:\\Users\\xxxxx\\AppData\\Local\\Temp\\rust_mozprofiledFOSxn\\search.json.mozlz4", (void 0)))
13:25:20.368 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - console.error: Region.jsm: "Error fetching region" (new TypeError("NetworkError when attempting to fetch resource.", ""))
13:25:20.369 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - console.error: Region.jsm: "Failed to fetch region" (new Error("NO_RESULT", "resource://gre/modules/Region.jsm", 419))
13:25:20.960 [geckodriver_1626121511819-out] DEBUG c.i.k.d.geckodriver_1626121511819 - 1626121520961 Marionette INFO Listening on port 58400
13:25:21.071 [ForkJoinPool-1-worker-1] DEBUG com.intuit.karate - response time in milliseconds: 7997.52
1 < 200
1 < cache-control: no-cache
1 < content-length: 712
1 < content-type: application/json; charset=utf-8
1 < date: Mon, 12 Jul 2021 20:25:13 GMT
{"value":{"sessionId":"b17123ef-1426-45d2-827b-adbc35b02e46","capabilities":{"acceptInsecureCerts":false,"browserName":"firefox","browserVersion":"89.0.2","moz:accessibilityChecks":false,"moz:buildID":"20210622155641","moz:geckodriverVersion":"0.29.1","moz:headless":false,"moz:processID":36360,"moz:profile":"C:\\Users\\wli2\\AppData\\Local\\Temp\\rust_mozprofiledFOSxn","moz:shutdownTimeout":60000,"moz:useNonSpecCompliantPointerOrigin":false,"moz:webdriverClick":true,"pageLoadStrategy":"normal","platformName":"windows","platformVersion":"10.0","setWindowRect":true,"strictFileInteractability":false,"timeouts":{"implicit":0,"pageLoad":300000,"script":30000},"unhandledPromptBehavior":"dismiss and notify"}}}
My goal is to get around this security check page.
Also, even if I try to click the buttons on that security check page, my script is not able to get them from the DOM tree when I do the following:
And click('button[id=advancedButton]')
And click('button[id=exceptionDialogButton]')
It might be related to this: KarateUI: How to Handle SSL Certificate during geckodriver configuration? I added alwaysMatch and it is now able to pick up the capabilities.
* def session = { capabilities: {alwaysMatch:{ acceptInsecureCerts:true, browserName: 'firefox' }}}
* configure driver = { type: 'geckodriver', showDriverLog: true , executable: 'driver/geckodriver.exe', webDriverSession: '#(session)' }
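For context on why the extra nesting helps: geckodriver implements the W3C WebDriver protocol, where new-session capabilities must sit under alwaysMatch (or firstMatch). Anything outside that is ignored, which is why the session above came back with acceptInsecureCerts set to false. A sketch of the raw request in Python, assuming a geckodriver already listening on its default port 4444 (normally Karate manages this for you):

import json
import urllib.request

# W3C new-session payload: capabilities live under alwaysMatch.
payload = {
    "capabilities": {
        "alwaysMatch": {
            "browserName": "firefox",
            "acceptInsecureCerts": True
        }
    }
}
req = urllib.request.Request(
    "http://localhost:4444/session",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"}
)
with urllib.request.urlopen(req) as resp:
    # The reply echoes the negotiated capabilities;
    # check the acceptInsecureCerts value there.
    print(json.load(resp))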
This is an area that may require you to do some research and contribute findings back to the community. Karate passes the capabilities you define as-is to the driver. One thing you should look at is whether any command-line options should be passed to geckodriver; for Chrome, for example, I remember there is a flag for ignoring these security errors. Note that you can use the addOptions key in the Karate driver options.

Cucumber rescue Exception (Ruby/HTTParty)

I get an exception while running a Cucumber test, and I've tried to find out what I can do about it, but no luck so far.
I make a POST in the last step:
When('accept terms of use') do
  until @o == 200
    @o = CadastroApp.sign_term1.code
    sleep 1
  end
end
class CadastroApp
  include HTTParty

  def self.sign_term1
    post("#{$uat_uri}agree/multipleterms",
         body: {
           'ContractsId': $contract1,
           'deviceType': 'Smartphone',
           'Platform': 'ios',
           'Model': 'Iphone XS max'
         }.to_json,
         headers: {
           'Authorization': "Bearer #{$auth_token}",
           'Content-Type': 'application/json'
         })
  end
end
I got the error:
52: def self.cucumber_run_with_backtrace_filtering(pseudo_method)
53: begin
54: yield
55: rescue Exception => e
56: instance_exec_invocation_line = "#{__FILE__}:#{__LINE__ - 2}:in `cucumber_run_with_backtrace_filtering'"
57: replace_instance_exec_invocation_line!((e.backtrace || []), instance_exec_invocation_line, pseudo_method)
58: raise e
59: end
60: end
I don't know if it is a problem, but I was using a lot of "until @variable == 200" loops to poll the API until I got response code 200.
It was a MySQL problem. When I called the API, MySQL didn't close the connection, so I got a lot of timeouts when I checked on Kubernetes.
Exception Caught by LogRequestResponseMiddleware:
1) ----- Exception Type
MySql.Data.MySqlClient.MySqlException
1) ----- Exception Source
MySql.Data
1) ----- Exception TargetSite
MySql.Data.MySqlClient.Driver GetConnection()
1) ----- Exception Message
error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
Message = error connecting: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
So I republished the API and the automation works fine.
This isn't an error; it is the output Ruby produces when a failure occurs in the step.

CouchDB 3-node cluster (Windows) - multiple Erlang errors

I'm receiving multiple Erlang errors in my CouchDB 2.1.1 cluster (3 nodes/Windows); see the errors and node configuration below.
3 nodes (10.0.7.4 - 10.0.7.6); an Azure application gateway is used as the load balancer.
Why do these errors appear? System resources on the nodes are far from overloaded.
I would be thankful for any help - thanks in advance.
Errors:
rexi_server: from: couchdb#10.0.7.4(<0.14976.568>) mfa: fabric_rpc:changes/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream_last,2,[{file,"src/rexi.erl"},{line,224}]},{fabric_rpc,changes,4,[{file,"src/fabric_rpc.erl"},{line,86}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
rexi_server: from: couchdb#10.0.7.6(<13540.24597.655>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,308}]},{couch_mrview,finish_fold,2,[{file,"src/couch_mrview.erl"},{line,642}]},{rexi_server,init_p,3,[{file,"src/rexi_server.erl"},{line,139}]}]
rexi_server: from: couchdb#10.0.7.6(<13540.5991.623>) mfa: fabric_rpc:all_docs/3 exit:timeout [{rexi,init_stream,1,[{file,"src/rexi.erl"},{line,256}]},{rexi,stream2,3,[{file,"src/rexi.erl"},{line,204}]},{fabric_rpc,view_cb,2,[{file,"src/fabric_rpc.erl"},{line,308}]},{couch_mrview,map_fold,3,[{file,"src/couch_mrview.erl"},{line,511}]},{couch_btree,stream_kv_node2,8,[{file,"src/couch_btree.erl"},{line,848}]},{couch_btree,fold,4,[{file,"src/couch_btree.erl"},{line,222}]},{couch_db,enum_docs,5,[{file,"src/couch_db.erl"},{line,1450}]},{couch_mrview,all_docs_fold,4,[{file,"src/couch_mrview.erl"},{line,425}]}]
req_err(3206982071) unknown_error : normal [<<"mochiweb_request:recv/3 L180">>,<<"mochiweb_request:stream_unchunked_body/4 L540">>,<<"mochiweb_request:recv_body/2 L214">>,<<"chttpd:body/1 L636">>,<<"chttpd:json_body/1 L649">>,<<"chttpd:json_body_obj/1 L657">>,<<"chttpd_db:db_req/2 L386">>,<<"chttpd:process_request/1 L295">>]
System running to use fully qualified hostnames ** ** Hostname localhost is illegal
COMPACTION-ERRORS
Supervisor couch_secondary_services had child compaction_daemon started with couch_compaction_daemon:start_link() at <0.18509.478> exit with reason {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} in context child_terminated
CRASH REPORT Process couch_compaction_daemon (<0.18509.478>) with 0 neighbors exited with reason: {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} at gen_server:terminate/7(line:826) <= proc_lib:init_p_do_apply/3(line:240); initial_call: {couch_compaction_daemon,init,['Argument__1']}, ancestors: [couch_secondary_services,couch_sup,<0.200.0>], messages: [], links: [<0.12665.492>], dictionary: [], trap_exit: true, status: running, heap_size: 987, stack_size: 27, reductions: 3173
gen_server couch_compaction_daemon terminated with reason: {compaction_loop_died,{timeout,{gen_server,call,[couch_server,get_server]}}} last msg: {'EXIT',<0.23195.476>,{timeout,{gen_server,call,[couch_server,get_server]}}} state: {state,<0.23195.476>,[]}
Error in process <0.16890.22> on node 'couchdb#10.0.7.4' with exit value: {{rexi_DOWN,{'couchdb#10.0.7.5',noproc}},[{mem3_rpc,rexi_call,2,[{file,"src/mem3_rpc.erl"},{line,269}]},{mem3_rep,calculate_start_seq,1,[{file,"src/mem3_rep.erl"},{line,194}]},{mem3_rep,repl,2,[{file,"src/mem3_rep.erl"},{line,175}]},{mem3_rep,go,1,[{file,"src/mem3_rep.erl"},{line,81}]},{mem3_sync,'-start_push_replication/1-fun-0-',2,[{file,"src/mem3_sync.erl"},{line,208}]}]}
#vm.args
-name couchdb#10.0.7.4
-setcookie monster
-kernel error_logger silent
-sasl sasl_error_logger false
+K true
+A 16
+Bd -noinput
+Q 134217727
local.ini
[fabric]
request_timeout = infinity
[couchdb]
max_dbs_open = 10000
os_process_timeout = 20000
uuid =
[chttpd]
port = 5984
bind_address = 0.0.0.0
[httpd]
socket_options = [{recbuf, 262144}, {sndbuf, 262144}, {nodelay, true}]
enable_cors = true
[couch_httpd_auth]
secret =
[daemons]
compaction_daemon={couch_compaction_daemon, start_link, []}
[compactions]
_default = [{db_fragmentation, "50%"}, {view_fragmentation, "50%"}, {from, "23:00"}, {to, "04:00"}]
[compaction_daemon]
check_interval = 300
min_file_size = 100000
[vendor]
name = COUCHCLUSTERNODE0X
[admins]
adminuser =
[cors]
methods = GET, PUT, POST, HEAD, DELETE
headers = accept, authorization, content-type, origin, referer
origins = *
credentials = true
[query_server_config]
os_process_limit = 2000
os_process_soft_limit = 1000
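Not a root-cause diagnosis, but when chasing rexi timeouts like these it can help to confirm that every node is up and sees the full cluster. A small Python probe against the standard /_up and /_membership endpoints, hitting each node directly rather than through the load balancer (addresses and credentials are placeholders):

import json
import urllib.request

NODES = ["10.0.7.4", "10.0.7.5", "10.0.7.6"]

def get(url, user="adminuser", password="[password]"):
    pm = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    pm.add_password(None, url, user, password)
    opener = urllib.request.build_opener(urllib.request.HTTPBasicAuthHandler(pm))
    with opener.open(url, timeout=10) as resp:
        return json.load(resp)

for ip in NODES:
    base = f"http://{ip}:5984"
    # /_up should report {"status": "ok"}; /_membership should list all
    # three nodes under both all_nodes and cluster_nodes on every node.
    print(ip, get(f"{base}/_up"), get(f"{base}/_membership"))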
