I want to automate monitoring of my WebSocket connection to check whether it returns a successful response to the connection request. I know various plugins are already available, but I am looking for a script, in Bash or Ruby, that can test my WebSocket connection and verify a successful message.
Any help will be very much appreciated.
Run
curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: echo.websocket.org" -H "Origin: http://www.websocket.org" http://echo.websocket.org
Here, http://www.websocket.org is the origin of the request, and echo.websocket.org is the WebSocket endpoint.
Those flags say:
-i: return the response headers in the output
-N: don't buffer the response
-H "Connection: Upgrade": this connection needs to upgrade from HTTP to something else
-H "Upgrade: websocket": this connection needs to upgrade to a WebSocket connection
-H "Host: echo.websocket.org": define the host (required by later WebSocket standards)
-H "Origin: http://www.websocket.org": define the origin of the request (required by later WebSocket standards)
If your WebSocket is working, running the above should return the handshake information. For further information, refer to this post.
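Since the question asks for a script, here is a minimal Bash sketch built around the same curl command. It assumes the echo.websocket.org endpoint from above; the Sec-WebSocket-Key and Sec-WebSocket-Version headers are added here because stricter servers reject a handshake without them, and the check simply looks for the 101 status line.

#!/usr/bin/env bash
# Probe a WebSocket endpoint and check for the "101 Switching Protocols" handshake.
# HOST and ORIGIN are placeholders; replace them with your own endpoint.
HOST="echo.websocket.org"
ORIGIN="http://www.websocket.org"

# --max-time bounds the wait, since -N keeps the connection open after the handshake
response=$(curl -s -i -N --max-time 5 \
  -H "Connection: Upgrade" \
  -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" \
  -H "Sec-WebSocket-Key: $(head -c 16 /dev/urandom | base64)" \
  -H "Host: $HOST" \
  -H "Origin: $ORIGIN" \
  "http://$HOST/")

# The first line of the response should contain the 101 status code
if printf '%s\n' "$response" | head -1 | grep -q "101"; then
  echo "WebSocket handshake OK"
else
  echo "WebSocket handshake FAILED"
  exit 1
fi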
My curl message updates a webhook on my website perfectly when I run the site locally, using localhost:8080 as the URI, with ws://localhost:8080/ws in the webpage JavaScript and this curl command:

curl -H "cache-control: no-cache" -H "content-type: application/json" -XPOST -d '{"object":"event","data":{"paid":true}}' localhost:8080/webhook

However, when I send the same curl message to the Heroku-hosted site, https://XXXX.herokuapp.com/webhook, with ws://XXXX.herokuapp.com/ws on the webpage, the page doesn't update with the info received in the curl, though from the Heroku logs I can see that the message was received. Does anyone know what the problem might be?
Turns out I just needed to change

var exampleSocket = new WebSocket("ws://myapp.herokuapp.com/ws")

to

var exampleSocket = new WebSocket("wss://myapp.herokuapp.com/ws")

in the .html page. Now it works fine. I have an SSL cert in Heroku for that app, and this seems to work with wss as well as https. (A page served over https cannot open an insecure ws:// connection, which is why the secure wss:// scheme is needed.)
I'm trying to use the WebUpd8 team's oracle-java8-installer to install Java 8 on my Ubuntu 14.04 computers. Some of them succeed but others fail. After some debugging, I realized the failures were caused by the HTTP proxy setting. I'll provide more details below, but basically my question is: why does the use of http_proxy cause the problem? I believe it must be related to how an HTTP proxy works, but since I have little experience with that, could someone tell me what knowledge I should learn to understand this issue?
Here are more details.
Under the hood, the oracle-java8-installer uses wget to download the jdk-8u181 package. So I can reproduce the issue with the steps below:
Install apt-cacher-ng: sudo apt-get install apt-cacher-ng
You don't have to configure anything in the APT configuration to reproduce this problem. apt-cacher-ng uses localhost:3142 by default to cache the packages.
Run http_proxy="http://localhost:3142" wget --continue --no-check-certificate -O jdk-8u181-linux-x64.tar.gz --header "Cookie: oraclelicense=a" http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
Here are some notes:
http://localhost:3142 is the address apt-cacher-ng listens on. The machines that failed had apt-cacher-ng installed before I tried to install jdk-8u181.
The Cookie: oraclelicense=a header indicates that the user has accepted the license.
If you run the last command, the download of jdk-8u181-linux-x64.tar.gz finishes instantly. There is a line saying "Proxy request sent, awaiting response... 200 OK". But if you open the received ".tar.gz", you'll see it's merely an HTML page containing error information (a quick way to verify this is shown below).
If you remove the http_proxy environment variable and run:
wget --continue --no-check-certificate -O jdk-8u181-linux-x64.tar.gz --header "Cookie: oraclelicense=a" http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
You will have the full package downloaded correctly.
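A quick way to confirm what the proxied wget actually saved (a sketch; file and head are standard tools, and the file name is the one used in the commands above):

# In the failing, proxied case this reports "HTML document" instead of "gzip compressed data"
file jdk-8u181-linux-x64.tar.gz
# Peek at the first bytes of the download to see the error page itself
head -c 200 jdk-8u181-linux-x64.tar.gz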
My best guess is that an HTTP proxy works with wget when the target URL is the final URL, so the proxy can cache the result in its storage. Conceptually, it's like a key-value store:
proxy['URL'] = result
However, in this case, the target URL (http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz) actually returns a "302" code and a "Location" header field for the new URL. This can be seen from the output:
ywen@ubuntu:~$ wget --continue --no-check-certificate -O jdk-8u181-linux-x64.tar.gz --header "Cookie: oraclelicense=a" http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
--2018-08-01 11:10:04--  http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
Resolving download.oracle.com (download.oracle.com)... 23.32.72.143
Connecting to download.oracle.com (download.oracle.com)|23.32.72.143|:80... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://edelivery.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz [following]
--2018-08-01 11:10:04--  https://edelivery.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz
Resolving edelivery.oracle.com (edelivery.oracle.com)... 23.216.148.161, 2001:559:19:3081::2d3e, 2001:559:19:3086::2d3e
Connecting to edelivery.oracle.com (edelivery.oracle.com)|23.216.148.161|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz?AuthParam=1533136324_72efc4e6208a5a7fc1cbba0527c741b6 [following]
--2018-08-01 11:10:04--  http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz?AuthParam=1533136324_72efc4e6208a5a7fc1cbba0527c741b6
Connecting to download.oracle.com (download.oracle.com)|23.32.72.143|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 185646832 (177M) [application/x-gzip]
Saving to: ‘jdk-8u181-linux-x64.tar.gz’
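One way to see the two behaviors side by side (a sketch; curl -I shows only the first response without following redirects, and whether apt-cacher-ng treats a HEAD request exactly like wget's GET is an assumption here):

# Direct request: the origin server answers with the 302 redirect
curl -sI -H "Cookie: oraclelicense=a" "http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz" | head -1
# Through the apt-cacher-ng proxy: the proxy itself answers 200 (with the HTML error page as the body)
curl -sI -x http://localhost:3142 -H "Cookie: oraclelicense=a" "http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz" | head -1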
Handling the redirection is beyond the capability of the proxy (am I right?), and that is why the machines configured with the HTTP proxy failed.
I have 2 Linux Servers (with LAMP):
Web Server with SSL (https://www.example.com)
Admin Server (needs to connect to Web Server, via https)
When I connect from the Admin Server (to the Web Server) via the curl command, the connection is refused. But when I use curl with the --cacert option, it goes through, like this:
# curl --cacert CAchain.crt -I https://www.example.com
HTTP/1.1 200 OK
..
I'm getting 200 OK only because of --cacert CAchain.crt.
Obviously, then, I need the plain/basic curl command, without specifying --cacert, to work, like:
# curl -I https://www.example.com
HTTP/1.1 200 OK
..
That way my Admin application will definitely be able to connect to it (via https).
But right now, when I connect to https://www.example.com from the Admin Server (via its application), it bounces back; it is not able to reach the server over SSL.
How do I make my Linux (RHEL) system trust the CA cert so that I can automatically avoid specifying the cert file, and any communication with https://www.example.com via curl or a web browser (from Admin) just goes through successfully? (Is it something like the "SSH without keys" setup? But how, please?)
You need to add the CA cert to somewhere that curl can use it; it looks like you're just keeping it in your local directory, which isn't where curl looks for it (typically some /etc/pki/tls/certs/ca-bundle.crt-type location). There are a handful of ways to do this; see the sketch below the links. I don't have much experience doing it in RHEL (or CentOS), but I have done it for Debian.
This ServerFault Post might help.
Likewise, This Post might help you install/import the CA cert properly.
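On RHEL and CentOS, the usual mechanism is the system ca-trust store. A minimal sketch, assuming the CAchain.crt file from the question is the CA that signed the web server's certificate:

# Copy the CA certificate into the system's trust anchors
sudo cp CAchain.crt /etc/pki/ca-trust/source/anchors/
# Rebuild the consolidated bundle that curl, wget, and tools using the system store read
sudo update-ca-trust extract
# Verify: this should now return 200 OK without --cacert
curl -I https://www.example.com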
I have a web application that I need to debug because I suspect that the request being sent is altered on its way to the server.
I want to dump the HTTPS traffic received on port localhost:443 and decrypt it so I can check the packages.
Obviously I do have the private key from the server.
Is there a way to do this from the command line?
You can use ssldump (it works on top of libpcap).
ssldump -r <File_Name>.pcap -k <Key_File>.key -d host <IP_Address>
You specify the following options with the ssldump utility:
-r: Read data from the <File_Name>.pcap file instead of from the network.
-k: Use <Key_File>.key file as the location for the SSL keyfile.
-d: Display the application data traffic.
You may refer to the complete example here.
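For the localhost scenario in the question, you would first capture the traffic and then feed the capture to ssldump. A sketch, assuming the loopback interface and a key file named server.key; note that decrypting with the server's private key only works for cipher suites using plain RSA key exchange (not DHE/ECDHE):

# Capture HTTPS traffic on the loopback interface into a pcap file (stop with Ctrl-C)
sudo tcpdump -i lo -w capture.pcap port 443
# Decrypt the capture offline with the server's private key and display the application data
sudo ssldump -r capture.pcap -k server.key -d host 127.0.0.1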
You can import the SSL key into Wireshark to decrypt HTTPS, if Wireshark is compiled with SSL decryption support:
http://www.etherlook.com/howto/use-wireshark-to-decrypt-https/
http://wiki.wireshark.org/SSL
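To stay on the command line with Wireshark's tooling, tshark can use the same key. A sketch, again assuming RSA key exchange and the older ssl.keys_list preference (format: ip,port,protocol,keyfile; newer releases renamed the ssl.* preferences to tls.*):

# Read the capture, decrypt with the server key, and print decoded packet details
tshark -r capture.pcap -o "ssl.keys_list:127.0.0.1,443,http,server.key" -V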
I am using a script to pull down some XML data from an authentication-required URL with WGET.
In doing so, my script produces the following output for each URL accessed (IPs and hostnames changed to protect the guilty):
> Resolving host.name.com... 127.0.0.1
> Connecting to host.name.com|127.0.0.1|:80... connected.
> HTTP request sent, awaiting response... 401 Access denied
> Connecting to host.name.com|127.0.0.1|:80... connected.
> HTTP request sent, awaiting response... 401 Unauthorized
> Reusing existing connection to host.name.com:80.
> HTTP request sent, awaiting response... 200 OK
Why does WGET complain that accessing the URL fails twice before successfully connecting? Is there a way to shut it up, or to get it to connect properly on the first attempt?
For reference, here's the line I am using to call WGET:
wget --http-user=USERNAME --password=PASSWORD -O file.xml http://host.name.com/file.xml
This appears to be by design. Following the advice of @Wayne Conrad, I added the -d switch and was able to observe the first attempt failing because NTLM was required, and the second attempt failing because the first NTLM attempt was only level 1, where a level 3 NTLM challenge-response was required. WGET finally provides the needed authentication on the third attempt.
WGET does get a cookie that prevents re-authenticating for the duration of the session, which would avoid this if the connection weren't terminated between files. I would need to pass WGET a list of files for that to happen; however, I am unable to, because I do not know the file names in advance.
You seem to have a newer version of wget. After 1.10.2, wget will not send authentication unless challenged by the server first, and that is why the first attempt fails. The second fails because of what you described.
You can eliminate one of the failures by adding the parameter --auth-no-challenge. This sends the first request in "basic" mode, which will fail, and the second will be sent in "digest" mode, which should work.
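For example, combining that flag with the command from the question (this assumes the server accepts credentials sent this way):

wget --auth-no-challenge --http-user=USERNAME --password=PASSWORD -O file.xml http://host.name.com/file.xml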