I'm trying to demonstrate to our QA department that we have (or have not) fixed a vulnerability shown in a recent scan. I would like to write a simple script that can demonstrate the vulnerability and show that the remediation actually fixes it. However, since the vulnerability involves an invalid (possibly intentionally so) HTTPS request, I can't easily replicate it with a standard client. Because there are several servers and several different vulnerabilities to test, I would like to automate the testing a bit.
The following command line replicates the test but requires human intervention:
>openssl s_client -connect {server:ip} | grep Location
GET /images HTTP/1.0 <---- (user types this plus two Enter keys)
Location:{text here proves success/failure}
How can I automate the test above?
I'm using openssl because it's convenient. I'm willing to use another tool if it can accept arbitrary HTTPS request headers.
In bash you can just pipe your text into the OpenSSL command-line utility:
printf 'GET /images HTTP/1.0\r\n\r\n' | openssl s_client -connect {server:ip} | grep Location
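If the connection closes before the response arrives, the -ign_eof switch tells s_client to keep reading after the piped input ends. Building on that, a small wrapper for several servers might look like this (a sketch; the server list and the output messages are placeholders to adapt):
#!/bin/bash
# Placeholder list of servers to test -- replace with your own
servers=("server1.example.com:443" "server2.example.com:443")

for s in "${servers[@]}"; do
    # -quiet suppresses the certificate chatter; -ign_eof keeps the
    # connection open long enough to read the response
    if printf 'GET /images HTTP/1.0\r\n\r\n' \
            | openssl s_client -connect "$s" -quiet -ign_eof 2>/dev/null \
            | grep -q '^Location:'; then
        echo "$s: Location header present"
    else
        echo "$s: no Location header"
    fi
done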
I have been following the steps of the course's pre-work, including checking for, generating, copying/pasting, and saving the SSH keys to GitHub.
But when I am instructed to check the matching fingerprints using "ssh -T git@github.com", the prints don't match.
I've even started over from the beginning, but they still don't match.
Thought I'd reach out here before using my one tutoring session.
Hopefully the screenshot showing what I see helps (link).
EDIT: I understand there's some stuff in there that shouldn't be; I was just trying things to get different results. I would just like to know where I went wrong and how to avoid it.
What you see there is the remote site's SSH host key fingerprint, not your registered SSH key fingerprint.
You see (or should see, if you are contacting the correct github.com) the fingerprints exposed at https://api.github.com/meta, as explained here.
Using jq, you can add them to your ~/.ssh/known_hosts with:
curl --silent https://api.github.com/meta \
| jq --raw-output '"github.com "+.ssh_keys[]' >> ~/.ssh/known_hosts
From there, you can test your connection with ssh -Tv git@github.com, and check that you see a welcome message:
Hi username!
You've successfully authenticated, but GitHub does not provide shell access
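To double-check that the entries you just appended match GitHub's published fingerprints, you can compare both sides (a sketch; it assumes the meta endpoint also exposes an ssh_key_fingerprints object, and that ssh-keygen is available):
# What GitHub publishes
curl --silent https://api.github.com/meta | jq --raw-output '.ssh_key_fingerprints[]'

# What is now in your known_hosts
ssh-keygen -lf ~/.ssh/known_hosts | grep github.com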
I am working on a tool that sends out automated reports to our clients. This specific client wants the file to be encrypted and then signed. I have tried several different methods, with hours of searching, and have not had much luck. I know GPG signs then encrypts, but does anyone know if it is possible to swap the order? And if not, does anyone know of any command-line alternatives that can be run in a Linux container?
Example:
gpg --always-trust --batch --yes -s -u 'signee@email.com' -r 'receiver@email.com' -o 'test.txt.pgp' -e 'test.txt'
On verify:
gpg: verify signatures failed: Unexpected error
GPG doesn't seem to allow this in a single pass.
You have two options:
use a detached signature; then you'll need to send two files: one with the encrypted data and a second with the signature
encrypt the data in a first pass and then sign it in a second (a sketch follows below). However, that also requires two steps on the receiving side: first verify the signature and unwrap the data, then decrypt it.
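A minimal sketch of the second option (addresses reuse the question's placeholders; file names are illustrative):
# Pass 1: encrypt for the recipient
gpg --batch --yes --always-trust -r 'receiver@email.com' -o 'test.txt.gpg' -e 'test.txt'
# Pass 2: sign the already-encrypted file
gpg --batch --yes -u 'signee@email.com' -o 'test.txt.gpg.sig' -s 'test.txt.gpg'

On the receiving side, unwrap and verify first, then decrypt:
# Verifies the signature and writes out the inner (encrypted) file
gpg --batch --yes -o 'test.txt.gpg' -d 'test.txt.gpg.sig'
# Decrypts the original data
gpg --batch --yes -o 'test.txt' -d 'test.txt.gpg'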
It could also be useful to ask the client what exact format they expect to receive; a sample of gpg --list-packets report-file output should be helpful.
I have been using pscp to upload some files to a remote server but apparently they are updating the security so that only certain SFTP and MAC ciphers are allowed, but I'm not really a programmer so I don't know what this all entails.
Right now I have this command in a batch script (using generic capital letters here instead of the actual words/strings used):
echo y | "CURRENT_PATH\pscp.exe" -sftp -P 22 -pw "PASSWORD" "LOCAL\PATH\TO\FILE.txt" SOME_SERVER@SERVER.COM:/SERVER/PATH/TO/FILE.txt
How do I change or update this so it is compatible with the following:
Allowed SSH Ciphers: aes256-cbc, aes256-ctr
Allowed MAC Ciphers: hmac-sha2-512, hmac-sha2-256
I don't know if I need only one or both of these SSH/MAC things to make it work.
PSCP (as any SSH client) will automatically pick the best algorithms out of those mutually supported by it and the server. There's nothing you should do.
If PSCP supports any algorithm out of those supported/allowed by the server, it will use them automatically.
If not, no configuration will fix it (except in the rare case where the best such algorithm is actually considered insecure by PSCP/PuTTY, which is not your case). All you can do, if it does not work, is make sure you have the latest version of PSCP/PuTTY.
Obligatory warning: Never use echo y as an automated response to a pscp host key prompt.
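If the upload must run unattended, a safer pattern (assuming a reasonably recent PSCP that supports -hostkey; the fingerprint below is a placeholder for your server's real one) is to pin the expected host key instead:
"CURRENT_PATH\pscp.exe" -sftp -P 22 -pw "PASSWORD" -hostkey "SHA256:placeholder-fingerprint" "LOCAL\PATH\TO\FILE.txt" SOME_SERVER@SERVER.COM:/SERVER/PATH/TO/FILE.txt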
As I work remotely, I often have to run scripts that make sense only when I am on the intranet.
But I am not always connected to the intranet, and I would prefer to define a more generic way of testing the connectivity so I can bypass these scripts when I am not in "work mode" ;)
I want to implement this as a simple bash command or script so I can do something like:
#!/bin/bash
is-intranet-on || echo "Yeah, time to do something!"
If I do this I could even include it in crontab, so I can have scheduled tasks that run only when connected to the intranet.
I need this to work on both macOS and Linux. Currently I use OpenVPN, but I think testing for network interfaces would be the wrong approach, because I could configure the VPN on my router, or I could be in the office.
My impression is that the final solution will have to involve some kind of DNS check, but I need to make it reasonably safe, as I don't want surprises from captive portals that may return a fake IP for a DNS entry.
If, for example, you know that you have a server named example.intra whose name is only resolvable within the intranet or on VPN, and let's say the name resolves to 10.1.1.3 and the machine is pingable, the code would simply be something like:
is_intranet_on() {
[[ $(dig example.intra +short) == "10.1.1.3" ]] && ping -c 1 example.intra &> /dev/null
}
This checks that the DNS name resolves to the specific IP and then pings it to make sure there is at least some kind of network connectivity.
The function returns exit code 0 when there is connectivity and exit code 1 when there is none. You can put this function in your script, or source it.
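For example, in a script run from crontab (the sourced path and the task are placeholders):
#!/bin/bash
. /path/to/intranet-check.sh      # source the is_intranet_on function defined above
is_intranet_on || exit 0          # not on the intranet: skip quietly
echo "Yeah, time to do something!"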
You could modify this to use curl instead to check an HTTPS site, by first obtaining the certificate chain from your server with something like:
openssl s_client -showcerts -servername example.intra -connect example.intra:443 < /dev/null > cacert.pem
This command saves the certs into a file named cacert.pem
And then using curl to check that the server is ok using the certificates:
[[ $(curl -s -I -L -m 4 --cacert cacert.pem https://example.intra | head -n 1 | tr -d '\r') == "HTTP/1.1 200 OK" ]]
Change the string HTTP/1.1 200 OK to whatever your server responds with, if needed (for example a 204 status). The tr -d '\r' strips the carriage return at the end of the HTTP status line, which would otherwise break the comparison.
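Put together, a curl-based variant of the function might look like this (a sketch; it assumes cacert.pem is in the working directory and the server answers 200 OK):
is_intranet_on() {
    # tr strips the carriage return that terminates the HTTP status line
    [[ $(curl -s -I -L -m 4 --cacert cacert.pem https://example.intra \
        | head -n 1 | tr -d '\r') == "HTTP/1.1 200 OK" ]]
}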
All,
I'm attempting to create a bash shell script that uses openssl to do an HTTPS query for me (/dev/tcp and wget are unavailable), along the lines of:
openssl s_client -connect xxx.xxx.xxx.xxx:port <<EOF
GET / HTTP/1.1
Connection: close
...more http here...
EOF
If I do the command line by hand, typing in the request, it works as expected and I see the correct HTML. However, if I run it from inside of a shell script I am not getting an HTTP document back from the server. Any thoughts?
I wonder whether -ign_eof helps. The original problem is described in http://www.mail-archive.com/openssl-users@openssl.org/msg02926.html (note this is very old) and this switch seems to fit.
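Applied to the script in the question, that would look something like this (-ign_eof stops s_client from quitting when the here-document's input runs out; the blank line before EOF terminates the HTTP headers):
openssl s_client -ign_eof -connect xxx.xxx.xxx.xxx:port <<EOF
GET / HTTP/1.1
Connection: close
...more http here...

EOF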