I don't know much about server maintenance and configuration, but I just started using a Media Temple VE server. Everything is fine and easy, but I don't get how to enable HTTPS connections.
If I type https://mysite.com/login.php now, it doesn't work (page not found).
There are a number of tutorials out there for this. For example: http://tim.oreilly.com/pub/a/onlamp/2008/03/04/step-by-step-configuring-ssl-under-apache.html
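The short version: on a typical Apache setup, HTTPS means enabling mod_ssl, pointing it at a certificate and key, and adding a VirtualHost on port 443. A minimal sketch, assuming Apache with placeholder paths (your distribution's layout and your certificate location will differ):

    Listen 443
    <VirtualHost *:443>
        ServerName mysite.com
        DocumentRoot /var/www/mysite
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/mysite.com.crt
        SSLCertificateKeyFile /etc/ssl/private/mysite.com.key
    </VirtualHost>

Without a 443 VirtualHost (and with the port open in the server's firewall), https:// URLs will fail much the way you describe.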
I just set up a custom domain for an AWS API Gateway and created CNAME entries in Google Domains pointing to my API Gateway. After maybe 30 minutes of waiting, I was able to use Chrome to make a simple GET request to my custom domain that properly forwarded to my API Gateway. I tested in Firefox and it worked fine too.
About 3-4 hours later I came back and tried making the same call using Python requests; it worked the first three times, then failed:
SSLError: HTTPSConnectionPool(host='ids.references.app', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError("hostname '<my_custom_domain>' doesn't match '*.execute-api.us-east-2.amazonaws.com'")))
At first I thought this was a requests problem, but then I opened up Firefox and it didn't work either. I tried Edge and the call worked. Then I went back to Python and it worked for a bit, then stopped working. I went back to Firefox and it no longer worked. Then I tried Edge and it no longer worked. Sprinkled in there I've tried Chrome, and it has worked every time since it started working. (This order of events is from memory and may be slightly off.)
Is this a known issue with updating DNS entries, that you get some randomness at first until the changes have fully propagated? And how would I go about even tracking down where the error is occurring? I think that's the most frustrating thing about this: it all seems like magic, and there's no obvious point where you see something like "server 1.2.3.4 says that cert_1 doesn't go with cert_2" and then later "server 4.5.6.7 says cert_2 is all good" (so it works).

Would I need to install curl for Windows? Is it possible to make a cURL request and see the route that is taken (similar to traceroute)? Would that even matter, though; what if curl were like Chrome and always worked? Does requests have this functionality (bonus points if someone can show a requests solution)? What about Firefox or Chrome? Or could I use something like Wireshark (yikes) to somehow observe the whole system?
I'm using requests 2.25.1 and Python 3.8.5 on Windows 10 and I believe the latest versions of Edge and Firefox.
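requests doesn't expose the DNS and handshake details directly, but it fails inside the same ssl handshake the standard library performs, so you can reproduce what it sees with a few lines of stdlib code. A rough diagnostic sketch, using the hostname from the error above (run it a few times while the problem is flapping; if the resolved addresses change, or different addresses disagree about the certificate, the culprit is DNS propagation rather than requests):

    import socket
    import ssl

    host = "ids.references.app"  # the host from the error message above

    # 1) What addresses does DNS return right now? While a change is
    #    still propagating, different resolvers can answer differently.
    addrs = sorted({info[4][0] for info in
                    socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)})
    print("resolved to:", addrs)

    # 2) Do a verifying TLS handshake against each address, sending the
    #    custom domain via SNI, and report which addresses reject it.
    ctx = ssl.create_default_context()
    for addr in addrs:
        try:
            with socket.create_connection((addr, 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=host) as tls:
                    print(addr, "OK, subject:", tls.getpeercert()["subject"])
        except ssl.SSLCertVerificationError as exc:
            print(addr, "FAILED:", exc.verify_message)

The browsers behaving differently is consistent with each one keeping its own DNS cache and connection pool, so they can pin different addresses for a while.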
I want to log everything Firefox sends to a server, down to the exact bytes, so I can reproduce it in a Python client. My idea was a quick and dirty hack:
run an openssl s_server,
make Firefox connect to localhost by adding a line to my /etc/hosts.
This shouldn't have taken more than five seconds to set up, run, and remove (the two steps are sketched below).
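Concretely, the hack amounts to something like this (the certificate file names and example.com are placeholders for the real hostname):

    # throwaway self-signed cert, then a TLS server on port 443
    openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
        -subj /CN=example.com -keyout key.pem -out cert.pem
    sudo openssl s_server -accept 443 -key key.pem -cert cert.pem -www

    # /etc/hosts: point the real hostname at the local server
    127.0.0.1 example.com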
My issue is on the Firefox side. First, it doesn't allow me to add a security exception. Second, even when I add one in about:preferences#advanced > Certificates > View certificates > Servers, it changes nothing and shows me the error SEC_ERROR_UNKNOWN_ISSUER anyway.
How do I make Firefox ignore the certificate error?
Is there another quick and easy way to log SSL traffic?
The easiest way I found was to use Firefox's SSLKEYLOGFILE environment variable and configure Wireshark to use that file to decrypt the HTTPS requests.
This is all explained here:
https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/
However, take care to clear the browser cache for the website, so that Firefox actually sends the requests rather than using a cached result.
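In practice that comes down to something like this (Linux shell assumed; the key-log path is arbitrary):

    # start Firefox with the TLS session keys logged to a file...
    SSLKEYLOGFILE=$HOME/tls-keys.log firefox &

    # ...then point Wireshark at it:
    # Edit > Preferences > Protocols > TLS (SSL in older versions)
    #   > "(Pre)-Master-Secret log filename" = ~/tls-keys.log

With the keys loaded, Wireshark shows the decrypted HTTP requests inside the captured TLS stream.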
I have set up a Squid proxy on EC2, and I'm trying to use it from behind a corporate firewall. After configuring Firefox to use my proxy, I tried to browse to yahoo.com. The browser seems to hang, as if handling an extremely long-running request. Checking the Squid logs, I see:
1431354246.891 11645 xxx.0.xx.xxx TCP_MISS/200 7150 CONNECT www.yahoo.com:443 username HIER_DIRECT/xx.xxx.XX.xx-
So far I don't have a good explanation for most of these entries, but from http://wiki.squid-cache.org/SquidFaq/SquidLogs#access.log I've found that:
MISS = The response object delivered was the network response object.
What does this mean? Is there anything I can do to connect to the outside internet?
This was asked a long time ago, but maybe someone can still use this...
This means you connected to Squid, and it tunneled the request to yahoo.com over plain TCP (the CONNECT method in the log, which is how HTTPS traffic traverses a proxy). Furthermore, the MISS means a cache miss: Squid doesn't have this page stored.
The hanging might be caused by the response being blocked somewhere along the line (the corporate firewall, maybe? a local firewall?) or even by a misconfiguration of the proxy.
For more, perhaps you should search on https://serverfault.com; for example, this is a good starting point from which you can narrow down the problem: https://serverfault.com/questions/514716/whats-the-minimum-required-squid-config-to-make-a-public-proxy-server
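If it turns out to be proxy misconfiguration, the usual suspects are the ACLs that govern CONNECT. A minimal squid.conf sketch of the pieces HTTPS needs in order to pass (the client range is a placeholder; this is not a hardened config):

    http_port 3128
    acl localnet src 203.0.113.0/24      # replace with your clients' range
    acl SSL_ports port 443
    acl CONNECT method CONNECT
    http_access deny CONNECT !SSL_ports  # only tunnel to the TLS port
    http_access allow localnet
    http_access deny all

If CONNECT is denied, or the reply can't get back through the corporate firewall, the browser hangs exactly as described.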
I'm trying to get some protocols to work through my company's firewall. Until now I have been successful in masking either HTTP or HTTPS data by setting up an HTTP proxy on localhost and another on a remote server I own. The communication happens by POSTing (PHP's $_POST) and receiving modified .bmp files that contain a header and the encrypted, serialized request array.
This works fine, but there are a few drawbacks that make me think I might have taken a wrong approach.
Firstly, I do not use Apache's mod_proxy. Instead, I just created a local subdomain (proxy.localhost) and use that in the browser's proxy settings; the subdomain's index.php does all the work. This creates some problems: I cannot use HTTP and HTTPS simultaneously, or the server will complain either about "http on a https enabled port" or about an incorrect SSL response length.
The second problem is, well, other protocols. I could make use of FTP, SFTP, remote desktop, SSH... just name another, I need it.
There are two solutions I can think of: either I run a PHP script from the CLI so that it listens on a predefined port and handles the requests differently, or I use some sort of SSH tunnel. The problem is that I haven't had any success with freeSSHd and PuTTY, owing to my own ignorance.
Thanks in advance for any advice.
I used the free versions of the Bitvise SSH client and server, and they seem to work just fine.
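For anyone following the SSH-tunnel route, the idea is the same with any tool: open a local SOCKS proxy over the tunnel and point the browser (or any proxy-aware client) at it. A sketch in OpenSSH syntax (host names and ports are placeholders; Bitvise exposes the equivalent through its GUI):

    # local SOCKS proxy on port 1080, tunnelled to your remote server
    ssh -D 1080 -N user@my-remote-server.example

    # or forward one specific port, e.g. RDP to an internal host
    ssh -L 3390:internal-host:3389 -N user@my-remote-server.example

Set the browser's SOCKS proxy to localhost:1080, and most TCP protocols (FTP, RDP, SSH, ...) can ride the same tunnel.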
Hi. In Node.js, for an HTTP request I can get the remote address at req.connection.remoteAddress. How do I get it for an HTTPS request? I see there is req.socket.remoteAddress, but I'm not sure. Please advise. Thanks.
It appears something is strange/broken indeed.
As of Node 0.4.7, it seems HTTP has remoteAddress available on:
req.connection.remoteAddress
req.socket.remoteAddress
On HTTPS, both of these are undefined, but
req.connection.socket.remoteAddress
does work.
That one isn't available on HTTP, though, so you need to check carefully.
I cannot imagine this behavior is intentional.
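Until it's fixed, a small guard covering both cases (a sketch written against the 0.4.x behaviour described above) keeps the lookup in one place:

    function remoteAddress(req) {
      // Plain HTTP exposes the address on the connection/socket;
      // HTTPS (node 0.4.x) only on the socket nested in the connection.
      if (req.connection.remoteAddress) return req.connection.remoteAddress;
      if (req.socket && req.socket.remoteAddress) return req.socket.remoteAddress;
      if (req.connection.socket) return req.connection.socket.remoteAddress;
      return undefined;
    }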
Since googling "express js ip" points directly here, this is relevant:
Express 3.0.0 alpha now offers a new way of retrieving IP addresses for client requests.
Simply use req.ip. If you're doing some proxy jiggery-pokery you might be interested in app.set("trust proxy", true); and req.ips.
I recommend reading the whole discussion in the Express Google Group.
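A minimal sketch of that setup against the Express 3 API:

    var express = require('express');
    var app = express();

    // Trust the X-Forwarded-* headers set by your reverse proxy;
    // without this, req.ip is the proxy's address, not the client's.
    app.set('trust proxy', true);

    app.get('/', function (req, res) {
      // req.ip: the client address; req.ips: the forwarding chain.
      res.send('you are ' + req.ip + ' via [' + req.ips.join(', ') + ']');
    });

    app.listen(3000);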
// Take the first defined source: the proxy header first, then the
// various socket locations (see the HTTP vs. HTTPS note above).
var ip = req.headers['x-forwarded-for'] ||
         req.connection.remoteAddress ||
         req.socket.remoteAddress ||
         req.connection.socket.remoteAddress;
Note that sometimes you can get more than one IP address in req.headers['x-forwarded-for'], especially when mobile phones are accessing your server (switching between Wi-Fi and carrier data).
Also, req.headers['x-forwarded-for'] is easily manipulated, so you need a properly configured proxy server in front.
It's better to check req.connection.remoteAddress against a list of known proxy servers before trusting req.headers['x-forwarded-for'].
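One way to apply that advice, sketched below (KNOWN_PROXIES is a placeholder for your own infrastructure's addresses):

    // Addresses of your own proxies/load balancers (hypothetical).
    var KNOWN_PROXIES = ['10.0.0.5', '10.0.0.6'];

    function clientIp(req) {
      var direct = req.connection.remoteAddress;
      var forwarded = req.headers['x-forwarded-for'];
      // Only trust X-Forwarded-For when the direct peer is one of our
      // own proxies; otherwise the header may be spoofed by the client.
      if (forwarded && KNOWN_PROXIES.indexOf(direct) !== -1) {
        // The header can hold a comma-separated chain; the original
        // client is the first entry.
        return forwarded.split(',')[0].trim();
      }
      return direct;
    }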