Does memory protection protect servers from the Heartbleed exploit? - heartbleed-bug

I understand that the Heartbleed bug allows a remote attacker to read memory from your machine. Is this mitigated by memory protection?
For example, if I have a public-facing HTTPS web server, any web server traffic and data is obviously compromised, as well as any information accessible using credentials sent over the web server. But what about other processes on the same box? What if I was running bash locally from the console, far from SSL?

According to the Heartbleed site, the bug only affects OpenSSL and related software (anything linked against libssl). Any other program on your box should be safe from this particular bug.
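As a quick sanity check, you can compare the reported OpenSSL version against the affected range (1.0.1 through 1.0.1f; 1.0.1g and later, and the older 0.9.8/1.0.0 branches, are not affected). A minimal sketch in Python, assuming the usual `openssl version` output format:

```python
import re

# Heartbleed (CVE-2014-0160) affects OpenSSL 1.0.1 through 1.0.1f.
# Decide whether a version string such as "OpenSSL 1.0.1e 11 Feb 2013"
# falls in the vulnerable range. The parsing below is an assumption
# about the usual "openssl version" output format.
def is_heartbleed_vulnerable(version_string):
    m = re.search(r"OpenSSL (\d+)\.(\d+)\.(\d+)([a-z]?)", version_string)
    if m is None:
        return False
    major, minor, patch, letter = m.groups()
    if (major, minor, patch) != ("1", "0", "1"):
        return False  # only the 1.0.1 branch is affected
    return letter <= "f"  # 1.0.1 (no letter) through 1.0.1f

print(is_heartbleed_vulnerable("OpenSSL 1.0.1e 11 Feb 2013"))  # True
print(is_heartbleed_vulnerable("OpenSSL 1.0.1g 7 Apr 2014"))   # False
```

Note this only tells you whether the library itself is in the affected range; whether a given process is exposed depends on whether it actually links libssl, which is the point made above.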

Related

play-framework [2.0] HTTPS

I'm working on a web server using Play Framework 2.0, where the login is performed by Android device software we're also making. Our main concern is that we can't find any support for HTTPS in Play 2.0. Since this is a school project, we can't afford cloud services or another proxy to handle HTTPS for us.
Our main problem is the password and email going in plain sight in the request's body. Encrypting and decrypting on the mobile device and on the server looks costly in performance, and since HTTPS takes care of this, we wanted to avoid that. Is there any way we can use HTTPS to protect the users' login data, or any other suggestion?
If not, we might have to migrate our whole application to another framework, because it won't look good to have important confidential data going over the internet without encryption.
Historically, I've seen most folks run the Java/Scala application server behind a reverse proxy of some kind. Setting up HTTPS in Apache isn't too hard, and then you just use mod_proxy to forward requests internally to your Play application.
Any of the reverse proxy systems can likely do this; nginx is popular too and is generally easier to configure than Apache, but I've never used it with HTTPS.
The number one reason to do this is security. You can't bind port 80 as a non-privileged user. If you start your Java program as root on port 80, then any hole in your application has root privileges! As a result, you start the Java app on another port and reverse proxy to it from a web server that can run as a non-privileged user on port 80(*).
(*) This is slightly over-simplified, but a discussion of this weirdness is beyond the scope of this answer, I think.
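For reference, a minimal sketch of the Apache-in-front setup described above. All hostnames, certificate paths, and the backend port are placeholder assumptions; the Play app is assumed to be listening on 9000.

```apache
<VirtualHost *:443>
    ServerName example.com

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/example.com.crt
    SSLCertificateKeyFile /etc/ssl/private/example.com.key

    # mod_proxy forwards everything to the Play app on an
    # unprivileged port; Apache handles the HTTPS termination.
    ProxyPreserveHost On
    ProxyPass        / http://127.0.0.1:9000/
    ProxyPassReverse / http://127.0.0.1:9000/
</VirtualHost>
```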
It's now possible to use Play with HTTPS directly; support was added in Play 2.1.
Simply start the server with:
JAVA_OPTS=-Dhttps.port=9001 play start

What security risks are associated with attaching remote debugger to IIS?

I'm a web developer. I used to work in an environment where I could build entire production web sites and run them in local IIS for debugging purposes.
I recently switched jobs and now that's not allowed anymore. Security policy (please don't ask about it) does not allow me to run IIS on my development workstation. However, it seems there is no reason why I may not attach a remote debugger (msvsmon.exe) to the IIS instance running the development web site, because it is not public-facing (neither is my workstation, but let's not talk about the security policy I have no control over).
I would like to know what security concerns there are for using the remote debugger. The documentation says that UDP port 135 must be open between the remote development workstation and the web server being debugged...
Is there any particular security concern that I should bear in mind?
The only security concern would be internal traffic sniffing on that port: if HTTPS traffic was being debugged and the unencrypted values were part of what was being inspected, that data would likely go over the wire unencrypted.
Also, vulnerabilities in the service that receives the UDP packets could be exploited (again, internally) to gain access that would normally not be available (with the UDP port not listening).

If a site is secured via SSL, can a network sniffer still read the URLs being requested?

Can URLs be sniffed even though a client communicates with a server over SSL? I'm asking because I'm doing remote login & redirect to a physically different server via URL, and wondered if securing the communication via SSL would prevent replay attacks and the like.
The sniffer will know the IP (and probably hostname) of the server you're requesting from, and the timing/quantity of information transferred, but nothing else.
Yes, replay (and man-in-the-middle) attacks are prevented by SSL, provided you don't trust a compromised root certificate.
An attacker can observe both the hostname (by watching your DNS traffic) and the IP address you're connecting to. The username, password and path part of the URL should not be available, however.
Of course, the client themselves always has access to this information.
The network sniffer would need both the public and private key to decrypt the SSL traffic.
SSL sets up an encrypted session between the two machines and then runs "ordinary" HTTP over that encrypted connection, so an observer can see which physical machine you are connected to, but beyond that can't see anything at all in your connection.
As others have said, they can most likely look at the DNS requests to determine the hostname.
Also, there are products out there which bypass this protection in a business environment by installing a new root certificate on the client machine and having a proxy server make the connection on your behalf. The proxy generates a "fake" certificate for the site, signed with their root key, for the session to the browser, so you appear to have a secure SSL connection to the server but in fact it's only secure as far as the proxy. You can look at the certificate chain for the connection to determine if this is happening, but few people will bother.
So to answer your question: no, the full URL can't be sniffed, but with access to the client machine it is possible to get part of the way there.
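To make the split concrete, here is a small Python sketch (the URL and credentials are made up) of which parts of a request a passive sniffer can observe on an SSL connection, versus which parts stay inside the encrypted tunnel:

```python
from urllib.parse import urlsplit

# Split a URL into what a passive sniffer on an SSL/TLS connection can
# observe versus what stays encrypted. The hostname leaks via the DNS
# lookup (and, in modern TLS, the cleartext SNI extension of the
# handshake); the path, query string, and credentials do not.
def sniffer_view(url):
    parts = urlsplit(url)
    return {
        "visible": {"host": parts.hostname, "port": parts.port or 443},
        "encrypted": {"path": parts.path, "query": parts.query,
                      "userinfo": parts.username},
    }

view = sniffer_view("https://alice:pw@example.com/login?next=/account")
print(view["visible"])    # {'host': 'example.com', 'port': 443}
print(view["encrypted"])  # path, query, and username stay hidden
```

This is only a model of what is on the wire, not a packet capture, but it matches the answers above: endpoint and timing are observable, everything past the hostname is not.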

Same CGI proxy script behaves differently on 2 different servers

I have one dedicated server and one shared hosting server.
I downloaded the same CGI proxy script and put it in /cgi-bin/ on both.
The two files are identical.
On both servers, I checked that there are no cookies recorded by CGIProxy. Virgin.
The URL flags are all the same.
I navigate to myspace.com.
Behavior Difference:
dedicated server's cgi proxy
redirects to google.com
shared server's cgi proxy
successfully loads myspace.com
I suspect something is wrong with the dedicated server's settings, but what could be wrong or different from the shared hosting?
Since you've now posted multiple questions about your server being blocked by other web sites, I would guess there's a good chance your server is in a netblock with a bad reputation for abuse.

My IP seems to be blocked by web hosting server

I have a strange problem. I just installed my PHP web site on shared hosting, and all services were working fine. But after configuring my app I could only visit my web site once; further attempts give:
"The server is taking too long to respond."
From another IP I can access it, but again only once. It seems all IP addresses are being blocked after the first visit (even FTP and the other services go down, with no access at all from that IP). Can anyone help explore this problem? I don't think it's my app's problem; the app works fine on my local PC.
Thanks.
First thing to try would be a traceroute to determine where your traffic is being blocked.
In a windows command prompt:
tracert www.yoursharedhostingserver.com
At the moment, trying to access this address gives this:
Fatal error: Class 'mainController' not found in /home/myicms/public_html/core/application/crApplication.class.php on line 181
I have tried it multiple times and it didn't block me. It might be that you have already solved this problem.
As far as I know, the behavior you describe could only be explained by a badly configured "intelligent" firewall, which may have been misconfigured by your host.
If you visit a site at a certain host and suddenly you cannot access an FTP server on the same host, then it's either a (really bad) firewall or a (very mean) site that explicitly adds a firewall rule to ignore your address.
Some things you might look into:
It might be something to do with identd, too. What services have you configured on your host? Was it by any chance some kind of server control panel (which might be able to control a firewall)?
Is the blockade permanent, does it lift after 24 hours, or does it only lift after rebooting the server? Does restarting some services make the blockade go away?
Did you install any software that "protects your server from port scanning"? It might be a bit too aggressive.
I wish you good luck in finding the source of this problem!
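If you want to reproduce the "works once, then blocked" symptom methodically, a small probe script helps distinguish a firewall blockade from an application error. A sketch in Python; the host and port below are placeholders:

```python
import socket
import time

# Attempt several TCP connections in a row and report which succeed.
# If only the first connect works and later ones time out, an
# over-aggressive firewall (e.g. port-scan "protection") on the host
# is a likely culprit; if they all succeed, suspect the application.
def probe(host, port=80, attempts=3, delay=1.0, timeout=3.0):
    results = []
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results.append(True)
        except OSError:  # refused, timed out, or unresolvable
            results.append(False)
        time.sleep(delay)
    return results

print(probe("example.com", attempts=2, delay=0.5))
```

Running this from two different source IPs (as the asker did by hand) would show whether the blockade is per-IP and how long it persists.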
Chances are that if you can access it once, it's actually working. The problem is more likely in the PHP code than in the server.
