While integrating Braintree (client and server) for a new solution, I have been wondering whether it's "safe" to maintain a non-secure connection between my BT client (mobile app) and my BT server (web service), i.e. a web service without SSL in my case.
Thinking about it, even if someone discovers my non-secure BT server (web service) and sends remote calls, it shouldn't be too bad, since BT (a) needs a payment nonce to generate a transaction, and (b) unless auto-settlement is enabled in BT, transactions won't be sent out for settlement unless you do so manually.
Am I wrong? Do you see any reasons for SSL and securing the connection?
Many thanks,
Polis
You don't need a secure connection; HOWEVER, I recommend that you do have SSL enabled. When users see a credit card form without a secure connection, they may not trust your site. They don't know (and shouldn't even have to care about) the security details. They'll just see that SSL is not enabled and assume their data is not secure.
I'm trying to integrate an Openfire XMPP server with my company's current Spring server, but I have a few major questions I cannot find the answer to.
I'll describe my current architecture first:
1. The XMPP server has a DB server of its own, separate from the Spring server's DB. This is a dedicated machine that keeps the users' chat history, etc.
2. The Spring server has a DB of its own where it keeps the user credentials (MD5 hashed) and also client application data.
3. The Spring server is dedicated to serving HTTP requests (a dedicated REST server).
All in all I have two DB servers, one chat server and one REST server.
Now for the questions:
1. Can I forbid registration directly against the XMPP server (i.e. whitelist the REST server's IP and let it be the only one that can create users, after a user registers on it)?
2. For security reasons the REST server rotates the session for a logged-in user every 2 days; the iOS and Android clients handle session management locally. How can I use those sessions with the XMPP server?
To clarify: I want the users to be able to use the XMPP server for chat purposes only, and only after they have logged in to the application itself. Since the user session may expire, the chat client will also have to re-authenticate against the REST server. How can I achieve this?
3. Won't this overload the REST server? (i.e. the REST server will now have to handle client requests and also XMPP server requests)
4. What is the best architecture for this kind of system (chat server, DB server for the chat server, REST server, DB server for the REST server) so that the system can scale horizontally?
I searched Google for an article or something similar describing the general architecture, but couldn't find anything relevant. Since I'm not "reinventing the wheel" here, I would love to hear good advice or be directed to an article that explains the how-tos.
Thanks in advance.
The standard way to do user authentication in the XMPP world is SASL.
SASL has a very simple model: the server sends the client a "challenge" string, the client sends a "response" string back to the server, and they repeat this until the server decides the client has sent all the required data. What data to send is defined by the SASL "mechanism". There are a number of well-known SASL mechanisms, e.g. SCRAM, and they are provided by most XMPP servers and clients out of the box.
Your problem is that you already have an authentication system and user database and want to reuse them for chat purposes. There are two ways:
Add your custom REST authentication as a SASL module to your server. A quick search suggests it is already possible to write and add an Openfire SASL plugin. Your SASL REST mechanism will do the same things as for a browser, but the required URLs, tokens, etc. will be wrapped as "challenges" and "responses"; e.g. the server will send the REST auth URL as a "challenge" to the client, and the client will open the URL, post credentials, get a token and send it as the "response" back to the server. Of course you need to add this SASL REST mechanism to the client too.
Adapt your XMPP server to use your authentication database directly. In this case you only need to modify the Openfire code to link it with your users/passwords tables (maybe there is already an admin tool for this), and clients will continue to use standard SASL mechanisms without modification. While this way may be easier than the first one, remember that your XMPP server would then need access to plain-text passwords, which may be insecure. A sketch of the server-side credential check, common to both options, follows below.
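Whichever option you choose, the server-side work ultimately comes down to asking the system that already knows the credentials whether a given username/password (or token) is valid. Below is a minimal sketch of that delegation, assuming a hypothetical /internal/auth endpoint on the REST backend that returns 2xx for accepted credentials; in an Openfire deployment you would call something like this from your custom authentication code (the exact Openfire plugin interfaces vary between versions, so treat this purely as an illustration of the idea):

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

// Sketch only: the XMPP server validates credentials by asking the existing
// REST backend instead of keeping its own password table. The /internal/auth
// endpoint and its contract (any 2xx = accepted) are assumptions for illustration.
public class RestAuthClient {

    private final String authUrl; // e.g. "https://rest.example.internal/internal/auth" (hypothetical)

    public RestAuthClient(String authUrl) {
        this.authUrl = authUrl;
    }

    /** Returns true if the REST backend accepts the credentials. */
    public boolean authenticate(String username, String password) throws IOException {
        String body = "username=" + URLEncoder.encode(username, "UTF-8")
                + "&password=" + URLEncoder.encode(password, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) new URL(authUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.getOutputStream().write(body.getBytes("UTF-8"));
        int status = conn.getResponseCode();
        conn.disconnect();
        return status >= 200 && status < 300;
    }
}
```

The same check works whether it is triggered by the challenge/response wrapping of the first option or by a server-side hook in the second; only the place it is invoked from changes.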
Your questions, in order:
1. Yes, you can disable registration from the XMPP client and point users to your registration website.
2. You will see chat sessions in the Openfire administration console and will be able to stop them; you can also write a module to do this on whatever schedule you like.
3. If you write the SASL REST mechanism, there will be no difference between requests from chat clients and requests from web clients as far as your REST backend is concerned; they will look the same.
4. As described above, you don't need a separate DB for the chat server, and you can set up multiple chat servers connected to your REST backend.
I'd like to know if it's possible for web services to detect HTTPS connections with "faked" root certificates created by Fiddler4 (Web debugging proxy) to prevent reverse engineering.
Is there any method to check whether the encryption is done with the original certificate or with one made by Fiddler?
A server has no way to know what certificate the client received unless the client sends the server that information.
From client JavaScript, you cannot detect such interception today; JavaScript does not expose the capabilities to introspect the certificate. It is possible to use Java or Flash inside a webpage to inspect the certificate received upon connecting to a server, but a sufficiently devious interceptor could just avoid MITM'ing the Java/Flash connection.
In contrast, a native code client application can detect what certificate was presented by the server and reject any certificate that doesn't match the expected certificate; this is called certificate pinning and it's a technique used by some applications. Note that this will block more than Fiddler; it'll also block connections through corporate inspection proxies (e.g. BlueCoat, ISA TMG, etc) and through some popular consumer antivirus programs' proxies (e.g. BitDefender). More importantly, users can circumvent your certificate pinning checks if they like; your code is running on their device, and they have the ability to modify your code in memory to strip out your certificate pinning checks. On some mobile devices, this code modification requires "jail-breaking" the device, but this isn't an insurmountable barrier.
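To make the pinning idea concrete, here is a minimal sketch in plain Java of comparing the server's public key against a pinned hash; https://example.com/ and the pin value are placeholders, and a real client would bake the expected pin into the app at build time:

```java
import java.io.InputStream;
import java.net.URL;
import java.security.MessageDigest;
import java.security.cert.Certificate;
import java.util.Base64;
import javax.net.ssl.HttpsURLConnection;

public class PinnedClient {

    // Base64-encoded SHA-256 hash of the server public key we expect.
    // Placeholder value: compute it from the certificate you ship with the app.
    private static final String EXPECTED_PIN = "REPLACE_WITH_YOUR_PIN";

    public static void main(String[] args) throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://example.com/").openConnection();
        conn.connect();

        // The first certificate in the chain is the server's own certificate.
        Certificate serverCert = conn.getServerCertificates()[0];
        String actualPin = Base64.getEncoder().encodeToString(
                MessageDigest.getInstance("SHA-256")
                        .digest(serverCert.getPublicKey().getEncoded()));

        if (!EXPECTED_PIN.equals(actualPin)) {
            conn.disconnect();
            throw new SecurityException("Server key does not match pinned key: " + actualPin);
        }

        // Only process the response after the pin check has passed.
        try (InputStream in = conn.getInputStream()) {
            System.out.println("Pinned connection OK, " + in.available() + " bytes buffered");
        }
    }
}
```

Mobile HTTP client libraries commonly offer built-in support for the same check, but the principle is identical: compare what the server actually presented with what you expected and abort the connection on a mismatch.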
Scenario: Sensitive information is exchanged (1) from client to server AND (2) from server to client.
Problem: Data exchanged is not encrypted, so sniffing is easy (it's always theoretically possible, right?)
Solution: Encrypt all data transmitted in either direction (server-to-client and client-to-server).
Implementation:
(1) Client to server - Generate a certificate, install the private key on the server and configure Tomcat to work over HTTPS (many tutorials for this online).
(2) Server to client - The private key goes to (or is generated by) the clients; however, some tutorials strongly emphasize that every client should have their own certificate for the sake of authentication.
Question: If I am already authenticating my users through a database username/password (hashed with salt) combo, but I still need to encrypt server-to-client data transmissions to reduce chance of sniffing, can I just generate one private key for all clients? Are there other ways of achieving what I need with Tomcat/Spring?
It seems you're mixing something up:
Regular HTTPS encrypts traffic in both directions and needs only a private key + certificate on the server side. Once a client requests resources through HTTPS, it gets the answer encrypted. So you'll just need to enforce the HTTPS connection (e.g. by redirecting requests to HTTPS and never delivering data over plain HTTP; a redirect sketch follows below).
If you want client certificates, these are purely used for client authentication, so sharing a common client key/certificate with all possible clients will defeat this purpose. Having client keys/certs does not add any more encryption to your data transfer.
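As a concrete illustration of the first point, here is a rough sketch of a servlet filter that bounces plain-HTTP requests to their HTTPS equivalent; the class name and mapping are up to you, and a `<security-constraint>` with a CONFIDENTIAL transport guarantee in web.xml achieves the same effect declaratively:

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Redirects any plain-HTTP request to its HTTPS equivalent so that no data is
// ever delivered over an unencrypted connection. Map it to /* in web.xml.
public class HttpsRedirectFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
        // nothing to configure
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        if (!request.isSecure()) {
            StringBuilder target = new StringBuilder("https://")
                    .append(request.getServerName())
                    .append(request.getRequestURI());
            if (request.getQueryString() != null) {
                target.append('?').append(request.getQueryString());
            }
            response.sendRedirect(target.toString());
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}
```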
Answering to your follow-up question in the comment:
For HTTPS, the server keeps its private key; the public key is what is shared with the client. With typical HTTPS, the client can be reasonably sure who the server is (authentication, done through the trusted signature on the server's public key; this is what you pay certificate authorities for). However, the server has no clue who the client is (this is where client certificates would come into play, but purely for authentication, not for encryption).
Server and client negotiate a common session key; there are many different key-exchange protocols for this purpose. This forum is probably not the right place to describe session negotiation and the SSL handshake again, but you can be sure that you only need a server-side key for the purpose you describe above. Take any website as an example: if you go to Google Mail, their HTTPS encryption works through them having a private key and a certified (signed) public key. You have no client-side certificate, but provide your username and password over the encrypted connection. Otherwise you'd have to install a client-side key/certificate for a lot of services, and that would be too much of a burden for the average internet user.
Hope that helps.
I am writing a little app similar to Omegle. I have an HTTP server written in Java and a client which is an HTML document. The main means of communication is HTTP requests (long polling).
I've implemented some sort of security by using the HTTPS protocol, and I have a securityid for every client that connects to the server. When the client connects, the server gives it a securityid which the client must always send back when it makes a request.
I am afraid of a man-in-the-middle attack here; do you have any suggestions on how I could protect the app from such an attack?
Note that this app is built for theoretical purposes; it won't ever be used in practice, so your solutions don't necessarily have to be practical.
HTTPS does not only provide encryption, but also authentication of the server. When a client connects, the server shows it has a valid and trusted certificate for its domain. This certificate cannot simply be spoofed or replayed by a man-in-the-middle.
Simply enabling HTTPS is not good enough because the web brings too many complications.
For one thing, make sure you set the Secure flag on the cookies, or else they can be stolen over plain HTTP.
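In servlet terms that could look like the sketch below (assuming your Java server exposes a servlet-style API; if you build responses by hand, the equivalent is emitting a Set-Cookie header with the Secure and HttpOnly attributes):

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class SecureCookies {

    /** Issues the security id so the browser will only ever send it back over HTTPS. */
    static void issueSecurityId(HttpServletResponse response, String securityId) {
        Cookie idCookie = new Cookie("securityid", securityId);
        idCookie.setSecure(true);    // never transmitted over plain HTTP
        idCookie.setHttpOnly(true);  // not readable from page JavaScript (Servlet 3.0+)
        idCookie.setPath("/");
        response.addCookie(idCookie);
    }
}
```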
It's also a good idea to ensure users only access the site by typing https://<yourdomain> in the address bar; this is the only way to ensure an HTTPS session is made with a valid certificate. When you type https://<yourdomain>, the browser will refuse to let you onto the site unless the server provides a valid certificate for <yourdomain>.
If you just type <yourdomain> without https:// in front, the browser won't care what happens. This has two implications I can think of off the top of my head:
The attacker redirects to some Unicode domain with a similar name (i.e. it looks the same but has a different binary string and is thus a different domain) and then provides a valid certificate for that domain (since he owns it); the user probably wouldn't notice this...
The attacker could emulate the server but without HTTPS; he would make his own secure connection to the real server and become a cleartext proxy between you and the server. He can now capture all your traffic and do anything he wants because he owns your session.
We currently have a group of web-services exposing interfaces to a variety of different client types and roles.
Background:
Authentication is handled through SSL Client Certificate Verification. This is currently being done in web-service code (not by the HTTP server). We don't want to use any scheme less secure than this. This post is not talking about Authorisation, only Authentication.
The web-services talk both SOAP and REST(JSON) and I'm definitely not interested in starting a discussion about the merits of either approach.
All operations exposed via the web-services are stateless.
My problem is that verifying the client certificate on each request is very heavyweight, and easily dominates CPU time on the application server. I've already tried separating the Authentication & Application portions onto different physical servers to reduce load, but that doesn't improve dispatch speed overall - the request still takes a constant time to authenticate, no matter where that is done.
I'd like to try limiting the number of authentications by generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped (though still talking over SSL). I'd also like to time-limit the sessions, and make the processes as transparent as possible from a client perspective.
My questions:
Is this still as secure? (and how can we optimise for security and pragmatism?)
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Given the above, should we continue to do Authentication in-application, or move it into the server?
generating an HTTP cookie (with an associated server-side session) after successful client certificate verification, which when supplied by the client will cause client certificate verification to be skipped
Is this still as secure? (and how can we optimise for security and pragmatism?)
It's not quite as secure in theory, because the server can no longer prove to itself that there's not a man-in-the-middle.
When the client presents a client-side certificate, the server can trust it cryptographically. The client and server should be encrypting data (well, the session key) based on the client's key. Without a client-side cert, the server can only hope that the client has done a good job of validating the server's certificate (as perceived by the client) and has thereby eliminated the possibility of Mr. MitM.
An out-of-the-box Windows client trusts over 200 root CA certificates. In the absence of a client-side cert, the server ends up trusting them by extension.
Here's a nice writeup of what to look for in a packet capture to verify that a client cert is providing defense against MitM:
http://www.carbonwind.net/ISA/ACaseofMITM/ACaseofMITMpart3.htm
Explanation of this type of MitM.
http://www.networkworld.com/community/node/31124
This technique is actually used by some firewall appliance boxes to perform deep inspection of the SSL traffic.
MitM used to seem like a big Mission Impossible-style production that took a lot to pull off. Really though it doesn't take any more than a compromised DNS resolver or router anywhere along the way. There are a lot of little Linksys and Netgear boxes out there in the world and probably two or three of them don't have the latest security updates.
In practice, this seems to be good enough for major financial institutions' sites, although recent evidence suggests that their risk assessment strategies are somewhat less than ideal.
Are there free implementations of this scheme? (I'm aware of the SiteMinder product by CA)
Just a client-side cookie, right? That seems to be a pretty standard part of every web app framework.
Given the above, should we continue to do Authentication in-application, or move to in-server ?
Hardware crypto accelerators (either an SSL proxy front end or an accelerator card) can speed this stuff up dramatically.
Moving the cert validation into the HTTP server might help. You may be duplicating some of the crypto math anyway.
See if you would benefit from a cheaper algorithm or smaller key size on the client certs.
Once you validate a client cert, you could try caching a hash digest of it (or even the whole thing) for a short time. That might save you from having to repeat the signature validations all the way up the chain of trust on every hit (see the sketch below).
How often do your clients transact? If the ones making up the bulk of your transactions are hitting you frequently, you may be able to convince them to combine multiple transactions in a single SSL negotiation/authentication. Look into setting the HTTP Keep-Alive header. They may be doing that already to some extent. Perhaps your app is doing client cert validation on every HTTP request/response, or just once at the beginning of each session?
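Here is a rough sketch of the digest-caching idea, assuming your authentication code can get at the verified client certificate; the five-minute TTL and the choice of a SHA-256 fingerprint are arbitrary choices for illustration:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.security.cert.CertificateEncodingException;
import java.security.cert.X509Certificate;
import java.util.Base64;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Sketch of "cache a digest of the verified cert": full chain validation runs
// only on a cache miss, and cached entries expire after a short TTL.
public class VerifiedCertCache {

    private static final long TTL_MILLIS = 5 * 60 * 1000; // 5-minute window; tune to taste

    private final ConcurrentMap<String, Long> verifiedUntil = new ConcurrentHashMap<>();

    /** Returns true if this exact certificate was fully verified recently. */
    public boolean isRecentlyVerified(X509Certificate cert)
            throws CertificateEncodingException, NoSuchAlgorithmException {
        Long expiry = verifiedUntil.get(fingerprint(cert));
        return expiry != null && expiry > System.currentTimeMillis();
    }

    /** Call after a successful full verification of the client certificate. */
    public void markVerified(X509Certificate cert)
            throws CertificateEncodingException, NoSuchAlgorithmException {
        verifiedUntil.put(fingerprint(cert), System.currentTimeMillis() + TTL_MILLIS);
    }

    private static String fingerprint(X509Certificate cert)
            throws CertificateEncodingException, NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(cert.getEncoded());
        return Base64.getEncoder().encodeToString(digest);
    }
}
```

The point is simply that a cache hit lets you skip the expensive chain validation, while a miss falls back to the full check and then records the fingerprint.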
Anyway, those are some ideas, best of luck!