Security of Objective-C POST method (iOS) over HTTPS - xcode

I am using the following code to post to a server from Objective-C under iOS 7. It should be mentioned that this post IS over SSL.
NSString *externalURL = @"https://someurl";
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:externalURL]];
request.HTTPMethod = @"POST";
[request setValue:@"application/x-www-form-urlencoded;charset=utf-8" forHTTPHeaderField:@"Content-Type"];
// Build the form-encoded body from the message fields
NSString *postDataStr = [NSString stringWithFormat:@"auth=%s&id=%@&title=%@&name=%@&msg=%@&sec=%@&img=%@&code=%@", AUTH_CODE, channelID, channelTitle, screenName, msg, secName, imgKey, passCode];
NSData *requestBodyData = [postDataStr dataUsingEncoding:NSUTF8StringEncoding];
[request setHTTPBody:requestBodyData];
// Send the request on the existing NSURLSession
NSURLSessionDataTask *postDataTask = [self.session dataTaskWithRequest:request];
[postDataTask setTaskDescription:@"postMessage"];
[postDataTask resume];
My intention is to use the "auth" you see above to protect the server from accepting a call from another source. Again, I am transmitting this over SSL, but I am wondering if it is possible for the user to intercept the call before it goes over SSL and potentially see the value sent for "auth"? If this can be intercepted then the whole notion of using an authorization code like this becomes pretty much useless.
---Update----
As a general update for anyone coming across this thread, I have decided to approach this problem as follows knowing unfortunately there are still possible holes.
I am using the values from the data I am sending the service, combined with a secret key known to my app and the server, to create a SHA-256 hash. I send this hash along with the data to the server. The server then also computes the hash and if the two are equal the request is processed. I've used this process elsewhere to verify passwords. The obvious hole here is that if someone gets a hold of my secret key the jig is up. This is far more likely to occur on the client than the server. They would need to disassemble the app, which would expose the key. So not perfect, but the best I have for now.
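For illustration, the keyed hash described above is essentially what the standard HMAC-SHA256 construction provides. Below is a minimal sketch (in Java rather than Objective-C, purely for brevity; on iOS the same primitive is available via CommonCrypto's CCHmac). The SECRET constant and the sign method are hypothetical placeholders for the shared key and the exact body being signed:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public class RequestSigner {
    // Placeholder for the shared secret known to both the app and the server.
    private static final String SECRET = "replace-with-shared-secret";

    // Compute an HMAC-SHA256 tag over the exact POST body; the server recomputes
    // the same tag from the received parameters and compares the two.
    public static String sign(String postBody) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SECRET.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] tag = mac.doFinal(postBody.getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : tag) hex.append(String.format("%02x", b & 0xff));
        return hex.toString(); // sent alongside the data, e.g. as an extra "sig" parameter
    }
}

As noted, this still depends on the secret staying secret; anyone who extracts it from the app binary can produce valid signatures.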

In general it's hard to give a useful answer to security questions until you state a threat model. Without knowing what or who you are attempting to protect against there's no way to evaluate what protection, if any, a given scheme provides.
Is your intent to keep a shared secret (AUTH_CODE) which is known to the server and client apps but not to the users of those apps, who control the devices they run on? If so then this is a pointless exercise. As the owner of my device I can man-in-the-middle my own SSL connections with a trusted cert and read the content of their requests and responses, I can observe messages sent to NSURLSession and other classes, and I can dig through installed apps to identify constants and other resources. This sort of approach will be broken the moment someone finds it useful to do so.
If your intent is to prohibit third parties from connecting to your service then such an approach is still likely to fail. Without the ability to inspect a request they may be unable to reconstruct this token; however, all they have to do is download the app to promote themselves into the case above. They are then free to extract this token and use it in their own clients. Additionally, if this is a globally shared secret then it only needs to be compromised once by one user, and it can then be shared with anyone interested in connecting to your server. Once again I suspect such an approach will last only until someone finds it useful to break.
In fact I will argue that there is nothing you can do to successfully guard against the first case if your users are determined to use their own client to connect to your system. No matter how convoluted you make the system you have to hand it over to the end users and at that point they are free to reverse engineer it.
There are however two things you can do which might mitigate whatever threat you are concerned about.
Establish per-user sessions rather than global shared secrets. This could mean requiring a set of log-in credentials (possibly via a third party platform) or verifying a receipt with a unique transaction id proving a purchase of the app. Such credentials can still be shared by many users but at least you can then act on that shared account.
Accept that you cannot trust clients to be well behaved and design your back-end system to account for that.
What threat do you actually face, and why do you think it is important that you be able to identify "valid" clients?

Related

Encrypting OkHttp's HttpResponseCache

Are there any examples of encrypting the disk cache used by OkHttp's HttpResponseCache? Naively, I don't think this is a very hard thing to do, but I'd appreciate any advice or experience to help avoid security pitfalls.
Without too many specifics, here's what I'm trying to achieve: a server that accepts users' api-keys (typically a 40-character random string) for established service X, and makes many API calls on the users' behalf. The server won't persist users' api-keys, but a likely use case is that users will periodically call the server, supplying the api-key each time. Established service X uses reasonable rate-limiting, but supports conditional (ETag, If-Modified-Since) requests, so server-side caching by my server makes sense. The information is private though, and the server will be hosted on Heroku or the like, so I'd like to encrypt the files cached by HttpResponseCache so that if the machine is compromised, they don't yield any information.
My plan would be to create a wrapper around HttpResponseCache that accepts a secret key - which would actually be a hash of half of the api-key string. This would be used to AES-encrypt the cached contents and keys used by HttpResponseCache. Does that sound reasonable?
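Roughly, the crypto half of that plan would look like the sketch below (key derivation from half of the api-key, then AES-GCM per cache entry); the cache integration itself is the hard part, as the answer that follows points out. Names are illustrative only, and a real implementation should prefer a proper KDF such as PBKDF2 or HKDF over a bare SHA-256:

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class CacheCrypto {
    // Derive a 256-bit AES key from half of the user's api-key, as described above.
    static SecretKeySpec keyFromApiKey(String apiKey) throws Exception {
        String half = apiKey.substring(0, apiKey.length() / 2);
        byte[] digest = MessageDigest.getInstance("SHA-256")
                .digest(half.getBytes(StandardCharsets.UTF_8));
        return new SecretKeySpec(digest, "AES");
    }

    // Encrypt one cache entry with AES-GCM; the random IV is prepended to the ciphertext.
    static byte[] encryptEntry(SecretKeySpec key, byte[] plaintext) throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(plaintext);
        byte[] out = new byte[iv.length + ciphertext.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ciphertext, 0, out, iv.length, ciphertext.length);
        return out;
    }
}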
Very difficult to do with the existing cache code. It's a journaled on-disk data structure that is not designed to support privacy, and privacy is not a feature you can add on top.
One option is to mount an encrypted disk image and put the cache in there. Similar to Mac OS X's FileVault for example. If you can figure out how to do that, you're golden.
Your other option is to implement your own cache, using the existing cache as a guide. Fair warning: the OkResponseCache is subject to change in the next release!

Store encrypted https query strings in lieu of user credentials?

I'm building an app for which I need to store my users' login credentials for a 3rd party service. Communication with the 3rd party service is done via https GET requests.
From what I've seen, looking at posts like this one, there's no clear answer as to the best practices for doing this, and the specific methods discussed in that post at least all leave something to be desired.
So one thought I had was that perhaps it'd be possible to get around the problem by "pre-encrypting" the query string for the 3rd party request and storing that encrypted data in my db in lieu of storing the users' credentials directly. This way I can store the credentials in an encrypted form but not worry about the key being compromised, as it's held by the 3rd party, not me. And if my db were compromised the intruder wouldn't get anything more than he could obtain by packet sniffing.
I can't seem to find any examples of anyone doing something like this, so I'd like feedback on whether the community thinks it's a reasonable approach. Beyond that, a little help on how exactly to do it would be great. I'm building my app in node.js/express, and currently I'm just using the https module to handle communication with the 3rd party, but clearly I'd have to go at it at a lower level in order to take this approach.
The basic process would be:
Do the same thing as https.request in order to establish an ssl/tls connection to the 3rd party and encrypt the query string containing the user's credentials
Stop short of actually sending the encrypted data to the 3rd party and instead store it in my db
At a later time, "reconstruct" the https connection with the stored data and send it to the 3rd party, process response, win
That won't work, sorry. HTTPS negotiates a new session key for each connection, so the data would look different on the wire with each new request.

DotNetOpenAuth on web farm

I am implementing DotNetOpenAuth for both an OpenId provider and a relying party. In both cases, the servers are behind a load balancer, so for any HTTP request, we can't assume that we'll hit the same server.
It appears that DotNetOpenAuth depends on the Session to store a pending request key. Because the server may change between requests, we can't depend on the standard InProc Session. Unfortunately, we've been unable to successfully implement SQL as the store for Session.
My question is: is it safe to store a PendingAuthenticationRequest as a client cookie? Any worse than using Session?
The ProviderEndpoint.PendingAuthenticationRequest property is there for your convenience only, primarily for simpler scenarios. If it doesn't work for you, by all means store it another way and totally ignore this property. No harm done there.
Ultimately a session is tracked by an HTTP cookie, so you can certainly store the auth request state entirely in a cookie if you prefer, so that it works in a web farm environment. Another approach is to not require the client (or the server) to track state at all, by either handling everything (including authentication) directly at the OP Endpoint URL, or by redirecting the user from the OP Endpoint URL with a query string that includes all the state information you need to track. Be careful with the latter approach though, since you'll be exposing your state data for the user to see and possibly tamper with.
In short, you may or may not choose to store user sessions in a SQL store. That should be fine. The issue I think you ran into (that we discussed by email) was that you needed to implement your own IProviderApplicationStore, which will store nonces and associations in a database that is shared across all your web servers. This is imperative to do, and is orthogonal to the user session state since this is stored at the application level.

What's the best way to store Logon User information for Web Application?

I was once on a project for a web application developed on ASP.NET. For each logon user, there is an object (let's call it UserSessionObject here) created and stored in RAM. For each HTTP request of a given user, the matching UserSessionObject instance is used to access user state information and the connection to the database. So, this UserSessionObject is pretty important.
This design brings several problems found later:
1) Since this UserSessionObject is cached in ASP.NET memory space, we have to configure the load balancer to use sticky connections. That is, HTTP requests within a single session are always sent to the same web server behind it. This limits scalability and maintainability.
2) This UserSessionObject is accessed on every HTTP request. To keep consistency, there is an exclusive lock on the UserSessionObject. Only one HTTP request can be processed at any given time because it must obtain the lock first. Performance and response time are affected.
Now, I'm wondering whether there is a better design to handle such a logon-user case.
It seems a shared-nothing architecture would help. That would mean the logon user info is retrieved from the database on each request. I'm afraid that would hurt performance.
Is there any design pattern for logon users in a web app?
Thanks.
Store session state in the database and put memcached in front of it.
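A minimal sketch of the read path for that setup (the cache-aside pattern; shown in Java purely for illustration, with hypothetical Cache and Database interfaces standing in for a real memcached client and your data layer):

import java.util.Optional;

public class SessionStore {
    // Hypothetical stand-ins for a memcached client and the session table in the database.
    interface Cache { Optional<UserSession> get(String key); void put(String key, UserSession s, int ttlSeconds); }
    interface Database { UserSession loadSession(String sessionId); }
    record UserSession(String userId, String displayName) {}

    private final Cache cache;
    private final Database db;

    public SessionStore(Cache cache, Database db) { this.cache = cache; this.db = db; }

    // Check memcached first, fall back to the database, and repopulate the cache on a miss.
    public UserSession find(String sessionId) {
        return cache.get(sessionId).orElseGet(() -> {
            UserSession s = db.loadSession(sessionId);     // the authoritative copy lives in the DB
            if (s != null) cache.put(sessionId, s, 1800);  // cache for 30 minutes
            return s;
        });
    }
}

Because the authoritative copy is in the database, any web server behind the load balancer can serve any request, so sticky sessions and the in-memory lock are no longer needed.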
One method discussed on StackOverflow and elsewhere is the signed cookie: a cookie that carries information you would otherwise not be able to trust, along with a hash created in such a way that only your server could have created it, so you know the information is valid. This is a scalable way to save non-high-security information, such as the username. You don't have to access any shared resource to confirm that the user is logged in as long as the signed cookie meets all criteria (you should include a date stamp, to keep cookie theft from being a long-term issue, and you should also keep in mind that the user has not re-authenticated, so they should have no access to more secure information without going through the usual login process). A sketch of the signing and verification follows the link below.
StackOverflow: Tips on signed cookies instead of sessions
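As a concrete illustration of that idea, here is a minimal sketch of creating and verifying such a cookie value (Java for illustration; names are hypothetical, and a real version would also need to handle a delimiter character appearing inside the username):

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class SignedCookie {
    // Server-side secret; never sent to the client.
    private static final byte[] SERVER_KEY = "replace-with-server-secret".getBytes(StandardCharsets.UTF_8);

    // Build a cookie value of the form "username|expiresAtMillis|signature".
    public static String create(String username, long expiresAtMillis) throws Exception {
        String payload = username + "|" + expiresAtMillis;
        return payload + "|" + hmac(payload);
    }

    // Trust the payload only if the signature matches and the date stamp has not expired.
    public static boolean isValid(String cookieValue) throws Exception {
        int i = cookieValue.lastIndexOf('|');
        if (i < 0) return false;
        String payload = cookieValue.substring(0, i);
        String sig = cookieValue.substring(i + 1);
        long expires = Long.parseLong(payload.substring(payload.lastIndexOf('|') + 1));
        boolean fresh = System.currentTimeMillis() < expires;
        boolean authentic = MessageDigest.isEqual(
                hmac(payload).getBytes(StandardCharsets.UTF_8),
                sig.getBytes(StandardCharsets.UTF_8));
        return fresh && authentic;
    }

    private static String hmac(String payload) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(SERVER_KEY, "HmacSHA256"));
        return Base64.getUrlEncoder().withoutPadding()
                .encodeToString(mac.doFinal(payload.getBytes(StandardCharsets.UTF_8)));
    }
}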

Where should you enable SSL?

My last couple of projects have involved websites that sell a product/service and require a 'checkout' process in which users put in their credit card information and such. Obviously we got SSL certificates for the security of it plus giving peace of mind to the customers. I am, however, a little clueless as to the subtleties of it, and most importantly as to which parts of the website should 'use' the certificate.
For example, I've been to websites where the moment you hit the homepage you are put in https - mostly banking sites - and then there are websites where you are only put in https when you are finally checking out. Is it overkill to make the entire website run through https if it doesn't deal with something on the level of banking? Should I only make the checkout page https? What is the performance hit on going all out?
I personally go with "SSL from go to woe".
If your user never enters a credit card number, sure, no SSL.
But there's an inherent possible security leak from the cookie replay.
User visits site and gets assigned a cookie.
User browses site and adds data to cart ( using cookie )
User proceeds to payment page using cookie.
Right here there is a problem, especially if you have to handle payment negotiation yourself.
You have to transmit information from the non-secure domain to the secure domain, and back again, with no guarantees of protection.
If you do something dumb like share the same cookie between the unsecure and secure parts of the site, you may find some browsers (rightly) will just drop the cookie completely (Safari) for the sake of security, because if somebody sniffs that cookie in the open, they can forge it and use it in the secure mode too, degrading your wonderful SSL security to zero; and if the card details ever get even temporarily stored in the session, you have a dangerous leak waiting to happen.
If you can't be certain that your software is not prone to these weaknesses, I would suggest SSL from the start, so their initial cookie is transmitted securely.
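One concrete safeguard against leaking that cookie over plain HTTP is to mark it as Secure (and HttpOnly) when it is issued, so the browser will never send it on an unencrypted connection. A servlet-style sketch, purely for illustration since the question isn't tied to a particular platform:

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

// Issue the session cookie so it is only ever transmitted over HTTPS
// and cannot be read from client-side script.
void issueSessionCookie(HttpServletResponse response, String sessionId) {
    Cookie cookie = new Cookie("SESSIONID", sessionId);
    cookie.setSecure(true);    // never sent over plain HTTP
    cookie.setHttpOnly(true);  // not accessible to JavaScript
    cookie.setPath("/");
    response.addCookie(cookie);
}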
If the site is for public usage, you should probably put the public parts on HTTP. This makes things easier and more efficient for spiders and casual users. HTTP requests are much faster to initiate than HTTPS and this is very obvious especially on sites with lots of images.
Browsers also sometimes have a different cache policy for HTTPS than HTTP.
But it's alright to put them into HTTPS as soon as they log on, or just before. At the point at which the site becomes personalised and non-anonymous, it can be HTTPS from there onwards.
It's a better idea to use HTTPS for the log-on page itself as well as any other forms, as it gives the user the padlock before they enter their info, which makes them feel better.
I have always done it on the entire website.
I too would use HTTPS all the way. This doesn't have a big performance impact (since browsers cache the negotiated symmetric key after the first connection) and protects against sniffing.
Sniffing was once on its way out because of fully switched wired networks, where you would have to work extra hard to capture anyone else's traffic (as opposed to networks using hubs), but it's on its way back because of wireless networks, which create a broadcast medium once again and make session hijacking easy, unless the traffic is encrypted.
I think a good rule of thumb is forcing SSL anywhere where sensitive information is going to possibly be transmitted. For example: I'm a member of Wescom Credit Union. There's a section on the front page that allows me to log on to my online bank account. Therefore, the root page forces SSL.
Think of it this way: will sensitive, private information be transmitted? If yes, enable SSL. Otherwise you should be fine.
In our organization we have three classifications of applications -
Low Business Impact - no PII, clear-text storage, clear-text transmission, no access restrictions.
Medium Business Impact - non-transactional PII e.g. email address. clear-text storage, SSL from datacenter to client, clear-text in data center, limited storage access.
High Business Impact - transactional data e.g. SSN, Credit Card etc. SSL within and outside of datacenter. Encrypted & Audited Storage. Audited applications.
We use these criteria to determine partitioning of data, and which aspects of the site require SSL. SSL processing is done either on the server or through accelerators such as Netscaler. As the level of PII increases, so does the complexity of the audit and threat modelling.
As you can imagine we prefer to do LBI applications.
Generally, any time you're transmitting sensitive or personal data you should be using SSL - e.g. adding an item to a basket probably doesn't need SSL, but logging in with your username/password or entering your CC details should be encrypted.
I only ever redirect my sites to SSL when they require the user to enter sensitive information. With a shopping cart, as soon as they have to fill out a page with their personal information or credit card details I redirect them to an SSL page.
For the rest of the site it's probably not needed - if they are just viewing information/products on your commerce site.
SSL is pretty computationally intensive and should not be used to transmit large amounts of data if possible. Therefore it would be better to enable it at the checkout stage, where the user would be transmitting sensitive information.
There is one major downside to a full https site, and it's not the speed (that's OK).
It will be very hard to run YouTube, "Like" boxes, etc. without the insecure-content warning.
We have been running a fully secured website and shop for two years now, and this is the biggest drawback. We managed to get YouTube to work, but the "AddThis" widget is still a big challenge. And if they change anything in the protocol, it could be that all our YouTube videos go blank...
