Why do some scenarios require both ciphering and integrity, whereas others require only ciphering? What factors decide this in the networking domain?
Most systems that do ciphering also provide message integrity along the way, so your question is really posing a false dichotomy.
Ciphering is needed when you want only authorized parties to be able to READ the data. Integrity is needed when you want only authorized parties to be able to MODIFY the data, and any tampering to be detectable.
As you can see, both ciphering and integrity require an authentication and authorization phase beforehand.
Ex: data could be ciphered with different private keys and deciphered with the corresponding public keys, as in digital signatures. These phases depend on the authentication and authorization phase.
Ex: when you connect via HTTPS, the first phase is a negotiation of the correct certificate. Typically the client authorizes the server by checking the trust of the certificate chain.
Ex: you have to access data in your central DB. The data may or may not be ciphered, but access to the key and/or the data must happen only after an authentication and authorization check.
I hope my considerations help you.
Encryption protects your data in transit, but it doesn't prove who you are. Adding an integrity control (such as a MAC or digital signature) also ties the message to an identity.
A scenario:
I can encrypt data between an ATM and a bank's server. No one can sniff this traffic and decrypt it, so you can assume that it's "secure". But there's nothing to stop an intermediary from replaying those transactions, or from replaying traffic seen at a different ATM location, even if the attacker doesn't know what the transaction actually contains. The transactions are not linked to any specific ATM as an entity. So if I withdraw $100, then an intermediary can replay the traffic exchange 10 times from multiple locations and cause me to withdraw $1000.
Adding an integrity control to the exchange can lock the transaction to only a single system and also prove that the transaction was not modified. So, for example, I can get the ATM to sign a digitally timestamped copy of each transaction. Now when the encrypted traffic is replayed, the server can tell that it's a false transaction, as the timestamp will be old. Or if a transaction from a similar ATM at a different location is replayed, then the server can ascertain that it's talking to a different identity than the one actually expected. So while encryption secures the transaction channel, integrity makes sure that the two endpoints decrypting the traffic are actually talking to the party they expect.
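A minimal sketch of that integrity control in Python. The function names and the per-ATM key are hypothetical, and a real deployment would also track nonces or sequence numbers rather than relying on timestamps alone:

```python
import hmac, hashlib, time

SECRET = b"per-atm-shared-key"  # hypothetical key known only to this ATM and the server

def sign_transaction(atm_id: str, amount: int, timestamp: float) -> str:
    """Sign a transaction so the server can verify its origin and freshness."""
    msg = f"{atm_id}|{amount}|{timestamp}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_transaction(atm_id, amount, timestamp, tag, max_age=30.0):
    """Reject tampered, replayed (stale), or foreign-ATM transactions."""
    expected = sign_transaction(atm_id, amount, timestamp)
    if not hmac.compare_digest(expected, tag):
        return False  # modified in transit, or signed by a different ATM's key
    if time.time() - timestamp > max_age:
        return False  # stale timestamp: likely a replay
    return True

now = time.time()
tag = sign_transaction("ATM-17", 100, now)
assert verify_transaction("ATM-17", 100, now, tag)        # genuine transaction
assert not verify_transaction("ATM-17", 1000, now, tag)   # amount was modified
assert not verify_transaction("ATM-18", 100, now, tag)    # replayed from another ATM
```

The timestamp check is what defeats the replay-10-times attack described above, and the keyed MAC is what binds the transaction to one specific ATM.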
I am using the following code to post to a server from Objective-C under iOS 7. It should be mentioned that this post IS over SSL.
NSString *externalURL = @"https://someurl";
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:[NSURL URLWithString:externalURL]];
request.HTTPMethod = @"POST";
[request setValue:@"application/x-www-form-urlencoded;charset=utf-8" forHTTPHeaderField:@"Content-Type"];
NSString *postDataStr = [NSString stringWithFormat:@"auth=%s&id=%@&title=%@&name=%@&msg=%@&sec=%@&img=%@&code=%@",AUTH_CODE,channelID,channelTitle,screenName,msg,secName,imgKey,passCode];
NSData *requestBodyData = [postDataStr dataUsingEncoding:NSUTF8StringEncoding];
[request setHTTPBody:requestBodyData];
NSURLSessionDataTask *postDataTask = [self.session dataTaskWithRequest:request];
[postDataTask setTaskDescription:@"postMessage"];
[postDataTask resume];
My intention is to use the "auth" you see above to protect the server from accepting a call from another source. Again, I am transmitting this over SSL, but I am wondering if it is possible for the user to intercept the call before it goes over SSL and potentially see the value sent for "auth". If this can be intercepted, then the whole notion of using an authorization code like this becomes pretty much useless.
---Update----
As a general update for anyone coming across this thread, I have decided to approach this problem as follows knowing unfortunately there are still possible holes.
I am using the values from the data I am sending the service, combined with a secret key known to my app and the server, to create a SHA-256 hash. I send this hash along with the data to the server. The server then also computes the hash, and if the two are equal the request is processed. I've used this process elsewhere to verify passwords. The obvious hole here is that if someone gets hold of my secret key, the jig is up. This is far more likely to occur on the client than the server: they would need to disassemble the app, which would expose the key. So not perfect, but the best I have for now.
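The scheme described above can be sketched as follows. This version uses HMAC-SHA256 via Python's `hmac` module rather than a plain hash of data plus secret, since HMAC is the standard keyed-hash construction and avoids length-extension issues; the parameter names are illustrative:

```python
import hmac, hashlib
from urllib.parse import urlencode

SECRET_KEY = b"shared-app-server-secret"  # hypothetical secret baked into app and server

def sign_params(params: dict) -> str:
    # Canonicalize by sorting keys so client and server hash the same byte string
    canonical = urlencode(sorted(params.items())).encode()
    return hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()

# Client side: attach the signature to the POST body
params = {"id": "42", "title": "hello", "msg": "world"}
params_with_sig = dict(params, sig=sign_params(params))

# Server side: strip the signature, recompute, and compare in constant time
received_sig = params_with_sig.pop("sig")
assert hmac.compare_digest(received_sig, sign_params(params_with_sig))
```

As the answer below notes, this still stands or falls on keeping the shared secret out of attackers' hands, which is hard once it ships inside an app binary.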
In general it's hard to give a useful answer to security questions until you state a threat model. Without knowing what or who you are attempting to protect against there's no way to evaluate what protection, if any, a given scheme provides.
Is your intent to keep a shared secret (AUTH_CODE) which is known to the server and client apps but not to the users of those apps, who control the devices they run on? If so, then this is a pointless exercise. As the owner of my device I can man-in-the-middle my own SSL connections with a trusted cert and read the content of the app's requests and responses, I can observe messages sent to NSURLSession and other classes, and I can dig through installed apps to identify constants and other resources. This sort of approach will be broken the moment someone finds it useful to do so.
If your intent is to prohibit third parties from connecting to your service, then such an approach is still likely to fail. Without the ability to inspect a request they may be unable to reconstruct this token, but all they have to do is download the app to promote themselves into the case above. They are then free to extract this token and use it in their own clients. Additionally, if this is a globally shared secret, then it only needs to be compromised once by one user, and it can then be shared with anyone interested in connecting to your server. Once again, I suspect such an approach will last only until someone finds it useful to break it.
In fact I will argue that there is nothing you can do to successfully guard against the first case if your users are determined to use their own client to connect to your system. No matter how convoluted you make the system you have to hand it over to the end users and at that point they are free to reverse engineer it.
There are however two things you can do which might mitigate whatever threat you are concerned about.
Establish per-user sessions rather than global shared secrets. This could mean requiring a set of log-in credentials (possibly via a third party platform) or verifying a receipt with a unique transaction id proving a purchase of the app. Such credentials can still be shared by many users but at least you can then act on that shared account.
Accept that you cannot trust clients to be well behaved and design your back-end system to account for that.
What threat do you actually face, and why do you think it is important that you be able to identify "valid" clients?
Are there any examples of using encryption to encrypt the disk-cache used by OkHttp's HttpResponseCache? Naively, I don't think this is a very hard thing to do, but I'd appreciate any advice or experience to avoid security-pitfalls.
Without too many specifics, here's what I'm trying to achieve: a server that accepts users' api-keys (typically a 40-character random string) for established service X, and makes many API calls on the users' behalf. The server won't persist users' api-keys, but a likely use case is that users will periodically call the server, supplying the api-key each time. Established service X uses reasonable rate-limiting, but supports conditional (ETag, If-Modified-Since) requests, so server-side caching by my server makes sense. The information is private, though, and the server will be hosted on Heroku or the like, so I'd like to encrypt the files cached by HttpResponseCache so that if the machine is compromised, they don't yield any information.
My plan would be to create a wrapper around HttpResponseCache that accepts a secret key - which would actually be a hash of half of the api-key string. This would be used to AES-encrypt the cached contents and keys used by HttpResponseCache. Does that sound reasonable?
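As a sketch of the key-derivation step described above (Python, standard library only; the salt and iteration count are illustrative), stretching half of the api-key with PBKDF2 rather than a bare hash makes offline guessing of that half harder if a derived key ever leaks:

```python
import hashlib

def derive_cache_key(api_key: str) -> bytes:
    """Derive a 256-bit AES key from half of the user's API key.

    Hypothetical sketch of the scheme in the question: the first half of
    the 40-char API key is stretched with PBKDF2 instead of a single hash.
    """
    half = api_key[: len(api_key) // 2].encode()
    salt = b"cache-encryption-v1"  # illustrative static salt; per-user salts are better
    return hashlib.pbkdf2_hmac("sha256", half, salt, 100_000, dklen=32)

key = derive_cache_key("0123456789abcdefghij0123456789abcdefghij")
assert len(key) == 32  # suitable length for AES-256
```

The derived key would then feed an authenticated AES mode (e.g. AES-GCM) wrapped around the cache's read and write paths, though as the answer below explains, bolting this onto the existing journaled cache format is the hard part.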
Very difficult to do with the existing cache code. It's a journaled on-disk datastructure that is not designed to support privacy, and privacy is not a feature you can add on top.
One option is to mount an encrypted disk image and put the cache in there. Similar to Mac OS X's FileVault for example. If you can figure out how to do that, you're golden.
Your other option is to implement your own cache, using the existing cache as a guide. Fair warning: the OkResponseCache is subject to change in the next release!
I am looking to harden security on one of my client sites. There is no payment provider set up, so sensitive Direct Debit information needs to be on a MySQL server. This Direct Debit information needs to be human-readable by users from the accounting department.
Testing server is set up as follows:
1. At present, the main site is sitting on a WordPress blog.
2. Customer completes an HTTPS-encrypted form with an EV SSL certificate.
3. Data is stored in a database separate from the WordPress database.
4. Direct Debit details are currently stored as plain text.
Now, part 4 is what bothers me... but it's OK at the moment, because this is only on the testing server!
This is really difficult to answer, as it depends on how far you need to protect this data.
The first step is obviously encrypting all details stored in MySQL, in case someone gets a dump of your database.
This solution is good, but it introduces a vulnerability: if someone gets the decryption keys from your application server, they would be able to decrypt the dump of the database anyway.
There are many solutions to consider from here; I'm sure with some research you should be able to find some decent ones, but one that comes to mind is:
You could encrypt the data on the application servers with a public/private key encryption algorithm. The public key, which lives on your application server, can only be used to encrypt the information for storage. If that server gets hacked, the only thing the attackers will be able to do is add more data to your database =/. The private key in this case would be protected by a password that has to be entered every time a human needs to see this information.
This has the obvious disadvantage that you can't do any machine processing on your data, as it travels completely encrypted all the way until it's displayed.
(And you still have the vulnerability of someone gaining access to your application server and simply dumping the session files/memcache where the private key would have to be stored temporarily.)
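A minimal sketch of this write-only scheme, assuming Python's third-party `cryptography` package; the key size and record format are illustrative:

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Generated once, offline; only the public half is deployed to the app server
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# App server: can encrypt new records for storage, but cannot decrypt them
ciphertext = public_key.encrypt(b"sort-code=01-02-03;acct=12345678", oaep)

# Accounting workstation: decrypts with the password-protected private key
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"sort-code=01-02-03;acct=12345678"
```

In practice you would encrypt a random symmetric key with RSA and the record itself with AES (hybrid encryption), since RSA-OAEP can only encrypt payloads smaller than the key size.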
To be honest, the first thing I'd do is encrypt the entire database one way or another. That alone adds a decent layer of protection: dumping the database is easier than getting access to the file system of a server in most cases.
Are you talking about bank account details / credit card details or both?
Be aware that storing credit card details brings with it the obligation to fulfill PCI DSS requirements.
Also, if you are planning to store confidential details, NEVER store them unencrypted.
Any questions, just let me know.
Fabio
@fcerullo
I was wondering if anyone has done some performance testing with two different approaches for security. Mostly concerned with the server side of things.
1) Using active directory, the user account is validated each time a message is sent.
2) Using certificate, each message is encrypted with a certificate.
My guess would be that decrypting the message is more computationally intensive, hence the Active Directory approach is likely to perform better.
You have a few mixed bits of security there.
Which ones do you require?
Securing a queue against access from accounts you don't want
Ensuring a message is from the account it says it is (authentication)
Ensuring no one can see the message body (encryption)
Let me know and I can give you a better idea of what works performance-wise.
You write "Using active directory, the user account is validated each time a message is sent."
That doesn't sound right. All MSMQ does is put the SID of the sending user account in the message header. This is why you shouldn't rely on just setting account level access on queues as anyone can spoof the SID in an MSMQ message.
Cheers
John Breakwell
Being a starter on MSMQ, I will do my best to answer the question here.
[1.] Securing a queue against access from accounts you don't want
Answer: My understanding is that if I use a private queue, it will implicitly do that. In other words, if no one knows about it, then how can "outsiders" access it?
[2.] Ensuring a message is from the account it says it is (authentication)
Answer: I can debate about this. I am not sure it will make a difference in my particular environment since everything is driven by a custom app with structured data sent. If data is not structured the way it should be, the message will simply be ignored.
[3.] Ensuring no one can see the message body (encryption)
Answer: More relevant here; I do think that some level of encryption is needed to prevent any "peeking" at the data.
Finally, I was not aware that the SID was inside the message header.
Let me know how performance is affected by these various security settings. Also, what's your advice on security with regard to MSMQ?
Thx for all the info...
Christian Martin
I was once on a project for a web application developed on ASP.NET. For each logged-on user, there is an object (let's call it UserSessionObject here) created and stored in RAM. For each HTTP request of a given user, the matching UserSessionObject instance is used to access user state information and the connection to the database. So, this UserSessionObject is pretty important.
This design brings several problems found later:
1) Since this UserSessionObject is cached in ASP.NET memory space, we have to configure the load balancer for sticky connections. That is, HTTP requests within a single session are always sent to the same web server behind it. This limits scalability and maintainability.
2) This UserSessionObject is accessed in every HTTP request. To keep consistency, there is an exclusive lock on the UserSessionObject. Only one HTTP request can be processed at any given time, because it must obtain the lock first. Performance and response time are affected.
Now, I'm wondering whether there is better design to handle such logon user case.
It seems a shared-nothing architecture helps. That means logon user info is retrieved from the database on each request. I'm afraid that would hurt performance.
Is there any design pattern for such logon-user web apps?
Thanks.
Store session state in the database and put memcached in front of it.
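A sketch of that read-through arrangement, with plain dicts standing in for the real database and memcached:

```python
# Read-through session store: a memcached-style cache in front of a database.
# Both stores are plain dicts here purely for illustration.
database = {"sess-1": {"user": "alice"}}
cache = {}

def get_session(session_id):
    state = cache.get(session_id)
    if state is None:                  # cache miss: read through to the DB
        state = database.get(session_id)
        if state is not None:
            cache[session_id] = state  # populate cache for later requests
    return state

def save_session(session_id, state):
    database[session_id] = state       # write to the durable store first
    cache[session_id] = state          # then refresh the cache

assert get_session("sess-1") == {"user": "alice"}
assert "sess-1" in cache               # subsequent requests hit the cache
```

Because any web server can reach both the database and the cache, this removes the sticky-session requirement, and per-session locking can move into the database rather than a single server's memory.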
One method discussed on StackOverflow and elsewhere is the signed cookie: a cookie that carries information you would otherwise not be able to trust, along with a hash created in such a way that only your server could have created it, so you know the information is valid. This is a scalable way to save non-high-security information, such as the username. You don't have to access any shared resource to confirm that the user is logged in, as long as the signed cookie meets all criteria. (You should include a date stamp, to keep cookie theft from being a long-term issue, and you should also track that the user has not recently authenticated, so they get no access to more secure information without going through the usual login process.)
StackOverflow: Tips on signed cookies instead of sessions
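A minimal signed-cookie sketch in Python, using the standard library's `hmac` module. The secret, separator, and expiry are illustrative, and a real implementation should also handle encoding edge cases such as `|` appearing in usernames:

```python
import hmac, hashlib, time

SERVER_SECRET = b"server-only-signing-key"  # hypothetical; never sent to clients

def make_cookie(username: str) -> str:
    payload = f"{username}|{int(time.time())}"
    sig = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def read_cookie(cookie: str, max_age=86400):
    username, issued, sig = cookie.rsplit("|", 2)
    payload = f"{username}|{issued}"
    expected = hmac.new(SERVER_SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                    # forged or tampered cookie
    if time.time() - int(issued) > max_age:
        return None                    # expired: limits the value of a stolen cookie
    return username

cookie = make_cookie("alice")
assert read_cookie(cookie) == "alice"
assert read_cookie(cookie.replace("alice", "bob")) is None  # tampering detected
```

Verification needs only the server's secret, not a session-store lookup, which is what makes the approach scale across web servers.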