I have a question about protecting data against an untrusted end-user in the following scenario:
Basic feature to implement:
The user logs into a desktop app using a username and password in order to gain access to subscription-based software
The server responds with an encrypted token containing a validity period (30 days, for example) and a Hardware Identifier specific to the one machine for which the user is now entitled. The server also responds with a public key with which the token can be decrypted/checked
Every time the user launches the desktop app in an offline scenario, we check the validity of the token by decrypting it with the public key; we can then check whether it is still within the validity period and whether it is running on the correct hardware (i.e. whether the user simply copied the encrypted files to a different machine to use the same account on more computers simultaneously). A sketch of this check follows below.
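A minimal sketch of that offline check, assuming the token is RSA-signed by the server (what the question calls "decrypting with the public key" is, in practice, signature verification). The "hardwareId|expiryTicks" payload format and all names here are illustrative, not a real product's API:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Demo wiring (illustrative): in the real flow the server keeps the private
// key, and the client machine stores only the token, the signature, and the
// server's public key.
using RSA serverKey = RSA.Create(2048);
byte[] token = Encoding.UTF8.GetBytes("MACHINE-42|" + DateTime.UtcNow.AddDays(30).Ticks);
byte[] signature = serverKey.SignData(token, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

Console.WriteLine(IsTokenValid(token, signature, serverKey, "MACHINE-42")); // True

static bool IsTokenValid(byte[] tokenBytes, byte[] sig, RSA publicKey, string thisMachineId)
{
    // 1. Verify the server's signature over the payload.
    if (!publicKey.VerifyData(tokenBytes, sig, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1))
        return false;

    // 2. Parse the illustrative "hardwareId|expiryTicks" payload.
    string[] parts = Encoding.UTF8.GetString(tokenBytes).Split('|');
    var expiry = new DateTime(long.Parse(parts[1]), DateTimeKind.Utc);

    // 3. Check the validity period and the hardware binding.
    return DateTime.UtcNow <= expiry && parts[0] == thisMachineId;
}
```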
Problem:
The scenario above has no protection against an untrusted user.
Example of possible attack:
The user decrypts the token using the public key
He now edits the token and sets a new Hardware Identifier for the new machine he wants to copy the token to
He edits the expiry date to 31.12.2099, for example
He generates his own private/public key pair
He encrypts the token again with his own private key
Now he can transport the encrypted token, together with his own public key, to the new machine. As soon as he launches the desktop app in an offline scenario, the app has no way to tell that the token has been altered: it checks the authenticity of the token, but it cannot tell that the public key has been swapped as well (sketched below).
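To make the attack concrete, here is a sketch of the forgery under the same illustrative payload format as above. Because the app trusts whatever public key ships next to the token, a token re-signed with an attacker-generated key pair verifies successfully:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Tamper with the payload (same illustrative "hardwareId|expiryTicks" format).
string forged = "NEW-MACHINE-ID|" + new DateTime(2099, 12, 31, 0, 0, 0, DateTimeKind.Utc).Ticks;
byte[] forgedBytes = Encoding.UTF8.GetBytes(forged);

// Generate an attacker-owned key pair and re-sign the tampered token with it.
using RSA attackerKey = RSA.Create(2048);
byte[] forgedSignature = attackerKey.SignData(forgedBytes, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

// On the new machine, the app is handed attackerKey's public half alongside
// the token, so the forgery verifies successfully.
bool accepted = attackerKey.VerifyData(forgedBytes, forgedSignature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
Console.WriteLine(accepted); // True
```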
Every asymmetric encryption approach assumes that the end-user is trustworthy. For example, when signing an app with a certificate, we assume that the user has no intention of tampering with the offline trusted root certificates.
A solution I want to avoid is embedding (hard-coding) the private or public key within the code, because:
That is very unsafe
That is not flexible in case you want to update the keys
It can be reverse engineered
After studying cryptography for a while, I never realized that this scenario was so unsafe. Now I am wondering: how do so many existing apps handle this scenario without exposing themselves to such an easy attack?
Thank you!
Related
I want to use asymmetric key pairs to sign/verify data sent from a Xamarin Forms smartphone app to a web service.
I have a Xamarin Forms smartphone app targeted at Android and iOS devices. The client on the device connects through a web service to a database and, on successful login, retrieves and sends data. Currently the user logs in by providing a username and password. The password is hashed and the database user table is searched for the combination of the username and password hash. If this combination is found, the user is deemed to be legitimate and information can be retrieved and sent.
I now want to introduce another layer of security, so that each device that installs the app would additionally need a private key. This key would be used to produce a digital signature over the data sent up to the web service. When the web service receives the request, it will use the corresponding public key of the key pair to verify the signature, and only allow the request through if the signature verifies. From time to time, I may want to eliminate the user base and start afresh, and I was thinking that I could do this easily by creating a new asymmetric key pair, sending the new private key out to each user whom I wanted to be able to use the system, and changing the corresponding public key on the web service to the new one. This way, anyone still using the old private key would not gain access.

The difficulty I have found with this approach is that I don't know how to get a new key onto the user's device and, having done that, I don't know how to access this key in the app's code in order to create the digital signature. I have tried experimentally to look at the key store, but I don't seem to be able to do that on an iPhone the way I can on the PC. So my question, at its simplest, is: how do I get a private key onto an iPhone or an Android phone, and, having got it on there, how do I access it in code in order to use it to generate a digital signature?

Of course, I could just use symmetric encryption and pass a password to the user base, which could then be used in code to encrypt some mutually agreed piece of text; the web service, on receiving it, would use the same password to decrypt it. I just thought that the asymmetric key pair approach was more elegant and, in the end, more robust. The other point is that I want to use the .NET System.Security.Cryptography classes only, i.e. no third-party code if possible.
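For the sign-on-device / verify-on-server round trip itself, here is a minimal sketch using only System.Security.Cryptography (ECDSA here; RSA works the same way), assuming a runtime recent enough for ExportSubjectPublicKeyInfo/ImportSubjectPublicKeyInfo (.NET Core 3.0+). Getting the key onto the device and storing it safely is the platform-specific part and is not shown:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// --- on the device: sign the request payload with the private key ---
using ECDsa deviceKey = ECDsa.Create(ECCurve.NamedCurves.nistP256);
byte[] payload = Encoding.UTF8.GetBytes("user=joe&action=fetch"); // illustrative payload
byte[] signature = deviceKey.SignData(payload, HashAlgorithmName.SHA256);

// --- on the web service: verify with the corresponding public key ---
byte[] publicKeyBlob = deviceKey.ExportSubjectPublicKeyInfo(); // registered server-side beforehand
using ECDsa serverSideKey = ECDsa.Create();
serverSideKey.ImportSubjectPublicKeyInfo(publicKeyBlob, out _);

bool ok = serverSideKey.VerifyData(payload, signature, HashAlgorithmName.SHA256);
Console.WriteLine(ok ? "signature verified" : "request rejected");
```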
I'm setting up a new Near account, and I want to use its keys to sign a message in an app I'm building. How can I do this?
I used the wallet.nearprotocol.com page to create an account. Then, I used nearlib to connect to the testnet, and verify the account's balance and public keys.
But I couldn't find a way to add the account into the localStorage key store or otherwise access a method to sign a message. Nor could I find a wallet plugin or extension that would provide me access.
Generally the idea is that you never transfer a given private key between two devices / security contexts.
So normally, instead of getting the private key out of the wallet, you just want to generate a new key pair and request that the wallet add the public key.
https://github.com/nearprotocol/nearlib/blob/master/src.ts/wallet-account.ts provides a relatively easy way to do this for a web app.
Note that it limits access to a given contract ID, so if you need unrestricted access you basically just need to omit contractId.
See examples at https://near.dev/ for WalletAccount usage.
Is there a Windows API (preferably with a managed .NET wrapper) that allows data to be encrypted and the same data to only be decrypted when called from the same digitally signed application?
For example, I have a cached security token for the desktop application that gets sent to the server. This token is used on login when the user checks "Remember me". I'd like to encrypt the token the application stores in such a way that only my application, the one that encrypted it, can decrypt it. I can't have the key/IV hard-coded in the application. Somehow the OS (Windows) must support something like this, where it uses the digital signature on the entry point's executable file to validate the caller and allow the decryption.
I need to avoid having the user enter any credentials to encrypt/decrypt this token. The whole point of auto-login is for the user to not have to enter credentials.
Yes, the cached login is a security risk, but restricting the token usage to the digitally signed application reduces the surface area exposed.
I don’t think desktop Windows apps can do that. Windows Store apps probably can; I think they have some per-install security stuff.
The closest thing for desktops is probably the ProtectedData class from .NET. Specify DataProtectionScope.CurrentUser to use an OS-provided crypto key specific to the user account.
Don’t forget about the optionalEntropy argument. I usually use a buffer from random.org that I hardcode in the source.
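A minimal sketch of that ProtectedData approach (Windows-only DPAPI; on modern .NET it lives in the System.Security.Cryptography.ProtectedData package). The token and entropy bytes here are placeholders, not a recommendation:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] token = Encoding.UTF8.GetBytes("cached-security-token"); // illustrative value
byte[] entropy = { 0x12, 0x34, 0x56, 0x78 };                    // placeholder entropy

// Encrypt: DPAPI ties the result to the current Windows user account.
byte[] sealedToken = ProtectedData.Protect(token, entropy, DataProtectionScope.CurrentUser);

// Decrypt later, on the same machine, under the same user account.
byte[] restored = ProtectedData.Unprotect(sealedToken, entropy, DataProtectionScope.CurrentUser);
Console.WriteLine(Encoding.UTF8.GetString(restored));
```

Note that DPAPI binds the data to the user account (or machine), not to the calling executable's digital signature, so any code running as the same user can decrypt it; that is the limitation the answer above is pointing at.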
I logged into Keychain on my MacBook Pro (El Capitan) and clicked to show a password, but I get it encrypted, like the image below, or as XML, like the other image. I can't figure out where the problem is. My password is correct and the keychain is unlocked with it. So why does it give me the protected password or the XML?
It's because it's not a password you typed but an application-made credential. The contents of a keychain item are not always a password you typed; often, in the case of Apple and other cloud systems, it's a token or OAuth ID that simply represents the fact that at some point you logged in somewhere and allowed a computer or app to access your account. From that point forward, the app or computer you authorised uses a special key or token to act on your behalf.
The reason this is done is twofold:
Security: your password isn't stored and therefore can't really be 'stolen'. Since the token can be revoked from the other side (i.e. from your Apple ID or Google account) and is usually only valid for a specific computer, it's not something you can 'steal' and use elsewhere as-is. It is still sensitive information that can be used to impersonate the trust between your account and the computer.
Ease of use (or, automation): if the application or computer you authorised needs to act on your behalf, it would be annoying to retype your password all the time. Using a special kind of authentication allows the computer or app to do certain things on your behalf, but not every possible action as there usually are limits to how many things it's allowed to do in your name before you have to re-authorise the ID with your password. So while your Apple ID can be used to receive iMessages once you are logged in, that same token won't allow some other app to 'read' your stored credit card information or change your email address.
Long story short: it's not a password (it's a token), it's not for you (it's for computers), it's a 'special ID' and it's for the apps that added it to the keychain to function in your name.
I have a server (RoR app) sending information to a client (a Ruby Sinatra app) and I need a way for the client to be certain the data has come from my server, rather than an evil third party.
The client has to login to the server before anything will be sent back the other way so the server could reply to the login with a shared key used to sign all further responses, but then the 3rd party could capture that response and be evil.
I'd like to find some way (in Ruby, with a view to cross-platform applicability) to sign the server's response so that it can be verified without inspection of the client's code leading to forgeries. Any ideas?
UPDATE: Let's see if I can explain this better!
(I've added code to GitHub since I wrote this question, so you can (if you like!) have a poke around: the 'client', the 'server')
The process is this: Joe Bloggs uses a bookmarklet on his mobile device. This posts the currently visited URL to sitesender.heroku.com. When sitesender.heroku.com receives that request, it checks its DB to see if anyone has logged into the same account using the Target application. If they have, their IP address will have been noted, and sitesender.heroku.com will make a GET request of the target app (a webserver) at that IP, asking the target to launch the bookmarked URL in the default browser.
The basic idea being that you can send a site to your main computer from your iPhone for later viewing when you find the iPhone can't cope with the page (eg. flash, screen size).
Obviously the major issue is that with an open server, anyone could send a request to open 'seriouslyevilwebsite.com' to a broad range of IPs, and I'd have unleashed a plague on the digital world. Seeing as I'm using heroku.com as a server (it's an incredibly good, but cloud-based, RoR host), I can't just test the originating IP.
As far as I understand HTTPS, for this setting I'd have to sort out certificates for every target application? I agree that I need some form of asymmetric crypto, sign the outgoing requests from sitesender.heroku.com with a private key (never distributed) and get the target to perform the same operation using the public key and test for similarity - but you've guessed correctly, I'm still slightly clueless as to how HMAC works! How is it asymmetric? Is it formulated so that performing the same HMAC operation with the private key and public key will generate the same signature? In which case - HMAC is a winner!
Thanks for your patience!
I'm not sure exactly what you mean by "freely examined, but not replicated".
In general, if you need a secure communications channel, https is your friend.
Failing that (or if it's insufficient due to some architectural issue), HMAC and asymmetric crypto is the way to go.
UPDATE: I'm not sure I understand the problem, so I will try to describe the problem I think you're trying to solve: You have clients that need to be confident that the response they are seeing is actually coming from your server.
Assuming that I'm correct and this is really the problem you're trying to solve, HTTPS solves it nicely. You install a cert on your server—you can sign it yourself, but clients won't trust it by default; for that, you need to buy one from one of the standard certificate authorities (CAs)—and then the client makes an HTTPS request to your server. HTTPS handles verifying that the provided certificate was issued for the server it's talking to. You're done.
Finally, I think there's a misunderstanding of how an HMAC works: an HMAC is not asymmetric at all; it is computed and verified with a single shared secret key. The key principle of asymmetric crypto, by contrast, is to NEVER distribute your private key. With asymmetric crypto, you encrypt messages with the recipient's public key, and he/she decrypts them with his/her private key. You sign messages with your private key, and he/she verifies them using your public key.
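To make that distinction concrete, here is a short sketch contrasting the two (C# for consistency with the other sketches on this page; Ruby's OpenSSL module exposes the same primitives). HMAC uses one shared secret on both sides, while RSA signing uses the private key to sign and the public key to verify. Assumes .NET 6+ for RandomNumberGenerator.GetBytes:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

byte[] message = Encoding.UTF8.GetBytes("open http://example.com"); // illustrative message

// --- HMAC: symmetric; the same shared key both computes and checks the tag ---
byte[] sharedKey = RandomNumberGenerator.GetBytes(32);
using var hmac = new HMACSHA256(sharedKey);
byte[] tag = hmac.ComputeHash(message);

// --- RSA: asymmetric; the private key signs, the public key verifies ---
using RSA serverKey = RSA.Create(2048); // private key never leaves the server
byte[] sig = serverKey.SignData(message, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

using RSA clientView = RSA.Create();    // the client only ever sees the public half
clientView.ImportRSAPublicKey(serverKey.ExportRSAPublicKey(), out _);
bool verified = clientView.VerifyData(message, sig, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

Console.WriteLine($"HMAC tag: {Convert.ToHexString(tag)}");
Console.WriteLine($"RSA signature verified: {verified}");
```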