A public/private key pair is created on the client side by a JavaScript algorithm, and the public key is then transferred to the server.
A copy of the user's private key is kept on the user's computer in a JavaScript variable.
When User A sends a message to User B:
The server encrypts the message with User B's public key.
User B picks up the message and decrypts it (the decryption algorithm is written in JavaScript) with User B's private key, which is kept private in a JavaScript variable.
At no point in time is User B's private key disclosed over the network whatsoever.
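Concretely, the setup I have in mind would look roughly like this (a minimal sketch using the browser's Web Crypto API; the algorithm and the endpoint name are just placeholders, not decisions):

// Minimal sketch of the setup described above, assuming the Web Crypto API.
// RSA-OAEP and the /register-key endpoint are placeholders.
async function setupKeys() {
  const keyPair = await crypto.subtle.generateKey(
    { name: "RSA-OAEP", modulusLength: 2048, publicExponent: new Uint8Array([1, 0, 1]), hash: "SHA-256" },
    true,
    ["encrypt", "decrypt"]
  );
  // The public key is exported and sent to the server.
  const publicJwk = await crypto.subtle.exportKey("jwk", keyPair.publicKey);
  await fetch("/register-key", { method: "POST", body: JSON.stringify(publicJwk) });
  // The private key stays on the client, held in a JavaScript variable.
  return keyPair.privateKey;
}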
Would that be secure???
'public' and 'private' are just names given to the two keys. It doesn't matter WHICH of the two keys is public and which is private, as long as you never ever mix up the usage. Once both keys are available to someone at the same time, the security of the messaging system is utterly destroyed.
Technically, since you say the keys are stored in JavaScript variables, you're implying that the variables were sent IN THE CLEAR, embedded in some browser-based HTML/JavaScript. That further implies that there's no security, since both keys are exposed to the network.
To decide if something is "secure", you have to know what the security requirements are. Your case satisfies a few likely requirements, but there are several likely requirements that it does not satisfy. For example:
A plaintext copy of the message is apparently transmitted over the network from User A to the server, so anyone can eavesdrop on it at that point. (This is likely to be a serious problem.)
You don't explain how the public key is transmitted to the server. If it's not transmitted in an authenticated fashion, then a man-in-the-middle can generate his own public-private key pair, and give his public-key to the server. (This could be a serious problem.)
User B cannot verify the authenticity of the message (s)he receives. The message may have come from the server (and ultimately from User A), or it may have come from anyone else with a copy of the public key. (This may or may not be a serious problem, depending on the application.)
So overall, I would not consider this design to be "secure".
It wouldn't be too secure since:
The private key of any user (say User B) can be leaked out of your app via injected JS code or a bad browser add-on.
Once this is done, anyone who gets access to any of the messages directed at User B will be able to decrypt it and make sense of it.
Of course, the above won't happen if you are the only one using the app, but since you'll likely have other users with different browser setups, add-ons, browsing behaviors, etc., it is entirely possible.
When User A tries to send something to User B, you said the server will encrypt the message using User B's public key. This request made via JS can be intercepted by a man in the middle. Once that is done, the man in the middle can initiate any request to any user by manipulating the sender, referrer, etc. This can lead to impersonation and so on.
You also mentioned that after generation, you intend to send the public key over to the server. This call made from JS can easily be intercepted, which means the public key can be leaked.
NEAR accounts can have many different key pairs accessing the same account, and the keys can also change and rotate. This means the default approach of encrypting messages for a specific user with their public key doesn't work.
What is the best pattern to encrypt a message for a specific user?
NEAR account keys are not intended for this use case.
Generally, having end-to-end encrypted messages (in the narrowest sense an end-to-end encrypted chat, but more generally any application that exchanges encrypted messages) with each participant having multiple devices is not trivial. It is for a reason that, for example, Telegram secret chats are attached to a device and are not available on other devices.
The reason is that this generally requires sharing private keys between devices, and doing that securely is a challenge of its own.
Here's a verbatim copy of a proposal for how to build an end-to-end encrypted chat with:
a) Each participant potentially participating from multiple devices
b) Messages not only shared with someone directly, but also with "groups" of participants.
The design goal was that sending a message should be constant time (not depend on the number of devices target users use / number of people in the group it is sent to), while some operations can be linear.
There's a plan to add it as a library to NEAR, but work on it has not started and is not scheduled to start yet.
Proposal
Problem statement:
We want group chats into which new members can be added, and old members can be removed;
New members being able to see messages posted before they joined is a wish-list feature;
Old members should not be able to see new messages after they left;
Users should be able to use multiple devices and see all the messages in all their group chats from all the devices;
Each message must be stored once (not once per participant of the group);
The proposed solution:
There are three kinds of key pairs in the system: account key (not to be confused with NEAR account keys), device key and a message key.
Each account has exactly one account key. It is generated the first time an account uses the service.
account_keys: PersistentMap<AccountId, PublicKey>
Each device has its own device key generated the first time the chat is accessed from the device (or each time the local storage is erased)
class DeviceKey {
name: string,
device_public_key: PublicKey,
encrypted_account_secret_key: EncryptedSecretKey?,
}
device_keys[account]: PersistentVector<DeviceKey>
The persistent vector is per account. Each entry contains a device public key (the device private key only exists on the device) and the account secret key encrypted with that public key, or null if the account secret key has not yet been encrypted with that public key.
There are three methods to manage the device keys:
addDeviceKey(device_public_key: PublicKey, name: string): void
Adds the new key, and associates null as the corresponding encrypted account secret key.
removeDeviceKey(device_public_key: PublicKey): void
Removes the device key
authorizeDeviceKey(device_public_key: PublicKey, encrypted_account_secret_key: EncryptedSecretKey): void
Sets the encrypted account secret key for the device key.
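A rough AssemblyScript-style sketch of what these three methods could look like on the contract side (near-sdk-as collections are assumed, key types are treated as base58 strings, and the storage prefix is made up for illustration; removeDeviceKey is omitted):

import { context, PersistentVector } from "near-sdk-as";

// Sketch only: mirrors the DeviceKey shape from the proposal, with key types as strings.
type PublicKey = string;
type EncryptedSecretKey = string;

class DeviceKey {
  name: string = "";
  device_public_key: PublicKey = "";
  encrypted_account_secret_key: EncryptedSecretKey | null = null;
}

function deviceKeysFor(account: string): PersistentVector<DeviceKey> {
  return new PersistentVector<DeviceKey>("dk:" + account); // storage prefix is an assumption
}

export function addDeviceKey(device_public_key: PublicKey, name: string): void {
  const dk = new DeviceKey();
  dk.name = name;
  dk.device_public_key = device_public_key;
  dk.encrypted_account_secret_key = null; // not authorized yet
  deviceKeysFor(context.sender).push(dk);
}

export function authorizeDeviceKey(device_public_key: PublicKey,
                                   encrypted_account_secret_key: EncryptedSecretKey): void {
  const keys = deviceKeysFor(context.sender);
  for (let i = 0; i < keys.length; i++) {
    if (keys[i].device_public_key == device_public_key) {
      const entry = keys[i];
      entry.encrypted_account_secret_key = encrypted_account_secret_key;
      keys.replace(i, entry); // write the updated entry back to storage
    }
  }
}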
The flow for the user thus will be:
a) Launch the chat from a new device, give it a name.
b) Open chat from some other device that already has the encrypted account key, go to Devices setting and authorize the new device.
All the messages keys are stored in a large persistent vector:
all_message_public_keys: PersistentVector<PublicKey>
In all other places, message keys are referenced by u32 indexes into this vector.
Each user knows some message secret keys:
encrypted_message_secret_keys[account]: PersistentMap<u32, EncryptedSecretKey>
encrypted_message_secret_keys_indexes[account]: PersistentVector<u32>
The map and the vector are per account. The vector is only needed so that when the user changes their account key, we know all the message keys that we need to reencrypt.
The keys are encrypted with the account key.
Each channel has exactly one message key associated with it at each moment, though the keys might change throughout the lifetime of the channel.
channel_public_keys: PersistentMap<u32, u32>
Where the key is the channel id and the value is the message key ID.
Each message has a u32 field that indicates what message key was used to encrypt it. If it is not encrypted, the value is u32::max. Whenever a message is sent to a channel, it is encrypted with the current channel message key.
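On the client, the sending step might look roughly like this (tweetnacl is just an example library; getChannelMessageKey and postMessage are hypothetical helpers standing in for the corresponding view/change calls):

import nacl from "tweetnacl";

// Sketch: encrypt a message with the channel's current message public key and
// record which key index was used, as described above.
async function sendToChannel(channelId: number, text: string): Promise<void> {
  const { keyIndex, messagePublicKey } = await getChannelMessageKey(channelId); // hypothetical view call
  const ephemeral = nacl.box.keyPair();                    // throwaway sender key for this message
  const nonce = nacl.randomBytes(nacl.box.nonceLength);
  const ciphertext = nacl.box(new TextEncoder().encode(text), nonce,
                              messagePublicKey, ephemeral.secretKey);
  // Hypothetical change call; stores the u32 key index alongside the message.
  await postMessage(channelId, { keyIndex, nonce, ephemeralPublicKey: ephemeral.publicKey, ciphertext });
}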
The flow is then the following:
When a channel is created with the initial set of participants, the creator of the channel creates the message key pair, encrypts the secret key with the account keys of each participant, and calls to
createChannel(channel_name: string,
accounts: AccountId[],
message_public_key: PublicKey,
encrypted_message_secret_keys: EncryptedSecretKey[])
That registers the message key, adds the encrypted secret keys to the corresponding collections, and creates the channel.
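Client side, the channel-creation step could be sketched like this (tweetnacl again as an example library; accountPublicKeys, mySecretKey and the contract binding are assumptions for illustration):

import nacl from "tweetnacl";

// Sketch: one message key pair per channel, its secret key encrypted separately
// for each participant's account public key before calling createChannel.
function prepareCreateChannel(channelName: string, accounts: string[],
                              accountPublicKeys: Uint8Array[], mySecretKey: Uint8Array) {
  const messageKey = nacl.box.keyPair();
  const encryptedSecrets = accountPublicKeys.map((pk) => {
    const nonce = nacl.randomBytes(nacl.box.nonceLength);
    return { nonce, box: nacl.box(messageKey.secretKey, nonce, pk, mySecretKey) };
  });
  // contract is a hypothetical binding to the chat contract described above.
  return contract.createChannel(channelName, accounts, messageKey.publicKey, encryptedSecrets);
}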
If a new user needs to be added, addUserToChannel(account: AccountId, encrypted_message_secret_key) adds the user to the list of channel users and grants them access to the latest message key.
If a user needs to be deleted, deleteUserFromChannel(account: AccountId) removes the user. In that case, or if the channel participants otherwise believe their message key was compromised, they call
updateChannelMessageKey(message_public_key: PublicKey,
encrypted_message_secret_keys: EncryptedSecretKey[])
Note that since each message has the associated key with it, and the channel participants didn’t lose access to the old message keys, the existing channel participants will be able to read all the history, without having to re-encrypt it. However, new users who join the channel will only see the messages since the last time the key was updated.
When a user needs to update the account key, they need to:
a) Encrypt the new account secret key with all the device keys;
b) Encrypt all their message keys with the new account key;
c) Supply (a) and (b) into a contract method that will update the corresponding collections.
After such a procedure the user will have access to all their old messages from all the devices with the new account key.
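A client-side sketch of that rotation, assuming the message secret keys have already been decrypted locally with the old account key (tweetnacl as an example library; sealFor and contract.updateAccountKey are made-up helpers, since the proposal doesn't name the contract method):

import nacl from "tweetnacl";

// sealFor encrypts a secret to a public key using a throwaway sender key
// (a sealed-box-style helper; an assumption, not part of the proposal).
function sealFor(secret: Uint8Array, recipientPublicKey: Uint8Array) {
  const eph = nacl.box.keyPair();
  const nonce = nacl.randomBytes(nacl.box.nonceLength);
  return { nonce, ephemeralPublicKey: eph.publicKey,
           box: nacl.box(secret, nonce, recipientPublicKey, eph.secretKey) };
}

function rotateAccountKey(devicePublicKeys: Uint8Array[], messageSecretKeys: Uint8Array[]) {
  const newAccountKey = nacl.box.keyPair();
  // (a) the new account secret key, encrypted for every device key
  const encryptedForDevices = devicePublicKeys.map((pk) => sealFor(newAccountKey.secretKey, pk));
  // (b) every known message secret key, re-encrypted with the new account key
  const reencryptedMessageKeys = messageSecretKeys.map((sk) => sealFor(sk, newAccountKey.publicKey));
  // (c) hypothetical contract method that updates both collections
  return contract.updateAccountKey(newAccountKey.publicKey, encryptedForDevices, reencryptedMessageKeys);
}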
Indeed there is no default way to do this.
The easiest way, if a specific application such as a chat needs to encrypt messages, is to require the user to "Login with NEAR". This creates a new key pair on the application's side and authorizes that public key on the user's account for the app.
Now any other user can scan the recipient's account, find the key that is authorized for this app, and use it for encryption. This behaves similarly to Telegram secret chats, which can only be decrypted on the single device that started the chat.
To make this work across devices (domains, applications), one can create a key pair where the public key is known and attached to the given account. The private key is also stored on chain, but encrypted with all the access keys from the different devices. When a new device or app is added, an existing app needs to authorize it; this allows the private key to be decrypted within that session and re-encrypted with the access key of the new session.
Is it possible to get some pseudocode for this? Another concern for me is: where are these application private keys stored, then? Usually I'm used to a system where I have a private key and I back it up or use a mnemonic; when I log in on another device, I recover that key.
How can I mirror the private keys on multiple devices?
The other side of this, querying the chain to get the specific public key for a user for an app (maybe with a tag even), makes sense.
What is the best method to transmit a public/private key pair when the recipients initially have neither? Diffie-Hellman is one such method, but it is susceptible to man-in-the-middle attacks. What other methods are available? Most information appears to assume the parties already have a shared secret key.
Public/private key encryption was designed to deal with just this problem.
Both parties can freely exchange their public keys, which the other then uses to encrypt outgoing messages.
Man-in-the-middle attacks can only occur in the sense that a third party can generate messages as either party, but it can decrypt neither (as that would require the private keys).
A complete secure exchange to eliminate man in the middle might go like this:
both parties exchange public keys
party A sends a message with a random number contained within it
party B decrypts the random number and replies with the same number encrypted for party A.
when party A gets the same number back, they can be sure that no man in the middle attack has occurred.
all messages will continue to use the number as proof of who the message came from.
TLS uses a more complicated version of this scheme for its handshake: https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_handshake
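A rough sketch of that number round-trip, using RSA via Node's crypto module (the library, padding, and key size here are assumptions, not part of the scheme):

import * as crypto from "crypto";

// Sketch of the round-trip above: A encrypts a random number for B; B proves
// possession of its private key by sending the number back encrypted for A.
const a = crypto.generateKeyPairSync("rsa", { modulusLength: 2048 });
const b = crypto.generateKeyPairSync("rsa", { modulusLength: 2048 });

// Party A: pick a random number and encrypt it with B's public key.
const challenge = crypto.randomBytes(16);
const toB = crypto.publicEncrypt(b.publicKey, challenge);

// Party B: decrypt it and reply with the same number encrypted for A.
const seenByB = crypto.privateDecrypt(b.privateKey, toB);
const toA = crypto.publicEncrypt(a.publicKey, seenByB);

// Party A: if the decrypted reply matches, the peer holds B's private key.
const echoed = crypto.privateDecrypt(a.privateKey, toA);
console.log(echoed.equals(challenge)); // true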
The typical process of creating a new Parse user does not allow for server-side validation or any sort of input requirements. These can be implemented on the client side, but they can be easily circumvented by anyone willing to try.
So how would someone provide a Sign Up where the fields are checked against different requirements (defined by regex) in cloud code?
Two possibilities I see immediately:
An extra method that takes in all inputs as parameters, checks them against the regexes, on success continues to Parse.User.signUp(), then returns the session key and assigns it to the device that just signed up.
Parse.Cloud.beforeSave(...) before the user is saved you check the fields, and reject if it doesn't pass a test.
Problems I see with each:
EVERYONE with my AppID and a client ID can call this method. Since the checks are being done server-side, there's a need for additional client-side filtering, which can be overridden; an adversary could flood my Parse app with requests or bloated inputs. Also, you are then sending the user's password over the network.
The password is encrypted upon user creation (when the password is set), according to the documentation I've read. Everything but the password can be checked against a regex.
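For the beforeSave route, a rough sketch of the validation hook (field names, patterns, and error handling are placeholders; the exact callback signature depends on the Parse Server version):

// Sketch of option 2: validate fields in a beforeSave hook on Parse.User.
// Assumes a recent Parse Server where throwing aborts the save.
Parse.Cloud.beforeSave(Parse.User, (request) => {
  const user = request.object;
  const username = user.get("username") || "";
  const email = user.get("email") || "";

  if (!/^[a-zA-Z0-9_]{3,20}$/.test(username)) {
    throw new Parse.Error(Parse.Error.VALIDATION_ERROR, "Invalid username");
  }
  if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Parse.Error(Parse.Error.VALIDATION_ERROR, "Invalid email address");
  }
  // The password itself is not readable here once it has been hashed.
});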
I'm in an infosec class and I stumbled upon this concept online, and it intrigued me. I've also looked at a few websites and Wikipedia pages that explain the concept, as well as a few posts on Stack Overflow, but I'm still getting confused. From what I understand, in a typical HTTPS key exchange a browser and a server come together with keys to create a session key; if someone ever obtained a private key that derived the session key, they could see all the data that was sent over this connection, even in the past.
My understanding is that with PFS, the 'session key' is never sent, even in encrypted form. It is kept secret so that even if someone found a private key, they wouldn't be able to access encrypted recorded information from the past. Is this correct?
I also was wondering: if I am partaking in a PFS exchange, call me "A", with a server "B", PFS is supposed to work such that if my key becomes compromised, A and B's conversation won't become compromised, because the attacker doesn't know the session key. But how does "B" authenticate me as "A" if my key has in fact become compromised? E.g., how would it know the difference between me (A) and another user (C) using my key attempting to access the data?
I really like the answer on Quora given by Robert Love: http://www.quora.com/What-is-perfect-forward-secrecy-PFS-as-used-in-SSL
Let's look at how key exchange works in the common non-ephemeral case.
Instead of giving a practical example using, say, Diffie-Hellman, I'll give a generalized example where the math is simple:
Alice (client) wants to talk to Bob (server).
Bob has a private key X and a public key Y. X is secret, Y is public.
Alice generates a large, random integer M.
Alice encrypts M using Y and sends Y(M) to Bob.
Bob decrypts Y(M) using X, yielding M.
Both Alice and Bob now have M and use it as the key to whatever cipher they agreed to use for the SSL session—for example, AES.
Pretty simple, right? The problem, of course, is that if anyone ever finds out X, every single communication is compromised: X lets an attacker decrypt Y(M), yielding M. Let's look at the PFS version of this scenario:
Alice (client) wants to talk to Bob (server).
Bob generates a new set of public and private keys, Y' and X'.
Bob sends Y' to Alice.
Alice generates a large, random integer M.
Alice encrypts M using Y' and sends Y'(M) to Bob.
Bob decrypts Y'(M) using X', yielding M.
Both Alice and Bob now have M and use it as the key to whatever cipher they agreed to use for the SSL session—for example, AES.
(X and Y are still used to validate identity; I'm leaving that out.)
In this second example, X isn't used to create the shared secret, so even if X becomes compromised, M is undiscoverable. But you've just pushed the problem to X', you might say. What if X' becomes known? But that's the genius, I say. Assuming X' is never reused and never stored, the only way to obtain X' is if the adversary has access to the host's memory at the time of the communication. If your adversary has such physical access, then encryption of any sort isn't going to do you any good. Moreover, even if X' were somehow compromised, it would only reveal this particular communication.
That's PFS.
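To make the ephemeral part concrete, here's a small sketch of the same idea using an ephemeral ECDH exchange in Node (the curve choice is arbitrary; authentication with the long-term keys is left out, as in the example above):

import * as crypto from "crypto";

// Ephemeral ECDH sketch: each side makes a throwaway key pair and derives the
// same shared secret; the long-term keys are never involved in the secrecy.
const bob = crypto.createECDH("prime256v1");    // plays the role of X'/Y' above
const bobEphemeralPub = bob.generateKeys();

const alice = crypto.createECDH("prime256v1");
const aliceEphemeralPub = alice.generateKeys();

const aliceSecret = alice.computeSecret(bobEphemeralPub);
const bobSecret = bob.computeSecret(aliceEphemeralPub);
// aliceSecret.equals(bobSecret) === true; this shared value seeds the session key
// and is discarded along with the ephemeral keys when the session ends.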
In a non-PFS session the browser decides on the session key (or rather the secret from which it is derived) and encrypts it using RSA, with the RSA public key obtained from a certificate that belongs to the server. The certificate is also used to authenticate the server. The server then uses its private key (what you call the master key) to decrypt the session key.
All connections to the server use different session keys, but if you possess the master key you can figure them all out, the same way the server does.
In PFS you use algorithms such as Diffie-Hellman, where the master key is not used to derive the session key. In such a connection the master key is used to authenticate the parameters for the algorithm. After the parameters are agreed on, the key exchange takes place using those parameters and a secret from each party. The parameters are not secret, and the secrets the parties used are discarded after the session key is established (ephemeral). This way, if you discover the master key you can't discover the session keys. However, you can pose as the server if you get the key and the certificate has not been invalidated.
To find out more read about Diffie-Hellman.
You generate a new public key for every message, and use the real permanent public key only for authentication
This was mentioned in other answers, but I just want to give a more brain parseable and contextual version of it.
There are two things you can do with someone's public key:
verify that a message was written by them, AKA verify a message signature AKA authenticate a message. This is needed to prevent a man in the middle attack.
encrypt a message that only they can decrypt
In many ways, authentication is the more critical/costly step, because to know that a given public key belongs to someone while avoiding a man in the middle attack, you need to take steps such as:
meet them in real life and share the public key (leave your home???)
talk to them over video (deepfakes???)
trusted signature providers (centralization!!!)
Generating new keys is however comparatively cheap.
So once you have done this costly initial key validation step, you can now just:
ask the receiver to generate a new temporary public key for every message you want to send them
the receiver sends the temporary public key back to you, signed with their permanent (private) key. Nothing ever gets encrypted with the permanent key, only signed. No need to encrypt public keys being sent!
you verify the message signature with the permanent public key to avoid MITM, and you then use that temporary key to encrypt your message
After the message is received and read, they then immediately delete that temporary private key and the decrypted message.
So now if their computer gets hacked and the permanent private key leaks, none of the old encrypted messages that the attacker captured over the wire can be decrypted, because the temporary key was used to encrypt them, and that has been long since deleted.
Future messages would be susceptible to MITM however if they don't notice and change their permanent key after the leak.
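Here's a small sketch of those steps with tweetnacl (the library is just an example; the permanent keys are signing keys, the temporary ones are encryption keys):

import nacl from "tweetnacl";

// Permanent key pair: used only to sign temporary keys (authentication).
const receiverPermanent = nacl.sign.keyPair();

// 1. The receiver generates a temporary encryption key pair for this message.
const temp = nacl.box.keyPair();

// 2. The receiver signs the temporary public key with the permanent key.
const signature = nacl.sign.detached(temp.publicKey, receiverPermanent.secretKey);

// 3. The sender verifies the signature (anti-MITM), then encrypts with the temporary key.
if (!nacl.sign.detached.verify(temp.publicKey, signature, receiverPermanent.publicKey)) {
  throw new Error("temporary key was not signed by the expected permanent key");
}
const senderEphemeral = nacl.box.keyPair();
const nonce = nacl.randomBytes(nacl.box.nonceLength);
const ciphertext = nacl.box(new TextEncoder().encode("hello"), nonce,
                            temp.publicKey, senderEphemeral.secretKey);

// 4. The receiver decrypts, then deletes temp.secretKey so old traffic stays
//    unreadable even if the permanent private key leaks later.
const plaintext = nacl.box.open(ciphertext, nonce, senderEphemeral.publicKey, temp.secretKey);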
Brainstorming request
I need an idea for an authentication algorithm with some unusual requirements.
The algorithm would be used to verify that the sender of a message is legitimate.
Restrictions:
The "transport layer" is e-mail
the sender ('Alice') is a human being
Alice only has access to a web browser and internet access (including a webmail account) as her tools; therefore she can't do very complicated calculations
The receiver ('Bob') is a computer with no direct access from the internet.
Bob has an email account that it checks periodically.
Bob can send email.
No sending info to a 3rd party: Alice and Bob can't send any out-of-band info. Reading some publicly available info (such as the time from a time server) is ok.
Assumptions:
Alice can access some information locally: maybe she carries a notebook, or we could even assume her web mail account is hack-proof, therefore sensitive information can be stored there.
Alice and Bob can exchange sensitive information directly at a time prior to the authentication (private keys?)
Non-goals:
encoding of the actual payload of the message is not necessary.
speed/latency are not (big) issues
Some ideas to get you started:
Plain old hard-coded password.
Problems:
brute force attack (not likely)
eavesdropping is possible if the communication is done in clear text, and then replay attacks are possible
Simple algorithm based on current date/time
Example: Alice adds the current date, hour and minute and sends the result as the auth token, which Bob can verify. Let's assume that read-only access to a time server does not violate rule #7 (no 3rd party).
Problems:
security through obscurity: the algorithm is somewhat safe only because it is not publicly available (well, it is now... oops!)
Some sort of challenge-response mechanism - Alice sends a request for authentication, Bob replies with a challenge, Alice sends the expected response and the actual payload.
What are the details of the mechanism? I don't know :)
What can you think of? I'm hoping to see some creative answers ;-)
Edit:
Maybe an example would make rule #3 clearer: let's assume that Alice is using a proprietary closed-source device <cough> iPhone <cough> to access the Internet, or she is standing in front of a public internet kiosk.
My idea of a human-friendly low-tech challenge-response mechanism:
Bob changes the challenge every time he receives a valid message (for example he makes a salted hash of the current time)
every invalid message sent to Bob makes him reply with the current challenge, so Alice can query him by sending an empty mail
once Alice knows the challenge, she goes to https://www.pwdhash.com/
in "Site Address" she enters the current challenge
in "Site Password" she enters her personal password (which is known to Bob)
PwdHash generates a "Hashed Password"
Alice writes a message to Bob, using the hash just created as the subject
Bob receives the message, hashes the current challenge and Alice's password according to the PwdHash algorithm, and sees if his result matches the message subject
if it does, Bob accepts the message and sends out a confirmation containing the new challenge (essentially this is step 1)
Advantages:
cheap & simple, may even run on reasonably modern mobile devices
human friendly (no math, easy to remember, prerequisites easily available on the net)
no replay attack possible
no clear text passwords over the wire
does not run out of passwords (like one-time pads do)
no inherent time limits (like RSA tokens have)
the PwdHash web site can be saved on disk and called locally, no third party dependency here
Disadvantages:
Bob and Alice must pre-share a key (Alice's password), therefore Alice cannot change her password off-site
compromising Alice's password is the easiest attack vector (but that's the case with almost all password protected systems)
Note that PwdHash is an open hashing algorithm, Bob can easily implement it. The PwdHash web site works without post-backs, everything is client side JavaScript only, no traces left behind.
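Bob's side of the check can be sketched like this (a generic HMAC stands in for the actual PwdHash algorithm here, purely to show the structure):

import * as crypto from "crypto";

// Derive the token Bob expects for the current challenge and Alice's password.
// (HMAC-SHA256 is a stand-in; the real scheme above uses the PwdHash algorithm.)
function expectedToken(challenge: string, sharedPassword: string): string {
  return crypto.createHmac("sha256", sharedPassword).update(challenge).digest("hex").slice(0, 16);
}

function acceptMessage(subject: string, currentChallenge: string, alicePassword: string): boolean {
  const expected = expectedToken(currentChallenge, alicePassword);
  // Constant-time comparison so the token can't be guessed byte by byte.
  return subject.length === expected.length &&
         crypto.timingSafeEqual(Buffer.from(subject), Buffer.from(expected));
}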
Two options I can think of:
Issue a card with one-time passwords (communication before the fact, notebook)
Electronic device that produces pincodes (avoids replay-attacks)
In addition to Treb's answer, you can use one-time passwords you can print instead of SecurID. See "Perfect Paper Passwords" for details.
Am I missing something obvious in suggesting a simple public/private key and signing the email?
Firefox has at least one extension to allow GPG in webmail.
Elaborating on lassevk's answer:
In my company we use SecurID tokens from RSA for remote authentication.
It gives you a 6 digit number that changes every 60 seconds as an authentication token, supposedly your token generator and the server are the only ones in the universe which know the token that is valid right now.
As a low tech alternative, a set of n (10, 20, 100 - whatever is reasonable in your specific case) one time authentication codes can be given to Alice. I would ask her for a specific code (e.g. number 42 in the list). After using this code, it becomes invalid for further use.
Edit: See lacop's answer for a good implementation of the low tech solution.
Consider to create a web page which contains the algorithm as JavaScript, possibly as a download (so she can download it once and carry it along on an USB drive).
The idea is that she opens the page, checks the source code (all JavaScript must be inline) and then enters her password in a text field on the page. The JavaScript will translate this into a code as she types (so no network traffic while she does this; if there is, there might be a keylogger running in the background).
After she has the code, she can copy it somewhere.
The JavaScript can use the current time as a seed. Slice the current time into five-minute intervals. Most of the time, using the current interval will be enough to verify the code; if you're close to the start of a five-minute interval, try the previous one as well.
See this site for an example: https://www.pwdhash.com/
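A sketch of the time-sliced idea (the hash and code length are assumptions; the five-minute bucket comes from the description above):

import * as crypto from "crypto";

// Alice's page computes this from her password; Bob recomputes it to verify.
function codeFor(password: string, sliceOffset: number = 0): string {
  const slice = Math.floor(Date.now() / (5 * 60 * 1000)) + sliceOffset; // five-minute buckets
  return crypto.createHash("sha256").update(slice + ":" + password).digest("hex").slice(0, 8);
}

// Bob accepts the current bucket, and the previous one near a boundary.
function verify(code: string, password: string): boolean {
  return code === codeFor(password) || code === codeFor(password, -1);
}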
If Alice can run code on her machine (for example by using JavaScript that is found on some public site, like: http://www.functions-online.com/en/sha1.html), she can receive the challenge, hash it together with the password, and send it back.
Here's another suggestion:
Start with the Diffie-Hellman key exchange, resulting in a shared private key, known only to presumably-Alice and Bob.
Have a pre-defined password known only by Alice and Bob.
Have Alice encrypt the password using the shared key and send it to Bob
Now Bob can see that presumably-Alice really is Alice.
Problems:
Diffie-Hellman is not safe using small numbers.
What would be a simple symmetric encryption algorithm (for encrypting the password)?
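One way the whole suggestion could look in code (Diffie-Hellman group 14 and AES-256-GCM are just example choices, not requirements):

import * as crypto from "crypto";

// Sketch: derive a shared secret with classic Diffie-Hellman, then encrypt the
// pre-shared password with a standard symmetric cipher (AES-256-GCM here).
const alice = crypto.getDiffieHellman("modp14");
alice.generateKeys();
const bob = crypto.getDiffieHellman("modp14");
bob.generateKeys();

const shared = alice.computeSecret(bob.getPublicKey());           // Bob computes the same value
const key = crypto.createHash("sha256").update(shared).digest();  // derive a 32-byte AES key

const iv = crypto.randomBytes(12);
const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
const encryptedPassword = Buffer.concat([cipher.update("pre-shared password", "utf8"), cipher.final()]);
const tag = cipher.getAuthTag();
// Alice sends { iv, encryptedPassword, tag }; Bob derives the same key and decrypts,
// which shows that presumably-Alice knows the pre-defined password.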
A simple way to protect data in transit without exchanging passwords is the three-way-XOR:
Alice creates a few bytes using her own key.
She XORs the data with these bytes to make them unreadable.
Alice sends the encrypted data to Bob
Bob creates a few bytes using his own key.
He XORs the data with these bytes.
Bob sends the double-encrypted data back to Alice
Alice applies her XOR pattern once more. Now, the data is only encoded with Bob's pattern
Alice sends the data back to Bob
Bob can now decode the data with his own pattern
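A tiny sketch of those steps (the pads must be as long as the data; random pads are used here just for illustration):

import * as crypto from "crypto";

// Three-pass XOR exchange as described above, end to end in one place.
function xor(a: Buffer, b: Buffer): Buffer {
  const out = Buffer.alloc(a.length);
  for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i];
  return out;
}

const data = Buffer.from("secret payload");
const alicePad = crypto.randomBytes(data.length); // Alice's key bytes
const bobPad = crypto.randomBytes(data.length);   // Bob's key bytes

const step1 = xor(data, alicePad);  // Alice -> Bob
const step2 = xor(step1, bobPad);   // Bob -> Alice (double-encrypted)
const step3 = xor(step2, alicePad); // Alice -> Bob (only Bob's pad remains)
const decoded = xor(step3, bobPad); // Bob recovers the data
// decoded.equals(data) === true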
If you're not going to use PKI, which is far and away the best solution, try using a challenge-response system like CRAM-MD5 (although I'd suggest a different digest algorithm).
Your constraints make the implementation of a secure cryptographic system almost infeasible. Is there nothing you can do to change the transport?
The simplest solution is to have Bob periodically send mails to Alice's mail account. When she needs something from Bob, she has to reply to one of these mails. Bob can put some check tokens into the mail (a mail ID, or a string that must be repeated in the subject or body of the reply).
Just like many of the email verification schemes work.
Drawback: this only proves that the sender has access to Alice's mail account, not that it is in fact Alice herself. To solve this, you could tell Alice a password and use the "JavaScript HTML page" trick so she can encode the key from Bob using her password.
This would prove that she has access to her mail account and that she knows the password.
There are several methods I can think of:
Install an HTTPS-encrypted service similar to:
http://webnet77.com/cgi-bin/helpers/blowfish.pl
or
http://cybermachine.awardspace.com/encryption.php/
Or you could issue one-time passwords in combination with XOR encryption.
Or you could write a simple Java app (if Java can be executed on the machine) that can be loaded via the web and provides public-key encryption.
Hmm... would this count as a third party?
Set up a brother of Bob, Charlie, who is accessible from the internet via HTTPS. To send a message to Bob, Alice would first have to log on to Charlie (via a plain old password), and Charlie would give her a one-use token. Then she sends her email, along with the token, to Bob.