Should expired subkeys be revoked? (gnupg)

I have 3 subkeys in my keyring that have just expired (created a year ago).
My private key has no expiration date and is maintained offline most of the time.
My plan is to rotate subkeys every year (I'm not sure whether that makes sense), so I created 3 new subkeys to replace them (due to expire in a year), each with a single capability out of S, E, and A.
Now, what should I do with the expired subkeys? Should I simply delete them from the keyring (delkey?) or revoke them? What is the best way to go about this?

I guess revoking makes sense only if you use those keys "publicly", i.e. there are people who should know that your key is outdated. You should definitely revoke a key that was published on a key server. Otherwise, I see no difference between revoking and deleting.
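If you do revoke, a minimal sketch of the gpg session (YOUR_KEY_ID is a placeholder for your primary key id, and the keyserver is only an example):

    gpg --edit-key YOUR_KEY_ID
    gpg> key 1     # select the first expired subkey (repeat: key 2, key 3)
    gpg> revkey    # create revocation signatures for the selected subkeys
    gpg> save

    # publish the updated key so others learn about the revocations
    gpg --keyserver hkps://keyserver.ubuntu.com --send-keys YOUR_KEY_ID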

Laravel Sanctum: how to limit the number of tokens for each user?

I am using Laravel Sanctum for API authentication in my mobile app.
How can we limit the maximum number of active tokens per user?
Currently, the Sanctum-generated personal_access_tokens table has no user_id column. With the current table, imagine a user who logs in and out over and over: a new token row is created on every login.
Is there a default, out-of-the-box way of limiting the total number of tokens per user, or does this need to be done on my own?
Is this a good practice to have new rows of tokens added to the DB table on every new login?
There is a reference to the user, namely the tokenable_type and tokenable_id columns, which in this case reference App\Models\User and the user's ID respectively.
Somewhere in your application, you are creating these tokens for that specific user. You have the choice here to issue a new token for every login session, but you could also require the user to reuse an existing token. That is up to you and the use case of the application.
However, if you are creating new tokens for every login session, consider revoking old tokens (since they will probably not be used anymore). Check the Sanctum documentation.
Tokens are valid for as long as defined in config/sanctum.php under the expiration key. By default, personal access tokens do not expire because the expiration key is set to null.
Answering your questions:
Yes, you can simply get the number of tokens using $user->tokens()->count(); and do whatever you want with it (removing old tokens, or returning an error).
This answer depends on your use case. If tokens are valid forever, why would you create a new one on every login instead of requiring the still-valid token? Alternatively, you could create a form for the user to request a new token if they forgot their old one, removing the old token and issuing a new one. This way, all tokens in the DB are valid.
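For the first point, a minimal sketch of capping tokens at login time (the cap of 5 and the token name 'mobile-app' are assumptions of mine; $user is the authenticated user with Sanctum's HasApiTokens trait):

    $maxTokens = 5; // arbitrary cap, tune to your use case
    $count = $user->tokens()->count();

    if ($count >= $maxTokens) {
        // Delete the oldest tokens so the new one keeps the user at the cap.
        $user->tokens()
            ->orderBy('created_at')
            ->take($count - $maxTokens + 1)
            ->get()
            ->each(fn ($token) => $token->delete());
    }

    $plainTextToken = $user->createToken('mobile-app')->plainTextToken;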

How to do an atomic update in RethinkDB with more than just the primary key?

I have a table called Wallet, which can be thought of as a user's balance. There is one document per user, so I made the primary key userId, and there is a balance field which stores the user's balance. My initial thought was that I need to be able to pull the document from the DB and do certain checks at the application layer before changing the user's balance, so this rules out a .get(id).update(...), which I know is atomic.
What I did instead was add a property called lock, which is a string. The application must first .get(id).update({ lock: reallylongstring }), and when the application is ready to commit changes it must pass that lock back up; ideally Rethink will reject any changes if the lock is wrong (meaning someone else came in and acquired the lock afterwards).
If this was mongo I would do something like:
    this.findOneAndUpdate({
        _id: id,    // match the document by its id...
        lock: lock  // ...but only if it still carries the lock we acquired
    }, {
        ...
    })
And then any update that had the wrong lock would fail because the document would not be found. What is the best way to do this in Rethink? Or is my approach just all around wrong and what is a better approach?
You can put a branch in the update (LOCK and UPDATE below are placeholders for the lock string you hold and the update object you want to apply):

    r.table('Wallet').get(id).update(function(row) {
        // Only apply UPDATE while the stored lock matches ours;
        // otherwise abort the write by raising an error.
        return r.branch(row('lock').eq(LOCK), UPDATE, r.error("error: ..."));
    })
EDIT: removed incorrect option.

Can you add an expiration date for an existing OpenPGP key that has none?

I created and uploaded (to the keyservers) an OpenPGP key that has no expiration date. Oops. I'd like to add a date to the key. Is this possible? I've read that you can extend the expiration date, but not that you can pull it back... and I'm guessing that you cannot.
For example, perhaps I could revoke the current key and re-upload my key, this time with an expiration date. (I presume that this wouldn't work, because then you would have no protection if your password was compromised.) I've tried just doing gpg --send-key with the expiration date, but this doesn't seem to have succeeded.
Related links:
g-loaded.eu
help.riseup.net
You can arbitrarily change and set expiration dates at any time, including setting an expiration date where none existed before and "reactivating" expired keys by extending their expiry time. Bear in mind that expiry dates on primary keys do not add anything to the key's security.
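A minimal sketch of the session (YOUR_KEY_ID is a placeholder; expire acts on the primary key, so use key N first if you want to change a subkey instead):

    gpg --edit-key YOUR_KEY_ID
    gpg> expire    # prompts for the new validity, e.g. 1y
    gpg> save

    # publish the updated self-signature to the keyservers
    gpg --send-keys YOUR_KEY_ID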
If you changed the expiry date and uploaded the key to the key server network but don't see any changes, wait for some time. There is not a single key server but a whole bunch of them, most organized in the "SKS key server network". They talk to each other exchanging new data, but reconciliation can take minutes or even hours. Given that nine hours passed between your question and this answer, the new expiry date is very likely already visible.

Why can't I trust a client-generated GUID? Does treating the PK as a composite of client-GUID and a server-GUID solve anything?

I'm building off of a previous discussion I had with Jon Skeet.
The gist of my scenario is as follows:
Client application has the ability to create new 'PlaylistItem' objects which need to be persisted in a database.
Use case requires the PlaylistItem to be created in such a way that the client does not have to wait on a response from the server before displaying the PlaylistItem.
The client generates a UUID for the PlaylistItem, shows the PlaylistItem in the client, and then issues a save command to the server.
At this point, I understand that it would be bad practice to use the UUID generated by the client as the object's PK in my database. The reason for this is that a malicious user could modify the generated UUID and force PK collisions on my DB.
To mitigate any damages which would be incurred from forcing a PK collision on PlaylistItem, I chose to define the PK as a composite of two IDs - the client-generated UUID and a server-generated GUID. The server-generated GUID is the PlaylistItem's Playlist's ID.
Now, I have been using this solution for a while, but I don't understand why, or believe that, my solution is any better than simply trusting the client ID. If a user is able to force a PK collision with another user's PlaylistItem objects, then I think I should assume they could also provide that user's PlaylistId, so they could still force collisions.
So... yeah. What's the proper way of doing something like this? Allow the client to create a UUID, and have the server give a thumbs up/down when it is successfully saved? If a collision is found, revert the client changes and notify it of the detected collision?
You can trust a client-generated UUID or similar globally unique identifier on the server. Just do it sensibly.
Most of your tables/collections will also hold a userId or be able to associate themselves with a userId through a FK.
If you're doing an insert and a malicious user uses an existing key then the insert will fail because the record/document already exists.
If you're doing an update, then you should validate that the logged-in user owns that record or is otherwise authorized (e.g. an admin user) to update it. If pure ownership is being enforced (i.e. no admin-user scenario), then your where clause for locating the record/document should include both the Id and the userId. Technically the userId is redundant in the where clause, because the Id alone uniquely identifies the record/document; however, adding the userId ensures the record belongs to the user doing the update and not to the malicious user.
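As a minimal sketch of that where clause in plain PHP/PDO (the table and column names are illustrative, not taken from the question):

    $stmt = $pdo->prepare(
        'UPDATE playlist_items
            SET title = :title
          WHERE id = :id AND user_id = :user_id'
    );
    $stmt->execute([
        'title'   => $newTitle,
        'id'      => $clientGeneratedUuid,
        'user_id' => $authenticatedUserId, // from the server-side session, never from the client
    ]);

    if ($stmt->rowCount() === 0) {
        // Either the item does not exist or it belongs to another user.
    }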
I'm assuming that there's an encrypted token or session of some sort that the server is decrypting to ascertain the userId and that this is not supplied by the client otherwise that's obviously not safe.
A nice solution would be the following, quoting Sam Newman's "Building Microservices":

    The calling system would POST a BatchRequest, perhaps passing in a location where a file can be placed with all the data. The Customer service would return a HTTP 202 response code, indicating that the request was accepted, but has not yet been processed. The calling system could then poll the resource waiting until it retrieves a 201 Created indicating that the request has been fulfilled.
So in your case, you could POST to the server but immediately get a response like "I will save the PlaylistItem and I promise its Id will be this one". The client (and user) can then continue while the server (maybe not even the API itself, but some background processor that got a message from the API) takes its time to process, validate, and do other, possibly heavy logic until it saves the entity. As quoted above, the API can provide a GET endpoint for the status of that request, and the client can poll it and act accordingly in case of an error.
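Sketched as a hypothetical HTTP exchange (the URLs, id, and payloads are made up for illustration):

    POST /playlistitems HTTP/1.1            -> the client submits the new item
    HTTP/1.1 202 Accepted                   <- "I promise its Id will be 123"
    Location: /playlistitems/123/status

    GET /playlistitems/123/status HTTP/1.1  -> the client polls
    HTTP/1.1 200 OK {"state": "processing"} <- still being validated

    GET /playlistitems/123/status HTTP/1.1
    HTTP/1.1 201 Created                    <- the item was persisted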

Outlook contact sync - How to identify the correct object to sync with?

I have a web application that syncs Outlook contacts to a database (and back) via CDO. The DB contains every contact only once (at least theoretically; of course duplicates happen), providing a single point of change for a contact, regardless of how many users have that particular contact in Outlook (like Interaction or similar products).
The sync process is not automatic but user-initiated. An arbitrary timespan can pass before users decide to sync their contacts, and a subset of these contacts may have been updated by other users in the meantime.
Generally, this runs fine, but I have never been able to solve this fundamental problem:
How do I unambiguously identify a contact object in a mailbox?
1. I can't rely on PR_ENTRYID; this property changes on contact move or mailbox move.
2. I can't rely on my own IDs (e.g. a DB table ID), because these get copied with the contact.
3. I absolutely can't rely on fields like name or e-mail address; they are subject to changes and updates.
Currently I use a combination of 1 (preferred) and 2 (fall-back). But inevitably, users sometimes run into the problem of syncing to the wrong contact, because there is no contact with a given PR_ENTRYID but two with the same DB ID, of which the wrong one is chosen.
There are a bunch of Outlook-synching products out there, so I guess the problem must be solvable.
I had a similar problem to overcome with an internal outlook plugin that does contact syncing. I ended up sticking a database id in the Outlook object and referring to that when doing syncs.
The difference here is that our system has a bunch of duplicates that get resolved later by the users. When they get merged, I'll remove the old records and update Outlook with all of the new information along with a new id.
You could do fuzzy matching to identify duplicates, but duplicate resolution is a tricky problem that's mostly trial and error. We've been successful implementing "fuzzy" matching logic using the Levenshtein distance algorithm on names and addresses cleaned down to a hash code.
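A minimal PHP sketch of that idea (the field names and the distance threshold of 3 are assumptions, to be tuned against real data):

    // Collapse a contact to a short, normalized comparison key.
    function normalizeContact(array $c): string {
        $key = strtolower(trim($c['name'] . '|' . $c['address']));
        return preg_replace('/\s+/', ' ', $key);
    }

    function isLikelyDuplicate(array $a, array $b, int $threshold = 3): bool {
        // PHP's built-in levenshtein() only handles strings up to 255
        // characters, so keep the normalized keys short.
        return levenshtein(normalizeContact($a), normalizeContact($b)) <= $threshold;
    }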
Good luck, my syncing experiences have been somewhat painful.
