Why can't I trust a client-generated GUID? Does treating the PK as a composite of client-GUID and a server-GUID solve anything? - client-server

I'm building off of a previous discussion I had with Jon Skeet.
The gist of my scenario is as follows:
Client application has the ability to create new 'PlaylistItem' objects which need to be persisted in a database.
Use case requires the PlaylistItem to be created in such a way that the client does not have to wait on a response from the server before displaying the PlaylistItem.
Client generates a UUID for the PlaylistItem, shows the PlaylistItem in the client, and then issues a save command to the server.
At this point, I understand that it would be bad practice to use the UUID generated by the client as the object's PK in my database. The reason for this is that a malicious user could modify the generated UUID and force PK collisions on my DB.
To mitigate any damages which would be incurred from forcing a PK collision on PlaylistItem, I chose to define the PK as a composite of two IDs - the client-generated UUID and a server-generated GUID. The server-generated GUID is the PlaylistItem's Playlist's ID.
Now, I have been using this solution for a while, but I don't understand why I should believe my solution is any better than simply trusting the client ID. If the user is able to force a PK collision with another user's PlaylistItem objects, then I think I should assume they could also provide that user's PlaylistId. They could still force collisions.
So... yeah. What's the proper way of doing something like this? Allow the client to create a UUID and have the server give a thumbs up/down when it is saved? If a collision is found, revert the client changes and notify the client that a collision was detected?

You can trust a client-generated UUID or similar globally unique identifier on the server. Just do it sensibly.
Most of your tables/collections will also hold a userId or be able to associate themselves with a userId through a FK.
If you're doing an insert and a malicious user uses an existing key then the insert will fail because the record/document already exists.
If you're doing an update then you should validate that the logged-in user owns that record or is authorized (e.g. an admin user) to update it. If pure ownership is being enforced (i.e. no admin scenario), then your where clause for locating the record/document would include both the Id and the userId. Technically the userId is redundant in the where clause, because the Id alone will uniquely find one record/document; however, adding the userId makes sure the record belongs to the user doing the update and not to a malicious one.
I'm assuming that there's an encrypted token or session of some sort that the server is decrypting to ascertain the userId, and that this is not supplied by the client; otherwise that's obviously not safe.
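As a concrete illustration, here is a minimal sketch of that ownership-scoped update in Java/JDBC; the table and column names (playlist_items, user_id, title) are invented for the example and not from the original question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class PlaylistItemRepository {
    private final Connection connection;

    PlaylistItemRepository(Connection connection) {
        this.connection = connection;
    }

    // The update succeeds only if the row exists AND belongs to the caller.
    void rename(String itemId, String authenticatedUserId, String newTitle) throws SQLException {
        String sql = "UPDATE playlist_items SET title = ? WHERE id = ? AND user_id = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setString(1, newTitle);
            stmt.setString(2, itemId);              // client-generated UUID from the request
            stmt.setString(3, authenticatedUserId); // from the server-side session/token, never the request body
            if (stmt.executeUpdate() == 0) {
                // Either the record doesn't exist or it's owned by someone else.
                throw new SQLException("PlaylistItem not found or not owned by caller");
            }
        }
    }
}

Note that a zero row count deliberately doesn't distinguish "missing" from "not yours", which avoids leaking whether another user's id exists.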

A nice solution would be the following, to quote Sam Newman's "Building Microservices":
The calling system would POST a BatchRequest, perhaps passing in a location where a file can be placed with all the data. The Customer service would return a HTTP 202 response code, indicating that the request was accepted, but has not yet been processed. The calling system could then poll the resource waiting until it retrieves a 201 Created indicating that the request has been fulfilled.
So in your case, you could POST to the server but immediately get a response like "I will save the PlaylistItem and I promise its Id will be this one". The client (and user) can then continue while the server (maybe not even the API itself, but some background processor that got a message from the API) takes its time to process, validate and run other, possibly heavy logic until it saves the entity. As previously stated, the API can provide a GET endpoint for the status of that request, and the client can poll it and act accordingly in case of an error.
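A hedged sketch of that flow using Spring MVC; the framework choice, endpoint paths, DTO and in-memory status map are all assumptions for illustration, not anything from the original question:

import java.net.URI;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

record PlaylistItemDto(String id, String title) {}   // id is the client-generated UUID

@RestController
class PlaylistItemController {
    private final ConcurrentMap<String, String> status = new ConcurrentHashMap<>();
    private final ExecutorService worker = Executors.newSingleThreadExecutor();

    @PostMapping("/playlist-items")
    ResponseEntity<Void> create(@RequestBody PlaylistItemDto item) {
        status.put(item.id(), "PENDING");
        worker.submit(() -> {
            // validate, check ownership, persist ... (omitted)
            status.put(item.id(), "CREATED");        // or "FAILED" on collision/validation error
        });
        return ResponseEntity.accepted()             // 202: accepted, not yet processed
                .location(URI.create("/playlist-items/" + item.id() + "/status"))
                .build();
    }

    @GetMapping("/playlist-items/{id}/status")
    ResponseEntity<String> getStatus(@PathVariable String id) {
        String s = status.get(id);
        return s == null ? ResponseEntity.notFound().build() : ResponseEntity.ok(s);
    }
}

In a real system the status would live in durable storage and the work would go through a queue rather than a single in-process executor, but the shape of the contract (202 plus a pollable status resource) stays the same.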

Related

Laravel: calculated field used in a query

I am working on a function that allows a user to check if their existing device contacts are using our platform, based on phone numbers.
For privacy and security, we are hashing the user's contacts' phone numbers on the device (salted with the user's id) before sending them to our server.
Server side, we then need to hash our entire contacts table (using the user's id as a salt), which is currently being done in a for loop.
We then check this list against the request list, and return the details for any matches.
However, I'm sure there is a more efficient way of doing this, something like computing the hash in a calculated field then including the $request->hashes in a "whereIn" clause.
Could someone give me a pointer on the best approach to be taking here?
The question is, what privacy and security are you achieving by sending a hashed value of the contact number?
You are hashing the contacts on the client side (device), which means you are using a key and salt that are already available on the client side. How can that be a security feature?
If you want to search hashed values in the database, then it's better to save the hashed contact number in a column in the first place, so you can run the where query directly against the table.
Ideally, if you are really concerned about users' contact numbers, you should:
Encrypt the user's contacts in the backend/database, not in the frontend.
If you need to query on a field in the database, then add a hashed column that can be matched easily. I mean searchable fields should be hashed so you can run a direct query.
There is nothing to worry about regarding the security of users' contacts in the frontend if you are already sending them over secure HTTP (HTTPS).
It is even common practice in the industry to send a plain submitted password over HTTPS when a user submits it in the frontend. It shouldn't be a privacy or security concern.
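For illustration, a rough sketch of that "hash stored in a column, matched with a direct query" idea in Java/JDBC (the question is about Laravel, so treat this purely as structure; the users.phone_hash column is invented, and it assumes client and server compute the hash the same way, which rules out a per-user salt):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

class ContactMatcher {
    // Given the hashes sent by the device, return the ones that exist in users.phone_hash.
    List<String> findRegisteredContacts(Connection conn, List<String> hashes) throws SQLException {
        if (hashes.isEmpty()) {
            return List.of();
        }
        // Build "IN (?, ?, ...)" with one placeholder per submitted hash.
        String placeholders = String.join(", ", Collections.nCopies(hashes.size(), "?"));
        String sql = "SELECT phone_hash FROM users WHERE phone_hash IN (" + placeholders + ")";
        List<String> matches = new ArrayList<>();
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            for (int i = 0; i < hashes.size(); i++) {
                stmt.setString(i + 1, hashes.get(i));
            }
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    matches.add(rs.getString("phone_hash"));
                }
            }
        }
        return matches;   // the client can map these back to its local contacts
    }
}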

How to manage/store "created by" in a micro-service?

I am building an inventory service; all tables keep track of the owner of each record in a createdBy column which stores the user id.
The problem is that this service does not hold the user info, so it cannot map the id to a username, which the FE needs in order to display the data.
Calling the user service to map user ids to usernames for each request does not make sense in terms of decoupling and performance, because one request can ask for up to 100 records. If I store the username instead of the ID, there will be a problem when a user changes their username.
Is there any better way or pattern to solve this problem?
I'd extend the inventory data with what's needed from the user service.
User name is a slowly changing dimension, so most of the time the data is correct (i.e. "safe to cache").
Now we get to what to do when the user info changes - this is, of course, a business decision. In some places it makes sense to keep the original info (for example, what happens when the user is deleted - do we still want to keep the original user name, and whatever other info, of whoever created the item?). If that is not the case, you can use several strategies: you can have a daily (or whatever period) job that refreshes the user info from the user service for all users referenced in the inventory, you can publish a daily summary of changes from the user service and have the inventory subscribe to that, you can publish changes as they happen and subscribe to that, etc. - depending on the requirement for freshness. The technology to use depends on the strategy.
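For example, the "subscribe to changes as they happen" option could look roughly like this in Java (the event and repository names are invented for illustration, and the messaging technology - Kafka, RabbitMQ, etc. - is deliberately left out):

record UserRenamedEvent(String userId, String newUserName) {}

interface InventoryUserCache {
    void updateUserName(String userId, String newUserName);
}

class UserRenamedHandler {
    private final InventoryUserCache cache;

    UserRenamedHandler(InventoryUserCache cache) {
        this.cache = cache;
    }

    // Called whenever the user service publishes a rename; keeps the denormalized
    // copy in the inventory service fresh so createdBy can be displayed locally.
    void onUserRenamed(UserRenamedEvent event) {
        cache.updateUserName(event.userId(), event.newUserName());
    }
}
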
In my opinion, what you have done so far is correct. Inventory-related data should be the Inventory Service's responsibility, just like user-related data should be the User Service's.
It is the FE's responsibility to fetch from the User Service the user details required to populate the UI. (Remember, one backend call per user is not acceptable at all; a bulk search is more suitable.)
What you can do is, when you fetch inventory data from the Inventory Service, publish a message to the User Service to notify it that "inventory data was fetched for these users, so user data for them is likely to be requested next; you'd better cache it."
PS - I'm not an expert in microservices architecture. Please add counterarguments if you have any.
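To make "bulk search" concrete, here is a hypothetical sketch of a User Service endpoint that resolves many ids in one call (Spring MVC here purely for illustration; the path and the in-memory storage are made up):

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class UserNamesController {
    // Stand-in for the User Service's real storage.
    private final Map<String, String> userNamesById = new ConcurrentHashMap<>();

    // GET /users/names?ids=1,2,3 -> { "1": "alice", "2": "bob" }
    @GetMapping("/users/names")
    Map<String, String> namesFor(@RequestParam List<String> ids) {
        Map<String, String> result = new HashMap<>();
        for (String id : ids) {
            String name = userNamesById.get(id);
            if (name != null) {
                result.put(id, name);
            }
        }
        return result;
    }
}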

CRUD operations validation

Suppose I have a database with 3 tables:
Customers
Orders
CustomerOrders
I am building a WebAPI with standard auth using a bearer token, middleware that extracts all the necessary claims from the token, and a controller for basic CRUD operations on Orders.
For example:
DELETE - Orders/{id}
PUT - Orders/{id}
How can I make sure that the order that the user is trying to manipulate belongs to the current user?
Do I first need to query the database to make sure that the OrderId belongs to the current UserId before each operation, or is there an easier way to do it?
Depending on the options of your identity management and token provider, you may be able to tell from the token whether the user it was issued for granted the client application permission to manipulate orders in general.
But whether this specific order belongs to the current user can only be checked in your backend, and this of course needs to be done with every operation. The order id could be brute-forced (guessed) and manipulated in the request, so you need to check this on each request.
I suggest, though, extracting this checking logic (does the passed order id belong to the user id provided in the token?) into a service method to make it reusable from different places - in your case, for instance, from the different CRUD methods such as DELETE and PUT.
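A sketch of such a service method, written in Java for illustration (the question is presumably ASP.NET, and the join/column names CustomerId, UserId and OrderId are guesses at the schema from the three tables listed above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class OrderOwnershipService {
    private final Connection connection;

    OrderOwnershipService(Connection connection) {
        this.connection = connection;
    }

    // True only if the order exists and is linked (via CustomerOrders) to the authenticated user.
    boolean ownsOrder(String userIdFromToken, long orderId) throws SQLException {
        String sql = "SELECT 1 FROM CustomerOrders co " +
                     "JOIN Customers c ON c.Id = co.CustomerId " +
                     "WHERE co.OrderId = ? AND c.UserId = ?";
        try (PreparedStatement stmt = connection.prepareStatement(sql)) {
            stmt.setLong(1, orderId);
            stmt.setString(2, userIdFromToken);   // from the validated token claims, never from the request body
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}

The DELETE and PUT handlers would each call ownsOrder(...) before touching the row and return 403/404 when it comes back false.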

Linking logged in user to object data on Parse.com

I'm new to using Parse.com and I'm trying to understand the general relationship between a logged in user and user-specific data.
I've figured out and understand how to create users and objects but I'm fuzzy on how to connect the two.
Is it as simple as creating a user and then, once they're logged in, storing an object with their username as the key?
Then when a user signs in successfully, you retrieve the object under their username key?
I just want to make sure I'm approaching this from the right angle, since I plan on having a lot of users and I also want the most secure approach.
I've read through the Parse.com documentation but can't seem to find the connection between the two. Any help is appreciated!
Do you mean when the user submits any details it is recorded with their User ID? If so, then this code will work for you:
ParseUser user = ParseUser.getCurrentUser();
yourObject.put("User", user);           // link the object to the current user
yourObject.setACL(new ParseACL(user));  // optional: restrict read/write to that user
yourObject.saveInBackground();
There is no user-specific data (all data is global with respect to the app ID you registered, as Parse is a database), but you can store data inside a ParseUser object. You can also give it access controls (an ACL), so only that user can read/write it. When the user signs in successfully, I don't believe that data will be part of the ParseUser object yet; you need to fetch it. (This is definitely true for object fields, but I'm not sure about simple fields like strings and ints. It deserves testing.)
There is a caveat to this. Depending on which SDK you're using, some of that information may be cached. In Unity 3D, for instance, the ParseUser object will retain all its data between program invocations (and indeed, will remain logged in).
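To read the user's own objects back after sign-in, something like the following fragment should work with the Android SDK (the "UserData" class name and "user" field are placeholders, not anything from the question):

ParseQuery<ParseObject> query = ParseQuery.getQuery("UserData");  // placeholder class name
query.whereEqualTo("user", ParseUser.getCurrentUser());           // only objects linked to the signed-in user
query.findInBackground((objects, e) -> {
    if (e == null) {
        // "objects" now holds this user's records; update the UI with them.
    }
});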

RESTful API - validation of related records

I am implementing a RESTful API service and I have a question about saving related records.
For example, I have a users table and a related user_emails table. User emails should be unique.
On the client side I have a form with user data fields and a number of user_email fields (the user can add any number of them). When the user saves the form, I must first make a request to create the record in the users table to get its ID, and only then can I make a request to save the user emails (because only at that point do I have the id of the record, which comes back in the response after saving the user data). But if the user enters a non-unique email in any field, that request will fail. So I create a record in the users table but no record in user_emails.
What are the approaches for validating all of this data before saving?
This is not about the RESTful API but about transactional processing on the backend. If you are using Java with JPA, you can persist both elements in the same transaction; if there is a problem, you can roll back the entire transaction and return an error response.
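A minimal sketch of that with plain JPA (the User and UserEmail entities are assumed mappings of the users and user_emails tables, and the resource-local transaction is just for illustration; in a container you would more likely use declarative transactions):

import jakarta.persistence.EntityManager;   // javax.persistence on older JPA versions
import jakarta.persistence.EntityTransaction;
import java.util.List;

class UserRegistrationService {
    void saveUserWithEmails(EntityManager em, User user, List<UserEmail> emails) {
        EntityTransaction tx = em.getTransaction();
        tx.begin();
        try {
            em.persist(user);              // the user gets its ID inside the transaction
            for (UserEmail email : emails) {
                email.setUser(user);       // FK to the freshly persisted user
                em.persist(email);         // unique constraint on email is enforced by the DB
            }
            tx.commit();                   // all or nothing
        } catch (RuntimeException e) {
            if (tx.isActive()) {
                tx.rollback();             // a duplicate email means nothing is saved at all
            }
            throw e;                       // translate into an error response upstream
        }
    }
}
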
I would condense it down to a single request, if you can, just for performance's sake if nothing else. Use the user_email as your key, and have the request return some sort of status result: if the user_email is unique, it'll respond with a success message; otherwise, it'll indicate failure.
It's much better to implement that check solely on the server side, and not on both sides with the ID value, unless you need to. It'll offer better performance, and it'll let you change your implementation later more easily.
As for the actual code you use, since I'm not one hundred percent sure what you're actually asking, you could use a MERGE if you're using SQL Server. That'd make it a bit easier to import the user's email and let the database worry about duplicates.
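If it helps, the MERGE idea could look roughly like this (T-SQL wrapped in JDBC for consistency with the other sketches; table and column names are invented, so check the statement against your actual schema):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

class UserEmailImporter {
    // Inserts the email only if it doesn't exist yet; returns true if a row was inserted.
    boolean addEmail(Connection connection, long userId, String email) throws SQLException {
        String merge =
            "MERGE user_emails AS target " +
            "USING (VALUES (?, ?)) AS source (user_id, email) " +
            "ON target.email = source.email " +
            "WHEN NOT MATCHED THEN INSERT (user_id, email) VALUES (source.user_id, source.email);";
        try (PreparedStatement stmt = connection.prepareStatement(merge)) {
            stmt.setLong(1, userId);
            stmt.setString(2, email);
            return stmt.executeUpdate() > 0;   // 0 means the email already existed
        }
    }
}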
