How to prevent lost writes in Parse/Back4app?

I couldn't find any support for a CAS operation or a findAndModify like in MongoDB, which would let me include a version field in objects and update them only if the retrieved version is the same as the current one on the server, like the optimistic-locking approach in JPA.
Is there some support for this that I missed? Otherwise, how can we implement any logic that accepts concurrent access to the same objects from different users?
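To make the intent concrete, here is the naive read-check-save version with the Parse JS SDK; the check and the save are two separate round trips, so a concurrent writer can slip in between them, which is exactly the lost write I want to prevent. The class and field names are invented for illustration:

```typescript
// Naive optimistic-locking attempt with the Parse JS SDK.
// "Item", "status" and "version" are illustrative names;
// Parse.initialize() and Parse.serverURL are assumed to be
// configured elsewhere.
import Parse from "parse/node";

async function riskySave(id: string, newStatus: string): Promise<void> {
  const item = await new Parse.Query("Item").get(id);
  const version: number = item.get("version");

  // ...apply business rules...
  item.set("status", newStatus);
  item.set("version", version + 1);

  // Nothing guarantees the server-side "version" is still what we
  // read above, so a concurrent update can be silently overwritten.
  await item.save();
}
```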

Take a look at the Idempotency feature? It's not specifically what you are asking for, but there might be some logic in there that helps you:
https://github.com/parse-community/parse-server/pull/6748

Maybe you can clarify your question, but I'll answer it as I understand it.
MongoDB updates are already atomic operations, and Parse Server relies on that atomicity when it updates data.
But if you run commands directly against MongoDB, don't assume that every update uses optimistic locking: without a findAndModify-style command, we can't know whether a document was updated by another user in the meantime.
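To make that concrete, here is a minimal sketch of the version-field CAS the question describes, done with findOneAndUpdate in the official MongoDB Node.js driver. The collection name, the "version" field, and the status update are invented for illustration:

```typescript
// Compare-and-set with a version field: the update only happens if
// nobody bumped the version since we read it.
import { MongoClient } from "mongodb";

interface Item { _id: string; status: string; version: number; }

async function updateIfUnchanged(
  uri: string, id: string, newStatus: string, expectedVersion: number
): Promise<boolean> {
  const client = new MongoClient(uri);
  try {
    await client.connect();
    const items = client.db("app").collection<Item>("items");

    // Atomically matches on _id AND the version we read earlier.
    const previous = await items.findOneAndUpdate(
      { _id: id, version: expectedVersion },
      { $set: { status: newStatus }, $inc: { version: 1 } }
    );

    // null means another writer got there first (or the document is
    // gone): the caller should re-read and retry. (Driver v6 returns
    // the document or null; older drivers wrap it in { value }.)
    return previous !== null;
  } finally {
    await client.close();
  }
}
```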

Related

How to define compatible logs

I have the chance to influence the log format of a logging solution we are about to set up for an existing backend system. It is not OpenTelemetry-based and may never be, but at the moment I can still make suggestions and would like to make sure the logs are written in a compatible format. Is there some kind of overview or definition I can use as a base? Some kind of list of mandatory fields that need to be filled?
I see you found the data model (https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/logs/data-model.md) in the specification - keep in mind, logging support for OpenTelemetry is currently not stable and so this may change. Generally, I suspect that if you use something like the Elastic Common Schema (https://www.elastic.co/guide/en/ecs/master/ecs-log.html) then you should be broadly compatible going forward.
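For a feel of what that data model implies, here is one JSON log line shaped after its fields; the field names follow the draft data model linked above, while the values are invented:

```typescript
// Illustrative only: a single log record shaped after the draft
// OpenTelemetry log data model. Values are made up.
const record = {
  Timestamp: "2021-03-15T08:12:30.123Z", // time_unix_nano on the wire
  SeverityText: "ERROR",
  SeverityNumber: 17,                    // ERROR range in the spec
  Body: "payment gateway timed out",
  TraceId: "5b8efff798038103d269b633813fc60c",
  SpanId: "eee19b7ec3c1b174",
  Resource: { "service.name": "checkout" },
  Attributes: { "http.status_code": 504 },
};

console.log(JSON.stringify(record));
```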

ModifiedNodeDoesNotExistException in ODL while writing statistics to the operational datastore

The scenario: the application writes data directly to the device, and after 4 seconds writes it to the config datastore. In that time gap, statistics collection is triggered, which collects the data written and writes it to the operational datastore.
My question is whether the data should be present in the config datastore before the statistics are collected, or before the same data is written to the operational datastore.
You are not saying which ODL release you are using, nor (more importantly) which application features you have installed, but when you write "statistic collection triggered" that sounds related to the openflowplugin project? I wonder if https://git.opendaylight.org/gerrit/#/c/66207/, which was proposed just today (not yet reviewed and merged), has anything to do with your problem and may fix it. If not, you would need to provide more details about what you are writing to; as far as I understand, the ModifiedNodeDoesNotExistException basically just means that whatever you wanted to write to was concurrently removed in the meantime.

Multitenant app with single database

I'm developing a multitenant application using Laravel. I've read various blogs, posts, and sites about this, and I decided to do it with a single database.
So, I know that I only need to filter every query by tenant_id and that's it! But if I have to do it in every query, someday there will probably be an error, and I don't want to cause any information-security issue for my tenants.
I read a (probably old) article about it, culttt.com/2014/03/31/multi-tenancy-laravel-4, and found many concepts I still don't understand because I'm new to Laravel.
Is this approach still the best way to do it? Or does Laravel now have its own solution for this?
I'd like something similar to this: stackoverflow.com/questions/33219951/php-pdo-add-filter-to-all-queries, but from Eloquent. How can I do this?
Thanks.
If I were you I would not go this way. I would create a separate database for each client/app: it's a much safer solution, and if you ever need to create database backups or restore some client's data, it will be much simpler than dealing with one huge database that holds all your clients.
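If you do stay with a single database, the safest variant of the question's approach is to centralize the tenant filter so that no query can forget it; in Laravel that is what Eloquent global scopes are for. A minimal sketch of the idea in TypeScript, since the point is the pattern rather than the stack (table and column names are invented):

```typescript
// Every read goes through one wrapper that always appends the
// tenant filter -- callers cannot accidentally omit it. In Laravel
// the equivalent mechanism is an Eloquent global scope.
import { Client } from "pg";

class TenantDb {
  constructor(private db: Client, private tenantId: number) {}

  // `table` must come from a fixed whitelist, never from user input.
  async select(table: string): Promise<any[]> {
    const res = await this.db.query(
      `SELECT * FROM ${table} WHERE tenant_id = $1`,
      [this.tenantId]
    );
    return res.rows;
  }
}

// Usage: const orders = await new TenantDb(client, 42).select("orders");
```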

Application data in Sinatra

Say I have some objects that need to be created only once in the application yet accessed from within multiple requests. The objects are immutable. What is the best way to do this?
Store them in the session.
If you don't want to lose them after a server restart, then use a database (for example SQLite, which is a single file).
You want to persist your objects. Normally you'd do that with an ORM like ActiveRecord or DataMapper, depending on what is available to you. If you want something dead simple without migrations and you have access to a MongoDB, use MongoMapper.
If the object is used only for some time and then discarded (and recreated when needed again), use a caching mechanism like memcached or Redis.
If setting up such services is too heavy and you want to avoid it, and, say, you are using Debian/Ubuntu, then save your objects into files (with Marshal) under /dev/shm, which is effectively memory; see the sketch after this list.
If the structure of the data is complex, then go with SQLite as suggested above.
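A minimal sketch of that file-in-memory idea, translated from Ruby's Marshal to JSON in TypeScript; the path and object shape are invented for illustration:

```typescript
// "Serialize once, read from every request": the object is built at
// most once per boot and cached in /dev/shm, which is tmpfs (memory).
import { existsSync, readFileSync, writeFileSync } from "fs";

const CACHE = "/dev/shm/app-config.json"; // illustrative path

interface AppConfig { rates: Record<string, number>; }

function buildConfig(): AppConfig {
  // ...expensive one-time construction...
  return { rates: { EUR: 1.0, USD: 1.09 } };
}

// Called from request handlers.
function loadConfig(): AppConfig {
  if (!existsSync(CACHE)) {
    writeFileSync(CACHE, JSON.stringify(buildConfig()));
  }
  return JSON.parse(readFileSync(CACHE, "utf8"));
}
```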

Multiple programs updating the same database

I have a website developed with ASP.NET MVC, Entity Framework Code First and SQL Server.
The website has entities, each with a history of statuses that we defined (NEW, PACKED, SHIPPED, etc.).
The DB contains a table in which a completely separate system inserts parcel tracking data.
I have to read this tracking data and, following certain business rules, add to the existing status history of my entities.
The best way I can think of is to write an independent Windows service to poll the tracking data every so often and update my entity statuses from that. However, that makes me concerned about DB concurrency issues.
Please could someone advise me on the best strategy for this scenario?
Many thanks
There are different ways to do it; it also depends on the response time you need. If you need to update your system as soon as the tracking system updates a record, then a trigger is the preferred way. The alternative is to schedule a job that runs every 15-30 minutes and syncs the two systems.
As for the concurrency issue, you can use a concurrency-token field; Entity Framework has support for this.
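For reference, in Entity Framework the token is typically a column marked [Timestamp] (a SQL Server rowversion), and what it does under the hood is a conditional UPDATE. A minimal sketch of that underlying idea in TypeScript with node-postgres, since C# is outside this digest's sketches; the table and column names are invented:

```typescript
// Optimistic concurrency by hand: the UPDATE only succeeds if the
// row still carries the version we read. EF generates this check
// for [Timestamp]/IsConcurrencyToken columns and raises
// DbUpdateConcurrencyException when zero rows are affected.
import { Client } from "pg";

async function setStatus(
  db: Client, id: number, status: string, readVersion: number
): Promise<boolean> {
  const res = await db.query(
    `UPDATE entity_status
        SET status = $1, version = version + 1
      WHERE id = $2 AND version = $3`,
    [status, id, readVersion]
  );
  // 0 rows affected means another writer won the race: re-read & retry.
  return (res.rowCount ?? 0) > 0;
}
```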
