How to use Redis for checking the availability of a person - algorithm

Let's consider a meeting request where I can find out whether a person is available in a particular time slot.
Example: I need to check if a person is available for a meeting from 3:00 to 3:30. So if the person is busy from 2:30 to 3:01, the person is unavailable.
Question: how can I use the Redis cache here?
Do I need to store a cache entry for every minute of a user's day so the application can decide, or is there another way?

I'm not sure if Redis is the data store that I'd choose for this task.
Still, if you're willing to work at a 1-minute resolution, you could store the minutes in which a person is occupied in a Sorted Set of minutes, and then check whether a requested time range overlaps that person's scheduled appointments with the ZINTER command.
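The idea can be sketched in plain Python (no live Redis here: the per-user Sorted Set `busy:<user_id>` is modeled as a set of epoch minutes; each busy minute would be ZADDed with the minute as both member and score, and the overlap check maps to `ZCOUNT busy:<user_id> first last`, or to the ZINTER suggested above against a temporary set of the requested minutes). The key name and helper functions are illustrative, not an existing API:

```python
from datetime import datetime

def busy_minutes(start, end):
    """Epoch minutes in [start, end) -- the members you would ZADD to
    the per-user Sorted Set busy:<user_id> (minute as member and score)."""
    first = int(start.timestamp()) // 60
    last = int(end.timestamp()) // 60   # the end minute itself is exclusive
    return set(range(first, last))

def is_available(busy, start, end):
    """True if no busy minute falls inside the requested slot.
    In Redis: ZCOUNT busy:<user_id> first last-1 == 0, or a ZINTER
    against a temporary set holding the requested minutes."""
    return not (busy & busy_minutes(start, end))

# Busy from 2:30 to 3:01 means minute 3:00 is taken, so a 3:00-3:30
# meeting conflicts, exactly as in the question's example.
busy = busy_minutes(datetime(2023, 5, 1, 14, 30), datetime(2023, 5, 1, 15, 1))
print(is_available(busy, datetime(2023, 5, 1, 15, 0), datetime(2023, 5, 1, 15, 30)))  # False
print(is_available(busy, datetime(2023, 5, 1, 15, 1), datetime(2023, 5, 1, 15, 30)))  # True
```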

Related

Updating the Same Record Multiple Times at the Same Time on Oracle

I have a Voucher Management System, and I also have a campaign on that system that works on a first-click, first-served basis. In the first minutes of the campaign, many people try to get vouchers, and every time they do, the same mechanism shown in the design image runs.
While updating the number of given vouchers, I noticed a bottleneck: all of these updates try to update the same row within a short window of time. Because of that, transactions queue up, waiting for the current update.
After the campaign, I saw that some updates had waited for 10 seconds. How can I solve this?
First, I tried to minimize query execution time, but it is already a simple query.
Assuming everyone is doing something like:
SELECT ..
FROM voucher_table
WHERE <criteria indicating the first order>
FOR UPDATE
then obviously everyone (except one person) is going to queue up until the first person commits. Effectively you end up with a single-user system.
You might want to check out the SKIP LOCKED clause, which allows a process to attempt to lock an eligible row but still skip over it to the next eligible row if the first one is locked.
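A sketch of that pattern in Oracle, using a cursor so that SKIP LOCKED takes effect as rows are fetched (the table and column names here are assumptions, not your real schema):

```sql
DECLARE
  CURSOR c IS
    SELECT voucher_id
    FROM   voucher_table
    WHERE  claimed = 'N'          -- hypothetical "still available" flag
    FOR UPDATE SKIP LOCKED;       -- rows are locked on fetch; locked rows are skipped
  l_id voucher_table.voucher_id%TYPE;
BEGIN
  OPEN c;
  FETCH c INTO l_id;              -- grabs the first row no other session holds
  IF c%FOUND THEN
    UPDATE voucher_table
       SET claimed = 'Y'
     WHERE CURRENT OF c;
  END IF;
  CLOSE c;
  COMMIT;
END;
```

Each session then claims a different voucher row instead of queueing on the same one.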

Is there any way to track how much rent was paid, given an account address and timestamp, in Solana?

I'm new to Solana. Currently, I'm working on an app that lets users track their wallet's historical balance and transactions.
For example, given an account and a time range, the app calculates the opening and closing balances and how much SOL was sent and received during that range. Since the RPC does not support such features, I fetch all the historical transactions of an account and, instead of using the preBalance and postBalance values returned directly by the RPC, I try to calculate the historical balance from every transaction. (I use the absolute value of the difference between preBalance and postBalance to get the transfer amount in each transaction, so that I can get the sent and received values.) I found that in Solana the rent does not show up in the transaction, which causes balance calculation errors.
I'd like to know if there is any way to track how much rent was paid, given an account address and timestamp, in Solana? I tried googling it and didn't find a solution.
Any comments and suggestions will be appreciated.
Unless I'm misunderstanding the question, the rent-exempt balances are included in transactions. For example, here's a transaction creating a USDC account: https://explorer.solana.com/tx/32oAkYzp47zF7DiPRFwMKLcknt6rhu43JW2yAfkEc2KgZpX35BoVeDBUs4kkiLWJ4wqoEFspndvGdUcB215jY931?cluster=testnet
There, you'll see that the new token account 2XBTsdaRTYdmsqLXRjjXonbVHCwvvGfHjBRfTXPcgnsS received 0.00203928 SOL, and the funding account 4SnSuUtJGKvk2GYpBwmEsWG53zTurVM8yXGsoiZQyMJn lost 0.00204428 SOL, which is higher since it paid for the transaction.
Roughly speaking, if you go through all of a wallet's transactions, you can tell that a payment was for rent-exemption if the destination account had 0 SOL to start and the wallet paid for it. Note that this isn't perfect, since a lot of balances can move in a single transaction!
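A minimal sketch of that heuristic in Python, working directly on the `accountKeys`, `preBalances`, and `postBalances` arrays that the `getTransaction` RPC call returns (the lamport amounts below are adapted from the explorer example above; the function names are made up):

```python
def balance_deltas(account_keys, pre_balances, post_balances):
    """Per-account lamport change within one transaction."""
    return {key: post - pre
            for key, pre, post in zip(account_keys, pre_balances, post_balances)}

def probable_rent_funding(account_keys, pre_balances, post_balances, wallet):
    """Accounts that started at 0 lamports and were funded while the wallet's
    balance went down -- likely rent-exempt deposits the wallet paid for.
    Imperfect, as noted above: many balances can move in one transaction."""
    deltas = balance_deltas(account_keys, pre_balances, post_balances)
    if deltas.get(wallet, 0) >= 0:
        return {}                 # the wallet paid nothing in this transaction
    return {key: post - pre
            for key, pre, post in zip(account_keys, pre_balances, post_balances)
            if key != wallet and pre == 0 and post > 0}

# Mirrors the explorer example: the funder lost 0.00204428 SOL while the new
# token account gained 0.00203928 SOL (the difference being the fee).
keys = ["4SnSuUtJGKvk2GYpBwmEsWG53zTurVM8yXGsoiZQyMJn",
        "2XBTsdaRTYdmsqLXRjjXonbVHCwvvGfHjBRfTXPcgnsS"]
pre  = [10_000_000, 0]
post = [7_955_720, 2_039_280]
print(probable_rent_funding(keys, pre, post, keys[0]))
```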

High Level Design for billing cycle closure

I need to build a system for billing cycle closure for millions of users. The system behaves like a credit card billing cycle. Its main aim is to close the previous billing cycle (the period between opening and closing a billing statement) and open a new one, so that new transactions get attached to it. However, the user is free to change their cycle at any time, and the change takes immediate effect, i.e., it affects the currently running cycle.
The approach I was thinking of is:
1. At 00:00:00 every day, run a cron job that executes a function.
2. This function updates all settlements whose closing date is yesterday's date to closed.
3. Going through this updated list, construct new settlements (open and close dates) based on each user's configuration for the settlement cycle.
4. Insert the newly constructed settlements into the table.
The problem I see here is that this is not scalable. As my user base grows, there are a lot of write operations (steps 2 and 4) and a long loop (step 3) to run. So I was wondering: how does credit card billing cycle closure actually work?
Tech stack:
DB - PostgreSQL
Programming Language - Node.js / Java
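The per-user part of step 3 is mostly date math. A sketch in Python, assuming each user's configuration is a day-of-month on which their cycle closes (the configuration shape and function names are assumptions):

```python
import calendar
from datetime import date, timedelta

def clamp_day(year, month, day):
    """Clamp a day-of-month to the month's real length (e.g. 31 -> Feb 28)."""
    return date(year, month, min(day, calendar.monthrange(year, month)[1]))

def next_cycle(closed_on, cycle_day):
    """Next settlement window after a cycle that closed on `closed_on`,
    for a user configured to close on day-of-month `cycle_day`.
    Returns (open_date, close_date), both inclusive."""
    open_date = closed_on + timedelta(days=1)
    close_date = clamp_day(open_date.year, open_date.month, cycle_day)
    if close_date <= open_date:           # this month's close day already passed
        year = open_date.year + (open_date.month == 12)
        month = open_date.month % 12 + 1
        close_date = clamp_day(year, month, cycle_day)
    return open_date, close_date

print(next_cycle(date(2023, 1, 15), 15))
# (datetime.date(2023, 1, 16), datetime.date(2023, 2, 15))
```

Steps 2 and 4 can then stay set-based in PostgreSQL (one `UPDATE` for yesterday's closing settlements, one multi-row `INSERT` for the new windows), so only this per-user computation runs in application code.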

How to process a logic or job periodically for all users in a large scale?

I have a large set of users in my project like 50m.
I should create a playlist for each user every day. To do this, I currently use this method:
I have a column in my users table that holds the latest time a playlist was created for that user; I named it last_playlist_created_at.
I run a query on the users table that selects the top 1000 users whose last_playlist_created_at is more than one day old, sorted in ascending order by last_playlist_created_at.
After that, I run a foreach over the result and publish a message for each user to my message broker.
Behind the message broker, I run around 64 workers that process the messages (create a playlist for the user) and update last_playlist_created_at in the users table.
When the message broker's queue is empty, I repeat these steps (a while / do-while loop).
I think the processing method is good enough and can scale as well,
but the method we use to create a message for each user is not scalable!
What should I do to dispatch such a large set of messages, one per user?
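The steps above can be sketched with in-memory stand-ins for the users table and the broker (`publish` is a stub; the real version would be the SELECT plus a broker publish, with the worker doing the timestamp update):

```python
from datetime import datetime, timedelta

ONE_DAY = timedelta(days=1)

def due_batch(users, now, batch_size=1000):
    """Users whose playlist is more than a day old, oldest first."""
    due = [u for u in users if now - u["last_playlist_created_at"] > ONE_DAY]
    due.sort(key=lambda u: u["last_playlist_created_at"])
    return due[:batch_size]

def dispatch_all(users, now, publish, batch_size=1000):
    """Repeat select-and-publish until no user is due (the while loop above).
    The worker-side timestamp update is inlined here for brevity."""
    published = 0
    while True:
        batch = due_batch(users, now, batch_size)
        if not batch:
            return published
        for user in batch:
            publish(user["id"])                      # worker builds the playlist...
            user["last_playlist_created_at"] = now   # ...and stamps the user row
            published += 1

now = datetime(2023, 5, 1)
users = [{"id": i, "last_playlist_created_at": now - ONE_DAY * 3} for i in range(5)]
sent = []
print(dispatch_all(users, now, sent.append, batch_size=2))  # 5
```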
OK, so my answer is based entirely on your comment where you mentioned that you use while(true) to check whether a playlist needs to be updated, which does not seem ideal.
Although this is a design question and there are multiple solutions, here's how I would solve it.
First up, think of updating the playlist for a user as a job.
Now, in your case this is a scheduled job, i.e. once a day.
So, use a scheduler to schedule the next job time.
Write a scheduled-job handler to push jobs to a message queue. This part exists to handle multiple jobs at the same time and lets you control the flow.
Generate the playlist for the user based on the job, then create a schedule event for the next day.
You could persist the scheduled-job data to avoid race conditions.
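A toy version of that schedule -> queue -> handler -> reschedule loop, with a simulated clock (the class and method names are made up for illustration; in production the heap would be a persistent scheduler and the list a real message queue):

```python
import heapq
from datetime import datetime, timedelta

class PlaylistScheduler:
    """Minimal sketch of the scheduled-job flow described above."""
    def __init__(self):
        self.schedule = []            # min-heap of (run_at, user_id)
        self.queue = []               # stands in for the message queue

    def schedule_job(self, user_id, run_at):
        heapq.heappush(self.schedule, (run_at, user_id))

    def pump(self, now):
        """Scheduled-job handler: move every due job onto the queue."""
        while self.schedule and self.schedule[0][0] <= now:
            _, user_id = heapq.heappop(self.schedule)
            self.queue.append(user_id)

    def work(self, now, build_playlist):
        """Worker: drain the queue, build playlists, reschedule for tomorrow."""
        while self.queue:
            user_id = self.queue.pop(0)
            build_playlist(user_id)
            self.schedule_job(user_id, now + timedelta(days=1))

s = PlaylistScheduler()
now = datetime(2023, 5, 1)
for uid in (1, 2, 3):
    s.schedule_job(uid, now)
s.pump(now)
built = []
s.work(now, built.append)
print(built)                 # [1, 2, 3]
print(s.schedule[0][0])      # 2023-05-02 00:00:00 (rescheduled for tomorrow)
```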

Scaling message queues with lots of API calls

I have an application where some of my users' actions must be retrieved via a 3rd-party API.
For example, let's say I have a user who can receive tons of phone calls. The call records should be updated often because my users want to see their call history, so I should do this "almost in real time". The way I manage this is to retrieve, every 10 minutes, the list of all my logged-in users and, for each user, enqueue a task that retrieves the call records from the timestamp of the latest saved record up to the current timestamp and saves them to my database.
This doesn't seem to scale well, because the more users I have, the more connected users I'll have, and the more tasks I'll enqueue.
Is there any other approach to achieve this?
Seems straightforward with a background queue of jobs. It is unlikely that all users use the system at the same rate, so queue jobs based on their usage, with a fallback to daily.
At some point you will likely need more workers taking jobs from the queue, and then multiple queues, so that if you had a thousand users, the ones with a later queue slot are not always waiting.
It also depends on how fast you need this updated and on the API call limits.
There will be some sort of limit, so I suggest you start by committing to updates with a 4-hour or 1-hour delay, to always leave yourself some headroom, and then work on improving that to a sustainable level.
Make sure your users are shown your stored data and cached API responses, not live API call data, in case the API goes away.
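That usage-based scheduling can be sketched like this (the thresholds are made-up knobs; the point is only that heavier users get shorter polling intervals, with the daily fallback for everyone else):

```python
from datetime import timedelta

def poll_interval(calls_last_day):
    """Pick how often to poll a user's call records from the 3rd-party API,
    based on how active they were over the last day. Threshold values are
    illustrative, to be tuned against the API's rate limits."""
    if calls_last_day >= 50:
        return timedelta(minutes=10)   # heavy users: near real time
    if calls_last_day >= 5:
        return timedelta(hours=1)
    if calls_last_day >= 1:
        return timedelta(hours=4)
    return timedelta(days=1)           # the daily fallback

print(poll_interval(120))  # 0:10:00
print(poll_interval(0))    # 1 day, 0:00:00
```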
