Kafka Streams with large sliding windows - apache-kafka-streams

I need to show usage stats at any point in time for the last 3 months, 6 months and 1 year. I am planning to use KStream sliding windows for the durations mentioned above. Most of the examples I see use durations in minutes or seconds. Is it OK to use much bigger time durations for sliding windows? Is there any performance impact? Is there any specific configuration I should use to get optimum performance?
Thanks,
Jinu

It will really depend on the density of the data and what kind of aggregations you are doing. You could end up with a very large number of windows updating and not closing, since the end time is so far out. Also, if the load is too heavy, I am not sure the state stores could handle it. But with the correct load and retention times I don't see an obvious reason it wouldn't work.
Edit: If you do end up trying it, I would be very interested in seeing how it works out.
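For anyone landing here later, this is roughly what such a topology could look like in the Java DSL. It is only a sketch: the topic name "usage-events", the String keys and the one-hour grace period are assumptions, and daily-advancing 90-day hopping windows stand in for the question's "sliding" windows (repeat with 180 and 365 days for the longer views). The configuration point that definitely matters is that the windowed store's retention must be at least window size plus grace, which for year-long windows means a lot of state in the local stores and changelog topics:
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.TimeWindows;
import org.apache.kafka.streams.state.WindowStore;

public class UsageStatsTopology {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Hypothetical input topic: usage events keyed by user id.
        builder.stream("usage-events", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey()
               // A 90-day window advancing once a day: "last 3 months as of any day".
               // The one-hour grace period for late events is an assumption.
               .windowedBy(TimeWindows.ofSizeAndGrace(Duration.ofDays(90), Duration.ofHours(1))
                                      .advanceBy(Duration.ofDays(1)))
               // Retention must cover window size + grace, otherwise Streams rejects
               // the topology; for year-long windows this is a lot of state to keep.
               .count(Materialized.<String, Long, WindowStore<Bytes, byte[]>>as("usage-90d")
                                  .withRetention(Duration.ofDays(91)));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "usage-stats");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        new KafkaStreams(builder.build(), props).start();
    }
}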

Related

Why is real time much higher than "user" and "system" CPU time combined?

We have a batch process that executes every day. This week, a job that usually does not take more than 18 minutes of execution time (real time) is now taking more than 45 minutes to finish.
The FULLSTIMER option is already active, but we don't know why only the real time has increased.
In older documentation there are FULLSTIMER stats that could help identify the problem, but they do not appear in the batch log (stats such as Page Faults, Context Switches, Block Operations and so on).
It might be an I/O issue. Does anyone know how we can identify whether it is really an I/O problem or some other issue (the network, for example)?
To be more specific, this is one of the queries whose run time has increased dramatically. It is reading from a database (SQL Server, VAULT schema) and from WORK, and writing to the WORK directory.
The number of observations is almost the same.
We asked the customer about any change in network traffic, and they said it is still the same.
Thanks in advance.
For a process to complete, much more needs to be done than the actual calculations on the CPU.
Your data has to be read and your results have to be written.
You might have to wait for other processes to finish first, and if your process includes multiple steps, writing to and reading from disk each time, you will have to wait for the CPU each time too.
In our situation, if real time is much larger than CPU time, we usually see a lot of traffic to our Network File System (NFS).
As a programmer, you might notice that storing intermediate results in WORK is more efficient than on remote libraries.
You might save a lot of time by creating intermediate results as views instead of tables, IF you only use them once. That is not only possible in SQL, but also in data steps like this:
data MY_RESULT / view=MY_RESULT;
set MY_DATA;
where transaction_date between '1jan2022'd and '30jun2022'd;
run;

Redis Stream Managing a time frame

I am writing events into Redis Stream.
But I would like to keep only the last 48 hours of events.
According to the Redis documentation, I saw that I can manage my stream size only using MAXLEN, which takes effect based on record count and not on a time frame.
Is there any way I can use XADD but trim away records older than the last 48 hours?
Thanks for the help!
This is not yet clear. I don't like the vanilla way of time-capping a stream, that is, "trim by <seconds>", because it means that if there is a delay in the process XADD-ing items, a later XADD will have to evict items covering potentially many seconds, causing latency spikes. Moreover, it does not make a lot of sense semantically. Your real "capped resource" is memory, so it's not really so important how many items you want to store in the past vs. how many items you can store, so a limit on the number of items makes more sense. Yet in certain applications where there are multiple streams with insertion rates that vary a lot between different producers, it makes sense to cap by time, to avoid wasting memory on producers that emit very few entries per unit of time. Maybe at some point I'll add some "best effort" time capping that does not do more work than a given amount, but that will eventually be able to trim the stream, given enough XADD calls.
AFAIK, not yet. There were discussions about adding a timestamp cap (to XADD, and possibly to XTRIM as well), but it doesn't look like this feature has been implemented in the latest release candidates.
A possible solution in Node.js, based on trimming to a specified key (not on time per se):
https://gist.github.com/jakelowen/22cb8a233ac0cdbb8e77808e17e0e1fc
Proof of concept. Not battle tested.
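In the meantime, one workaround is to trim by hand: stream entry IDs begin with a millisecond timestamp, so a periodic job can delete everything whose ID is older than now minus 48 hours. Below is a rough sketch using the Jedis client; the stream key "events" and the batch size are assumptions, and like the gist above it is a proof of concept, not battle tested:
import java.util.List;
import java.util.Map;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.StreamEntry;
import redis.clients.jedis.StreamEntryID;

public class StreamTimeTrim {
    private static final String STREAM = "events";               // assumed stream key
    private static final long MAX_AGE_MS = 48L * 60 * 60 * 1000; // 48 hours

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Producer side as usual: let Redis assign the ID (timestamp-sequence).
            jedis.xadd(STREAM, StreamEntryID.NEW_ENTRY, Map.of("payload", "..."));

            // Housekeeping side: run this periodically, e.g. from a scheduled job.
            trimOlderThan(jedis, System.currentTimeMillis() - MAX_AGE_MS);
        }
    }

    static void trimOlderThan(Jedis jedis, long cutoffMs) {
        StreamEntryID start = new StreamEntryID(0, 0);
        StreamEntryID cutoff = new StreamEntryID(cutoffMs, 0);
        List<StreamEntry> old;
        // Delete in small batches so a big backlog never blocks the server for long.
        while (!(old = jedis.xrange(STREAM, start, cutoff, 100)).isEmpty()) {
            StreamEntryID[] ids = old.stream().map(StreamEntry::getID)
                                     .toArray(StreamEntryID[]::new);
            jedis.xdel(STREAM, ids);
        }
    }
}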

GAE variance in performance, especially datastore reads

We've seen an overall degradation in performance over what seems like the last year or so, and also a degradation the first time a given user's data is accessed on a given day.
We've identified a specific datastore query that often returns entities at a rate of about 50 ms each but can degrade to about 500 ms:
users = User.get_by_id(usersToGet_IntArray)
Since it seems like the front end is mostly waiting on the server, it doesn't seem like a faster machine class would help. We're accessing the datastore natively, using Python. Any idea what we can do to consistently get performance on the better end of the range? Thanks.
While I don't know why, it looks like we found a super simple (magic) fix. By setting
threadsafe: false
in app.yaml -- i.e. going back to running our handlers in single-threaded mode -- our average performance is now faster than the best-case performance numbers we were seeing before (for the aforementioned query, entities are returned in 12 ms consistently).
And somehow, also, while the number of loading instances has doubled, the number of billable instances hasn't increased. So it looks like it's not going to increase cost.

Specific Cache Hit Rate calculation

Scenario:
Suppose we have an infinite cache memory size. Caching is limited only by a timeout, and the value of this timeout is half an hour. The cache is initially empty.
Problem:
We have 50,000 distinct requests. Our system queries randomly at a rate of 15 requests/second, i.e. 27,000 requests in half an hour. What kind of curve or average value of cache hit rate could we expect for the first 5 hours?
Note: This scenario is fixed. I need an approach to work out the hit rate. If you think the tag is wrong, please suggest an appropriate tag.
I think you're right and this is a math question (certainly not a programming problem).
One approach is to consider the extremes: what is the hit rate for the first query when the system starts running? For the second query? After one second? After 10? After a minute? And what is the likelihood that any random query will be found in the cache once the system has been running a long time?
These are a few specific values, and together they give you a curve.
I don't think great numeric precision is necessary; the long-term average and the shape of the curve are more interesting.
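If you want to sanity-check that long-term average and curve, a few lines of simulation are enough. The sketch below assumes uniformly random, independent requests and that the half-hour timeout runs from the moment an entry is cached (the question does not pin either down). Under those assumptions each cached entry sees on average 1800 x 15 / 50000 = 0.54 hits before it expires, so in the long run the hit rate should settle somewhere around 0.54 / 1.54, roughly 35%:
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Toy simulation of the scenario above: 50,000 distinct keys, 15 uniformly
// random requests per second, entries expiring 30 minutes after being cached.
// Prints the hit rate for each 30-minute slice of the first 5 hours.
public class CacheHitRateSim {
    public static void main(String[] args) {
        final int KEYS = 50_000;
        final int REQ_PER_SEC = 15;
        final int TIMEOUT_SEC = 30 * 60;
        final int HORIZON_SEC = 5 * 60 * 60;

        Map<Integer, Integer> cachedAt = new HashMap<>(); // key -> second it was cached
        Random rnd = new Random(1);
        int hits = 0, total = 0;

        for (int t = 0; t < HORIZON_SEC; t++) {
            for (int r = 0; r < REQ_PER_SEC; r++) {
                int key = rnd.nextInt(KEYS);
                Integer since = cachedAt.get(key);
                if (since != null && t - since < TIMEOUT_SEC) {
                    hits++;                      // still inside the 30-minute timeout
                } else {
                    cachedAt.put(key, t);        // miss: fetch and cache now
                }
                total++;
            }
            if ((t + 1) % TIMEOUT_SEC == 0) {    // report every half hour
                System.out.printf("after %d min: hit rate %.3f%n",
                        (t + 1) / 60, (double) hits / total);
                hits = 0;
                total = 0;
            }
        }
    }
}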

Spreading out data from bursts

I am trying to spread out data that is received in bursts. This means I have data that is received by some other application in large bursts. For each data entry I need to do some additional requests on some server, where I should limit the traffic. Hence I try to spread out the requests over the time I have until the next data burst arrives.
Currently I am using a token bucket to spread out the data. However, because the data I receive is already badly shaped, I am still either filling up the queue of pending requests, or I get spikes whenever a burst comes in. So this algorithm does not seem to do the kind of shaping I need.
What other algorithms are there for limiting the requests? I know I have times of high load and times of low load, so both should be handled well by the application.
I am not sure if I was really able to explain the problem I am currently having. If you need any clarifications, just let me know.
EDIT:
I'll try to clarify the problem some more and explain why a simple rate limiter does not work.
The problem lies in the bursty nature of the traffic and the fact that bursts have a different size at different times. What is mostly constant is the delay between bursts. Thus we get a bunch of data records for processing and need to spread them out as evenly as possible before the next bunch comes in. However, we are not 100% sure when the next bunch will arrive, just approximately, so simply dividing the available time by the number of records does not work as it should.
Rate limiting does not work, because the spread of the data is not sufficient that way. If we are close to saturating the rate, everything is fine and we spread out evenly (although this should not happen too frequently). If we are below the threshold, the spreading gets much worse though.
I'll give an example to make the problem clearer:
Let's say we limit our traffic to 10 requests per second and new data comes in about every 10 seconds.
When we get 100 records at the beginning of a time frame, we will query 10 records each second and have a perfectly even spread. However, if we get only 15 records, we'll have one second where we query 10 records, one second where we query 5 records and 8 seconds where we query 0 records, so we have very unequal levels of traffic over time. Instead it would be better if we just queried 1.5 records each second. However, setting this rate would also cause problems, since new data might arrive earlier, so we do not have the full 10 seconds and 1.5 queries per second would not be enough. If we use a token bucket, the problem actually gets even worse, because token buckets allow bursts through at the beginning of the time frame.
However, this example oversimplifies things, because we cannot actually tell the exact number of pending requests at any given moment, only an upper limit. So we would have to throttle each time based on this number.
This sounds like a problem within the domain of control theory. Specifically, I'm thinking a PID controller might work.
A first crack at the problem might be dividing the number of records by the estimated time until next batch. This would be like a P controller - proportional only. But then you run the risk of overestimating the time, and building up some unsent records. So try adding in an I term - integral - to account for built up error.
I'm not sure you even need a derivative term, if the variation in batch size is random. So try using a PI loop - you might build up some backlog between bursts, but it will be handled by the I term.
If it's unacceptable to have a backlog, then the solution might be more complicated...
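For what it's worth, a bare-bones version of that PI idea could look like the sketch below. The class and method names are made up, the 10-second burst spacing and the 10 requests/second cap come from the question's example, and the gains kP and kI are arbitrary starting points to tune. Each second it aims to send backlog divided by the estimated time remaining, and the integral term carries over whatever rounding or the rate cap left behind, which is how an average rate like 1.5 requests per second can emerge:
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PiPacer {
    private final AtomicInteger backlog = new AtomicInteger(); // records waiting to be sent
    private double integral = 0.0;                             // accumulated shortfall

    private final double estSecondsToNextBurst = 10.0; // from the question's example
    private final int maxPerSecond = 10;                // hard rate limit from the question
    private final double kP = 1.0;                      // made-up gains; tune for your load
    private final double kI = 0.5;

    public void onBurst(int records) {
        backlog.addAndGet(records);
    }

    public void start() {
        ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
        ses.scheduleAtFixedRate(this::tick, 1, 1, TimeUnit.SECONDS);
    }

    private void tick() {
        int pending = backlog.get();
        double desired = pending / estSecondsToNextBurst; // P: spread evenly over the gap
        double planned = kP * desired + kI * integral;    // PI output for this second
        int toSend = (int) Math.max(0,
                Math.min(Math.min(Math.round(planned), pending), maxPerSecond));
        if (toSend > 0) {
            backlog.addAndGet(-toSend);
            send(toSend);
        }
        integral += desired - toSend; // carry fractional/clipped amounts to later ticks
    }

    private void send(int n) {
        // Placeholder for issuing n requests against the rate-limited server.
        System.out.println("sending " + n + " request(s)");
    }

    public static void main(String[] args) {
        PiPacer pacer = new PiPacer();
        pacer.start();
        pacer.onBurst(15); // the 15-record burst from the question's example
    }
}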
If there are no other constraints, what you should do is figure out the maximum rate at which you are comfortable sending the additional requests, and limit your processing speed according to that. Then monitor what happens. If that gets through all of your requests quickly, then there is no harm. If its sustained level of processing is not fast enough, then you need more capacity.
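If you take that simpler route, it does not need much more than a queue that the bursts are dumped into, a loop that drains it at the fixed comfortable rate, and a log of the backlog so you can see whether that rate keeps up. The sketch below uses the 10 requests/second cap and 15-record burst from the question's example; all names are illustrative:
import java.util.concurrent.LinkedBlockingQueue;

public class FixedRateDrain {
    // Bursts from the upstream application land here.
    private static final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private static final int MAX_PER_SECOND = 10; // the "comfortable" rate you settled on

    public static void main(String[] args) throws InterruptedException {
        // Simulated burst; in reality the receiving code would offer() records here.
        for (int i = 0; i < 15; i++) queue.offer("record-" + i);

        while (true) {
            int sent = 0;
            while (sent < MAX_PER_SECOND && !queue.isEmpty()) {
                String item = queue.poll();
                // Issue the additional request for `item` here.
                sent++;
            }
            // Monitoring: if this backlog keeps growing, the fixed rate is too low.
            System.out.println("sent " + sent + ", backlog " + queue.size());
            Thread.sleep(1000);
        }
    }
}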

Resources