A small question regarding the TTL of a Cassandra table, with a Spring Boot app writing to it.
The setup is very simple: a Spring Boot app that wants to save a POJO/entity/object in Cassandra for a year, and perform an update on it once a month.
The data to be saved is very simple:
import java.util.UUID;

import org.springframework.data.cassandra.core.mapping.PrimaryKey;
import org.springframework.data.cassandra.core.mapping.Table;

@Table("tvplan") // maps the entity to the tvplan table
public class TvPlan {
    @PrimaryKey
    private UUID uuid;
    private String planName;
    private long planStartTime;
    private long planEndTime;
    private long lastMonthlyPaymentTime;
}
The table is created as follows:
CREATE TABLE tvplan (
    uuid uuid PRIMARY KEY,
    planName text,
    planStartTime bigint,
    planEndTime bigint,
    lastMonthlyPaymentTime bigint
) WITH default_time_to_live = 31536000; -- one year (365 days) in seconds
So, for example, when a TV plan starts on, let's say, January 1st 2022, I expect the plan to end on January 1st 2023 (it lasts one year); this is why I configured the table in Cassandra with "default_time_to_live = 31536000" (one year).
But as you can notice, the data tracks the last payment as well.
The issue observed is that every month, when we update the record with the last payment date, the data's TTL is actually "pushed back".
Meaning, every time I update the record, setting lastMonthlyPaymentTime for the month, the TTL is pushed to one year from the moment I perform the update.
The question: how can I create a table, or save the data, so that it is reliably deleted after a certain time, or put differently, so that the TTL is not pushed back every time there is an update on the record?
Thank you
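One way to keep a fixed expiry is to attach an explicit TTL to each update instead of relying on the table default: Cassandra applies TTLs per written cell, so an update can carry USING TTL with the seconds remaining until the plan's original end time. A minimal sketch with the DataStax Java driver, assuming a CqlSession named session and that planEndTime is epoch milliseconds as in the entity above (the helper method and its parameters are illustrative):

import java.util.UUID;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.SimpleStatement;

// Sketch: pin the updated cell's expiry to the plan's original end time,
// so the monthly payment update does not extend the row's life by a year.
class TvPlanPayments {
    static void recordMonthlyPayment(CqlSession session, UUID uuid, long planEndTime) {
        long now = System.currentTimeMillis();
        int remainingSeconds = (int) ((planEndTime - now) / 1000); // seconds left until the plan ends
        SimpleStatement stmt = SimpleStatement.newInstance(
                "UPDATE tvplan USING TTL ? SET lastMonthlyPaymentTime = ? WHERE uuid = ?",
                remainingSeconds, now, uuid);
        session.execute(stmt);
    }
}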
I have an entity with a field of type java.util.Date, and this field has the TIMESTAMP type in the Oracle DB.
The problem is that when I use the repository to find by the given specs, it returns nothing for the given date.
It only happens when I want the exact equal date; it works fine with greaterThan and lessThan, and that's because the times of the saved records are different.
How could I ignore the time of the records and fetch them only by the given date?
Thanks a million
I think you need to look at the @Temporal and @JsonFormat annotations. Link
Sample usage:
@Temporal(TemporalType.TIMESTAMP)
@JsonFormat(shape = JsonFormat.Shape.STRING, pattern = "yyyy-MM-dd'T'HH:mm:ss")
protected Date lastUpdate;
Because the data was saved with the time, the best way was to do it with a between condition from 00:00 to 23:59 of that day (sketched below). Special thanks to this link:
Spring Data Querying DateTime with only Date
But I solved the problem by creating a view of that table and removing the time from the date in Oracle.
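For reference, the between condition mentioned above can be expressed as a Spring Data JPA Specification along these lines (a sketch; the helper class and its dateField parameter are mine):

import java.time.LocalDate;
import java.time.ZoneId;
import java.util.Date;

import org.springframework.data.jpa.domain.Specification;

// Sketch: match every record whose timestamp falls on the given calendar day,
// i.e. dateField >= start-of-day and dateField < start-of-next-day.
class DaySpecifications {
    static <T> Specification<T> onDay(String dateField, LocalDate day) {
        return (root, query, cb) -> {
            Date start = Date.from(day.atStartOfDay(ZoneId.systemDefault()).toInstant());
            Date end = Date.from(day.plusDays(1).atStartOfDay(ZoneId.systemDefault()).toInstant());
            return cb.and(
                    cb.greaterThanOrEqualTo(root.<Date>get(dateField), start),
                    cb.lessThan(root.<Date>get(dateField), end));
        };
    }
}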
I need suggestions for designing tables and records in Oracle to handle business processes and statuses, and to report the times between statuses.
We have a transaction table that records a serially numbered record id, a document id, a date and time, and a status. Statuses reflect where a document is in the approval process, i.e. a task that needs to be done on a document. There are up to 40 statuses, showing both who needs to approve and what task is being done. So there is a document header or parent record, and multiple status records as child records.
The challenge is to analyze where the bottlenecks are, which tasks are taking the longest, etc.
From a business point of view, a task receives a document, and we have the date and time this happens. We do not have a release or finish date and time for the current task. All we have is the next task's start date and time. Note that a document can only have one status at a time.
For reasons I won't go into, we cannot use ETL to create an end date and time for a status, although I think that is the solution.
Part of the challenge is that statuses are not entirely consecutive, nor do they have a fixed order. Some statuses can start, stop, and later in the process start again.
What I would like to report, on a weekly or monthly basis, is the time that each status record takes: date-time end minus date-time start. Can anyone suggest a function or another way to accomplish this?
I don't need specific code. I could use some example in pseudo code or just in outline form of how to solve this. Then I could figure out the code.
You can use an after-insert and after-update trigger on the transaction table to record every change in a LOG_TABLE: transaction id, last status, new status, who approved, change date-time (maybe using the TIMESTAMP data type if fractional seconds matter), terminal, session ID, username.
For inserts, you need to define a special "insert status", different from the other 40 statuses. For example, if statuses are of a numeric type, the "insert status" can be "-1" (minus one), so the last status is "-1" and the new status is the status of the record inserted into the transaction table.
With this LOG_TABLE you can develop a package with functions to calculate the time between status changes, display all changes, display the last change, etc.
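To sketch the duration calculation in outline form (the event and field names here are made up): since a status's end time is simply the next status's start time for the same document, sorting the log rows by time and pairing consecutive rows yields one duration per status occurrence:

import java.time.Duration;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

// Sketch: derive per-status durations from ordered status-change events,
// where a status ends exactly when the next one begins.
class StatusDurations {
    record StatusEvent(String documentId, int status, Instant changedAt) {}
    record StatusDuration(String documentId, int status, Duration duration) {}

    static List<StatusDuration> durations(List<StatusEvent> eventsForOneDocument) {
        List<StatusEvent> sorted = new ArrayList<>(eventsForOneDocument);
        sorted.sort(Comparator.comparing(StatusEvent::changedAt));
        List<StatusDuration> result = new ArrayList<>();
        for (int i = 0; i + 1 < sorted.size(); i++) {
            StatusEvent current = sorted.get(i);
            StatusEvent next = sorted.get(i + 1); // next status's start = current status's end
            result.add(new StatusDuration(current.documentId(), current.status(),
                    Duration.between(current.changedAt(), next.changedAt())));
        }
        return result; // the latest status is still open and gets no duration
    }
}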
I have created a Hibernate entity which has an OffsetTime column. While saving a time value with an offset, e.g. (+05:00), it discards that offset and saves the time with the offset of the system timezone. Here is my column in the entity:
@Column(name = "start_time", nullable = false)
private OffsetTime startTime;
Then if I try to save "12:15+01:00", it saves "12:15+00:00" if my machine is in the UTC timezone; if my machine is in the IST timezone, it saves "12:15+05:00". I want it to save "12:15+01:00" irrespective of the machine's timezone.
Please suggest what I need to correct/review to ensure I fix this issue.
Thanks.
The question is how you set the value. You didn't show your setter method. You either parse it from a String or operate on some instance of Temporal that you convert to OffsetTime; in either case you will need to provide the desired offset if you don't want to use the default one. Say that you want to save the current time with a given offset. Then you would do this:
OffsetTime startTime = OffsetTime.of(LocalTime.now(), ZoneOffset.of("+05:00"));
See the method of(), as well as the other methods of OffsetTime.
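And if the value arrives as a String that already carries its offset, parsing keeps that offset regardless of the machine's timezone; a small self-contained example:

import java.time.OffsetTime;

class ParseExample {
    public static void main(String[] args) {
        // "12:15+01:00" parses to an OffsetTime that keeps the +01:00 offset,
        // independent of the JVM's default timezone.
        OffsetTime startTime = OffsetTime.parse("12:15+01:00");
        System.out.println(startTime); // prints 12:15+01:00
    }
}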
I want to persist an object with LocalDateTime fields using Spring Data, with Couchbase behind it, in a Spring Boot app.
Here is the field mapping:
@Field
private LocalDateTime start;
@Field
private LocalDateTime end;
When I save the object, the dates are stored as numbers in Couchbase.
Here is the stored data in Couchbase:
"start": 1518818215508,
So the problem is: if I store a LocalDateTime of, e.g., 10:00 and then read it from the DB, the result is 09:00 instead of 10:00, because my local offset is +01:00.
In Postgres I would save the date in a column with the mapping columnDefinition = "TIMESTAMP WITH TIME ZONE".
How can I solve this problem in Couchbase?
JSON (famously) has no date format (unlike relational databases which have a sprawling number of different formats). If you want to store timezone data in JSON, here are two options I can think of:
Using a string instead of a number, and then using something like ISO-8601.
Store the epoch time as you currently are, as a UTC value, but also store a separate number field that represents a timezone offset from UTC.
I would recommend going with the first approach.
I'm no Spring/Java expert, but a quick look at the documentation says you can do this by setting the system property org.springframework.data.couchbase.useISOStringConverterForDate to true.
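If that property doesn't apply to your setup, the same string-based idea can be wired up manually with custom Spring Data converters. A minimal sketch, assuming the standard @WritingConverter/@ReadingConverter registration mechanism (the class names are mine):

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.convert.WritingConverter;

// Sketch: persist LocalDateTime as an ISO-8601 string such as "2018-02-16T10:00:00",
// so no epoch/timezone arithmetic happens on the way in or out.
@WritingConverter
class LocalDateTimeToStringConverter implements Converter<LocalDateTime, String> {
    @Override
    public String convert(LocalDateTime source) {
        return source.format(DateTimeFormatter.ISO_LOCAL_DATE_TIME);
    }
}

@ReadingConverter
class StringToLocalDateTimeConverter implements Converter<String, LocalDateTime> {
    @Override
    public LocalDateTime convert(String source) {
        return LocalDateTime.parse(source, DateTimeFormatter.ISO_LOCAL_DATE_TIME);
    }
}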
I have a site with millions of users (well, actually it doesn't have any yet, but let's imagine), and I want to calculate some stats like "log-ins in the past hour".
The problem is similar to the one described here: http://highscalability.com/blog/2008/4/19/how-to-build-a-real-time-analytics-system.html
The simplest approach would be to do a select like this:
select count(distinct user_id)
from logs
where date>='20120601 1200' and date <='20120601 1300'
(of course other conditions could apply for the stats, like log-ins per country)
Of course this would be really slow, especially if there are millions (or even thousands) of rows, and I want to run this query every time a page is displayed.
How would you summarize the data? What should go to the (mem)cache?
EDIT: I'm looking for a way to de-normalize the data, or to keep the cache up to date. For example, I could increment an in-memory variable every time someone logs in, but that would only tell me the total number of logins, not the "logins in the last hour". Hope it's clearer now.
IMO the more correct approach here would be to implement a continuous calculation that holds the relevant counters in memory. Every time a user is added to your system you can fire an event, which can be processed in multiple ways to update last-hour, last-day, or even total-user counters. There are some great frameworks out there to do this sort of processing: Twitter Storm is one of them, another one is GigaSpaces XAP (disclaimer - I work for GigaSpaces), and specifically this tutorial, and also Apache S4 and GridGain.
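As a toy illustration of such a continuous calculation (my own sketch, independent of those frameworks), a fixed ring of per-minute buckets can answer "log-ins in the past hour" without touching the database; note it counts logins, so counting distinct users would additionally need a set per bucket:

// Sketch: a ring of 60 per-minute buckets; each login bumps the bucket for the
// current minute, and the past-hour total is the sum of the still-fresh buckets.
class SlidingHourCounter {
    private static final int MINUTES = 60;
    private final long[] counts = new long[MINUTES];
    private final long[] bucketMinute = new long[MINUTES]; // which minute each bucket holds

    public synchronized void recordLogin(long nowMillis) {
        long minute = nowMillis / 60_000;
        int slot = (int) (minute % MINUTES);
        if (bucketMinute[slot] != minute) { // bucket is from an older hour: recycle it
            bucketMinute[slot] = minute;
            counts[slot] = 0;
        }
        counts[slot]++;
    }

    public synchronized long loginsInPastHour(long nowMillis) {
        long nowMinute = nowMillis / 60_000;
        long sum = 0;
        for (int slot = 0; slot < MINUTES; slot++) {
            if (nowMinute - bucketMinute[slot] < MINUTES) { // within the last hour
                sum += counts[slot];
            }
        }
        return sum;
    }
}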
If you don't have a DB then never mind. I don't have millions of users, but I have a table with a year's worth of logons, about a million rows, and it does simple stats like that in under a second. A million rows is not that much for a database. You cannot make the date a PK as you can have duplicates. For minimal fragmentation and speed of insert, make the date a clustered non-unique ascending index, since that is the order the data comes in. Not sure what DB you have, but in MSSQL you can do this. An index on user_id is something to test: it would slow down inserts, as that is an index that will fragment. If you are looking at a fairly tight time span, a table scan might be OK.
Why distinct user_id, rather than just counting a login as a login?
Have a property that only runs the query every x seconds (even if that is every second) and reports the cached answer. If 200 pages hit that property in one second, you surely don't want 200 queries. And if the stat is one second stale for information covering the last hour, it is still a valid stat.
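That throttled "property" could look something like this (a sketch; the actual query is left as a supplier you plug in):

import java.util.function.LongSupplier;

// Sketch: serve a cached value and recompute it at most once per refresh interval,
// so 200 page hits in one second trigger one query, not 200.
class ThrottledStat {
    private final LongSupplier query;   // e.g. () -> runCountDistinctLoginsQuery()
    private final long refreshMillis;
    private long lastComputedAt = Long.MIN_VALUE;
    private long cachedValue;

    ThrottledStat(LongSupplier query, long refreshMillis) {
        this.query = query;
        this.refreshMillis = refreshMillis;
    }

    public synchronized long get() {
        long now = System.currentTimeMillis();
        if (now - lastComputedAt >= refreshMillis) {
            cachedValue = query.getAsLong(); // only one caller per interval pays this cost
            lastComputedAt = now;
        }
        return cachedValue;
    }
}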
I've ended up using Esper/NEsper. Also, Uri's suggestions were useful.
Esper allows me to compute real-time stats of data as it's being obtained.
If you're just running off of logs, you probably want to look at something like Splunk.
Generally, if you want this in-memory and fast (real time), you would create a distributed cache of login data with an eviction after e.g. 24 hours, and then you could query that cache for e.g. logins within the past hour.
Assuming a login record looks something like:
import java.io.Serializable;

public class Login implements Serializable {
    String userId;
    long loginTime;
    long lastSeenTime;
    long logoutTime;

    public Login(String userId, long loginTime) {
        this.userId = userId;
        this.loginTime = loginTime;
        this.lastSeenTime = loginTime;
    }
    public String getUserId() { return userId; }
    public long getLoginTime() { return loginTime; }
    public long getLastSeenTime() { return lastSeenTime; }
    public void setLastSeenTime(long lastSeenTime) { this.lastSeenTime = lastSeenTime; }
    public long getLogoutTime() { return logoutTime; }
    public void setLogoutTime(long logoutTime) { this.logoutTime = logoutTime; }
}
To support the eviction after 24 hours, simply configure an expiry (TTL) on the cache
<expiry-delay>24h</expiry-delay>
To query for all users currently logged in:
long oneHourAgo = System.currentTimeMillis() - 60*60*1000;
Filter query = QueryHelper.createFilter("loginTime > " + oneHourAgo
+ " and logoutTime = 0");
Set idsLoggedIn = cache.keySet(query);
To query for the number of logins and/or active users in the past hour:
long oneHourAgo = System.currentTimeMillis() - 60*60*1000;
Filter query = QueryHelper.createFilter("loginTime > " + oneHourAgo
+ " or lastSeenTime > " + oneHourAgo);
int numActive = cache.keySet(query).size();
(See http://docs.oracle.com/cd/E15357_01/coh.360/e15723/api_cq.htm for more info on queries. All these examples were from Oracle Coherence.)
For the sake of full disclosure, I work at Oracle. The opinions and views expressed in this post are my own, and do not necessarily reflect the opinions or views of my employer.