I have a database with the total played game time in seconds. I want to fetch these seconds from the database, add the current session play time in seconds and then update the database.
This should happen every 5 seconds. I have done this, but because I do currentSession + totalTimePlayedDB it keeps adding the full duration of my current session over and over... Any ideas?
local currentPlayTime = player:TimeConnected()
print(math.Round(currentPlayTime))

local playerValues = MySQLite.queryValue([[SELECT time FROM chiz_time WHERE sid=']] .. player:SteamID() .. [[']], function(time)
    if time == "" then
        time = math.Round(currentPlayTime)
    else
        time = math.Round(time + time - currentPlayTime)
    end
    MySQLite.query([[UPDATE chiz_time SET time = ']] .. time .. [[' WHERE sid=']] .. player:SteamID() .. [[']])
end)
You just need to compute the delta from your last save time.
In your init code somewhere:
lastSaveTime = 0
In your save routine:
totalTimePlayedDB = totalTimePlayedDB + (currentSession - lastSaveTime)
if writeSucceeded then -- writeSucceeded is a stand-in: in practice, run this in the write's success callback
    lastSaveTime = currentSession
end
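Applied to the code in the question, a minimal sketch might look like this (assumptions: timer.Create from Garry's Mod is available, MySQLite.query accepts a success callback, and lastSaveTime is tracked per player; letting SQL do the addition avoids the read-modify-write entirely):

local lastSaveTime = 0 -- reset whenever the player connects

timer.Create("SavePlayTime_" .. player:SteamID(), 5, 0, function()
    local currentSession = math.Round(player:TimeConnected())
    local delta = currentSession - lastSaveTime
    -- add only the seconds played since the last successful save
    MySQLite.query([[UPDATE chiz_time SET time = time + ]] .. delta ..
        [[ WHERE sid = ']] .. player:SteamID() .. [[']],
        function()
            lastSaveTime = currentSession -- advance only after the write succeeds
        end)
end)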
I get different answers depending on which watch the following code is run on
val prefs = getSharedPreferences(MY_PREFS_NAME, MODE_PRIVATE)
val appClosedTime = prefs.getLong(KEY_APP_CLOSE_TIME, System.currentTimeMillis()) // default to now
val elapsedTime = System.currentTimeMillis() - appClosedTime // in milliseconds
The above code gives the right answer on both the Samsung Galaxy Watch 4 (Wear OS 3) and a Google emulator (Wear Round API 30).
Unfortunately, when the same code is run on the TicWatch (Wear OS 2), the elapsedTime includes the local time offset from UTC.
The following code can be used to adjust for the difference:
val prefs = getSharedPreferences(MY_PREFS_NAME, MODE_PRIVATE)
val appClosedTime = prefs.getLong(KEY_APP_CLOSE_TIME, System.currentTimeMillis()) // default to now
val currentSystemTime = GregorianCalendar()
val localTimeOffsetFromUTC = currentSystemTime.timeZone.getOffset(appClosedTime)
val elapsedTime = System.currentTimeMillis() - (appClosedTime - localTimeOffsetFromUTC)
My code currently checks which watch I am using and makes the adjustment; however, this 'hack' is clearly impractical, since it would have to be tested on each and every watch!
I am hoping someone can suggest something I am missing / overlooking / doing wrong?
Code snippets and further information
The documentation for System.currentTimeMillis() says it returns the difference, measured in milliseconds, between the current time and midnight, January 1, 1970 UTC.
appCloseTime is stored in onStop():
val editor = getSharedPreferences(MY_PREFS_NAME, MODE_PRIVATE).edit()
editor.putLong(KEY_APP_CLOSE_TIME, System.currentTimeMillis())
editor.apply()
I am using Observable.interval to schedule code execution at specified times:
let obs = Observable.interval(50).subscribe(() => {
    console.log(this.currentFrame + " " + new Date().getTime());
    this.currentFrame++;
});
This is the output. As you can see, after 6 iterations I already have a 10 ms drift. How can I use Observable.interval, but also specify that it needs to recalculate the next iteration based on the current drift?
0 1513972329849
1 1513972329901
2 1513972329952
3 1513972330004
4 1513972330057
5 1513972330110
Until @cartant's fix gets pulled in, you could use expand and create the behavior yourself. Assuming the delay will always drift forward, try the following:
function correcting_interval(interval) {
    const start_time = new Date().getTime();
    // Each expansion delays by the time remaining until the next multiple of
    // `interval`, so accumulated drift is absorbed on every tick.
    return Observable.of(-1)
        .expand(v => Observable.of(v + 1)
            .delay(interval - (new Date().getTime() - start_time) % interval))
        .skip(1);
}
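For illustration, it can be consumed just like the original loop; the expanded value itself serves as the frame counter (assuming the same RxJS 5 style imports as above):

let obs = correcting_interval(50).subscribe(frame => {
    console.log(frame + " " + new Date().getTime());
});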
I am using the Firebase Realtime Database, and over time a lot of stale data has accumulated in it, so I have written a script to delete the stale content.
My Node structure looks something like this:
store
  - {store_name}
    - products
      - {product_name}
        - data
          - {date} e.g. 01_Sep_2017
            - some_event
Scale of the data
#Stores: ~110K
#Products: ~25
Context
I want to clean up all the data that is more than 30 months old. I tried the following approach:
For each store, traverse all the products and, for each date, delete the node.
I ran ~30 threads/script instances, each responsible for deleting a particular date of data in that month. The whole script runs for ~12 hours to delete one month of data with the above structure.
I have placed a cap on the number of pending calls in each script, and it is evident from the logging that each script reaches the cap very quickly; the rate of firing delete calls is much faster than the rate of deletion, so here Firebase becomes the bottleneck.
It is pretty evident that I am running the purge script client-side; to gain performance, the script should be executed close to the data to save network round-trip time.
Questions
Q1. How can old Firebase nodes be deleted efficiently?
Q2. Is there a way we can set a TTL on each node so that it cleans up automatically?
Q3. I have confirmed on multiple nodes that the data has been deleted, but the Firebase console does not show any decrease in data size. I also took a backup of the data, and it still contains data that is not there when I check the nodes manually. I want to know the reason behind this inconsistency.
Does Firebase do soft deletions, so that when we take backups the data is actually still there but is not visible via the Firebase SDK or the console, because they can process soft deletes but backups cannot?
Q4. For the whole duration my script is running, I see a continuous rise in the bandwidth section of the console. With the script below I am only firing delete calls and not reading any data, yet I still see consistent database reads. Is this because of callbacks on the deleted nodes?
Code
var stores = [];
var storeIndex = 0;
var products = [];
var productIndex = -1;

const month = 'Oct';
const year = 2017;

if (process.argv.length < 4) {
    console.log("Usage: node purge.js $beginDate $endDate i.e. node purge 1 2 | Exiting..");
    process.exit();
}

var beginDate = parseInt(process.argv[2], 10);
var endDate = parseInt(process.argv[3], 10);

var numPendingCalls = 0;
const maxPendingCalls = 500;

/**
 * Url Pattern: /store/{domain}/products/{product_name}/data/{date}
 * date Pattern: 01_Jan_2017
 */
function deleteNode() {
    var storeName = stores[storeIndex],
        productName = products[productIndex],
        date = (beginDate < 10 ? '0' + beginDate : beginDate) + '_' + month + '_' + year;

    numPendingCalls++;
    db.ref('store')
        .child(storeName)
        .child('products')
        .child(productName)
        .child('data')
        .child(date)
        .remove(function() {
            numPendingCalls--;
        });
}

function deleteData() {
    productIndex++;
    // When all products for a particular store are complete, start on the next store for the given date
    if (productIndex === products.length) {
        if (storeIndex % 1000 === 0) {
            console.log('Script: ' + beginDate, 'PendingCalls: ' + numPendingCalls, 'StoreIndex: ' + storeIndex, 'Store: ' + stores[storeIndex], 'Time: ' + (new Date()).toString());
        }
        productIndex = 0;
        storeIndex++;
    }
    // When all stores have been completed, start deleting for the next date
    if (storeIndex === stores.length) {
        console.log('Script: ' + beginDate, 'Successfully deleted data for date: ' + beginDate + '_' + month + '_' + year + '. Time: ' + (new Date()).toString());
        beginDate++;
        storeIndex = 0;
    }
    // When endDate has been passed, all data has been deleted; exit
    if (beginDate > endDate) {
        console.log('Script: ' + beginDate, 'Deletion script finished successfully at: ' + (new Date()).toString());
        process.exit();
        return;
    }
    deleteNode();
}

function init() {
    console.log('Script: ' + beginDate, 'Deletion script started at: ' + (new Date()).toString());
    getStoreNames(function() {
        getProductNames(function() {
            // Fire deletes as fast as the pending-call cap allows
            setInterval(function() {
                if (numPendingCalls < maxPendingCalls) {
                    deleteData();
                }
            }, 0);
        });
    });
}
PS: This is not the exact structure I have, but it is very similar to what we have (I have changed the node names and tried to keep the example realistic).
Whether the deletes can be done more efficiently depends on how you now do them. Since you didn't share the minimal code that reproduces your current behavior it's hard to say how to improve it.
There is no support for a time-to-live property on nodes. Typically developers do the clean-up in an administrative program/script that runs periodically. The more frequently you run the cleanup script, the less work it has to do, and thus the faster it will be; a sketch of such a script follows the links below.
Also see:
Delete firebase data older than 2 hours
How to delete firebase data after "n" days
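For reference, a minimal sketch of such a periodic cleanup using a query-then-delete pattern; note that the timestamp child and the matching .indexOn rule are assumptions, not part of the structure in the question, and storeName/productName are placeholders:

// Hypothetical: assumes each {date} node stores a numeric `timestamp` child
// and the security rules define ".indexOn": "timestamp" for this path.
const THIRTY_MONTHS_MS = 30 * 30 * 24 * 60 * 60 * 1000; // approximate, 30-day months
const cutoff = Date.now() - THIRTY_MONTHS_MS;

const ref = db.ref('store/' + storeName + '/products/' + productName + '/data');
ref.orderByChild('timestamp').endAt(cutoff).once('value', function(snapshot) {
    const updates = {};
    snapshot.forEach(function(child) {
        updates[child.key] = null; // writing null deletes the child
    });
    // A single multi-path update removes all matched dates in one round trip
    ref.update(updates);
});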
Firebase actually deletes the data from disk when you tell it to. There is no way through the API to retrieve it, since it is really gone. But if you have a backup from a previous day, the data will of course still be there.
I have to generate a random number (1 or 2) in Lua, and change this value every 3 seconds.
I have a variable randomMode that has to change (to 1 or 2) every 3 seconds.
You could try making a kind of timer that changes the value. For example, the main program loop could change the variable every 3 seconds by using time stamps.
If you can't use a proper timer, maybe just checking time stamps since the last call is good enough. For example, this function re-randomizes the number on a call to GetRandomMode if more than 3 seconds have passed:
math.randomseed(os.time()) -- seed once so the sequence differs between runs

local lastChange = os.time()
local mode = math.random(1, 2)

function GetRandomMode()
    local now = os.time()
    if os.difftime(now, lastChange) > 3 then
        lastChange = now
        mode = math.random(1, 2)
    end
    return mode
end
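For example, a caller can simply poll it from its own loop; the sketch below assumes some host-provided sleep function, since stock Lua has none:

while true do
    local randomMode = GetRandomMode() -- holds its value for roughly 3 seconds
    print(randomMode)
    sleep(1) -- hypothetical; e.g. love.timer.sleep or a framework timer
end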
I am trying to read and write the samples from a video file at a specific start point and end point for trimming a video. I am using AVAssetReader and AVAssetWriter.
The logic used here is:
STEP A:
Create asset reader instance with the specified asset.
Set the time range on the reader as per the start and end points (say, for example: start point = 5, end point = 15, file length = 55 sec).
Start reading the samples.
Get the sample's exact time stamp with respect to the start point that we have passed in.
Store the time stamp of the sample that lines up with the start point (it could be 5.13 or so). Say ta = 5.13.
Release the reader.
STEP B:
Create a new reader instance with the specified asset.
Set the time range on the reader as per the start and end points (same example: start point = 5, end point = 15, file length = 55 sec).
Start reading the samples.
Create a new sample buffer with its sample timing info altered as:
(sample buffer's time stamp t1 - ta fetched from STEP A), so writing starts from 0
(sample buffer's time stamp t2 - ta fetched from STEP A)
(sample buffer's time stamp t3 - ta fetched from STEP A), and so on until the end point.
Release the reader.
The code sample for the same is:
STEP A:
while ([assetWriterInput isReadyForMoreMediaData])
{
    CMSampleBufferRef sampleBuffer = [assetReaderOutput copyNextSampleBuffer];
    if (sampleBuffer != NULL)
    {
        CMTime originalTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        float time = CMTimeGetSeconds(originalTime);

        // This is where we store the time stamp
        fTimeTange = time;
        [HelpMethods setCorrectEditTime:fTimeTange]; // This is stored globally

        // This is to release the readers and writers and start a fresh call
        // with the stored time stamp ta
        [delegate resetTimeRange];
        return;
    }
}
STEP B:
while ([assetWriterInput isReadyForMoreMediaData])
{
    CMSampleBufferRef sampleBuffer = [assetReaderOutput copyNextSampleBuffer];
    if (sampleBuffer != NULL)
    {
        CMSampleBufferRef finalBuffer = sampleBuffer;
        CMSampleBufferRef newSampleBuffer;
        CMSampleTimingInfo sampleTimingInfo;

        CMTime cmm1 = CMSampleBufferGetOutputDuration(sampleBuffer);
        CMTime originalTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        float time = CMTimeGetSeconds(originalTime);

        // This is a helper method to get the ta stored in STEP A
        fTimeTange = [HelpMethods getCorrectEditTime];

        sampleTimingInfo.duration = cmm1;
        // Offset from ta expressed in 1/600-second units (the 600 timescale),
        // not actually milliseconds
        float offsetUnits = (time - fTimeTange) * 600;
        NSLog(@"Timestamp in milliseconds = %f", offsetUnits);
        sampleTimingInfo.presentationTimeStamp = CMTimeMake(offsetUnits, 600);
        sampleTimingInfo.decodeTimeStamp = kCMTimeInvalid;

        CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                              sampleBuffer,
                                              1,
                                              &sampleTimingInfo,
                                              &newSampleBuffer);
        finalBuffer = newSampleBuffer;

        BOOL success = [assetWriterInput appendSampleBuffer:finalBuffer];
        CFRelease(sampleBuffer);
        CFRelease(newSampleBuffer); // release the retimed copy as well
        sampleBuffer = NULL;
    }
}
Since the time stamps of the samples are read in an out-of-order fashion, we end up getting an error saying:
"(kFigFormatWriterError_InvalidTimestamp) (decode timestamp is less than previous sample's decode timestamp)"
and the values of the time stamps are:
Timestamp in milliseconds = 0.000000
Timestamp in milliseconds = 79.999924
Timestamp in milliseconds = 39.999962
Timestamp in milliseconds = 119.999886
Timestamp in milliseconds = 200.000092
Timestamp in milliseconds = 160.000137
Timestamp in milliseconds = 280.000031
Timestamp in milliseconds = 240.000061
Timestamp in milliseconds = 319.999969
Timestamp in milliseconds = 399.999908
Timestamp in milliseconds = 359.999939
and so on
Any manipulation done to the presentation stamps results in out-of-order reading of the samples.
I am looking for a way to overcome this out-of-order reading of timestamps. Please let me know.
Thanks in advance,
Champa
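A minimal sketch of one alternative, assuming assetWriter is the AVAssetWriter that owns assetWriterInput: instead of rewriting each buffer's timing info, start the writer's session at ta, so that buffers can be appended with their original presentation and decode stamps and decode order is never violated.

// Sketch: the session start time acts as the writer's zero point.
[assetWriter startWriting];
[assetWriter startSessionAtSourceTime:CMTimeMakeWithSeconds(fTimeTange, 600)];

CMSampleBufferRef sampleBuffer;
while ([assetWriterInput isReadyForMoreMediaData] &&
       (sampleBuffer = [assetReaderOutput copyNextSampleBuffer]) != NULL)
{
    // Original timing is preserved; no CMSampleBufferCreateCopyWithNewTiming needed
    [assetWriterInput appendSampleBuffer:sampleBuffer];
    CFRelease(sampleBuffer);
}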