How to purge old content in firebase realtime database - performance

I am using Firebase Realtime Database, and over time a lot of stale data has accumulated in it, so I have written a script to delete the stale content.
My node structure looks something like this:
store
  - {store_name}
    - products
      - {product_name}
        - data
          - {date} e.g. 01_Sep_2017
            - some_event
Scale of the data
#Stores: ~110K
#Products: ~25
Context
I want to clean up all data that is more than 30 months old. I tried the following approach:
For each store, traverse all the products and for each date, delete the node
I ran ~30 threads/script instances, and each thread is responsible for deleting a particular date of data in that month. The whole script runs for ~12 hours to delete one month of data with the above structure.
I have placed a limit/cap on the number of pending calls in each script, and it is evident from logging that each script reaches the limit very quickly: the speed of firing delete calls is much faster than the speed of deletion, so Firebase becomes the bottleneck.
It is pretty evident that I am running the purge script on the client side; to gain performance, the script should be executed close to the data to save network round-trip time.
Questions
Q1. How can I delete old Firebase nodes efficiently?
Q2. Is there a way to set a TTL on each node so that it cleans up automatically?
Q3. I have confirmed from multiple nodes that the data has been deleted, but the Firebase console is not showing a decrease in data. I also tried taking a backup of the data, and it still shows some data that is not there when I check the nodes manually. I want to know the reason behind this inconsistency.
Does Firebase make soft deletions, so that when we take backups the data is actually still there but not visible via the Firebase SDK or the Firebase console, because they can process soft deletes but backups don't?
Q4. For the whole duration my script is running, I see a continuous rise in the bandwidth section. With the script below I am only firing delete calls and I am not reading any data, yet I still see consistent database reads. Have a look at this screenshot.
Is this because of callbacks for the deleted nodes?
Code
var stores = [];
var storeIndex = 0;
var products = [];
var productIndex = -1;

const month = 'Oct';
const year = 2017;

// Both $beginDate and $endDate are required, so argv must have at least 4 entries
if (process.argv.length < 4) {
    console.log("Usage: node purge.js $beginDate $endDate i.e. node purge 1 2 | Exiting..");
    process.exit();
}

var beginDate = parseInt(process.argv[2], 10);
var endDate = parseInt(process.argv[3], 10);

var numPendingCalls = 0;
const maxPendingCalls = 500;

/**
 * Url Pattern: /store/{domain}/products/{product_name}/data/{date}
 * date Pattern: 01_Jan_2017
 */
function deleteNode() {
    var storeName = stores[storeIndex],
        productName = products[productIndex],
        date = (beginDate < 10 ? '0' + beginDate : beginDate) + '_' + month + '_' + year;

    numPendingCalls++;
    db.ref('store')
        .child(storeName)
        .child('products')
        .child(productName)
        .child('data')
        .child(date)
        .remove(function() {
            numPendingCalls--;
        });
}

function deleteData() {
    productIndex++;

    // When all products for a particular store are complete, start on the next store for the given date
    if (productIndex === products.length) {
        if (storeIndex % 1000 === 0) {
            console.log('Script: ' + beginDate, 'PendingCalls: ' + numPendingCalls, 'StoreIndex: ' + storeIndex, 'Store: ' + stores[storeIndex], 'Time: ' + (new Date()).toString());
        }
        productIndex = 0;
        storeIndex++;
    }

    // When all stores have been completed, start deleting for the next date
    if (storeIndex === stores.length) {
        console.log('Script: ' + beginDate, 'Successfully deleted data for date: ' + beginDate + '_' + month + '_' + year + '. Time: ' + (new Date()).toString());
        beginDate++;
        storeIndex = 0;
    }

    // When endDate is reached, all data has been deleted; exit
    if (beginDate > endDate) {
        console.log('Script: ' + beginDate, 'Deletion script finished successfully at: ' + (new Date()).toString());
        process.exit();
        return;
    }

    deleteNode();
}

function init() {
    console.log('Script: ' + beginDate, 'Deletion script started at: ' + (new Date()).toString());
    getStoreNames(function() {
        getProductNames(function() {
            setInterval(function() {
                if (numPendingCalls < maxPendingCalls) {
                    deleteData();
                }
            }, 0);
        });
    });
}
PS: This is not the exact structure I have, but it is very similar to what we have (I have changed the node names to make the example realistic).

Whether the deletes can be done more efficiently depends on how you do them now. Since you didn't share the minimal code that reproduces your current behavior, it's hard to say how to improve it.
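One pattern that often helps here (a sketch, not something stated in this answer) is to batch many leaf deletions into a single multi-path update: writing null at a path removes that node, so many deletes can share one round trip. The buildDeleteUpdates helper below is a hypothetical name; only the path layout comes from the question.

```javascript
// Sketch: collect many delete paths into one multi-path update object.
// Writing null at a path removes that node in a single write.
function buildDeleteUpdates(storeNames, productNames, date) {
  const updates = {};
  for (const store of storeNames) {
    for (const product of productNames) {
      updates['store/' + store + '/products/' + product + '/data/' + date] = null;
    }
  }
  return updates;
}

const updates = buildDeleteUpdates(['store_a', 'store_b'], ['p1', 'p2'], '01_Oct_2017');
// With the Admin SDK this would be applied as a single write:
//   admin.database().ref().update(updates);
console.log(Object.keys(updates).length); // 4
```

Batch size would still need tuning: one payload covering all ~110K stores would be far too large, so a real script would chunk the stores into groups and issue one update per chunk.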
There is no support for a time-to-live property on nodes. Typically developers do the clean-up in an administrative program/script that runs periodically. The more frequently you run the cleanup script, the less work it has to do, and thus the faster it will be.
Also see:
Delete firebase data older than 2 hours
How to delete firebase data after "n" days
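A periodic cleanup along those lines can be sketched as follows. The `ts` timestamp child used for ordering is an assumption (the question's schema encodes dates in key names instead), and only the selection logic is shown as runnable code.

```javascript
// Minimal sketch of a TTL-style sweep, assuming each node carries a
// numeric `ts` (epoch ms) child -- an assumption, not the question's schema.
function expiredKeys(nodes, cutoffMs) {
  return Object.keys(nodes).filter((key) => nodes[key].ts < cutoffMs);
}

// Against the Realtime Database the same idea would use an indexed query,
// roughly:
//   ref.orderByChild('ts').endAt(cutoffMs).limitToFirst(500).once('value')
// and then remove the matched children with one update of nulls.

const cutoff = Date.now() - 30 * 30 * 24 * 60 * 60 * 1000; // ~30 months in ms
const sample = { old: { ts: 0 }, fresh: { ts: Date.now() } };
console.log(expiredKeys(sample, cutoff)); // [ 'old' ]
```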
Firebase actually deletes the data from disk when you tell it to. There is no way to retrieve it through the API, since it is really gone. But if you have a backup from a previous day, the data will of course still be there.

Related

Google Fit API - International users sync issue

I'm using the Google Fit REST API.
Here is the data I'm retrieving from Google Fit using the API:
2021-03-21 29989 Steps
2021-03-20 12 Steps
Here is the data the user exported from Google:
3/22/2021 16,480 Steps
3/21/2021 13,521 Steps
In both cases, the steps add up to 30,001.
The dates are clearly off by one day because of the time zone. The daily counts are also off for the same reason, although they add up to the same total.
What general approach/strategy can I take to make the steps obtained from the API match those in Google Fit when I don't have a time zone?
My API currently loops through the database and syncs all user data, without distinguishing domestic from international users.
Here is the code snippet used to get steps:
//***** Get steps
case DATATYPE_STEP_COUNT_DELTA:
    if ($dataStreamId == 'derived:com.google.step_count.delta:com.google.android.gms:estimated_steps') {
        $listDatasets = $dataSets->get("me", $dataStreamId, $startTime . '000000000' . '-' . $endTime . '000000000');
        if ($debug == 1) PrintR($listDatasets, "DATATYPE_STEP_COUNT_DELTA");
        $step_count = 0;
        foreach ($listDatasets as $dataSet) {
            if ($dataSet['startTimeNanos']) {
                $sec = $dataSet['startTimeNanos'] / 1000000000;
                $activity_date = date('Y-m-d', $sec);
                $dataSetValues = $dataSet['value'];
                if ($dataSetValues && is_array($dataSetValues)) {
                    foreach ($dataSetValues as $dataSetValue) {
                        if (!isset($stepsArr[$studentencodedid][$activity_date])) $stepsArr[$studentencodedid][$activity_date] = 0;
                        $stepsArr[$studentencodedid][$activity_date] += $dataSetValue['intVal'];
                        $step_count += $dataSetValue['intVal'];
                    }
                }
            }
        }
    }
    break;
//***** End get steps
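One way to make the buckets line up is to derive the calendar day in the user's own time zone instead of the server's. A sketch (in JavaScript rather than the PHP above, and assuming an IANA zone name is available per user, which the question says it currently is not):

```javascript
// Bucket a Google Fit startTimeNanos value into a calendar day in a given
// time zone. Nanos arrive as strings; dropping the last 6 digits yields
// milliseconds without losing precision in a double.
function localDay(startTimeNanos, timeZone) {
  const ms = Number(String(startTimeNanos).slice(0, -6));
  return new Intl.DateTimeFormat('en-CA', {
    timeZone: timeZone,
    year: 'numeric', month: '2-digit', day: '2-digit',
  }).format(new Date(ms)); // the en-CA locale yields YYYY-MM-DD
}

// 2021-03-21T02:00:00Z falls on different local days in UTC and New York:
console.log(localDay('1616292000000000000', 'UTC'));              // 2021-03-21
console.log(localDay('1616292000000000000', 'America/New_York')); // 2021-03-20
```

If no per-user zone is stored, it would have to be collected (or inferred) before the daily totals can match what the user sees in the Google Fit app.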

Why isn't Laravel timing out?

I'm using Laravel 8.x on Windows 10 and am developing a CRUD app that uses Vuetify on a Vue component, trying to display data from a small array of objects - just 11 items with four fields each - in a v-data-table. Most of the functionality is working now and I'm just doing the last bits of the code.
Everything had been going fine for several days, but something went very wrong tonight. I changed a line or two of code, let npm run watch recompile the module I changed, and hit refresh in my browser to see if the changes worked better than the original code. The refresh happened an hour and a half ago and I'm STILL waiting for it to render something!
It shows no sign of ending and I can't find any error messages anywhere. Apparently something is seriously wrong even though I've only changed a couple of lines of code. I'm completely baffled. I've never touched the default timeout settings and, honestly, don't even know where they are.
I have had timeouts before; in fact I get two or three a day. For no reason I know of, the first refresh after I've been away from the computer for an hour or two tends to time out, while subsequent ones go very quickly and let me get back to coding. When I do get a timeout, the message says it happened after 60 seconds. Of course that is not elapsed time - elapsed time is more like 5 to 10 minutes - but surely my current process should have timed out long ago.
Why am I getting timeouts when I get them? Why am I NOT getting one despite my last refresh having been over 90 minutes ago now? I should mention that I've rebooted the entire computer and even that did not help.
How can I get Laravel working better?
Update
As per Maarten's suggestion, I've reverted my code. Well, not exactly reverted it but it solved the problem. I had this:
deleteRow() {
    this.deleteRowFlag = false;
    this.deleteMessage = "Task deleted";
    console.log(this.name + ".deleteRow() - deleteID: " + this.deleteID);
    console.log(this.name + ".deleteRow() - deleteDescription: " + this.deleteDescription);
    var index = -1;
    for (let todo of this.todos) {
        if (todo.id == deleteID) index = todo.index;
    }
    // const index = this.todos.indexOf(this.deleteDescription);
    console.log(this.name + ".deleteRow() - index: " + index);
    var deletedItemsArray = this.todos.splice(index, 1);
    console.log(this.name + ".deleteRow() - deletedItemsArray.length: " + deletedItemsArray.length);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].id: " + deletedItemsArray[0].id);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].description: " + deletedItemsArray[0].description);
    //FIX: Isn't actually deleting the row!
    this.showDeleteSnackbar = true;
},
I couldn't imagine any of that being a problem and the little for loop was the last thing I added so I simply commented it out like so:
deleteRow() {
    this.deleteRowFlag = false;
    this.deleteMessage = "Task deleted";
    console.log(this.name + ".deleteRow() - deleteID: " + this.deleteID);
    console.log(this.name + ".deleteRow() - deleteDescription: " + this.deleteDescription);
    // var index = -1;
    // for (let todo of this.todos) {
    //     if (todo.id == deleteID) index = todo.index;
    // }
    // const index = this.todos.indexOf(this.deleteDescription);
    console.log(this.name + ".deleteRow() - index: " + index);
    var deletedItemsArray = this.todos.splice(index, 1);
    console.log(this.name + ".deleteRow() - deletedItemsArray.length: " + deletedItemsArray.length);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].id: " + deletedItemsArray[0].id);
    console.log(this.name + ".deleteRow() - deletedItemsArray[0].description: " + deletedItemsArray[0].description);
    //FIX: Isn't actually deleting the row!
    this.showDeleteSnackbar = true;
},
I gave npm run watch a moment to compile the code, cloned the browser window, and refreshed. This time, the page rendered immediately. It looks as if I created an infinite loop and Laravel apparently failed to detect it and shut it down! I am astonished. I thought professional development software had been monitoring for that kind of thing at compile time and execution time for decades. I've been coding for multiple decades and I haven't seen a runaway loop since very early in my coding days.
I can see immediately that I need a this in front of deleteID in the loop. I'm surprised that the compiler didn't flag that! (Actually, I think the logic still wouldn't work with that change, so I'm going to take a few minutes to relearn how to manipulate arrays of objects in JavaScript.)
As for the network tab in the browser, there was nothing in it. There was nothing in most of the tabs. And there was nothing in the npm run watch terminal window after I'd written the bad code.
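For what it's worth, the lookup that loop was reaching for is usually written with findIndex. A sketch (the field names mirror the question's component, but this is untested against it):

```javascript
// Remove the todo whose id matches deleteID; returns the removed row,
// or null when no row matches (so a bad id no longer splices at index -1).
function deleteTodoById(todos, deleteID) {
  const index = todos.findIndex((todo) => todo.id === deleteID);
  if (index === -1) return null;
  return todos.splice(index, 1)[0]; // splice mutates todos in place
}

const todos = [{ id: 1, description: 'a' }, { id: 2, description: 'b' }];
const removed = deleteTodoById(todos, 2);
console.log(removed.description, todos.length); // b 1
```

Inside the Vue method this would read this.todos and this.deleteID, which also removes the bare deleteID reference the asker spotted.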

Why does TiDB performance drop 10 times when the updated field value is random?

I set up a TiDB, TiKV and PD cluster in order to benchmark them with the YCSB tool, connected via the MySQL driver.
The cluster consists of 5 instances each of TiDB, TiKV and PD.
Each node runs a single TiDB, TiKV and PD instance.
However, when I play around with the update statement in the YCSB code, I notice that if the value of the updated field is fixed and hardcoded, the total throughput is ~30K tps and the latency is ~30ms. If the updated field value is random, the total throughput is ~2K tps and the latency is around ~300ms.
The update statement creation code is as follows:
@Override
public String createUpdateStatement(StatementType updateType) {
    String[] fieldKeys = updateType.getFieldString().split(",");
    StringBuilder update = new StringBuilder("UPDATE ");
    update.append(updateType.getTableName());
    update.append(" SET ");
    for (int i = 0; i < fieldKeys.length; i++) {
        update.append(fieldKeys[i]);
        String randStr = RandomCharStr(); // 1) 3K tps with 300ms latency
        //String randStr = "Hardcode-Field-Value"; // 2) 20K tps with 20ms latency
        update.append(" = '" + randStr + "'");
        if (i < fieldKeys.length - 1) {
            update.append(", ");
        }
    }
    // update.append(fieldKey);
    update.append(" WHERE ");
    update.append(JdbcDBClient.PRIMARY_KEY);
    update.append(" = ?");
    return update.toString();
}
How do we account for this performance gap?
Is it due to the DistSQL query cache, as discussed in this post?
I managed to figure this out from this post (Same transaction returns different results when i ran multiply times) and this issue (https://github.com/pingcap/tidb/issues/7644).
It is because TiDB will not perform the transaction if the updated field is identical to its previous value. With a hardcoded value, every update after the first is therefore a no-op; a random value forces a real write each time.
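The effect can be illustrated with a toy write path (plain JavaScript, not TiDB code): when the engine compares the new value with the old one and skips identical writes, a hardcoded-value workload only ever pays for one real write per row.

```javascript
// Toy key-value store that skips writes when the value is unchanged,
// counting only the writes that actually hit storage.
function makeStore() {
  const data = new Map();
  let realWrites = 0;
  return {
    update(key, value) {
      if (data.get(key) === value) return false; // no-op: value unchanged
      data.set(key, value);
      realWrites++;
      return true;
    },
    writes() { return realWrites; },
  };
}

const store = makeStore();
for (let i = 0; i < 1000; i++) store.update('row1', 'Hardcode-Field-Value');
console.log(store.writes()); // 1 -- every update after the first was skipped
```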

How can I export an HBase table using starttime/endtime?

I am trying to perform an incremental backup. I have already checked the Export option but couldn't figure out the start-time option. Also, please advise on CopyTable: how can I restore from it?
Using CopyTable you just get a copy of the given table on the same or another cluster (it actually runs a CopyTable MapReduce job). No miracle.
It's your own decision how to restore. The obvious options are:
Use the same tool to copy the table back.
Just get/put the selected rows (which I think is what you need here). Please note that you should keep the timestamps while putting the data back.
Actually, for incremental backup it's enough to write a job which scans the table and gets/puts rows with the given timestamps into a table whose name is calculated from the date. Restore should work in the reverse direction: read the table with the calculated name and put its records back with the same timestamps.
I'd also recommend the following technique: table snapshots (CDH 4.2.1 uses HBase 0.94.2). It does not look applicable to incremental backup, but maybe you'll find something useful there, such as an additional API. From a backup point of view it looks nice.
Hope this helps somehow.
The source code suggests
int versions = args.length > 2? Integer.parseInt(args[2]): 1;
long startTime = args.length > 3? Long.parseLong(args[3]): 0L;
long endTime = args.length > 4? Long.parseLong(args[4]): Long.MAX_VALUE;
The accepted answer doesn't pass versions as a parameter. How did it work then?
hbase org.apache.hadoop.hbase.mapreduce.Export test /bkp_destination/test 1369060183200 1369063567260023219
From the source code this boils down to:
1369060183200 - args[2] - versions
1369063567260023219 - args[3] - starttime
Attaching the source for reference:
private static Scan getConfiguredScanForJob(Configuration conf, String[] args) throws IOException {
    Scan s = new Scan();
    // Optional arguments.
    // Set scan versions
    int versions = args.length > 2 ? Integer.parseInt(args[2]) : 1;
    s.setMaxVersions(versions);
    // Set scan range
    long startTime = args.length > 3 ? Long.parseLong(args[3]) : 0L;
    long endTime = args.length > 4 ? Long.parseLong(args[4]) : Long.MAX_VALUE;
    s.setTimeRange(startTime, endTime);
    // Set cache blocks
    s.setCacheBlocks(false);
    // Set start and stop row
    if (conf.get(TableInputFormat.SCAN_ROW_START) != null) {
        s.setStartRow(Bytes.toBytesBinary(conf.get(TableInputFormat.SCAN_ROW_START)));
    }
    if (conf.get(TableInputFormat.SCAN_ROW_STOP) != null) {
        s.setStopRow(Bytes.toBytesBinary(conf.get(TableInputFormat.SCAN_ROW_STOP)));
    }
    // Set raw scan
    boolean raw = Boolean.parseBoolean(conf.get(RAW_SCAN));
    if (raw) {
        s.setRaw(raw);
    }
    // Set scan column family
    if (conf.get(TableInputFormat.SCAN_COLUMN_FAMILY) != null) {
        s.addFamily(Bytes.toBytes(conf.get(TableInputFormat.SCAN_COLUMN_FAMILY)));
    }
    // Set RowFilter or Prefix Filter if applicable.
    Filter exportFilter = getExportFilter(args);
    if (exportFilter != null) {
        LOG.info("Setting Scan Filter for Export.");
        s.setFilter(exportFilter);
    }
    int batching = conf.getInt(EXPORT_BATCHING, -1);
    if (batching != -1) {
        try {
            s.setBatch(batching);
        } catch (IncompatibleFilterException e) {
            LOG.error("Batching could not be set", e);
        }
    }
    LOG.info("versions=" + versions + ", starttime=" + startTime +
        ", endtime=" + endTime + ", keepDeletedCells=" + raw);
    return s;
}
I found the issue here; the HBase documentation says
hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]
so after trying a few combinations, I found that it translates to a real example like the command below
hbase org.apache.hadoop.hbase.mapreduce.Export test /bkp_destination/test 1369060183200 1369063567260023219
where
test is the table name,
/bkp_destination/test is the backup destination folder,
1369060183200 is the start time,
1369063567260023219 is the end time

Entity Framework SaveChanges() first call is very slow

I appreciate that this issue has been raised a couple of times before, but I can't find a definitive answer (maybe there isn't one!).
Anyway, the title tells it all really. Create a new context and add a new entity: SaveChanges() takes 20 seconds. Add a second entity in the same context: SaveChanges() is instant.
Any thoughts on this? :-)
============ UPDATE =============
I've created a very simple app running against my existing model to show the issue...
public void Go()
{
    ModelContainer context = new ModelContainer(DbHelper.GenerateConnectionString());
    for (int i = 1; i <= 5; i++)
    {
        DateTime start = DateTime.Now;
        Order order = context.Orders.Single(c => c.Reference == "AA05056");
        DateTime end = DateTime.Now;
        double millisecs = (end - start).TotalMilliseconds;
        Console.WriteLine("Query " + i + " = " + millisecs + "ms (" + millisecs / 1000 + "s)");

        start = DateTime.Now;
        order.Note = start.ToLongTimeString();
        context.SaveChanges();
        end = DateTime.Now;
        millisecs = (end - start).TotalMilliseconds;
        Console.WriteLine("SaveChanges " + i + " = " + millisecs + "ms (" + millisecs / 1000 + "s)");

        Thread.Sleep(1000);
    }
    Console.ReadKey();
}
Please do not comment on my code - unless it is an invalid test ;)
The results are:
Query 1 = 3999.2288ms (3.9992288s)
SaveChanges 1 = 3391.194ms (3.391194s)
Query 2 = 18.001ms (0.018001s)
SaveChanges 2 = 4.0002ms (0.0040002s)
Query 3 = 14.0008ms (0.0140008s)
SaveChanges 3 = 3.0002ms (0.0030002s)
Query 4 = 13.0008ms (0.0130008s)
SaveChanges 4 = 3.0002ms (0.0030002s)
Query 5 = 10.0005ms (0.0100005s)
SaveChanges 5 = 3.0002ms (0.0030002s)
The first query takes time, which I assume is view generation or establishing the database connection?
The first save takes nearly 4 seconds; the more complex save in my app takes over 20 seconds, which is not acceptable.
Not sure where to go with this now :-(
UPDATE...
SQL Profiler shows that the first query and update are fast on the database side and no different from subsequent ones, so I know the delay is in Entity Framework, as I suspected.
It might not be the SaveChanges call - the first time you make any call to the database in EF, it has to do some initial code generation from the metadata. You can pre-generate this at compile time, though: http://msdn.microsoft.com/en-us/library/bb896240.aspx
I would be surprised if that's the only problem, but it might help.
Also have a look here: http://msdn.microsoft.com/en-us/library/cc853327.aspx
I would run the following code at app start-up, see how long it takes, and check whether the first SaveChanges is fast after that.
public static void UpdateDatabase()
{
    //Note: Using SetInitializer is recommended by Ladislav Mrnka (reputation 275k)
    //http://stackoverflow.com/questions/9281423/entity-framework-4-3-run-migrations-at-application-start
    Database.SetInitializer<DAL.MyDbContext>(
        new MigrateDatabaseToLatestVersion<DAL.MyDbContext,
            Migrations.MyDbContext.Configuration>());

    using (var db = new DAL.MyDbContext()) {
        db.Database.Initialize(false); // Execute the migrations now, not on first access
    }
}
