Redis in django-rest-framework in get method - caching

I am using Redis with Django REST framework and I'm having a problem in a GET method.
I save data for multiple users under different keys.
from django.core.cache import cache
from django.http import JsonResponse
from rest_framework.decorators import api_view

@api_view(['GET'])
def abc(request):
    key = request.META['HTTP_KEY']
    if cache.get(key) is None:
        print('create a cache and return data')
        cache.set(key, key, timeout=100)
        return JsonResponse({'data': cache.get(key)})
    else:
        print('return data from cache')
        return JsonResponse({'data': cache.get(key)})
The first time it creates the cache and returns the data, but when I hit the endpoint again with a different key it returns the same data and does not even execute the if/else (neither print statement runs). I think it is creating a URL-based cache. How can I solve this problem?
I hit it with key "a" the first time and it returns "a" and prints "create a cache and return data".
The next time I hit it with key "b", it returns the old data "a" and prints neither 'create a cache and return data' nor 'return data from cache'.

Use this:
from django.views.decorators.vary import vary_on_headers
@vary_on_headers('key')
Replace 'key' with the name of your header. This works by using the Vary header.
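For illustration, here is a minimal sketch of how that decorator could sit on the view from the question. It assumes the stale responses come from Django's whole-response caching (the per-site cache middleware or @cache_page) and that the client sends the header as "Key"; adjust the header name to whatever you actually send.
from django.http import JsonResponse
from django.views.decorators.vary import vary_on_headers
from rest_framework.decorators import api_view

@api_view(['GET'])
@vary_on_headers('Key')  # cached responses now vary on the "Key" request header
def abc(request):
    key = request.META['HTTP_KEY']
    # ... same cache.get()/cache.set() logic as in the question ...
    return JsonResponse({'data': key})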

Related

How do Apollo's paginated "read" and "merge" functions work?

I was reading through the docs to learn pagination approaches for Apollo. This is the simple example where they explain the paginated read function:
https://www.apollographql.com/docs/react/pagination/core-api#paginated-read-functions
Here is the relevant code snippet:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          read(existing, { args: { offset, limit }}) {
            // A read function should always return undefined if existing is
            // undefined. Returning undefined signals that the field is
            // missing from the cache, which instructs Apollo Client to
            // fetch its value from your GraphQL server.
            return existing && existing.slice(offset, offset + limit);
          },
          // The keyArgs list and merge function are the same as above.
          keyArgs: [],
          merge(existing, incoming, { args: { offset = 0 }}) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});
I have one major question around this snippet and more snippets from the docs that have the same "flaw" in my eyes, but I feel like I'm missing some piece.
Suppose I run a first query with offset=0 and limit=10. The server will return 10 results for this query, and Apollo will store them in the cache after running the merge function.
Afterwards, I run the query with offset=5 and limit=10. Based on the approach described in the docs and the above code snippet, my understanding is that I will get only the items from 5 through 10 instead of the items from 5 to 15, because Apollo will see that the existing variable is present in read (with existing holding the initial 10 items) and it will slice the available 5 items for me.
My question is: what am I missing? How will Apollo know to fetch new data from the server? How will new data arrive in the cache after the initial query? Keep in mind keyArgs is set to [], so the results will always be merged into a single item in the cache.
Apollo will not slice anything automatically. You have to define a merge function that keeps the data in the correct order in the cache. One approach would be to have an array with empty slots for data not yet fetched, and to place incoming data at their respective indexes. For instance, if you fetch items 30-40 out of a total of 100, your array would have 30 empty slots, then your items, then 60 empty slots. If you subsequently fetch items 70-80, those will be placed at their respective indexes, and so on.
Your read function is where the decision on whether a network request is necessary will be made. If you find all the data in existing, you return it and no request to the server is made. If any items are missing, you need to return undefined, which triggers a network request; your merge function is then called once data is fetched, and finally your read function runs again, only this time the data is in the cache and it can be returned.
This approach is for the cache-first fetch policy, which is the default.
The logic for returning undefined from your read function has to be implemented by you. There is no Apollo magic under the hood.
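A minimal sketch of that idea, assuming the cache-first policy and the same feed field and slot-placement merge as in the question (the limit default of 10 is only an illustrative assumption):
import { InMemoryCache } from '@apollo/client';

const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        feed: {
          keyArgs: [],
          read(existing, { args: { offset = 0, limit = 10 } }) {
            if (!existing) return undefined;
            const page = existing.slice(offset, offset + limit);
            // Any hole in the requested range means the page is incomplete,
            // so report a cache miss and let Apollo go to the network.
            for (let i = 0; i < limit; ++i) {
              if (page[i] === undefined) return undefined;
            }
            return page;
          },
          // Same slot-placement merge as in the question's snippet.
          merge(existing, incoming, { args: { offset = 0 } }) {
            const merged = existing ? existing.slice(0) : [];
            for (let i = 0; i < incoming.length; ++i) {
              merged[offset + i] = incoming[i];
            }
            return merged;
          },
        },
      },
    },
  },
});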
If you use the cache-and-network policy, then your read does not need to return undefined when data is missing, since a network request is made either way.

Dexie.js - table.delete(id) not working for per-row deletion

I'm just starting out with Dexie, and I seem to be coming unstuck.
I have a small database (fewer than 1000 rows), and I'm trying to delete each row one by one once I know that the row has been sent to a remote API.
I can successfully save to the table (which is defined by an ID and a column storing a serialised object).
Here's my code:
if (online) {
  //we query the db and send each event
  database.open()
  let allEvents = database.events.toCollection()
  let total = allEvents.count(function (count) {
    console.log(count + ' events in total')
    //a simple test to ensure we're seeing the right number of records
  })
  allEvents.each(function (thisEvent) {
    //push to remote API
    console.log('deleting ' + thisEvent.id)
    database.events.delete(thisEvent.id) //<= this doesn't seem to be working
  })
}
All of this works, with the exception of the final delete statement.
Any ideas on how I should fix this? The important thing for me is to delete on a per-row basis.
Thanks in advance!
I was experiencing the same problem, and the answer from Eugenia Pais wasn't working for me. After some tests, I saw that the trouble was the type of the variable: I was using a string, but a number is needed. This is how I solved it:
function removeRow (primaryKey) {
  primaryKey = parseInt(primaryKey);
  databaseName.tableName.where('primaryKey').equals(primaryKey).delete().then(function (deleteCount) {
    console.log("Deleted " + deleteCount + " rows");
  }).catch(function (error) {
    console.error("Error: " + error);
  });
}
So be aware that you are passing a number as the argument.
The correct way to delete each row is to select the specific row and delete it:
database.tableName.where(indexId).equals(indexValue).delete();
The data type of the key is not a problem; you can verify that in my example here: example
db.example.where('key').equals('one').delete();
Maybe you are trying to delete by a property that is not an index.
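Building on both answers, here is a small sketch (using the table and field names from the question, and assuming the rows were already pushed to the API) that collects the primary keys first and awaits each per-row delete, so a failed delete surfaces as a rejected promise instead of being silently dropped:
async function deleteSentEvents (database) {
  await database.open();
  // Collect the primary keys up front instead of deleting while iterating.
  const ids = await database.events.toCollection().primaryKeys();
  for (const id of ids) {
    // The key must have the same type as the stored key (e.g. a number, not "42").
    await database.events.delete(id);
  }
}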

How to append to dexie entry using a rolling buffer (to store large entries without allocating GBs of memory)

I was redirected here after emailing the author of Dexie (David Fahlander). This is my question:
Is there a way to append to an existing Dexie entry? I need to store things that are large in Dexie, but I'd like to be able to fill large entries with a rolling buffer rather than allocating one huge buffer and then doing a single store.
For example, I have a 2 GB file I want to store in Dexie. I want to store that file 32 KB at a time into the same store, without having to allocate 2 GB of memory in the browser. Is there a way to do that? The put method seems to only overwrite entries.
Thanks for putting your question here at stackoverflow :) This helps me build up an open knowledge base for everyone to access.
There's no way in IndexedDB to update an entry without also instantiating the whole entry. Dexie adds the update() and modify() methods, but they only emulate a way to alter certain properties; in the background, the entire document is always loaded into memory temporarily.
IndexedDB also has Blob support, but when a Blob is stored into IndexedDB, its entire content is cloned/copied into the database by specification.
So the best way to deal with this would be to dedicate a table to dynamic large content and add new entries to it.
For example, let's say you have the tables "files" and "fileChunks". You need to incrementally grow the "file", and each time you do that, you don't want to instantiate the entire file in memory. You could then add the file chunks as separate entries in the fileChunks table.
let db = new Dexie('filedb');
db.version(1).stores({
  files: '++id, name',
  fileChunks: '++id, fileId'
});

/** Returns a Promise with ID of the created file */
function createFile (name) {
  return db.files.add({name});
}

/** Appends contents to the file */
function appendFileContent (fileId, contentToAppend) {
  return db.fileChunks.add({fileId, chunk: contentToAppend});
}

/** Read entire file */
function readEntireFile (fileId) {
  return db.fileChunks.where('fileId').equals(fileId).toArray()
    .then(entries => {
      return entries.map(entry => entry.chunk)
        .join(''); // join = Assume chunks are strings
    });
}
Easy enough. If you want appendFileContent to be a rolling buffer (with a max size and erase old content), you could add truncate methods:
function deleteOldChunks (fileId, maxAllowedChunks) {
  return db.fileChunks.where('fileId').equals(fileId)
    .reverse() // Important, so that we delete old chunks
    .offset(maxAllowedChunks) // offset = skip the N newest chunks
    .delete(); // Deletes all records older than the N last records
}
You'd get other benefits as well, such as the ability to tail a stored file without loading its entire content into memory:
/** Tail a file. This function is only an example of how
 * dynamically the data is stored and how simple file
 * tailing would be to do. */
function tailFile (fileId, maxLines) {
  let result = [], numNewlines = 0;
  return db.fileChunks.where('fileId').equals(fileId)
    .reverse()
    .until(() => numNewlines >= maxLines)
    .each(entry => {
      result.unshift(entry.chunk);
      numNewlines += (entry.chunk.match(/\n/g) || []).length;
    })
    .then(() => {
      let lines = result.join('').split('\n')
        .slice(1); // First line may be cut off
      let overflowLines = lines.length - maxLines;
      return (overflowLines > 0 ?
        lines.slice(overflowLines) :
        lines).join('\n');
    });
}
The reason I know the chunks will come back in the correct order in readEntireFile() and tailFile() is that IndexedDB queries always return results primarily in the order of the queried column and secondarily in the order of the primary keys, which here are auto-incremented numbers.
This pattern could be used for other cases as well, such as logging. If the file is not string based, you would have to alter this sample a little; specifically, don't join the chunks with join('') or split them with split('\n').

Deleting an object without objectId, from REST

Using REST, I try to delete an object in a Parse.com database, but without directly pointing to the objectId.
Here is the code:
deleteFavoriteActivity: function (from, to) {
  var deleteObjects = "?where={\"fromUser\":\"" + from + "\", \"toPro\":\"" + to + "\"}";
  return $http.delete(favoritesActivityUrl + deleteObjects, parseCredentials);
}
As you can see, I try to delete the object based on a query on two fields: "fromUser" and "toPro".
This doesn't work and returns a bad request. I don't know whether it is even possible to delete an object based on a query. Is it possible? Or must I point to the objectId I want to delete?
The DELETE endpoint needs an objectId: first fetch the matching objects with a GET request query, take each object's objectId, then call DELETE with that id.
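A rough sketch of that two-step approach, reusing the question's favoritesActivityUrl and parseCredentials (assumed to point at the class endpoint and to carry the Parse REST headers) and assuming Angular's $q service is available to combine the delete promises:
deleteFavoriteActivity: function (from, to) {
  var where = encodeURIComponent(JSON.stringify({ fromUser: from, toPro: to }));
  // Step 1: query for the objects matching both fields.
  return $http.get(favoritesActivityUrl + '?where=' + where, parseCredentials)
    .then(function (response) {
      // Step 2: delete each result by its objectId.
      var deletions = response.data.results.map(function (obj) {
        return $http.delete(favoritesActivityUrl + '/' + obj.objectId, parseCredentials);
      });
      return $q.all(deletions);
    });
}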

How to cache MVC3 WebGrid results (so that a column-sort click doesn't re-query)?

Can somebody please tell me how I can cache my webgrid results so that when I sort by column it doesn't re-run my stored procedure query every time?
When clicking on a column link to sort, the stored proc (which is a little slow) that populates the table/grid is re-executed every time and hits the database. Any caching tips and tricks would be greatly appreciated.
Thx!
Well, inside the controller action that invokes the repository method which queries the database, you could check whether the cache already contains the results.
Here's a commonly used pattern:
public ActionResult Foo()
{
    // Try fetching the results from the cache
    var results = HttpContext.Cache["results"] as IEnumerable<MyViewModel>;
    if (results == null)
    {
        // the results were not found in the cache => invoke the expensive
        // operation to fetch them
        results = _repository.GetResults();

        // store the results into the cache so that on subsequent calls on this action
        // the expensive operation would not be called
        HttpContext.Cache["results"] = results;
    }

    // return the results to the view for displaying
    return View(results);
}
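As a follow-up, here is a hedged variant of the same pattern with an absolute expiration, so sorted-grid clicks reuse the cached rows for a while without serving stale data forever. The 10-minute window and the ToList() materialization are assumptions, not part of the original answer:
public ActionResult Foo()
{
    // Try fetching the results from the cache
    var results = HttpContext.Cache["results"] as IEnumerable<MyViewModel>;
    if (results == null)
    {
        // Materialize before caching so a lazy query isn't re-executed on every sort
        results = _repository.GetResults().ToList();
        HttpContext.Cache.Insert(
            "results",
            results,
            null,                                        // no cache dependency
            DateTime.UtcNow.AddMinutes(10),              // absolute expiration (assumed window)
            System.Web.Caching.Cache.NoSlidingExpiration);
    }
    return View(results);
}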
