Sencha Touch storage sync - model-view-controller

I’m new to Sencha Touch and still not quite confident with its data handling patterns. The way I want to set up my application is something like this:
Retrieve the user’s data from the remote server via AJAX.
Save it in the local storage. Any modifications (editing, adding, deleting items) update the local data.
At some point in time (when the user clicks ‘sync’, when the user logs out, or something like that), the locally stored data is synced with the server, again through an AJAX request.
So what would the basic structure of my application be, to achieve this pattern? And also, while we are here, is there a way to use a local database (as opposed to local key-value storage) for a specified store in Sencha Touch?

First of all, Sencha.IO Sync provides the functionality you're looking for. It's still in beta, but it will probably do exactly what you need and you won't have to host the database yourself:
http://www.sencha.com/products/io
Personally, I've built apps that use the localstorage proxy to store data locally. It's super easy. Here are a couple of examples of using data storage:
http://www.sencha.com/learn/taking-sencha-touch-apps-offline/
http://data-that.blogspot.com/2011/01/local-storage-proxy-with-sencha-touch.html
http://davehiren.blogspot.com/2011/09/sencha-touch-working-with-models.html
http://www.sencha.com/learn/working-with-forms/
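To give a concrete sense of the basic structure, here is a minimal sketch of a model and store backed by the localstorage proxy (assuming Sencha Touch 1.x; the User model, its fields, and the storage id are made up for illustration):
// Register a model and give its store a localstorage proxy
Ext.regModel('User', {
    fields: ['id', 'name', 'email']
});
var userStore = new Ext.data.Store({
    model: 'User',
    proxy: {
        type: 'localstorage', // persists records in the browser's localStorage
        id: 'user-data'       // key prefix used for this store's records
    },
    autoLoad: true
});
// Local edits stay in memory until you persist them with sync()
userStore.add({ name: 'Jane', email: 'jane@example.com' });
userStore.sync();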
Later in the app I have an AJAX call that will take all of that local data and send it up to the server to generate some reports.
Once you have your stores and models set up correctly, it's easy to get the data back out of them. For example, I have a contactInfo store that only ever has one entry:
var myContactInfo = contactInfo.first().data;
I have another store called settings, which has many entries. I can easily retrieve them like this (though there may be a better way):
var settingsArr = [];
settings.each(function(record) {
    settingsArr.push(record.data);
});
I then can easily send this up to the server like so:
var data = {settings: settingsArr, contactInfo: myContactInfo};
Ext.Ajax.request({
    url: 'save.php',
    params: { json: Ext.encode(data) },
    success: function(response, opts) {
        // celebrate
    }
});
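Going the other direction (pulling the user's data down from the server into the local store) can be sketched the same way; load.php and the reuse of the settings store here are assumptions for illustration:
Ext.Ajax.request({
    url: 'load.php',
    success: function(response) {
        var records = Ext.decode(response.responseText);
        settings.removeAll(); // replace the local copy with the server's data
        settings.add(records);
        settings.sync();      // persist through the localstorage proxy
    }
});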
As with all things a good look at the examples and the API should help you once you have the basics figured out:
http://dev.sencha.com/deploy/touch/docs/?class=Ext.data.Store

Related

Is there a way to delay cache revalidation in service worker?

I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done so I started looking into more advanced stuff such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically makes different SQL queries to the database, hence the amount of calls). The data in the DB is not updated too often so we have a local cache for the responses, but it's obviously getting lost when a user refreshes a page. This is where I wanted to use the service worker - to keep the responses either in cache store or in IndexedDB (I went with the second option). And, of course, the cache-first approach does not fit here too well as there is still a chance that the data may become stale. So I tried to implement the stale-while-revalidate strategy: fetch the data once, then if the response for a given request is already in cache, return it, but make a real request and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook, but it seems like the app still waits for the real requests to resolve even when there is a cache entry to return (I can see those responses in the Network tab).
Basically the sequence seems to be the following: request > cache entry found! > need to update the cache > only then show the data. Doing the update immediately is unnecessary in my case so I was wondering if there is any way to delay that? Or, alternatively, not to wait for the "real" response to be resolved?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions I wrote to talk to IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  event.respondWith(
    serializeRequest(request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(request).then((networkResponse) => {
          cachePut(serializedRequest, response.clone(), db.post_cache);
          return networkResponse;
        });
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Can this be due to the fact that I put stuff into IndexedDB instead of cache? I am sort of forced to use IndexedDB instead of the cache because those "magic endpoints" are POST instead of GET (because of the fact they require the body) and POST cannot be inserted into the cache...
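For reference, the offline-cookbook version of stale-while-revalidate hands the revalidation to event.waitUntil so the response itself does not wait on it. Here is a sketch of that split reusing the question's own IndexedDB helpers (serializeRequest, cacheMatch, cachePut and db.post_cache come from the code above, and fetch(request) assumes the request body has not already been consumed):
self.addEventListener('fetch', (event) => {
  const request = event.request;
  event.respondWith(
    serializeRequest(request).then((serializedRequest) =>
      cacheMatch(serializedRequest, db.post_cache).then((cached) => {
        const revalidate = fetch(request).then((networkResponse) => {
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        // Keep the worker alive for the background update without
        // making the response wait for it.
        event.waitUntil(revalidate.catch(() => {}));
        // Answer from IndexedDB immediately when a cached entry exists.
        return cached || revalidate;
      })
    )
  );
});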

Dropzone.js - Multiple file upload without duplicated response

TLDR;
I managed to simplify my question after a good night's sleep. Here's the simpler question.
I want to upload N files to a server, which would process them together and return a single response (e.g. Total foobars in all files combined = XYZ).
What's the best way to send this single response back to the client?
Thanks.
&
Below is the old question, left behind as a lesson for me.
I'm using Dropzone.js to build D&D functionality into my app.
Please note: I know there are a couple of questions already that discuss multifile uploads. But they are different from my question. They talk about how to get a single callback call instead of multiple ones.
My issue is related to the situation where I drag and drop multiple files into the dropzone, but am seeing the single server response being duplicated multiple times. Here is my config:
Dropzone.options.inner = {
    init: function() {
        this.on("dragenter", function(e) {
            $('#inner').addClass('drag-over');
            //// TODO - find out WTF this isn't working (low priority)
        });
        this.on("completemultiple", function(file, resp) {
            //// TODO
        });
    },
    url: "php/...upload...php",
    timeout: 120000, // 2m
    uploadMultiple: true,
    autoProcessQueue: false,
    clickable: false,
};
//// ... Some other stuff
//// ...
$(document).ready(function() {
    $('#inner').click(function() {
        Dropzone.forElement('.dropzone').processQueue();
    });
});
In the beginning I intercepted the "complete" event, rather than "completemultiple". That resulted in its handler being invoked multiple separate times (once for each file), even though the server-side php was only being invoked once. Each invocation returned a duplicate copy of the same server-side message.
I didn't want that, so I changed it to "completemultiple", and now I can confirm that the handler only gets called once with an array of files, but the single server response is now buried within each file object returned - each has a duplicate copy of the exact same response.
It doesn't matter ultimately, because it is the same message after all. But the whole aesthetics of the thing now seems off, which indicates to me I'm doing something wrong - the response seems to describe two independent uploads, even though they were part of a single invocation of the server-side PHP. Why make the client "believe" there were two separate upload requests when the server-side script only has one opportunity to respond? (i.e. the PHP is not sending back different messages for each file - should it? And if so, what's the best way to do it?)
How can I make it so that if I have a scenario in which it's all-or-none, I get a single response back from the php script?
This is especially important to me because my server response will contain the status and some other data. The script does more than simply receiving the uploaded files (hence the longer timeout).
I thought maybe that's a sign that I should separate the uploading part from the processing part and trigger the processing once the upload is complete.
But that means that the server side upload script can't clean up after itself. It needs to persist data beyond its own life. Also it now needs to return a handle to this data back to the client, which would dispatch the server-side processor in a different ajax call passing it this handle - and the subsequent call needs to clean up the files left by the uploader after it is done processing them.
This seems the less elegant solution. Is this something I just need to get used to? Or is there a better way of accomplishing what I want?
Also, any other free tips and hints from the front-end gurus in my network will be gratefully accepted.
Thanks.
&
The following approach works. Until something better can be found.
Dropzone.options.inner = {
    // . . .
    init: function() {
        this.on("completemultiple", function(file) {
            var code = JSON.parse(file[0].xhr.response).code;
            var data = { "code": code };
            $.post('php/......php', data, function(res) {
                // TODO - surface the res back to the user
            });
        });
    },
};
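An alternative that avoids digging the shared payload out of file[0].xhr is the successmultiple event; this is a sketch, on the assumption that with uploadMultiple enabled Dropzone passes the handler the array of files plus the single server response:
Dropzone.options.inner = {
    url: "php/...upload...php",
    uploadMultiple: true,
    init: function() {
        // Fires once per multi-file request; resp should be the one
        // response the server sent for the whole batch.
        this.on("successmultiple", function(files, resp) {
            console.log(resp); // surface resp to the user here
        });
    }
};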
&

Redux Local Storage Workflow

I am using Redux and would like to store some state on local storage.
I only want to store the token that I receive from the server; there are other things in the store that I don't want to persist.
The workflow I found on Google grabs the value from local storage when building the initial store state, and then uses store.subscribe to update local storage at a regular interval.
That is valid if we are persisting the entire store. But in my case the token is only updated when the user logs out or a new user logs in, so store.subscribe seems like overkill.
I have also read that updating local storage in reducers is not the Redux way.
Currently, I am updating local storage in the action, before the reducer runs.
Is that the correct flow, or is there a better way?
The example you found was likely about serializing the entire state tree into localstorage with every state change, allowing users to close the tab without worrying about constantly saving since it will always be up to date in LocalStorage.
However, it's clear that this isn't what you are looking for, as you are looking to cache specific priority data in LocalStorage, not the entire state tree.
You are also correct about updating LocalStorage as part of a reducer being an anti-pattern, as all side-effects are supposed to be localized to action creators.
Thus you should be reading from and writing to LocalStorage in your action creators.
For instance, your action creator for retrieving a token could look something like:
const TOKEN_STORAGE_KEY = 'TOKEN';
export function fetchToken() {
  // Assuming you are using redux-thunk for async actions
  return dispatch => {
    const token = localStorage.getItem(TOKEN_STORAGE_KEY);
    if (token && isValidToken(token)) {
      return dispatch(tokenRetrieved(token));
    }
    return doSignIn().then(token => {
      localStorage.setItem(TOKEN_STORAGE_KEY, token);
      dispatch(tokenRetrieved(token));
    });
  };
}
export function tokenRetrieved(token) {
  return {
    type: 'token.retrieved',
    payload: token
  };
}
Then, somewhere early in your application boot, such as in your root component's componentWillMount lifecycle method, you dispatch the fetchToken action.
fetchToken takes care of both checking LocalStorage for a cached token and storing a new token there when one is retrieved.
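Dispatching it at boot could look roughly like this (a sketch assuming react-redux, redux-thunk, and that the root component sits under a Provider):
import React from 'react';
import { connect } from 'react-redux';
import { fetchToken } from './actions'; // the thunk sketched above
class Root extends React.Component {
  componentWillMount() {
    // Check LocalStorage (or trigger sign-in) as early as possible.
    this.props.fetchToken();
  }
  render() {
    return <div>{/* rest of the app */}</div>;
  }
}
export default connect(null, { fetchToken })(Root);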

Programmatically change database for heroku dataclips

We just upgraded our Heroku postgres database using the follower changeover method. We have over 50 dataclips attached to the old database, and now we need to move them over to the new database. However, doing them one by one will take a lot of time.
Is there a programmatic way to update the database a dataclip is attached to, perhaps with the CLI tools?
As of March 2016, once the old database has been deprovisioned you can reattach your dataclips to another database:
Go to https://dataclips.heroku.com/clips/recoverable. It will display your old database and a set of 'orphaned' dataclips and you can choose to transfer them to another database (in my case the promoted follower from the changeover).
Note that this only affects the dataclips that you created; it does not affect dataclips that one of your team members created and that you merely had access to. They will have to go through this process as well.
Official devcenter article: https://devcenter.heroku.com/articles/dataclips#dataclip-recovery
Thanks to Heroku CSRF measures, programmatically updating data clips is much more difficult than you might expect. You'll need to suck it up and start clicking buttons by hand, or beg their support team to do it for you, which is just as difficult.
There is no official support for programmatically moving the dataclips. That being said, you can script it out against their HTTP API.
The base URL is https://dataclips.heroku.com/api/v1/. There are three relevant endpoints:
clips /clips
resources (databases) /heroku_resources
move clip /clips/:slug/move
Find the slug of the clip you want to move, find the resource id of the new database, and make a post to the move clip endpoint:
POST /api/v1/clips/fjhwieufysdufnjqqueyuiewsr/move
Content-Type: application/json
{"heroku_resource_id":"resource123456789#heroku.com"}
I had over 300 dataclips to move. I used the following technique to update them all (essentially reverse engineering the dataclips API).
Open Chrome with Web Developer tools, Network tab.
Log into Heroku Dataclips
Observe the network call which returns all the dataclips, in JSON (https://dataclips.heroku.com/api/v1/clips). Take this response and extract out all dataclip slugs.
Update the database for one dataclip. Observe the network call which does this (https://dataclips.heroku.com/api/v1/clips/:slug/move). Right click, Copy as cURL. This is the easiest way to get all the correct parameters, since the API uses cookies for authentication.
Write a script that loops through each dataclip slug, and shells out to curl. In Ruby, this looks like:
slugs = <paste ids here>.split("\n")
slugs.each do |slug|
  command = %Q(curl -v 'https://dataclips.heroku.com/api/v1/clips/#{slug}/move' -H 'Cookie: ...' --data '{"heroku_resource_id":"resource1234567@heroku.com"}')
  puts command
  system(command)
end
You can contact Heroku support, and they will bulk transfer the dataclips to your new database for you.
Batch working on dataclips
I've finally found a way to work on my dataclips as a batch, using the JavaScript console and some scraping techniques. I needed it to retrieve every dataclip, but I guess it can be adapted for updates as follows:
// Go to the dataclip listing (https://data.heroku.com/dataclips).
// Then execute this script in your console.
// Be careful, this will focus a new window every 4 seconds, preventing
// you from working 4 seconds times the number of dataclips you have.
// Retrieve urls and titles
let dataclips = Array
  .from(document.querySelectorAll('.rt-td:first-child a'))
  .map(el => ({ url: el.href, title: el.innerText }))
/**
 * Allows waiting for a given timeout before execution.
 * @param {number} milliseconds
 */
const timeout = function(milliseconds) {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve()
    }, milliseconds);
  })
}
/**
 * Here are all the changes you want to apply to every single
 * dataclip.
 * @param {object} window
 */
const applyChanges = function(window) {
}
// With a fast connection, 4 seconds is OK. Dial it up if you
// see errors.
const expectedLoadTime = 4000 // ms
// This is the main loop, windows are opened one by one to ensure focus and a
// correct loading time.
for (const dataclip of dataclips) {
  // This opens another window from the script, having access to its DOM.
  // See https://github.com/buonomo/kazoo for a funnier example usage!
  // And don't be shy to star and share :D
  const externWindow = window.open(dataclip.url)
  // A hack to wait for loading, this could be improved for sure.
  await timeout(expectedLoadTime)
  applyChanges(externWindow)
  externWindow.close()
}
You'd still have to implement applyChanges yourself, which I concede is a bit tedious, and I don't have time to do it right now (if someone does, please share!). But at least it can be done on all of your dataclips in a single function.
For an example usage of this script, you can take a look at the gist I made to scrape every dataclip and its related errors.

Sending events from server to client(s) in Meteor

Is there a way to send events from the server to all or some clients without using collections?
I want to send events with some custom data to clients. While Meteor is very good at doing this with collections, in this case the added complexity and storage are not needed.
On the server there is no need for Mongo storage or local collections.
The client only needs to be alerted that it received an event from the server and act according to the data.
I know this is fairly easy with SockJS, but it's very difficult to access SockJS from the server.
Meteor.Error does something similar to this.
Note: the package below is now deprecated and does not work for Meteor versions >0.9.
You can use the following package, which was originally aimed at broadcasting messages from clients to the server and back to clients:
http://arunoda.github.io/meteor-streams/
No collection, no MongoDB behind it; usage is as follows (not tested):
stream = new Meteor.Stream('streamName'); // defined on both client and server side
if (Meteor.isClient) {
  stream.on("channelName", function(message) {
    console.log("message:" + message);
  });
}
if (Meteor.isServer) {
  setInterval(function() {
    stream.emit("channelName", 'This is my message!');
  }, 1000);
}
You should use Collections.
The "added complexity and storage" isn't a factor if all you do is create a collection, add a single property to it and update that.
Collections are just a shape for data communication between server and client, and they happen to build on mongo, which is really nice if you want to use them like a database. But at their most basic, they're just a way of saying "I want to store some information known as X", which hooks into the publish/subscribe architecture that you should want to take advantage of.
In the future, other databases will be exposed in addition to Mongo. I could see there being a smart package at some stage that strips Collections down to their most basic functionality like you're proposing. Maybe you could write it!
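A stripped-down sketch of that idea (the collection name, publication, and payload below are all made up for illustration):
// A collection used purely as a message channel.
var ServerEvents = new Meteor.Collection('serverEvents');
if (Meteor.isServer) {
  Meteor.publish('serverEvents', function() {
    return ServerEvents.find();
  });
  // "Emit" an event by inserting a document.
  Meteor.setInterval(function() {
    ServerEvents.insert({ type: 'ping', payload: { at: new Date() } });
  }, 5000);
}
if (Meteor.isClient) {
  Meteor.subscribe('serverEvents');
  // React to every event the server adds.
  ServerEvents.find().observe({
    added: function(doc) {
      console.log('event from server:', doc.type, doc.payload);
    }
  });
}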
I feel for @Rui; using a Collection just to send a message does feel cumbersome.
At the same time, once you have several such messages to send around, it is convenient to have a Collection named something like settings where you keep them.
The best package I have found is Streamy. It allows you to send to everybody, or to just one specific user:
https://github.com/YuukanOO/streamy
meteor add yuukan:streamy
Send message to everybody:
Streamy.broadcast('ddpEvent', { data: 'something happened for all' });
Listen for message on client:
// Attach a handler for a specific message
Streamy.on('ddpEvent', function(d, s) {
  console.log(d.data);
});
Send message to one user (by id)
var socket = Streamy.socketsForUsers(["nJyQvECmkBSXDZEN2"])._sockets[0]
Streamy.emit('ddpEvent', { data: 'something happened for you' }, socket);
