It seems that Ember-Data is very well suited to managing changes originating from the client, and I've handled changes coming in in real time via websockets with store.load(type, id, data), but I can't figure out how to tell the store that an object has been removed on the server and thus needs to be removed locally as well.
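For context, here is roughly what my update path looks like - a minimal sketch only, where the socket wrapper, event names, and payload shape are all hypothetical:

    // Hypothetical websocket handler: creates/updates are easy to apply
    // with store.load, but there is no obvious counterpart for deletes.
    socket.on('message', function (msg) {
      var payload = JSON.parse(msg);
      if (payload.event === 'updated') {
        store.load(App.Post, payload.id, payload.data); // works fine
      } else if (payload.event === 'deleted') {
        // ??? - this is the part I can't figure out
      }
    });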
Does anyone know how to do that?
Thanks!
I'm working on a mobile client. The dev backend server I'm working with isn't stable at all; it may be unusable for a full working day. The prod server is a bit better, but sometimes it doesn't work either, and it's much more difficult to use during development. Besides that, it's completely wrong to work like that: these servers were built for the web, not for mobile. There's another strange and annoying thing that distracts me from my primary work - the token lifetime is only 60 seconds. That means if the app doesn't refresh the token within that period, the token dies, and the next time you run the app you need to authorize from scratch, which takes ages. Maybe I just don't understand how it works, but as far as I can see, the web site just spams the server every minute.
I was thinking about how to fix this problem and started mocking responses manually, but that is very annoying and time-consuming too. The other idea is to use some kind of proxy/cache server that forwards requests to the original server and, if it fails, returns cached data. It seems that this may help in my situation. I'm not sure whether such a proxy/cache server would be able to eliminate the token problem - basically, I need to refresh the token as soon as the first one has been received. But who knows? Maybe I'm lucky?
So the question: is there a simple-to-use caching proxy server that I can run locally to achieve what I want?
The other option is to write such a proxy server myself. I have no experience writing servers at all, but as a last resort I could try. The benefit of writing the proxy server myself is that I should be able to "fix" the token problem for sure. But I don't want to reinvent the wheel.
So any help and thoughts are appreciated.
Not entirely sure if this will solve your problem but let's give it a shot.
I myself have been programming against a rate-limited API. During development I often max out the allowed requests and have to wait before I can continue. I have developed a small caching proxy server that sits between your client and the server. It intercepts requests and puts both the request and the response in its cache. Whenever it intercepts a request that it has already seen, it responds from the cache without forwarding the request to the target server.
I'm not sure what your requests look like. The proxy that I built currently retrieves cache entries based on URL and HTTP method, so that may or may not be what you need.
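If you just want to see the idea, a minimal sketch in Node looks something like this (it uses Node 18's built-in fetch, the target address is hypothetical, and a real implementation would also forward headers and request bodies):

    // Minimal caching proxy: responses are cached by HTTP method + URL,
    // and repeated requests are answered without touching the backend.
    const http = require('http');

    const TARGET = 'http://localhost:8080'; // hypothetical backend
    const cache = new Map();                // "METHOD URL" -> response body

    http.createServer(async (req, res) => {
      const key = req.method + ' ' + req.url;
      if (cache.has(key)) {
        res.end(cache.get(key)); // already seen: serve from cache
        return;
      }
      try {
        const upstream = await fetch(TARGET + req.url, { method: req.method });
        const body = await upstream.text();
        cache.set(key, body); // remember the response for next time
        res.end(body);
      } catch (err) {
        res.statusCode = 502; // backend unreachable and nothing cached yet
        res.end();
      }
    }).listen(3000);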
Here's the link to the GitHub repository: https://github.com/RobinvandenHurk/cache-proxy
Disclaimer: in case it wasn't clear, I am the author of this proxy.
I have a use case where I have to update a class in the local storage with the changes that have been made on my Parse server. I have deleted some entries on my Parse server and want those to be deleted in the local storage of the app on the user's device. What is the best way to handle this? For now, I:
1. Unpin all the objects for that class from my local storage.
2. Try to fetch the data from my Parse server and pin them to the local storage.
Is there a better way to do this?
Parse's pin-to-local-datastore is not designed as a framework for syncing data between device and server, but rather as a way to speed up your app by providing a local version of your data, and to avoid your app becoming unusable if the device is temporarily without a data connection. Therefore, there are no streamlined ways of syncing your data between the device and the backend.
You can go about this in a couple of ways. For most situations, I would say that just unpinning and re-fetching is the way to go. In almost all other scenarios, you end up creating your own syncing service, which can quickly become quite complex.
You can, of course, keep track of all objects that have been removed or changed since the last sync, and then only unpin/re-fetch those, but this gets very hard to handle for multiple users. By far the easiest way is to unpin everything and fetch it all again from the server. If this means fetching a lot of objects, you might want to rethink your logic and maybe not keep that many locally pinned objects.
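For the unpin-all/re-fetch approach, a sketch along these lines is usually enough - this assumes the Parse JavaScript SDK with the local datastore enabled (method names differ slightly per platform), and the class name is hypothetical:

    // Refresh the locally pinned copy of one class: drop the old pins,
    // then re-fetch from the server and pin the fresh objects.
    const Parse = require('parse/node'); // Parse.initialize(...) omitted
    Parse.enableLocalDatastore();

    async function refreshLocalClass(className) {
      await Parse.Object.unPinAllObjectsWithName(className);
      const fresh = await new Parse.Query(className).find();
      await Parse.Object.pinAllWithName(className, fresh);
      return fresh;
    }

    refreshLocalClass('GameScore');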
I don't mean using Coherence. I am looking for a way to avoid hitting my application to look something up that I've already looked up. When the client performs a GET on a resource, I want it to hit the application the first time only and after that return a cached copy.
I think I can do this with Apache and mod_mem_cache, but I was hoping there was a WebLogic built-in solution that I'm just not able to find.
Thanks.
I don't believe there are built-in features to do that across the entire app server, but if you want to do it programmatically, perhaps CacheFilter might work.
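If CacheFilter fits, it gets wired up as a servlet filter in web.xml - a sketch, assuming WebLogic's weblogic.cache.filter.CacheFilter; check the WebLogic docs for the full list of init-params and their exact names:

    <filter>
      <filter-name>ResponseCache</filter-name>
      <filter-class>weblogic.cache.filter.CacheFilter</filter-class>
      <init-param>
        <!-- how long a cached response stays valid -->
        <param-name>timeout</param-name>
        <param-value>60s</param-value>
      </init-param>
    </filter>
    <filter-mapping>
      <filter-name>ResponseCache</filter-name>
      <url-pattern>/resources/*</url-pattern>
    </filter-mapping>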
I'm new to CouchDB and am trying to understand how to make proper use of it. I'm coming from MongoDB, where I would always write a web layer and put it in front of Mongo so that users could access the data inside it; in fact, this is how I've used every database for every web site I've ever written. Looking at Couch, I see that its native API is HTTP and that it has built-in things like OAuth support and other features that hint that perhaps I should no longer have my code layer sitting in front of Couch, but instead write views and such and just give my users accounts on Couch directly. I'm thinking in terms of an HTTP-based API for a site of mine, or something through which users would consume my data. Opening up Couch like this seems odd to me, though. Is OAuth, in Couch's sense, meant more for remote access by software that I'd write and run "officially" inside my own network, or is it literally meant for end users?
I know there might be things that could only be done through a code layer on top of CouchDB - for example, if you wanted additional, non-database-related things to happen during API requests. Thinking along those lines, I suspect I will still need a code layer anyway.
Dealer's choice.
Nodejitsu has a great writeup on this sort of topic here.
Not knowing your application specifics I'll take a broad approach...
Back-end
If you want to prevent users from ever seeing your database, then keep it on the back-end. You can pipe everything through something like Node.js and present only what the user needs to see, and they'll never know anything about the database.
See Resource View Presenter
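For illustration, a minimal sketch of that back-end approach - here using Express, Node 18's built-in fetch, and CouchDB's plain HTTP API, with a hypothetical database, design document, and field list:

    // The app server owns the Couch connection; clients only ever see
    // the trimmed-down JSON this route chooses to present.
    const express = require('express');
    const app = express();

    app.get('/posts', async (req, res) => {
      const couch = await fetch(
        'http://localhost:5984/myapp/_design/posts/_view/published'
      );
      const { rows } = await couch.json();
      // Present only what the user needs to see.
      res.json(rows.map((r) => ({ id: r.id, title: r.value.title })));
    });

    app.listen(3000);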
Front-end
If you are not concerned about data security, you can host an entire app on CouchDB; see CouchApp. This approach has the benefit of using the replication mechanism to control publishing your site/data. The drawback here is that you will almost certainly run into some technical limitations that will require moving CouchDB closer to the backend.
Bl-end
Have the app server present the interface and the client pull the data from the database separately. This gives the most flexibility but can be a bag of hurt because even with good design this could lead to supportability and scalability issues.
My recommendation
Use CouchDB on the backend. If you need mobile clients to synchronize then use a secondary DB publicly exposed for this purpose and selectively sync this data to wherever it needs to go.
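Publishing to that secondary database can lean on Couch's built-in replication - a sketch against the _replicate endpoint, where the database names and the filter function are hypothetical:

    // One-shot filtered replication: copy only the documents that the
    // design doc's filter function marks as public.
    fetch('http://localhost:5984/_replicate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        source: 'internal_db',
        target: 'public_db',
        filter: 'publish/public_only'
      })
    }).then((res) => res.json())
      .then((status) => console.log(status));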
Simply put, no.
There's no way to secure Couch properly on a public-facing site. There's no way to discriminate access at a fine enough level of granularity: if someone has access to any of the data, they have access to all of the data.
Save for the most trivial of sites, not all of a site's data is meant for public consumption.
I'm not sure which technology I should be using, or even what the thing I'm trying to do is called, so I was hoping to get some guidance on the issue.
We have a client/server architecture, and from the client side you should be able to send a command to the server either by going Browser -> Client -> Server, or directly Browser -> Server.
My question is, what should I be looking into to help me accomplish this task? I believe if I were to use a Chrome extension, it would have to use NPAPI to interact locally with my PC, which is less than recommended ;)
The solution only needs to work on Windows, and will not be accessing any of the local users files.
Thanks for your help!
Within Chrome extensions, you are allowed to access external resources if and only if you explicitly define the permissions (URL patterns) in the manifest file.
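For example, a manifest excerpt granting access to one server (manifest v2 style; the host URL here is hypothetical):

    {
      "name": "My Extension",
      "version": "1.0",
      "manifest_version": 2,
      "background": { "scripts": ["background.js"] },
      "permissions": ["https://myserver.example.com/*"]
    }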
Depending on the needs of your application, you could use a RESTful server approach or a WebSockets approach. Once you finish developing your server, your extension can communicate with it using existing web technologies (XMLHttpRequest, WebSocket).
Assuming you're going to use REST, what I would do is create a JavaScript service class/library that communicates with your backend (server) using XHR, and include that in your background page within the extension. Then you can use extension message passing to communicate with your service class.
Think of it this way: the scripts defined in the background context of your extension live between your extension and your server, acting like a facade. Search GitHub/Stack Overflow if you have questions about how - there are many useful posts/projects.
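As a concrete sketch of that facade, assuming the RESTful route and reusing the hypothetical endpoint from the manifest excerpt above:

    // background.js - talks to the server over XHR and exposes the
    // result to the rest of the extension via message passing.
    function sendCommand(command, callback) {
      var xhr = new XMLHttpRequest();
      xhr.open('POST', 'https://myserver.example.com/commands');
      xhr.setRequestHeader('Content-Type', 'application/json');
      xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
      xhr.send(JSON.stringify({ command: command }));
    }

    chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
      if (msg.type === 'command') {
        sendCommand(msg.command, sendResponse);
        return true; // keep the message channel open for the async reply
      }
    });

A popup or content script then never touches the network directly; it just sends a message:

    chrome.runtime.sendMessage({ type: 'command', command: 'restart' },
      function (reply) { console.log(reply); });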