How can I delete old objects in parse.com - parse-platform

Is there any way to delete old objects when using the parse.com database?
For example, if a user saves their position,
is there any way to delete that position after some period of time once they disconnect?

Adding to what @Phil said, you can run some other code (not in your app), such as a standalone script that deletes whatever you need.
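To make that concrete, here is a minimal sketch of such a standalone script using Parse's classic REST API. The class name "Position", the 24-hour cutoff, and the credentials are placeholders for whatever your app actually uses; you could schedule it with cron.

```python
# Standalone cleanup script: deletes "Position" objects older than 24 hours
# via the classic Parse REST API. Class name, cutoff, and credentials are
# placeholders -- adjust them to your app.
import datetime
import json
import requests

APP_ID = "YOUR_APP_ID"            # from your Parse app settings (placeholder)
REST_KEY = "YOUR_REST_API_KEY"    # placeholder
BASE = "https://api.parse.com/1/classes/Position"
HEADERS = {"X-Parse-Application-Id": APP_ID, "X-Parse-REST-API-Key": REST_KEY}

def delete_old_positions(max_age_hours=24):
    cutoff = datetime.datetime.utcnow() - datetime.timedelta(hours=max_age_hours)
    where = {"createdAt": {"$lt": {"__type": "Date",
                                   "iso": cutoff.strftime("%Y-%m-%dT%H:%M:%S.000Z")}}}
    # Fetch up to 1000 matching objects (Parse's per-request maximum).
    resp = requests.get(BASE, headers=HEADERS,
                        params={"where": json.dumps(where), "limit": 1000})
    resp.raise_for_status()
    # Delete each matching object by its objectId.
    for obj in resp.json().get("results", []):
        requests.delete("%s/%s" % (BASE, obj["objectId"]), headers=HEADERS)

if __name__ == "__main__":
    delete_old_positions()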

While this question is old, Parse has since implemented a newer feature: Cloud Code. I recommend you take a look at the documentation on background jobs within Cloud Code. You could run a background job that "cleans up" your old data.
Here is the link:
https://parse.com/docs/cloud_code_guide#jobs

For which platform? iOS? If you're using the iOS SDK, you could check how much time has passed when the user logs back in, and then delete their position from whatever PFObject holds it. Beyond that, this could perhaps be done with Cloud Code.

Related

Guide/tutorial on how to persistently save data on Spring Boot with JPA repositories

I have googled a lot about how to save data persistently (meaning it is still there after I shut down / restart my application).
But every guide out there is "Hey, we'll show you how to use Spring Boot with JPA real quick, using the in-memory H2 database."
I am looking for a guide that shows the setup needed to use a database that stores the data somewhere it can be retrieved at any later point in time, even after the application has been shut down.
If any of you can provide a link to something like this, I would be very grateful!
I think Google can give you a lot of examples for this; I am not sure what your query was.
Here are some examples:
https://www.mkyong.com/spring-boot/spring-boot-spring-data-jpa-mysql-example/
https://www.callicoder.com/spring-boot-rest-api-tutorial-with-mysql-jpa-hibernate/
https://spring.io/guides/gs/accessing-data-mysql/
HTH

arrowDB - is there a way to move development data to production?

I've built an app using ArrowDB for the backend. Is there a simple way to duplicate development data to production?
It seems like an oversight not to be able to do this; we have an app going through the review process and have just realised all our test data won't be accessible.
As far as I know, there is no feature like this right now.
You could probably build your own using their REST API. I haven't seen a solution like this built yet, but I definitely think it is possible. If I get some free time, I will try to put one together and post a link here.
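For what it's worth, here is a rough sketch of what such a copy script might look like against the ArrowDB REST API. The base URL, endpoint paths, "key" parameter, response shape, and class name are all assumptions/placeholders; double-check them against the ArrowDB REST documentation before relying on this.

```python
# Hypothetical sketch: copy custom objects from a development ArrowDB app to
# a production one via the REST API. Endpoints, params, and response shape
# are assumptions -- verify against the ArrowDB docs.
import json
import requests

DEV_KEY = "DEV_APP_KEY"        # placeholder app keys
PROD_KEY = "PROD_APP_KEY"
BASE = "https://api.cloud.appcelerator.com/v1/objects"  # assumed base URL
CLASS_NAME = "reviews"         # whichever custom object class you need to copy

def copy_objects():
    page = 1
    while True:
        # Page through the development data (query endpoint is assumed).
        resp = requests.get("%s/%s/query.json" % (BASE, CLASS_NAME),
                            params={"key": DEV_KEY, "page": page, "per_page": 100})
        resp.raise_for_status()
        objects = resp.json().get("response", {}).get(CLASS_NAME, [])
        if not objects:
            break
        for obj in objects:
            # Drop server-managed fields and re-create the object in production.
            fields = {k: v for k, v in obj.items()
                      if k not in ("id", "created_at", "updated_at", "user_id")}
            requests.post("%s/%s/create.json" % (BASE, CLASS_NAME),
                          params={"key": PROD_KEY},
                          data={"fields": json.dumps(fields)})
        page += 1

if __name__ == "__main__":
    copy_objects()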

Any way to use MvcMiniProfiler on a Windows application? Or is there a sister application?

So I've started using MvcMiniProfiler on our websites and quite like it. We have a Windows application component/framework that is leveraged by the website, and I was wondering if it is possible to use the profiler on that. I'm assuming not, but maybe there is a subcomponent of the code that could be used? I see that there is a way to configure where the results are stored (e.g. SQL Server), so maybe it is close to possible?
We have the following flow:
Website submits job to 'broker' then returns a 'come back later' page.
Broker runs and eventually data in the website's database gets updated by the broker.
Website displays the results.
It'd be great if there were a way I could get the entire workflow profiled. If there is no way, and no intention from the developers to make MvcMiniProfiler available to Windows applications, are there any recommendations for similarly styled profilers?
You could get this working by using SqlServerStorage; there is very little in the code base that heavily depends on ASP.NET. In fact, the SQL interceptor is generalized, and so is the stack used to capture the traces.
I imagine a few changes would need to be made internally, e.g. using Thread.SetData as opposed to HttpContext, but they are pretty superficial.
The way you would get this going is by passing the "profiling identity" into the app and then continuing tracking there. Eventually, when the user hits the site afterwards, the results would show up as little "chiclets" on the left side.
A patch is totally welcome here, but it is not something the current version does.
(Note to future readers: this is probably going to be out of date at some point; if it is, please suggest an edit.)
Yes, there's a Windows port of MiniProfiler: http://nootn.github.io/MiniProfiler.Windows/

Streaming, Daemons, Cronjobs, how do you use them? (in Ruby)

I've finally had a second to look into streaming, daemons, and cron tasks, and all the neat gems built around them! But I'm not clear on how/when to use these things.
I have a few questions:
I have a few questions:
1) If I wanted to have a website that stayed constantly updated, in real time, with my Facebook friends' activity feeds, up-to-the-minute Amazon book reviews of my favorite books, and my Twitter feed, would I just create some custom streaming implementation using the Daemons gem, the ruby-yali gem for streaming the content, and the Whenever gem, which could, say, check those sites every 3-10 seconds to see whether the content I'm looking for has changed? Is that how it would work? Or is it typically/preferably done differently?
2) Is (1) too processor intensive? Is there a better way to do it, a better way for live content streaming, given that the website you want realtime updates from doesn't have a streaming API? I'm thinking about just sending a request every few seconds from a separate small Ruby app (with daemons and cron jobs), getting the JSON/XML result, using Nokogiri to strip out the stuff I don't need, then going through the small list of comments/books/posts/etc., building a feed of what's changed, and using Juggernaut or something to push those changes to some Rails app. Would that work?
I guess it all boils down to the question:
How does real-time streaming of the latest content of some website work? How do YOU do it?
...so if someone is on my site, they can see in real time the new message or new book that just came out?
Looking forward to your answers,
Lance
Well, first: if a website doesn't provide an API, that is a strong indication that it may not be legal to parse and extract their data; either way, you'd better check their terms of use and privacy policy.
Personally, I'm not aware of those sites having a "streaming API", but supposing they have an API at all, you still need to pull the results it provides (XML, JSON, ...), parse them, and present them back to the user. The strategy will vary depending on your app type:
Desktop app: you can just pull the data directly, parse it, and present it to the user; many apps work this way, Twhirl for example.
Web app: here you need to cut down the time spent extracting the data. Typically you will pull the data from the API and store it. However, storing the data is a bit tricky! You don't want your database to become a bottleneck for the app under the heavy read queries it will get when serving the data back. One way to handle this is a push methodology: follow your option (2) to get the data, then push the changes to the user. If you want instant updates, like chat for example, have a look at Orbited. If it's OK to save the data to some kind of "inboxes" for the user and their followers, then the simplest way, as far as I can tell, is to use IMAP to send the updates to the user's inbox.
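To make the pull-then-push idea concrete, here is a minimal sketch of the polling loop (written in Python for brevity; the same structure maps onto the Daemons/Whenever/Juggernaut setup described in the question). The feed URL, the "id" field, and the push step are placeholders.

```python
# Minimal poll-diff-notify loop: fetch a JSON feed every few seconds, track
# item ids already seen, and hand only the new items to whatever push
# mechanism you use (Juggernaut, Orbited, websockets, ...).
import time
import requests

FEED_URL = "https://example.com/api/activity.json"   # placeholder endpoint
POLL_INTERVAL = 10                                    # seconds between pulls

def push_to_clients(items):
    # Placeholder: publish the new items to connected browsers.
    for item in items:
        print("new item:", item.get("id"))

def run():
    seen = set()
    while True:
        try:
            items = requests.get(FEED_URL, timeout=5).json()
            new = [i for i in items if i.get("id") not in seen]
            seen.update(i.get("id") for i in new)
            if new:
                push_to_clients(new)
        except requests.RequestException:
            pass  # transient network error: skip this cycle and retry
        time.sleep(POLL_INTERVAL)

if __name__ == "__main__":
    run()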

How would you make an RSS feed's entries available longer than they're accessible from the source?

My computer at home is set up to automatically download some stuff from RSS feeds (mostly torrents and podcasts). However, I don't always keep this computer on. The sites I subscribe to have a relatively large throughput, so when I turn the computer back on it has no idea what it missed between the time it was turned off and the latest update.
How would you go about storing the feeds' entries for a longer period of time than they're available on the actual sites?
I've checked out Yahoo Pipes and found no such functionality; Google Reader can sort of do it, but it requires manually marking each item. MagpieRSS for PHP can do caching, but that's only to avoid retrieving the feed too often, not really to store more entries.
I have access to a web server (LAMP) that's on 24/7, so a solution using PHP/MySQL would be excellent; any existing web service would be great too.
I could write my own code to do this, but I'm sure someone has run into this problem before?
What I did:
I wasn't aware you could share an entire tag using Google Reader; thanks to Mike Wills for pointing this out.
Once I knew I could do this, it was simply a matter of adding the feeds to a separate Google account (so as not to clog up my personal reading list). I also did some selective matching using Yahoo Pipes just to get the specific entries I was interested in, again to minimize the risk that anything would be missed.
It sounds like Google Reader does everything you're wanting. Not sure what you mean by marking individual items--you'd have to do that with any RSS aggregator.
I use Google Reader for my podiobooks.com subscriptions. I add all of the feeds to a tag, in this case podiobooks.com, that I share (but don't share the URL). I then add the RSS feed to iTunes. Example here.
Sounds like you want some sort of service that checks the RSS feed every X minutes, so you can download every single article/item published to the feed while you are "watching" it, rather than only seeing the items displayed on the feed when you go to view it. Do I have that correct?
Instead of coming up with a full-blown software solution, can you just use cron or some other sort of job scheduling on the webserver with whatever solution you are already using to read the feeds and download their content?
Otherwise it sounds like you'll end up coming close to re-writing a full-blown service like Google Reader.
Writing an aggregator for keeping longer history shouldn't be too hard with a good RSS library.
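If you do end up writing your own, a small cron-driven poller is really all it takes: fetch each feed, insert any entry whose GUID you haven't stored yet, and your archive keeps the entries long after they drop off the source feed. A minimal sketch, assuming the feedparser library and SQLite (feed URLs and schema are placeholders; MySQL would work just as well on the LAMP box):

```python
# Cron-driven RSS archiver: poll each feed and store entries you haven't seen
# before, so they outlive their appearance in the feed itself.
import sqlite3
import feedparser   # third-party: pip install feedparser

FEEDS = ["https://example.com/podcast.rss"]   # placeholder feed list
DB = "feed_archive.db"

def archive():
    conn = sqlite3.connect(DB)
    conn.execute("""CREATE TABLE IF NOT EXISTS entries (
                        guid TEXT PRIMARY KEY,
                        feed TEXT, title TEXT, link TEXT, published TEXT)""")
    for url in FEEDS:
        parsed = feedparser.parse(url)
        for e in parsed.entries:
            guid = e.get("id") or e.get("link")
            # INSERT OR IGNORE keeps only the first copy of each entry.
            conn.execute("INSERT OR IGNORE INTO entries VALUES (?, ?, ?, ?, ?)",
                         (guid, url, e.get("title"), e.get("link"),
                          e.get("published", "")))
    conn.commit()
    conn.close()

if __name__ == "__main__":
    archive()   # schedule with cron, e.g. every 15 minutes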
