WP 7 Isolated Storage - windows-phone-7

In my WP7 app, I have to store images and XML files of two types:
1: The first type of files are not updated frequently on the server, so I want to store them permanently in local storage so that whenever the app starts it can access them locally, and when these files are updated on the server, the local copies are updated as well. These files should not be deleted on application termination.
2: The second type of files are ones I want to save in isolated storage only temporarily. E.g., the app requests an XML file from the server and I store it locally; the next time the app requests the same file, it is read from local storage instead of the server. These files should be deleted when the application terminates.
How can I do this?
Thanks

1) Isolated Storage is designed for data that should remain permanent (until the user uninstalls the app). There's example code on MSDN showing how to write and save a file. Any file you save (temp or not) will therefore be stored until the user uninstalls the app or your app deletes the file.
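For the first kind of file, a minimal sketch of writing on update and reading on launch (the class and file names here are placeholders):

```csharp
using System.IO;
using System.IO.IsolatedStorage;

// Persists across app restarts; removed only on uninstall or an explicit DeleteFile.
public static class LocalCache
{
    public static void Save(string fileName, string contents)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        using (var writer = new StreamWriter(store.OpenFile(fileName, FileMode.Create)))
        {
            writer.Write(contents);
        }
    }

    public static string Load(string fileName)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (!store.FileExists(fileName)) return null; // not cached yet
            using (var reader = new StreamReader(store.OpenFile(fileName, FileMode.Open)))
                return reader.ReadToEnd();
        }
    }
}
```

On startup you'd call Load first and fall back to the server (followed by Save) only when it returns null, or when the server reports a newer version.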
2) For temporary data, you can use the PhoneApplicationService.State dictionary. Its contents are automatically discarded after your app closes. However, there's a size limit (I believe PhoneApplicationService.State has a limit of 4 MB).
Alternatively, if the XML file is too big, you can write it to Isolated Storage. Then you can handle the application's Closing event and delete the file from Isolated Storage there using the DeleteFile method.
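A sketch of both options, with the cleanup wired to the Application_Closing handler from the default App.xaml.cs template (the state key and "temp.xml" are placeholder names):

```csharp
using Microsoft.Phone.Shell;
using System.IO.IsolatedStorage;

public partial class App
{
    private void CacheTempXml(string xmlString)
    {
        // Small payloads: kept only while the app is running.
        PhoneApplicationService.Current.State["tempXml"] = xmlString;
    }

    // Raised via the Closing event when the app terminates:
    // delete the larger temp file explicitly.
    private void Application_Closing(object sender, ClosingEventArgs e)
    {
        using (var store = IsolatedStorageFile.GetUserStoreForApplication())
        {
            if (store.FileExists("temp.xml"))
                store.DeleteFile("temp.xml");
        }
    }
}
```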

Related

Heroku - Redis memory > 'maxmemory' with a 20MB file in hobby:dev where it should be 25MB

So I am trying to upload a file with Celery, which uses Redis, on my Heroku website. I am trying to upload a .exe file that is 20 MB in size. Heroku says that on their hobby-dev tier the maximum Redis memory is 25 MB. But when I try to upload the file through Celery (turning it from bytes to base64, decoding it, and sending it to the function), I get the error kombu.exceptions.OperationalError: OOM command not allowed when used memory > 'maxmemory'. Keep in mind that when I try to upload e.g. a 5 MB file it works fine, but 20 MB doesn't. I am using Python with the Flask framework.
There are two ways to store files with a DB (Redis is just an in-memory DB). You can either store the blob itself in the DB (for small files, say a few KBs), or you can store the file elsewhere and keep only a pointer to it in the DB.
So for your case, store the file on disk and place only the file pointer in the DB.
The catch here is that Heroku has an ephemeral file system that gets erased every 24 hours, or whenever you deploy a new version of the app.
So you'll have to do something like this:
Write a small function to store the file on the local disk (this is temporary storage) and return the path to the file.
Add a task to Celery with the file path, i.e. the parameter to the Celery task will be the "file-path", not a serialized blob of 20 MB of data.
The Celery worker process picks up the task you just enqueued when it gets free and executes it.
If you need to access the file later, since the local Heroku disk is only temporary, you'll have to place the file in some permanent storage like AWS S3 (a sketch of the whole flow follows below).
(The reason we go through all these hoops instead of placing the file directly in S3 is that access to the local disk is fast, while the S3 disks might be in some other server farm at some other location, so it takes time to save the file there. Your web process might appear slow/stuck if you tried to write the file to S3 in your main process.)
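A rough sketch of that flow in Flask + Celery (the bucket and route names are placeholders; it also assumes the web and worker processes can see the same disk, which on Heroku means running them on the same dyno):

```python
import os
import tempfile

import boto3                      # assumed here as the permanent store (S3)
from celery import Celery
from flask import Flask, request

app = Flask(__name__)
celery = Celery(__name__, broker=os.environ["REDIS_URL"])

@celery.task
def process_upload(path, s3_key):
    # Runs in the worker: only the short path string went through Redis,
    # not the 20 MB payload.
    boto3.client("s3").upload_file(path, "my-bucket", s3_key)
    os.remove(path)               # the local copy was temporary anyway

@app.route("/upload", methods=["POST"])
def upload():
    f = request.files["file"]
    fd, path = tempfile.mkstemp()           # step 1: file goes to local disk
    with os.fdopen(fd, "wb") as tmp:
        f.save(tmp)
    process_upload.delay(path, f.filename)  # step 2: enqueue the path, not the bytes
    return "queued", 202
```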

How to release an app with preloaded Core Data? [duplicate]

This question already has answers here:
How to use a pre-populated database on a coredata context
(2 answers)
Closed 7 years ago.
I'm trying to find the best way to release an app with some preloaded data.
I have an app that has 2 tables, and I want to fill these tables with some data. The problem is that the data is not only text info: one entity contains about 40 attributes (numbers, strings, transformable data), so embedding it in code is not a solution.
Thanks for help.
Write a very small CLI OS X app that stands up your existing Core Data stack.
This CLI creates a pre-populated SQLite file in a known location.
Run this CLI as part of your build procedure.
Include the created SQLite file as part of your app bundle.
On launch, if the destination SQLite file does not exist (NSFileManager will tell you this), copy the SQLite file from your app bundle.
Launch as normal.
This makes the procedure scriptable and consistent. It reuses your existing code structure to build the pre-populated database and lets you keep it up to date.
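A Swift sketch of the launch-time copy ("Model.sqlite" is a placeholder for your store file):

```swift
import Foundation

func installSeedStoreIfNeeded() throws -> URL {
    let fm = FileManager.default
    let docs = try fm.url(for: .documentDirectory, in: .userDomainMask,
                          appropriateFor: nil, create: true)
    let storeURL = docs.appendingPathComponent("Model.sqlite")

    // Copy the bundled seed only on first launch; after that, the working
    // copy in Documents is the source of truth.
    if !fm.fileExists(atPath: storeURL.path),
       let seed = Bundle.main.url(forResource: "Model", withExtension: "sqlite") {
        try fm.copyItem(at: seed, to: storeURL)
    }
    return storeURL
}
```

One caveat: if the seed store was created with WAL journaling (the default on modern SDKs), copy its -wal and -shm sidecar files as well, or build the seed store with the rollback-journal option.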
Here's how I handle it:
I use the default setup, where the backing store for Core Data is a SQLite file.
I set up my app to create the persistent store coordinator with the SQLite file in the app's documents directory.
I build my pre-populated Core Data database on the simulator.
I then go to the app's documents directory on the sim and copy the SQLite file into the app's bundle.
At the beginning of my app's didFinishLaunching method in the app delegate, I check whether the Core Data database's SQLite file exists in the documents directory. If not, I copy it from the bundle into the documents directory.
Then I invoke the code that creates the persistent store coordinator, which expects the SQLite file in the documents directory. On first launch, this is the initial file copied from the bundle. On subsequent launches, it's the working file in the documents directory that holds the current data.
When the user first attempts to access the data, check whether there are any objects in the persistent store, either by executing a fetch request or by getting the count of objects in the persistent store.
If the result of the fetch request is nil, or the count of objects is 0, load data from some file (JSON, plist, XML) into Core Data by hand.
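A Swift sketch of that check ("Item" and "SeedData.plist" are hypothetical names for your entity and seed file):

```swift
import CoreData

func seedIfEmpty(_ context: NSManagedObjectContext) throws {
    let request = NSFetchRequest<NSFetchRequestResult>(entityName: "Item")
    guard try context.count(for: request) == 0 else { return } // already seeded

    guard let url = Bundle.main.url(forResource: "SeedData", withExtension: "plist"),
          let rows = NSArray(contentsOf: url) as? [[String: Any]] else { return }

    for row in rows {
        let item = NSEntityDescription.insertNewObject(forEntityName: "Item",
                                                       into: context)
        for (key, value) in row {
            item.setValue(value, forKey: key) // keys must match attribute names
        }
    }
    try context.save()
}
```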

Blob files have to be renamed manually to include the parent folder path

We are new to Windows Azure and are using Azure blob storage for blob objects while developing a Sitefinity application, but the blob files which are uploaded to this storage via publishing to Azure from Visual Studio are uploaded with only their file names; the prefix folder name and slash are not maintained. Hence we have to rename every file manually on the Windows Azure management portal, putting the folder name and slash at the beginning of each file name, so that the page which accesses these images can show them properly; otherwise the images are not shown due to the incorrect path.
In the Sitefinity admin panel, though, when we upload these images/blob files on those pages, we upload them inside a folder, and we have configured Sitefinity to use Azure storage instead of the database.
Please check the file attached to see the screenshot.
Please help me to solve this.
A few things I would like to mention first:
Windows Azure does not support rename functionality. Renaming a blob = copying the blob followed by deleting it.
The copy blob operation is asynchronous, so you must wait for the copy operation to finish before deleting the blob.
Blob storage does not support folder hierarchy natively. As you may have already discovered, you create the illusion of a folder by prepending a blob name (say logo.png) with the name of the folder you want (say images), separated by a slash (/), so your blob name becomes images/logo.png.
Now coming to your problem. Needless to say that manually renaming the blobs would be a cumbersome exercise. I would recommend using a storage management tool to do that. One such example would be Azure Management Studio from Cerebrata. If you use that tool, essentially what you can do is create an empty folder in the container and then move the files into that folder. That to me would be the fastest way to achieve your objective.
If you wish to write some code to do that, here are the steps you will take:
First you will list all blobs in a blob container.
Next you will loop over this list.
For each blob (let's call it the source blob), you would take its name, prepend the folder name you want, and create a CloudBlockBlob instance for that new name.
Next you would initiate a copy operation on this new blob using StartCopyFromBlob, passing your source blob as the copy source.
You would need to wait for the copy operation to finish. Once the copy operation is finished, you can safely delete the source blob.
P.S. I would have written some code but unfortunately I'm stuck with something else. I might write something later on (but please don't hold your breath for that :)).
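Until then, here is a minimal sketch of those steps, assuming the classic Microsoft.WindowsAzure.Storage SDK (which is where StartCopyFromBlob lives); the connection string, container name, and "images/" folder are placeholders:

```csharp
using System.Linq;
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

static class BlobMover
{
    public static void MoveAllIntoFolder(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var container = account.CreateCloudBlobClient()
                               .GetContainerReference("mycontainer");

        // Materialize the flat listing first so the loop doesn't pick up
        // the target blobs it creates as it goes.
        var sources = container.ListBlobs(null, true)
                               .OfType<CloudBlockBlob>()
                               .Where(b => !b.Name.StartsWith("images/"))
                               .ToList();

        foreach (var source in sources)
        {
            var target = container.GetBlockBlobReference("images/" + source.Name);
            target.StartCopyFromBlob(source);

            // Copying is asynchronous: poll until it finishes before deleting.
            target.FetchAttributes();
            while (target.CopyState.Status == CopyStatus.Pending)
            {
                Thread.Sleep(500);
                target.FetchAttributes();
            }
            if (target.CopyState.Status == CopyStatus.Success)
                source.Delete();
        }
    }
}
```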

Is IndexedDB duplicating local files to store for an offline application?

Firefox only allows access to full file path via extensions
It has also been stated that if you store files in IndexedDB that they are stored externally, outside the DB (see this)
If I insert a bunch of files into IndexedDB, close it down, come back tmw and open the DB, how does it know where my files that I inserted yesterday are located?
Does IndexedDB have access to the full file path? If so, can I get access to the full file path via IndexedDB?
OR does IndexedDB make duplicate copies?
(this is for offline use)
EDIT
I can store a bunch of files with their own separate keys in IndexedDB and iterate over them to repopulate an application.
IndexedDB is smart enough not to store the same file twice. How does it do this?
Most importantly, if the application is an image viewer for offline use then importing those images into IndexedDB to be managed will duplicate the files(?) Now I have two sets of vacation photos. Is this correct?
My guess would be that these files get stored in the same location as the IndexedDB databases. For more information on where to find it, take a look at the post I wrote about the location of the IndexedDB some time ago.
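For reference, the store-by-key pattern from the question looks roughly like this (the database and store names are arbitrary placeholders); where the bytes physically end up on disk is the browser's decision:

```typescript
// Stash File objects by key and read them back later (browser IndexedDB API).
const open = indexedDB.open("fileCache", 1);
open.onupgradeneeded = () => open.result.createObjectStore("files");

open.onsuccess = () => {
  const db = open.result;

  function saveFile(key: string, file: File): void {
    db.transaction("files", "readwrite").objectStore("files").put(file, key);
  }

  function loadFile(key: string, done: (f: File) => void): void {
    const req = db.transaction("files").objectStore("files").get(key);
    req.onsuccess = () => done(req.result as File);
  }
};
```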

ACCESS_DENIED_ERROR when using NetFileClose API

I have a Windows network in which many files are shared across many users with full control. I have a folder shared for everyone on my system, so whenever I try to access it using the machine name (Run -> \\Servername) from another system, I can see the shared folder and open/write files in it.
But my requirement is to close any open files (in my system) on the network. So I used NetFileEnum to list all open file IDs so that I can close those files using the NetFileClose API.
But the problem is that NetFileEnum returns what look like invalid junk IDs, e.g. 111092900, -1100100090, so I can't close the files from another machine. So I listed the network-opened files using the net file command and, noting an ID, say 43, I hard-coded it in my call: NetFileClose("Servername", 43). But when I executed it, I got ACCESS_DENIED_ERROR. If the same code is run on the server, it closes the files successfully. I had given full permission on the share for all users.
But why ACCESS_DENIED_ERROR, and why is NetFileEnum returning invalid IDs? Is there anything to be done for this API to work? How can I use these APIs properly to close network-opened files?
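For reference, the enumerate-and-close sequence being described looks roughly like this in C ("Servername" is a placeholder; per MSDN, the caller must be a member of the Administrators or Server Operators group on that server, and each id should come from a NetFileEnum call against that same server):

```c
#include <stdio.h>
#include <windows.h>
#include <lm.h>                     /* NetFileEnum, NetFileClose */
#pragma comment(lib, "netapi32.lib")

int main(void)
{
    FILE_INFO_3 *buf = NULL;
    DWORD read = 0, total = 0;
    DWORD_PTR resume = 0;

    /* Enumerate files opened over the network on the server. */
    NET_API_STATUS status = NetFileEnum(L"Servername", NULL, NULL, 3,
                                        (LPBYTE *)&buf, MAX_PREFERRED_LENGTH,
                                        &read, &total, &resume);
    if (status == NERR_Success) {
        for (DWORD i = 0; i < read; i++) {
            wprintf(L"id=%u path=%ls user=%ls\n",
                    buf[i].fi3_id, buf[i].fi3_pathname, buf[i].fi3_username);
            /* Close using the id returned by this same enumeration. */
            NetFileClose(L"Servername", buf[i].fi3_id);
        }
        NetApiBufferFree(buf);
    }
    return 0;
}
```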
