What is the most reasonable way to persistently store around 300 values with Appcelerator?

With Axway Appcelerator one can persistently store data values - among others, as arrays and lists of objects - using Ti.App.Properties:
https://wiki.appcelerator.org/display/guides2/Lightweight+Persistence+with+the+Properties+API
Alternatively, one can also store those values in a database.
I need to store around 200-300 objects/values with the structure word-translation-property1-property2. No complex queries are needed, just plain text.
Should I use a database or can I store them as lightweight App Properties?

You can use the properties. That's likely the easiest.

I have done some basic testing (a plain tab-based app) and think AppProperties are indeed just fine.
The AppProperties list I stored had around 200 entries with the structure:
{original: "die Hauptstadt", translation: "la capitale", type: "noun", gcase: "0", color: "cccccc", gender: "F"}
App start-up time in the simulator (iOS) does not seem to differ much, at around 30 ms.
The app size is also only negligibly bigger (the properties list with 200 elements showed around 40 KB more data in Android's app settings).
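For reference, this is roughly how such a list is stored and read back with the Properties API (a minimal sketch; the property name "vocabulary" is just an example):

    // store the whole list once, e.g. after importing the data
    var words = [
        {original: "die Hauptstadt", translation: "la capitale", type: "noun", gcase: "0", color: "cccccc", gender: "F"}
        // ... remaining entries
    ];
    Ti.App.Properties.setList("vocabulary", words);

    // read it back on the next app start (second argument is the default)
    var stored = Ti.App.Properties.getList("vocabulary", []);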

Related

Image storage performance in React Native (base64 vs URI path)

I have an app to create reports with some data and images (min 1 image, max 6). These reports stay saved in my app until the user sends them to the API (which can be done the same day the report was registered, or a week later).
But my question is: what's the proper way to store these images (I'm using Realm) - saving the path (URI) or a base64 string? My current version keeps the base64 for these images (500-800 KB image size), and after my users send their reports to the API, I delete the base64 string.
I was developing a way to save the path to the image and then display it. But the URI returned by the image picker is temporary, so to do this I need to copy the file to another place and then save the path. Doing that, though, I end up (for around 2 or 3 days) with the images stored twice on the phone (using up storage).
So before I develop all this stuff, I was wondering: will it (copying the image to another path, then saving the path) be more performant than saving a base64 string on the phone, or shouldn't it make much difference?
I try to avoid text-only answers; including code is best practice, but the question of storing images comes up frequently and isn't really covered in the documentation, so I thought it should be addressed at a high level.
Generally speaking, Realm is not a solution for storing blob-type data - images, PDFs, etc. There are a number of technical reasons for that, but most importantly, an image can go well beyond the capacity of a Realm field. Additionally, it can significantly impact performance (especially in a syncing use case).
If this is a local-only app, store the images on disk on the device and keep a reference to where they are stored (their path) in Realm. That will keep the app fast and responsive with a minimal footprint.
If this is a synced solution where you want to share images across devices or with other users, there are several cloud-based solutions to accommodate image storage; you then store a URL to the image in Realm.
One option, GridFS, is part of the MongoDB family of products (which also includes MongoDB Realm). Another option, a solid product we've leveraged for years, is Firebase Cloud Storage.
Now that I've made those statements, I'll backtrack just a bit and refer you to the article Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps, a fantastic piece about implementing Realm in a real-world application, and in particular about how to deal with images.
In that article, note they do store the images in Realm for a short time. However, one thing they left out of that (which was revealed in a forum post) is that the images are compressed to ensure they don't go above the Realm field size limit.
I am not totally on board with general use of that technique but it works for that specific use case.
One more note: the image sizes mentioned in the question are pretty small (500-800 KB) and that's a tiny amount of data which would really not have an impact, so storing them in Realm as a data object would work fine. The caveat is future expansion: if you decide to store larger images later, it would require a complete rewrite of the code, so why not plan for that up front.
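For the local-only case, a minimal sketch of the path-based approach (react-native-fs is assumed for the file copy; the Report schema and helper names are hypothetical, for illustration only):

    import Realm from 'realm';
    import RNFS from 'react-native-fs';

    const ReportSchema = {
      name: 'Report',
      primaryKey: 'id',
      properties: { id: 'int', imagePaths: 'string[]' },
    };

    // copy the picker's temporary file into permanent app storage,
    // then persist only the new path in Realm
    async function attachImage(realm, report, tempUri) {
      const dest = `${RNFS.DocumentDirectoryPath}/report_${report.id}_${Date.now()}.jpg`;
      await RNFS.copyFile(tempUri, dest);
      realm.write(() => { report.imagePaths.push(dest); });
    }

    // once the report has been sent to the API, remove the files along with the record
    async function removeReport(realm, report) {
      await Promise.all(report.imagePaths.map((p) => RNFS.unlink(p)));
      realm.write(() => { realm.delete(report); });
    }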

How should I store data (100 x 2 array) in Gatsby?

I'm developing a map which loads 100 location pinpoints. To do so, I want to save a two-dimensional array of 100 rows and 2 columns, which stores 100 locations (latitude/longitude).
I am considering a CMS or the GraphQL filesystem source, but I don't know which to use. I want to prioritize webpage load time, but if possible it'd be nice to conceal my data.
Where should I store my data? Please give me advice. Other options are also welcome. Thank you.
100 locations with latitude and longitude is still only a few KB of data. You do not need sophisticated data storage solutions for this.
The easiest way is to use your local filesystem, i.e., define the array in a JavaScript file inside your project.
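For example (the file layout and names here are just placeholders):

    // src/data/locations.js
    export const locations = [
      [35.6895, 139.6917],
      [34.6937, 135.5023],
      // ... 98 more [latitude, longitude] pairs
    ];

    // in the page/component that renders the map
    import { locations } from '../data/locations';
    locations.forEach(([lat, lng]) => addMarker(lat, lng)); // addMarker stands in for whatever your map library provides

Note that anything bundled this way ships in the page's JavaScript, so it is visible to anyone who inspects the source; it does not conceal the data.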
The next easiest way is to use any external solution that Gatsby allows. Take a look at their possible external sources.

Adobe AIR - Database vs XML for large static data

I'm developing an application using Flash Builder / Flex for Adobe AIR. This application will be processing a large set of static text (100-200 MB) using a variable set of processing instructions. The target platforms will be iOS, Android and Desktop.
The data set can be either one large XML file or broken into a bunch of XML files about 3MB each. This will be decided at design time.
From your experience, would it be better to store the text in an Adobe AIR database or a set of XML files for best performance (including speed and battery life)?
What other considerations should I take into account?
I quote one of my favourite bookmarks:
There are several different methods for persisting data in AIR applications:
Flat files
Local shared objects
EncryptedLocalStore
Object serialization
SQL database
Each of these methods has its own set of advantages and disadvantages (an explanation of which is beyond the scope of this article). One of the advantages of using a SQL database is that it helps to keep your application's memory footprint down, rather than loading a lot of data into memory from flat files. For example, if you store your application's data in a database, you can select only what you need, when you need it, then easily remove the data from memory when you're finished with it.
Source: http://www.adobe.com/devnet/air/articles/10_tips_building_on_air.html
I don't understand one thing: is EVERY file 100-200 MB in size, or is this the total size of ALL your files?

Transferring lots of objects with Guid IDs to the client

I have a web app that uses Guids as the PK in the DB for an Employee object and an Association object.
One page in my app returns a large amount of data showing all the Associations that Employees may be a part of.
So right now, I am sending to the client essentially a bunch of objects that look like:
{assocation_id: guid, employees: [guid1, guid2, ..., guidN]}
It turns out that many employees belong to many associations, so I am sending down the same Guids for those employees over and over again in these different objects. For example, it is possible that I am sending down 30,000 total guids across all associations in some cases, of which there are only 500 unique employees.
I am wondering if it is worth me building some kind of lookup index that I also send to the client like
{ 1: Guid1, 2: Guid2 ... }
and replacing all of the Guids in the objects I send down with those ints,
or if simply gzipping the response will compress it enough that this extra effort is not worth it?
Note: please don't get caught up in the details of if I should be sending down 30,000 pieces of data or not -- this is not my choice and there is nothing I can do about it (and I also can't change Guids to ints or longs in the DB).
You wrote the following at the end of your question:
Note: please don't get caught up in the details of if I should be
sending down 30,000 pieces of data or not -- this is not my choice and
there is nothing I can do about it (and I also can't change Guids to
ints or longs in the DB).
I think this is your main problem. If you don't solve the main problem, you may be able to reduce the size of the transferred data by 10 times, for example, but you still won't have solved the main problem. Let us think about the question: why should so much data be sent to the client (to the web browser)?
The data on the client side is needed to display some information to the user. The monitor is not large enough to show 30,000 items on one page, and no user is able to grasp so much information, so I am sure that you display only a small part of the information. In that case you should send only the small part of the information which you actually display.
You don't describe how the GUIDs will be used on the client side. If you need the information during row editing, for example, you can transfer the data only when the user starts editing. In that case you need to transfer the data for only one association.
If you need to display the GUIDs directly, then you can't display all the information at once. So you can send the information for one page only, and if the user scrolls or clicks the "next page" button, you send the next portion of data. In this way you can really dramatically reduce the size of the transferred data.
If you have no possibility to redesign that part of the application, you can implement your original suggestion: by replacing a GUID like "{7EDBB957-5255-4b83-A4C4-0DF664905735}" or "7EDBB95752554b83A4C40DF664905735" with a number like 123, you reduce the size of a GUID from 34 characters to 3. If you additionally send an array of "guid mapping" elements like
123:"7EDBB95752554b83A4C40DF664905735",
you can reduce the original data size of 30000*34 = 1020000 characters (about 1 MB) to 300*39 + 30000*3 = 11700 + 90000 = 101700 characters (about 100 KB). So you can reduce the size of the data by a factor of 10. Compressing dynamic data on the web server can reduce the size further.
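To make the mapping idea concrete, here is a sketch of what the client-side decoding could look like (the payload shape is illustrative, not prescriptive):

    // payload as sent by the server:
    // { employees: { "123": "7EDBB95752554b83A4C40DF664905735", ... },
    //   associations: [ { association_id: "...", employees: [123, ...] } ] }
    function expandGuids(payload) {
        return payload.associations.map(function (assoc) {
            return {
                association_id: assoc.association_id,
                employees: assoc.employees.map(function (shortId) {
                    return payload.employees[shortId]; // look the full GUID back up
                })
            };
        });
    }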
In any case you should examine why your page is so slow. If the program works on a LAN, then transferring even 1 MB of data can be quick enough. Probably the page is slow while placing the data on the web page. I mean the following: if you modify some element on the page, the positions of all existing elements have to be recalculated. If you work with disconnected DOM objects first and then place the whole portion of data on the page at once, you can improve the performance dramatically. You didn't post in the question which technology you use in your web application; if you use jQuery, for example, the sketch below shows more clearly what I mean.
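A minimal sketch, assuming an employees array and a table with id "grid" (both hypothetical):

    // slow: appending row by row to the live DOM forces a reflow per row
    // $.each(employees, function (i, name) { $('#grid').append('<tr><td>' + name + '</td></tr>'); });

    // faster: build the rows on a detached element first, then attach once
    var $tbody = $('<tbody/>');
    $.each(employees, function (i, name) {
        $('<tr/>').append($('<td/>').text(name)).appendTo($tbody);
    });
    $('#grid').append($tbody);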
The lookup index you propose is nothing other than a "custom" compression scheme. As amdmax stated, this will increase your performance if you have a lot of the same GUIDs, but so will gzip.
IMHO, the extra effort of writing the custom coding will not be worth it.
Oleg states correctly, that it might be worth fetching the data only when the user needs it. But this of course depends on your specific requirements.
if simply gzipping the response will compress it enough that this extra effort is not worth it?
The answer is: Yes, it will.
Compressing the data will remove redundant parts as well as possible (depending on the algorithm) until decompression.
To be sure, just generate the data uncompressed and compressed and compare the results. You can count the duplicate GUIDs to calculate how big your data block would be with the dictionary compression method. But I guess gzip will be better, because it can also compress the syntactic elements like braces, colons, etc. inside your data object.
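If your server side runs Node, for example, the comparison takes only a few lines (responseObject stands in for whatever you currently serialize):

    var zlib = require('zlib');

    var raw = Buffer.from(JSON.stringify(responseObject)); // the uncompressed payload
    var gzipped = zlib.gzipSync(raw);                      // what the server would send with gzip enabled

    console.log('raw: ' + raw.length + ' bytes, gzipped: ' + gzipped.length + ' bytes');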
So what you are trying to accomplish is Dictionary compression, right?
http://en.wikibooks.org/wiki/Data_Compression/Dictionary_compression
What you will get instead of GUIDs, which are 16 bytes long, are ints, which are 4 bytes long. And you will get a dictionary full of key-value pairs associating each GUID with some int value, right?
It will decrease your transfer time when many objects share the same id, but it will spend CPU time before the transfer to compress and after the transfer to decompress. So what is the amount of data you transfer? Is it MB / GB / TB? And is there any good reason to compress it before sending?
I do not know how dynamic your data is, but I would:
on a first call, send two directories/dictionaries mapping short ids to long GUIDs, one for your associations and one for your employees, e.g. {1: AssoGUID1, 2: AssoGUID2, ...} and {1: EmpGUID1, 2: EmpGUID2, ...}. These directories may also contain additional information on the Association and Employee instances; I suspect you do not simply display GUIDs
on subsequent calls, just send the index of Employees per Association, e.g. { 1: [2,4,5], 3: [2,4], ...}, the key being the association short id and the ids in the array value the short ids of the employees. Given your description, building the reverse index (Employee to Associations) may give a better result size-wise (but with more processing)
Then it's all down to associative array manipulation, which is straightforward in JS; see the sketch below.
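For illustration, resolving one association's employees could look like this (the directory contents are made up; in practice they would hold the GUIDs and display fields you actually need):

    // sent once:
    var assoDir = { 1: { guid: 'AssoGUID1', name: 'Sales' }, 2: { guid: 'AssoGUID2', name: 'IT' } };
    var empDir  = { 1: { guid: 'EmpGUID1', name: 'Alice' }, 2: { guid: 'EmpGUID2', name: 'Bob' } };

    // sent per call:
    var employeesPerAsso = { 1: [2], 2: [1, 2] };

    function employeesOf(assoShortId) {
        return (employeesPerAsso[assoShortId] || []).map(function (shortId) {
            return empDir[shortId]; // resolve short id to the full employee record
        });
    }

    // e.g. render association 2 with its employee names
    var label = assoDir[2].name + ': ' +
        employeesOf(2).map(function (e) { return e.name; }).join(', '); // "IT: Alice, Bob"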
Again, if your data is (very) dynamic server side, the two directories will soon be obsolete and maintaining synchronization may cost you a lot.
I would start by answering the following questions:
What are the performance requirements? Are there size requirements? Speed requirements? What is the minimum performance that is truly needed?
What are the current performance metrics? How far are you from the requirements?
You characterized the data as possibly being mostly repeats. Is that the normal case? If not, what is?
The 2 options you listed above sound reasonable and trivial to implement. Try creating a look-up table and see what performance gains you get on actual queries. Try zipping the results (with look-ups and without), and see what gains you get.
In my experience if you're not TOO far from the goal, performance requirements are often trial and error.
If those options don't get you close to the requirements, I would take a step back and see if the requirements are reasonable in the time you have to solve the problem.
What you do next depends on which performance goals are lacking. If it is size, you're starting to be limited if you're required to send the entire association list every time. Is that truly a requirement? Can you send the entire list once, and then just updates?

Using Core Data with many images per entity?

I'm new to Core Data and I'm working on my first personal iOS app.
I have an entity, let's call it Car, which has a thumbnail as well as a gallery of other images associated with it. The data is synced to an online service using ASIHTTPRequest and JSONKit. The app doesn't need to create new Cars, just display them.
The thumbnail could be around 100 KB, so I may store that as blob data within the Car entity.
However, I'm not sure how I should store the other, multiple images.
The images would be around 800 KB to 1 MB each, so storing them in the Core Data store doesn't seem to be recommended.
The only options I can think of are:
Store the URL of each photo within another entity, CarImage, and rely on ASIHTTPRequest's cache.
Create a folder structure, save each image into its corresponding Car's folder, and keep references to the file path in the CarImage entity.
Because the data is synced, there is the potential for Cars to be deleted, so images in folders would have to be deleted as well. I can see this getting out of hand pretty quickly.
I would appreciate any advice. Thanks.
I'd take your first option.
Regarding the images that would have to be deleted: isn't that taken care of automatically by ASIHTTPRequest's cache, once they expire? At least that's what I'd expect from a cache...
I'd go with the first option. I've done something similar in the past, though I actually did store the image binary data in Core Data as well. I wouldn't recommend storing the data, though, as this caused problems for me - just rely on ASIHTTPRequest's cache.