Cocoa API for Disk Space Usage Breakdown - macos

I would like to be able to display a disk space usage breakdown chart similar to the one used in the System Information app built into Mac OS X. I've searched but have been unable to find an API that returns any detailed breakdown; the best I can find is the total disk space used.
As far as I can tell, the data in that chart (which actually looks incorrect in the example I'm looking at) is not calculated by sizing the default Music, Movies, Photos, and Applications folders. It seems to add up the space used by specific file types.

Perhaps they are using the Metadata APIs and customizing the search a bit?
That's what I've used in the past to get a breakdown of certain file types:
https://developer.apple.com/library/mac/documentation/Carbon/Conceptual/SpotlightQuery/Concepts/Introduction.html
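For what it's worth, here's a minimal sketch of that idea in modern Swift. This is a guess at the approach, not necessarily what System Information actually does: ask Spotlight for everything in one content-type tree, sum the on-disk sizes, and repeat per category.

```swift
import Foundation

// A hedged sketch: sum the sizes of every file whose content-type tree
// contains "public.audio". Repeat with "public.movie", "public.image",
// "com.apple.application", etc. for the other slices of the chart.
let query = NSMetadataQuery()
query.predicate = NSPredicate(format: "kMDItemContentTypeTree == %@", "public.audio")
query.searchScopes = [NSMetadataQueryLocalComputerScope]

var token: NSObjectProtocol?
token = NotificationCenter.default.addObserver(
    forName: .NSMetadataQueryDidFinishGathering,
    object: query, queue: .main
) { _ in
    query.disableUpdates()
    var total: Int64 = 0
    for case let item as NSMetadataItem in query.results {
        total += item.value(forAttribute: NSMetadataItemFSSizeKey) as? Int64 ?? 0
    }
    print("Audio:", ByteCountFormatter.string(fromByteCount: total, countStyle: .file))
    query.stop()
    if let token = token { NotificationCenter.default.removeObserver(token) }
    exit(0)
}
query.start()
RunLoop.main.run()
```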

Image storage performance in React Native (base64 vs URI path)

I have an app that creates reports with some data and images (min 1 image, max 6). These reports stay saved in my app until the user sends them to the API (which can happen the same day the report was created, or a week later).
My question is: what's the proper way to store these images (I'm using Realm): saving the path (URI) or a base64 string? My current version keeps the base64 for these images (500–800 KB per image), and after my users send their reports to the API, I delete the base64 data.
I was developing a way to save the path to the image and then display it from there. But the URI returned by image-picker is temporary, so I'd need to copy the file somewhere else and save that path. Doing that, though, I'd have the image stored twice on the phone (using memory) for two or three days.
So before I build all this, I was wondering: will copying the image to another path and saving the path be more performant than storing the base64 string on the phone, or shouldn't it make much difference?
I try to avoid text-only answers; including code is best practice. But the question of storing images comes up frequently and isn't really covered in the documentation, so I thought it should be addressed at a high level.
Generally speaking, Realm is not a solution for storing blob-type data: images, PDFs, etc. There are a number of technical reasons for that, but most importantly, an image can go well beyond the capacity of a Realm field. It can also significantly impact performance (especially in a syncing use case).
If this is a local-only app, store the images on disk on the device and keep a reference to where they are stored (their path) in Realm. That will keep the app fast and responsive with a minimal footprint.
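The question is about React Native, but the idea looks roughly like this with the Realm Swift SDK (a hedged sketch; the ReportImage model and persistImage helper are made up for illustration, and the React Native SDK is analogous):

```swift
import Foundation
import RealmSwift

// Sketch of the "path, not blob" approach.
class ReportImage: Object {
    // Store a relative file name rather than a full path: on iOS the app
    // container path can change between launches and updates.
    @Persisted var fileName = ""
}

func persistImage(from tempURL: URL) throws {
    let docs = FileManager.default.urls(for: .documentDirectory,
                                        in: .userDomainMask)[0]
    let name = UUID().uuidString + ".jpg"
    // Copy the picker's temporary file somewhere permanent...
    try FileManager.default.copyItem(at: tempURL,
                                     to: docs.appendingPathComponent(name))
    // ...and persist only the reference in Realm.
    let realm = try Realm()
    try realm.write {
        let image = ReportImage()
        image.fileName = name
        realm.add(image)
    }
}
```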
If this is a synced solution where you want to share images across devices or with other users, there are several cloud-based solutions that can accommodate image storage; you then store a URL to the image in Realm.
One option, GridFS, is part of the MongoDB family of products (which also includes MongoDB Realm). Another option, a solid product we've leveraged for years, is Firebase Cloud Storage.
Now that I've made those statements, I'll backtrack just a bit and refer you to Realm Data and Partitioning Strategy Behind the WildAid O-FISH Mobile Apps, a fantastic article about implementing Realm in a real-world application and, in particular, how to deal with images.
Note that in that article they do store the images in Realm for a short time. However, one thing they left out (which was revealed in a forum post) is that the images are compressed to ensure they don't exceed the Realm field size limit.
I am not totally on board with general use of that technique but it works for that specific use case.
One more note: the image sizes mentioned in the question are pretty small (500–800 KB), which is a tiny amount of data that would really not have an impact, so storing them in Realm as data objects would work fine. The caveat is future expansion: if you decide later to store larger images, it would require a complete rewrite of the code, so why not plan for that up front?

What does an Area Description File (ADF) look like?

I'm starting to work with the Google Tango Tablet, hopefully to create (basic) 2D/3D maps from scanned areas. But first I would like to read as much about the Tango (sensors/API) as I can, in order to make a plan that's as time-efficient as possible.
I instantly noticed the ability to learn areas, which is a very interesting concept; nevertheless, I couldn't find anything about these so-called Area Description Files (ADF).
I know that ADF files can be geographically referenced, and that they contain metadata and a unique UUID. Furthermore, I know their basic functionality, but that's about it.
In some parts of the modules, ADF files are referred to as 'maps'; in other parts they are just called 'descriptions'.
So what do these files look like? Are they already basic (GRID) (2D) maps, or are they just descriptions?
I know there are people who already extracted the ADF files, so any help would be greatly appreciated!
From the Tango ADF documentation:
Important: Saved area descriptions do not directly record images or video of the location, but rather contain descriptions of images of the environment in a very compressed form. While those descriptions can't be directly viewed as images, it is in principle possible to write an algorithm that can reconstruct a viewable image. Therefore, you must ask the user for permission before saving any of their learned areas to the cloud or sharing areas between users to protect the user's privacy, just as you would treat images and video.
Other than that, there doesn't seem to be much info about the file internals. I use a lot of them, but I've never been compelled to look inside; curious, yes, but not compelled.
Without any direct info from the Project Tango folks, anything we provide would be mere speculation. I'm with Mark: not much compelling reason to get at the details. My speculation: it probably contains a set of image descriptors, like SIFT, plus whatever other known device settings are available, like GPS location, orientation (gravity), time(?), etc.
I got hold of an ADF file; it's basically encoded binary and seems difficult to decode.
I will be happy to share the file if anyone is still interested.

Where can I find a memory map for Mac OS X

I remember that, back in the days of the Commodore 64, I had a Reference Book showing me which part of memory was assigned to do what.
I was wondering if something of the likes was available for Mac OS X.
My iMac has a 1TB hard disk and I'd like to know where the free space is, which part is allocated to the display, etc.
Any resources where I could learn more about this subject would be helpful.
Thank you.
The hard disk is not "memory" in the sense you're thinking of. None of it is used for active apps, the display, etc.
You're thinking of RAM, but on modern OSes memory isn't divided up by task the way it may have been on past systems. Where things are located is assigned dynamically as needed, and intentionally randomized for security reasons (google ASLR for more on the randomization).
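If you want to see that randomization for yourself, here's a tiny Swift sketch; run it several times and watch the numbers change on every launch:

```swift
import Foundation
import MachO

// Tiny demo of ASLR: print the randomized slide applied to the main
// executable, plus the address of a stack variable.
var local = 0
print(String(format: "ASLR slide of image 0: 0x%lx", _dyld_get_image_vmaddr_slide(0)))
withUnsafePointer(to: &local) { print("a stack variable lives at \($0)") }
```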
If you go into the "Utilities" folder inside your Mac's "/Applications" folder, you'll see an application named "Activity Monitor".
Open that, and you'll be able to see both memory usage and disk usage, each in its own view.
Now these two views may look a little boring, or underwhelming, compared to what you might have seen on your Commodore 64 (or in my case, a Trash-80 Color Computer), but each one starts out showing just a quick slice of data. If you leave the disk usage view up for more than 10 seconds, you'll see the I/O graph get more detailed with time. And there's probably more functionality hidden in there that I haven't even noticed yet. Try it out.

Mobile barcode readers - where do they get their information?

I'm pretty much a noob, and I've been wondering how mobile barcode readers work. I've seen several apps on the market that let you scan a barcode and then show you the corresponding product data.
I was wondering where the product data typically comes from. Is it usually from a built-in database, or do apps tend to connect to a server to access a database?
Thanks for any and all assistance!
Barcode readers/scanners work by using some sort of standard format to communicate data to the device 'reading' the code. There are typically two types of 'barcodes' used today:
The standard barcode - often referred to as a UPC code
And the QR code - popular for cell-phone apps.
From a developer's standpoint, both work the same way:
A device 'reads' the code, and the code is interpreted as a set of numbers (typical of the standard barcode) or numbers and characters (a QR code).
The interpreted code is then used to look up the related data in a database somewhere. A UPC code references a database of items keyed by number (just like the number printed at the bottom of a UPC label), while a QR code frequently encodes a URL that can be opened in any web browser.
The information from a barcode comes from the referenced data that the barcode points to, so you don't have to carry around a database of information any time you want to scan a code; you just have to be able to connect to that source of information.
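To make the flow concrete, here's a rough Swift sketch of the lookup step. The product-lookup endpoint is hypothetical; real apps use a commercial or open product database API.

```swift
import Foundation

// A QR payload is often already a URL; a UPC is just a number you send
// to some product database. The endpoint below is made up for illustration.
func handleScan(_ payload: String) {
    if let url = URL(string: payload), url.scheme?.hasPrefix("http") == true {
        print("QR code -> open \(url) in a browser")
        return
    }
    let lookup = URL(string: "https://api.example.com/products/\(payload)")!
    URLSession.shared.dataTask(with: lookup) { data, _, _ in
        if let data = data, let record = String(data: data, encoding: .utf8) {
            print("Product record:", record)
        }
    }.resume()
}

handleScan("036000291452")  // a UPC-A number as a scanner would report it
```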
Hope this helps.

Is it possible for a native OS X app to read and copy the Spotlight search index?

I don't want to alter the index in any way: simply read it, monitor it for changes, and replicate it. It would be done with a native app/service running in the background. I'm assuming I'd be targeting 10.6+, but that's not written in stone.
Where is the actual index? Can I read it in any semantically useful way?
Googling around, I haven't found any references to the actual Spotlight index location, or an API to read the whole thing. I did find the Search Kit Reference, which seems to explain how the underlying technology works and might be helpful, but doesn't explain how one might retrieve the entire index or monitor it over time.
I also noticed an app called Houdah that purports to provide an improved frontend to Spotlight, which may be of interest, though I don't know how they achieved their effect. If it's literally just a frontend that calls the same Search Kit APIs as Spotlight against the same index, that's not quite what I'm after...
Edit: Can't believe I hadn't read the Wikipedia article on Spotlight - a good reference, but I think my question stands.
(I'm a front-end web guy, apologies for noobishness.)
UPDATE: An OS X developer friend thought it would be stored in an SQLite database in a hidden file, but couldn't locate the actual file in the few minutes he spent looking. He did find a hidden .spotlight directory, but this was empty.
On Mac OS X 10.7 -- previous versions are significantly different -- the Spotlight index is stored in /.Spotlight-V100/Store-V2. The storage format is undocumented, but is definitely not SQLite.
I doubt that there's any useful way to extract data from the Spotlight index without an impractical amount of reverse engineering. Even if you did, it'd be likely to break with new releases of Mac OS X.
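If all you need is to know when the index changes (rather than what changed), you could at least watch the store directory for activity. Here's a minimal sketch using a GCD file-system event source in modern Swift; note that the original question targeted 10.6, where you'd use the C FSEvents API instead, and that recent macOS permission controls may block reading this directory at all.

```swift
import Foundation

// Sketch: watch the Spotlight store directory for write activity. This
// only tells you *that* something changed, not what changed.
let path = "/.Spotlight-V100"
let fd = open(path, O_EVTONLY)
guard fd >= 0 else { fatalError("cannot open \(path)") }

let source = DispatchSource.makeFileSystemObjectSource(
    fileDescriptor: fd, eventMask: [.write, .attrib], queue: .main)
source.setEventHandler { print("Spotlight store directory changed") }
source.setCancelHandler { close(fd) }
source.resume()
dispatchMain()
```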
