What's the correct methodology to implement In App Purchase that unlocks existing functionality? - xcode

So I've built an iOS app (my first) and I want to distribute it for free. It's a content creation app, and my plan is to allow the user full access to create up to 5 records of content for evaluation purposes. If the user likes the app and wants to continue generating new content, they'll have to purchase an unlock via in-app purchase.
I've looked at the documentation, and I'm going to use MKStoreKit to do this. I understand that I'm going to be creating a non-consumable, non-subscription product to sell.
So my problem is that while I can find lots of information on HOW to do the actual IAP, I can't find anything on where or how to track that it was purchased. That is, how do I go about ensuring the app is unlocked? Does it require a round trip to the App Store servers on every app startup? If so, I'm a bit concerned, because network connectivity isn't guaranteed.
Another possibility I've been thinking about is writing some kind of semaphore somewhere when the unlock is purchased, whether it's a file or just modifying a setting in a .plist. This is certainly optimal from a user-experience point of view, but can it be easily hacked? If I write a file, can a user just take that file and distribute it to whomever they like?
Is there some standard mechanism or methodology that's typically employed here?
Thanks for any assistance.

What I usually do is check with Apple's servers whether the content is unlocked. If it is, I set an attribute in a .plist and check that attribute to unlock the content.
There are two common approaches: the first is to contact Apple's servers only when the attribute is not set (or doesn't have a specific value); the other, which is more secure but, in my opinion, not the best, is to have a point in your app where the Apple servers are verified again every time it is executed.
What you need to keep in mind is that if your application is hacked, there is nothing you can do about it; but the great majority of users don't care about hacks or even jailbreaks, so forget about them and apply the check when the app opens, and only if it is not unlocked yet.
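A minimal sketch of that pattern in Swift, using plain StoreKit rather than MKStoreKit (the defaults key and class name are placeholders; a determined user can still edit the stored flag, which is exactly the trade-off described above):

```swift
import StoreKit

// Sketch only: persist the unlock locally once StoreKit reports the purchase,
// so no network round trip is needed on later launches.
final class UnlockManager: NSObject, SKPaymentTransactionObserver {
    static let shared = UnlockManager()
    private let unlockKey = "com.example.fullVersionUnlocked" // hypothetical key

    var isUnlocked: Bool {
        UserDefaults.standard.bool(forKey: unlockKey)
    }

    func startObserving() {
        SKPaymentQueue.default().add(self)
    }

    func paymentQueue(_ queue: SKPaymentQueue,
                      updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions {
            switch transaction.transactionState {
            case .purchased, .restored:
                // Record the unlock locally; check this flag at launch instead of
                // contacting Apple's servers every time.
                UserDefaults.standard.set(true, forKey: unlockKey)
                queue.finishTransaction(transaction)
            case .failed:
                queue.finishTransaction(transaction)
            default:
                break
            }
        }
    }
}
```

If you want the flag to be a little harder to tamper with, the same value could be kept in the keychain or re-derived from the App Store receipt rather than UserDefaults.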

Related

Windows: where can I store data secretly in order to implement a time-limited demo?

I'm writing a Windows app that has a time limited demo. There's not going to be a server that the app can phone home to, so I need to store data on the system in order to figure out if the demo has been started and how much time is remaining. The location of this data needs to be obfuscated so that a typical user (and possibly even some power users) are unlikely to be able to find it.
I already know the logistics of how to implement a time limited demo as long as I can store data secretly somewhere on the system, but I'm not sure how to do that last part. The requirements here are:
The data needs to be globally readable and writable so that any user account can access it and modify it without requiring elevated privileges (as the demo applies system-wide and not on a per-user basis)
Preferably it doesn't require elevated permissions to create the data, but if it's necessary to do that once (for example to create the data and adjust its permissions so that everyone has write access) that's acceptable though not ideal.
Whatever method or combination of methods I use to do this needs to work in Windows 7 and later
Does anyone have any idea on how I can accomplish this?

How can I uniquely identify a computer?

I would like to develop an application that can connect to server and uniquely identify clients then give them permissions to run a specific query on server's database.
How can I identify clients in a unique way? Is the MAC address reliable enough, or should I use something like the CPU ID, or something else?
Clarification: I do not want to create a registration code for my app, as it's supposed to be a free application. I want to identify each client by an ID and decide which ones have permission to run a specific method on the server.
The usual approach is to give each client a login (name + password). That way, it's easy to replace clients when they need an upgrade or when they fail.
A MAC address should be unique, but there is no central registry that enforces this rule. There are also tools to change it, so it's only somewhat reliable.
CPU and HD IDs are harder to change, but people will come complaining when their hard disk dies or when they upgrade their system.
Many PCs have TPM modules with their own IDs, but they can be disabled and the IDs can be wiped. There are also privacy issues (people don't like it when software automatically tracks them).
Another problem with an automated ID approach is how to identify clients on the server. When several clients connect for the first time in quick succession, you will have trouble telling them apart.
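As a rough sketch of the login-plus-generated-ID idea (rather than hardware fingerprinting), the client can mint a random identifier on first run, store it, and present it alongside the login credentials; the function and key names below are illustrative only:

```swift
import Foundation

// Illustrative only: generate a per-installation ID once and reuse it,
// instead of relying on MAC/CPU/disk identifiers.
func installationID() -> String {
    let key = "com.example.installationID" // hypothetical storage key
    let defaults = UserDefaults.standard
    if let existing = defaults.string(forKey: key) {
        return existing
    }
    let fresh = UUID().uuidString
    defaults.set(fresh, forKey: key)
    return fresh
}

// The server then maps this ID to the account created at login, which avoids
// confusing two new clients that happen to connect at the same time.
```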
This question appears to have already been asked and answered in detail (although, you may not like the answers, since they appear to add up to: it's problematic.) I agree with Xefan's comment that more details would help define your question. Here's a link to earlier discussion on this:
What is a good unique PC identifier?

How many security scoped bookmarks can be opened at once?

I have an application that I'd like to prepare for the Mac App Store.
The application needs to constantly access a number of "sources", be these volumes or folders, in order to be warned of any changes via FSEvents. The number of sources depends on the user.
To do this across relaunches I'll need to create and access each of them via a security-scoped bookmark. However, the documentation forewarns me of this problem:
"If you fail to relinquish your access to file-system resources when you no longer need them, your app leaks kernel resources. If sufficient kernel resources are leaked, your app loses its ability to add file-system locations to its sandbox, such as via Powerbox or security-scoped bookmarks, until relaunched."
Can someone tell me how many locations I can actually have open at one time? I don't expect a user to need more than 30 or so sources at the very maximum, but I have no idea at what point I'll start having issues with too many security-scoped bookmarks open at once.
Regards,
Tim
You will be fine with 30. In fact, you would probably be fine even with 1000. This has been confirmed by an Apple employee: https://devforums.apple.com/message/802537
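For reference, the pairing the documentation is warning about looks roughly like this in Swift; the point is simply to balance every successful startAccessingSecurityScopedResource() with a stopAccessingSecurityScopedResource() call once a source is no longer needed (error handling trimmed):

```swift
import Foundation

// Resolve a previously stored security-scoped bookmark and begin accessing it.
// Stop accessing the URL as soon as the source is removed, otherwise the
// kernel resources mentioned in the documentation leak.
func startAccessingSource(bookmarkData: Data) throws -> URL {
    var isStale = false
    let url = try URL(resolvingBookmarkData: bookmarkData,
                      options: [.withSecurityScope],
                      relativeTo: nil,
                      bookmarkDataIsStale: &isStale)
    guard url.startAccessingSecurityScopedResource() else {
        throw CocoaError(.fileReadNoPermission)
    }
    return url
}

func stopAccessingSource(_ url: URL) {
    url.stopAccessingSecurityScopedResource()
}
```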

Protecting authentication data in WP7 app

I am writing a WP7 app and would like to include features to share highscore data using Amazon's AWS as storage service.
As far as I understand, WP7 XAP files are (currently) safely encrypted and no known jailbreak for the phone exists. However, given that such "safe" encryption may only be temporary, I would like to understand if and how this violates best practice.
AWS's DynamoDB uses temporary access tokens that can be generated from given account data and are valid for 36 hours; the tokens must be verified using a signature with every request.
My plan is for all access data to be stored in the XAP file, which will also generate the temporary access token and signature. The information will be passed between the phone and AWS via HTTPS requests.
I have tried to work out alternative approaches, including handing the temporary token generation off to an external web service, but I cannot think of a way to protect this data that would not be similarly compromised if the XAP file were accessible.
Am I missing the best practice approach completely or am I just overly cautious?
Thanks.
You won't ever be able to prevent users from sending false scores, for pretty much the same reason that unofficial cheating apps exist for every popular game. The best you can do is make it harder.
With a simple approach, the client sends the score directly to the server, without any kind of encryption. Someone can cheat just by running the app on the emulator and capturing the outgoing packets, then opening the same URL in a desktop browser. Estimated time: less than 10 minutes, and it can be done by anyone who knows that XAPs can be downloaded directly from the marketplace, stripped of their manifest, and deployed on the emulator.
Then you can add an encryption key on the client. Now someone has to know C# and Reflector to extract it, but it's still easy for someone with those skills.
Next level, you can add an encryption key AND obfuscate the assembly. Knowledge of CIL and Reflector is required to extract the key. It'll take 30 minutes to an hour for a highly skilled developer to extract the key, and many hours for most developers.
Finally, you can add multiple steps to confuse the intruder even more (for instance, downloading a temporary token from a server and using it somehow in the score-sending process). You can also design the scoring system so that some scores are invalid (a dumb example: if the minimal scoring action earns 2 points, then anyone who sends an odd number as a score is cheating. That one is easy to figure out, but you can make much more complex rules).
Either way, keep in mind that your system will always be vulnerable; it's only a matter of how much time it will take an attacker to break through it. If it takes many hours or days for a highly skilled developer, then unless you're offering some worthy prize to the best player, you can safely assume that nobody will bother.
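The question is about WP7/C#, but the token-plus-signature idea is language-agnostic; here is a conceptual sketch in Swift using CryptoKit, with made-up field names, just to show the shape of a signed score submission:

```swift
import CryptoKit
import Foundation

// Conceptual sketch: sign the score together with a server-issued one-time
// token so that a raw replayed or hand-crafted request is rejected.
// The field names and the source of the secret key are assumptions.
func signedScorePayload(score: Int, serverToken: String, secret: SymmetricKey) -> [String: String] {
    let message = "\(score)|\(serverToken)"
    let signature = HMAC<SHA256>.authenticationCode(for: Data(message.utf8), using: secret)
    return [
        "score": String(score),
        "token": serverToken,
        "signature": Data(signature).base64EncodedString()
    ]
}
```

The server recomputes the HMAC with its copy of the secret and rejects any payload whose signature or token doesn't match; the weakness, as described above, remains that the secret has to live somewhere in the client.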

is it reasonable to protect drm'd content client side

Update: this question is specifically about protecting (enciphering/obfuscating) the content client side vs. doing it before transmission from the server. What are the pros and cons of an approach like iTunes', in which the files aren't enciphered or obfuscated before transmission?
As I added in my note to the original question, there are contracts in place that we need to comply with (as is the case for most services that implement DRM). We push for DRM-free, and most content providers' deals accept that, but it doesn't free us of obligations already in place.
I recently read some information about how iTunes/FairPlay approaches DRM, and didn't expect to see that the server actually serves the files without any protection.
The quote in this answer seems to capture the spirit of the issue.
The goal should simply be to "keep honest people honest". If we go further than this, only two things happen:
1. We fight a battle we cannot win. Those who want to cheat will succeed.
2. We hurt the honest users of our product by making it more difficult to use.
I don't see any impact on honest users here; files would be tied to the user regardless of whether this happens client side or server side. It does, however, give another chance to those in point 1.
An extra bit of info: client environment is adobe air, multiple content types involved (music, video, flash apps, images).
So, is it reasonable to do what iTunes' FairPlay does and protect the media client side?
Note: I think unbreakable DRM is an unsolvable problem, and as with most people looking for an answer to this, the need for it relates to its already being in a contract with content providers ... along the lines of "reasonable best effort".
I think you might be missing something here. Users hate, hate, hate, HATE DRM. That's why no media company ever gets any traction when they try to use it.
The kicker here is that the contract says "reasonable best effort", and I haven't the faintest idea of what that will mean in a court of law.
What you want to do is make your client happy with the DRM you put on. I don't know what your client thinks DRM is, can do, costs in resources, or if your client is actually aware that DRM can be really annoying. You would have to answer that. You can try to educate the client, but that could be seen as trying to explain away substandard work.
If the client is not happy, the next fallback position is to get paid without litigation, and for that to happen, the contract has to be reasonably clear. Unfortunately, "reasonable best effort" isn't clear, so you might wind up in court. You may be able to renegotiate parts of the contract in the client's favor, or you may not.
If all else fails, you hope to win the court case.
I am not a lawyer, and this is not legal advice. I do see this as more of a question of expectations and possible legal interpretation than a technical question. I don't think we can help you here. You should consult with a lawyer who specializes in this sort of thing, and I don't even know what speciality to recommend. If you're in the US, call your local Bar Association and ask for a referral.
I don't see any impact on honest users here; files would be tied to the user regardless of whether this happens client side or server side. It does, however, give another chance to those in point 1.
Files being tied to the user requires some method of verifying that there is a user. What happens when your verification server goes down (or is discontinued, as Wal-Mart did)?
There is no level of DRM that doesn't affect at least some "honest users".
Data can be copied
As long as client hardware, standalone, cannot distinguish between a "good" and a "bad" copy, you will end up limiting all general copies and copy mechanisms. Most DRM companies deal with this fact by telling me how much this technology sets me free. Almost as if people would start to believe it when they hear the same thing often enough...
Code can't be protected on the client. Protecting code on the server is a largely solved problem; protecting code on the client isn't. All current approaches come with stringent restrictions.
Impact works in subtle ways. At the very least, you have the additional cost of implementing client-side DRM (and all the follow-up costs, including the horde of "DMCA"-shouting lawyer gorillas). It is hard to prove that you will offset this cost with increased revenue.
It's not just about code and crypto. Once you implement client-side DRM, you unleash a chain of events in Marketing, Public Relations and Legal. As long as those don't end up alienating users, you don't need to bother.
To answer the question "is it reasonable", you have to be clear when you use the word "protect" what you're trying to protect against...
For example, are you trying to prevent:
authorized users from using their downloaded content via your app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
authorized users from using their downloaded content via any app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
unauthorized users from using content received from authorized users via your app?
unauthorized users from using content received from authorized users via any app?
known users from accessing unpurchased/unauthorized content from the media library on your server via your app?
known users from accessing unpurchased/unauthorized content from the media library on your server via any app?
unknown users from accessing the media library on your server via your app?
unknown users from accessing the media library on your server via any app?
etc...
"Any app" in the above can include things like:
other player programs designed to interoperate/cooperate with your site (e.g. for flickr)
programs designed to convert content to other formats, possibly non-DRM formats
hostile programs designed to strip the protection entirely
From the article you linked, you can start to see some of the possible limitations of applying the DRM client-side...
The third, originally used in PyMusique, a Linux client for the iTunes Store, pretends to be iTunes. It requested songs from Apple's servers and then downloaded the purchased songs without locking them, as iTunes would.
The fourth, used in FairKeys, also pretends to be iTunes; it requests a user's keys from Apple's servers and then uses these keys to unlock existing purchased songs.
Neither of these approaches required breaking the DRM being applied, or even hacking any of the products involved; they could be done simply by passively observing the protocols involved, and then imitating them.
So the question becomes: are you trying to protect against these kinds of attack?
If yes, then client-applied DRM is not reasonable.
If no (for example, you're only concerned about people using your app, like Apple/iTunes does), then it might be.
(Repeat this process for every situation you can think of. If the answer is always either "client-applied DRM will protect me" or "I'm not trying to protect against this situation", then using client-applied DRM is reasonable.)
Note that for the last four of my examples, while DRM would protect against those situations as a side-effect, it's not the best place to enforce those restrictions. Those kinds of restrictions are best applied on the server in the login/authorization process.
If the server serves the content without protection, it's because the encryption is per-client.
That being said, Wireshark will foil your best-laid plans.
Encryption alone is usually just as good as sending a boolean telling you if you're allowed to use the content, since the bypass is usually just changing the input/output to one encryption API call...
You want to use heavy binary obfuscation on the client side if you want the protection to literally hold for more than 5 minutes. Using decryption on the client side, make sure the data cannot be replayed and that the only way to bypass the system is to reverse engineer the entire binary protection scheme. Properly done, this will stop all the kids.
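As a rough illustration of the per-client encryption idea (the key handling and names here are assumed, not taken from the question): each client gets its own key, and AES-GCM picks a fresh nonce for every message, so a captured payload can't simply be resent unchanged, provided the receiver also tracks which nonces it has already accepted.

```swift
import CryptoKit
import Foundation

// Sketch only: per-client content encryption. The combined payload is
// nonce + ciphertext + authentication tag; replay protection additionally
// requires the receiver to remember nonces (or use counters) it has seen.
func sealContent(_ content: Data, clientKey: SymmetricKey) throws -> Data {
    let sealed = try AES.GCM.seal(content, using: clientKey)
    guard let payload = sealed.combined else { throw CocoaError(.coderInvalidValue) }
    return payload
}

func openContent(_ payload: Data, clientKey: SymmetricKey) throws -> Data {
    let box = try AES.GCM.SealedBox(combined: payload)
    return try AES.GCM.open(box, using: clientKey)
}
```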
On another note, if this is a product to be run on an operating system, don't use processor-specific or operating-system-specific anomalies such as the Windows PEB/TEB/syscalls and processor bugs; those will only make the program even less portable than DRM already makes it.
Oh and to answer the question title: No. It's a waste of time and money, and will make your product not work on my hardened Linux system.
