Practical difference between Consumable and Non-Renewable subscription - cocoa

I've been experimenting with In-App Purchases to see what's more suitable for my product.
There are clear differences between Consumable/Non-Consumable and Auto-Renewable Subscriptions.
But when it comes to Non-Renewing, the only difference I see is a semantic one. From the Apple docs: "Non-renewing subscriptions and consumable products are not automatically restored by Store Kit. Non-renewing subscriptions must be restorable, however."
So, as stated, is there any real difference between those two (for me as a developer) that I can benefit from?

This was actually not clear at all. The best resource I can point you to is Session 308 from WWDC 2012.
They don't explicitly explain the difference, but near the end you can see that things like the popup names and email formats are different. Generally, non-renewing subscriptions seem to be handled more like subscriptions from the customer's point of view.
For you as a developer, yes, there are differences, mainly to do with how the transaction receipt is handled (again, covered in that WWDC session).
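
For what it's worth, the "must be restorable" requirement usually translates into recording the purchase against your own user account when you validate the receipt, since Store Kit will not restore a non-renewing subscription for you. Below is a minimal server-side sketch, assuming Apple's documented verifyReceipt endpoint and the unified (iOS 7+) receipt format; user_id and store_purchase are hypothetical stand-ins for your own account system.

# Hedged sketch: validate a receipt with Apple's verifyReceipt service and
# record any non-renewing subscription against our own user account, so it
# can be "restored" later by asking our server rather than Store Kit.
# store_purchase is a hypothetical persistence callback.
import json
import urllib.request

VERIFY_URL = "https://buy.itunes.apple.com/verifyReceipt"  # use the sandbox host while testing

def validate_and_record(receipt_b64, user_id, store_purchase):
    payload = json.dumps({"receipt-data": receipt_b64}).encode("utf-8")
    req = urllib.request.Request(VERIFY_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)

    if result.get("status") != 0:
        return False  # receipt did not validate

    # Persist each purchased product against the user so the subscription can
    # be restored on any device from our own records.
    for item in result.get("receipt", {}).get("in_app", []):
        store_purchase(user_id, item["product_id"], item["transaction_id"])
    return True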

Verifying multiple Apple Pay merchants on the same domain

We already have a /.well-known/apple-developer-merchantid-domain-association.txt file under our domain, which is used to verify our domain to Apple in relation to a checkout.com Apple Pay integration. Now we wish to have a completely independent Apple Pay integration (i.e. a different Apple merchant) using Adyen, operating under the same domain. This means we need to verify that too, by hosting a different /.well-known/apple-developer-merchantid-domain-association.txt. How can this be done while making sure the existing checkout.com integration doesn't lose its verification?
I was hoping Apple might include some kind of header in the request signifying which merchant ID it's verifying, so that we could dynamically change what we serve based on that, but I couldn't find anything detailing the exact verification process.
I've found lots of threads on the Apple developer forums about this, but none with a conclusive answer:
https://developer.apple.com/forums/thread/718160
https://developer.apple.com/forums/thread/118725
https://developer.apple.com/forums/thread/695538
Only the last one provides any kind of answer, and it doesn't feel particularly robust: it rests on the assumption that once a domain is verified Apple will never check it again, and that assumption doesn't seem to be documented anywhere.
Are there any other possible solutions here?
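
One low-risk way to investigate the header idea above is to route the /.well-known path through a tiny app for a while and log exactly what Apple sends when it fetches the file. This is only a diagnostic sketch, not a confirmed solution: Flask is assumed, the file path is hypothetical, and nothing here guarantees Apple identifies the merchant in the request.

# Diagnostic sketch: serve the existing checkout.com association file and log
# every header of the verification request, to see whether anything in it
# identifies which merchant Apple is verifying. Paths are hypothetical.
from flask import Flask, request, send_file

app = Flask(__name__)
ASSOCIATION_FILE = "/var/www/apple-pay/checkoutcom-domain-association.txt"

@app.route("/.well-known/apple-developer-merchantid-domain-association.txt")
def apple_pay_domain_association():
    app.logger.info("Apple Pay verification request headers: %s", dict(request.headers))
    return send_file(ASSOCIATION_FILE, mimetype="text/plain")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)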

What are the steps for using historical chat data in Rasa

There's a crucial part of the process that says the best place for the chatbot to learn is from real users. What if I already have that data and would like to test the model on it?
Think of Interactive Learning, but at scale and possibly automated. Does such a feature already exist within Rasa?
Think of Interactive Learning, but at scale and possibly automated.
Are you referring to something like reinforcement learning? Unfortunately, something like that doesn't currently exist. Measuring the success of conversations is a tough problem (e.g. some users might give you positive feedback when the bot solved their problem, while others would simply leave the conversation). Something like external business metrics could do the trick (e.g. whether the user ended up buying something from you within the next 24 hours), but it's still hard. Another problem is that you probably want to have some degree of control over how your chatbot interacts with your users. Training the bot on user conversations without any double-checking could potentially lead to problems (e.g. Microsoft once had an AI trained on Twitter data, which didn't turn out well).
Rasa is offering Rasa X for learning from real conversations. The community edition is a free, closed source product which helps you monitor and annotate real user conversations quickly.
Disclaimer: I am a software engineer working at Rasa.

When to delete newly deprecated code?

I spent a month writing an elaborate payment system that handles both credit card payments and electronic fund transfers. My work was used on a production server for about a month. I was recently told by the client that he no longer wants to use the electronic fund transfer feature.
Because the way I had to interface and communicate with the credit card gateway is drastically different from the electronic fund transfer API (e.g. the CC company gives transaction responses immediately after an HTTP request, while the EFT company gives transaction responses 5 business days after an HTTP request), I spent a lot of time writing my own API to abstract common function calls like
function pay(amount, pay_method, pay_freq)
function updateRecurringSchedule(user_id, new_schedule)
etc.
Now that the client wants to abandon the EFT feature, all my work for this abstracted payments API is obsolete.
I'm deliberating over whether I should scrap my work. Here's my pro vs. con for scrapping it now:
PRO 1: Eliminate code bloat
PRO 2: New developers do not need to learn MY API. They only need to read the CC company's API
PRO 3: Because the EFT company did not handle recurring payment schedules, refunds, and validation, I wrote my own application to do it. Although the CC company's API permitted this functionality, I opted to use mine instead so that I could streamline my code. Now that EFT is out of the picture, I can delete all this confusing code and just rely on the CC company's system to manage recurring billing, payment schedules, refunds, validations, etc.
CON 1: Although I can just delete the EFT code, it still takes time to remove the entire framework that consolidates different payment systems.
CON 2: With regard to PRO 3, it takes time to build functionality that integrates the payment system more closely with the CC company.
CON 3: I feel insecure deleting all this work. I don't think I'll ever use it again. But, for some inexplicable reason, I just don't feel comfortable deleting this work "right now".
CON 4: There's also the issue of the database. If I delete my business logic code, then normalize the database (which will end up with a new db schema), it will be difficult to revive this feature because of data migration issues. Whereas if I keep the existing code against the existing database, it's more trouble for the developer to maintain, but there's no fear of losing anything.
So my question is: should I delete one month's worth of recent development? If yes, should I do it immediately, or wait X amount of time before doing so?
Additional details
I added CON 4
Using a VCS properly means never having to feel guilty about deleting code.
Delete it. No reason to keep it around. I am sure you are using a version control system, so you can always get it back on the off chance that you need it. No one likes losing a month's worth of work, but it's a sunk cost. Whether you keep it or not, you aren't getting that time back.
As I read your question, you are handling recurring payment schedules, refunds, and validations, using the code which might be deleted. The code currently works fine as far as you know.
PRO 3: Because the EFT company did not handle recurring payment schedules, refunds, and validation, I wrote my own application to do it. Although the CC company's API permitted this functionality, I opted to use mine instead so that I could streamline my code. Now that EFT is out of the picture, I can delete all this confusing code and just rely on the CC company's system to manage recurring billing, payment schedules, refunds, validations, etc.
CON 2: With regard to PRO 3, it takes time to build functionality that integrates the payment system more closely with the CC company.
I think you overlooked the issue that you might introduce bugs into a system that is currently working well. If your code to handle recurring payments, etc., is working, are you sure that it's worthwhile to throw all that over to the CC API?
It sounds like there might be some risk inherent in making these changes, which should be considered in the ROI. Also, speaking of Return On Investment, you are talking about spending paid time to rip out the EFT code, right? Otherwise, that would be another reason not to.
You should not only delete the code, you should delete the requirement that caused you to write the code. Then delete all the decisions you made as a result of the requirement.
For instance, you said you needed to abstract the interactions with the two systems because they were so different. There are no longer two systems, so delete the abstraction. Any other decisions you made because of two systems need to go.
Yes, you may wind up using some of this code again, which is why a version control system is a Good Thing. But the next time you have a requirement like this, it's likely to be a different second system, which would lead to a different abstraction.
That is, it will lead to a different abstraction if you don't tie it down to your original abstraction by keeping the old code.
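
To make the "delete the abstraction" advice concrete, here is a hedged sketch of the direction it implies; every name in it (CreditCardGateway, pay, update_recurring_schedule) is a hypothetical stand-in for the asker's code, not something taken from the question.

# Hypothetical sketch: once EFT is gone, drop the generic payments layer that
# dispatched on pay_method and call the credit-card gateway directly, letting
# the provider own recurring billing, refunds and validation (PRO 3).
class CreditCardGateway:
    """Thin wrapper around the CC provider's HTTP API (names are made up)."""

    def pay(self, amount_cents, card_token):
        # In reality this would POST to the provider and return a transaction id.
        raise NotImplementedError

    def update_recurring_schedule(self, customer_id, new_schedule):
        # Delegate recurring billing to the provider instead of in-house code.
        raise NotImplementedError

# Call sites change from
#   payments.pay(amount, pay_method="cc", pay_freq="monthly")
# to the direct, single-provider form:
def charge_customer(gateway, amount_cents, card_token):
    return gateway.pay(amount_cents, card_token)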

simple user-feedback collection service

Short: I am looking for a very simple (configuration/maintenance-wise) solution that would allow me to collect user feedback/bug reports from my apps/websites over the internet.
Long:
Right now I have a very simple web app written using ASP.NET MVC that receives HTTP POST requests at http://localhost/feedback and saves them as c:\temp\{guid}-feedback.txt. I used UltiDev HttpVpn (BTW, it's very cool) to expose this page to the internet without having to put my app in a DMZ. I collect the following information (through a feedback form in the app, or a website's feedback page): user name, e-mail, type of the message (feature request, bug report, comment), application name (hard-coded in the app that sends the feedback), and message text/comment.
About E-mail:
E-mail is not good enough, since there will be no e-mail client on most of the computers my apps run on (also, it takes too many clicks to send an e-mail).
About JIRA:
IMHO JIRA is too heavy for what I need. I might be wrong, since I have never installed/configured it myself. Does it have an HTTP POST interface (so I can put my own interface on top of it)?
.NET on Windows solution preferred
FREE is a requirement
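
For reference, the endpoint described above is only a handful of lines in most web frameworks. Here is a minimal sketch, in Python/Flask purely for illustration (the asker's actual app is ASP.NET MVC and a .NET solution is preferred); the field names simply mirror the ones listed in the question.

# Minimal illustrative sketch of the feedback endpoint described above.
import json
import uuid
from pathlib import Path
from flask import Flask, request

app = Flask(__name__)
FEEDBACK_DIR = Path(r"c:\temp")  # matches the c:\temp\{guid}-feedback.txt convention

@app.route("/feedback", methods=["POST"])
def collect_feedback():
    record = {
        "user_name": request.form.get("user_name", ""),
        "email": request.form.get("email", ""),
        "type": request.form.get("type", "comment"),  # feature request / bug report / comment
        "application": request.form.get("application", ""),
        "message": request.form.get("message", ""),
    }
    path = FEEDBACK_DIR / ("%s-feedback.txt" % uuid.uuid4())
    path.write_text(json.dumps(record, indent=2), encoding="utf-8")
    return "", 204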
In my opinion, http://www.useresponse.com is a nice alternative to SaaS services (once it's released in December 2011) that you can install on your site and customize to your satisfaction (both look and functionality).
It's commercial, though. I don't think you'll get enough functionality from free scripts (nor support and new features).
Worth checking out FogBugz. I used it many versions ago and it has changed a lot since then.
But it allows you to report app crashes / bugs, etc. through a number of mechanisms (see link for details): http://www.fogcreek.com/FogBUGZ/LearnMore.html?section=NewPlatform#hist_PullCases

Is it reasonable to protect DRM'd content client-side?

Update: this question is specifically about protecting (enciphering/obfuscating) the content client-side vs. doing it before transmission from the server. What are the pros/cons of going with an approach like iTunes', in which the files aren't enciphered/obfuscated before transmission?
As I added in my note in the original question, there are contracts in place that we need to comply with (as is the case for most services that implement DRM). We push for DRM-free, and most content-provider deals are on it, but that doesn't free us of obligations already in place.
I recently read some information regarding how iTunes/FairPlay approaches DRM, and didn't expect to see that the server actually serves the files without any protection.
The quote in this answer seems to capture the spirit of the issue.
The goal should simply be to "keep honest people honest". If we go further than this, only two things happen:
1. We fight a battle we cannot win. Those who want to cheat will succeed.
2. We hurt the honest users of our product by making it more difficult to use.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client- or server-side. It does give another chance to those in 1, though.
An extra bit of info: the client environment is Adobe AIR, with multiple content types involved (music, video, Flash apps, images).
So, is it reasonable to do what iTunes/FairPlay does and protect the media client-side?
Note: I think unbreakable DRM is an unsolvable problem, and as with most people looking for an answer to this, the need for it relates to it already being in a contract with content providers ... along the lines of "reasonable best effort".
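
For concreteness, "tying files to the user" usually means deriving a per-user content key and encrypting each file with it, whether that happens on the server before transmission or on the client after download. A minimal sketch using Python's cryptography package follows; the user secret and storage details are hypothetical, and this is not a description of FairPlay's actual scheme. Note that it only changes where the key lives: as the answers below point out, anything the client can decrypt, a determined user can eventually extract.

# Hedged sketch: derive a per-user, per-content key with HKDF and encrypt the
# media with AES-GCM, so a copied file is useless without that user's key
# material. The user secret and file handling are hypothetical.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def derive_content_key(user_secret, content_id):
    return HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=content_id.encode("utf-8"),  # bind the key to this piece of content
    ).derive(user_secret)

def encrypt_for_user(user_secret, content_id, plaintext):
    key = derive_content_key(user_secret, content_id)
    nonce = os.urandom(12)
    # Prepend the nonce so the holder of the same derived key can decrypt.
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_for_user(user_secret, content_id, blob):
    key = derive_content_key(user_secret, content_id)
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)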
I think you might be missing something here. Users hate, hate, hate, HATE DRM. That's why no media company ever gets any traction when they try to use it.
The kicker here is that the contract says "reasonable best effort", and I haven't the faintest idea of what that will mean in a court of law.
What you want to do is make your client happy with the DRM you put on. I don't know what your client thinks DRM is, can do, costs in resources, or if your client is actually aware that DRM can be really annoying. You would have to answer that. You can try to educate the client, but that could be seen as trying to explain away substandard work.
If the client is not happy, the next fallback position is to get paid without litigation, and for that to happen, the contract has to be reasonably clear. Unfortunately, "reasonable best effort" isn't clear, so you might wind up in court. You may be able to renegotiate parts of the contract in the client's favor, or you may not.
If all else fails, you hope to win the court case.
I am not a lawyer, and this is not legal advice. I do see this as more of a question of expectations and possible legal interpretation than a technical question. I don't think we can help you here. You should consult with a lawyer who specializes in this sort of thing, and I don't even know what speciality to recommend. If you're in the US, call your local Bar Association and ask for a referral.
I don't see any impact on the honest users here; files would be tied to the user regardless of whether this happens client- or server-side. It does give another chance to those in 1, though.
Files being tied to the user requires some method of verifying that there is a user. What happens when your verification server goes down (or is discontinued, as Wal-Mart did)?
There is no level of DRM that doesn't affect at least some "honest users".
Data can be copied. As long as client hardware, standalone, cannot distinguish between a "good" and a "bad" copy, you will end up limiting all general copies and copy mechanisms. Most DRM companies deal with this fact by telling me how much this technology sets me free. Almost as if people would start to believe it when they hear the same thing often enough...
Code can't be protected on the client. Protecting code on the server is a largely solved problem; protecting code on the client isn't. All current approaches come with stringent restrictions.
Impact works in subtle ways. At the very least, you have the additional cost of implementing client-side DRM (and all the follow-up costs, including the horde of "DMCA"-shouting lawyer gorillas). It is hard to prove that you will offset this cost with increased revenue.
It's not just about code and crypto. Once you implement client-side DRM, you unleash a chain of events in Marketing, Public Relations and Legal. As long as they don't stop to alienate users, you don't need to bother.
To answer the question "is it reasonable", you have to be clear when you use the word "protect" what you're trying to protect against...
For example, are you trying to prevent:
authorized users from using their downloaded content via your app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
authorized users from using their downloaded content via any app under certain circumstances (e.g. rental period expiry, copied to a different computer, etc)?
unauthorized users from using content received from authorized users via your app?
unauthorized users from using content received from authorized users via any app?
known users from accessing unpurchased/unauthorized content from the media library on your server via your app?
known users from accessing unpurchased/unauthorized content from the media library on your server via any app?
unknown users from accessing the media library on your server via your app?
unknown users from accessing the media library on your server via any app?
etc...
"Any app" in the above can include things like:
other player programs designed to interoperate/cooperate with your site (e.g. for flickr)
programs designed to convert content to other formats, possibly non-DRM formats
hostile programs designed to strip or bypass the DRM
From the article you linked, you can start to see some of the possible limitations of applying the DRM client-side...
The third, originally used in PyMusique, a Linux client for the iTunes Store, pretends to be iTunes. It requested songs from Apple's servers and then downloaded the purchased songs without locking them, as iTunes would.
The fourth, used in FairKeys, also pretends to be iTunes; it requests a user's keys from Apple's servers and then uses these keys to unlock existing purchased songs.
Neither of these approaches required breaking the DRM being applied, or even hacking any of the products involved; they could be done simply by passively observing the protocols involved, and then imitating them.
So the question becomes: are you trying to protect against these kinds of attack?
If yes, then client-applied DRM is not reasonable.
If no (for example, you're only concerned about people using your app, like Apple/iTunes does), then it might be.
(Repeat this process for every situation you can think of. If the answer is always either "client-applied DRM will protect me" or "I'm not trying to protect against this situation", then using client-applied DRM is reasonable.)
Note that for the last four of my examples, while DRM would protect against those situations as a side-effect, it's not the best place to enforce those restrictions. Those kinds of restrictions are best applied on the server in the login/authorization process.
If the server serves the content without protection, it's because the encryption is per-client.
That being said, Wireshark will foil your best-laid plans.
Encryption alone is usually no better than sending a boolean telling the client whether it's allowed to use the content, since the bypass is usually just changing the input/output of one encryption API call...
You want to use heavy binary obfuscation on the client side if you want the protection to literally hold for more than five minutes. If you decrypt on the client side, make sure the data cannot be replayed and that the only way to bypass the system is to reverse-engineer the entire binary protection scheme. Properly done, this will stop all the kids.
On another note, if this is a product to be run on an operating system, don't use processor-specific or operating-system-specific anomalies such as the Windows PEB/TEB/syscalls and processor bugs; those will only make the program even less portable than DRM already is.
Oh and to answer the question title: No. It's a waste of time and money, and will make your product not work on my hardened Linux system.