Does using Heroku impose GDPR requirements on my app? [closed]

I am working on a small web-app as a hobby, and I would like to avoid any functionality that would trigger GDPR requirements. As such, the web-app neither collects nor processes personal data, does not set cookies (or otherwise track individual users), and also does not integrate any services that do these things.
My question is, if I deploy this app on Heroku, does Heroku do anything behind the scenes (e.g., collecting IP addresses) that would then impose GDPR requirements on my web-app?
Another way to put this: is it possible to use Heroku and have the GDPR not apply to your website (without blocking traffic from EU countries)?

The first thing to check is hosting location. When you create an app, Heroku allows you to select whether it's hosted in the US or Europe (though no more specifically than that – you just have to hope it doesn't include the UK!).
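For example, a quick sketch using the Heroku CLI (the app name here is just a placeholder):

    # create the app in Heroku's EU region rather than the default US region
    heroku create my-hobby-app --region eu

    # list available regions, and confirm where an existing app lives
    heroku regions
    heroku apps:info --app my-hobby-app

(As far as I know, the Common Runtime only offers these two broad regions; more specific placement is a paid Private Spaces feature.)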
Next, because Heroku is a managed app service, they get more access than a typical VM provider would have. You then need to read their privacy policy, which presents a problem: Heroku is owned by Salesforce.com, who have taken a belligerent, Facebook-style, head-in-the-sand denial approach to recent court verdicts in this doc. They say in there that the ECJ did not invalidate standard contractual clauses (SCCs), which is true, but not the end of the story. The ECJ said that while SCCs are valid as legal instruments, they can only be used to manage transfers to jurisdictions that uphold EU data protection and privacy standards (which, as far as the US is concerned, was shot down with the collapse of Privacy Shield), and it is deemed to be the responsibility of the service in question to substantiate this. So what you then want to know is: where is the detailed analysis of the US legal position and the audit of the US security services that Salesforce is required to conduct if the SCCs they are using are to be considered valid?
This is of course a rhetorical question: Salesforce has conducted no such audit, nor could they do so in sufficient detail, which means that SCCs are not a valid mechanism for transfers between the EU and US for any service that Salesforce runs.
That said, their privacy policy is pretty long, and I recommend you read it, though they still make reference to the now-defunct Privacy Shield and make some assertions that would concern me. I'd suggest finding out exactly what they do with data held in EU data centres, what they do with logging, and looking harder at their third-party sharing, as that's often the biggest problem area.
This isn't really the place to go further into this, so I'd recommend you read their policies, and also read the GDPR (that's not the official source, but I find it's much more usable), or find a lawyer if you want a more precise analysis. The primary focus of GDPR is on the broad principles, not implementation details, so if something seems dodgy, creepy, or overreaching, it probably is.
I apologise if this has raised more questions than it's answered!

Related

How to write User Stories for technical implementation details? [closed]

I'm trying to work in a more organised way and have started adopting user stories.
I think I have a misunderstanding of how I should use user stories for technical stuff.
Let's say I'm coding an app that gives me the ranking of my site for a certain Keyword in Google.
The user story goes like that:
As an Internet Marketer
I want to find out where my website ranks for a keyword
So I'll know whether my SEO efforts work
Now this is pretty straightforward and user-centric... However, what happens if I need to introduce proxies into the loop?
On one hand, proxies are a technical implementation detail; on the other hand, proxies are part of the Internet Marketer's domain.
How should I craft such a story?
As an Internet Marketer
I want to use Proxies when searching in Google
So we'll be able to check a lot of keywords without Google blocking us
The above scenario doesn't sound right to me... Maybe I can rewrite it to be something like:
As an Internet Marketer
I want to be able to check a lot of Keywords at a time
So it'll save me time
This sounds more right; however, what acceptance criteria can I give it? Try scraping Google 100 times in a minute? Isn't that a waste of time?
Here's another scenario. How should I craft a user story when the feature I want to implement is that a proxy can only be used once every 30 seconds? I don't have any idea how to approach this problem from a user-centric perspective...
Another thing I thought of doing is to introduce another role. Instead of centring everything on the Internet Marketer, I can say we have a role called Google Scraper, and that the Internet Marketer is related to the Google Scraper.
Now I can write a user story like:
As Google Scraper
I want to change proxies every Search
So Google won't ban me
What would you say about approaching technical implementation details like this? It could also help with breaking the system down into modules...
You don't write technical stories. User stories should meet the INVEST criteria.
Proxies do sound like an implementation detail and should be avoided. You should not be mentioning proxy servers in your story. Even if they are part of the domain, there are potentially other ways to achieve the same effect.
Instead of writing "I want to use a Proxy, so that I don't get blocked", you should write "I want to disguise my identity, so that I don't get blocked". If I were your customer, I wouldn't know why you wanted a proxy. Is it a forward, open, or reverse proxy? There are loads of uses for a proxy server. You should pick the feature that you want to exploit.
However, you shouldn't get too hung up on perfect stories. The agile manifesto says, "Individuals and interactions over processes and tools".
When writing a user story, you should also consider the 3 C's: Card, Conversation, Confirmation. Do both the customer and you understand the meaning of the story?
Does the card meet INVEST criteria? If you answered yes to both those questions then the story is fine.
User Stories should not include technical details. During Sprint Planning, technical details should be added as Delivery Team tasks nested below the User Story. These tasks should be created through discussion by the delivery team. You should not attempt to document every implementation detail under the sun, as you will reach a point of diminishing returns. Aim for 60-75 percent coverage of implementation details (tasks) for each user story, as the details may change once coding begins. Any additional details developers discover during coding can be shared and documented briefly during the daily stand-up. The User Story can remain simple and non-technical while the Delivery / Development Team fleshes out the details as nested tasks.
These tasks should be visible to developers through their Integrated Development Environment (IDE). As developers complete tasks, they can associate their checked-in code with the task in your work item tracking tool (Jira, Team Foundation Server, On-Time).

Openfire performance on EC2 [closed]

We are planning to introduce a real-time chat feature in our mobile apps. Of course, we would be going the XMPP way.
Can anybody shed some light on stats for the maximum number of concurrent users Openfire has supported on EC2 instances (Windows Server) of different sizes in the real world?
We are looking at numbers ranging from 22,500 to 75,000 concurrent users, depending on the growth patterns predicted for app downloads and user adoption of this new real-time chat feature, over the next 12 months.
From whatever googling I have done so far, it seems Openfire may not be the best bet when it comes to scaling out, so can these numbers be supported on a single EC2 instance over time? I.e., we start hosting on smaller instances and keep increasing the instance size as load demands.
ejabberd seemed to be the best option when it comes to scaling out, but since we would need Erlang skills in order to extend it, that makes ejabberd a difficult choice for us. The other alternative is Tigase, which is Java, so we could extend it much more easily, but if Openfire can work for us for the next 12 months or so by scaling up rather than scaling out, we would be happy to use it for now and see how well this new chat feature is embraced. The number one reason being ease of management.
Lastly, if you could help with links to SaaS / PaaS providers for XMPP chat + push notifications to mobile devices when the user is offline, it would be awesome. We got in touch with quickblox.com but their enterprise offerings appear to be expensive for us at the moment. We want 100% ownership and portability of our data if we go the SaaS / PaaS way.
There are several references to Openfire handling those and larger numbers of concurrent users on a single server.
There is a document on scalability from 2007 that shows 50,000 users supported on version 3.2. The current release is 3.7.1. Don't forget that that also means a much slower machine than anything you are likely to run on today.
You also have to take into account what features of XMPP you will be using, but simple messaging should be able to easily handle the numbers you are referring to.
The numbers you mention should be easily handled by ejabberd.
I am unsure as to how you want to "extend" ejabberd. Multi-user chat and messaging are handled fine by all servers and of course ejabberd. Additionally, if you are thinking of custom protocols, these can be written in your language of choice and connect to ejabberd as an XMPP component.
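To make that concrete, here is a minimal sketch of such a component, assuming Python with the slixmpp library and placeholder JID/secret/port values (ejabberd would need a matching external-component listener configured):

    # minimal external component that echoes messages back (all credentials are placeholders)
    from slixmpp.componentxmpp import ComponentXMPP

    class EchoComponent(ComponentXMPP):
        def __init__(self):
            super().__init__("bot.example.com", "shared-secret", "localhost", 5347)
            self.add_event_handler("message", self.on_message)

        def on_message(self, msg):
            # reply to ordinary chat messages routed to the component
            if msg["type"] in ("chat", "normal"):
                msg.reply("You said: %s" % msg["body"]).send()

    if __name__ == "__main__":
        comp = EchoComponent()
        comp.connect()
        comp.process(forever=True)

The same pattern works for Openfire or Tigase too, since external components are part of the XMPP standard (XEP-0114) rather than anything server-specific.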
The only thing you might miss is a web interface (which ejabberd has but it's rather limited), but then again if you expect to manage things through a web UI for an application, you will need to think again ;)
If you want to go with ejabberd, you can always get support from ProcessOne.
This is another plus for ejabberd, as it can be commercially supported if you want to / can afford it.
Android Push Notification is a good solution.
With Android-Push services, you (the Android developer) can send messages directly to the people who have installed your app. All you need to do is include a code snippet in your app and post to a specific URL to reach your app users, even if your app is inactive on their phone.
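As a rough illustration only (the endpoint, key, and payload below are made-up placeholders, not any provider's real API), the "post to a specific URL" model usually boils down to a single HTTP request from your server:

    # hypothetical push request; substitute the provider's real endpoint and fields
    import requests

    resp = requests.post(
        "https://push.example.com/api/send",    # placeholder URL
        json={
            "api_key": "YOUR_API_KEY",          # placeholder credential
            "to": "all",                        # e.g. broadcast to every install
            "message": "Hello from the server",
        },
        timeout=10,
    )
    resp.raise_for_status()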
Features:
Free
Free, unless you need an extensive number of pushes for your app. Of course, you can pay for more pushes and quicker tech support.
Easy
Extremely easy to integrate into your app
Super simple to push to the app: just send a URL request
No C2DM limit, and you don't have to have a Gmail account to use the push service
Cloud service, so there's no need to set up your own push server
Effective
Low battery and network consumption on the phone
Track user interaction and find out how users react to your push

Suspicions regarding Magento licensing [closed]

I have been doing web design for a small business in Denmark, which already has a deal with a larger company to create the final site.
In this company's proposal, I see that they charge a rather large fee for installing Magento on my client's server, and an additional fee to integrate the design.
The same company forbids my client from having FTP or similar access to the server, so they are therefore not able to install this themselves.
My question is: is resale of Magento really allowed by the licence? This company wants to charge a rather steep amount for even installing a blank version of it, with no Magento licensing included.
I have looked the larger company up, and this company does NOT have a standing licence for Magento. And even if they had one, I have a sneaking feeling that something is legally wrong here, licence-wise.
The reason I share this with you is that I have a gut feeling that I should raise some critical questions and suggest that my client use another company for their website, but I need to be certain that I'm in the right.
The IT company has no partnership with Magento/Varien, and has a somewhat tarnished reputation already...
I have mailed Magento about this, but have not had any response yet.
Your question is not entirely clear, but a company can certainly charge for installing a licensed product on behalf of the licensee; this is just a consulting or service fee. The exception would be if the licence specifically prohibits a third party from doing this, which is possible (although unlikely) if a) source code is being exposed, or b) there are other commercial sensitivities such as NDAs. But even then, that is not your risk; it's the licensee's.
As for Ubuntu, a company can again charge for installing or maintaining an Ubuntu install; again, this is consulting/service. In fact, you can SELL a copy of Ubuntu too; if someone is willing to pay for it, that is their prerogative (and they in turn can sell it themselves). You just have to provide the source and the licence, not just a compiled binary, in order to comply with the GPL.
I can understand the position of the 'large company' providing the managed hosting for the Magento build. However, I also understand your concerns.
Assuming that you are only working on the design, there is no reason why you cannot implement your design on localhost with the Magento 'demo store' products. You can then take your design along to the 'small company', get your designs signed off, archive the /skin/frontend/default/macguffin and /app/design/frontend/default/macguffin folders, hand them over to the company providing the 'managed hosting' and then collect your pay-cheque.
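A hand-over like that can be as simple as archiving the two theme folders mentioned above (a quick sketch, with paths relative to the Magento root):

    tar czf macguffin-design.tar.gz \
        skin/frontend/default/macguffin \
        app/design/frontend/default/macguffin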
By not allowing you access via FTP, the 'managed hosting' provider is ensuring that their clients have no third parties able to access any of their stuff. Furthermore, design is not that big a deal in a Magento build; there is also the payment gateway, the shipping setup, analytics, and everything else that happens at go-live. They are also taking on the responsibility of providing uptime, availability, and the aforementioned security.
You and I know that you can do all of that on a virtual-private-server and get it done in a matter of days, with lots of testing but no client liaison meetings, office overheads to pay for, an expensive project manager to explain everything to, excessive time-sheeting to keep up to date and so on.
However, the 'small company' will have reservations about allowing someone other than the 'large company' to do all of that. Given that their web presence is pivotal to the success of their business, given that they may not have management resources, given the fear of the unknown, and given a lack of in-house expertise, politically the solution they have arrived at can be considered as making business sense to them.
There is nothing wrong with the business arrangement from a legal/licensing point of view. From your point of view of getting the job done, you can do your design offline, i.e. on localhost, deliver the deliverables and collect your cheque.
If the deal with the 'large company' does not work out then, if your work is good, you will be well placed to take on the project, to charge 'freelancer' rather than 'agency' rates and build a long term relationship with the 'small company'. However, you are not there yet, your best bet is to forge a close working relationship with the 'small company' and the 'large company'. For all you know, the 'large company' may have other clients, and, if you work well with them (i.e. drop the suspicions and animosity-from-the-outset), then you will possibly get other design work from their other clients.

How do SMS gateways work? [closed]

I've been looking at systems such as txtlocal, esendex and clickatell. I need to send out a very large number of messages and ideally would like to go in at a lower level than using systems like these. Does anyone know how SMS gateways like the ones I've listed work in terms of actually sending out the messages? Will they have agreements with different carriers and be sending them out programmatically? I've tried contacting some UK carriers directly but as of yet haven't had any success getting any information from them.
Aggregators typically work by talking directly with a mobile carrier's internal SMSC using IP/X.25/frame relay and using a protocol like SMPP/CIMD to request a message send.
They will have connections to multiple networks' SMSCs so they can do least-cost routing (i.e. sending a message to a user on their home network is cheaper).
Here are some contact details for Orange/Voda.
That said, MXTelecom, as mentioned by Phill, offers a good gateway service, as does mBlox – both of whom have already done all the hard (and expensive) work for you.
Working with an aggregator is definitely worth the effort. They handle the legal contracts with the providers as well as with the auditing services. You can go directly to a provider (e.g. AT&T, etc.) and broker the deal yourself but generally speaking you'll only need that if you have very specific program/campaign needs. Coke, for example, brokered their own deal to get the four-digit shortcode for COKE (2653).
Keep in mind, when working with an aggregator like MXTelecom you'll be signing a contractual agreement with them (usually for 6 to 12 months) and it'll take between 8 and 12 weeks (in the US) to get your shortcode provisioned and set up. It's not the funnest process, IMHO.
Oh, and don't forget, they will audit your system to make sure it does what it says it will do in your campaign document.
It is also possible to create your own system (at least in the US) and use a long code. One of our original prototype systems was built with Kannel using a mobile phone tethered to an Ubuntu box. With an unlimited plan it was quite nice. Usage is related to your carrier contract so be mindful.
Per your question of how they work... They generally work via an API (HTTP or SMPP are the most common). Depending on your in/out volume, you may want to put a queue between your application and the aggregator's API.
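As a minimal sketch of that queue idea (the aggregator endpoint and parameters here are hypothetical; a real integration would use whatever HTTP or SMPP interface your aggregator documents):

    # toy producer/consumer queue in front of a hypothetical aggregator HTTP API
    import queue
    import threading
    import requests

    outbox = queue.Queue()

    def sender():
        while True:
            to, text = outbox.get()
            try:
                requests.post(
                    "https://aggregator.example.com/sms/send",   # placeholder URL
                    data={"to": to, "text": text, "user": "acct", "pass": "secret"},
                    timeout=10,
                )
            finally:
                outbox.task_done()

    threading.Thread(target=sender, daemon=True).start()

    # the application just enqueues; the worker drains at whatever rate the API allows
    outbox.put(("+447700900123", "Your order has shipped"))
    outbox.join()

In practice you would want something more durable than an in-memory queue (e.g. a message broker), so a failed send isn't lost.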
First, if you're going to do any bulk SMS messaging, you should get a short code. An aggregator will have all the necessary APIs/SDKs and documentation for you.
Try MXTelecom (AKA OpenMarket)

Using Twitter as a mechanism to remote control applications? [closed]

I was brainstorming interesting usages of Twitter and came up with the following:
An application can use it as a call home mechanism
An application that has an invalid license could broadcast its location
A software company could use it as a remote shell like interface and issue commands to shutdown, restart and to publish patches
An application can use it for heartbeat purposes
Has anyone else come up with other non-standard usages of Twitter?
I fail to see the advantage of using a proprietary, third-party chat site in place of an appropriate networking protocol.
Matthew nailed the point that all these "applications" just represent a communications protocol between twitterer and remote host, and there are lots of mature protocols you could use instead right out of the box, rather than rolling your own on twitter.
But depending on your situation, of course there could be scenarios in which Twitter is the easy way. I have written similar hacks that use e-mail as a transport mechanism for automated tasks, simply because corporate red tape doesn't permit us other, more conventional means. They can reboot machines, restart processes, post public messages, etc.
One such tool is already available for Windows: "TweetMyPC v2.0 lets you shutdown/restart/log off and lots more on your Windows PC, remotely."
I'm not sure this counts as a very practical use (a bit of fun mainly), but it certainly attracted my interest:
Twitter image encoding challenge
The idea of this challenge is to try to encode a picture into a 140 (Unicode) character Tweet. It's quite astounding how much information some of the algorithms posted there can fit into a message.
Scott Hanselman used Twitter to create an app for ordering a sandwich.
Check out his post
I think the main advantage of using Twitter in instances like this is its SMS capabilities (and the fact that they're free – whereas you can buy services that charge a monthly fee to allow you to receive SMS messages to an HTTP page or something like that).
I'd considered using it to make a little budget app for myself where I could SMS the things I'd bought to a private Twitter account. Similarly, for tracking petrol usage, I was planning on SMSing the odometer reading, cost, etc. in a certain format and capturing it at home to run statistics and stuff on it. There are limitations to it though – like you can only hook an SMS number up to one Twitter account...
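As a tiny sketch of that "certain format" idea (the format below is something I made up purely for illustration), the capture side could be as simple as a regex over the incoming tweet text:

    # parse tweets like "petrol 43210 52.30" (odometer reading, cost) -- made-up format
    import re

    PATTERN = re.compile(r"^petrol\s+(?P<odometer>\d+)\s+(?P<cost>\d+(?:\.\d{1,2})?)$")

    def parse_petrol_tweet(text):
        match = PATTERN.match(text.strip().lower())
        if not match:
            return None
        return {"odometer": int(match.group("odometer")), "cost": float(match.group("cost"))}

    print(parse_petrol_tweet("Petrol 43210 52.30"))
    # {'odometer': 43210, 'cost': 52.3}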
It's good to think outside the box, but don't be too focused on using just Twitter because it's cool.
If you were comfortable setting up sensors and such, you could get a microcontroller, hook it up to a Twitter feed, and then give it remote commands.
For instance, remote-controlled house lights. You could then just tweet "Home lights on GXSDFXV" (the garbage at the end is to prevent real tweets from turning your lights on and off).
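A rough sketch of that loop (the feed-fetching function and the lights interface are placeholders; a real Twitter client library and your microcontroller's actual interface would slot in where indicated):

    # poll a feed for command tweets containing a shared token and switch the lights
    import time

    SECRET_TOKEN = "GXSDFXV"   # keeps ordinary tweets from toggling the lights

    def fetch_latest_tweets():
        # placeholder: swap in a real Twitter/API client here
        return ["Home lights on GXSDFXV"]

    def set_lights(on):
        # placeholder: talk to the microcontroller (serial, GPIO, HTTP, ...)
        print("lights ->", "on" if on else "off")

    seen = set()
    while True:
        for tweet in fetch_latest_tweets():
            if SECRET_TOKEN in tweet and tweet not in seen:
                seen.add(tweet)
                set_lights("lights on" in tweet.lower())
        time.sleep(60)   # Twitter-style APIs are rate limited, so poll gently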
I wouldn't use Twitter in particular for transferring any private information (think about security if someone hacks the account and can shut down your corporate servers or transfer fake licenses). For that I would set up a private server which implements the Open Microblogging protocol (like identi.ca) – unless, as others have already said, there is another, more suitable protocol.
For publishing PUBLIC information (heartbeat messages can be considered that, too) I like the idea pretty much. We recently had a very successful (but unfortunately ineffective) e-petition in Germany where a Twitter account posted the number of signatures every couple of minutes.
Carsonified are using this to allow people to discover other people sitting in the same room at their conferences.
They label each chair with a tag; you then tweet that tag to an account they have, and it registers you on a floorplan of the venue. Users are coloured in on the plan according to their interests.
Clever but a bit overcomplicated for my tastes...
http://hello.carsonified.com/Home/Faq
