Is Plaid's development environment more prone to need frequent reauthentication? - plaid

I'm using Python and Plaid's development environment to download bank balances and transactions. To get the initial access tokens, I'm launching Link from the quickstart, and I can do that in both standard and update mode.
The problem I'm running into is how frequently my API calls return the ITEM_LOGIN_REQUIRED error and I have to re-authenticate. For a Regions account I've been testing, this happens a few times throughout the day. For a Pinnacle Financial Partners account, it happens almost immediately after updating the access token. As in, I can log in through Link, successfully fire an auth/get request, and by the time I send another request (e.g., balance/get), I'm already getting ITEM_LOGIN_REQUIRED again.
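For reference, here is roughly the call pattern that triggers it, trimmed down to a sketch that posts to Plaid's REST endpoints directly with requests rather than going through the client library (the credentials and token below are placeholders, not my real values):

import requests

PLAID_HOST = "https://development.plaid.com"
CREDS = {"client_id": "PLAID_CLIENT_ID", "secret": "PLAID_DEV_SECRET"}  # placeholders

def plaid_post(endpoint, access_token):
    resp = requests.post(PLAID_HOST + endpoint, json={**CREDS, "access_token": access_token})
    body = resp.json()
    if body.get("error_code") == "ITEM_LOGIN_REQUIRED":
        # This is the error that reappears within minutes of re-linking the item
        raise RuntimeError(endpoint + " needs re-authentication via Link update mode")
    return body

access_token = "access-development-..."                   # from the quickstart Link flow
auth = plaid_post("/auth/get", access_token)              # works right after linking
balances = plaid_post("/accounts/balance/get", access_token)  # often already fails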
As I'm evaluating Plaid for production use, is this frequent re-authentication atypical? Is it a known limitation of the development environment, or of specific banks even in production? I've seen that some banks (Bank of America) only work in production, but I'm hoping what I'm experiencing is just the nature of working in development. Thanks.

Development vs. Production environments are virtually identical and shouldn't impact how often you hit ITEM_LOGIN_REQUIRED.
What you're seeing is atypical, though. Unless you have multi-factor auth turned on and configured not to trust known devices, this shouldn't happen.
Assuming you don't have that configured, would you mind submitting a support ticket so Plaid Support can look into this and help figure out why it's happening?

Related

Best practice to store App Key in Laravel

I have been doing a lot of research on this and I can't seem to find a definitive answer. Obviously security is a big issue these days; major companies that invest millions into security are still getting hacked all over the place.
I work with Laravel a lot and use shared hosting with HostGator or some similar company of good repute. Laravel comes with built-in functions for encrypting database info and decrypting it for the user when requested.
However, I have a question about how secure this ACTUALLY is. If someone gets into my cPanel, my app key, which is used for encryption, is right there in front of them. Granted, my cPanel password is the one auto-generated by HostGator and it's complete gibberish with semicolons and alphanumeric strings all over, so it's not easy to guess.
But I'm trying to learn a bit more about security. If my app key in my .env file is locked securely behind my cPanel login, is Laravel's built-in encrypt() method "enough" to call an app "secure"? Are there other measures within Laravel or my hosting provider that could make it more secure than just tight passwords? Is there some sort of practice of referencing the app key from an external source that isn't located in the cPanel area, so that even if my cPanel got hacked, my app key wouldn't be in those files and get exposed?
I'm not a security expert, but there are a few points I can share from my experience working at security-conscious companies.
First, Laravel itself is fine. You can generally trust open-source software, since it's transparent and security bugs get discovered and addressed early. So you don't need to improve Laravel; just use it as is, preferably an LTS version.
cPanel, on the other hand, is a liability. You should minimize the weak points on your system, i.e. the pieces that are externally accessible. Get a VPS or a private server and access it via SSH; don't run tools like cPanel and phpMyAdmin on it. The less software you have that talks to the outside world, the less vulnerable you are to bugs in that software.
At my current company the production server can only be accessed via SSH from a single IP address: the address of the dev server. So I log in to the dev server first, and then log in from there to production. The production server denies connections from all other IPs.
If you are limited to using cPanel or something similar, consider protecting the login page with HTTP Basic Auth; some hosting providers allow that.
You also want to keep your system and software up to date. Not too new either, as the latest releases may have bugs that haven't been caught yet. Our devops prefer to stay a couple of minor versions behind, so that the community has time to test things out and get hacked for you.
That's all I know as a web dev; sure enough there are specialized tools and DDoS protection services, but that's beyond a dev's concern imo. If you just follow these steps, you should be safe. Hope that helped a bit, cheers :)

User Sessions across devices on Google Analytics Universal

I have a quick question...may sound a little straightforward but still want to throw it out there.
I am aware that typically a session is limited to a single browser and client instance.
With that said, though, say a user signs up on their mobile device and starts to do some shopping... maybe adds something to their cart and then decides that they want to complete the purchase on their desktop.
I have some people who want to call this a single session, while technically it's a new session.
Does this make any sense?
In theory this should work with Universal Analytics, at least for logged-in users (I assume that your users are logged in if they want to buy).
You can pass a client id as a parameter when you create the tracker. The client id is supposed to be formatted as a UUID, so you'd have to store it along with your real client id in your backend system and pass it to the tracker as part of the configuration JSON object (the optional third parameter in ga create). Apparently this gets retroactively applied to the running session (no written source for that, but I recently attended a conference where a Google employee said as much, so I assume this is legit).
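To make that concrete, here is a rough backend-side sketch of the same idea in Python. Note that it sends hits through the Measurement Protocol rather than the analytics.js ga('create', ...) call described above, and the property id is a placeholder, so treat it as an illustration of keeping one UUID-formatted client id per user rather than a drop-in solution:

import uuid
import requests

def client_id_for(user_record):
    # Generate one UUID-formatted client id per user and persist it, so every
    # device the user logs in from reports hits under the same client id.
    if not user_record.get("ga_client_id"):
        user_record["ga_client_id"] = str(uuid.uuid4())   # save this in your backend
    return user_record["ga_client_id"]

def send_pageview(user_record, page_path):
    # Server-side hit via the Measurement Protocol using the stored client id.
    requests.post("https://www.google-analytics.com/collect", data={
        "v": "1",                           # protocol version
        "tid": "UA-XXXXXX-Y",               # placeholder property id
        "cid": client_id_for(user_record),  # same client id across devices
        "t": "pageview",
        "dp": page_path,
    })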
So as far as data collection is concerned, UA is ready for multi-device. I frankly do not know to what extent this already works in the Analytics interface.
I recently had a glimpse at an Analytics Premium account which already had some new multi-device reports. I don't know if the fact that those reports are, at least for the moment, absent from the free version means that multi-device tracking does not work yet in the free version (those reports are along the lines of Venn diagrams for "How many users used more than one device" and the like).

Windows Azure Portal login to portal and receive error "We are having trouble logging you into the portal"

Open browser
Navigate to http://www.windowsazure.com/en-us/
Select Portal (top right)
Log in with my email address
I am redirected to the error page https://manage.windowsazure.com/Error/Login?getsupport=true&f=255&MSPPError=-2147217320 and receive the error:
"We are having trouble logging you into the portal. Please contact Customer Service for assistance."
Using IE or Chrome, incognito or not, cookies cleared or not, cache cleared or not, the problem still exists. I've also tried on multiple devices: a media centre PC, a desktop running Windows 7, an iPhone 5, an iPad 3...
Prior to November 2012 I accessed my Windows Azure subscription without a problem.
I clicked the customer service link, and the Australian number is 13 20 58. I contacted that number, explaining that I cannot access my Windows Azure subscription and that each time I log in I receive an error. They proceed to redirect me to other support teams, where I repeat my details and the problem, and they either redirect me again or provide a number to call.
In one case I was redirected to a number that no longer exists. In another I was told to raise a case on the Windows Azure portal page, the same portal page that gives me an error when logging in; when I asked for alternative options there were none.
So far I've spoken with the MSDN support team, Windows subscription support, online services, etc. and still have no resolution. In the latest call to support they said to raise the issue on the forums, so here goes.
Anyway, long story short, I have probably spent 3+ hours calling Microsoft support: explaining the problem, waiting on hold, being redirected, repeating... and still I can't access my Windows Azure subscription.
I checked on commerce.microsoft.com and there is a Windows Azure subscription associated with my email address:
Subscription-1
Windows Azure MSDN - Visual Studio Premium
Does anyone have any suggestions on how to resolve this issue?
Sometimes this just happens; wait a while and retry, or just ask Azure support on Twitter.
Editing for those who, like me, skipped reading the comments (in small font) below the OP's question: this was resolved and was due to the first reason I list below. However, it could (and has) happened in the past for other reasons as well, so I might as well keep this response here in case it helps someone else out.
Try logging at https://portal.azure.com/
manage.windowsazure.com isn't even DNS-resolvable to any website. I am not sure how you are getting that address (maybe it's from some part of the Azure IAM pipeline that hasn't been updated) and, more interestingly, how you are able to open that link. Maybe this is something available only in your region! (But I am stretching here.)
Regardless, I also tried to find other instances of similar issues, and in general I see this issue is related to cases where an account has been transitioned to Office 365.
Here an account was moved, which resulted in the creation of two accounts with different passwords; the solution was to set the Office 365 account (the new account) as a co-admin on the old account that was used to set up the Azure account.
Here the account was not provisioned correctly in Azure AD Store and had to be removed and re-created using DirSync
Here, the problem seems to be related to (the new) Account Provisioning in Azure AD.
In general, it seems this is a problem that might be hard to explain to level 1 support. You might have better mileage speaking to your organization's IT admin and having them check for any inconsistencies similar to those stated above.
Try forcing the directory in the URL like so
https://portal.azure.com/#domain.name
For example in case of MS AAD domain
https://portal.azure.com/#mycompany.onmicrosoft.com
In case of custom domain
https://portal.azure.com/#mycompany.com
Sometimes there is some odd behaviour with redirect loops or when you no longer have access to the tenant but you have selected 'last visited' in the Startup directory.
Glad to hear this was resolved by support. Since this was posted, we made a number of updates to the login process and types of accounts (incl. the addition of MFA). At TechEd we announced a new portal (video: http://channel9.msdn.com/Blogs/Windows-Azure/Azure-Preview-portal) if you want to see what is coming.

Windows Azure Caching (Preview) ErrorCode<ERRCA0017>:SubStatus<ES0006>:

I'm using the role-based caching feature for a Windows Azure web role, configured as co-located. I've followed the steps given by the Windows Azure docs for Caching (Preview), and I get the following error:
ErrorCode <ERRCA0017>:SubStatus<ES0006>:There is a temporary failure.
Please retry later. (One or more specified cache servers are
unavailable, which could be caused by busy network or servers. For
on-premises cache clusters, also verify the following conditions.
Ensure that security permission has been granted for this client
account, and check that the AppFabric Caching Service is allowed
through the firewall on all cache hosts. Also the MaxBufferSize on the
server must be greater than or equal to the serialized object size
sent from the client.). Additional Information : The client was trying
to communicate with the server: net.tcp://127.255.0.4:20010/.
I'm running everything as localhost, using the local development storage, and my cache client is in the same role as the server. I've changed many configuration attributes, but I always get that exception or something similar like "cannot connect to tcp....".
I'd appreciate some help. Thanks.
There are a couple of things which could go wrong with your application.
The very first thing is to make sure you have SDK 1.7 on your machine along with the Windows Azure Caching Services, and then verify that your references come from Windows Azure Cache (not from the Windows Server AppFabric SDK). I have seen that misconfiguration lead to such errors in the past.
Next, have you changed your dataCacheClient identifier to your ROLE name, as described in the documentation linked here? If you follow the documentation as described you should not hit any error, so for the sake of checking what could be wrong, you can create the exact same application described in that link and see if it works or not.
To get a more detailed error, be sure to increase the DataCacheFactoryConfiguration.ChannelOpenTimeout value to something longer, i.e. 2 minutes instead of the default 20 seconds, as described here. This step will help you get details about the inner exception, which may lead to the actual root cause of your problem.
We use Azure co-located caching (not in preview anymore) as our session backing store and have fairly regular outages, about once a month.
We tried using the Enterprise Library Transient Fault Handling block, but our instances still hang when caching experiences problems. I think the transient fault code would work for data caching, but for session backing there is some activity closer to the metal that we can't seem to code against.
The error codes have become more informative over the last year and go something like...
ErrorCode:SubStatus:The request timed out..
Additional Information : The client was trying to communicate with the
server: net.tcp://10.xx.xxx.xx:xxxxx/.
Our best guess so far, from experimenting and from MS support, is that each (or at least one) co-located cache role/instance needs to know about all the other instances' IPs. Since Azure can destroy and re-provision instances whenever it wants, this information sometimes fails to reach the dependent instances. This is secret sauce for Azure, but it is not a secret when our site goes down. I'm looking for any more information on this and to see how others are working around this issue.
One possible work-around: one of our talented platform administrators found that resetting IIS on the instances and scaling up two more instances seems to help the problem. This makes sense to me because it gives caching another chance to gather all the required info about the other instances. This is NOT CONFIRMED to solve the problem, but if we repeat it during the next outage it could be a valid work-around.

How to do Continuous Integration with a live website without affecting users?

I have implemented Continuous Integration using TFS Version Control and TFS Build 2010. The compiled website project gets dropped in a shared folder with a version number.
Now I have a very basic, maybe even stupid, question. When we normally deploy a website project from VS 2010 to a web server, it uploads an App_Offline.htm file to the website folder so no requests are served to the user. After the publish is completed, the App_Offline.htm file is removed. During that period of time users see an outage.
If we use CI on a live website, then how can we eliminate that outage that users see? I believe the whole point of CI is that users get to see newer features and the site is never down.
How is this accomplished? If we deploy the website project to the root folder, then existing users will be affected, and that is certainly not advisable.
I wanted to know what the recommended practice is with VS 2010, TFS 2010 Build & Version Control.
There's no real foolproof method for this; service uptime is never 100%, which is why people usually define it in 'nines'.
But if you had multiple web servers (backup, fail-over, mirror, etc.), you could roll out the update across them, so that as you update some servers, others will still be online (albeit with the old version) to serve users.
In general, only some of the largest websites have to worry so meticulously about being down for a few short minutes, so make sure you're focusing your energy in the right place ; )
Regarding taking down the site for the shortest time possible, the only way I've seen this done successfully is using multiple sites: either load balancing, or two sites on the same machine plus swapping host headers after the release/warm-up. But in most cases it's not worth the effort; releases shouldn't take down the site for more than a few seconds, during which time there should be relatively few requests. You're better off trying a few things to help your users live through a site release.
Move session out of proc.
If the user's session lives in the app pool it will be lost when a new version is released; change the config to move it into a session server or the database.
Specify a machine key for the website.
ViewState (and cookies?) is encrypted using a key that is generated when a site starts; if a site restarts due to a release, any users filling out a form will receive an invalid ViewState exception on postback. (Note: this may have other security implications.)
