I basically have two URLs: http://xyzwebsite.com (for development testing) and http://abcwebsite.com (for production). I have a simple login mechanism where a user can click the Google Plus icon to log in rather than using their username and password. I created one project for development, with its own client ID, and a separate project for production with a different client ID.
But I tested both of the URLs above with the development project's client ID and it worked fine. I am wondering: why is there any need for multiple projects in the Google API Console?
There is no particular need. A single project can have several URLs and client IDs in use.
Some reasons you might use multiple projects include:
Changing project settings in dev without worrying about breaking production
If you have a development script that gets into an endless loop or something, it might use up all of the quota, and the production app might start throwing errors
You might want clear branding on the dev app that explicitly identifies it as not being production.
Some unknown reason I can't think of.
Related
The company I work for has 3 applications; every app has its own database and login panel.
So what I want to do is, from the main app be able to log in to other apps.
These applications are totally different and have different databases. So what I'm thinking is using OAuth2, but I'm not quite sure if it is the correct path to solve this.
In Google's latest docs, they say that to test Go 1.12+ apps locally, one should just use go build.
However, this doesn't take into account all the routing, etc., that would happen in App Engine using the app.yaml config file.
I see that dev_appserver.py is still included in the SDK, but it doesn't seem to work on Windows 10.
How does one test their Go App Engine app locally with the app.yaml, i.e., as an actual emulated App Engine app?
Thank you!
On one hand, if your application consists of just the default service, I would recommend following #cerise-limón's comment suggestion. In general, it is recommended that the routing logic of the application be handled within the code (a minimal sketch of what that looks like is below). Although I'm not a Go programmer, for single-service applications that use static_files and static_dir there shouldn't be any problems when testing the application locally. You might also deploy the new version without promoting traffic to it in order to test it, as explained here.
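For reference, routing in code for a Go 1.12+ single-service app typically looks like the minimal sketch below (the handler and the greeting are illustrative). It runs locally with a plain go build/go run and unchanged on App Engine, since App Engine supplies the PORT environment variable:

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "os"
    )

    func indexHandler(w http.ResponseWriter, r *http.Request) {
        // Routing is done in code: anything other than "/" is a 404 here.
        if r.URL.Path != "/" {
            http.NotFound(w, r)
            return
        }
        fmt.Fprintln(w, "Hello, App Engine!")
    }

    func main() {
        http.HandleFunc("/", indexHandler)

        // App Engine sets PORT; fall back to 8080 for local runs.
        port := os.Getenv("PORT")
        if port == "" {
            port = "8080"
        }
        log.Fatal(http.ListenAndServe(":"+port, nil))
    }

If you use the gcloud CLI, the "deploy without promoting traffic" step mentioned above is gcloud app deploy --no-promote.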
On the other hand, if your application is distributed across multiple services and the routing is managed through the dispatch.yaml configuration file (a sample is shown after the two approaches below), you might follow one of two approaches:
Test each service locally one by one. This could be the way to go if each service has a single responsibility/functionality that could be tested in isolation from the other services. In fact, with this kind of architecture the testing procedure would be more or less the same as for single service applications.
Run all services locally at once and build your own routing layer. This option would allow you to test applications where services need to reach one another in order to fulfill the requests made to them.
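For reference, a dispatch.yaml for the multi-service case looks roughly like the following (the service names and URL patterns are illustrative); a home-grown local routing layer would replicate these rules, forwarding each request to whichever local port the matching service listens on:

    dispatch:
      # Requests under /api/ go to the "api" service.
      - url: "*/api/*"
        service: api

      # Everything else goes to the default service.
      - url: "*/*"
        service: default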
Another approach that is widely used is to have a separate project for development purposes, where you could just deploy the application and observe its behavior in the App Engine environment. For applications with highly coupled services this would be the easiest option, but it largely depends on your budget.
I have some Web API applications that use OWIN for authentication. Currently they are hooked up to Google and Facebook. I have them installed in multiple environments (local, dev, test, etc.). Recently, ALL of my applications in my development environment started failing. When trying to authenticate, I would get back an "access_denied" response. The URL would look like this:
https://{mydevserver}/{mywebapiapp}/#error=access_denied
The same code base works locally as well as in my test environment.
I tried using the same project (just adding redirect URIs and origins) as well as creating a new project.
I also updated my test environment to use the dev project (ID and secret).
Nothing seems to have changed on the server recently, but the problem appears to be environment-specific (because multiple applications are affected, as well as multiple providers).
Are there any logging techniques I can use to drill down to a more detailed error message? Any tips or hints for what to try next?
The fix was a bit of an odd one: I had to log into my server, open up a browser, and connect to a web page (any page). After doing so, it started working again.
I have implemented Continuous Integration using TFS Version Control and TFS Build 2010. The compiled website project gets dropped in a shared folder with a version number.
Now I have a very basic, maybe even stupid, question. When we normally deploy a website project from VS 2010 to a web server, it uploads an App_Offline.htm file to the website folder so that no requests are served to users. After the publish completes, the App_Offline.htm file is removed. During that period of time users see an outage.
If we use CI on a live website, then how can we eliminate the outage that users see? I believe the whole point of CI is that users get to see newer features and the site is never down.
How is this accomplished? If we deploy the website project to the root folder, existing users will be affected, and that is certainly not advisable.
I wanted to know the recommended practice with VS 2010 and TFS 2010 Build & Version Control.
There's no real foolproof method for this; service uptime is never 100%, which is why people usually measure it in 'nines'.
But if you had multiple web servers (backup, fail-over, mirror, etc.), you could roll out the update across them, so that as you update some servers, others would still be online (albeit with the old version) to serve users.
In general, only some of the largest websites have to worry so meticulously about being down for a few short minutes, so make sure you're focusing your energy in the right place ;)
Regarding taking the site down for the shortest time possible, the only way I've seen this done successfully is using multiple sites: either load balancing, or two sites on the same machine and swapping host headers after the release/warm-up. But in most cases it's not worth the effort; releases shouldn't take down the site for more than a few seconds, during which there should be relatively few requests. You're better off trying a few things that help your users live through a site release:
Move session out of proc.
If the user's session lives in the app pool, it will be lost when a new version is released; change the config to move it into a session state server or the database (see the first config sketch below).
Specify a machine key for the website
ViewState (and cookies?) are encrypted using a key that is generated when a site starts; if the site restarts due to a release, any users filling out a form will receive an invalid viewstate exception on postback (see the second sketch below). (Note: this may have other security implications.)
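For reference, both tips come down to web.config changes. A minimal sketch of moving session state out of proc, assuming an ASP.NET State Service host (the server name and port are placeholders; mode="SQLServer" with a sqlConnectionString stores session in a database instead):

    <system.web>
      <!-- Out-of-proc session state survives app pool recycles during a release. -->
      <sessionState mode="StateServer"
                    stateConnectionString="tcpip=MySessionServer:42424"
                    timeout="20" />
    </system.web>

And a sketch of pinning the machine key so viewstate stays valid across restarts (the key values are placeholders; generate real ones rather than copying these):

    <system.web>
      <!-- A fixed machineKey means viewstate is not invalidated by an app restart. -->
      <machineKey validationKey="...64+ generated hex characters..."
                  decryptionKey="...48 generated hex characters..."
                  validation="SHA1"
                  decryption="AES" />
    </system.web>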
We are looking at a standard way of configuring the various "endpoints" of our application. Our application is a distributed system with Windows Desktop applications, Windows Server "services" and databases.
We currently configure each piece using XML files. This is getting a little out of hand as we work with larger customers who can have dozens of servers running our application and hundreds of desktop clients.
Can anyone recommend a Microsoft technology, or a third-party one, that would allow us to centralize all that configuration information and manage it in one place for all our applications? Any changes would be "pushed" to the endpoint(s) that are interested.
For example, if we were to change the login for one of our databases, we would make that change on the database, then reflect it in our centralized system. Following that last step, any service that needs to connect to the database would be notified of the change (and potentially receive the new data). How and what each endpoint does with that information is outside the scope of the system.
Our primary business is not "Centralized Configuration Services". We are a GIS company that provides solutions for various utilities worldwide.
I've done a couple of things to give myself this functionality over the years. I build enterprise applications that may be distributed across many servers, and I don't want to bury config settings in each service's config file or each web server's web.config file.
For application-specific stuff I usually create an application settings table in the app's database. The table only has two fields: SettingName and SettingValue. I then write a web or WCF service whose sole function is to retrieve these settings. It exposes a function called GetSetting where you pass SettingName, and it returns SettingValue, or an empty string if your setting is not found. This way I can store all application settings for all components of the application in one spot. Maintenance and troubleshooting for this is really easy; I'm not hunting through scads of config files spread across a dozen web and app servers.
For larger-scale apps I might create a separate AppSettings database, where I add a new field, ApplicationName, to the table mentioned above. My web or WCF service for this approach has the same method call (GetSetting), only at this scope you pass ApplicationName and SettingName, and it returns SettingValue or an empty string. A sketch of the table is below.
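As a sketch, the larger-scale table might look like the following (the column types and sizes are assumptions; the single-application variant simply drops ApplicationName):

    -- Central settings table for all applications in the shop.
    CREATE TABLE AppSettings
    (
        ApplicationName NVARCHAR(128) NOT NULL,
        SettingName     NVARCHAR(128) NOT NULL,
        SettingValue    NVARCHAR(MAX) NULL,
        CONSTRAINT PK_AppSettings PRIMARY KEY (ApplicationName, SettingName)
    );

    -- GetSetting(ApplicationName, SettingName) reduces to a lookup like this,
    -- with the service returning an empty string when no row matches:
    SELECT SettingValue
    FROM AppSettings
    WHERE ApplicationName = @ApplicationName
      AND SettingName = @SettingName;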
Doing either of these things allows you to centralize all app settings for any size application or IT shop. It has worked really well for us.
You could use RSS together with BitTorrent to distribute changes; see Wikipedia. It is not MS-specific, however, but it should provide the flexibility you need: a configuration server holding the configuration and providing the feeds needed to configure the clients and possibly the servers.
Any VCS through a secure channel?
For example, git over ssh (both available in Cygwin).
I think the first step is to have the secure channel (if you want the push ability; pulling might be different).
As for managing the "versions" in different "branches", what's better than a version control system?
As for the Microsoft requirement, well, the Microsoft software that exists in that area would suck pretty badly in your case (as in, not the best tool for the job).