I was away from the computer for a few days, and when I got back to work I found some very strange activity on my app's requests graph.
I had nothing running the whole weekend.
It looks like something is polling the app every 3 seconds.
Do you know what it could be, or what I should check?
You can go to your Admin Console and from there check the logs for your app
(under "Monitoring -> Logs"). This will tell you exactly which requests are coming in. Without access to your logs, that's the best I can offer from here.
Also, 0.033 requests a second works out to one request every 30 seconds (1 / 0.033 ≈ 30), not every 3.
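If you'd rather inspect those logs locally, the App Engine SDK ships a tool that can download them. A minimal sketch, assuming the Python SDK and that myapp/ is your application directory:

    # Download the last 2 days of request logs into requests.log
    appcfg.py request_logs --num_days=2 myapp/ requests.log

Each log line includes the requested path and the user agent, which should identify whatever is polling you.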
No matter what we try, all YouTube API requests we make are failing.
Since we first thought this was a propagation issue, we waited 10 minutes, then 30 minutes, then 2 hours, and by now over 24 hours, to no avail.
We have found this thread, which covers a similar issue with an iOS app, but does not correspond to our use case.
Here is a run-down of what the problem is:
Activating the "YouTube Data API v3" for our account shows as successful, and the API shows as enabled.
A POST to https://www.googleapis.com/upload/youtube/v3/videos (videos insert) consistently fails with the following error, despite the fact that we have waited hours for the API enablement to propagate:
Access Not Configured. YouTube Data API has not been used in project XXXXXXXXXXXX before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/youtube.googleapis.com/overview?project=928939889952 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Although the error does not directly point to this, our "Queries per day" quota for the YouTube Data API is showing as "0". We are not able to understand why this is showing as zero, and unfortunately, all attempts to edit it to something higher have failed, and disabling and then re-enabling the API has not solved the problem. In a completely separate project/account, this shows as "10,000" when enabling the YouTube Data API, and indeed video insert API calls work under that project.
This is a significant roadblock for us, as it prevents us from deploying our application: any help would be appreciated.
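For reference, here is a stripped-down sketch of the kind of call that fails (our real code uses the client library, but the raw request is equivalent; the token and metadata below are placeholders):

    import requests  # illustrative only

    ACCESS_TOKEN = "ya29...."  # placeholder OAuth2 token with the youtube.upload scope

    # Initiate a resumable upload session (videos.insert); a success would
    # return 200 with a Location header for the upload session.
    resp = requests.post(
        "https://www.googleapis.com/upload/youtube/v3/videos",
        params={"part": "snippet,status", "uploadType": "resumable"},
        headers={"Authorization": "Bearer " + ACCESS_TOKEN},
        json={"snippet": {"title": "Test upload"},
              "status": {"privacyStatus": "private"}},
    )
    print(resp.status_code, resp.text)  # 403 "Access Not Configured" every time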
"Access Not Configured"
actually means that you don't have permission to access the API: you have enabled the API, but you don't have any quota to use it. It's different from the error message you get when you have run out of quota.
After a strange change sometime last year, the default quota for the YouTube API is now 0. You need to request a quota extension, and it can take anywhere from a week to several months to get permission to use the API.
It took me three months. No, I don't have any idea how they expect anyone to develop a new application without any quota, or to know ahead of time that they need to apply for quota in order to start developing their application. It's quite frustrating.
This might be a totally noob question.
We migrated to AWS a week ago. We have two separate apps, call them App1 and App2. For every request that App1 receives, it makes a web service call to App2 with a read timeout of 2 seconds, so if the response isn't delivered within 2 seconds, the call is aborted. However, App2 is having problems that sometimes take its server down. The strange part is that whenever the App2 server goes down, the App1 server goes down with it, and when the App2 server comes back up, the App1 server immediately comes back up as well.
This is a weird problem. What do you guys think is happening?
Any help will be greatly appreciated.
My guess is that requests are piling up on App1 (due to increased latency) as App2 goes down, which eventually causes App1 to become unresponsive as well. I would also look into what actually happens when you abort a request after the two-second timeout. Are you actually making sure the connection is aborted? If not, you may be tying up system resources with dead connections; see the sketch below.
But the above is just guessing in the dark; I think we need more information to make more educated guesses :).
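To illustrate what I mean about the timeout and cleanup, here is a minimal sketch (Python with the requests library, purely illustrative since I don't know your stack; the App2 URL is hypothetical):

    import requests

    APP2_URL = "http://app2.internal/service"  # hypothetical endpoint

    def call_app2(payload):
        try:
            # (connect timeout, read timeout): fail fast instead of queuing up
            resp = requests.post(APP2_URL, json=payload, timeout=(1, 2))
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            # Degrade gracefully: the caller gets a quick failure instead of
            # App1 holding a thread and socket open while App2 is down.
            return None

A circuit breaker in front of that call (stop calling App2 entirely for a while after N consecutive failures) would insulate App1 even further.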
We have IIS 7 running a Classic ASP app, and I've been noticing the following issue lately. Over the course of the day, if I look at Server Node --> Worker Processes, some requests seem to pile up there. The elapsed time is something crazy like 12 hours by the end of the day. These requests all sit in the ExecuteRequestHandler stage.
There is no way anything is executing for that long, and I cannot seem to reproduce the issue. I have tried dumping w3wp.exe, using FRT, and all that good stuff, but I have some general questions:
Is there a setting that controls WHEN IIS stops a request? To be specific: in development, if I purposely design a page to be slow (i.e. update a SQL table that's locked) and then CLOSE out of the browser and monitor the requests in IIS, I see that the request still sits there for about 20 seconds before being removed. Is that 20 seconds a random interval, or can it be SET somewhere? To be clear, it's not that the page takes 20 seconds to execute; it would execute forever (in this test case), but IIS seems to give up on it 20 or so seconds after I close the browser.
Is there some way to see "orphaned" requests, i.e. requests in the app pool that nobody is waiting for anymore?
What else can I do to try and debug this? A dump of w3wp says there are client connections with an HTTP request state of HTR_READING_CLIENT_REQUEST.
I keep getting suggestions to modify IIS config settings such as AspRequestQueueMax, but every time I try looking those up in applicationHost.config I don't see them set, so either I'm looking in the wrong place, or default values are not explicitly written to the config. This raises 2 questions: a) how do you READ these config values, i.e. get the current value, and b) how do you SET them?
A Classic ASP request will keep running until the script timeout is reached, regardless of whether the client is connected or not. I believe the default is 90 seconds, but an .ASP file can override this by setting the Server.ScriptTimeout property directly (which is pretty common). If your request queue is filling up then this is likely the reason and changing the defaults will not help.
If you can edit the ASP code, you can add logic like this in potentially long-running sections:
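' Bail out early if the browser has already disconnected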
If Not Response.IsClientConnected Then Call Response.End()
You can also do a global search of your code for Server.ScriptTimeout to see where the offending timeouts are being set.
If you do want to change the default script timeout, here is where it is stored:
https://www.iis.net/configreference/system.webserver/asp/limits
To change via the IIS7 GUI go to: (web site) > (features view) > ("IIS" category) > "ASP" > expand "Limits Properties" node > "Script Time-out"
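As for reading and setting values like AspRequestQueueMax without hunting through applicationHost.config by hand: defaults are not written to the file explicitly, which is why you don't see them there. The appcmd tool can show and change the effective values. A sketch (run from an elevated prompt; I believe the old AspRequestQueueMax setting maps to the requestQueueMax attribute under asp/limits, but verify against the configuration reference linked above):

    REM Read the current (effective) ASP settings
    %windir%\system32\inetsrv\appcmd list config -section:system.webServer/asp

    REM Set the script timeout and the request queue limit
    %windir%\system32\inetsrv\appcmd set config -section:system.webServer/asp /limits.scriptTimeout:"00:01:30"
    %windir%\system32\inetsrv\appcmd set config -section:system.webServer/asp /limits.requestQueueMax:3000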
I am using Meteor on Heroku (free tier) with MongoHQ. My app is very simple right now: it loads 3-4 entries from a Collection. But when I deploy it to Heroku, I am seeing ridiculous load times (1-2 minutes). The HTML is rendered immediately. When I deploy to Meteor.com's free server, load times are a lot lower, but still about 15 seconds for 4 tiny pieces of data. I'm not seeing this whatsoever when I run locally; the app pulls data from the DB right away.
It is worth noting that I don't think it's an "idling" issue on Heroku. Even if I already have one browser window with the app open, if I use a different browser and try again, I still get 1-2 minute load times. Once the data is loaded, however, performance goes back to being great; I can read and write with no problems.
What am I missing? I'm not seeing any errors in the console, and Mongo shows several queries in the logs and shows that it is responding quickly with 4 documents, but apparently somewhere in the middle there's a traffic jam. Any help with this is greatly appreciated; if I can't get past this, Meteor is useless for my needs right now.
UPDATE: I've been watching it closely in Firebug, and it looks like the performance is largely inconsistent. Sometimes a simple refresh will take 1 minute, sometimes it will take 10 seconds. But what I've noticed is that when it's slow, it GETs the sockjs/info file, and right after that the sockjs POST is aborted (sometimes multiple times). When it runs fast, the POST and subsequent POSTs run smoothly.
Slow:
GET http://pocleaderboard.herokuapp.com/sockjs/info 200 OK 22ms
POST http://pocleaderboard.herokuapp.com/sockjs/029/su0d77fb/xhr Aborted
GET http://pocleaderboard.herokuapp.com/sockjs/info 200 OK 27ms
POST http://pocleaderboard.herokuapp.com/sockjs/132/uljqusxd/xhr Aborted
GET http://pocleaderboard.herokuapp.com/sockjs/info 200 OK 28ms
POST http://pocleaderboard.herokuapp.com/sockjs/154/kcbr6a5p/xhr Aborted
Fast(er):
GET http://pocleaderboard.herokuapp.com/sockjs/info 200 OK 1.08s
POST http://pocleaderboard.herokuapp.com/sockjs/755/xiggb555/xhr 200 OK 1.02s
Meteor loads that fast locally because it doesn't depend on your internet connection: the files can just be read from your hard drive and don't need to be downloaded.
And once the data is loaded, it's the same wherever you host, because the client (you) performs all actions on its cached copy of the Mongo database and then just waits for the server to say whether the action was alright or not.
But as for the Heroku loading times, I have no idea, sorry!
UPDATE:
Those aborted requests are the long-polls from SockJS, which Meteor uses.
Normally these polls only get aborted on a hot code push (when a file is added/changed/removed).
So it looks like either you or Heroku is writing or changing something in the app directory, because that would make Meteor initiate a hot code push.
Heroku may not support WebSockets, which means you're stuck with the slower polling approach. See this:
https://devcenter.heroku.com/articles/using-socket-io-with-node-js-on-heroku
I observe the following weird behavior. I have an Azure web role deployed to the live Azure cloud. I click "Configure" in the Azure Management Portal and change the number of instances, and the portal shows some "activity". Now I open the browser, navigate to the URL assigned to my deployment, and start refreshing the page something like once every two seconds. The page reloads fine many times, then for some time it stops reloading and the requests are rejected; after something like half a minute, requests are handled normally again.
What is happening? Is the web server temporarily stopped? How do I change the number of instances so that HTTP requests to the role are handled at all times?
When you change the configuration, your current instances might be restarted. That is probably what you ran into, and why your website didn't respond for about 30 seconds.
Please have a look at http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx and check whether it's the role restarting.
What you are doing is manual. Have you looked at the SDK for autoscaling Azure?
http://channel9.msdn.com/posts/Autoscaling-Windows-Azure-applications
Check out the demo at the 18-minute mark. It doesn't answer your question directly, but it's a much more configurable/dynamic way of scaling Azure.
Azure updates your roles one update domain at a time, so in theory you should see no downtime when updating the config (provided you have at least two instances). However, if you refresh the browser every couple of seconds, it's possible that your requests always go to the same instance due to keep-alive.
It would be interesting to know what the behavior is if you disable keep-alives for your webrole. Note that this will have a performance impact, so you'll probably want to re-enable keep-alives after the exercise.
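If you want to try that experiment, disabling keep-alives for an IIS-hosted web role is a small web.config change. A sketch (verify the element against your IIS version before relying on it):

    <configuration>
      <system.webServer>
        <!-- Disable HTTP keep-alive so successive requests can land on different instances -->
        <httpProtocol allowKeepAlive="false" />
      </system.webServer>
    </configuration>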