Disqus comments take a really long time to refresh

I added Disqus to my site (on localhost), and when I add comments the count doesn't refresh automatically; it takes a really long time, about 10-15 minutes. I already turned developer mode on, and it is working because it reads the number of comments, only it takes ages to read them. So when there are 5 comments and I add 10 more, it will stay on 5 for about 15 minutes. Is there something I can do about that?

The comments are cached, so no, it's not something you can control. Realtime is still immediate, so that's generally how new comments get delivered to the client.
However, I wouldn't necessarily count on it being that way forever. We do change things to improve the user experience based on feedback, whenever possible.

Related

YouTube Data API quota increase

Hi, I am writing an Android app that lets users take videos and upload them to YouTube. As of now I am not anticipating getting anywhere close to the quota, since I'm still developing the app, but I am worried that since the limit is 1,000,000 units per day, I will eventually cross it.
So how do we increase it? I noticed there is an option under the quota tab, and clicking it brings up a form, but it doesn't mention how much it will cost me to increase the limit. I also couldn't find any Google support page, so I am asking here.
Thanks
With the current quota, you can upload roughly 660 videos a day. If you get close to that number, you can fill out that form. It is a long form; you'll need two cups of coffee and perhaps more than two hours to do it. In around 48 hours they will send the result and approve it, provided your app complies with the terms of service. And it's free.
Cheers.
Edit
It is not documented anywhere, but very recently (almost since the day I answered this question) YouTube changed the Data API so that it accepts 50 videos per day, and after that only one video per 15 minutes. Because this was applied without being documented or explained by YouTube, we cannot anticipate what the limitation is going to be.
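For a rough idea of where a figure like "near 660" comes from, here is a back-of-the-envelope sketch. The ~1,600-unit cost per videos.insert call is only the approximate documented cost, and the 50-video cap is the undocumented limit described in the edit above, so treat the exact numbers as assumptions:

```python
# Back-of-the-envelope quota math (figures are approximate / assumed):
# - daily quota: 1,000,000 units (the default quoted in the question)
# - cost of one videos.insert call: ~1,600 units (approximate documented cost)

DAILY_QUOTA_UNITS = 1_000_000
UNITS_PER_UPLOAD = 1_600          # approximate cost of videos.insert

uploads_per_day = DAILY_QUOTA_UNITS // UNITS_PER_UPLOAD
print("Uploads possible per day on quota alone:", uploads_per_day)   # 625

# The undocumented limit described in the edit above caps this far sooner:
# ~50 videos per day, then roughly one video per 15 minutes.
HARD_DAILY_VIDEO_CAP = 50
print("Effective daily cap with the newer limit:",
      min(uploads_per_day, HARD_DAILY_VIDEO_CAP))                    # 50
```

So on quota units alone you land in the same ballpark as the ~660 figure above, but the newer per-day video cap is what you will actually hit first.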

Time attendance algorithm

I've recently started working on time attendance software. People use cards to check in and check out, but sometimes they check out before they check in, and then some of them realize they made a mistake and check in again. Sometimes they check in instead of checking out. I wrote an application that creates a report, and everything works fine when the mistakes are simple, but sometimes people are just people and they check in, for example, 15 times.
I know my question is kind of complex and I doubt there is an answer, but I was wondering if there is any algorithm that can detect such mistakes and produce a decent report.
Thanks in advance.
I think if you are really trying to have your software guess what the user's intent was, then you would need to base it on what the user's schedule should be and what their expected check-in/out cycle looks like.
If it's a workplace and the users are punching in their time and they work 8-hour shifts, you could try to be smart and flag check-ins 7.5-8.5 hours apart as probably a check-in that should have been a check-out. Then you could flag back-to-back check-ins 23+ hours apart as probably a missed check-out on the previous shift (see the sketch below). 16-hour differences would still probably be impossible to guess, because they could be clocking out after a double, or changing their schedule and working an earlier shift the next day.
If this was for a college building, you could probably at least say that back-to-back check-ins that occur on separate calendar days were a missed check-out.
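To make those heuristics concrete, here is a minimal sketch in Python. The event format, the labels, and the thresholds are illustrative assumptions to be tuned against the real shift schedule, not a standard algorithm:

```python
from datetime import datetime, timedelta

# Hypothetical event format: (timestamp, kind) where kind is "in" or "out".
# Thresholds follow the heuristics described above and should be tuned to
# the actual shift schedule; they are assumptions, not fixed rules.
SHIFT_LOW = timedelta(hours=7.5)      # gap that looks like a full shift
SHIFT_HIGH = timedelta(hours=8.5)
MISSED_OUT_GAP = timedelta(hours=23)  # back-to-back check-ins this far apart

def flag_suspect_punches(events):
    """Return (event, reason) pairs for punches that look like mistakes."""
    flagged = []
    events = sorted(events)  # order by timestamp
    for (t_prev, kind_prev), (t_cur, kind_cur) in zip(events, events[1:]):
        gap = t_cur - t_prev
        if kind_prev == "in" and kind_cur == "in":
            if SHIFT_LOW <= gap <= SHIFT_HIGH:
                flagged.append(((t_cur, kind_cur),
                                "check-in roughly a shift after the last check-in; "
                                "probably should have been a check-out"))
            elif gap >= MISSED_OUT_GAP:
                flagged.append(((t_prev, kind_prev),
                                "no check-out before the next day's check-in; "
                                "previous shift is likely missing a check-out"))
        elif kind_prev == "out" and kind_cur == "out" and gap < SHIFT_LOW:
            flagged.append(((t_cur, kind_cur),
                            "two check-outs in a row; one was probably meant as a check-in"))
    return flagged

# Example: a worker who forgot to press "out" at the end of an 8-hour day.
punches = [
    (datetime(2012, 5, 1, 8, 58), "in"),
    (datetime(2012, 5, 1, 17, 2), "in"),
    (datetime(2012, 5, 2, 9, 1), "in"),
]
for event, reason in flag_suspect_punches(punches):
    print(event, "->", reason)
```

Note that, exactly as described above, the 16-hour gap between the second and third punch is left alone, because there is no safe way to guess what it means.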

Heroku taking 2 seconds to load every page--including pages that simply render a single text string

No, this is NOT the "my page doesn't have any traffic and it has to be reloaded" issue.
We have 4 dynos for an alpha application. The reason we do is that each page takes over 2 seconds to load, even little things like rendering a text string (no layouts, ERB, or anything).
If I watch our logs, our longer queries report response times in the 300-700 ms range, which is far shorter than 2 seconds.
The DNS is cached, so given the collective load time, this isn't a slow-DNS issue. And that shouldn't affect subsequent page loads anyway, right?
Any thoughts on how to get to the bottom of this would be appreciated.
Here are two screenshots to show what I mean.
http://dl.dropbox.com/u/7175041/Screenshots/qo.png
http://dl.dropbox.com/u/7175041/Screenshots/qq.png
Thanks!
First thing I'd do is to switch on NewRelic Basic - it's a free performance monitor integrated with Heroku. That'll help you get a bearing on the basics of where the trouble is coming from.
I take it that you don't see similar results locally? If you don't, then skip this step, but if you do, you can also run NewRelic locally and interrogate all of your queries for response times.
I'd stay away from using things like the Benchmark library - that was my first thought in troubleshooting a speed issue, but Benchmark is necessarily going to ignore elements of your app that are outside the pure Ruby layer, and if that's where you're slow then NewRelic catches that anyway.
Finally, if all else fails, a support ticket with Heroku's team has always been extremely helpful to me. Just make sure you check the box that lets them clone your app, it makes things a lot easier for them.
Let us know what you find out - I'm curious to see what the particular gremlin is!
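Independently of the monitoring, it can also help to time the full round trip from a client and compare it with the 300-700 ms the logs report; that at least separates time spent in the dyno from DNS, connection, and routing time. A rough sketch (the URL is a placeholder, and this is just a generic HTTP timing loop, not a Heroku-specific tool):

```python
import time
import urllib.request

# Placeholder URL - substitute a page of the app that just renders a text string.
URL = "https://your-app.example.com/ping"

def time_request(url):
    """Return total wall-clock seconds for one full GET, body included."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

# The first request pays DNS/connection setup; later ones show the steady state.
samples = [time_request(URL) for _ in range(5)]
print("per-request seconds:", [format(s, ".3f") for s in samples])
print("best of 5:", format(min(samples), ".3f"), "seconds")
# If even the best of these is still ~2 s while the app logs 300-700 ms,
# the gap is outside the app code (routing, network, middleware).
```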

Reading web.config many times and performance

If I am reading one of my application settings from web.config every time one of my ASP.NET pages loads, would it be a performance issue? I'm concerned about memory too.
It's not great, but in the context of serving up a page, it's just a drop in the bucket. It's not nearly as bad as reading it over and over in a loop, hundreds of times per page view. Lots of pages do things like look up previous visit info (user preferences, cookie tracking, etc..) which usually requires opening a database connection and running a query. So hitting the config file is small potatoes.
You also have to consider how often this really happens. A thousand times per hour? Don't waste your time. A thousand per minute? Still probably not a problem (a database query would probably be a different story, though). A thousand times per second? Then you've got reason to try to optimize this.
I don't think I'd worry about it. It is a very small file, and reading from it is very fast.
If it concerns you that much, read it into an Application variable, and reference that throughout the app instead.
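The pattern behind "read it into an Application variable" is simply read-once-and-cache; in ASP.NET that would be Application state or a static field. A language-agnostic sketch of that pattern (the function names here are made up for illustration, not part of any framework):

```python
import functools

def read_setting_from_config_file(name):
    """Illustrative stand-in for the actual web.config lookup."""
    print("(hitting the config file for", name, ")")
    return "some value"

@functools.lru_cache(maxsize=None)
def app_setting(name):
    """First call reads the file; every later call returns the cached value."""
    return read_setting_from_config_file(name)

app_setting("PageSize")   # reads the config once
app_setting("PageSize")   # served from the cache, no file access
```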

Best format for displaying rendered time on a webpage

I've started to add the time taken to render a page to the footer of our internal web applications. Currently it appears like this
Rendered in 0.062 seconds
Occasionally I get rendered times like this
Rendered in 0.000 seconds
Currently it's only meant to be a guide for users to judge whether a page is quick to load or not, allowing them to quickly inform us if a page is taking 17 seconds rather than the usual 0.5. My question is: what format should the time be in? At what point should I switch to a statement such as
Rendered in less than a second
I like seeing the tenths of a second, but the second example above is of no use to anyone; in fact, it just highlights the limits of the calculation I use to find the render time. I'd rather not let the users see that at all! Any answers welcome, including whether anything like this should be on the page at all.
"Rendered instantly" sounds way better than "Rendered in less than a second".
I'm not sure there's any value in telling users how long it took for the server to render the page. It could well be worth you logging that sort of information, but they don't care.
If it takes the server 0.001 seconds to draw the page but it takes 17 seconds for them to load it (due to the network, JavaScript, page size, their rubbish PC, etc.), their perception will be the latter.
Then again, adding the render time might help you fend off enquiries about any perceived slowness with a "talk to your local network admin" response.
Given that you know the accuracy of your measurements, you could have the 0.000 text be "Rendered in less than a thousandth of a second".
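Tying the wording to the measurement accuracy is easy to do in one small formatting helper. The cut-off and phrasing below are illustrative choices based on the suggestions above, not anything standard:

```python
def format_render_time(seconds):
    """Footer text for a measured render time, avoiding the unhelpful '0.000 seconds'."""
    # 0.0005 s is the point below which a three-decimal display would show "0.000";
    # the exact cut-off and the wording are illustrative choices.
    if seconds < 0.0005:
        return "Rendered instantly"
    return "Rendered in {:.3f} seconds".format(seconds)

for t in (0.0002, 0.062, 17.2):
    print(format_render_time(t))
# Rendered instantly
# Rendered in 0.062 seconds
# Rendered in 17.200 seconds
```

This keeps the tenths and thousandths visible for normal pages while hiding only the values that fall below what the timer can meaningfully report.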
Rather than relying on your users to look at the page footer and to let you know if the value exceeds some patience threshold, it might be a better idea to log the page render times in a log file on the server. Once you have all that raw data, you can look for particular pages that tend to take longer than normal to render.
With more detailed logging, you could also measure the elapsed time of database queries or whatever external systems your web app relies on.
I think I over-emphasized that it was for the users.
I know that by enabling trace in the web.config I can get accurate information on page render times, along with times for accessing the database.
We have in the past had problems with applications running too slowly over the network. Although that's now fixed, I'm adding the label to new applications so that users are aware it is something we are taking seriously, and it's a very simple indicator for the developers.
Taking all that into account, I like "Rendered instantly", and you write a lot of sense, so I'll accept both your answer and kokos'.
Thanks
