I'm working on an Ajax-heavy web app, and we're getting complaints of flaky behavior in situations where the user has an iffy network connection. As a first step in dealing with the issue, we'd like to add a network status widget to the top right corner of the affected pages.
The simplest version would be to have a script ping the server via Ajax every n seconds and show a green light/red light depending on whether or not it succeeded; that should be pretty easy to implement. However, is there an available widget that does something like this, possibly with a more sophisticated or informative approach? My initial Google searches haven't turned up anything, so I'm checking in here to see if anybody knows of any good existing solutions to this problem.
Some links that might help.
Detect that the Internet connection is offline? - This question might prove helpful if you decide to create your own, or it may even have the answer outright; there are many helpful answers.
How to Detect if Your Server Is Down When Making jQuery Ajax Calls - Might also prove helpful for a homegrown solution.
Check if Internet Connection Exists with Javascript? - More good suggestions for a simple AJAX solution.
Detecting offline status in HTML 5 - might be helpful.
Based on my findings I think your simple AJAX solution described in the question would work best. I didn't see any established widgets out there. Nevertheless, I would still review some of the suggestions/answers in the links as there were many clever solutions for different situations.
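For what it's worth, a minimal sketch of the polling approach you describe might look something like this (jQuery, with a hypothetical lightweight /ping endpoint and a #network-status element; the interval and timeout are just example values):

    // Minimal network-status poller (sketch). Assumes a lightweight /ping
    // endpoint on the server and a #network-status element in the page.
    var POLL_INTERVAL_MS = 5000;

    function checkConnection() {
        $.ajax({
            url: '/ping',
            type: 'GET',
            cache: false,   // avoid a cached response masking a dead connection
            timeout: 3000   // treat a very slow response as "down" for the widget
        })
        .done(function () {
            $('#network-status').removeClass('status-down').addClass('status-up');
        })
        .fail(function () {
            $('#network-status').removeClass('status-up').addClass('status-down');
        });
    }

    setInterval(checkConnection, POLL_INTERVAL_MS);
    checkConnection(); // show a status immediately on page load

You could also listen for the HTML5 online/offline events mentioned in the last link as a cheap first signal, and use the ping only to confirm that your own server is actually reachable.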
I'm reading quite a bit about http2's server-push. Also did some experimenting (on a beginner's level)...
Well, my question is: does it make sense to server-push woff2 web fonts (since not every browser uses them)? And is there a method to push the correct font format if it's not already in the cache?
Zach points out how important it is to have a fast font delivery solution, and CSS-Tricks (Chris Coyier) has a great method for doing it in a cache-aware way...
Thank you!
david
Well, that's an interesting question alright. The answer is: no, you should not do this. But the reason is a little different than you might think...
For reasons that are a bit cryptic, fonts are always requested without credentials (basically cookies). For most browsers (Edge being the exception), this means the browser opens another connection for that request, and this matters because HTTP/2 pushes are tied to the connection. So if you push a resource on one connection and the browser goes to fetch a resource over another connection, it will not use the pushed resource (you do not push directly into the HTTP cache, as you might think).
This, and lots of other HTTP/2 Push trickiness and edge cases were discussed by Jake Archibald in his excellent HTTP/2 push is tougher than I thought article.
But this raises the question of how you could decide which format to push even if that weren't an issue, or if you wanted to send different image formats, for example (those would be on the same connection). Other than looking at the User-Agent and guessing based on that, there is no way for you to know what the browser supports.
There is a new set of Client Hints HTTP headers currently being proposed, which aims to let the browser indicate device specifics. It is currently more concerned with image size and density, but could in theory also cover which file formats are supported.
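For image formats specifically there is already one usable signal: browsers that support WebP generally advertise it in the Accept request header (Chrome does, for example), so the server can negotiate on that; fonts, unfortunately, have no equivalent. A rough Node.js sketch, with hypothetical file names and route:

    // Rough sketch: choose an image format based on the Accept header.
    // File names and the /hero-image route are hypothetical.
    const http = require('http');
    const fs = require('fs');

    http.createServer((req, res) => {
        if (req.url === '/hero-image') {
            const acceptsWebp = (req.headers.accept || '').includes('image/webp');
            const file = acceptsWebp ? 'hero.webp' : 'hero.jpg';
            const type = acceptsWebp ? 'image/webp' : 'image/jpeg';
            // Vary on Accept so caches keep the two variants apart.
            res.writeHead(200, { 'Content-Type': type, 'Vary': 'Accept' });
            fs.createReadStream(file).pipe(res);
        } else {
            res.writeHead(404);
            res.end();
        }
    }).listen(8080);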
I was asked this question once at an interview:
"Suppose you own a website where the server is at some remote location. One day, some user calls/emails you saying the site is abominably slow. How would you identify why the site is slow? Also, when you check the website yourself as any user would (using your browser), the site behaves just fine."
I could think of only one thing (which was shot down):
Check the server logs to analyse incoming traffic. Maybe a DoS attack or exceptionally high traffic. Interviewer told me to assume the server has normal traffic and no DoS.
I was kind of lost because I had never thought of this problem. I have almost no idea how running a server/website works. So if someone could highlight a few approaches, it would be nice.
While googling around, I could find only this relevant, wonderful article. That article is kind of too technical for me now, but I'm slowly breaking it down and understanding it.
Since you already said when you check the site yourself the speed is fine, this means that (at least for the pages you checked) there is nothing wrong with the server and it can serve those pages at a good speed. What you should be figuring out at this point is what the difference is between you and the user that reports your site is slow. It might be a lot of different things:
Is the user using a slow network connection (mobile for example)?
Does the user experience the same problems with other websites hosted by the same web host? If so, this could indicate a network problem. Normally it could also indicate a resource problem at the web server, but in that case the site would also be slow for you.
If neither of the above leads to an answer, you could assume that the connection to the server and the server itself are fine. This means the problem must be on the user's device. Find out which browser/OS they use and try to replicate the problem. If that fails, find out whether they use any antivirus or similar software that might cause problems.
This is a great tool for measuring the speed of web pages, and it tells you what makes them slow: https://developers.google.com/speed/pagespeed/insights
I think one important thing missing from the above answers is server location, which can play a vital role in web performance.
When someone says it takes a long time to open a web page, that usually means high latency, and high latency can be caused by the server's location.
Let's assume that you, as the owner of the site, are located close to the server, so you see low latency.
But if the client is on the other side of the world, latency increases drastically, and hence performance feels slow.
Another factor is caching, which also drastically affects latency.
Taking Facebook as an example: they have servers all over the world to reduce latency (and to gain several other advantages), and they use a huge caching system to cache their hot data (trending topics), whereas cold data (old data) is stored on hard disk, so it takes longer to load an older photo or post.
So a user might complain about slowness simply because they were trying to load some cold data.
I can think of these few reasons (first two are already mentioned above):
High Latency due to location of client
Server memory might need to be increased
Number of service calls from the page.
A service the page depends on could have been down at the time of the complaint, preventing the page from loading.
The server load might be too high at the time of the poor experience. The server might need to increase the resources (e.g. adding another server/web server to the cluster).
Check if there was any background job running on the server at that time.
It is important to check the logs and the schedules of batch jobs to determine what was running at that time.
Hope this helps.
Normally the user takes the page loading time as the measure of whether the site is slow. But if you really want to know what is taking the most time, you can open the browser's developer tools by pressing F12. If your browser is Chrome, click on the Network tab and see what calls your application is making and which take the longest. If you are using Firefox, you need to install Firebug; once you have that, again press F12 and click on Net.
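If you would rather collect those numbers from real users (who won't open the debugger for you), the Navigation Timing API exposes roughly the same breakdown inside the page. A small sketch; the /perf-report endpoint is hypothetical:

    // Sketch: report basic page-load timings from a real user's browser.
    // The /perf-report endpoint is hypothetical.
    window.addEventListener('load', function () {
        // Wait a tick so loadEventEnd has been filled in.
        setTimeout(function () {
            var t = window.performance && window.performance.timing;
            if (!t) { return; } // older browsers without Navigation Timing
            var report = {
                dns:      t.domainLookupEnd - t.domainLookupStart,
                connect:  t.connectEnd - t.connectStart,
                ttfb:     t.responseStart - t.requestStart,
                download: t.responseEnd - t.responseStart,
                domReady: t.domContentLoadedEventEnd - t.navigationStart,
                fullLoad: t.loadEventEnd - t.navigationStart
            };
            // Send it somewhere you can compare the slow user against everyone else.
            if (navigator.sendBeacon) {
                navigator.sendBeacon('/perf-report', JSON.stringify(report));
            } else {
                console.log(report);
            }
        }, 0);
    });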
One reason could be that the user's role is different from yours. You might, for example, have administrator privileges (something like a super-user role), and the code might simply allow everything for that role, meaning it doesn't do much conditional checking of what is allowed and what is not. Sometimes it is a considerable overhead to fetch all of a user's privileges and run those checks, though of course it depends on how the authorization is implemented. That means the page might be really slow for specific roles. So find out the role of the user and see whether that is the reason.
Obviously it could be an issue with the connection of the person connecting to your site, OR it's possible it was a temporary issue and by the time you checked your site, everything was dandy. You could check your logs or ask your host whether there was an issue at the time the slowdown occurred.
This is usually a memory issue, and it can be resolved by increasing the heap size of the web server hosting the application. If the application is running on WebLogic Server, the heap size can be increased in the "setEnv" file located in the application home.
Good luck!
Michael Orebe
Though your question is quite clear, web site optimisation is a very extensive subject.
The majority of the popular web development frameworks are, for some reason, extremely processor-inefficient.
The old-fashioned way of developing n-tier web applications is still very relevant and is still considered best practice according to the W3C. If you take a little time to read the source code of the most popular web development frameworks, you will see that they run much more code at the server than is necessary.
This may seem a simple answer, but the less code you run at the server and the more code you run at the client, the faster your servers will work.
Sometimes contrasting framework code against the old-fashioned way is the best way to get an understanding of this. Here is a link to a fully working mini web application which represents W3C best practices and runs the minimum amount of code at the server and the maximum amount of code at the client: http://developersfound.com/W3C_MVC_EX.zip This codebase is also MVC-compliant.
This codebase comes with a MySQL database dump, PHP, and client-side code. To see the code in action you will need to restore the SQL dump to a MySQL instance (the dump came from MySQL 8 Community) and add the user and schema permissions found in the PHP file (conn_include.php), giving the user execute permissions on the schema.
If you contrast this codebase against the most popular web frameworks, it will really open your eyes to just how inefficient those frameworks are. The popular PHP frameworks that claim to be MVC frameworks aren't actually MVC-compliant at all, because they rely on embedding PHP tags inside HTML tags or vice versa (considered very bad practice according to the W3C). Also, most popular Node frameworks run far more code at the server than is necessary. Embedded tags also stop asynchronous calls from working properly unless the framework supports AJAX dumps, such as Yii 2.
Two of the most important rules to follow for MVC compliance are: never embed server-side tags (such as PHP tags) in HTML tags or vice versa (unless there is a very good excuse such as SEO), and religiously never write code to run at the server if it can run at the client. Also, true MVC is based on tier separation, whereas the MVC frameworks are based on code separation. True MVC compliance is very processor-efficient. Don't get me wrong, MVC frameworks are very useful for a lot of things, but if you're developing a site that is going to get millions of hits they are quite useless, or at least they will drive your cloud bills so high that it will really eat into your company's profits.
In summary, frameworks don't give you much control over what code runs at the client or the server and are very inefficient, but you can get prototypes up and running more quickly with less code.
In contrast the old fashioned way takes a bit more elbow grease but you have complete control over what runs at the server and what runs at the client.
As an additional bit of optimisation advice, avoid using pass-through queries and triggers and instead opt for stored procedures. Historically, stored procedures didn't yet exist when MVC first emerged as a paradigm, but they definitely increase the separation of concerns between the tiers and are much more processor-efficient.
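To illustrate the difference, here is a rough sketch in Node.js with the mysql2 driver; the table, the procedure name and the connection details are only placeholders:

    // Sketch: pass-through query vs. stored procedure call (Node.js, mysql2).
    // Table, procedure name and connection details are placeholders.
    const mysql = require('mysql2/promise');

    async function main() {
        const conn = await mysql.createConnection({
            host: 'localhost', user: 'app', password: 'secret', database: 'shop'
        });

        // Pass-through query: the full SQL text travels with every request.
        const [orders] = await conn.query(
            'SELECT id, total FROM orders WHERE customer_id = ?', [42]
        );

        // Stored procedure: only the call crosses the wire; the query logic
        // lives in the database tier, closer to the data.
        const [results] = await conn.query('CALL get_customer_orders(?)', [42]);

        console.log(orders, results[0]);
        await conn.end();
    }

    main().catch(console.error);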
Hope this advice helps.
I want to keep no-good scrapers (a.k.a. bad bots that by definition ignore robots.txt) that steal content and consume bandwidth off my site. At the same time, I do not want to interfere with the user experience of legitimate human users, or stop well-behaved bots (such as Googlebot) from indexing the site.
The standard method for dealing with this has already been described here: Tactics for dealing with misbehaving robots. However, the solution presented and upvoted in that thread is not what I am looking for.
Some bad bots connect through tor or botnets, which means that their IP address is ephemeral and may well belong to a human being using a compromised computer.
I've therefore been thinking about how to improve on the industry-standard method by letting the "false positives" (i.e. humans) that have their IP blacklisted get access to my website again. One idea is to stop blocking these IPs outright and instead ask them to pass a CAPTCHA before being allowed access. While I consider CAPTCHAs a PITA for legitimate users, vetting suspected bad bots with a CAPTCHA seems a better solution than blocking those IPs completely. By tracking the session of users that complete the CAPTCHA, I should be able to determine whether they are human (and should have their IP removed from the blacklist), or robots smart enough to solve a CAPTCHA, who get placed on an even blacker list.
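In rough Express-style pseudocode, the flow I have in mind looks something like this (isBlacklisted, unblockIp, looksLikeBot, escalate, verifyCaptcha and renderCaptchaForm are hypothetical helpers I would still have to write):

    // Sketch of the proposed flow. The helper functions are hypothetical.
    const express = require('express');
    const session = require('express-session');

    const app = express();
    app.use(express.urlencoded({ extended: false }));
    app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

    app.use((req, res, next) => {
        if (req.path.startsWith('/captcha')) return next(); // don't loop on the challenge page
        if (req.session.vetted) {
            // Passed the CAPTCHA earlier: keep watching the session, not the IP.
            if (looksLikeBot(req.session)) escalate(req.ip); // the "even blacker list"
            return next();
        }
        if (!isBlacklisted(req.ip)) return next(); // normal users are untouched
        res.redirect('/captcha');                  // suspected bot: challenge instead of block
    });

    app.get('/captcha', (req, res) => res.send(renderCaptchaForm()));

    app.post('/captcha', async (req, res) => {
        if (await verifyCaptcha(req.body.answer)) {
            req.session.vetted = true; // probably human (or a clever bot - hence the tracking)
            unblockIp(req.ip);         // lift the outright IP block
            return res.redirect('/');
        }
        res.redirect('/captcha');      // let them try again
    });

    app.listen(3000);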
However, before I go ahead and implement this idea, I want to ask the good people here if they foresee any problems or weaknesses (I am already aware that some CAPTCHAs have been broken - but I think I shall be able to handle that).
The question, I believe, is whether or not there are foreseeable problems with CAPTCHAs. Before I dive into that, I also want to address how you plan on catching bots to challenge them with a CAPTCHA. TOR and proxy nodes change regularly, so that IP list will need to be constantly updated. You can use Maxmind for a decent list of proxy addresses as your baseline, and you can also find services that track the addresses of all the TOR nodes. But not all bad bots come from those two vectors, so you need to find other ways of catching bots. If you add in rate limiting and spam lists, you should catch over 50% of the bad bots. Other tactics really have to be custom-built around your site.
Now to talk about problems with CAPTCHAs. First, there are services like http://deathbycaptcha.com/. I don't know if I need to elaborate on that one, but it kind of renders your approach useless. Many of the other ways people get around CAPTCHAs involve OCR software. The better the CAPTCHA is at beating OCR, the harder it is going to be on your users. Also, many CAPTCHA systems use client-side cookies that someone can solve once and then upload to all their bots.
Most famous, I think, is Karl Groves's list of 28 ways to beat CAPTCHA: http://www.karlgroves.com/2013/02/09/list-of-resources-breaking-captcha/
For full disclosure, I am a cofounder of Distil Networks, a SaaS solution to block bots. I often pitch our software as a more sophisticated system than simply using a CAPTCHA and building it yourself, so my opinion of the effectiveness of your solution is biased.
I have discovered socket.io recently, and it seems to fit perfectly my needs for a multiplayer game. From what I understand, it serves the same role for communications between client and server that jQuery does for client-side querying and animations. Is that roughly correct?
The official website is kind of informative, and I've found a few blog posts, such as this one, boasting how awesome it is. However, I've found no Wikipedia article describing it, no news items, no scholarly research, etc. So, how popular is it?
Somewhat surprisingly, I have not been able to find people complaining about its bad parts, apart from (understandable) bugs like this. For some reason I haven't found comments about how fast it is, how buggy it is, how complete it is, etc.
I would like to know what I'm getting into before diving in and learning the technology!
Note: My opinion is biased because I work on Socket.IO
We have the http://socket.io site and some wiki pages on GitHub (https://github.com/learnboost/socket.io/wiki). It's not a lot of information, but enough to get you started.
I think the main reason there isn't much information yet is that node and socket.io are relatively young. But it's really popular in the node.js community; when you want realtime communication, socket.io is usually the first module that gets suggested to you.
According to the stats of npm (the node package manager) it's quite popular (http://search.npmjs.org/), as it's in the list of most-depended-on modules. Also, if you check out the stats on the GitHub repositories, you can see that it's quite active and loved. The server has more than 2000 watchers and 220+ forks, and the client 1300+ watchers and 110+ forks. The Google group already has 1350 members. So that is not too bad for a single node.js module (if I may say so).
As for the bugs, we launched a complete rewrite of the code a couple of months ago, incorporating the lessons and feedback from Socket.IO 0.6, so there are a few leaks and bugs, but we are working hard on resolving them. I have already fixed most of the known memory leaks and they should hopefully land in socket.io 0.7.8 / 0.8.
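To give you a feel for the API before you dive in, a minimal server/client pair looks roughly like this (based on the 0.x-era API; details may change between releases):

    // Server (node.js), 0.x-era API.
    var io = require('socket.io').listen(8080);

    io.sockets.on('connection', function (socket) {
        socket.emit('news', { hello: 'world' });   // push data to this client
        socket.on('my event', function (data) {    // receive data from this client
            console.log(data);
        });
    });

    // Client (in the browser, after including /socket.io/socket.io.js):
    var socket = io.connect('http://localhost:8080');
    socket.on('news', function (data) {
        console.log(data);
        socket.emit('my event', { my: 'data' });
    });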
Hope this helps <3
First off, The Problem:
We have a Web App with a Flash front-end that talks to our ASP.NET web service via SOAP which then deals with all of our server side code (C#).
Right now, we implement a simple user sign on in our application, storing the info in our MSSQL DB.
A client has requested what I understand to be Windows authentication through our application using the currently logged in user.
So, I have been tasked with investigating this. Nobody, including myself, has any experience in this area.
I have been reading up on some basic Active Directory information, and some simple tutorials. I understand how to get access to the directory using ADSI through code. What I'm really interested in seeing is how the entire thing should be architected. I don't want to throw together a hacky solution.
Does anyone know of a good tutorial for this kind of thing or have any advice on getting started? More importantly, does this even sound viable?
I know I haven't given much information, but feel free to ask and I will provide answers.
Thanks.
Edit:
Will, to give you an idea of the scope of this, the network will include every computer in a large hospital. So yes, this is huge. Clearly I need to start small. I would like to come up with something that will work at my office first. Maybe ~10 Windows computers on a single domain. One Domain Controller.
I am also open to any good books on the subject.
If you are going to tie into Active Directory you will want to take a look at the System.DirectoryServices namespace. The implementations can vary wildly depending on your system architecture, but this should give you a good starting point.
Enjoy!