I'm trying to build a web bot, and I've been researching the fastest technologies available.
After a lot of reading, it seems the fastest options are headless browsers.
PhantomJS and HtmlUnit are the fastest ones I found.
PhantomJS seems to have good JavaScript support, while HtmlUnit's seems weaker; on the other hand, HtmlUnit is a bit faster.
In terms of performance (CPU and memory usage) I couldn't find anything about either.
Do you guys know any other technology?
I'm trying to build a really fast bot that performs a certain flow, and I will run that flow in parallel, with multiple threads doing the same thing. Ideally I want to choose a technology that is fast at navigating but also lightweight, so that I can open as many threads as possible.
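To give an idea of the kind of flow I mean, here is a minimal sketch using HtmlUnit (a Java library): one WebClient per worker thread, with JavaScript and CSS turned off for speed. The URL and thread count are placeholders, and the exact option methods can vary a little between HtmlUnit versions (newer releases moved the package to org.htmlunit).

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    import com.gargoylesoftware.htmlunit.WebClient;
    import com.gargoylesoftware.htmlunit.html.HtmlPage;

    public class BotRunner {
        static final int WORKERS = 8; // placeholder: tune to your CPU/memory budget

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(WORKERS);
            for (int i = 0; i < WORKERS; i++) {
                pool.submit(BotRunner::runFlow);
            }
            pool.shutdown();
        }

        static void runFlow() {
            // One WebClient per thread: a WebClient instance is not thread-safe.
            try (WebClient client = new WebClient()) {
                // Disabling JavaScript and CSS is what makes HtmlUnit light;
                // re-enable JavaScript if the flow depends on it.
                client.getOptions().setJavaScriptEnabled(false);
                client.getOptions().setCssEnabled(false);
                HtmlPage page = client.getPage("http://example.com"); // placeholder URL
                System.out.println(page.getTitleText());
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Turning JavaScript back on narrows the speed gap with PhantomJS considerably, so it would be worth measuring both configurations against the actual flow.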
My title pretty much says it all. I have been looking at mod_pagespeed, and it strikes me as being little more than a way to offload the work of optimization onto the server instead of the developer.
There may be some advantages to doing this, such as reducing developer time, so I'm not suggesting it is all bad. But it also strikes me as a bit of a script kiddie way to do things. Rather than learn about all those performance techniques, hey! just let the server do it!
Is mod_pagespeed something that would be good to implement on my production web application or would I be better off doing the optimization "by hand"?
Here is the original announcement.
It seems to me that it could empower the server admin to centrally optimize content created by a large set of developers. Also, it could be a good way of baking in some well-tested (by Google) best practices that might be costly to develop on your own.
YSlow, dynaTrace, HTTPWatch, Fiddler...
All of these are really good for measuring website performance and gathering statistics about it. YSlow is really cool and offers good guidelines too.
However, I am quite confused by the number of options around (though it's good that people have already invested the time to put together nice guidelines to follow, and I thank them for the great work).
Here are my questions:
How accurate are the numbers these tools show?
Which tool is best to use (one for all needs)? Or am I missing some tool that is better than all of the above?
I'm surprised that you haven't mentioned JMeter. It is free, quite easy to use, has lots of features, and is great for load testing your website.
As for question one, I'm not sure I can answer that. I'm sure that in general, the numbers these tools show are pretty accurate, but there are some catches. Take JMeter for example:
JMeter itself uses a lot of memory, and also substantial CPU time if you do some heavy load testing. That means that if you run the tool on the same machine as your website, some resources are lost, i.e. not available to the website.
Testing on the same machine does not, out of the box, take into account that the data has to be sent over the internet connection, so response times come out lower than in reality.
All in all, I think you should never blindly trust the results these tools give you, but they can give you good insight into possible bottlenecks or problems.
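To make that first catch concrete, here is a bare-bones sketch of what a load generator does in principle (JMeter does this with far more sophistication). The target URL, thread count, and request count are made-up placeholders; note that even this naive version burns a thread and a connection per in-flight request, which is exactly the CPU and memory cost described above.

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    public class MiniLoadTest {
        public static void main(String[] args) throws Exception {
            final String target = "http://localhost:8080/"; // placeholder URL
            final int threads = 20, requestsPerThread = 50; // placeholder load
            final AtomicLong totalMillis = new AtomicLong();

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            for (int t = 0; t < threads; t++) {
                pool.submit(() -> {
                    for (int i = 0; i < requestsPerThread; i++) {
                        try {
                            long start = System.currentTimeMillis();
                            HttpURLConnection conn =
                                    (HttpURLConnection) new URL(target).openConnection();
                            conn.getResponseCode(); // forces the request out
                            conn.disconnect();
                            totalMillis.addAndGet(System.currentTimeMillis() - start);
                        } catch (Exception ignored) {
                            // a real tool would count failures instead of ignoring them
                        }
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
            System.out.printf("average response: %d ms%n",
                    totalMillis.get() / (threads * requestsPerThread));
        }
    }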
YSlow is good for measuring performance for a single user. Try to keep it at grade A and it will be OK. But it doesn't actually measure performance under multiple concurrent users. For that you can use, among others, Apache JMeter. It's a good webserver/webapplication stress-test tool. So I would say: just use both YSlow (for client performance) and JMeter (for server performance).
I haven't used dynaTrace before, so I'll skip that part. The HTTP request trackers mentioned don't really measure performance; they are more like debuggers.
As far as I am concerned, I find YSlow to be really good (I have tried Fiddler too); it helps me when I need it, and I do believe it provides correct figures, so I'll keep using it going forward unless something unanimously accepted (which is difficult, because everyone has different preferences and requirements) or even better comes along. Oh, and they are right: I forgot JMeter, which definitely deserves a mention.
There is also the Speed Tracer extension for Chrome. It should be usable with any JavaScript-heavy website.
http://code.google.com/webtoolkit/speedtracer/
http://gtmetrix.com is a good free tool that analyzes your page's speed performance using Page Speed and YSlow.
Suppose I'm writing a 2d tile based MMORPG.
Furthermore suppose I hate flash.
Lastly, suppose I only need my code to run in the latest safari, latest firefox, and latest chrome.
What are the limits to what I can and can't do? (Are there examples of good game engines that only require a recent web browser?)
Look into the HTML5 canvas element: http://en.wikipedia.org/wiki/Canvas_element
The latest versions of the browsers you mention already support it.
Check out the Unity3D engine: http://www.unity3d.com
Cross-browser, cross-platform, although your users would have to download the unity browser plugin...
There's also the Raphaël JavaScript library... it does a very nice job of abstracting away a lot of the heavy lifting you'd otherwise have to do yourself! The memory footprint seems decently light as well (from my small-scale playing around with it, anyway).
For something that works for the user out of the box (without add-ins etc.), JavaScript is probably the only unified functionality that exists across all browsers.
The browser is surprisingly capable (at least Chrome is); this is something Google is actively trying to promote (see http://www.chromeexperiments.com/). Notice, however, that the experiments are often laggy or unworkable in other web browsers.
As for a list of things that are and aren't possible: that would take a fair while to compile.
Regarding a 2D tile-based game specifically, I wouldn't say it's impossible, but it may be quite difficult to create. As mentioned before, most browsers (apart from the stand-out, Google Chrome) suffer from limited resources. So resource-wise it may be difficult to implement and would definitely require a lot of forward planning.
Java applets are also possible (see the sketch below)...
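Purely as an illustration, here is a bare-bones applet that paints a tile map as flat colored squares. The tile size and map data are made-up placeholders; a real game would blit tile images and double-buffer instead.

    import java.applet.Applet;
    import java.awt.Color;
    import java.awt.Graphics;

    public class TileApplet extends Applet {
        static final int TILE = 32;            // placeholder tile size in pixels
        static final int[][] MAP = {           // placeholder map: 0 = grass, 1 = rock
            {0, 1, 0, 0},
            {1, 1, 0, 1},
            {0, 0, 1, 1},
        };

        @Override
        public void paint(Graphics g) {
            for (int row = 0; row < MAP.length; row++) {
                for (int col = 0; col < MAP[row].length; col++) {
                    g.setColor(MAP[row][col] == 0 ? Color.GREEN : Color.GRAY);
                    g.fillRect(col * TILE, row * TILE, TILE, TILE);
                    // a real game would draw a tile image here instead of a flat color
                }
            }
        }
    }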
You could also move to 3D. While it does require a plug-in (although it is being integrated into Chrome), the results are undeniable.
"O3D is an open-source web API for creating rich, interactive 3D applications in the browser." http://code.google.com/apis/o3d/. The video is quite amazing actually -- especially the live map editing (e.g. removing sprites).
I am developing a Cocoa application which communicates constantly with a web service to get up-to-date data. This considerably reduces the application's performance. The calls to the web service are made asynchronously, but the number of calls is huge.
In what ways can I improve the performance of the application? Is there a good document/write-up available that gives the best practices to follow when a Cocoa application communicates with a web service?
Thanks
You should try out Shark, which comes with the Mac OS X dev tools; it's really great for digging into your call stack, and it should allow you to limit the view to the network libraries and friends.
Yes! Apple actually has some very concise guides on performance that cover a lot of tricks and techniques, I'm sure you'll find something relevant to your own application. There may be some additional guides specific to 10.5 I haven't seen yet, but here are three I've found useful in the past.
Performance Overview
Cocoa Performance Guidelines
Cocoa Drawing Performance Guidelines
The most important thing to take away though, is that you need to use performance tools to see exactly where the bottleneck is occurring. Sometimes it may be in the place you least expect it.
I think if you use Shark you'll just find your app is blocked waiting for answers back from the server. Code split across machines is far harder to benchmark, as the standard tools can only benchmark part of the picture.
Sounds like you need to look into bundling calls up into fewer transactions... your bottleneck is almost certainly the network. What about supporting sending multiple calls as an array of calls, and the same for answers? Maybe you could buffer calls locally and only send them a few times a second as a single transaction, as in the sketch below.
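Something along these lines (sketched in Java only because it is compact; the batch endpoint and the 250 ms flush interval are assumptions, not any real API -- in Cocoa the same idea would hang off an NSTimer or a dispatch queue):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Queue individual calls and flush them as one batched request a few
    // times per second. sendBatch() stands in for whatever batch endpoint
    // the web service would have to expose.
    public class CallBatcher {
        private final List<String> pending = new ArrayList<>();
        private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();

        public CallBatcher() {
            timer.scheduleAtFixedRate(this::flush, 250, 250, TimeUnit.MILLISECONDS);
        }

        public synchronized void enqueue(String call) {
            pending.add(call); // cheap: no network traffic here
        }

        private synchronized void flush() {
            if (pending.isEmpty()) return;
            sendBatch(new ArrayList<>(pending)); // one round trip instead of N
            pending.clear();
        }

        private void sendBatch(List<String> calls) {
            // placeholder: POST the whole array to the service in one request
        }
    }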
Tony
I've been looking at ways people test their apps in order to decide where to add caching or apply some extra engineering effort, and so far httperf and a simple sesslog have been quite helpful.
What tools and tricks did you apply on your projects?
I use httperf for a high level view of performance.
Rails has a performance script built in, that uses the ruby-prof gem to analyse calls deep within the Rails stack. There is an awesome Railscast on Request Profiling using this technique.
NewRelic have some seriously cool analysis tools that give near real-time data.
They just made a "Lite" version available for free.
I use JMeter for session-based testing - it allows very fine-grained control over the pages you want to hit, the parameters to inject, the loops to go through, etc. It's great for simulating how many real users your site can handle, rather than just performance testing a set of static URLs. You can distribute tests over multiple machines quite easily by loading up the jmeter-server on computers with publicly accessible IPs. I have found some limits on the number of users/threads any one machine can throw at a server at once (it depends on the test), but JMeter has helped my team improve our app's capacity for users to 6x.
It doesn't have any fancy graphing -- I actually use my own in-house graphing built with gruff, which can do performance analysis on request times for certain pages and actions.
I'm evaluating a new open-source web page instrumentation and measurement suite called Jiffy. It's not specific to Ruby; it works for all kinds of web apps.
There's also a Jiffy Firebug Extension for rendering the metrics inside the browser.
I also suggest you look at Browser Mob for load testing.
A colleague of mine has also posted some interesting thoughts on this.