What's the best way to profile my RoR website http://www.karmabee.net?
I'm using the fb_graph gem, which is pretty slow, especially when retrieving friends lists. Twilio is also pretty slow when sending SMS messages.
So I'm not sure I could optimize those things. In any case, I need to figure out how to profile the site first.
Any ideas?
New Relic: http://newrelic.com/ It looks into your Rails app and tells you how much time each request spends on DB queries, page rendering, etc. From there you can drill down to the bottleneck and work on optimizations.
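To give an idea of the setup (a minimal sketch; the app name and license key below are placeholders you'd take from your own New Relic account):

    # Gemfile -- New Relic's Rails agent is the newrelic_rpm gem
    gem 'newrelic_rpm'

    # Then drop the config/newrelic.yml generated for your account into the app.
    # The values below are placeholders, not real keys:
    #
    #   common: &default_settings
    #     license_key: 'YOUR_LICENSE_KEY'
    #     app_name: 'karmabee'

After a bundle install and a deploy, each request (including time spent waiting on external calls like fb_graph and Twilio) starts showing up in the dashboard.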
http://www.webpagetest.org/ is handy for general page speed testing.
Chrome comes with the Audits tool (right-click, Inspect Element -> Audits tab), which lets you test any page's performance and network utilization. Firefox has an add-on, YSlow, that does something similar.
Not sure how the interaction with Twilio can be profiled...
I really like request-log-analyzer; just do:
gem install request-log-analyzer
Then on your production box you can do something like:
request-log-analyzer log/production.log
It'll tell you all sorts of things, like which controllers and actions are slow. Give it a try!
I'm trying to set up a web page with WordPress and GoDaddy hosting. I want to make it fast, because people say fast pages show up first on Google (and mobile page speed is said to be especially important). So I want a very fast page, but my knowledge isn't very advanced; I'm learning as I go.
When I test my page with PageSpeed Insights, my mobile score is about 60-70. The Insights report lists a lot of improvement suggestions below the score. I want to learn how to fix them. If you help me with one example, I will do the others myself.
Let's start with the first problem, /css?family=… (fonts.googleapis.com), which is listed under the "Eliminate render-blocking resources" topic. How do I fix it? What should I do?
Also, the "Coverage" tab shows some source code that isn't being used. For example, I am not using the easy-sheare plugin (second row in the image) on the homepage.
How do I safely remove that code from the home page? If I learn how one is done, I can correct the others myself.
The issue you are running into is something I have seen over and over again. GoDaddy and WordPress sites are generally bloated and perform poorly.
Here are some tips to improve your speed and get a better PageSpeed ranking.
Hosting: Do you need to be on GoDaddy? Most websites on GoDaddy are SLOW. GoDaddy is good for domain registration, not for hosting, and most non-technical folks don't know any better. Try Amazon Lightsail, AWS S3, Google Firebase, or Netlify. They all offer much faster page loads by reducing the initial server response time, and they are surprisingly simple to learn and deploy.
CDN: You must use a content delivery network (CDN). Check out CloudFront; it offers a free tier that works quite well.
WordPress: This is your real issue. WordPress is neither easy to build with nor easy to maintain, and you need multiple plugins to make a site perform. Ideally you would build your own site. If you have to stay on WordPress, check out image optimizers, minifiers, and cache plugins; Gumlet, WP Rocket, and ShortPixel are quite popular for improving speed.
I need to test links inside a web application. I have looked at a couple of tools (Xenu, various browser plugins, link-checker (Ruby)). Nothing quite fits my needs, which I will detail below.
I need to get past a login form
test needs to be rerun for different types of users (multiple sets of login credentials)
would like to automate this under a ci server (Jenkins)
the ability to spider the site
Does anyone have any ideas? Bonus if I can use Ruby to do this!
What you are asking for is beyond most of the test tools once you throw in the ability to spider the site.
That final requirement pushes you into the realm of hand-coding something. Using the Mechanize gem you could do all of those things, but you'll have to code a lot of the navigation of the site yourself.
Mechanize uses Nokogiri internally, so it's easy to grab all links in a page, which you could store in a database to be checked by a different thread, or some subsequent code. That said, writing a spider is not hard if you're the owner of the pages you're hitting, because you can be pretty brutal about accessing the server and let the code run at full speed without worrying about being banned for excessive bandwidth use.
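As a rough sketch of what that hand-coding might look like (the login URL, form field names, credentials, and domain below are placeholders, not anything from your app):

    require 'mechanize'

    agent = Mechanize.new

    # 1. Get past the login form (URL and field names are hypothetical)
    login_page = agent.get('https://example.com/login')
    form = login_page.form_with(action: '/session')
    form['user[email]']    = 'tester@example.com'
    form['user[password]'] = 'secret'
    agent.submit(form)

    # 2. Spider the site breadth-first, staying on the same host
    to_visit = ['https://example.com/']
    visited  = {}

    until to_visit.empty?
      url = to_visit.shift
      next if visited[url]
      visited[url] = true

      begin
        page = agent.get(url)
      rescue Mechanize::ResponseCodeError => e
        puts "BROKEN #{url} (#{e.response_code})"   # 404s, 500s, etc.
        next
      end
      next unless page.is_a?(Mechanize::Page)       # skip images, PDFs, ...

      # 3. Collect the links on the page and queue them
      page.links.each do |link|
        begin
          next if link.uri.nil?
          href = page.uri.merge(link.uri).to_s
        rescue StandardError
          next                                      # mailto:, javascript:, malformed
        end
        to_visit << href if href.start_with?('https://example.com/')
      end
    end

Rerunning it for different user types is then just a matter of looping over a list of credential pairs, and the whole script drops easily into a Jenkins job.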
I am developing a small web application using Ruby, Sinatra & HAML.
The scenario I am struggling with at the moment is something that I used to solve in PHP using Ajax and Javascript, and am not sure how to best go about doing it in Ruby (what the best practice would be, if there is a more optimized way of approaching this).
I have a screen as follows:
What I wish to happen is that when a user clicks on one of the buttons (for example, Show most popular), the system calls a function that queries the database for the relevant records and re-populates the 'Entries' box with them. I want to do this without the rest of the page reloading or anything else being affected, just the 'Entries' box. Bonus - if I can show a little "LOADING" spinner while the data is being fetched.
My research led me to a gem known as "typhoeus", which I found to be really great, but I am not sure whether it applies in this scenario (or how to implement it if it does).
Any kind of help would be greatly appreciated.
Much obliged.
It seems like pjax is what you would be looking for.
https://github.com/defunkt/jquery-pjax/tree/heroku
The author uses Sinatra in his example app. Though he does use ERB, I am sure it wouldn't cause any problems if you switched the template engine to Haml.
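For the server side, here is a rough sketch of the idea (the route, the Entry model, and the _entries partial are made-up names for illustration): the action returns only the markup for the 'Entries' box, with no layout, and pjax (or a plain jQuery $.get) swaps it into the container.

    require 'sinatra'
    require 'haml'

    # Hypothetical route: render only the 'Entries' fragment, with no layout,
    # so it can be swapped into the page without a full reload.
    get '/entries/popular' do
      @entries = Entry.most_popular   # Entry and most_popular are placeholder names
      haml :'_entries', layout: false
    end

On the client, a pjax link (or a $.get('/entries/popular', ...) that replaces the HTML of your entries container) fetches that fragment; show the loading spinner before firing the request and hide it in the callback.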
I am looking at writing my own, but I am wondering if there are any good web crawlers out there which are written in Ruby.
Short of a full-blown web crawler, any gems that might be helpful in building a web crawler would be useful. I know this part of the question is touched upon in a couple of places, but a list of gems applicable to building a web crawler would be a great resource as well.
I used to write spiders, page scrapers and site analyzers for my job, and still write them periodically to scratch some itch I get.
Ruby has some excellent gems to make it easy:
Nokogiri is my #1 choice for the HTML parser. I used to use Hpricot, but found some sites that made it explode in flames. I switched to Nokogiri afterwards and have been very happy with it. I regularly use it for parsing HTML, RDF/RSS/Atom and XML. Ox looks interesting too, so that might be another candidate, though I find searching the DOM a lot easier than trying to walk through a big hash, such as what is returned by Ox.
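For example, a minimal Nokogiri sketch (the URL and selector are just illustrative):

    require 'nokogiri'
    require 'open-uri'

    # Parse a page and pull out every anchor's href
    doc = Nokogiri::HTML(URI.open('https://example.com/'))
    links = doc.css('a[href]').map { |a| a['href'] }
    puts links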
OpenURI is good as a simple HTTP client, but it can get in the way when you want to do more complex things or need to have multiple requests firing at once. For modest to heavyweight jobs I'd recommend looking at HTTPClient or Typhoeus with Hydra. Curb is good too, because it uses the cURL library, but its interface isn't as intuitive to me; it's still worth a look.
Note: OpenURI has some flaws and vulnerabilities that can affect unsuspecting programmers so it's fallen out of favor somewhat. RestClient is a very worthy successor.
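To give an idea of the parallel style Typhoeus with Hydra offers, a small sketch (the URLs and concurrency are placeholders):

    require 'typhoeus'

    hydra = Typhoeus::Hydra.new(max_concurrency: 10)

    %w[https://example.com/a https://example.com/b].each do |url|
      request = Typhoeus::Request.new(url, followlocation: true)
      request.on_complete do |response|
        puts "#{url} -> #{response.code} (#{response.body.bytesize} bytes)"
      end
      hydra.queue(request)
    end

    hydra.run   # all queued requests run concurrently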
You'll need a backing database, and some way to talk to it. This isn't a task for Rails per se, but you could use ActiveRecord, detached from Rails, to talk to the database. I've done that a couple times and it works all right. Instead, I really like Sequel for my ORM. It's very flexible in how it lets you talk to the database, from using straight SQL to using Sequel's ability to programmatically build a query, to modeling the database and using migrations. Once you have the database built, you could use Rails to act as a front-end to the data though.
If you are going to navigate sites in any way beyond simply grabbing pages and following links, you'll want to look at Mechanize. It makes it easy to fill out forms and submit pages. As an added bonus, you can grab the content of a page as a Nokogiri HTML document and parse away using Nokogiri's multitude of tricks.
For massaging/mangling URLs I really like Addressable::URI. It's more full-featured than the built-in URI module. One thing that URI does that's nice is it has the URI#extract method to scan a string for URLs. If that string happened to be the body of a web page it would be an alternate way of locating links, but its downside is you'll also get links to images, videos, ads, etc., and you'll have to filter those out, probably resulting in more work than if you use a parser and look for <a> tags exclusively. For that matter, Mechanize also has the links method which returns all the links in a page, but you'll still have to filter them to determine whether you want to follow or ignore them.
If you think you'll need to deal with Javascript manipulated pages, or pages that get their content dynamically from AJAX, you should look into using one of the WATIR variants. There are flavors for the different browsers on different OSes, such as Firewatir, Safariwatir and Operawatir, so you'll have to figure out what works for you.
You do NOT want to rely on keeping your list of URLs to visit, or visited URLs, in memory. Design a database schema and store that information there. Spend some time up front designing the schema, thinking about what things you'll want to know as you collect links on a site. SQLite3, MySQL and Postgres are all excellent choices, depending on how big you think your database needs will be. One of my site analyzers was custom designed to help us recommend SEO changes for a Fortune 50 company. It ran for over three weeks covering about twenty different sites before we had enough data and stopped it. Imagine what would have happened if we had a power-outage and all that data went in the bit-bucket.
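As a rough illustration of keeping the crawl frontier in a database, here is a tiny Sequel-backed URL table (the schema and column names are just one way to slice it):

    require 'sequel'

    DB = Sequel.sqlite('crawler.db')        # or Sequel.connect to MySQL/Postgres

    DB.create_table? :urls do
      primary_key :id
      String    :url, unique: true, null: false
      TrueClass :visited, default: false
      Integer   :http_status
      DateTime  :fetched_at
    end

    urls = DB[:urls]

    # Seed the frontier if we haven't seen this URL yet
    seed = 'https://example.com/'
    urls.insert(url: seed) if urls.where(url: seed).empty?

    # Pull the next unvisited URL, fetch it elsewhere, then record the result
    if (row = urls.where(visited: false).first)
      # ... fetch row[:url] here ...
      urls.where(id: row[:id]).update(visited: true, http_status: 200, fetched_at: Time.now)
    end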
After all that, you'll also want to make your code aware of proper spidering etiquette: What are the key considerations when creating a web crawler?
I am building wombat, a Ruby DSL to crawl web pages and extract content. Check it out on github https://github.com/felipecsl/wombat
It is still at an early stage, but the basic functionality is already working. More features will be added really soon.
So you want a good Ruby-based web crawler?
Try spider or anemone. Both have solid usage according to RubyGems download counts.
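A bare-bones Anemone crawl looks roughly like this (the domain is a placeholder):

    require 'anemone'

    # Crawl a site and print the status code and URL of every page visited
    Anemone.crawl('http://example.com/') do |anemone|
      anemone.on_every_page do |page|
        puts "#{page.code} #{page.url}"
      end
    end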
The other answers, so far, are detailed and helpful, but they don't have a laser-like focus on the question, which asks for Ruby libraries for web crawlers. It would seem that this distinction can get muddled: see my answer to "Crawling vs. Web-Scraping?"
Tin Man's comprehensive list is good but partly outdated for me.
Most websites my customers deal with are heavily AJAX/Javascript dependent.
I've been using Watir / watir-webdriver / Selenium for a few years too, but the overhead of having to load a hidden web browser on the backend just to render the DOM isn't viable. On top of that, after all this time they still haven't implemented a usable "browser session reuse" that would let a new code execution reuse an old browser already in memory, and they keep shooting down tickets that might have worked their way up the API layers eventually (referring to https://code.google.com/p/selenium/issues/detail?id=18 ). **
https://rubygems.org/gems/phantomjs
is what we're migrating new projects to now, so the necessary data gets rendered without any kind of invisible, Xvfb-backed, memory- and CPU-heavy web browser.
** Alternative approaches also failed to pan out:
how to serialize an object using TCPServer inside?
Can a watir browser object be re-used in a later Ruby process?
If you don't want to write your own, then use any ordinary web crawler. There are dozens out there.
If you do want to write your own, then write your own. A web crawler isn't exactly a complicated activity; at its core it consists of the following (a toy sketch follows the list):
Downloading a website.
Locating URLs in that website, filtered however you dang well please.
For each URL in that website, repeat step 1.
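Something along these lines, as a toy sketch (the start URL, page limit, and same-host filter are arbitrary choices):

    require 'nokogiri'
    require 'open-uri'
    require 'set'

    def crawl(start_url, limit: 50)
      host  = URI(start_url).host
      queue = [start_url]
      seen  = Set.new

      while (url = queue.shift) && seen.size < limit
        next if seen.include?(url)
        seen << url

        html = URI.open(url).read rescue next        # 1. download the page
        doc  = Nokogiri::HTML(html)

        doc.css('a[href]').each do |a|               # 2. locate URLs, filtered as you please
          begin
            href = URI.join(url, a['href'])
            queue << href.to_s if href.host == host
          rescue StandardError
            next                                     # skip malformed hrefs
          end
        end
      end                                            # 3. the loop repeats step 1 per URL

      seen
    end

    puts crawl('https://example.com/').to_a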
Oh, and this seems to be a duplicate of "Web crawler in ruby".
Is there a way to visually see whether HtmlUnit is performing the correct commands? I have a hard requirement to use HtmlUnit; I just don't know if it's filling out all the forms correctly.
HtmlUnit is designed to be a GUI-less (headless) browser; for your requirement you could consider tools such as WebDriver, Watir, or Selenium. If you are into Ruby, take a look at Celerity, which wraps HtmlUnit in a Watir-ish API. In fact, Celerity is itself wrapped by Culerity, which integrates Celerity and Cucumber and might be of even more interest to you.
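To show what the Watir-ish API looks like, a small Celerity sketch (JRuby only; the URL and field names are placeholders):

    # JRuby only -- Celerity drives HtmlUnit behind a Watir-style API
    require 'celerity'

    browser = Celerity::Browser.new
    browser.goto('https://example.com/login')
    browser.text_field(:name, 'username').set('tester')
    browser.text_field(:name, 'password').set('secret')
    browser.button(:name, 'commit').click
    puts browser.url
    puts browser.text[0, 200]   # quick peek at the resulting page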
Yes, you can see the HTTP traffic by using a proxy such as WebScarab or Fiddler.
Make sure of the following:
Set the proxy details on HtmlUnit via the constructor (I think it's the WebClient constructor).
Make sure you either trust all certificates or add the proxy's certificate to the truststore.
What do you mean by "correct commands"? HtmlUnit itself won't give you a running description of what it's doing, if that's what you mean. As suthasankar says, HtmlUnit is a headless browser (intentionally so) and will never give you the cool Watir experience of watching pages fly by.
Any time I've wanted to know what's happening during a test's execution, I have added logging statements at various points in the test code and then watched them in the console. You could send messages to any other monitoring system you like instead.
It wouldn't take much to then write wrappers around the "commands" you're interested in, like "getPage" and button clicks and form entries and the like.
It's not possible to view what HtmlUnit is doing unless you code logging and some sort of display yourself. I have done this in the past, and it's helpful to a certain degree, but it doesn't really give you visual feedback on what HtmlUnit is doing. Even with logging, it's not possible to know every single detail of what HtmlUnit is doing or where it goes wrong, so it's an extremely time-consuming task. I even resorted to outputting the current page being viewed, but that is pretty limited, since an HTML page cannot tell you the actual "commands" HtmlUnit is executing on it.
Another approach would be to use Selenium, which executes your "commands" in a visual manner, so you can see instantly where things go wrong by watching it.