Method to detect a parked page? - domain-name

Does anyone know of a way to programmatically detect a parked web page? That is, the pages you sometimes land on by mistyping a domain (or occasionally on purpose), hosted by a domain parking service and containing nothing but ads.
I am working on a linking network and want to make sure that sites that expire don't get snatched up by someone else and turned into parked pages.

Here is a test that I think may catch a decent number of them. It takes advantage of the fact that nobody actually hosts real web sites on their parked domains, and it looks for wildcarding of both the subdomain and the path. Let's say we have this URL in our system:
http://www.example.com/method-to-detect-parked.
First I would check the actual URL and hash it or grab a copy for comparison.
My second check would be a request to
http://random.example.com/random
If that response matches the original page, or even just succeeds, you have a pretty good indicator that the page is parked. If it fails, I might check the wildcarded subdomain and the wildcarded path individually. If the page randomly varies some elements, you may want to choose a few stable items to compare: for example, make a list of the links included in the page and compare those, or maybe the title tag.
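As a rough sketch of this check in Python with requests (a heuristic, not a definitive test): here I only hash the whole body, but comparing the title tag or the list of links, as described above, copes better with pages that randomize a few elements.
import hashlib
import secrets
from urllib.parse import urlsplit

import requests

def body_fingerprint(url, timeout=10):
    """Fetch a URL and return a hash of its body, or None if the request fails."""
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()
    except requests.RequestException:
        return None
    return hashlib.sha256(resp.content).hexdigest()

def looks_parked(url):
    """Wildcard test: a random subdomain and path that resolve at all are
    suspicious; an identical body is a very strong indicator of parking."""
    parts = urlsplit(url)
    # Naive registrable-domain guess; a public-suffix library would be more robust.
    domain = parts.hostname.removeprefix("www.")
    original = body_fingerprint(url)
    if original is None:
        return False  # the original link is already broken; handle that separately

    junk = secrets.token_hex(8)
    wildcard = body_fingerprint(f"{parts.scheme}://{junk}.{domain}/{junk}")
    # Succeeding at all is suspicious; wildcard == original is an even stronger signal.
    return wildcard is not None

print(looks_parked("http://www.example.com/method-to-detect-parked"))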

I would say that you'll have to examine the WHOIS records for the sites in question and/or the actual content of the pages and develop some heuristics as to what constitutes a "parked page".
Take goooogle.com: its WHOIS record shows that it is owned by "Privacy Protection" and that its DNS servers are ns1/ns2.fastpark.net. If you look at the source of the site, they're silly enough to have a CSS file named "style_park.css" :)
All in all, I don't think you'll be able to come up with a generic way to do it. You'll probably end up with some ever-evolving rule base or blacklist.
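As a rough illustration, such a rule base could start out as the following Python sketch, assuming a local whois command-line client is on the PATH; the list of parking nameservers is just a few known examples and is nowhere near exhaustive.
import re
import subprocess

# Illustrative only; a real rule base would grow over time.
PARKING_NS_HINTS = ("fastpark.net", "sedoparking.com", "parkingcrew.net")

def whois_text(domain):
    """Shell out to the local whois client and return its raw output."""
    result = subprocess.run(["whois", domain], capture_output=True, text=True, timeout=30)
    return result.stdout

def parked_by_whois(domain):
    """Flag domains whose nameservers point at a known parking service."""
    text = whois_text(domain).lower()
    nameservers = re.findall(r"name server:\s*(\S+)", text)
    return any(ns.endswith(hint) for ns in nameservers for hint in PARKING_NS_HINTS)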

Look at the creation date in the DNS/WHOIS record and compare it to the date the link was added. If the domain registration is newer, that's a link that needs manual checking.
Or: check http://example.com/ and http://example.com/xxxxxxrandomstringxxxxx . If those two pages are identical, you've got some sort of problem that needs manual checking. Either the primary page you wanted to link to is broken, or the domain is parked and all pages return the same value. This test is not 100%, because some parked pages echo back elements from the URL.
If you just want to check an existing website, a service like http://www.linkalarm.com/ does this well.
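Here is a rough Python sketch of the first test above (flagging links whose domain registration is newer than the link itself). WHOIS date formats vary by registrar, so this only parses the common ISO-style "Creation Date" field and should be treated as best effort.
import re
import subprocess
from datetime import datetime, timezone

def whois_creation_date(domain):
    """Best-effort parse of the WHOIS 'Creation Date' field; formats vary by registrar."""
    out = subprocess.run(["whois", domain], capture_output=True, text=True, timeout=30).stdout
    match = re.search(r"Creation Date:\s*(\d{4}-\d{2}-\d{2})", out)
    if not match:
        return None
    return datetime.strptime(match.group(1), "%Y-%m-%d").replace(tzinfo=timezone.utc)

def needs_manual_check(domain, link_added_at):
    """True if the domain was (re)registered after the link was added.
    link_added_at should be a timezone-aware datetime."""
    created = whois_creation_date(domain)
    return created is not None and created > link_added_at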

You could just rely on your users to "Report this link"... which would put it into a queue to review later?

Related

crawl small homepage with metadata.transfer and N:M-relationships

hi folks,
We use StormCrawler with Elasticsearch to build an index of our website, which consists of "old pages" and "new pages".
My Question in short:
If two pages, A (old) and B (new), both link to page X, how do I pass metadata from B to X?
My Question in long:
We relaunched our website step by step, so at the moment we have PDF files that are reachable only via the old HTML pages, only via the new HTML pages, or via both.
For "order by" purposes we must mark all PDF files that are reachable from the new HTML pages.
So we add "newHomepage=true" to seeds.txt and "newHomepage" to "metadata.transfer" in crawler-conf.yaml: fine :-)
But for the PDF files that are reachable from both old and new HTML pages, we now have a race condition: if a PDF file is DISCOVERED from an old page first, that information (newHomepage=false) ends up in the status index and cannot be overridden.
(StatusUpdaterBolt does not overwrite existing documents; IndexerBolt does overwrite by default.)
To make things more complicated: in our case a URL to a PDF (on an HTML page) is redirected twice before the file is delivered.
So from my point of view we have two possibilities:
Run the crawler twice: first index only the new pages (and all reachable PDF files), then index the old pages.
--> Problem: new pages that change after the crawl has started.
Store the "outbound_links" and use them to set "newHomepage" independently of the crawler.
--> Problem: short periods with wrong metadata in the index.
Any advice or other ideas?
Best regards
Karsten
Thanks for sharing your problem, and great to hear that you are using SC. This is an interesting and unusual use case.
Your analysis of the problem is correct. An intuitive approach would be to extend the default StatusUpdaterBolt so that it updates the metadata if a document already exists. You'd need to remove the part that does the check on whether the doc has a status of DISCOVERED.
This would slow things down, but since you are dealing with a single website, this should not have a massive impact.
You could push the logic even further by setting a new nextFetchDate if the document had been fetched so that it gets refetched and updated quicker in the doc index (as opposed to the status one).
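This is not StormCrawler code, but to illustrate the gist of what such a modified bolt would do, here is the equivalent partial update against the status index using the Elasticsearch Python client; the index name and field layout are assumptions, so check them against your own status mapping.
from datetime import datetime, timezone

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # adjust to your cluster

def merge_new_homepage_flag(doc_id):
    """Merge the flag into an existing status document and pull nextFetchDate
    forward so the URL is refetched (and hence re-indexed) sooner.
    Index and field names are guesses; verify against your status mapping."""
    es.update(
        index="status",
        id=doc_id,
        body={
            "doc": {
                "metadata": {"newHomepage": ["true"]},
                "nextFetchDate": datetime.now(timezone.utc).isoformat(),
            }
        },
    )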

How do I speed up iterations for web crawling ids-nokogiri/ruby

What I want to do is iterate through all possible product pages, given a 10-digit numerical ID.
an example of the page I would like to scrape is somewebsite.com/product?productid=10000000000
The scraper would go to the page, check whether a certain tag exists to tell whether it is a product page, and then log the URL if it is or move on to the next ID if it is not.
Iterating one by one (productid = large number++) is too slow, and from looking at some sample product IDs it seems like numbers without patterns such as (121212121212) are more likely. What would be a way to iterate through these pages in a more reasonable amount of time? I am doing this in Ruby with Nokogiri right now.
Iterating through that number of product IDs is a horrible way to treat a target site, and odds are good you'd get banned because it's not likely their products are sequentially numbered. In other words, you would get a lot of missing page responses, which will be logged, and if their web-development team is decent they'll get a list of those along with the requesting IP.
Instead, be smart and find a page that lists all their products, parse out that list, then walk it. If there isn't a single page containing them, but many, then start at the first and walk them all until you've reached the last one. Aggregate the product IDs into an array, or process them as you read each page.
Also, be very gentle and kind to their site by sleeping between iterations. Failing to do that can also get you banned, because requesting thousands of pages one immediately after another will drive up their host's CPU and network usage, which again will alert them that you are spidering their site and negatively impacting their ability to serve normal customers.
Finally, if you really want to do things the right way, your first connection to the site should request their "robots.txt" file. Process it, and use those directives in your code. That file is put there to help robots/spiders/scrapers do the right thing and not unfairly antagonize the site or web-admins of the site. Failing to do that is a sure path to being banned. More information is available at "The Web Robots Pages" and "Robots exclusion standard".
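To make that concrete, here is a sketch in Python (for brevity; the same flow maps directly onto Ruby/Nokogiri): fetch robots.txt first, walk the listing pages instead of guessing IDs, and sleep between requests. The listing path and CSS selectors are hypothetical.
import time
from urllib.parse import urljoin
from urllib.robotparser import RobotFileParser

import requests
from bs4 import BeautifulSoup  # stand-in for Nokogiri

BASE = "https://somewebsite.com"
robots = RobotFileParser(urljoin(BASE, "/robots.txt"))
robots.read()

def crawl_product_listing(start_path="/products?page=1"):  # hypothetical listing URL
    """Walk paginated listing pages, collecting product links, politely."""
    path = start_path
    product_urls = []
    while path:
        url = urljoin(BASE, path)
        if not robots.can_fetch("*", url):
            break
        page = requests.get(url, timeout=10)
        soup = BeautifulSoup(page.text, "html.parser")
        # Selectors are hypothetical; adjust them to the site's real markup.
        product_urls += [urljoin(BASE, a["href"]) for a in soup.select("a.product-link")]
        next_link = soup.select_one("a.next-page")
        path = next_link["href"] if next_link else None
        time.sleep(2)  # be gentle; don't hammer the host
    return product_urls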

Programmatically maintaining sitemaps

Hoping someone can chime in on an ideal methodology.
I don't want to run my site through a crawler every month to add new pages to my sitemap; I'd like some robust, systematic method, because maintaining it by hand seems very prone to, ahem, human forgetfulness. Is there some sort of way to programmatically register new controllers, controller methods, views, etc. with some special controller? What I'm picturing is some mechanism that enforces updating the sitemap whenever you create a new controller method or view. I work in a LAMP stack, if that's relevant. This guy here is doing it through the file system, and that's not what I want for a public-facing sitemap.
Perhaps there's another best practice for this type of maintenance other than the concept I'm proposing. Would love to hear how everyone else does this! :)
If your site is content-based, best practice is to read the database periodically and generate a link for each piece of content. With this method you can also mark some subjects as higher (or lower) priority in the sitemap.
That method is already mentioned in the topic that you linked.
Alternatively, you can keep a list of visited pages on the server side, or just log them. After recording your site traffic (asynchronously, so you don't block the user experience), check the sitemap and add any missing page links to it. You can specify priority with this method too, based on how heavily each page is visited and some statistical logic.
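As a minimal sketch of the database-driven approach, here is a Python script using SQLite and a made-up pages(slug, updated_at) table; the same idea carries straight over to PHP/MySQL on a LAMP stack, run from cron or on every deploy.
import sqlite3
from xml.sax.saxutils import escape

BASE_URL = "https://example.com"  # hypothetical site root

def build_sitemap(db_path="site.db", out_path="sitemap.xml"):
    """Regenerate sitemap.xml from the content database (hypothetical schema)."""
    conn = sqlite3.connect(db_path)
    rows = conn.execute("SELECT slug, updated_at FROM pages ORDER BY updated_at DESC")
    entries = []
    for slug, updated_at in rows:
        entries.append(
            "  <url>\n"
            f"    <loc>{escape(BASE_URL + '/' + slug)}</loc>\n"
            f"    <lastmod>{escape(updated_at)}</lastmod>\n"
            "  </url>"
        )
    conn.close()
    xml = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + "\n".join(entries)
        + "\n</urlset>\n"
    )
    with open(out_path, "w", encoding="utf-8") as fh:
        fh.write(xml)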

How to generate sitemap on a highly dynamic website?

Should a highly dynamic website that is constantly generating new pages use a sitemap? If so, how does a site like stackoverflow.com go about regenerating a sitemap? It seems like it would be a drain on precious server resources if it was constantly regenerating a sitemap every time someone adds a question. Does it generate a new sitemap at set intervals (e.g. every four hours)? I'm very curious how large, dynamic websites make this work.
On Stackoverflow (and all Stack Exchange sites), a sitemap.xml file is created which contains a link to every question posted on the system. When a new question is posted, they simply append another entry to the end of the sitemap file. It isn't that resource intensive to add to the end of the file but the file is quite large.
That is the only way search engines like Google can effectively crawl the site.
Jeff Atwood talks about it in a blog post: The Importance of Sitemaps
This is from Google's webmaster help page on sitemaps:
Sitemaps are particularly helpful if:
Your site has dynamic content.
Your site has pages that aren't easily discovered by Googlebot during the crawl process - for example, pages featuring rich AJAX or Flash.
Your site is new and has few links to it. (Googlebot crawls the web by following links from one page to another, so if your site isn't well linked, it may be hard for us to discover it.)
Your site has a large archive of content pages that are not well linked to each other, or are not linked at all.
There's no need to regenerate the Google sitemap XML each time a question is posted. It's far simpler just to have the XML file generated on-demand directly from the database (and a little caching).
To reduce load, the sitemap can be split into many sitemaps. Partitioning it by day/month would allow you to tell Google to retrieve today's sitemap frequently, but only fetch the sitemap from six months ago once in a while.
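As a sketch of that partitioning, the Python snippet below builds a sitemap index pointing at one hypothetical child sitemap per month; each child URL would be a route that queries the database for that month's questions, and past months can be cached almost indefinitely while the current month stays fresh.
from datetime import date

BASE = "https://example.com"  # hypothetical site root

def month_starts(first, last):
    """Yield the first day of each month from `first` through `last`."""
    year, month = first.year, first.month
    while (year, month) <= (last.year, last.month):
        yield date(year, month, 1)
        year, month = (year + 1, 1) if month == 12 else (year, month + 1)

def sitemap_index(first_post_date, today=None):
    """Build a <sitemapindex> with one per-month child sitemap."""
    today = today or date.today()
    parts = [
        f"  <sitemap><loc>{BASE}/sitemap-{d:%Y-%m}.xml</loc></sitemap>"
        for d in month_starts(first_post_date, today)
    ]
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + "\n".join(parts)
        + "\n</sitemapindex>\n"
    )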
I'd like to share my solution here just in case it helps someone as well.
It took me reading this question and many others to decide what to do.
My site structure.
Static pages
Home (Highly dynamic. Cached for 30 mins)
Artists, Albums, Songs, Playlists and Albums (Paginated List)
Legal (Static page with Terms etc)
...etc
Dynamic Pages
Artists, Albums, Songs, Playlists and Albums detail pages
My approach.
sitemap.xml: This URL generates a <sitemapindex /> with the first item being /sitemap-main.xml. The number of Artists, Albums, Songs, etc. is counted and divided by 1,000 (the number of URLs I want in each sitemap; the limit is 50,000), and I round this number up.
So, for example, 1,900 songs = 1.9 = 2.
I then add the URLs /sitemap-songs-0.xml and /sitemap-songs-1.xml to the index, and repeat this for all the other item types. Basically, I am paginating.
The output is returned uncached. I want this to always be fresh.
sitemap-main.xml: This lists all the static pages. You can actually use a static file for this as you will only need to update it once in a while.
sitemap-songs-0.xml, sitemap-albums-0.xml, etc: I use a single route for this in SlimPhp 2.
$app->get('/sitemap-:type-:page.xml', function ($type, $page) use ($app) {...
I use a simple switch statement to generate the relevant files. If the page holds the full 1,000 items (the per-file count I chose above), I cache it for two weeks.
Otherwise, I only cache it for a few hours.
I guess this can help anyone else implement their own system.
Even on something like StackOverflow, there is a certain amount of static organization; there are FAQs, tag pages, question pages, user pages, badge pages, etc.; I'd say in a very dynamic site, the best way to approach a sitemap would be to have a map of the categorizations; each node in the sitemap can point to a page of the dynamically generated data (a node for a question page, a node for a user page, etc.).
Of course, a sitemap may not even be appropriate for a given site; there's a certain amount of judgment call required there.
For a highly dynamic site, I wrote a cron job on my server that runs daily. It makes a REST call to my backend, generates a new sitemap from all newly created content, and returns the sitemap as an XML file. This new sitemap overwrites the previous one and keeps the website up to date with all the changes. Regenerating the sitemap for every newly added piece of dynamic content is not a good approach, I think.

JSONP and Cross-Domain queries - How to Update/Manipulate instead of just read

So I'm reading The Art & Science of Javascript, which is a good book, and it has a good section on JSONP. I've been reading all I can about it today, and even looking through every question here on StackOverflow. JSONP is a great idea, but it only seems to resolve the "Same Origin Problem" for getting data, but doesn't address it for changing data.
Did I just miss all the blogs that talked about this, or is JSONP not the solution I was hoping for?
JSONP works by generating a SCRIPT tag that points at another server, with any parameters that might be required passed as a GET request, e.g.
<script src="http://myserver.com/getjson?customer=232&callback=jsonp543354" type="text/javascript">
</script>
There is technically nothing to stop this sort of request altering data on the server, e.g. by specifying newName=Tony. Your response could then indicate whether the update succeeded or not. You will be limited by whatever you can fit on a query string. If you go with this approach, add some random element as a parameter so that proxies won't cache the request.
Some people may consider that this goes against the way GETs are supposed to work, i.e. they shouldn't cause data to change.
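For illustration only, a "write" endpoint on the server side could look like the following Python/Flask sketch (the route, parameter names, and callback handling are all assumptions); the client side is just a script tag like the one above, with the new value and a random cache-busting parameter on the query string.
import json

from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/updatecustomer")  # hypothetical endpoint
def update_customer():
    """JSONP 'write': the change arrives as GET query parameters and the
    JSON result is wrapped in the caller-supplied callback function name."""
    customer_id = request.args.get("customer")
    new_name = request.args.get("newName")
    callback = request.args.get("callback", "callback")
    # ...apply the update to customer_id here and record whether it worked...
    result = {"customer": customer_id, "updated": new_name is not None}
    return Response(f"{callback}({json.dumps(result)})", mimetype="application/javascript")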
Yes, and honestly I would like to stick to that paradigm. However, I might bend the rule and say that, requests which do not alter/deal with CRUCIAL data will be accessible via GET calls... hm...
For instance, I am building a shopping cart system, and I think that allowing the adding/removing/etc of items to/from a cart could very easily be exposed via GETs, since even though you can change data, you cannot do anything critical with it. If someone maliciously added 1,000 flatscreen monitors to your shopping cart, there would be at least one verification step that would NOT be vulnerable to any attacks (a standard ASP.NET page at that point, with verification and all that jazz).
Is this a good/workable solution in anyone's opinion?
