YQL Blacklisting - yahoo

I'm developing an online deck editor, Proxymang, and YQL can't handle the number of requests per second. The requests are one per Magic card, they're cached, and they're spaced out. What should I do?
GitHub

I never resolved the problem; I eventually decided to use the Gatherer's meager API instead.
The problem seems to be one of the following:
YQL blacklisted YelloSoft after too many requests/second
MagicCards.info blacklisted YQL after too many requests/second
MagicCards.info has a strict robots.txt which blocks HTML scraping, but not always

Related

Chrome allowed too many parallel requests to the same resource at the same time

Please see the image below; all the requests were sent to the same server at almost the same time.
Chrome version is 86.0.4240.183.
That's very weird; I thought there was a maximum of six TCP connections per host. In my opinion, the requests should have been sent like the image below.
How can Chrome make multiple requests without blocking/stalling? Can anybody help?
It's one of the benefits of HTTP/2, called multiplexing.
HTTP/1.1 is sort of like buying a single item at a grocery store, taking it back home, going back to the store for the next item you need, and repeating until your pantry is fully stocked. Multiplexing gives you a shopping cart so you can pick up everything you need in one trip.

What does multiplexing mean in HTTP/2

Could someone please explain multiplexing in relation to HTTP/2 and how it works?
Put simply, multiplexing allows your browser to fire off multiple requests at once on the same connection and receive the responses back in any order.
And now for the much more complicated answer...
When you load a web page, the browser downloads the HTML page, sees it needs some CSS, some JavaScript, a load of images... etc.
Under HTTP/1.1 you can only download one of those at a time on your HTTP/1.1 connection. So your browser downloads the HTML, then it asks for the CSS file. When that's returned it asks for the JavaScript file. When that's returned it asks for the first image file... etc. HTTP/1.1 is basically synchronous - once you send a request you're stuck until you get a response. This means most of the time the browser is not doing very much, as it has fired off a request, is waiting for a response, then fires off another request, then is waiting for a response... etc. Of course complex sites with lots of JavaScript do require the browser to do lots of processing, but that depends on the JavaScript being downloaded so, at least at the beginning, the delays inherent in HTTP/1.1 do cause problems. Typically the server isn't doing very much either (at least per request - of course they add up for busy sites), because it should respond almost instantly for static resources (like CSS, JavaScript, images, fonts... etc.) and hopefully not too much longer even for dynamic requests (that require a database call or the like).
So one of the main issues on the web today is the network latency in sending requests between browser and server. It may only be tens or perhaps hundreds of milliseconds, which might not seem like much, but those delays add up and are often the slowest part of web browsing - especially as websites get more complex and require extra resources (as they are doing) and Internet access is increasingly via mobile (with higher latency than broadband).
As an example, let's say there are 10 resources that your web page needs to load after the HTML itself is loaded (which is a very small site by today's standards, as 100+ resources is common, but we'll keep it simple and go with this example). And let's say each request takes 100ms to travel across the Internet to the web server and back, and the processing time at either end is negligible (let's say 0 for this example for simplicity's sake). As you have to send each request and wait for a response one at a time, this will take 10 * 100ms = 1,000ms or 1 second to download the whole site.
To get around this, browsers usually open multiple connections to the web server (typically 6). This means a browser can fire off multiple requests at the same time, which is much better, but at the cost of the complexity of having to set up and manage multiple connections (which impacts both browser and server). Let's continue the previous example and also say there are 4 connections and, for simplicity, let's say all requests are equal. In this case you can split the requests across all four connections, so two will have 3 resources to get and two will have 2 resources to get, totalling the ten resources (3 + 3 + 2 + 2 = 10). In that case the worst case is 3 round trips or 300ms = 0.3 seconds - a good improvement, but this simple example does not include the cost of setting up those multiple connections, nor the resource implications of managing them (which I've not gone into here as this answer is long enough already, but setting up separate TCP connections does take time and other resources - to do the TCP connection, the HTTPS handshake and then get up to full speed due to TCP slow start).
HTTP/2 allows you to send off multiple requests on the same connection - so you don't need to open multiple connections as per above. So your browser can say "Gimme this CSS file. Gimme that JavaScript file. Gimme image1.jpg. Gimme image2.jpg... Etc." to fully utilise the one single connection. This has the obvious performance benefit of not delaying the sending of those requests while waiting for a free connection. All these requests make their way through the Internet to the server in (almost) parallel. The server responds to each one, and then they start to make their way back. In fact it's even more powerful than that, as the web server can respond to them in any order it feels like and send back files in a different order, or even break each file requested into pieces and intermingle the files together. This has the secondary benefit of one heavy request not blocking all the other subsequent requests (known as the head-of-line blocking issue). The web browser is then tasked with putting all the pieces back together. In the best case (assuming no bandwidth limits - see below), if all 10 requests are fired off pretty much at once in parallel and are answered by the server immediately, this means you basically have one round trip, or 100ms or 0.1 seconds, to download all 10 resources. And this has none of the downsides that multiple connections had for HTTP/1.1! This is also much more scalable as the resources on each website grow (currently browsers open up to 6 parallel connections per host under HTTP/1.1, but should that grow as sites get more complex?).
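For a concrete (if simplified) picture of what that looks like from browser code, here is a small sketch: ten hypothetical resource URLs fetched in parallel. Over HTTP/2 the browser multiplexes all of these requests onto a single connection; over HTTP/1.1 it would queue them across its roughly six connections per host. The paths are placeholders, not from the original answer.

```typescript
// Fire off all requests at once; over HTTP/2 the browser multiplexes them
// onto a single TCP connection instead of queuing them across ~6 connections.
const resources = [
  "/styles/site.css", // hypothetical asset paths for illustration
  "/scripts/app.js",
  ...Array.from({ length: 8 }, (_, i) => `/images/photo${i + 1}.jpg`),
];

async function loadAll(): Promise<void> {
  const started = performance.now();
  // All 10 requests go out (almost) in parallel; responses can come back
  // in any order and are awaited together.
  const responses = await Promise.all(resources.map((url) => fetch(url)));
  const bodies = await Promise.all(responses.map((r) => r.arrayBuffer()));
  console.log(
    `Fetched ${bodies.length} resources in ${Math.round(performance.now() - started)}ms`
  );
}

loadAll().catch(console.error);
```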
This diagram shows the differences, and there is an animated version too.
Note: HTTP/1.1 does have the concept of pipelining, which also allows multiple requests to be sent off at once. However, they still had to be returned in the order they were requested, in their entirety, so it's nowhere near as good as HTTP/2, even if conceptually it's similar. Not to mention the fact that it is so poorly supported by both browsers and servers that it is rarely used.
One thing highlighted in the comments below is how bandwidth impacts us here. Of course your Internet connection is limited in how much you can download, and HTTP/2 does not address that. So if those 10 resources discussed in the above example are all massive print-quality images, then they will still be slow to download. However, for most web browsing, bandwidth is less of a problem than latency. So if those ten resources are small items (particularly text resources like CSS and JavaScript which can be gzipped to be tiny), as is very common on websites, then bandwidth is not really an issue - it's the sheer volume of resources that is often the problem, and HTTP/2 looks to address that. This is also why concatenation is used in HTTP/1.1 as another workaround, so for example all CSS is often joined together into one file: the amount of CSS downloaded is the same, but by doing it as one resource there are huge performance benefits (though less so with HTTP/2, and in fact some say concatenation should be an anti-pattern under HTTP/2 - though there are arguments against doing away with it completely too).
To put it as a real world example: assume you have to order 10 items from a shop for home delivery:
HTTP/1.1 with one connection means you have to order them one at a time and you cannot order the next item until the last arrives. You can understand it would take weeks to get through everything.
HTTP/1.1 with multiple connections means you can have a (limited) number of independent orders on the go at the same time.
HTTP/1.1 with pipelining means you can ask for all 10 items one after the other without waiting, but then they all arrive in the specific order you asked for them. And if one item is out of stock then you have to wait for that before you get the items you ordered after that - even if those later items are actually in stock! This is a bit better but is still subject to delays, and let's say most shops don't support this way of ordering anyway.
HTTP/2 means you can order your items in any particular order - without any delays (similar to above). The shop will dispatch them as they are ready, so they may arrive in a different order than you asked for them, and they may even split items so some parts of that order arrive first (so better than above). Ultimately this should mean you 1) get everything quicker overall and 2) can start working on each item as it arrives ("oh that's not as nice as I thought it would be, so I might want to order something else as well or instead").
Of course you're still limited by the size of your postman's van (the bandwidth) so they might have to leave some packages back at the sorting office until the next day if they are full up for that day, but that's rarely a problem compared to the delay in actually sending the order across and back. Most of web browsing involves sending small letters back and forth, rather than bulky packages.
Since @Juanma Menendez's answer is correct but his diagram is confusing, I decided to improve upon it, clarifying the difference between multiplexing and pipelining, two notions that are often conflated.
Pipelining (HTTP/1.1)
Multiple requests are sent over the same HTTP connection. Responses are received in the same order. If the first response takes a lot of time, other responses have to wait in line. Similar to CPU pipelining, where an instruction is fetched while another one is being decoded. Multiple instructions are in flight at the same time, but their order is preserved.
Multiplexing (HTTP/2)
Multiple requests are sent over the same HTTP connection. Responses are received in arbitrary order. There's no need to wait for a slow response that's blocking others. Similar to out-of-order instruction execution in modern CPUs.
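To make the multiplexed behaviour concrete outside the browser, here is a minimal sketch using Node's built-in http2 module; the host and paths are placeholders and not part of the original answer. Each request becomes its own stream on one connection, and the completion order is whatever the server chooses.

```typescript
// Minimal sketch of HTTP/2 multiplexing with Node's built-in http2 module.
// The host and paths are placeholders; any HTTP/2-capable server will do.
import * as http2 from "node:http2";

const session = http2.connect("https://example.com"); // one TCP connection

const paths = ["/a.css", "/b.js", "/c.jpg"];
let pending = paths.length;

for (const path of paths) {
  // Each request becomes its own stream on the same connection.
  const stream = session.request({ ":path": path });
  const chunks: Buffer[] = [];
  stream.on("data", (chunk) => chunks.push(chunk));
  stream.on("end", () => {
    // Streams finish in whatever order the server delivers them,
    // not necessarily the order they were requested in.
    console.log(`${path}: ${Buffer.concat(chunks).length} bytes`);
    if (--pending === 0) session.close();
  });
  stream.end();
}
```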
Hopefully the improved image clarifies the difference:
Request multiplexing
HTTP/2 can send multiple requests for data in parallel over a single TCP connection. This is the most advanced feature of the HTTP/2 protocol because it allows you to download web files asynchronously from one server. Most modern browsers limit the number of TCP connections to any one server. This reduces the additional round trip time (RTT), making your website load faster without any optimization, and makes domain sharding unnecessary.
Multiplexing in HTTP/2 is the type of relationship between the browser and the server that uses a single connection to deliver multiple requests and responses in parallel, creating many individual frames in the process.
Multiplexing breaks away from the strict request-response semantics and enables one-to-many or many-to-many relationships.
Simple answer (source):
Multiplexing means your browser can send multiple requests and receive multiple responses "bundled" into a single TCP connection. So the workload associated with DNS lookups and handshakes is saved for files coming from the same server.
Complex/detailed answer:
Look at the answer provided by @BazzaDP above.

The ways to make a scraper bot look more like a human

Due to a limitation in the API of a website I use for searching some products, I have to scrape the HTML of its products page. There's no other way, because it only offers a free API with that limitation. I need 10 or 100 times more items than its API returns; even if I call it 5 times, it'll return the same set of products as if it were 1 call.
I don't need to scrape many pages in a short period of time. Normally a scraper bot would scrape all that data in a few minutes, but for me a few hours is acceptable, so my scraper can behave more like a human.
The question is: what are the ways to make my scraper look like a normal user?
First, make fewer calls in a short period of time.
Use a headless browser, maybe?
Use a VPN? Or a proxy? Or both?
What are other pointers?
Note: in my case scraping is the only way to achieve what I want because the API doesn't work. So there's no question whether I should use the API or scraping. I simply can only use scraping.
You are basically heading in the right direction.
Yet I suspect that you haven't really mastered the API (or it's a weird one) if calling it 5 times returns the same set of products as 1 call. An API should let users access all available data (though with a frequency limit).
The items you've asked about:
Make fewer calls in a short period of time - Kind of true, yet you should still be clear about what request frequency is acceptable for a given site (avoiding detection and bandwidth throttling).
Use a headless browser - Yes. Abandon cookies, be anonymous.
Use a VPN? Or a proxy? - Proxy yes; use an appropriate proxy service that gives you enough flexibility to avoid detection. A VPN does not help, since its network nodes (where you scrape from) are limited in number and have (basically) static IPs.
I think this post might be of help to you.
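As a rough sketch of the pacing advice above - the product URLs, delays, and User-Agent string are illustrative assumptions, and any proxy configuration would depend on the HTTP client you choose:

```typescript
// Rough sketch: pace requests with randomized, human-like delays.
// The URLs and User-Agent string below are purely illustrative.
const productPages = Array.from(
  { length: 50 },
  (_, i) => `https://example.com/products?page=${i + 1}`
);

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function scrapeSlowly(): Promise<void> {
  for (const url of productPages) {
    const response = await fetch(url, {
      headers: {
        // Present a normal desktop browser User-Agent rather than a bot default.
        "User-Agent":
          "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36",
        "Accept-Language": "en-US,en;q=0.9",
      },
    });
    const html = await response.text();
    // ... parse the HTML here ...
    console.log(`${url}: ${html.length} bytes`);

    // Wait 1-4 minutes between pages so the whole run takes hours, not minutes.
    await sleep(60_000 + Math.random() * 180_000);
  }
}

scrapeSlowly().catch(console.error);
```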

GET vs. POST ajax requests: When and how to use either?

What are the strengths of GET over POST and vice versa when creating an ajax request? How do I know which I should use at any given time? Is it a security-minded decision?
Also, what is the difference in how they are actually sent?
GETs should be used for idempotent operations, that is, operations that can safely be repeated more than once without changing anything. Browsers will cache GET requests (for both normal and AJAX requests).
POSTs should generally be used for non-idempotent operations, like saving something, although you can use them for other operations if you want.
Data for GETs is sent in the URL query string. Data for POSTs is sent separately. Some browsers have a maximum URL length (I think Internet Explorer's is 2048 characters), and if the query string becomes too long you'll get an error.
You should use GET and POST requests in AJAX calls just as you would use GET and POST requests in normal calls. Basic rule of thumb:
Will the request modify anything in your Model?
YES: The request will modify (add/update/delete) data in your data store, or in some other way change the state of the server (cause the creation of a file, for example). Use POST.
NO: The request will not affect the state of anything (database, file system, sessions, ...) on the server, but merely retrieve information. Use GET.
POST requests are requests that you do not want to happen accidentally. GET requests are requests you are OK with happening when a user points a browser at a URL.
GET requests can be repeated quite simply since their data is encoded in the URL itself.
You should think about AJAX requests like you think about regular form requests (and their GET and POST)
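A small sketch of that rule of thumb applied to fetch-based AJAX calls; the /api/products endpoint and its parameters are made up for illustration:

```typescript
// GET: read-only lookup; parameters go in the URL query string,
// so the browser (and any intermediate cache) may cache the result.
async function searchProducts(term: string) {
  const params = new URLSearchParams({ q: term });
  const response = await fetch(`/api/products?${params}`); // hypothetical endpoint
  return response.json();
}

// POST: changes server state; data goes in the request body,
// and the request should not happen accidentally or be cached.
async function createProduct(name: string, price: number) {
  const response = await fetch("/api/products", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, price }),
  });
  return response.json();
}
```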
The Yahoo! Mail team found that when using XMLHttpRequest, POST is implemented in browsers as a two-step process: sending the headers first, then sending the data. So it's best to use GET, which only takes one TCP packet to send (unless you have a lot of cookies). The maximum URL length in IE is 2K, so if you send more than 2K of data you might not be able to use GET.
http://developer.yahoo.com/performance/rules.html#ajax_get

How do I get around the Twitter API caching problem?

I'm building a Twitter app that needs to check user data somewhat frequently, but I'm running into trouble with a cache that's, oddly, on Twitter's side, not mine.
Try the following user:
users/show in XML: http://twitter.com/users/show.xml?screen_name=technolocus
users/show in JSON: http://twitter.com/users/show.json?screen_name=technolocus
normal page: http://twitter.com/technolocus
All these methods of accessing data should return the same values, right? Check the statuses_count for each of them.
XML: 12548
JSON: 12513
normal: 12498
The normal method (i.e. just visiting the profile non-programmatically) serves up the most correct value of 12498. If I post or delete tweets on this account, it gets updated on the profile page instantly, but the XML and JSON methods still return cached data.
At this point, the values from the XML and JSON methods are 12 and 18 hours old, respectively.
I first tried to access these methods from my website (hosted on Dreamhost). I thought it was Dreamhost caching the responses. Then I tried to access the API directly from my browser. After that, I did a cURL from the command line on my machine. It wasn't Dreamhost. I thought it was probably my ISP (I think they use NetApp or something like that). Then I asked a friend in another corner of India to try it. He's getting the exact same cached responses as I am.
So it isn't Dreamhost's cache; it isn't my ISP or my country's cache. There's only one conclusion - Twitter is caching responses.
How in the heavens do I get around this?!?
Forgot to mention this: The script on the server is in PHP and is using cURL to retrieve the XML and JSON data from Twitter, while the local tests have been just using the browser. Both have the exact same result!
First, I think you should report this as a bug to Twitter. I see the same discrepancy as you, and no matter what, that seems like a bug. Even if they're caching, I'd expect a cache on their side to store an abstract form that would then be rendered into HTML, JSON, and XML. I wonder if what's actually going on is that these requests are performing similar but different queries.
Are you sure that the values are "old"? For example, did you actually delete about 50 updates recently (since you say the HTML one is newest but shows a lower count than the other two)? If you create another update do you see the HTML number increment while the other numbers stay the same, or do they all increment simultaneously?
If what you are saying is accurate, and it probably is, then generally you can't get around it. Twitter would want to cache its responses since they are costly to reproduce every single time.
When you use Twitter's APIs, you end up being bound by its conventions, even if that includes caching.
Your best bet is to tweet at @twitterapi and get them to give you a response as to why the two representations are divergent.
Add ?blah=xxxx to all URLs.
I don't develop anything against Twitter and occasionally manually "follow" three tweets by going to them in my browser. They always lag behind by half a day. I add ?asdsadsadsad to the URL (something different every time) and it always updates. I don't know what Twitter is doing here and came here while searching for the problem. But I guess this trick of appending a random value to the URL via GET will probably work for your API requests, too.
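A quick sketch of that cache-busting trick in code. Whether a given API tolerates unknown query parameters is not guaranteed, and the endpoint shown is the legacy one from the question, used only to illustrate the idea:

```typescript
// Append a throwaway query parameter so any intermediate cache treats each
// request as a distinct URL. This is a workaround, not a documented feature.
function cacheBust(url: string): string {
  const u = new URL(url);
  u.searchParams.set("_", Date.now().toString());
  return u.toString();
}

async function fetchFreshCount(): Promise<void> {
  // Legacy endpoint from the question above, shown only for illustration.
  const url = cacheBust("http://twitter.com/users/show.json?screen_name=technolocus");
  const data = await fetch(url).then((r) => r.json());
  console.log(data.statuses_count);
}

fetchFreshCount().catch(console.error);
```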

Resources