I am working on a program for a friend that will allow him to post and revise listings on his eBay account, and while reviewing the necessary documentation I have found a conflict in the stated call limits.
This page suggests that each API has a limit of 5,000 calls per day: https://go.developer.ebay.com/developers/ebay/ebay-api-call-limits
However, this page specifies that the Add Listing and Revise Listing calls can have up to 3.5 million calls per day each: http://developer.ebay.com/devzone/xml/docs/reference/ebay/ReviseItem.html
Can anyone advise me on what the limits actually are, or whether these Add and Revise Listing calls are counted separately from the per-application limit described at the first URL?
Simply signing up for an eBay developer account gets you access to both the sandbox and production environments.
Without lifting a finger you get 5,000 production API calls per day for your application. No approval or anything else is needed to get these 5,000.
If you need more, you'll have to submit your app to eBay for a compatibility check. This is basically to ensure you are complying with their TOS and using the API appropriately for your objective. Once approved, they will increase the call limit based on your needs.
The application check isn't hard to pass. I've had simple apps of five pages of code pass, as well as more complex apps. In fact, I've never failed an application check.
Make sure to explain in your application why you need higher limits, based on your usage now or in the future. If you don't ask, they might not grant them.
PS - If I recall, I never even had to submit my application (the actual software file) to eBay for the compatibility check. I just took screenshots of the application and documented what it would be used for, who would use it, and which API calls we were using.
I was initially given 1 million API calls a day and upon a simple request was able to increase it to 2 million without any fuss.
Lastly, if this application is for internal use only, make sure to tell them, because if you plan on collecting user credentials eBay will require more advanced checking. None of my previous apps collected or required credentials; they just used the Finding API.
Just be legit and don't do anything funny, and eBay will support all your needs with regard to using their API and increased call limits, whether you're a small fish or an enterprise.
UPDATE
The default now seems to be 1.5 million calls per day as soon as your app passes the compatibility check.
When applying, their major concern is error handling. For example: if a call fails, you have some method to either report it in a log file or take action to resolve or terminate. My last app was approved with a simple log tab that reported any API errors in the UI and stored them in a CSV file local to the machine the app was installed on.
They're not looking to approve based on your app's ingenuity; they're just trying to prevent abuse.
Your application needs to complete the eBay Compatible Application Check process to get more than 5,000 calls per day.
eBay makes this a "premium" feature. There's no way we can raise the limit without paying eBay a small fortune.
My company has over a hundred users of a specific CRM web application, which is provided to us as a service by another company.
The users of this application are very dissatisfied with its average response time, and I need to find a way to gather metrics over a certain period of time (let's say a week) to prove to the service provider that they are really providing bad service.
If the application were mine, I would get the metrics from New Relic or some other equivalent monitoring service, but since it is not, I'm looking for something that could do some sort of client side monitoring.
I already checked Page Speed from Google and YSlow from Yahoo, but both are only useful for testing the application over a few seconds; they are not meant for the long-term monitoring I need.
Would anybody know a way to get this kind of monitoring from a client side perspective?
LoadRunner is free for up to 50 virtual users, but what you really need is not a test tool but a synthetic user monitor that runs every n minutes and pulls the stats. You can build it yourself using LoadRunner 12, JMeter, or any other HTTP sampling technology. You could also use a service like Gomez for sampling, or mPulse from SOASTA for tracking every page component across all users.
Keep in mind that your browser's developer tools will time all of the components of the request to give you page times, as will Dynatrace for the web client.
If you have access to the web server then consider configuring the web server logs to capture the w3c time-taken field, which will track every request. Depending upon the server the level of granularity can be to the millionth of a second on each and every request.
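For illustration, here is a minimal sketch of pulling that time-taken field back out of the logs (Python is just an assumption about tooling, and the log file name is a placeholder; the field names follow the W3C extended log format):

```python
# Minimal sketch: summarise the W3C "time-taken" field from a web server log.
# Assumes the W3C extended format with a "#Fields:" header line; the file
# name below is a placeholder.
from collections import defaultdict

def slowest_requests(log_path, top_n=10):
    fields = []
    totals = defaultdict(lambda: [0, 0.0])         # uri -> [hit count, total time-taken]
    with open(log_path) as log:
        for line in log:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]           # field names follow the directive
                continue
            if line.startswith("#") or not line.strip():
                continue                            # skip other directives and blank lines
            row = dict(zip(fields, line.split()))
            uri = row.get("cs-uri-stem", "?")
            taken = float(row.get("time-taken", 0))  # milliseconds on IIS
            totals[uri][0] += 1
            totals[uri][1] += taken
    averages = {uri: total / count for uri, (count, total) in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

if __name__ == "__main__":
    for uri, avg in slowest_requests("u_ex240101.log"):
        print(f"{uri}: {avg:.1f} avg time-taken")
```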
You could also look at a service like LiteSquare which can process those web logs and provide ammunition for changes to the server to improve performance on a no-gain, no-charge model.
One (expensive) solution would be to use LoadRunner's endurance-test feature.
Another tool is Oracle OATS.
JMeter is a free tool, though I'm not sure if it's reliable enough to run for a whole week.
These are load generator tools, so if you are testing as a single client, you should carefully choose your load amount (e.g. one user).
Last but not least, you could create your own web service client and a cron job to run it at specified times of day and log the access time.
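As a rough illustration of that last suggestion, here is a minimal Python sketch (the URL and output path are placeholders; you would point it at the CRM pages your users actually complain about and schedule it with cron):

```python
# Minimal synthetic-monitoring sketch: fetch a page, time it, append to a CSV.
# Run from cron, e.g.:  */5 * * * * /usr/bin/python3 probe.py
# The URL and output path below are placeholders.
import csv
import datetime
import time
import urllib.request

URL = "https://crm.example.com/login"              # a page your users actually hit
LOG_FILE = "/var/log/crm_response_times.csv"

def probe(url):
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=30) as response:
            response.read()                        # include the full body download in the timing
            status = response.status
    except Exception as exc:                       # timeouts and HTTP errors are data points too
        status = f"error: {exc}"
    return time.monotonic() - start, status

if __name__ == "__main__":
    elapsed, status = probe(URL)
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.datetime.now().isoformat(), URL, status, f"{elapsed:.3f}"]
        )
```

A week of these samples, graphed, is usually enough to show the provider when and how badly response times degrade.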
If what you want is to get data from their server, that is impossible without hacking into it. All you can do is monitor the website as a client using some of the above tools, make a report, and present it to them. But even so, they could challenge your bandwidth, your test method, etc.
I recommend that you negotiate with them to share their logs and to prove that their system can support a certain amount of load. If you are their customer, you can file a complaint or evaluate other offerings.
Dynatrace was already mentioned in combination with load testing. Since you said you want to monitor your live system, I want to bring Dynatrace up again. Most of the time it is used for live-system monitoring, to understand what end users are actually doing. It is also available as a 30-day trial, so there is no need to buy it; use it for your sanity check: http://bit.ly/dttrial
So I've built an iOS app (my first) and I want to distribute it for free. It's a content creation app, and my plan is to allow the user full access to record up to 5 records of content for the purpose of evaluation. If the user likes the app and wants to continue generating new content, he'll have to purchase an unlock via in-app-purchase.
I've looked at the documentation, and I'm going to use MKStoreKit to do this. I understand that I'm going to be creating a non-consumable, non-subscription product to sell.
So my problem is that while I can find lots of information on HOW to do the actual IAP, I can't find anything on where or how to track that it was purchased. That is, how do I go about ensuring the app is unlocked? Does it require a round trip to the AppStore servers on every app startup? If this is the case, I'm a bit concerned about it because network connectivity is not a guarantee.
Another possibility I've been thinking about is writing some kind of semaphore somewhere when unlock is purchased, whether it's a file or just modifying a setting in a .plist. This is certainly optimal from a user-experience point of view, but can it be easily hacked? If I write a file, can a user just take that file and distribute it to whomever?
Is there some standard mechanism or methodology that's typically employed here?
Thanks for any assistance.
What I usually do is check with the Apple servers whether the content has been purchased. If so, I set an attribute in a .plist and check that attribute to unlock the content.
There are two common approaches: the first is to verify with Apple only when the attribute is not set (or not set to a specific value); the other, which is more secure but, in my opinion, not the best, is to verify with the Apple servers every time the app is executed.
Keep in mind that if your application is hacked there is not much you can do about it, but a great number of users (most of them) don't care about hacks or even jailbreaks... so don't worry about it; apply the check when the app opens, and only if it is not unlocked yet.
I have a REST API.
It offers the services get person, get price, and get route.
How can I determine how long each call to each of these services takes?
For example, get person is very fast (around 5 ms), while get route takes 2 seconds as it needs to make a remote call to the Google API.
I could get the time at the beginning of the request and just before the response is submitted, compute the difference and log that to a database.
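Roughly something like this minimal sketch (Python/WSGI is used here purely to illustrate the idea; my actual stack may differ):

```python
# Rough idea only: time each request in a middleware and log the duration.
# (Python/WSGI chosen purely for illustration; the timing stops when the
# response iterable is returned, not when it is fully streamed.)
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("timing")

class TimingMiddleware:
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        start = time.monotonic()
        try:
            return self.app(environ, start_response)
        finally:
            elapsed_ms = (time.monotonic() - start) * 1000
            # In practice this would be written to a database or metrics store.
            log.info("%s %s took %.1f ms",
                     environ.get("REQUEST_METHOD"),
                     environ.get("PATH_INFO"),
                     elapsed_ms)
```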
But that would add quite a bit of overhead, so how would you do it? Would you do it at all, or just rely on on-machine profiling? What tools would you use that minimize the overhead?
What I want is to determine whether there is any component that could have low availability in production.
Thank you
So it looks like you want 2 things:
Minimal impact on your production environment
Figuring out how long each request takes
In that case I would go for the IIS logs. With Windows Azure Diagnostics you can get this out of the box by adding the module and configuring it; as a result, your IIS logs will be stored in your storage account.
After that you can download these logs and use Log Parser to execute some interesting queries which allow you to find the slowest pages, the pages with the most hits, the pages with the most exceptions... Log Parser can be a little hard to work with if you have never used it before. Take a look at Scott Hanselman's blog post covering the Log Parser Lizard GUI tool, "Analyze your Web Server Data and be empowered with LogParser and Log Parser Lizard GUI".
This powerful tool can give you all the information you need with minimal impact on your production instances.
Is it possible to create a script that is executed outside of the server, or with a browser add-on, that automatically fills in form values and then submits the form, ready to be parsed by the server? That way, in three minutes a billion fake accounts could be registered very easily. Imagine Facebook, which does not show any CAPTCHA visible to the human: a browser add-on could perform the form submission and insert values retrieved from a local database of fresh e-mail addresses (since duplicate e-mails are checked for). Could thousands and thousands of fake accounts be created each day across the globe?
What is the best method to prevent fake accounts? Even imagine the scenario of a center with rotating IPs and human beings registering just to choke the databases, reaching 30-50 million accounts in a year. Thanks
This is probably better on the Security.Stackexchange.com website, but...
According to the OWASP Guide to Authentication, CAPTCHAs are actually a bad thing. Not only do they not work and induce additional headaches, but in some cases (per OWASP) they are illegal.
CAPTCHA
CAPTCHA (Completely automated Turing Tests To Tell Humans and Computers Apart) are illegal in any jurisdiction that prohibits discrimination against disabled citizens. This is essentially the entire world. Although CAPTCHAs seem useful, they are in fact trivial to break using any of the following methods:
• Optical Character Recognition. Most common CAPTCHAs are solvable using specialist CAPTCHA-breaking OCR software.
• Break a test, get free access to foo, where foo is a desirable resource.
• Pay someone to solve the CAPTCHAs. The current rate at the time of writing is $12 per 500 tests.
Therefore implementing CAPTCHAs in your software is most likely to be illegal in at least a few countries, and worse, completely ineffective.
Other methods are commonly used.
The most common, probably, is the e-mail verification process. You sign up, they send you an email, and only when you confirm it is the account activated and accessible.
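As a sketch of how the confirmation link itself can be generated and checked without a database lookup (Python here just for illustration; the secret is a placeholder and a real implementation would handle more edge cases):

```python
# Minimal sketch of an e-mail verification token: an HMAC over the address
# plus an issue timestamp, checked when the user clicks the link.
import hashlib
import hmac
import time

SECRET = b"replace-with-a-real-secret"     # placeholder
MAX_AGE_SECONDS = 60 * 60 * 24             # links valid for one day

def make_token(email: str) -> str:
    issued = str(int(time.time()))
    sig = hmac.new(SECRET, f"{email}|{issued}".encode(), hashlib.sha256).hexdigest()
    return f"{issued}.{sig}"

def verify_token(email: str, token: str) -> bool:
    try:
        issued, sig = token.split(".")
        age = time.time() - int(issued)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{email}|{issued}".encode(), hashlib.sha256).hexdigest()
    return age <= MAX_AGE_SECONDS and hmac.compare_digest(sig, expected)
```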
There are also several interesting alternatives to CAPTCHA that perform the same function, but in a manner that's (arguably, in some cases) less difficult.
A more difficult approach may be to track form submissions from a single IP address and block obvious attacks. But that can be spoofed and bypassed.
Another technique is to use JavaScript to time how long the user spent on the web page before submitting. Most bots will submit things almost instantly (if they even run the JavaScript at all), so checking that a second or two has elapsed since the page rendered can detect bots, though bots can be crafted to fool this as well.
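A rough sketch of the server-side half of such a check (Python, framework-agnostic; it assumes you can stash the render time in whatever per-visitor session storage your stack provides):

```python
# Sketch of the elapsed-time check: record when the form was rendered and
# reject submissions that come back suspiciously fast. "session" stands in
# for whatever per-visitor storage your framework provides.
import time

MIN_SECONDS_TO_FILL_FORM = 2.0

def on_form_render(session: dict) -> None:
    session["form_rendered_at"] = time.time()

def looks_like_a_bot(session: dict) -> bool:
    rendered_at = session.get("form_rendered_at")
    if rendered_at is None:
        return True        # the form was never rendered for this visitor
    return time.time() - rendered_at < MIN_SECONDS_TO_FILL_FORM
```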
The honeypot technique (a form field hidden from humans with CSS, which naive bots tend to fill in) can also help to detect such form submissions. There's a nice example of an implementation here.
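The check itself is tiny; a minimal sketch (the field name is arbitrary, and the field would be hidden with CSS so humans never see it):

```python
# Honeypot sketch: the form includes a field hidden via CSS. Humans leave it
# empty; naive bots fill every field, so a non-empty value flags the submission.
HONEYPOT_FIELD = "website"   # arbitrary but human-plausible name

def is_honeypot_triggered(form_data: dict) -> bool:
    return bool(form_data.get(HONEYPOT_FIELD, "").strip())

# Example:
#   is_honeypot_triggered({"username": "alice", "website": ""})        -> False (likely human)
#   is_honeypot_triggered({"username": "bot", "website": "http://x"})  -> True  (reject)
```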
That page also talks about a form-token method. The form token is one I'd never heard of until just now in this context. It looks similar to an anti-CSRF token in concept.
All told, your best defense, as with anything security related, is a layered approach using more than one of these measures. The idea is to make your site more difficult than average, so that your attacker gives up and tries a different site. This won't block persistent attackers, but it will cut down on the drive-by attacks.
To answer your original question, it all depends on what preventative measures the website developer took against automated account creation.
Any competent developer would address this in the requirements gathering phase, and plan for it. But there are plenty of websites out there coded by incompetent developers/teams, even among big-name companies that should know better.
This is possible using simple scripts, which may or may not use a browser extension (for example, scripts written in Perl, or shell scripts using wget/curl).
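For example, a few lines of Python are enough to submit a registration form repeatedly, no browser required (the URL and field names below are made up):

```python
# Sketch of automated form submission using only the standard library.
# The URL and field names are placeholders.
import urllib.parse
import urllib.request

def register(email: str, password: str) -> int:
    data = urllib.parse.urlencode({
        "email": email,
        "password": password,
        "confirm_password": password,
    }).encode()
    request = urllib.request.Request("https://example.com/register", data=data)
    with urllib.request.urlopen(request, timeout=30) as response:
        return response.status

if __name__ == "__main__":
    for i in range(10):                    # trivially scaled up with a bigger loop
        print(register(f"user{i}@example.com", "hunter2"))
```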
Most websites rely on tracking the number of requests received from a particular browser/IP before they enable a CAPTCHA.
This information can be tracked on the server side with a finite expiry time, or, in the case of clients using multiple IPs (for example, users on a DHCP connection), it can be tracked using cookies.
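A minimal sketch of that server-side counter with expiry (in-memory Python; a real site would keep this in a shared store such as memcached or Redis):

```python
# Sketch: count registration attempts per IP within a sliding window and
# require a CAPTCHA once a threshold is crossed. In-memory only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600        # look at the last hour
THRESHOLD = 5                # attempts allowed before the CAPTCHA kicks in

_attempts = defaultdict(deque)   # ip -> timestamps of recent attempts

def record_attempt(ip: str) -> None:
    _attempts[ip].append(time.time())

def captcha_required(ip: str) -> bool:
    now = time.time()
    recent = _attempts[ip]
    while recent and now - recent[0] > WINDOW_SECONDS:   # drop expired entries
        recent.popleft()
    return len(recent) >= THRESHOLD
```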
Yes. It is very easy to automate form submissions, for instance using wget or curl. This is exactly why CAPTCHAs exist, to make it difficult to automate form submissions.
Verification e-mails are also helpful, although it'd be fairly straightforward to automate opening them and clicking on the verification links. They do provide protection if you're vigilant about blocking spammy e-mail domains (e.g. hotmail.com).
Are there any open source (or I guess commercial) packages that you can plug into your site for monitoring purposes? I'd like something that we can hook up to our ASP.NET site and use to provide reporting on things like:
performance over time
current load
page traffic
SQL performance
CPU time monitoring
Ideally in c# :)
With some sexy graphs.
Edit: I'd also be happy with a package that I can feed statistics and views of data to, and it would analyse trends, spot abnormal behaviour (e.g. "no one has logged in for the last hour. is this Ok?", "high traffic levels detected", "low number of API calls detected") and generally be very useful indeed. Does such a thing exist?
At my last office we had a big screen which showed us loads and loads of performance counters over a couple of time ranges, and we could spot weird stuff happening, but the data was not stored and there was no way to report on it. It's a package for doing this that I'm after.
It should be noted that Google Analytics is not an accurate representation of website usage. This is because the web beacon (web bug) used on the page does not always load, for these reasons:
Google Analytics servers are called by millions of pages every second and cannot always process the requests in a timely fashion.
Users often browse away from a page before the full page has loaded, so there is not enough time to load Google's web beacon and record a hit.
Google Analytics requires JavaScript, which can be disabled.
Quite a few (though not a substantial number of) people block google-analytics.com in their browsers, myself included.
The physical log files are the best 'real' representation of site usage as they record every request. Alternatively there are far better 'professional' packages, of which Omniture is my favourite, which have much better response times, alternative methods for recording actions and more functionality.
If you're after things like server data, would RRDTool be something worth looking at?
It's not really a web-server stats program, though, and I have no idea how it would scale.
Edit:
I've also just found Splunk Swarm, if you're interested in something that looks "cool".
Google Analytics is free (up to 50,000 hits per month, I think), is easy to set up with just a little JavaScript snippet inserted into your header or footer, and has great detailed reports with some very nice graphs.
Google Analytics is quick to set up and provides more sexy graphs than you can shake a stick at.
http://www.google.com/analytics/
Not invented here, but it's on my to-do list to set up.
http://awstats.sourceforge.net/
@Ian
Looks like they've raised the limit. Not very surprising, it is Google after all ;)
This free version is limited to 5 million pageviews a month - however, users with an active Google AdWords account are given unlimited pageview tracking.
http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=55543
http://www.serverdensity.com/
One option is to use external monitoring tools, which will monitor the web performance from outside the firewall by simulating end user activities.
Catchpoint Systems has an interesting approach that requires very little coding and gives you performance stats from outside the datacenter as well as from inside ASP.NET (like processing time, etc.).
http://www.catchpoint.com/products.html