Is there any way for a developer to test the Google Keyword Tool API for free?
You can test it against the sandbox without charge, but you don't get the real results that you would from the live system.
Testing against the live API uses API units. However, I usually find I only need to pull down a few keywords for testing, and even with many testing cycles it doesn't really cost very much.
One thing you can do to reduce costs is to get the tool working with the sandbox to make sure there aren't any errors, etc. Then, once it's working, just run a few tests in production to check that the data looks correct.
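For example, a small test pull might look something like this. This is a minimal sketch, assuming the googleads Python client library and the old AdWords TargetingIdeaService (since sunset in favor of the Google Ads API); the config file, query, and API version are placeholders:

```python
# Sketch: pull a handful of keyword ideas to verify the tool works,
# keeping the result count small so the test uses few API units.
from googleads import adwords

client = adwords.AdWordsClient.LoadFromStorage('googleads.yaml')
service = client.GetService('TargetingIdeaService', version='v201809')

selector = {
    'ideaType': 'KEYWORD',
    'requestType': 'IDEAS',
    'requestedAttributeTypes': ['KEYWORD_TEXT', 'SEARCH_VOLUME'],
    'searchParameters': [{
        'xsi_type': 'RelatedToQuerySearchParameter',
        'queries': ['example keyword'],  # keep this short while testing
    }],
    'paging': {'startIndex': 0, 'numberResults': 5},  # only a few results
}

page = service.get(selector)
for entry in page['entries']:
    print(entry)
```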
Having an AdWords account is free. You only pay if you have an active ad running. The keyword tool is here
I'm using Python and Plaid's Development environment to download bank balances and transactions. To get the initial access tokens, I'm launching Link from the quickstart, and can do that in both standard and update mode.
The problem I'm running into is how frequently my API calls return the ITEM_LOGIN_REQUIRED error and I have to re-authenticate. For a Regions account I've been testing, this happens a few times throughout the day. For a Pinnacle Financial Partners account, it happens almost immediately after updating the access token. As in, I can log in through Link, successfully fire an auth/get request, and by the time I can send another request (e.g., balance/get), I'm already getting ITEM_LOGIN_REQUIRED again.
As I'm evaluating Plaid for production use, is this frequent re-authentication atypical? Is it a known limitation of Development, or of specific banks even in Production? I've seen some banks (Bank of America) only work in Production, but I'm hoping what I'm experiencing is just the nature of working in Development. Thanks.
Development vs. Production environments are virtually identical and shouldn't impact how often you hit ITEM_LOGIN_REQUIRED.
What you're seeing is atypical, though. Unless you have multi-factor auth turned on and configured not to trust known devices, this shouldn't happen.
Assuming you don't have that configured, would you mind submitting a support ticket so Plaid Support can look into this and help figure out why it's happening?
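In the meantime, you can at least handle the error gracefully. Here is a minimal sketch of detecting ITEM_LOGIN_REQUIRED and starting the update-mode flow, assuming the legacy plaid-python client (v7-style API); all credential and token values are placeholders:

```python
# Sketch: catch ITEM_LOGIN_REQUIRED on balance/get and create a
# public_token so Link can be relaunched in update mode.
from plaid import Client
from plaid.errors import ItemError

client = Client(client_id='CLIENT_ID', secret='SECRET',
                public_key='PUBLIC_KEY', environment='development')

try:
    response = client.Accounts.balance.get('ACCESS_TOKEN')
    print(response['accounts'])
except ItemError as e:
    if e.code == 'ITEM_LOGIN_REQUIRED':
        # The Item needs re-authentication: create a public_token and
        # relaunch Link in update mode with it.
        token = client.Item.public_token.create('ACCESS_TOKEN')
        print('Relaunch Link in update mode with:', token['public_token'])
    else:
        raise
```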
The Google Play Developer Console has a new channel called "internal test". It is the last one in the list. How does it compare with the alpha channel? I'm not understanding its use case. It seems you're allowed 100 users.
I tried looking for documentation, but I only see that it makes the app available faster. Is that the prime benefit?
That is correct: the internal test track's primary advantage is that APKs published to it are available to testers within seconds, instead of up to several hours for alpha or beta.
Also, internal testers can access app versions that are not otherwise available to users on other tracks due to various restrictions, such as device exclusions.
Finally, internal testers do not have to pay to acquire the test version of the app, if the app is paid.
The internal test track is designed for internal testing use cases.
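If you publish builds from a script, the internal track is addressed like any other track in the Google Play Developer Publishing API. A rough sketch using google-api-python-client (the package name, service-account key file, and APK path are placeholders):

```python
# Sketch: upload an APK and release it on the 'internal' track via the
# androidpublisher v3 API (edit -> upload -> assign track -> commit).
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload
from google.oauth2 import service_account

SCOPES = ['https://www.googleapis.com/auth/androidpublisher']
creds = service_account.Credentials.from_service_account_file(
    'service-account.json', scopes=SCOPES)
service = build('androidpublisher', 'v3', credentials=creds)

package = 'com.example.app'  # placeholder
edit_id = service.edits().insert(packageName=package, body={}).execute()['id']

apk = service.edits().apks().upload(
    packageName=package, editId=edit_id,
    media_body=MediaFileUpload('app-release.apk',
                               mimetype='application/octet-stream')).execute()

service.edits().tracks().update(
    packageName=package, editId=edit_id, track='internal',
    body={'track': 'internal',
          'releases': [{'versionCodes': [apk['versionCode']],
                        'status': 'completed'}]}).execute()
service.edits().commit(packageName=package, editId=edit_id).execute()
```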
The documentation is a great first port of call for an introduction to the Yammer export API. It gives a great example of what the API gives back, detailing the file types, etc. I would like to see some sample files. Does anybody know where I might find these? Even better, in a perfect world, is there some kind of simulator to demonstrate the use of the API?
I don't have admin access and therefore cannot see the exports from my network. I will probably never get this type of access. We wish to see examples so we can decide whether it's worthwhile requesting an export on a schedule from the global people who do have admin access. Making this request is time-consuming and not all that straightforward.
If your administrator will not grant you access to the data export for your Yammer network, you do have an option to test the data export and see what it looks like in practice:
Sign up for a trial O365 subscription for a tenant that includes Yammer.
Post some test content.
Perform a data export.
If you need the tenant to live longer than the trial period, you'll need to pay for it. Depending on who you are working for, you may be able to assign some licences to a test tenant. Maintaining a separate test tenant is a best practice.
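Once the trial tenant has some content, the export itself is a single authenticated GET that returns a ZIP of CSV files. A minimal sketch, assuming the requests library and a verified-admin OAuth token (the token and date are placeholders):

```python
# Sketch: download a Yammer data export covering everything since the
# given timestamp and save the returned ZIP archive to disk.
import requests

TOKEN = 'VERIFIED_ADMIN_TOKEN'  # placeholder; needs verified-admin rights
params = {'since': '2021-01-01T00:00:00+00:00'}

resp = requests.get('https://www.yammer.com/api/v1/export',
                    headers={'Authorization': 'Bearer ' + TOKEN},
                    params=params, stream=True)
resp.raise_for_status()

with open('export.zip', 'wb') as f:
    for chunk in resp.iter_content(8192):
        f.write(chunk)
```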
I am trying to use the sample code provided for the Amazon Alexa API, running the hello world / history buff examples on my computer. How do I test the request and response formats from my local machine? The README file says to visit http://echo.amazon.com/#skills, but I could see nothing there, as it is more about connecting to the device. I don't have the device, but I would like to test things locally through my laptop.
We have a tool that we built specifically for this purpose:
https://bespoken.tools/blog/2016/08/24/introducing-bst-proxy-for-alexa-skill-development
Requests and responses from Alexa will be sent directly to your development laptop, so that you can quickly code and debug without having to do any deployments. We have found this to be very useful for our own development.
Our Github project is here:
https://github.com/bespoken/bst
We are also adding other useful commands for Alexa development.
Yes, the Test tab in the Alexa Developer Console allows you to interact completely with your skill during development.
You will type in your utterances instead of speaking them, but from a program logic perspective, there is no difference.
The Test page also provides a place to type in your skill's responses, to hear what they'll actually sound like. I recommend that you do so if you don't have an actual device. Sometimes adding or removing a comma can help make the responses easier to understand, or sound more natural.
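For reference, the skill's reply is just JSON, and the Test page plays whatever lands in outputSpeech. A minimal sketch of a response body (the SSML text itself is only an example):

```python
# Sketch: the standard Alexa skill response envelope with an SSML
# outputSpeech; punctuation and <break> tags control the pacing you
# hear when you play it back on the Test page.
response = {
    'version': '1.0',
    'response': {
        'outputSpeech': {
            'type': 'SSML',
            'ssml': '<speak>Hello, world. <break time="500ms"/> '
                    'That pause was added with SSML.</speak>',
        },
        'shouldEndSession': True,
    },
}
```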
Use http://ngrok.com
See my video for a tutorial:
https://youtu.be/eC2zi4WIFX0?t=108
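If you want something concrete to put behind the ngrok tunnel, here is a minimal sketch using the third-party flask-ask library (the intent name is a placeholder). Run it, then run `ngrok http 5000` and point your skill's HTTPS endpoint at the ngrok URL:

```python
# Sketch: a tiny local Alexa skill endpoint; ngrok exposes port 5000
# over HTTPS so the Alexa service can reach your laptop.
from flask import Flask
from flask_ask import Ask, statement

app = Flask(__name__)
ask = Ask(app, '/')

@ask.intent('HelloWorldIntent')  # placeholder intent name
def hello():
    return statement('Hello from my laptop!')

if __name__ == '__main__':
    app.run(port=5000)
```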
I'm guessing the key point in the OP's question is "don't have the device".
There is a web simulator at https://echosim.io
It behaves just like any other Alexa 'device'. Log in with your Amazon account and it picks up all your selected skills, etc. It shows up as just another device in the Alexa app.
Only downsides: You have to click to talk, and it's pretty slow, presumably because it has to receive, buffer, convert and re-ship the audio.
Also, I'm not sure how you register/connect to the Alexa service in the first place without an Echo/Dot device, but I assume there is a way.
UPDATE:
More recently, there are a number of free third-party apps on Android and iOS that also simulate an Alexa/Echo device. They can be less clunky than the website. Search for 'Alexa' in your App/Play store and try a few of them out. "Reverb" is one: https://itunes.apple.com/us/app/reverb-for-amazon-alexa/id1144695621
Good luck.
I don't have the device, but I would like to test things locally through my laptop.
If you are developing the skill using an AWS Lambda function in Python, have a look at: https://pypi.python.org/pypi/FirstAlexaSkills/0.1.2
It can generate custom Alexa events based on your parameters (utterances, slot variables) and allows you to create test cases against your local code, as well as against AWS Lambda itself.
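Even without that package, you can feed a hand-built event straight into your handler function. A minimal sketch (the module name and intent name are placeholders for your own skill code):

```python
# Sketch: invoke an Alexa skill's Lambda handler locally with a
# hand-built IntentRequest event, then print the JSON it returns.
import json
from lambda_function import lambda_handler  # placeholder module

event = {
    'version': '1.0',
    'session': {'new': True, 'sessionId': 'SessionId.test',
                'application': {'applicationId': 'amzn1.ask.skill.test'},
                'user': {'userId': 'amzn1.ask.account.test'}},
    'request': {'type': 'IntentRequest', 'requestId': 'EdwRequestId.test',
                'locale': 'en-US', 'timestamp': '2016-01-01T00:00:00Z',
                'intent': {'name': 'HelloWorldIntent', 'slots': {}}},
}

print(json.dumps(lambda_handler(event, None), indent=2))
```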
You can also test your skill locally by following this tutorial:
How to test your Alexa skill locally
Since CodenameOne no longer supports the cloud storage API, and parse.com is going to retire soon as well, does CodenameOne have any plan to release a new cloud storage API, or to provide suggestions/guidelines to help developers deal with the parse4cn1 library code, cloud code, database structure, and data in parse.com?
That is something you will have to figure out yourself, as parse4cn1 was initially contributed by a community member and wasn't developed by the CodenameOne team.
You can use simple web services created in PHP, Python, or Java, hosted alongside your content with any ISP.
You may also have a look at Amazon AWS, which is promising; they provide a cloud solution, but their SDKs are not yet integrated with CodenameOne.
I made the parse4cn1 lib and I'm also wondering what's smartest to do. With the announcement of Parse.com's imminent shutdown, there's been a lot of discussion around alternatives. My feeling is that "the dust is yet to settle" as to which options are best and reliable for the longer term (it would be a pity to migrate to another service only for it to be shut down soon). So I personally plan to wait until sometime in Q2 to do a proper evaluation of the alternatives. Hopefully, there'll be more clarity then.
The option to host one's own Parse server (e.g. on AWS or Heroku) is getting interesting. They recently announced support for push notifications on iOS and Android. If (when?) they open source the Parse.com dashboard code, I think that option would be much more interesting.
At some point in the coming months, I plan to make a parse4cn1 release that exposes an option to set the server path. With that, anyone migrating to the Parse Server option should, in principle, be able to continue using the cn1lib, at least for the features that are supported by the open-source Parse Server.
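In the meantime, the open-source Parse Server speaks the same REST API as Parse.com, so you can already exercise a self-hosted instance directly. A minimal sketch with Python's requests (the server URL, app ID, and class name are placeholders for your own deployment):

```python
# Sketch: create an object on a self-hosted Parse Server via its REST
# API; the response contains the new objectId and createdAt timestamp.
import requests

SERVER_URL = 'https://example.com/parse'  # placeholder server path
headers = {'X-Parse-Application-Id': 'APP_ID',  # placeholder app ID
           'Content-Type': 'application/json'}

resp = requests.post(SERVER_URL + '/classes/GameScore',
                     headers=headers, json={'score': 1337})
resp.raise_for_status()
print(resp.json())
```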
PS: Here are pointers to some of these discussions on Parse alternatives:
https://github.com/relatedcode/ParseAlternatives
http://www.slant.co/topics/5219/compare/~firebase_vs_kumulos_vs_kinvey