Trying to write up cucumber feature steps for a REST API test.
I am not sure which approach is better:
Given I log in with username and password
When I add one "tv" into my cart
And I check my cart
Then I should see the item "tv" is in my cart
or
Given the client authenticates with username and password
When the client sends a POST to "/cart/add" with body "{item: body}"
Then the response code should be "200"
And the response body should contain "{success: true}"
When the client sends a GET to "/cart"
Then the response code should be "200"
And the response body should contain "{"items": ["tv"]}"
Is there any convention to follow when writing cucumber steps for a REST API?
I just stumbled on this helpful article: https://www.gregbeech.com/2014/01/19/effective-api-testing-with-cucumber/
To summarize...
Scenario: List fruit
Given the system knows about the following fruit:
| name | color |
| banana | yellow |
| strawberry | red |
When the client requests a list of fruit
Then the response is a list containing 2 fruits
And one fruit has the following attributes:
| attribute | type | value |
| name | String | banana |
| color | String | yellow |
And one fruit has the following attributes:
| attribute | type | value |
| name | String | strawberry |
| color | String | red |
Validating a result against JSON is tricky because, if the result is an array, the elements may not come back in the same order that your test expects.
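One way around that is to parse the response and compare it against the expected rows without caring about order, e.g. with RSpec's match_array. A minimal sketch, assuming the request step stored the raw response in @last_response and that the response objects carry exactly the attributes listed in the table (the step wording here is illustrative, not from the article):
require 'json'

Then(/^the response is a list containing (\d+) fruits$/) do |count|
  fruits = JSON.parse(@last_response.body)     # @last_response set by the request step
  expect(fruits.length).to eq(count.to_i)
end

Then(/^the list contains the following fruit in any order:$/) do |table|
  actual   = JSON.parse(@last_response.body)
  expected = table.hashes                      # [{"name" => "banana", "color" => "yellow"}, ...]
  expect(actual).to match_array(expected)      # match_array ignores element order
end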
Edit: Updated link using finderAUT's comment. Thanks!
Here's a (close enough) example of what the Pragmatic Programmers' "The Cucumber Book" says about testing REST APIs via Cuke, and it seems to more closely match your second example:
Feature: Addresses
In order to complete the information on the place
I need an address
Scenario: Addresses
Given the system knows about the following addresses:
[INSERT TABLE HERE or GRAB FROM DATABASE]
When client requests GET /addresses
Then the response should be JSON:
"""
[
{"venue": "foo", "address": "bar"},
{ more stuff }
]
"""
STEP DEFINITION:
Given(/^the system knows about the following addresses:$/) do |addresses|
  # addresses is a Cucumber::Ast::Table
  File.open('addresses.json', 'w') do |io|
    io.write(addresses.hashes.to_json)
  end
end

When(/^client requests GET (.*)$/) do |path|
  @last_response = HTTParty.get('local host url goes here' + path)
end

Then(/^the response should be JSON:$/) do |json|
  JSON.parse(@last_response.body).should == JSON.parse(json)
end
env.rb file:
require File.join(File.dirname(__FILE__), '..', '..', 'address_app')
require 'rack/test'
require 'json'
require 'sinatra'
require 'cucumber'
require 'httparty'
require 'childprocess'
require 'timeout'

server = ChildProcess.build("rackup", "--port", "9000")
server.start

Timeout.timeout(3) do
  loop do
    begin
      HTTParty.get('local host here')
      break
    rescue Errno::ECONNREFUSED => try_again
      sleep 0.1
    end
  end
end

at_exit do
  server.stop
end
I've been using cucumber to test and, more importantly, to document the API I created with rails-api in my current project. I looked around for some tools to use and ended up with a combination of cucumber-api-steps and json_spec. It worked well for me.
There is no convention on how to write the cucumber steps. The way you write your steps depends on how you want to use your cucumber suite. I used the cucumber output as the reference for our AngularJS client devs to implement the API client, so my cucumber steps contained the actual JSON requests and responses along with the status code for each scenario. This made it really easy to communicate with the client-side team whenever something changed (especially when that team was not physically present at my workplace).
Every time I created or updated an API, the CI server would run cucumber as part of the build and move the HTML-formatted output to a "build_artifacts" location that can be opened in the browser. The client-side devs would always get the most recent reference that way.
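That build step can be as simple as a Rake task wrapping cucumber's HTML formatter; a minimal sketch (the task name and output path are just examples, not from my project):
# Rakefile -- run the API features and publish the HTML output as a build artifact
require 'cucumber/rake/task'

Cucumber::Rake::Task.new(:api_docs) do |t|
  t.cucumber_opts = %w[--format html --out build_artifacts/api_docs.html]
end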
I've written all of this down in a blog post about creating a tested, documented and versioned JSON API, hope it helps you in some way.
One of Cucumber's original intents, which shapes its design, is to bridge the gap between the technical implementation and the people who know the business needs, so that the test descriptions can be written and/or understood by non-developers. As such, it's not a great fit for detailed technical specs or blow-by-blow unit testing.
So that would point me to your first test description, if that's also the reason you are using Cucumber.
There is no major problem with implementing tests like the second version; Cucumber can support it, and there probably aren't many statement types you would need to parse either. But you could end up fighting the test framework a little, or going against your rationale for using Cucumber in the first place.
As for a convention, I am not aware of enough REST API tests in practice to comment, and none that I have seen tested have used Cucumber as the framework.
Update: Browsing around SO on the subject, I did find a link to this: https://github.com/jayzes/cucumber-api-steps which is more similar to your second format.
There are a few libraries now for server side REST testing with cucumber in Ruby. Here's a couple:
Cucumber-API-Steps. (Recommended)
Cucumber-API. A small tutorial for that is here.
The library I've been using for server side REST testing with cucumber is Cucumber-API-Steps.
Cucumber-API-Steps
Here's how I'd write your test using 'cucumber-api-steps' (Recommended):
@success
Scenario: Successfully add to cart
Given I am logged in
When I send a POST request to "/cart/add" with the following:
| item | body |
Then the response status should be "200"
And the JSON response should have "success" with the text "true"
When I send a GET request to "/cart"
Then the response status should be "200"
And the JSON response should be "{'items': ['tv']}"
And here's what my tests look like using 'cucumber-api-steps':
@success
Scenario: Successfully log in
Given I am logged out
When I send a POST request to "/login" with:
| username | katie@gmail.com |
| password | mypassword |
Then the response status should be "200"
And the JSON response should have "firstName" with the text "Katie"
Cucumber-API
Here's how I'd write your test using 'cucumber-api':
@success
Scenario: Successfully add to cart
Given I am logged in
When I send a POST request to "/cart/add"
And I set JSON request body to '{item: body}'
Then the response status should be "200"
And the response should have key "success" with value "true"
When I send a GET request to "/cart"
Then the response status should be "200"
And the response should follow "{'items': ['tv']}"
And here's what my tests look like using 'cucumber-api':
@success
Scenario: Successfully log in
Given I am logged out
When I send a POST request to "/login" with:
| username | katie@gmail.com |
| password | mypassword |
Then the response status should be "200"
And the response should have key "firstName"
Note about Cucumber-API: there is currently no way to do should have key "firstName" with value "Katie"; the "with value" part has not been implemented yet.
Also, "follow" expects a JSON file.
Another resource is here, but it is old (2011).
I would recommend your first scenario.
From my own experiences I personally feel that the biggest value you get from using BDD as a software delivery method, is when you place the emphasis on business value.
In other words the scenarios should be examples of what behaviour the business wants, rather than technical implementation. This ensures that the development is driven by the goals of the business and the deliverables match their expectations.
This is known as outside-in development.
Additional tests of the system behaviour can and should be used to cover the technical requirements, but I think there's less value in spending effort writing these up in natural language, which is often time-consuming and laborious across a large number of scenarios.
I recommend the following approach:
1) Work with the BAs and POs to develop examples of the behaviour they want using non-implementation specific language (like your first example).
2) Engineers use these to drive the development from a test-first approach, automating them as integration tests, with the majority below the browser (against your REST API, for example) and the most core scenarios also through the browser (if you are developing one).
3) Engineers TDD the feature code with unit tests until both the unit tests and BDD examples pass.
I think the first one is better. I would put the technical details in the Ruby classes and modules, e.g. call something like cart.add(item) in the When step, and in the Then step use expect(cart.items).to include(a_string_matching(item)).
That way the Ruby classes and modules can be reused in other feature steps, e.g. another scenario that adds multiple items to the cart and then validates the total amount.
The second style, however, can work for technical features, e.g. when a common/global request header or body is expected across the whole API.
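To illustrate the first option, here is a rough sketch of a hypothetical cart helper with step definitions that delegate to it (the class, endpoints and JSON shape are all made up for the example):
# features/support/cart_api.rb -- hypothetical helper
require 'httparty'
require 'json'

class CartApi
  include HTTParty
  base_uri 'http://localhost:9000'   # assumed test server

  def add(item)
    self.class.post('/cart/add', body: { item: item }.to_json)
  end

  def items
    JSON.parse(self.class.get('/cart').body)['items']
  end
end

# features/step_definitions/cart_steps.rb
When(/^I add one "([^"]*)" into my cart$/) do |item|
  @cart ||= CartApi.new
  @cart.add(item)
end

Then(/^I should see the item "([^"]*)" is in my cart$/) do |item|
  expect(@cart.items).to include(a_string_matching(item))
end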
See here: https://github.com/ctco/cukes-rest. It provides a Cucumber DSL to test RESTful APIs.
Related
I am trying to use the Ruby SDK to upload videos to YouTube automatically. Inserting a video, deleting a video, and setting the thumbnail for a video works fine, but for some reason trying to add captions results in an invalid metadata client error regardless of the parameters I use.
I wrote code based on the documentation and code samples in other languages (I can't find any examples of doing this in Ruby with the current gem). I am using the google-apis-youtube_v3 gem, version 0.22.0.
Here is the relevant part of my code (assuming I have uploaded a video with id 'XYZ123'):
require 'googleauth'
require 'googleauth/stores/file_token_store'
require 'google-apis-youtube_v3'

def authorize
  [... auth code omitted ...]
end

def get_service
  service = Google::Apis::YoutubeV3::YouTubeService.new
  service.key = API_KEY
  service.client_options.application_name = APPLICATION_NAME
  service.authorization = authorize
  service
end

body = {
  "snippet": {
    "videoId": 'XYZ123',
    "language": 'en',
    "name": 'English'
  }
}

s = get_service
s.insert_caption('snippet', body, upload_source: '/path/to/my-captions.vtt')
I have tried many different combinations, but the result is always the same:
Google::Apis::ClientError: invalidMetadata: The request contains invalid metadata values, which prevent the track from being created. Confirm that the request specifies valid values for the snippet.language, snippet.name, and snippet.videoId properties. The snippet.isDraft property can also be included, but it is not required. status_code: 400
It seems that there really is not much choice for the language and video ID values, and there is nothing remarkable about naming the captions as "English". I am really at a loss as to what could be wrong with the values I am passing in.
Incidentally, I get exactly the same response even if I just pass in nil as the body.
I looked at the OVERVIEW.md file included with the google-apis-youtube_v3 gem, and it referred to the Google simple REST client Usage Guide, which in turn mentions that most object properties do not use camel case (which is what the underlying JSON representation uses). Instead, in most cases properties must be sent using Ruby's "snake_case" convention.
Thus it turns out that the snippet should specify video_id and not videoId.
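So the request body from the question becomes something like this sketch (same values, keys switched to snake_case):
body = {
  snippet: {
    video_id: 'XYZ123',   # snake_case, not videoId
    language: 'en',
    name: 'English'
  }
}

s = get_service
s.insert_caption('snippet', body, upload_source: '/path/to/my-captions.vtt')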
That seems to have let the request go through, so this resolves this issue.
The response I'm getting now has a status of "failed" and a failure reason of "processingFailed", but that may be the subject of another question if I can't figure it out.
For context, I'm someone with zero experience in Ruby - I just asked my Senior Dev to copy-paste me some of his Ruby code so I could try to work with some APIs that he ended up putting off because he was too busy.
So I'm using zoho_hub, an API wrapper for the Zoho APIs (https://github.com/rikas/zoho_hub/blob/master/README.md).
My IDE is VSCode.
I execute the entire length of the code, and I'm faced with this:
[Done] exited with code=0 in 1.26 seconds
The API is supposed to return a paginated list of records, but I don't see anything outputted in VSCode, despite the fact that no error is being reflected. The last 2 lines of my code are:
ZohoHub.connection.get 'Leads'
p "testing"
I use the dummy string "testing" to make sure that it's being executed up till the very end, and it does get printed.
This has been baffling me for hours now - is my response actually being outputted somewhere, and I just can't see it??
Ruby does not print anything unless you tell it to. For debugging there is a pretty-printing method called pp, which is decent for printing structured data.
In this case, if you want to output the records that your get method returns, you would do:
pp ZohoHub.connection.get 'Leads'
To get the next page you can look at the source code, and you will see the get request has an additional Hash parameter.
def get(path, params = {})
Then you have to read the Zoho API documentation for get, and you will see that the page is requested using the page param.
Therefore we can finally piece it together:
pp ZohoHub.connection.get('Leads', page: NNN)
Where NNN is the number of the page you want to request.
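If you want to walk through every page, a loop along these lines may work. This assumes the Zoho CRM v2 response shape of a "data" array of records plus an "info" hash with a "more_records" flag; verify that against the Zoho docs and the hash that pp shows you:
page = 1
leads = []

loop do
  response = ZohoHub.connection.get('Leads', page: page)
  leads.concat(response['data'] || [])
  break unless response.dig('info', 'more_records')   # assumed response shape
  page += 1
end

pp leads.size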
I need to be able to prompt for user input (username, password and authorisation code) so my tests can access a GUI. These details cannot be stored as test data, so they'll have to be input part way through a test.
I've tried the following, but it's not working how I want:
Feature file:
Feature: user input as part of a test
Scenario: user input at the start
Given the test requires a name
Step definition:
Given(/^the test requires a name$/) do
get_a_name
end
Method:
def get_a_name
  puts "Gimme a name"
  @input_name = gets.chomp
  puts "Hello #{@input_name}"
end
Result:
Gimme a name
Hello Feature: user input as part of a test
Any help would be much appreciated. Thanks.
You have a number of options when dealing with external services in test automation. In this particular case you can:
Change your source so the authentication behaves differently when testing
OR
Record a response from the external service and use that response instead of going to the external service (see https://github.com/vcr/vcr; a minimal setup sketch is at the end of this answer). As the response is recorded you will know the authorisation code
OR
Use a test version of the external service which gives back known responses
I suspect there are a number of other solutions, but all three of the above are used widely and work fine.
There is certainly no need to have manual interactions to test your system unless you are running your tests against a production system (which is a very bad idea!).
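For the recording option, a minimal VCR setup sketch (the cassette directory and the Around hook are just one way to wire VCR into Cucumber; adjust to your project):
# features/support/vcr.rb
require 'vcr'

VCR.configure do |c|
  c.cassette_library_dir = 'features/cassettes'
  c.hook_into :webmock                                          # record and replay HTTP traffic
  c.filter_sensitive_data('<AUTH_CODE>') { ENV['AUTH_CODE'] }   # keep real credentials out of cassettes
end

Around do |scenario, block|
  VCR.use_cassette(scenario.name) do
    block.call
  end
end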
I've found facebook's 'Graph API Explorer' tool (https://developers.facebook.com/tools/explorer/) to be an incredibly easy, welcoming (for beginners) and effective way to use facebook's graph API via its GUI.
I'd like to be able to use the koala gem to pass these generated URLs to facebook's api.
Right now, let's say I had a query like this:
url = "me?fields=id,name,posts.fields(likes.fields(id,name),comments.fields(parent,likes.fields(id,name)),message)"
I'd like to be able to pass that directly into koala as a single string.
@graph.get_connections(url)
It doesn't like that, so I separate out the uid and the ? operator as the gem seems to want:
url = "fields=id,name,posts.fields(likes.fields(id,name),comments.fields(parent,likes.fields(id,name)),message)"
@graph.get_connections("me", url)
This, however, returns an error as well:
Koala::Facebook::AuthenticationError:
type: OAuthException, code: 2500,
message: Unknown path components: /fields=id,name,posts.fields(likes.fields(id,name),comments.fields(parent,likes.fields(id,name)),message) [HTTP 400]
Currently this is where I am stuck. I'd like to continue using koala because I like the gem approach to working with APIs, especially when it comes to using OAuth & OAuth2.
UPDATE:
I'm starting to break down the request into pieces which the koala gem can handle, for example:
posts = @graph.get_connections("me", "posts")
postids = posts.map { |p| p['id'] }
likes = postids.inject([]) { |ary, id| ary << @graph.get_connection(id, "likes") }
So that's a long way of getting two arrays, one of posts, one of like data.
But I'd burn up my API request limit in no time using this kind of approach.
I was kind of hoping I'd just be able to pass the whole string from the Graph API Explorer and just get what I wanted rather than having to manually parse all this stuff.
I don't really know about your posts.fields(likes.fields(id,name)) syntax (it does not work in the Graph API Explorer) and stuff like that, but I know you can do this:
fb_api = Koala::Facebook::API.new(access_token)
fb_api.api("/me?fields=id,name,posts")
# => {"id"=>"71170", "name"=>"My Name", "posts"=>{"paging"=>{"next"=>"https://graph.facebook.com/71170/posts?access_token=CAAEO&limit=25&until=13705022", "previous"=>"https://graph.facebook.com/711737070/posts?access_token=CAAEOTYMZD&limit=25&since=1370723&__previous=1"}, "data"=>[{"id"=>"71170_1013572471", "comments"=>{"count"=>0}, "created_time"=>"2013-06-09T08:03:43+0000", "from"=>{"id"=>"71170", "name"=>"My Name"}, "updated_time"=>"2013-06-09T08:03:43+0000", "privacy"=>{"value"=>""}, "type"=>"status", "story_tags"=>{"0"=>[{"id"=>"71170", "name"=>" ", "length"=>8, "type"=>"user", "offset"=>0}]}, "story"=>" likes a photo."}]}}
And you will receive in a hash what you asked for.
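You should also be able to pass the nested field selection as a parameter instead of building the path by hand; a sketch (whether Graph accepts that particular nested fields expression is a separate question, as noted above):
@graph = Koala::Facebook::API.new(access_token)
@graph.get_object('me',
  fields: 'id,name,posts.fields(likes.fields(id,name),comments.fields(parent,likes.fields(id,name)),message)')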
From time to time, you must pass nil as a param to koala:
result += graph_api.batch do |batch_api|
  facebook_page_ids.each do |facebook_page_id|
    batch_api.get_connections(facebook_page_id, nil, {"fields" => "posts"})
  end
end
I use the tweetstream gem to get sample tweets from the Twitter Streaming API:
TweetStream.configure do |config|
  config.username = 'my_username'
  config.password = 'my_password'
  config.auth_method = :basic
end

@client = TweetStream::Client.new
@client.sample do |status|
  puts "#{status.text}"
end
However, this script will stop printing out tweets after about 100 tweets (the script continues to run). What could be the problem?
The Twitter Search API sets certain limits that look arbitrary from the outside; from the docs:
GET statuses/:id/retweeted_by Show user objects of up to 100 members who retweeted the status.
From the gem, the code for the method is:
# Returns a random sample of all public statuses. The default access level
# provides a small proportion of the Firehose. The "Gardenhose" access
# level provides a proportion more suitable for data mining and
# research applications that desire a larger proportion to be statistically
# significant sample.
def sample(query_parameters = {}, &block)
  start('statuses/sample', query_parameters, &block)
end
I checked the API docs but don't see an entry for 'statuses/sample'; looking at the one above, I'm assuming you've reached 100 of whatever statuses/xxx has been accessed.
Also, correct me if I'm wrong, but I believe Twitter no longer accepts basic auth and you must use an OAuth key. If this is so, then that means you're unauthenticated, and the search API will also limit you in other ways too, see https://dev.twitter.com/docs/rate-limiting
Hope that helps.
Ok, I made a mistake there: I was looking at the search API when I should have been looking at the streaming API (my apologies), but it's possible some of the things I mentioned could be the cause of your problems, so I'll leave it up. Twitter has definitely moved away from basic auth, so I'd try resolving that first; see:
https://dev.twitter.com/docs/auth/oauth/faq
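If you switch the gem to OAuth, the configuration would look something like this sketch (fill in your application's credentials):
TweetStream.configure do |config|
  config.consumer_key       = 'YOUR_CONSUMER_KEY'
  config.consumer_secret    = 'YOUR_CONSUMER_SECRET'
  config.oauth_token        = 'YOUR_ACCESS_TOKEN'
  config.oauth_token_secret = 'YOUR_ACCESS_TOKEN_SECRET'
  config.auth_method        = :oauth
end

@client = TweetStream::Client.new
@client.sample do |status|
  puts status.text
end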