How do you protect your API key in your tests when you publish a gem with tests? - ruby

I am using RSpec in my tests and I would like to protect my API key when I publish my gem on GitHub.
What are the best practices for doing that? Should I use VCR and then remove my key from the git log?

Broadly speaking, here are three approaches I have used in the past in similar situations. Which you choose will depend on the details of your particular situation.
Test user supplies API key
If your test suite requires, or at least prefers, actual API calls with an actual API key, you can have the caller of the tests supply the credentials when running the tests.
The two most common ways of doing this are:
A file in the project with a well-known name which is not checked into version control. Include an example file with fake credentials, which is checked into version control, along with instructions for users to copy their real credentials into the real file before running the test suite.
Reading from environment variables. Include instructions for users to set the appropriate environment variables before running the test suite.
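For the environment-variable route, a minimal sketch of an RSpec setup might look like this; MY_SERVICE_API_KEY and MyService::Client are placeholder names, not anything from your actual gem:

    # spec/spec_helper.rb -- fail fast with a helpful message if the key is missing
    RSpec.configure do |config|
      config.before(:suite) do
        unless ENV["MY_SERVICE_API_KEY"]
          abort "Set MY_SERVICE_API_KEY before running the live API specs, " \
                "e.g. MY_SERVICE_API_KEY=... bundle exec rspec"
        end
      end
    end

    # In a spec, build the client from the environment rather than a literal key:
    # client = MyService::Client.new(api_key: ENV.fetch("MY_SERVICE_API_KEY"))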
Otherwise,
Mock out the API
This can be the VCR approach you described. This could also be patching the API call to return some fake results.
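If you do go with VCR, it can scrub the key out of recorded cassettes so that nothing sensitive lands in version control. A sketch, assuming the vcr and webmock gems and the same placeholder MY_SERVICE_API_KEY variable:

    # spec/support/vcr.rb
    require "vcr"

    VCR.configure do |c|
      c.cassette_library_dir = "spec/cassettes"
      c.hook_into :webmock
      # Replace the real key with a placeholder in every recorded cassette,
      # so the cassettes can be committed safely.
      c.filter_sensitive_data("<API_KEY>") { ENV["MY_SERVICE_API_KEY"] }
    end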
Test your domain-specific code separately from the API interaction
Assume the API and the API client behave as you expect. Then factor out the parts of your code which construct the API input and process the API output. Test properties of the input you generate. Test the behavior of your output processor against known or fake output.
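As an illustration of that split, with made-up class names (RequestBuilder assembles the API parameters, ResponseParser interprets a raw API payload), neither spec needs a key or a network connection:

    RSpec.describe MyService::RequestBuilder do
      it "builds the expected query parameters" do
        params = described_class.new(query: "ruby", limit: 5).to_params
        expect(params).to include(q: "ruby", per_page: 5)
      end
    end

    RSpec.describe MyService::ResponseParser do
      it "extracts results from a canned API payload" do
        parsed = described_class.new('{"results":[{"id":1}]}').call
        expect(parsed.map(&:id)).to eq([1])
      end
    end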
Finally, a warning:
If you have ever committed your API key to version control, it will be visible in the history. If you have ever pushed to a public hosting service, it will have been exposed to the Internet; most notably, it will have been exposed to specialized bots which scrape newly-pushed commits for sensitive credentials. If this is you, change your credentials now!
I can't find the original blog post at the moment, but there was at least one report of someone accidentally pushing their AWS credentials to GitHub. They subsequently woke up to a several thousand dollar bill.

Related

Test endpoints compliance against openapi contract in Spring Boot Rest

I am looking for a nice way to write tests to make sure that the endpoints in a Spring Boot REST (ver. 2.1.9) application follow the OpenAPI contract.
In the project I recently moved to, the workflow is as follows: architects write the openapi.yml contract and developers have to implement endpoints that comply with it. Unfortunately a lot of differences creep in, these tests have to catch such situations, and it is not possible to change this workflow :(
I was thinking about generating openapi.yml from the current endpoints and comparing it somehow, but wonder if there is some out-of-the-box solution.
In a general case, even the generated spec may not match the actual app behavior because some things can't be expressed with Open API. However, it still could be helpful as a starting point.
Open API provides a way to specify examples, which could be used to verify the contract. But the actual schemas might be a better source of expectations.
I want to note two tools that can generate and execute test cases based only on the input Open API spec:
Schemathesis uses both examples and schemas and doesn't require configuration by default. It utilizes property-based testing and verifies properties defined in the tested schema - response codes, schemas, and headers. It supports Open API 2 & 3.
Dredd focuses more on examples and provides several automatic expectations. It supports only Open API 2, and the third version is experimental.
Both provide a CLI and could be extended with various hooks to fit the desired workflow.
I'd suggest passing the contract (the spec you mentioned) to Schemathesis; it will verify whether all schemas and examples are handled correctly by your app.
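For instance, a typical Schemathesis invocation against a locally running Spring Boot app could look roughly like the following; treat it as a sketch, since the exact flags can differ between versions:

    schemathesis run --base-url=http://localhost:8080 openapi.yml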

Pentest - verify checklist after checks are done

After pentesting and going through the checklist, how can I reassure my client that these checks were done and the vulnerabilities patched? (Of course, for something like SQLi, showing it is obvious.)
But I mean, is there somewhere to verify this, or something like that?
Thanks
For checks that were done, you can provide the reports generated by tools or by manual testing (depending on the vulnerability type) for those specific checks.
For patched vulnerabilities, you will need to re-test the platform and provide updated reports, again from tools or manual testing, whose output shows that the vulnerability is no longer present.
For further reassurance you can also add the steps to reproduce the exploitation to the report, so that clients who want to test it themselves can do so (and be assured that it was fixed).
You need to describe all the methodologies used, such as OSSTMM, OWASP, and NIST. It is also very important to describe the perimeter tested (web forms, APIs, frameworks, network protocols, etc.).
You can then create a topic for every item tested using the OWASP Top 10:
Injection
Broken Authentication
Sensitive data exposure
XML External Entities (XXE)
Broken Access control
Security misconfigurations
Cross Site Scripting (XSS)
Insecure Deserialization
Using Components with known vulnerabilities
Insufficient logging and monitoring
This way you ensure that your test was compliant.

IdentityServer4 first acceptance test

I am starting a new project to create an authentication API using IdentityServer4, following TDD. Many microservices and websites will be using this to authenticate users. But I could not figure out the first 3 acceptance tests for the project. Any help will be highly appreciated.
Note: I have recently read GOOS (Growing Object-Oriented Software, Guided by Tests).
Well, in the book they suggest starting with the simplest success case possible. For an authentication service that would probably be a successful authentication.
So your first acceptance test could look something like that:
When: receiving valid user data
Then: authentication should be successful
That may seem awfully small for an acceptance test that is supposed to test a whole system, but your system is also very small and there aren't many user stories to handle: basically only authentication success, failure, and maybe a test that covers the case where a user has tried to log in too many times without success.
Your unit tests then can go more into detail about the actual authentication mechanism, but the acceptance test should always be about the user story.
I guess one could also argue that you don't need to write acceptance tests for your authentication service at all, since it is only a part of your system and you should rather write acceptance tests for your whole system, meaning when you have brought all the microservices together or for each individual website that will rely on that service. The main reason for this kind of argument is that acceptance testing is about testing from the outside in and your authentication service is already a rather deep component of your system.

Performing semi-automated testing with ruby

I am writing an open source gem that interacts with an SMS service. I want to test the interaction, however it needs account information and a phone number to run. It also needs feedback to determine if SMS messages are being sent correctly. This causes two problems:
I can't put the account information in the test file, as the gem is open source and anyone could get to it.
I need the person running the test to give information to the script as it is running (e.g. checking the phone to see if a message was received).
What techniques or libraries are available that can help with this? I'm currently using RSpec and making it prompt for parameters (using gets), however it is pretty clunky at the moment. I can't be the first person using Ruby to have this problem, and I feel that I'm missing a gem or something that solves it.
Use mocks
What are your tests testing, specifically? That a given login/password works? Probably not. Most likely you want to make sure your code reacts to the API properly. Therefore, I'd suggest mocking. Save the output of the API calls and use a mock service to return those responses. Then test. Your tests will be faster and less brittle as a happy side-effect.
More information on mocking with RSpec is here:
http://rspec.info/documentation/mocks/
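A sketch of what that could look like, with hypothetical names (SmsGateway wraps the provider's HTTP API, Notifier is the code under test that calls it):

    RSpec.describe Notifier do
      it "sends a confirmation text through the gateway" do
        gateway = instance_double(SmsGateway)
        allow(gateway).to receive(:deliver).and_return(status: "queued")

        result = Notifier.new(gateway: gateway).confirm("+15555550100")

        # Assert against the arguments your code built, not against a live SMS.
        expect(gateway).to have_received(:deliver)
          .with(to: "+15555550100", body: /confirm/i)
        expect(result[:status]).to eq("queued")
      end
    end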
Re 1) Why not just save configuration options in a YAML file and load them at the beginning of your tests?
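A sketch of the YAML approach, assuming a git-ignored spec/config.yml alongside a committed spec/config.example.yml (the key names are illustrative):

    # spec/spec_helper.rb
    require "yaml"

    config_path = File.expand_path("config.yml", __dir__)
    unless File.exist?(config_path)
      abort "Copy spec/config.example.yml to spec/config.yml and fill in your account details."
    end

    SMS_CONFIG = YAML.load_file(config_path)
    # e.g. SMS_CONFIG["account_sid"], SMS_CONFIG["auth_token"], SMS_CONFIG["phone_number"]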
Re 2) Are there maybe any web services for that? E.g. one that you can send a message to and then query via an API to see if the message arrived. I know this can be unreliable, but the same is true for a user's phone company network.
+1 for Mark Thomas' answer on mocking. Two more alternative mock object libraries for Ruby: FlexMock and Mocha

Ways to Unit Test OAuth for different services in Ruby?

Are there any best practices for writing unit tests when, for 90% of the time I'm building the OAuth connection class, I need to actually be logging into the remote service?
I am building a rubygem that logs in to Twitter/Google/MySpace, etc., and the hardest part is making sure I have the settings right for that particular provider, and I would like to write tests for that.
Is there a recommended way to do that? If I did mocks or stubs, I'd still have to spend that 90% of the time figuring out how to use the service, and would end up writing tests after the fact instead of before...
On a related note, if I created test API keys for each OAuth provider, and I just used them for the gem for testing purposes, is there any issue in leaving the API key and secret in plain view in the tests? So other people could test it out more realistically too.
There is nothing wrong with hitting live services in your integration tests.
You should stub out the OAuth part completely in your unit tests, though.
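For instance, with webmock you can fake the provider's token endpoint so the unit tests never reach the live service; the class, method, and response fields below are illustrative only:

    require "webmock/rspec"

    RSpec.describe MyGem::TwitterAuth do
      it "exchanges the request token without hitting the live service" do
        # Fake the provider's token endpoint with a canned response.
        stub_request(:post, "https://api.twitter.com/oauth/access_token")
          .to_return(status: 200, body: "oauth_token=abc&oauth_token_secret=def")

        credentials = described_class.new(consumer_key: "x", consumer_secret: "y").authorize!

        expect(credentials.token).to eq("abc")
      end
    end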
If you're planning on open sourcing the code you're writing, leaving API keys in the code might not be a good idea, as you might hit API usage limits or rate limiting when the gem becomes popular and people run the tests frequently, which will lead to unexpected test failures.
