GitLab CI implementation of ZAP - continuous-integration

I'm working on a GitLab CI implementation of ZAP.
What I'm trying to achieve is to perform tests directly in the project and check the results in the pipeline. I need your help understanding how I can write a YAML file to test all the URLs present in the application under test.
Has anyone already done this?
Thanks

GitLab Team member here. I definitely don't pretend to be an expert on ZAP and how to integrate it; however, this StackOverflow thread looks like a promising example of running ZAP successfully using something like Docker.
Let me know if this reply misses the mark, but I hope it at least gets you one step closer.
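For a concrete starting point, here is a minimal `.gitlab-ci.yml` sketch along those lines. It assumes the official ZAP Docker image and a `TARGET_URL` CI/CD variable that you define yourself; treat it as a sketch to adapt, not a verified pipeline:

```yaml
# Hypothetical job: run the ZAP baseline scan against a target URL.
# TARGET_URL is an assumed CI/CD variable, not something ZAP provides.
zap_baseline:
  stage: test
  image: owasp/zap2docker-stable
  script:
    - mkdir -p /zap/wrk
    # zap-baseline.py spiders the target and reports passive-scan alerts;
    # -r writes an HTML report (relative to /zap/wrk) to keep as an artifact.
    # The script exits non-zero when it finds warnings, so `|| true`
    # keeps the pipeline from hard-failing on findings.
    - zap-baseline.py -t "$TARGET_URL" -r zap-report.html || true
    - cp /zap/wrk/zap-report.html .
  artifacts:
    paths:
      - zap-report.html
    when: always
```

Note that the baseline scan only spiders the site and passively scans the URLs it discovers; for active scanning you would look at zap-full-scan.py from the same image instead.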

Related

Hoverfly API Simulations with Golang repositories: how to get started

I have just started to experiment with Hoverfly, and I have a Golang backend calling a number of third-party APIs for which I need to create simulations. I am aware that Hoverfly has Java and Python bindings, and I have come across a number of tutorials using Hoverfly with both. I think I am possibly missing a very trivial point here: once I have created the simulations (via capture mode), what is the next step? Do I simply create integration tests that make use of them? Do I import the Go package into my repository? I was looking for sample usages in the examples folder, but the ones I have seen are mostly Python-driven. Is there an available example that I have totally missed?
Thank you
For Golang testing, you can have a look at the functional tests in the Hoverfly project itself: https://github.com/SpectoLabs/hoverfly/tree/master/functional-tests. It's using Hoverfly to test Hoverfly!
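In practice you don't need a Go binding at all: run Hoverfly in simulate mode and point your HTTP client's proxy at it. A minimal sketch, assuming Hoverfly is already running on its default proxy port (8500) with your captured simulation imported (e.g. via `hoverctl start` and `hoverctl import simulation.json`), and an illustrative api.example.com endpoint:

```go
// Hypothetical integration test: route requests through Hoverfly so
// the captured simulation answers instead of the real third-party API.
package thirdparty_test

import (
	"io"
	"net/http"
	"net/url"
	"testing"
)

func TestThirdPartyAPIAgainstSimulation(t *testing.T) {
	// Hoverfly's default proxy port is 8500.
	proxyURL, err := url.Parse("http://localhost:8500")
	if err != nil {
		t.Fatal(err)
	}

	// All requests from this client go through Hoverfly.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
	}

	resp, err := client.Get("http://api.example.com/v1/things") // assumed endpoint
	if err != nil {
		t.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		t.Fatalf("expected 200, got %d: %s", resp.StatusCode, body)
	}
}
```

The same wiring works in application code: inject a proxy-aware http.Client into whatever component calls the third-party APIs, so your tests and production differ only in configuration.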

Using VS Code debugger for serverless lambda flask applications

I have been creating some Lambda functions on AWS using the Serverless Framework, Flask, and sls wsgi, plus some DynamoDB tables, but those should not matter in this case.
The problem I am facing is that I cannot debug the whole thing end to end. I am able to run sls wsgi serve and get a local instance of my Lambda functions, happy days. However, I am a little spoiled by other dev tools, languages, and IDEs (even just Flask itself) that let me set breakpoints, inspect scope, step through, and so on, so I would really like to get that working here as well.
I tried launching the sls command mentioned above in a launch configuration inside VS Code, with no luck. Next I tried the default Flask launch config, but that obviously didn't include the configuration stored in the sls.yml file, which is essential for accessing the local DynamoDB instance.
The last thing I tried was to attach to ptvsd at the end of my app.py file: I hit a wait action from ptvsd and attach the debugger in VS Code to the specified port, which seems to succeed and lets execution resume. However, it seems that sls wsgi runs the file twice, so the attach happens in the first instance and not the second, which then does not trigger a breakpoint when I execute an API call through Postman.
I guess I could include the wait step everywhere manually and attach for each method I am trying to debug inside the code instead of in the IDE, but that seems like overkill and not very convenient.
I have been looking for answers online and reading through docs, and could find nothing further.
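One way to deal with the double execution described above, assuming it comes from the Werkzeug reloader that sls wsgi serve uses under the hood (an assumption worth verifying), is to guard the ptvsd attach so it only runs in the reloaded child process. A minimal sketch:

```python
# Hypothetical sketch: attach ptvsd only in the Werkzeug reloader's
# child process, so the debugger lands in the instance that actually
# serves requests. Port 5890 is arbitrary; match it in launch.json.
import os

if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
    import ptvsd
    ptvsd.enable_attach(address=("0.0.0.0", 5890))
    ptvsd.wait_for_attach()  # block here until VS Code attaches
```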
I figured out that I can use "Attach using Process Id". It is, however, a little bit tricky because there are always two processes in the list (with different PIDs). It's not great, but it does the trick.
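For reference, a minimal launch.json entry for that approach, using the Python extension's process picker (the configuration name is illustrative):

```json
{
    // Hypothetical launch.json entry; pick the sls wsgi process at debug time.
    "name": "Attach to sls wsgi",
    "type": "python",
    "request": "attach",
    "processId": "${command:pickProcess}"
}
```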
One technique I have found useful, albeit in a Node environment, though it should apply here too, is to use unit testing as a way to execute code locally with the ability to tie in a debugger, together with mocking to stub away external dependencies such as AWS services (S3, DynamoDB, etc.). I wrote a blog post about setting this up for Node, but you may find it useful as a way to set things up with Python as well: https://serverless.com/blog/serverless-local-development/
However, in the world of serverless development, it ultimately doesn't matter how sophisticated your local development environment gets; you will have to test in the cloud environment as well. The unit testing technique I described is good for catching basic syntactical and logical errors, but you still need to deploy into the cloud and test in that environment. It's one of the reasons we at Serverless are working very hard on improving how quickly you can deploy to the cloud, so that testing in AWS can replace local testing.
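Translated to Python, that idea might look like the following sketch, where app.get_item_handler and the patched boto3 usage are illustrative names, not your actual code:

```python
# Hypothetical unit test: stub boto3 so the handler runs locally,
# under the VS Code debugger, without touching real AWS services.
import unittest
from unittest import mock

# Assumed handler module and function, for illustration only.
from app import get_item_handler


class GetItemHandlerTest(unittest.TestCase):
    @mock.patch("app.boto3")
    def test_returns_item(self, mock_boto3):
        # Stub the DynamoDB table the handler is assumed to read from.
        table = mock_boto3.resource.return_value.Table.return_value
        table.get_item.return_value = {"Item": {"id": "42", "name": "widget"}}

        response = get_item_handler({"pathParameters": {"id": "42"}}, None)

        self.assertEqual(response["statusCode"], 200)


if __name__ == "__main__":
    unittest.main()  # run this file under the debugger to hit breakpoints
```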

Testing spring-cloud-dataflow stream

I'm looking for a way to easily test any flow, and I think that spring-cloud-dataflow-acceptance-tests could help me do that, but I can't find any documentation about it.
How does it work? I launched the application, but I don't know how I can write and run a test.
Does anyone have any suggestions about that?
Thanks
spring-cloud-dataflow-acceptance-tests is used to deploy and run test scenarios, and its README has an extensive example of how to bootstrap it. Is your question about how to write a test for a single stream app, or how to test a deployed stream end to end?
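If it's the former, here is a minimal sketch using the spring-cloud-stream test-support binder of that era, assuming a processor app that upper-cases its input (the class under test and its behavior are illustrative):

```java
// Hypothetical test for a single stream app using the
// spring-cloud-stream test-support binder (no message broker needed).
import static org.assertj.core.api.Assertions.assertThat;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.stream.messaging.Processor;
import org.springframework.cloud.stream.test.binder.MessageCollector;
import org.springframework.messaging.support.GenericMessage;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class UppercaseProcessorTests {

    @Autowired
    private Processor channels;          // the app's input/output channels

    @Autowired
    private MessageCollector collector;  // captures what the app emits

    @Test
    public void transformsPayload() {
        channels.input().send(new GenericMessage<>("hello"));
        Object payload = collector.forChannel(channels.output())
                                  .poll()
                                  .getPayload();
        assertThat(payload).isEqualTo("HELLO");
    }
}
```

That covers a single app in isolation; the acceptance-tests project is what you'd reach for to exercise a whole deployed stream.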

Performing semi-automated testing with ruby

I am writing an open source gem that interacts with an SMS service. I want to test the interaction; however, it needs account information and a phone number to run. It also needs feedback to determine whether SMS messages were sent correctly. This causes two problems:
1. I can't put the account information in the test file, as the gem is open source and anyone could get to it.
2. I need the person running the test to give information to the script as it runs (e.g. checking the phone to see if a message was received).
What techniques or libraries are available that can help with this? I'm currently using RSpec and making it prompt for parameters (using gets), but it is pretty clunky at the moment. I can't be the first person using Ruby to have this problem, and I feel I'm missing a gem or something that solves it.
Use mocks.
What are your tests testing, specifically? That a given login/password works? Probably not. Most likely you want to make sure your code reacts to the API properly. Therefore, I'd suggest mocking: save the output of the API calls and use a mock service to return those responses, then test against that. Your tests will be faster and less brittle as a happy side effect.
More information on mocking with RSpec is here:
http://rspec.info/documentation/mocks/
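As a minimal sketch of that approach with RSpec doubles, where the client and its #deliver method are illustrative names rather than a real gem's API:

```ruby
# Hypothetical spec: a double stands in for the real SMS client, so the
# suite runs without real credentials, a phone, or network access.
require "rspec"

RSpec.describe "sending an SMS" do
  it "hands the message to the gateway" do
    client = double("sms_client")
    expect(client).to receive(:deliver)
      .with(to: "+15555550100", body: "hello")
      .and_return(true)

    # In a real spec you would inject `client` into your own code and
    # exercise that; calling the double directly keeps the sketch short.
    client.deliver(to: "+15555550100", body: "hello")
  end
end
```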
Re 1) Why not just save the configuration options in a YAML file, kept out of version control, and load them at the beginning of your tests?
Re 2) Are there maybe any web services for that, e.g. one you can send a message to and then query via an API to see if it worked? I know this can be unreliable, but the same is true for a user's phone company network.
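For suggestion 1, a tiny sketch of what that could look like in spec_helper.rb (file name and keys are illustrative):

```ruby
# Hypothetical spec_helper.rb snippet: credentials live in a
# git-ignored YAML file next to the specs, never in the repo itself.
require "yaml"

CREDENTIALS = YAML.load_file(File.expand_path("credentials.yml", __dir__))
#
# credentials.yml (listed in .gitignore) might look like:
#   account_sid: "abc123"
#   phone_number: "+15555550100"
```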
+1 for Mark Thomas' answer on mocking. Two more alternative mock object libraries for Ruby: FlexMock and Mocha

Generating a report and sending out email notifications after running a PHPUnit test using phpunit 3.5.13 and seleniumRC

I'm still new to PHPUnit testing and Selenium RC, but I have managed to get them both working. Now I was wondering if it is possible to send out an email when the tests fail or pass after every test run. The mail should go to the developer and the testing manager. Is it possible to do that? It would also be very nice to generate a whole report of all the test results and send it out. Can someone please point me in the right direction on how to go about this?
Thank you in advance
You have a few options available. For a small project, maybe a plain PHP script that redirects the PHPUnit output to a file, parses it, and acts accordingly (a sketch follows below). ob_start() could also be your friend for this task.
Getting into more complicated options, you could look into using a couple of Phing tasks for this. Then, last but not least, the holy grail: very flexible for almost any build task, and best of all automated. Look into continuous integration tools such as Jenkins.
For small one-man team projects I opt for the simplest.
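Here is a minimal sketch of that plain-PHP wrapper idea; the test file name and recipient addresses are illustrative only:

```php
<?php
// Hypothetical wrapper: run PHPUnit, capture its output, and mail the
// result to the team. Adapt paths, test names, and addresses.

$output = [];
exec('phpunit --log-junit results.xml MyTest.php 2>&1', $output, $exitCode);
$report = implode(PHP_EOL, $output);

// PHPUnit exits 0 only when every test passed.
$status = ($exitCode === 0) ? 'PASSED' : 'FAILED';
$recipients = 'dev@example.com, qa-manager@example.com';

// mail() needs a configured MTA; swap in a mailer library if preferred.
mail($recipients, "PHPUnit run: {$status}", $report);
```

The --log-junit XML is also handy if you later move to Jenkins, which can ingest it directly for trend reports.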
