How to fake failure when testing Azure Blob Storage with Azurite

I'm using Azurite (the Azure Blob Storage emulator) to unit test Blob Storage applications. I can use Azurite to test the normal success cases, but how can I force or fake some failure cases to test exception capture and error handling?
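Azurite itself doesn't expose fault injection, so one common workaround (not specific to Azurite) is to inject failures at the client boundary with a mock. A minimal sketch using Python's unittest.mock; `BlobUploader` and `StorageError` are hypothetical stand-ins for your own wrapper and the SDK's exception type:

```python
from unittest import mock


class StorageError(Exception):
    """Stand-in for the SDK's error type (e.g. azure.core.exceptions.HttpResponseError)."""


class BlobUploader:
    """Hypothetical wrapper around the real blob client used by the application."""

    def __init__(self, client):
        self.client = client

    def upload(self, name, data):
        try:
            self.client.upload_blob(name, data)
            return "ok"
        except StorageError:
            # This is the error-handling path we want the test to exercise
            return "failed"


# Inject the fault: the mocked client raises instead of talking to storage
client = mock.Mock()
client.upload_blob.side_effect = StorageError("503: server busy")
assert BlobUploader(client).upload("report.txt", b"payload") == "failed"
```

Two Azurite-level alternatives: stop the emulator mid-test to provoke connection errors, or target a container that was never created so the emulator itself returns a genuine 404.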

Related

How to do performance testing of a non-server application

I have to conduct performance testing of an application which is not a web server. This application picks data from a database and pushes it into Kafka. What could be the strategy? In particular, when I look at JMeter, for example, it talks about making a REST request and measuring the response to gauge performance. But in my case this application does not serve any requests. I was wondering how to proceed.
"This application picks the data from a database" - what is the trigger for the application to pick the data from the database? If it is something which can be invoked externally, you need to identify what network protocols are being used and whether JMeter supports these protocols with its Samplers or via JMeter Plugins, or if there are client libraries you can use from the JSR223 Test Elements.
If you can trigger this read-from-the-database-and-push-into-Kafka event, you're good to go; if not, you need to identify the scope, to wit what you're testing and what you're trying to achieve.
If you need to load test the application itself, it makes sense to use profiling tools to check which functions are slowest, which objects are largest, which routines consume the most resources, etc.
If you need to load test the database, to wit simulate the application reading data from the database at a high rate, this can be done using JMeter's JDBC Request sampler; check out the Building a Database Test Plan article for more details.
If you need to load test your Kafka instance - it can be done using Pepper-Box - Kafka Load Generator, check out Apache Kafka - How to Load Test with JMeter article for comprehensive information.
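Whichever tool generates the load, the measurement itself reduces to "push N messages as fast as possible and record the achieved rate". A rough Python sketch of that pattern, with a stubbed producer standing in for a real Kafka client (Pepper-Box or a real producer library in practice) so it runs without a broker:

```python
import time


class StubProducer:
    """Stand-in for a real Kafka producer; it only counts messages,
    so this sketch runs without a broker."""

    def __init__(self):
        self.sent = 0

    def send(self, topic, value):
        self.sent += 1


def run_load(producer, topic, n_messages):
    """Push n_messages as fast as possible; return achieved throughput (msg/s)."""
    start = time.perf_counter()
    for i in range(n_messages):
        producer.send(topic, value=f"message-{i}".encode())
    elapsed = time.perf_counter() - start
    return producer.sent / elapsed


producer = StubProducer()
throughput = run_load(producer, "test-topic", 10_000)
print(f"sent {producer.sent} messages at {throughput:.0f} msg/s")
```

With a real producer the interesting numbers come from the broker side (consumer lag, request latency), not just the client-side send rate.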

Cypress secure to use with production data

I am trying to use Cypress to run some monitoring tests on production. I am also using a snapshot-match plugin to compare screenshots.
I just want to know: is this safe to do?
I am not using any dashboard services from Cypress - just running tests on our local machines. Will Cypress send any info outside our network?
Cypress doesn't send anything to Cypress's servers unless you specifically configure it to - it's safe.
The only other thing is, by default, Cypress will send crash reports (when Cypress itself crashes) to be analyzed. You can turn this off by following the instructions here.

Rest API functional testing

I am automating functionality of an API using JMeter. I just passed input parameters using JSON and asserted against the expected result, like 'Registered successfully'. My doubt is whether I need to check the values saved in the DB. If yes, how can I do it in JMeter?
JMeter provides the JDBC Request sampler which allows executing arbitrary SQL queries. You need to:
Download the relevant JDBC driver for your database management system and put it somewhere in the JMeter Classpath (normally the lib folder of your JMeter installation). A JMeter restart will be required to pick the library up.
Add a JDBC Connection Configuration test element and specify the database URL, credentials and other parameters if needed.
Using the JDBC Request sampler, execute a SQL query to validate that the database contains the expected value(s).
See The Real Secret to Building a Database Test Plan With JMeter article for comprehensive instructions and configuration examples.
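Stripped of the JMeter machinery, the validation step is just "run the query, compare against the expected value". A runnable sketch of that check using Python's sqlite3 (table and column names are made up for illustration; in JMeter this would be the JDBC Request sampler plus an assertion):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, status TEXT)")
# Pretend the API call under test inserted this row
conn.execute("INSERT INTO users VALUES ('alice', 'Registered successfully')")
conn.commit()

# The validation query: does the DB contain the expected value?
row = conn.execute(
    "SELECT status FROM users WHERE name = ?", ("alice",)
).fetchone()
assert row is not None and row[0] == "Registered successfully"
```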
You could use the JDBC Sampler & configurations for DB validation. However, I would suggest making use of other APIs (if any) to verify that the values are present in the system/DB. Some GET request might return the registered info.
I do a lot of API testing. Sometimes we run these automated tests in higher environments like staging/PROD as part of a sanity test after a prod push. If you think that you might do something similar in the future, then you would not have the prod DB config details - your test would be limited to the lower environments & would not work in PROD. So, try to avoid DB validation.
Once you have successfully executed the API, add a JDBC sampler after it and write a query to count the number of rows in the DB. If this count is growing, the API is successfully inserting data into the DB.
Once you have the count, write a Beanshell script to print the count, compare it with the older count and, based on the comparison, raise an assertion. This way you can be sure that the data is being inserted.
I would also recommend not using this approach, or any additional load, while you are running your actual tests, as the numbers you get or the system monitoring data you collect will include the additional query, which in turn will not reflect a real-life scenario or your actual test plan.
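The before/after count comparison described above can be sketched with Python's sqlite3 so it runs standalone (in JMeter you would use a JDBC Request sampler plus a JSR223/Beanshell assertion; table names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE registrations (payload TEXT)")


def row_count(conn):
    return conn.execute("SELECT COUNT(*) FROM registrations").fetchone()[0]


count_before = row_count(conn)
# Stand-in for the API call under test; here we insert directly
conn.execute("INSERT INTO registrations VALUES ('new user')")
conn.commit()
count_after = row_count(conn)

# The assertion: the count must have grown by the number of API calls made
assert count_after == count_before + 1, "API did not insert the expected row"
```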
The best practice for API testing is to check the result of the API call against an SQL query result.
Use the above details for setting up your DB connection.
I second Vins. Using database validation to assert the response from the API limits the test's capabilities and cannot be scaled to higher environments where you have limited or no access.
Also, you cannot reuse the functional tests to run load tests: as the number of users increases, more data gets inserted into the database, and test execution slows down as more data pumps in.
It might also be the case that some SELECT queries get stuck because the data set is large, due to network bandwidth, or because too little memory is allocated to JMeter.
You might also face out-of-memory errors in Java as it keeps trying to garbage-collect to accommodate the large data set.
The recommended approach is to use front-end validations, if available and wherever applicable, or to make use of other APIs to query and validate the data.

How to mark the special case as success for "failed to load application context"

Normally, a failure to load the application context means the test case failed.
However, sometimes I need to deliberately trigger "failed to load application context" for rainy-day cases, and I need to mark this kind of case as a success. How can we do this using spring-test/JUnit?
Thanks!
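One framework-agnostic pattern is not to let the test runner load the context at all: construct the context manually inside the test body and wrap the construction in the expected-exception mechanism (in JUnit 5, assertThrows around the context creation). The shape of such a test, sketched with Python's unittest for a runnable illustration; `build_context` and `ContextLoadError` are hypothetical stand-ins:

```python
import unittest


class ContextLoadError(Exception):
    """Stand-in for the framework's context-load exception."""


def build_context(config):
    """Hypothetical stand-in for constructing the application context."""
    if config.get("missing_bean"):
        raise ContextLoadError("failed to load application context")
    return {"status": "loaded"}


class RainyDayTest(unittest.TestCase):
    def test_context_load_failure_is_expected(self):
        # The test PASSES exactly when context construction fails
        with self.assertRaises(ContextLoadError):
            build_context({"missing_bean": True})


suite = unittest.defaultTestLoader.loadTestsFromTestCase(RainyDayTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

The key design choice is that the expected failure happens inside the test body, where the framework can catch and assert on it, rather than during test setup, where it would be reported as an error.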

Testing 2 web applications that interact with each other

We are developing 2 different web applications (WARs).
Both use the same message bus (ActiveMQ - jms).
We would like to perform tests that trigger an action on webapp#1; that action should cause a message to be sent that is consumed by webapp#2 and mutates the DB.
How can we test this end-to-end scenario?
We would like to have an automated test for this, and would like to avoid manual testing as much as possible.
We are using JUnit with the Spring framework, and we already have tons of JUnit tests that run daily, but none of them so far involve the message bus. It appears that this scenario is a whole different story to automate.
Are there any possibilities to test this scenario with an automated script (Spring / JUnit / other)?
A JUnit test could certainly drive this integration test sequence:
send an HTTP request to webapp#1 to trigger the action, using HttpURLConnection for example
run a SQL command (using JDBC) to check whether the database contains the expected value
In the test setup, the database needs to be initialized (reset) so that the second step does not give a false-positive result.
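The two steps can be wired into a single self-contained test. A Python stdlib sketch, where a stub HTTP handler stands in for webapp#1, the message bus, and webapp#2's consumer all at once (in the real test the request goes to webapp#1 and the query goes to the shared DB):

```python
import sqlite3
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Shared "database" (a real test would point at the webapps' actual DB)
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE events (name TEXT)")


class Webapp1Handler(BaseHTTPRequestHandler):
    """Stand-in for webapp#1 + bus + webapp#2's consumer:
    a request triggers the DB mutation we want to verify."""

    def do_GET(self):
        db.execute("INSERT INTO events VALUES ('order-created')")
        db.commit()
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass


server = HTTPServer(("127.0.0.1", 0), Webapp1Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Step 1: trigger the action over HTTP
url = f"http://127.0.0.1:{server.server_port}/trigger"
status = urllib.request.urlopen(url).status
# Step 2: check the database for the expected side effect
count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
server.shutdown()

assert status == 200 and count == 1
```

In the real setup the DB check should poll with a timeout rather than query once, because message consumption over the bus is asynchronous and the row may appear after the HTTP response returns.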
