People often confuse Stress Testing and Load Testing in software development. So, let's demystify them a bit.
A stress test tries to push the application to its limits and provides clear KPIs against the SLA. A stress test has 2 aspects: a positive test, where the whole system is working optimally, and a negative test, where some components are already broken and you measure the limits of the system in degraded mode.
A load test is a controlled, scaled-up volume test. The main goal here is to identify the bottlenecks of the system at different levels of volume.
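To illustrate the load test idea, here is a minimal sketch of a controlled ramp-up written in plain Java; the target URL, user counts and request volumes are assumptions for illustration, and a real project would use one of the dedicated tools listed next.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SimpleRampUpLoadTest {

    // Hypothetical endpoint used only for illustration.
    private static final String URL = "http://localhost:8080/api/listBooks.php";

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Controlled volume: ramp up the number of concurrent "users" step by step.
        for (int users : new int[]{1, 5, 10, 25, 50}) {
            ExecutorService pool = Executors.newFixedThreadPool(users);
            List<Future<Long>> results = new ArrayList<>();
            for (int i = 0; i < users * 20; i++) {          // 20 requests per simulated user
                results.add(pool.submit(() -> timedCall(client)));
            }
            long total = 0;
            for (Future<Long> r : results) total += r.get();
            pool.shutdown();
            // A sudden jump in average latency between two levels reveals a bottleneck.
            System.out.printf("%d users -> avg latency %d ms%n", users, total / results.size());
        }
    }

    private static long timedCall(HttpClient client) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(URL)).GET().build();
        long start = System.nanoTime();
        client.send(request, HttpResponse.BodyHandlers.ofString());
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```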
Here are a few tools that can do both types of tests:
Before starting to automate tests, you need to take a step back and define a few aspects.
Consider the test team members
Depending on the test team members, you need to think about what kind of test scripts you are going to adopt. With tools like Tosca, UFT, Ranorex or Selenium, there are 3 ways of scripting:
Record: This is an easy way, but it can be expensive to maintain. The profiles adopting this way of working are business users.
Script: This is a fairly quick way to script, and the maintenance is less expensive than with Record. It requires a test automation expert to develop and maintain the scripts. Any tool is able to achieve this kind of automation.
Data Driven and Workflow: This requires both profiles: a business user and a test automation expert. It can take more time to implement, but the maintenance is much easier and the scripts can easily be reused by the business user without requiring an expert (see the sketch below).
Ranorex supports the data-driven part, but not workflow management.
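To make the data-driven style concrete, here is a minimal sketch using JUnit 5 parameterized tests; the discount rule is purely hypothetical, and in practice the data rows would typically be maintained by the business users in an external file rather than inline.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class DiscountCalculatorTest {

    // Stand-in for the system under test: GOLD customers get 10%, plus 5% above 500.
    // (Hypothetical business rule, only here to make the example self-contained.)
    static double discountFor(String type, double amount) {
        double rate = "GOLD".equals(type) ? 0.10 : 0.00;
        if ("GOLD".equals(type) && amount > 500) {
            rate += 0.05;
        }
        return amount * rate;
    }

    // Each row is one business scenario: customer type, order amount, expected discount.
    // The same test script is reused for every row; only the data changes.
    @ParameterizedTest
    @CsvSource({
            "REGULAR,100.00,0.00",
            "GOLD,100.00,10.00",
            "GOLD,1000.00,150.00"
    })
    void discountIsComputedFromCustomerTypeAndAmount(String type, double amount, double expected) {
        assertEquals(expected, discountFor(type, amount), 0.001);
    }
}
```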
Consider Manual vs Automated testing
You should not automate everything; keep some test cases for manual testing. To decide which ones should be automated, keep the following in mind:
How many times does the same script need to be repeated? If more than 4 times, you should automate it.
If your tests are UAT (User Acceptance Tests), exploratory tests, or complex tests with many asynchronous processes, you should consider manual testing.
Also prioritize the test cases based on their business criticality: the more critical a test case is, the more it should be automated.
Consider Test Data Strategy
Ask yourself the following questions:
Do you need real data whose state evolves along a process? That means your data will change from the beginning to the end of the test. You may need a copy of a production database with data anonymization and a way to reset it. This can be a complicated task. You could also use an advanced Service Virtualization tool to avoid copying the database and get an instant “refresh”, but this path is also quite expensive in licenses or in man-days.
Does your data drive the test? In other words, do you need not only a copy of the database, but also “properties” data to drive all the test scenarios?
Define the right set of data that covers all the business risks. When you generate all the test scenarios for the same workflow, you should optimize the number of scenarios without being systematic, otherwise you end up with an exponential number of scenarios. See an example below:
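A minimal Java illustration of that combinatorial explosion (the workflow parameters and their values are made up):

```java
import java.util.List;
import java.util.Map;

public class ScenarioExplosion {

    public static void main(String[] args) {
        // Hypothetical workflow parameters and their possible values.
        Map<String, List<String>> parameters = Map.of(
                "customerType", List.of("REGULAR", "GOLD", "VIP"),
                "paymentMethod", List.of("CARD", "TRANSFER", "PAYPAL", "CASH"),
                "country", List.of("FR", "BE", "CH", "LU", "DE"),
                "channel", List.of("WEB", "MOBILE", "STORE"));

        // Being systematic means testing every combination: the product of all value counts.
        long exhaustive = parameters.values().stream()
                .mapToLong(List::size)
                .reduce(1, (a, b) -> a * b);

        // 3 * 4 * 5 * 3 = 180 scenarios for only 4 parameters; each added parameter
        // multiplies this number, which is why a risk-based subset must be selected.
        System.out.println("Exhaustive scenarios: " + exhaustive);
    }
}
```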
Consider Test Script Maintenance
When you create automated tests, you will later need to maintain them for 3 main reasons:
“Repair” test cases based on business changes.
Enhance test cases based on issues found in UAT (User Acceptance Testing) or later in production.
Optimize or correct tests based on false positive errors.
Consider the right scripting tool
Based on all your SUT (System Under Test) technologies, you need to find the right scripting tool depending on the features and your budget.
When you start to have too many UI tests and it takes hours to complete them, something is probably wrong in terms of automated test strategy.
The first reaction when it takes ages: you start to run tests nightly, review them one by one, eventually find duplicates, and finally try to parallelize them where possible. And if this is still not enough?
Maybe you should identify the most critical ones (20-30%) and rewrite the others using API calls instead of going through the UI.
When you test a modern application with multiple layers, the UI layer should hold no logic and no data. That means whether you execute at the API level or the UI level, you should get the same results, but at the API level the tests are much quicker. In fact, what you should test on the UI side is whether the UI behaves properly on top of the API and whether its security is aligned with the API; everything else should be tested on the API side.
Moving tests from UI to API is not trivial, because you may also have to modify all the UI tests to cover the alignment between UI and API.
This is why you should think from the beginning about adopting the right test automation strategy, with a good balance between API and UI!
For projects using open source tools, RestAssured is a good choice to automate API tests. In this Github repository, you can find an example of how to write API tests.
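As a taste of what such a test looks like, here is a minimal RestAssured sketch; the endpoint and the response fields are hypothetical and only illustrate the style, not the content of that repository.

```java
import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.greaterThan;

import org.junit.jupiter.api.Test;

class ListBooksApiTest {

    @Test
    void listBooksReturnsAtLeastOneBook() {
        // Hypothetical endpoint, in the spirit of the demo API used later in this article.
        given()
                .baseUri("http://localhost:8080")
        .when()
                .get("/api/listBooks.php")
        .then()
                .statusCode(200)
                .body("books.size()", greaterThan(0))
                .body("books[0].author", equalTo("tutu"));
    }
}
```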
In 2017, we can say there are 3 ways of doing automated tests. The typology of these 3 ways is based on their maintenance cost.
The easiest one, but the most complex to maintain, is the Record type. All the commercial tools, and even Selenium with Selenium IDE, offer this way of working. But when the SUT (system under test) changes, the automated tests can be drastically compromised and require a full rework of the test scripts.
The second way, more developer oriented, is the Scripted type. The maintenance is less complex and can absorb some design changes if the script and the UI elements are properly identified.
The third one is the Workflow and Data Driven type: there are still scripts, but they are easier to maintain. If a step in the workflow changes, you just need to change that step. Almost all the commercial tools offer this way of working. With Selenium, using BDD with Cucumber works in that way.
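For the Selenium + Cucumber case, here is a minimal sketch of what such workflow steps can look like; the Gherkin scenario, the page URL and the element locators are assumptions, and in a real project the scenario would live in a .feature file and the locators in page objects.

```java
// Hypothetical Gherkin scenario (normally kept in a .feature file):
//   Scenario: Add a book from the catalogue
//     Given I am on the catalogue page
//     When I add the book "toto" written by "tutu"
//     Then the catalogue contains "toto"

import static org.junit.jupiter.api.Assertions.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class CatalogueSteps {

    private final WebDriver driver = new ChromeDriver();

    @Given("I am on the catalogue page")
    public void iAmOnTheCataloguePage() {
        driver.get("http://localhost:8080/catalogue");        // hypothetical URL
    }

    @When("I add the book {string} written by {string}")
    public void iAddTheBook(String title, String author) {
        driver.findElement(By.id("title")).sendKeys(title);   // hypothetical locators
        driver.findElement(By.id("author")).sendKeys(author);
        driver.findElement(By.id("add")).click();
    }

    @Then("the catalogue contains {string}")
    public void theCatalogueContains(String title) {
        // If the "add a book" step changes in the UI, only that one step needs updating.
        assertTrue(driver.findElement(By.id("catalogue")).getText().contains(title));
    }
}
```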
Now, in terms of implementation, the most expensive to maintain is the easiest to implement.
So when it is time to choose which way you want to implement tests, you need to forecast how complex they are, how many tests you want, and how many times they will be run.
If you want to go further with containers and you want a dedicated machine for them, you will need a few more “things” on top of Docker: a hypervisor and a container orchestrator.
For the hypervisor you could take LXD, with OpenStack as orchestrator.
At first glance this could take a lot of configuration steps. But guess what, there is a “Next, Next, Next” installation!
This POC (Proof of Concept) aims to analyse the different service virtualization solutions available on the market.
POC = Proof of concept
Service Virtualization = “A test double often provided as a Software-as-a-Service (SaaS), is always called remotely, and is never working in-process directly with methods or functions. A virtual service is often created by recording traffic using one of the service virtualization platforms instead of building the interaction pattern from scratch based on interface or API documentation.” (ref1)
Stub = “A minimal implementation of an interface that normally returns hardcoded data that is tightly coupled to the test suite.” (ref1)
Mock = “A programmable interface observer, that verifies outputs against expectations defined by the test.” (ref1)
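To make the stub notion concrete, here is a minimal open-source sketch using WireMock (which is not one of the commercial platforms analysed in this POC); the endpoint and payload simply mimic the demo API used below.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;

public class ListBooksStub {

    public static void main(String[] args) {
        // Start a local HTTP server that plays the role of the real service.
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Stub: hardcoded data, tightly coupled to the test suite, as per the definition above.
        server.stubFor(get(urlEqualTo("/api/listBooks.php"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"books\":[{\"title\":\"toto\",\"author\":\"tutu\"}]}")));

        // Tests can now point to http://localhost:8089/api/listBooks.php instead of the real API.
    }
}
```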
Main features to consider in Service Virtualization
Multi-protocol support (not only HTTP).
Recording of real transactions in order to replay them for testing purposes.
We will use the API from the ft-demo-website project => API. We will explore HPE SV and virtualize 2 methods from the API:
Read list of books
Add a book
Read list of books
Original : http://../api/listBooks.php
This GET REST API provides the list of books.
We need to configure HPE SV to learn the service:
Go to Add a new virtual service…
Select I don’t have a service description & click Next
Select REST & click Next
Select the HTTP Gateway agent because we want to simulate the endpoint.
Add the original REST API as specified above in Real Service > Endpoints & click Next
Start learning by selecting Learn, and make a call using the Virtual Service, not the Real Service
The system learns thanks to this single call. You will see that you have been redirected to the real service. While the system is learning, it captures the call, redirects it, acts as a proxy and records the answer.
Switch to Simulate now & make the call again using the Virtual Service URL. As you can see, you are not redirected and it returns the same content. The service is now essentially cached. We can say this service is stubbed as per the definition above.
Add a book
Original : http://../api/insertBook.php?title=toto&author=tutu&edition=Sun%20Edition
This GET REST API adds a book with the content defined in the parameters. Create a new Virtual Service as in the steps above, then put the 2 services in Learning mode.
Before starting, reset the database using http://../api/createTable.php (this one will not be simulated), and start alternating the calls (list, add, list, add, etc.), then switch to Simulate and redo the same pattern. You will see the system reproducing the same outputs.
Now let's trick the system: call only listBooks.php and you will notice it loops through the recorded pattern even though we never call insertBook.php! So you could face some issues with complex business processes. How can you solve this? By using scripts or a data-driven set.
Beyond a simple stub
After recording, you can add more data scenarios. You have 2 options, which can even be combined:
Data-driven set: managed with an Excel file, in which you define all the possible requests and responses required for your test.
Scripts: business logic attached to the virtualized service (see the last section below).
Data test strategy
If you are stuck with poor data, or it is hard to obtain usable test data, this can be a good tool to put in place to build a new test data set. Based on your test scenarios, you create all the sets of data you want to have in the requests and answers, and you will have your test data set fairly quickly. This should be considered when you define your test strategy.
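As a rough open-source analogy of such a data-driven virtual service (HPE SV manages this through its Excel-based editor), here is a sketch that loads request/response pairs from a CSV file and serves them as stubs with WireMock; the file name, its columns and the endpoint are assumptions.

```java
import static com.github.tomakehurst.wiremock.client.WireMock.aResponse;
import static com.github.tomakehurst.wiremock.client.WireMock.get;
import static com.github.tomakehurst.wiremock.client.WireMock.urlEqualTo;

import com.github.tomakehurst.wiremock.WireMockServer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class DataDrivenVirtualService {

    public static void main(String[] args) throws Exception {
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // books-data.csv (hypothetical): one "bookId;responseBody" pair per line,
        // maintained like the Excel sheet would be in the actual tool.
        List<String> rows = Files.readAllLines(Path.of("books-data.csv"));
        for (String row : rows) {
            String[] cols = row.split(";", 2);
            server.stubFor(get(urlEqualTo("/api/getBook.php?id=" + cols[0]))
                    .willReturn(aResponse()
                            .withHeader("Content-Type", "application/json")
                            .withBody(cols[1])));
        }
        // Each row of the data set becomes one virtual request/response pair.
    }
}
```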
It works perfectly if you are doing automated tests, but if you are doing manual exploratory tests it will feel weird.
If you need to put in more business logic, it is still possible: for each virtualized service, you can add scripts with business logic. But how far do you want to replicate the business logic? This can get complicated if you need a perfect replication, because it means you will have 2 technologies to maintain with the same business logic. You should avoid creating scripts and answer as much as possible with the data-driven set; it will be easier to maintain.