In terms of architecture, there are plenty of ways to connect to an IoT device; at a minimum:
Devices connected to a mobile
Devices connected to a desktop or a laptop
Devices connected to a cloud solution
Today we will talk about IoT devices connected to a mobile. How are these devices connected?
Mainly they use Bluetooth. If they use Wi-Fi, they typically go through a cloud solution acting as a proxy, which means we can categorize them as devices connected to a cloud solution. We will discuss how to test those in a future article.
Before exploring how to test them automatically, let's give a few examples of IoT devices connected to a mobile:
Watch connected to a mobile (Apple Watch, FitBit, etc.)
Beverage coolers and freezers
One last point before starting: this article covers the software part of the tests; the hardware part is not covered.
The first challenge is how to get feedback from the IoT device. IoT devices usually have a very simplified interface with a very limited scope of functionality, but when we test, we need access to a bit more: detailed logs, feedback from functions that return nothing, and a notification when the device changes state. Usually the hardware and firmware vendor needs to provide a board that simulates the hardware while running the real firmware, exactly as on a real device. What we actually test is whether the firmware reacts properly according to the specification, while the firmware interacts with the board as if it were real hardware.
When we talk about mobile phones, we need to cover a huge number of device versions (hardware and OS). And sometimes the way Bluetooth behaves can vary between phone models.
There is no end-to-end solution for this type of architecture. You will need to take what already exists and what you have developed, and put them together.
Architecture and test
To test this architecture, we could use a Java test factory based on Appium, Selenium and Rest-Assured. This factory could be driven by a BDD (behavior-driven development) framework such as Cucumber to define the test cases. The only missing part is the machine controller.
As mentioned before, the main challenge is to set up a hardware emulator that runs the real firmware and gives full access to all input and output logs. This emulator is handled by a custom-developed machine controller. The controller interfaces with the device through a serial port and is expected to parse the MCP messages.
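Since the MCP message format is vendor-specific, here is only a minimal sketch of what the parsing side of such a machine controller could look like, assuming a simple hypothetical frame layout (start byte, length, payload, XOR checksum); the real protocol will certainly differ:

```java
import java.nio.charset.StandardCharsets;

// Hypothetical frame parser for messages received over the serial port.
// Assumed layout: [0x02 STX][1-byte payload length][payload][1-byte XOR checksum].
public final class McpFrameParser {
    public static final byte STX = 0x02;

    /** Extracts the payload of one frame, or throws if the frame is invalid. */
    public static String parse(byte[] frame) {
        if (frame.length < 3 || frame[0] != STX) {
            throw new IllegalArgumentException("missing STX header");
        }
        int length = frame[1] & 0xFF;
        if (frame.length != length + 3) {
            throw new IllegalArgumentException("length field mismatch");
        }
        byte checksum = 0;
        for (int i = 2; i < 2 + length; i++) {
            checksum ^= frame[i];
        }
        if (checksum != frame[frame.length - 1]) {
            throw new IllegalArgumentException("bad checksum");
        }
        return new String(frame, 2, length, StandardCharsets.US_ASCII);
    }

    /** Builds a valid frame around a payload (useful to simulate the device side). */
    public static byte[] build(String payload) {
        byte[] data = payload.getBytes(StandardCharsets.US_ASCII);
        byte[] frame = new byte[data.length + 3];
        frame[0] = STX;
        frame[1] = (byte) data.length;
        System.arraycopy(data, 0, frame, 2, data.length);
        byte checksum = 0;
        for (byte b : data) checksum ^= b;
        frame[frame.length - 1] = checksum;
        return frame;
    }

    public static void main(String[] args) {
        byte[] frame = build("TEMP=4.0");
        System.out.println(parse(frame)); // round-trip: prints TEMP=4.0
    }
}
```

Keeping a build() counterpart next to parse() also makes it possible to unit-test the controller without the physical board.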
Regarding Appium, its setup is required to test a native or web application running on a mobile device.
To test the backends, let's assume most of them are accessible through a web interface or a REST API; the best tools for that are Selenium for the web and Rest-Assured for the API.
The major effort in setting up this kind of end-to-end test is the machine controller, which can take more than 50% of the time required to set up the environment. What is missing is a standard protocol to manage devices over serial. I am also quite sure that this pain will be solved soon, either by a major automated-test-framework vendor (HP, Tosca) or by an open-source community. But in the meantime… good luck!
On Thursday 30 November 2017, with Itecor, we released for one of its customers (in the health industry) a tool to set up an MTP (Master Test Plan). It allows defining a test strategy, the requirements and the test cases. The tool can also export the requirements and the test cases to TestLink.
With this tool, the customer will organize a standardized test strategy across hundreds of projects. With this MTP, it will centralize all test management in a TestLink instance that will also handle all the test campaigns and define the automated tests.
The tool also has a wizard that helps the project manager and the product manager define a test strategy. The main constraint in this implementation is the low maturity level in test management; this tool should take the organisation to the next level of maturity.
The next step is to define a global automated test strategy and implement Tosca as the automated testing tool.
On Friday 17 November 2017, with Itecor, we released for one of its customers (in the food and beverage industry) an IoT test lab. It allows running automated tests continuously against IoT devices.
In the DevOps model there are many new test challenges. One of them is automating tests with IoT devices without a human interacting with the device: everything is handled by a custom controller for the IoT devices.
Appium Desktop is a good way to understand the Appium mechanism. You will see how the elements are identified and what types of interaction you can accomplish. After that, we can start industrializing with a test factory.
Today we are focusing on an iOS application, so we will use an iOS test application, which requires a Mac environment.
There are 2 main purposes for using Appium Desktop:
Investigate how the application can be automated and identify the objects.
Use it as an Appium server instead of using Node.js.
We will set up Appium Desktop on a Mac (10.12).
Install Xcode from the App Store. It is free.
Run Xcode a first time and accept the license agreement.
Here we use a test application that lets you practice and understand how to use Appium. (http://appium.s3.amazonaws.com/TestApp7.1.app.zip)
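As a sketch, these are the kinds of capabilities you would feed to Appium to start a session against this TestApp. A plain Map stands in here for Appium's DesiredCapabilities class, and the platform version, device name and app path are assumptions to adapt to your own setup:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the desired capabilities for an Appium session against the
// TestApp in the iOS simulator. Values marked "assumed" must match your
// environment; a plain Map stands in for Appium's DesiredCapabilities.
public final class TestAppCapabilities {
    public static Map<String, String> build() {
        Map<String, String> caps = new LinkedHashMap<>();
        caps.put("platformName", "iOS");
        caps.put("platformVersion", "10.3");          // assumed simulator version
        caps.put("deviceName", "iPhone 7 Simulator"); // assumed device
        caps.put("automationName", "XCUITest");
        caps.put("app", "/path/to/TestApp.app");      // unzipped TestApp bundle
        return caps;
    }

    public static void main(String[] args) {
        System.out.println(build());
    }
}
```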
Appium shows the screen of the iOS simulator (1), the inspector pane (2) and the detailed properties of the selected element (3). (4) is the real simulator running on your machine: Appium uses a simulator to interact with your application. Appium does not provide its own simulator, and this is a good approach; we can compare the simulator to what the browser is for Selenium.
When you double-click an element in (1), you will see in (2) the name of the selected element and in (3) all its properties. In this example you could "Send Keys" or "Tap" on the selected field.
Now you can play with Appium and see how you could automate tests with your app. It is a good way to see the challenges you may face when automating your iOS, Android or Windows Phone app.
People often mix up stress tests and load tests in software development. So let's demystify a bit.
A stress test tries to push the application to its limits and produce clear KPIs against the SLA. A stress test has 2 aspects: a positive test with the whole system working optimally, and a negative test with some components already broken, to find the limits of the system in degraded mode.
A load test is a scaled-up, controlled-volume test. The main goal is to identify the bottlenecks of the system at different levels of volume.
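To make the distinction concrete, here is a small deterministic sketch: the service is only modelled (flat latency up to a capacity, linear degradation beyond it), and the load test steps up the volume until the SLA breaks. All the figures are assumptions for illustration; a real test would measure a live system with a proper tool:

```java
// Deterministic sketch contrasting a load test and a stress test against a
// modelled service: below CAPACITY_RPS, response time is flat; above it, it
// degrades linearly. All numbers are hypothetical.
public final class LoadVsStressSketch {
    static final int CAPACITY_RPS = 100;   // assumed saturation point
    static final int BASE_LATENCY_MS = 50; // assumed latency under light load
    static final int SLA_MS = 200;         // assumed SLA taken from the KPIs

    /** Modelled response time at a given request rate. */
    static int latencyAt(int requestsPerSecond) {
        if (requestsPerSecond <= CAPACITY_RPS) return BASE_LATENCY_MS;
        return BASE_LATENCY_MS * requestsPerSecond / CAPACITY_RPS;
    }

    /** Load test: step up the volume, report the first level that breaks the SLA. */
    static int findBottleneck(int stepRps, int maxRps) {
        for (int rps = stepRps; rps <= maxRps; rps += stepRps) {
            if (latencyAt(rps) > SLA_MS) return rps;
        }
        return -1; // SLA held at every level tested
    }

    public static void main(String[] args) {
        for (int rps = 100; rps <= 500; rps += 100) {
            System.out.println(rps + " req/s -> " + latencyAt(rps) + " ms");
        }
        System.out.println("SLA first broken at " + findBottleneck(100, 1000) + " req/s");
    }
}
```

A stress test would keep pushing past that breaking point (and repeat the run with some components disabled) to characterize the degraded mode, rather than stopping at the first SLA violation.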
Here are a few tools that can do both types of tests:
Before starting to automate tests, you need to take a step back and define a few aspects.
Consider the test team members
Depending on the test team members, you need to think about what kind of test scripts you are going to adopt. Depending on the tool (Tosca, UFT, Ranorex or Selenium), there are 3 ways of scripting:
Record: this is an easy way, but it can be expensive to maintain. The profiles adopting this way of working are business users.
Script: this is a fairly quick way to script, and maintenance is less expensive than with Record. It requires a test automation expert to develop and maintain the scripts. Any tool can achieve this kind of automation.
Data-driven and workflow: this requires both profiles, a business user and a test automation expert. It can take more time to implement, but maintenance is much easier and the scripts can be reused by the business user without requiring an expert.
Ranorex supports the data-driven part, but not workflow management.
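As a sketch of the data-driven idea: the workflow is scripted once by the test automation expert, and each data row (which a business user could edit in a spreadsheet) drives one run. The login rule and the data below are purely hypothetical, and the workflow step is stubbed instead of driving a real UI or API:

```java
import java.util.Arrays;
import java.util.List;

// Data-driven sketch: one scripted workflow, many data rows.
public final class DataDrivenSketch {
    /** One test data row: the inputs plus the expected outcome. */
    static final class LoginCase {
        final String user;
        final String password;
        final boolean shouldSucceed;
        LoginCase(String user, String password, boolean shouldSucceed) {
            this.user = user;
            this.password = password;
            this.shouldSucceed = shouldSucceed;
        }
    }

    /** The reusable workflow step, stubbed; a real one would drive the UI or the API. */
    static boolean login(String user, String password) {
        return "alice".equals(user) && "secret".equals(password); // stub rule
    }

    /** Runs every data row through the same workflow; returns how many rows passed. */
    static int runAll(List<LoginCase> rows) {
        int passed = 0;
        for (LoginCase row : rows) {
            if (login(row.user, row.password) == row.shouldSucceed) passed++;
        }
        return passed;
    }

    public static void main(String[] args) {
        List<LoginCase> rows = Arrays.asList(
            new LoginCase("alice", "secret", true),  // happy path
            new LoginCase("alice", "wrong", false),  // bad password
            new LoginCase("bob", "secret", false));  // unknown user
        System.out.println(runAll(rows) + "/" + rows.size() + " rows passed");
    }
}
```

Adding a new case is a one-line data change, which is exactly why maintenance stays cheap in this model.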
Consider Manual vs Automated testing
You should not automate everything; keep some test cases for manual testing. To decide which ones should be automated, keep in mind the following:
How many times does the same script need to be repeated? If you are running it more than 4 times, you should automate it.
If your tests are UAT (user acceptance tests), exploratory tests, or complex tests with many asynchronous processes, you should consider manual testing.
Also prioritize the test cases based on their business criticality: the more critical a test case is, the stronger the case for automating it.
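The "more than 4 times" rule of thumb can be sanity-checked with simple break-even arithmetic: automation pays off once the cumulative manual effort exceeds the cost of building plus running the script. The effort figures below are hypothetical:

```java
// Break-even check: after how many runs is automation cheaper than manual?
// All costs are in the same unit (here, minutes) and purely illustrative.
public final class AutomationBreakEven {
    /** Smallest number of runs after which automating is cheaper than staying manual. */
    static int breakEvenRuns(double buildCost, double manualRunCost, double autoRunCost) {
        if (autoRunCost >= manualRunCost) return -1; // automation never pays off
        int runs = 1;
        while (buildCost + runs * autoRunCost >= runs * manualRunCost) {
            runs++;
        }
        return runs;
    }

    public static void main(String[] args) {
        // Assumed: 2h to build the script, 30min per manual run, 1min per automated run.
        System.out.println("break-even at " + breakEvenRuns(120, 30, 1) + " runs");
    }
}
```

With those assumed figures the break-even lands at 5 runs, which is consistent with the "automate after 4 repetitions" heuristic above.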
Consider Test Data Strategy
Ask yourself the following questions:
Do you need real data whose state depends on a process? That means your data will change from the beginning to the end of the test. You may need a copy of a production database, with data anonymization and a way to reset it; this can be a complicated task. You could also use an advanced service virtualization tool to avoid copying the database and get an instant "refresh", but that path is also quite expensive in licenses or in man-days.
Does your data drive the tests? In other words, do you need not only a copy of the database, but also "properties" data to drive all the test scenarios?
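The anonymization mentioned above can be sketched as a deterministic masking step: the same real value always maps to the same fake value, so referential integrity between tables survives, but the personal data is gone. A real setup would use a salted cryptographic hash; String.hashCode() below is only for illustration:

```java
// Deterministic anonymization sketch for a production-data copy.
public final class AnonymizerSketch {
    /** Replaces a name with a stable pseudonym (same input, same output). */
    static String maskName(String name) {
        return "user_" + Integer.toHexString(name.hashCode());
    }

    /** Masks the local part of an email but keeps the domain, so routing-related tests still work. */
    static String maskEmail(String email) {
        int at = email.indexOf('@');
        return maskName(email.substring(0, at)) + email.substring(at);
    }

    public static void main(String[] args) {
        System.out.println(maskName("Alice Martin"));
        System.out.println(maskEmail("alice.martin@example.com"));
    }
}
```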
Define the right set of data that covers all the business risks. When you generate all the test scenarios for the same workflow, make sure you optimize the number of scenarios without being systematic; otherwise you could end up with an exponential number of scenarios. See an example below:
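For instance, with four hypothetical parameters, generating every combination already yields 72 scenarios, while a minimal "each-choice" selection (every value used at least once, the other parameters held at a default) needs only 4; pairwise coverage would sit in between. The parameters below are assumptions for illustration:

```java
import java.util.Arrays;
import java.util.List;

// Scenario-count illustration: exhaustive combinations vs. each-choice coverage.
public final class ScenarioCount {
    static final List<List<String>> PARAMS = Arrays.asList(
        Arrays.asList("Visa", "Mastercard", "Amex"),        // payment type
        Arrays.asList("CHF", "EUR", "USD"),                 // currency
        Arrays.asList("guest", "registered"),               // account type
        Arrays.asList("web", "iOS", "Android", "desktop")); // channel

    /** Exhaustive: the product of all value counts (grows exponentially). */
    static long exhaustive() {
        long n = 1;
        for (List<String> p : PARAMS) n *= p.size();
        return n;
    }

    /** Each-choice: only as many scenarios as the largest parameter has values. */
    static int eachChoice() {
        int max = 0;
        for (List<String> p : PARAMS) max = Math.max(max, p.size());
        return max;
    }

    public static void main(String[] args) {
        System.out.println("exhaustive scenarios: " + exhaustive());  // 72
        System.out.println("each-choice scenarios: " + eachChoice()); // 4
    }
}
```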
Consider Test Script Maintenance
When you create automated tests, you will later need to maintain them, for 3 main reasons:
"Repairing" test cases after business changes.
Enhancing test cases based on issues found in UAT (user acceptance testing) or later in production.
Optimizing or correcting tests that produce false positives.
Consider the right scripting tool
Based on all your SUT (system under test) technologies, you need to find the right scripting tool depending on the features and your budget.
When you start to have too many UI tests and it takes hours to run them, maybe something is wrong in terms of automated test strategy.
The first reaction when the runs take ages: schedule them nightly, revise them one by one, eventually find duplicates, and finally try to parallelize them where possible. And if this is still not enough?
Maybe you should identify the most critical tests (20-30%) and modify the others to use API calls instead of the UI.
When you test a modern application with multiple layers, the UI layer should contain no logic and no data. That means that whether you execute at the API level or the UI level, you should get the same results, but at the API level the tests run much faster. In fact, what you should test on the UI side is whether the UI behaves properly on top of the API and whether its security is aligned with the API; everything else should be done on the API side.
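A minimal sketch of that rule, with a hypothetical discount service: the logic lives in one service, the API layer returns its data, and the UI layer only formats it, so a check run at either level must agree:

```java
// "No logic in the UI" sketch: both layers delegate to the same service.
public final class ApiVsUiSketch {
    /** Business logic lives in exactly one place. */
    static int discountedPrice(int price, int discountPercent) {
        return price - price * discountPercent / 100;
    }

    /** API layer: thin wrapper that returns data. */
    static int apiGetPrice(int price, int discount) {
        return discountedPrice(price, discount);
    }

    /** UI layer: only formats what the service returns, adds no logic of its own. */
    static String uiRenderPrice(int price, int discount) {
        return "CHF " + discountedPrice(price, discount);
    }

    public static void main(String[] args) {
        System.out.println(apiGetPrice(200, 15));   // 170
        System.out.println(uiRenderPrice(200, 15)); // CHF 170
    }
}
```

Because the two layers cannot disagree, the bulk of the functional checks can stay at the fast API level, with only a thin set of UI checks covering rendering and security alignment.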
Moving tests from UI to API is not trivial, because you may also have to modify all the UI tests to cover the alignment between UI and API.
This is why you should think from the beginning about adopting the right test automation strategy, with a good balance between API and UI!
For projects using open-source tools, Rest-Assured is a good choice to automate API tests. In this GitHub repository, you can find an example of how to write API tests.
In 2017 we can say there are 3 ways of doing automated tests. The typology of these 3 ways is based on their maintenance cost.
The easiest one, but the most complex to maintain, is the Record type. All the commercial tools, and even Selenium with Selenium IDE, offer this approach. But when the SUT (system under test) changes, the automated tests can be drastically compromised and require a full rework of the test scripts.
The second way, more developer-oriented, is the Scripted type. Maintenance is less complex and can absorb some design changes, provided the script and the UI elements are properly identified.
The third way is the Workflow and Data-Driven type: there are still scripts, but they are easier to maintain. If a step in the workflow changes, you only need to change that step. Almost all the commercial tools offer this way of working; with Selenium, using BDD with Cucumber works the same way.
Now, in terms of implementation, the most expensive to maintain is the easiest to implement.
So when it is time to choose how you want to implement tests, you need to forecast how complex they are, how many tests you want, and how many times they will run.