Selenium : The hell of Angular

Recently I had some trouble automating an Angular application with Selenium and Serenity BDD.

I found a component that helps a lot: NGWebDriver.

How do you find an element and interact with it?

import com.paulhammant.ngwebdriver.ByAngularCssContainingText;

waitFor(ExpectedConditions.visibilityOfElementLocated(By.xpath("//my-form[@class='myclass']")));
find(ByAngularCssContainingText.xpath("//my-form[@class='myclass']")).click();

It can also be handy to wait for Angular to finish loading everything by using:

waitForAngularRequestsToFinish();
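
For reference, here is a minimal sketch of how the same wait can be done with NGWebDriver directly on a plain WebDriver; the ChromeDriver and the URL are just assumptions for illustration:

import com.paulhammant.ngwebdriver.NgWebDriver;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class AngularWaitSketch {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();                  // any WebDriver instance
        driver.get("https://example.com/my-angular-app");       // hypothetical Angular page

        // NgWebDriver hooks into the page through the JavascriptExecutor
        NgWebDriver ngWebDriver = new NgWebDriver((JavascriptExecutor) driver);

        // Block until Angular has no pending HTTP requests or digest cycles
        ngWebDriver.waitForAngularRequestsToFinish();

        driver.quit();
    }
}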

I also had to revise the way I work with XPath. Here is a good tutorial: https://www.guru99.com/xpath-selenium.html

JMeter

Here are some notes I took while exploring JMeter.

Prerequisites

  • JMeter 5.0
  • Ensure the location of JMeter's bin folder has been added to the PATH
  • JRE 1.8

How to start the JMeter GUI

  • Go to where you downloaded the binaries and run `jmeter.bat` from the command line, or simply type jmeter.bat if the JMeter bin folder is in your PATH.
  • Do not run performance tests with the GUI. Use it only for designing and debugging with a low load.

How to run JMeter from the command line

  • jmeter -n -e -l appLog.csv -o appReport -t app.jmx
    • `-n` to run in cli mode
    • `-e` to create a report at the end of the run. It requires `-l`
    • `-l` log file name and path
    • `-t` test plan file and path
    • `-o` path where the report is generated

Record a script

  • Use the HTTP(S) Test Script Recorder with port 8000
    • On the Test Plan root > Add > Non-Test Elements > HTTP(S) Test Script Recorder
  • Use Firefox and set its proxy to port 8000
  • Record under the “HTTP(S) Test Script Recorder”, then copy the steps into each “Recording Controller”
    • On a Thread Group > Add > Logic Controller > Recording Controller
  • Record each step with a different label
    • HTTP Sampler settings : Prefix
    • Update each step with a meaningful step name
    • Avoid “Retrieve All Embedded Resources” to keep it simple
    • Avoid “Redirect Automatically”
    • Allow “Follow Redirects”
    • Allow “Use KeepAlive”

Thread Group structure

  • “HTTP Request Defaults” : to set up the default root URL
    • On a Thread Group > Add > Config Elements > HTTP Request Defaults
  • “HTTP Cookie Manager” : to handle cookies and clear them at each iteration
    • On a Thread Group > Add > Config Elements > HTTP Cookie Manager
  • “User Parameters” : to handle the variables and users
    • On a Thread Group > Add > Pre Processors > User Parameters
  • “Debug Sampler” : to retrieve all the variables for debugging purposes
    • On a Thread Group > Add > Sampler > Debug Sampler
  • “View Results Tree” : to see all the step details
    • On a Thread Group > Add > Listener > View Results Tree
  • “Summary Report” : to see the metrics
    • On a Thread Group > Add > Listener > Summary Report

To capture a value from a response

  • Use the JSON Extractor : it extracts a value from a JSON response using a JSON Path expression and stores it in one or more variables (“Names of created variables”, separated by commas); it can be applied to the main sample result or to an existing variable (“JMeter Variable Name to use”). See the example below.
    • On the HTTP request > Add > Post Processors > JSON Extractor
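
As a hypothetical example, suppose a login sampler returns the JSON below and you want to reuse the token in later requests:

{
  "user": "demo",
  "token": "abc123",
  "roles": ["tester", "admin"]
}

Setting “Names of created variables” to authToken and “JSON Path expressions” to $.token stores the value, which can then be referenced as ${authToken} in the following samplers.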

Other references

  • master/slave architecture for distributed testing : https://jmeter.apache.org/usermanual/jmeter_distributed_testing_step_by_step.pdf

Selenium : How to use JavaScript to show a button or move a window

I had some trouble lately with an Angular interface, so here are some interesting workarounds I found.

Scenario 1 : You have a list of checkboxes to click. They are not hidden in the DOM, but they sit behind another element, such as a pop-up window or a menu button, even though they would be at least partially visible to a human user.
The workaround : move the checkbox to a visible position on the page. How?

JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].scrollIntoView(true);", checkBoxElement);

Scenario 2 : You want to move a pop-up window by changing its position with JavaScript.

JavascriptExecutor js = (JavascriptExecutor) driver;
js.executeScript("arguments[0].setAttribute('style', 'top: 0px');", windowElement);

Advanced automated testing using AI

Business case

When a customer's website is critical for their business and a lot of users interact with it, some non-compliant content or issues may slip through even with a content review process. On top of that, pictures sometimes get less attention than the text.
But how could we test picture content?

Quality objectives

What are the quality objectives for the pictures displayed on the website? Here is a non-exhaustive list:

  1. Picture objects are relevant to the description.
  2. Avoid certain types of object content in pictures.
  3. The picture content doesn’t contain unauthorised logos.
  4. Detect when people in the picture don’t look happy enough.
  5. Detect when the picture colours don’t follow the graphical charter.
  6. Detect misspelled titles in the picture.

What are the technology solutions to achieve this goal?

There could be several solutions, but let’s narrow it down to one particular solution that we are working on.

We have started from a test factory based on Serenity, Cucumber and Selenium.

Depending on the test scenario, when a picture needs to be validated, we send it to a service that validates the picture content against acceptance criteria such as those defined in the quality objectives above.

How does this picture validation work?

The service is based on the Google Vision API. This API is able to identify objects in a picture, but it is not perfect and only provides a relevancy score for each candidate label. To be accurate, the AI needs some context, and that is what we provide so that it can return better relevancy and reach a final answer.
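
As an illustration, a raw label-detection call with the Google Cloud Vision Java client could look roughly like the sketch below; the file path is hypothetical, and the real service adds the context and acceptance-criteria logic on top of these raw labels:

import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.BatchAnnotateImagesResponse;
import com.google.cloud.vision.v1.EntityAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Collections;

public class PictureLabelSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical picture grabbed from the page under test
        ByteString content = ByteString.copyFrom(Files.readAllBytes(Paths.get("picture-under-test.jpg")));
        Image image = Image.newBuilder().setContent(content).build();
        Feature labelDetection = Feature.newBuilder().setType(Feature.Type.LABEL_DETECTION).build();
        AnnotateImageRequest request = AnnotateImageRequest.newBuilder()
                .addFeatures(labelDetection)
                .setImage(image)
                .build();

        try (ImageAnnotatorClient vision = ImageAnnotatorClient.create()) {
            BatchAnnotateImagesResponse response = vision.batchAnnotateImages(Collections.singletonList(request));
            for (EntityAnnotation label : response.getResponses(0).getLabelAnnotationsList()) {
                // Each label comes back with a relevancy score between 0 and 1
                System.out.printf("%s -> %.2f%n", label.getDescription(), label.getScore());
            }
        }
    }
}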

This service is not free: Google charges for every transaction against the API. So some caching also needs to be taken into account when the same picture with identical content is analysed several times.
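
One simple way to avoid paying for the same analysis twice is to key a cache on a hash of the picture bytes. This is only a sketch of the idea, not the service's actual implementation:

import java.security.MessageDigest;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LabelCache {
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();

    // Returns cached labels when an identical picture has already been analysed
    public List<String> labelsFor(byte[] pictureBytes) {
        String key = sha256(pictureBytes);
        return cache.computeIfAbsent(key, k -> callVisionApi(pictureBytes));
    }

    private static String sha256(byte[] bytes) {
        try {
            StringBuilder hex = new StringBuilder();
            for (byte b : MessageDigest.getInstance("SHA-256").digest(bytes)) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    private List<String> callVisionApi(byte[] pictureBytes) {
        // Placeholder for the real (billed) Google Vision call
        return Collections.emptyList();
    }
}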

This Google API relies on machine learning and Google's big data. It is a very popular service, even used by the US Air Force to identify objects in pictures taken by drones or satellites. Each day it becomes more and more “clever”. That is why we have chosen this API to build our picture object identification service.

The identified objects can include things like human constructions, human faces and their expressions, animals and plants, and much more.

AI or not AI

Some AI “experts” may not agree with this, but we want to keep it simple. For us, AI means “a branch of computer science dealing with the simulation of intelligent behavior in computers” (Merriam-Webster). The Google Vision API relies on Google's big data and machine learning to simulate this human behaviour, so for that reason we consider it AI.

AI service release

We are still working on this service. It is in the alpha stage and we are preparing an initial release for Q1 2019. We are in an optimisation and testing phase. Stay tuned for more info on Twitter @fanaticaltest.

Serenity BDD in FanaticalTest Web Test Factory

We are super excited to release today a new web test factory that allows you to create functional automated tests for any web application or website. The factory is open to everyone and anyone can contribute.

What’s new in 2.0?

We have removed our own framework and decided to adopt the Serenity BDD framework, in order to focus on test delivery rather than reinventing the wheel. This framework also has so many contributors that it stays up to date on its own.

The factory also ships with a Gradle build that includes some example tasks to manage several levels of tests.

Last but not least, we provide a Docker container for the Selenium agent so you can run a full Selenium Grid.
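
To give an idea, a docker-compose.yml for such an agent could look roughly like the sketch below (the image and tag are only an example; the file shipped with the factory may differ):

version: "3"
services:
  chrome:
    # Standalone Chrome node with a built-in VNC server so the run can be watched live
    image: selenium/standalone-chrome-debug:3.141.59
    ports:
      - "4444:4444"   # WebDriver endpoint used by the tests
      - "5900:5900"   # VNC endpoint (vnc://127.0.0.1)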

Getting started quickly

  1. Clone or fork the test factory.
  2. Open the project in your favorite Java IDE.
  3. In the terminal window, go to the project's root and start the Selenium agent:
    docker-compose up -d
  4. Open a VNC connection to the agent to watch the tests live:
    vnc://127.0.0.1
  5. Run the smoke test:
    gradle clean smokeTest
  6. When the test run is complete, open [project-root]/target/site/serenity/index.html in your browser and you should see a report similar to this example.

 

Master Test Plan Tool release

On Thursday 30th Nov 2017, together with Itecor, we released for one of its customers (in the health industry) a tool to set up an MTP (Master Test Plan). It supports defining a Test Strategy, the Requirements and the Test Cases. The tool also allows exporting the Requirements and the Test Cases to TestLink.

With this tool, the customer will organise a standardised Test Strategy across hundreds of projects. With this MTP, it will centralise all Test Management in a TestLink instance that will also handle all the Test Campaigns and define the automated tests.

The tool also has a wizard that helps the project manager and the product manager define a Test Strategy. The main constraint in this implementation was the low level of maturity in Test Management. This tool should take the organisation to the next level of maturity.

Now the next step is to define a Global Automated Test Strategy and implement Tosca as an Automated Testing Tool.

IoT Test Lab – New Release

On Friday 17th of November, together with Itecor, we released for one of its customers (in the food and beverage industry) an IoT Test Lab. It allows automated tests to run continuously against IoT devices.

In the DevOps model there are many new testing challenges. One of them is automating tests with IoT devices without a human interacting with the device. Everything is handled by a custom controller for the IoT devices.

The architecture model used was the one described in one of the previous articles : IoT testing – Devices connected to a mobile.

With this lab, the customer ensures that on each code commit of any application interacting with its IoT devices, the application is properly tested and validated without regressions.

Itecor and we are proud of this new achievement and foresee new opportunities in IoT testing in the near future.

IoT testing – Devices connected to a mobile

In terms of architecture there are plenty of ways to connect to an IoT device; at a minimum there are:

  • Devices connected to a mobile
  • Devices connected to a desktop or a laptop
  • Devices connected to a cloud solution

Today we will talk about IoT devices connected to a mobile. How are these devices connected?

Mainly they use Bluetooth. If they use Wi-Fi, then they go through a cloud solution as a proxy, which means we can categorise them as devices connected to a cloud solution. We will discuss how to test those in a future article.

Before exploring how we can test them automatically, let's give a few examples of IoT devices connected to a mobile:

  1. Watch connected to a mobile (Apple Watch, FitBit, etc.)
  2. Beverage coolers and freezers
  3. Coffee machines
  4. Construction tools
  5. Healthcare equipment

One last point before starting: this article covers the software part of the tests. The hardware will not be covered.

Challenges

  1. The first challenge is: how do we get feedback from the IoT devices? IoT devices usually have a very simplified interface with a very limited scope of functionality, but when we test, we need access to a bit more than that: detailed logs, feedback from functions that return nothing, and a notification when the device changes state. Usually the hardware and firmware vendor needs to provide a board that simulates the hardware while running the real firmware, as on a real device. In fact, what we test here is whether the firmware reacts properly as per the specification, while the firmware interacts with the board as if it were real hardware.
  2. When we talk about mobile phones, we need to cover a huge number of device versions (hardware and OS). And sometimes the way Bluetooth behaves can vary between different phone models.
  3. There is no end-to-end solution for this type of architecture. You will need to take what already exists and what you have developed, and put them together.

Architecture and test

Here is an example of an architecture and the tools to automate the tests.

To test this architecture, we could use a Java test factory based on Appium, Selenium and Rest-Assured. This factory could be driven by a BDD (behaviour-driven development) tool like Cucumber to define the test cases. The only missing part is the machine controller.
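
As an illustration only (the step wording and the glue behind it are made up), a Cucumber step definition in such a factory could tie the layers together like this:

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

public class CoffeeMachineSteps {

    @Given("the coffee machine is paired with the mobile app")
    public void theMachineIsPaired() {
        // Appium drives the pairing flow in the mobile app
    }

    @When("I start a brew from the mobile app")
    public void iStartABrew() {
        // Appium taps the brew button
    }

    @Then("the machine reports a brewing state")
    public void theMachineReportsBrewing() {
        // The custom machine controller reads the device state over serial and the test asserts on it
    }
}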

As mentioned before, the main challenge is to set up a hardware emulator in order to test the real firmware and have full access to all input and output logs. This emulator is handled by a custom-developed machine controller. The controller interfaces with the device through a serial port and is expected to parse the MCP messages.
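
A rough sketch of what such a controller could look like, assuming the jSerialComm library and a made-up port name and command (the real parsing depends on the firmware's message format):

import com.fazecast.jSerialComm.SerialPort;
import java.nio.charset.StandardCharsets;

public class MachineControllerSketch {
    public static void main(String[] args) {
        // Hypothetical serial port exposed by the hardware emulator board
        SerialPort port = SerialPort.getCommPort("/dev/ttyUSB0");
        port.setBaudRate(115200);
        port.setComPortTimeouts(SerialPort.TIMEOUT_READ_SEMI_BLOCKING, 2000, 0);

        if (port.openPort()) {
            // Send a (made-up) command to the firmware and read back its raw response
            byte[] command = "STATUS\r\n".getBytes(StandardCharsets.US_ASCII);
            port.writeBytes(command, command.length);

            byte[] buffer = new byte[256];
            int read = port.readBytes(buffer, buffer.length);
            System.out.println("Device said: " + new String(buffer, 0, Math.max(read, 0), StandardCharsets.US_ASCII));

            port.closePort();
        }
    }
}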

Regarding Appium, this setup is required to test a native or web application running on a mobile device.

To test the backend, let's assume most of it is accessible through a web interface or a REST API. The best tools for that are Selenium for the web and Rest-Assured for the API.
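
For the API side, a minimal Rest-Assured check could look like this (the endpoint, field and value are hypothetical):

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;

public class BackendApiSketch {

    public void deviceStatusIsReported() {
        given()
            .baseUri("https://backend.example.com")    // hypothetical backend URL
        .when()
            .get("/api/devices/42/status")             // hypothetical endpoint
        .then()
            .statusCode(200)
            .body("state", equalTo("CONNECTED"));      // hypothetical JSON field and value
    }
}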

Conclusion

The major effort in setting up this kind of end-to-end test is building the machine controller, which can take more than 50% of the time required to set up the environment. What is missing is a standard protocol to manage devices over serial. I'm also quite sure this pain will be solved soon by one of the major automated test framework vendors (HP, Tosca) or by an open-source community. But in the meantime… good luck!

Related Article

Appium Desktop tutorial and setup

Appium Desktop tutorial and setup

Introduction

Appium Desktop is a good way to understand how Appium works. You will see how elements are identified and what types of interaction you can perform. After that, we can start industrialising with a test factory.

Today we are focusing on an iOS application, so we will use an iOS test application, which requires a Mac environment.

There are two main purposes for using Appium Desktop:

  1. Investigate how the application could be automated and identify the objects.
  2. Use it as an Appium server instead of installing Appium through Node.js

Setup

We will set up Appium Desktop on a Mac (macOS 10.12).

  1. Install “Xcode” from the “App Store”. It is free.
  2. Run Xcode a first time and accept the License Agreement.
  3. Start the simulator : “Xcode” > “Open Developer Tool” > “Simulator”
  4. Check the version of your simulator, as shown below:

    iOS 10.3 simulator
  5. Then download and install the latest version of Appium Desktop (https://github.com/appium/appium-desktop/releases), as shown below:

    Download the latest version. Here the latest version was 1.0.1.

Tutorial

  1. Start Appium and leave the default values, as shown below:

    Appium starting screen
  2. Start a new session :

    Appium logging screen
  3. Create a new “Desired Capability” set. Make sure to enter the right version of your iOS simulator!
    {
         "platformName": "iOS",
         "platformVersion": "10.3",
         "deviceName": "iPhone Simulator",
         "app": "http://appium.s3.amazonaws.com/TestApp7.1.app.zip",
         "noReset": true
    }
    
    Add in “Desired Capabilities” the required properties to run the simulator and define the application to test.

    Here we use a test application that allows you to practice and understand how to use Appium. (http://appium.s3.amazonaws.com/TestApp7.1.app.zip)

  4. Appium will show the screen of the iOS simulator (1), the inspector pane (2) and the detailed properties of a selected element (3). Item (4) is the real simulator running on your machine; Appium uses the simulator to interact with your application. Appium does not provide its own simulator, and that is a good approach: we can compare the simulator to the browser for Selenium.
    Appium Desktop (1) (2) (3) and iOS Simulator (4)

    When you select an element in (1) by double-clicking, you will see in (2) the name of the selected element and in (3) all its properties. In this example you could “Send Keys” to or “Tap” the selected field, as in the sketch below.

    How an element is defined and how you can interact with it.
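
Once the element and its properties are known from the inspector, the same interaction can be scripted with the Appium Java client. This is only a sketch; the accessibility id used here is an assumption about the TestApp:

import io.appium.java_client.MobileElement;
import io.appium.java_client.ios.IOSDriver;
import java.net.URL;
import org.openqa.selenium.remote.DesiredCapabilities;

public class TestAppSketch {
    public static void main(String[] args) throws Exception {
        // Same desired capabilities as in the tutorial above
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "iOS");
        caps.setCapability("platformVersion", "10.3");
        caps.setCapability("deviceName", "iPhone Simulator");
        caps.setCapability("app", "http://appium.s3.amazonaws.com/TestApp7.1.app.zip");
        caps.setCapability("noReset", true);

        // Appium Desktop listens on port 4723 by default
        IOSDriver<MobileElement> driver = new IOSDriver<>(new URL("http://127.0.0.1:4723/wd/hub"), caps);

        // "IntegerA" is an assumed accessibility id of the first text field in TestApp
        driver.findElementByAccessibilityId("IntegerA").sendKeys("5");

        driver.quit();
    }
}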

Conclusion

Now you can play with Appium and see how you can automate tests with your app. It is a good way to see the challenges you may face when you try to automate your iOS, Android or Windows Phone app.

Related Article

IoT testing – Devices connected to a mobile