Q1 2016 Calendar Overview

Figure: Q1 2016 mobile market calendar

The year has just started, but as you can see, the market is already busy. Continuing the trend from 2015, Apple is much more active with its bug-fix releases than Android.

Sign up for my quarterly Digital Test Coverage Index report to stay up to date with market trends, top devices, OS versions, and desktop browsers.

Digital Test Coverage Download Page

Happy Testing!

A Few Best Practices Around Mobile Testing and Agility

When looking at the key blockers for Dev and Test teams that are trying to increase their existing test coverage or release more frequently without compromising quality, we see some common pitfalls that can be avoided with a little advance planning.

Let’s look first at the core mobile testing pillars:

Figure: The core mobile testing pillars

The boxes above represent either a full mobile app testing plan or a subset of one. Some of them fit into a functional test cycle, some into regression or unit testing, and some serve as pre-release acceptance tests.

Planning the test coverage and the contents of each iteration in the cycle is critical to the overall velocity of the app life cycle.

To meet both quality and velocity goals, Dev/Test/QE teams ought to organize their tests in a model based on test stability.

Let me explain: when a CI acceptance or functional test cycle includes more tests than needed, without each test first being debugged on a few devices, there is a high risk that some tests will fail due to unexpected pop-ups, bugs in the tests themselves, device-specific issues, and so on. Such tests will obviously damage and block the entire test cycle.

To keep a CI/automation cycle flowing, the recommended practice is to start with a small but robust subset of tests that has already been executed a few times on more than one real device and debugged until it is highly unlikely to get stuck. Only once this suite has been “certified” as stable does it make sense to grow the scope of your cycle, with the right dependencies and validation points, by adding more automated tests.
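
To make this concrete, here is a minimal sketch using JUnit 4's category mechanism (the Stable marker interface and the test names are hypothetical) of how a CI job can run only the subset of tests already certified as stable:

```java
import org.junit.Test;
import org.junit.experimental.categories.Categories;
import org.junit.experimental.categories.Category;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

public class LoginTests {

    // Marker for tests that already ran cleanly on several real devices.
    public interface Stable {}

    @Test
    @Category(Stable.class)
    public void loginWithValidCredentials() {
        // Debugged on multiple devices; safe for unattended CI runs.
    }

    @Test
    public void loginWhileRotatingScreen() {
        // New and still flaky; kept out of the CI suite until certified.
    }
}

// The CI acceptance suite picks up only the certified subset.
@RunWith(Categories.class)
@Categories.IncludeCategory(LoginTests.Stable.class)
@Suite.SuiteClasses(LoginTests.class)
class CiAcceptanceSuite {}
```

Promoting a test that has proven itself is then just a matter of adding the category annotation.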

Such a paced approach may seem trivial, but it does not happen in many organizations. As a result, as soon as a new device is introduced, a new test is added to cover a new feature or screen, or simply an unexpected device pop-up appears, the CI process breaks.

This results in a slowdown of the process, delays in release and development tasks, and frustration.

To summarize:

  • Construct your CI and automation cycle by “certifying” each test case: only once it is stable and can run unattended should you add it to the acceptance test suite
  • Continuously debug your entire relevant test suite whenever a new feature, OS, or device is introduced, to make sure nothing breaks your process
  • Less == More –> Assess which tests are the most valuable, i.e. the candidates most likely to identify bugs, and include those in the cycle; redundant tests just consume time and resources and can put your entire cycle in danger
  • Make sure you can access all of your devices under test (DUT) at all times for development, debugging, and continuous testing
  • Include sufficient debugging artifacts in your test code, whether through try/catch blocks, visual screen/scenario validation, or other debugging logs, outputs, and vitals (as sketched below).
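
As an illustration of that last bullet, here is a minimal sketch of a helper (the class, method, and folder names are hypothetical) that wraps a Selenium action in a try/catch and saves a screenshot as evidence before rethrowing:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class ArtifactHelper {

    // Clicks an element; on any failure, saves a screenshot so the
    // unattended CI run leaves visual evidence behind, then rethrows.
    public static void clickWithEvidence(WebDriver driver, By locator, String stepName)
            throws Exception {
        try {
            driver.findElement(locator).click();
        } catch (Exception e) {
            File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            Files.createDirectories(Paths.get("artifacts"));
            Files.copy(shot.toPath(), Paths.get("artifacts", stepName + ".png"));
            throw e; // still fail the test, but keep the evidence
        }
    }
}
```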

Happy unattended testing!

Selenium Is the New Testing Tool Standard

Seems like the debate in the world of test automation tools is over.

If a few years back HP QTP/UFT (and WinRunner before it) was the standard and most commonly used tool for test automation in the QA space, those days are over.

The shift toward Agile, DevOps, and similar trends, together with a digital transformation that demands multi-platform testing of mobile, web, and IoT within very short time frames, has changed the tools landscape and the testing requirements.

The snapshot below of the most in-demand testing tools shows that the shift already started in 2011, when Selenium passed the HP tools in market adoption.

Figure: QTP vs. Selenium market adoption

Source: http://www.seleniumguide.com/

The requirement today is that testing be done as early as possible in the project life cycle (SDLC), and to enforce this process developers ought to play a significant role. Testing is now developed and executed by all Agile team members: developers, testers, ops people, and others.

For this shift and adoption to grow, the tools need to be tightly integrated into developers’ environments (IDEs), which in the digital space might be Eclipse, Android Studio, Visual Studio, Xcode, or cross-platform IDEs such as PhoneGap or Titanium.

An additional aspect of the adoption of test frameworks such as Selenium and Appium lies in their open-source nature. The flexibility developers have to extend such open-source tools according to their needs is a great advantage over closed testing tools such as UFT, which are disconnected from the IDE and development environments.
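
To give a sense of how lightweight that integration is, here is a minimal Selenium WebDriver smoke test that can live next to the production code in any of those IDEs (the URL and assertion are illustrative):

```java
import org.junit.Assert;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SmokeTest {

    @Test
    public void homePageLoads() {
        WebDriver driver = new FirefoxDriver();  // any WebDriver implementation works
        try {
            driver.get("http://www.example.com/");
            Assert.assertTrue("Page should have a title",
                    driver.getTitle().length() > 0);
        } finally {
            driver.quit();  // always release the browser, even on failure
        }
    }
}
```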

We will continue to monitor the tools space, but it seems that open-source tools are becoming the standard for Agile and DevOps practitioners, who find them suitable for their shift-left activities and for keeping up with market dynamics and competition, as well as great enablers of sustained quality and velocity.

For a heads-up on the future of Selenium, and on the efforts to make the web browser drivers (Chrome, Firefox, IE, etc.) standardized and maintained by the browser vendors themselves, refer to this great session (courtesy of Applitools):

http://testautomation.applitools.com/post/120437769417/the-future-of-selenium-andreas-tolfsen-video

Planning Mobile Test Coverage

In any conversation I take part in, the topic of test coverage comes up, and it is indeed a great challenge for businesses and practitioners alike, whether they are developers or testers (Agile, DevOps, Waterfall, etc.).

Before we get into the how, let’s understand the objectives and the definition of coverage.

Coverage Aspects:

  1. Device coverage
  2. Market coverage
  3. Test case/use case coverage
  4. Environment conditions coverage

When we mention device coverage, we should try to include several relevant factors, not just the DUT (device under test) itself, because that alone is simply not enough.

Device Coverage

Proper device coverage includes a few important properties, and the more permutations you include in your test lab, the higher the coverage you will reach. Some of the must-have properties I recommend including in the mix are:

  • Screen size & Resolution
  • PPI (Pixel per inch)
  • OSV (OS Version)

To that mix, add the leading devices in your market as well as legacy devices that are still popular with many users in various geographies (e.g. Samsung Galaxy S3, iPad 2), so that you cover both legacy and new OS versions in addition to the device characteristics above.

Market Coverage

Let’s understand market coverage. This term relates to a combination of data sources, to which some teams will have access and some won’t. Such coverage will typically combine leading market statistics with organizational web traffic or monitoring reports, which highlight which platforms, browsers, and so on most usage comes from. By combining market and organizational data, teams can best match their target audience and test against what’s right for their customers based on current top usage, while also gaining coverage of new and emerging devices and OS versions that lets them stay on top of market trends.

Another important aspect around coverage is of course the test cases themselves.

Test Case Coverage

Determining the right test cases to execute against each platform, and in each test iteration throughout the SDLC (software development life cycle), is a crucial Agile enabler and an efficiency driver. When an organization has a robust automation foundation, teams can take advantage of it, but they sometimes “fail” it by overloading it with redundant or inefficient test cases that do not add the right value. The key to increasing test case coverage is to combine manual and automated testing (automating as much as possible), but to include only the robust, cross-platform automated and unit tests that are repeatable and valuable for a quick feedback loop between Dev and QA. Leave the platform-specific tests, corner cases, and the like either to manual testing or to a separate job/cycle, to ensure a flawless CI/automation process.

Even with the above in mind, remember that automation without ongoing maintenance and review of the test code will eventually fail, especially on mobile, due to constant platform-specific changes, newly added features, and new, unexpected pop-ups that may block automated tests from running end to end.

Test Environment

Last, for digital test coverage, the user experience and the environment in which the user operates are everything. Failing to cover the right environment will eventually waste testing and development time, since those efforts will run against the wrong, or “happy path only,” environment. A real mobile environment takes into account the following (see the sketch after this list):

  • Network conditions (2G, 3G, WiFi)
  • Background applications running as “background noise” – consuming resources, taking over GPS or camera resources
  • Incoming calls/pop-ups
  • Screen orientation changes while the app is in the foreground
  • Location of the app
  • Locale & language
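
Here is a minimal sketch of exercising two of these conditions with Appium’s Java client on Android; the capability values are illustrative, and the rotate/setConnection calls assume client and server versions that support them:

```java
import java.net.URL;
import io.appium.java_client.android.AndroidDriver;
import io.appium.java_client.android.Connection;
import org.openqa.selenium.ScreenOrientation;
import org.openqa.selenium.remote.DesiredCapabilities;

public class EnvironmentConditionsTest {

    public static void main(String[] args) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("platformName", "Android");
        caps.setCapability("deviceName", "Galaxy S6");        // illustrative
        caps.setCapability("appPackage", "com.example.app");  // illustrative
        caps.setCapability("appActivity", ".MainActivity");   // illustrative

        AndroidDriver driver =
                new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
        try {
            // Orientation change while the app is in the foreground.
            driver.rotate(ScreenOrientation.LANDSCAPE);

            // Drop WiFi and stay on cellular data only.
            driver.setConnection(Connection.DATA);

            // ... run the functional checks under these conditions ...
        } finally {
            driver.quit();
        }
    }
}
```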

When taking all of the above into consideration, organizations can build a test lab that provides sufficient coverage for their product and can easily adjust it based on market and product dynamics.

Happy Testing!

Blog Series (2): Digital Test Coverage – Assuring the Right Mobile/Web Testing Mix

It’s an exciting time to be a digital company. Your customers are engaging with your products on various screens, moving between desktop web browsers to apps on mobile devices. But in the effort to guarantee quality web and mobile experiences, organizations are struggling to find the right testing mix.

It’s true that mobile is far more complex and fragmented than the web, but with so many web browser/OS permutations out there (e.g. Chrome 47 running on Windows 7, 8.1, or 10, or on Mac OS X Yosemite), precise testing becomes a challenge.

To help DevTest teams test more precisely, Perfecto recently published the “Digital Test Coverage Index – Edition 3”, a quarterly report that provides a prescriptive way to build a digital test lab that covers 30%, 50%, or 80% of mobile device and web browser markets in various geographies. The report — intended for organizations just starting their digital journey or trying to move to the next stage — is based on market share data and analysis of enterprise customer usage in Perfecto’s cloud testing lab.

Figure: The 30%-50%-80% coverage model (Digital Test Coverage Index)

Using the 30%-50%-80% coverage model featured in the Index report, teams can more accurately define their desired testing parameters and allocate the recommended devices and virtual machines running the relevant desktop browsers. Teams developing a responsive web design (RWD) application can refer to the Index and then test the app in their lab on the recommended smartphones and tablets alongside the recommended desktop browser/OS permutations.
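
As a minimal sketch of allocating one such permutation through the Selenium 2-era Grid API (the hub URL and version values are illustrative):

```java
import java.net.URL;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class GridPermutationTest {

    public static void main(String[] args) throws Exception {
        // Request one browser/OS permutation recommended by the Index.
        DesiredCapabilities caps = DesiredCapabilities.chrome();
        caps.setPlatform(Platform.WIN8_1);  // e.g. Chrome on Windows 8.1
        caps.setVersion("46");              // browser version from the Index

        WebDriver driver = new RemoteWebDriver(
                new URL("http://grid-hub.example.com:4444/wd/hub"), caps);
        try {
            driver.get("http://www.example.com/");
            // ... run the RWD checks for this permutation ...
        } finally {
            driver.quit();
        }
    }
}
```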

On the subject of web browser/OS mixes: according to our latest Index, among the top 30% of desktop browsers in the U.S. market, Chrome 46 (version 47 was just released and is already well adopted) is by far the leading browser on Windows 7, followed by Firefox 42 on Windows 7 and Safari 9 on Mac OS X El Capitan. The Index report includes the complete 30%-50%-80% matrix of browser/OS and mobile device/OS combinations.


It’s also worth noting that in the browser testing landscape, the Windows 10 platform is gaining momentum and, according to market share numbers, will soon become the second most popular desktop OS in most geographies.

It will take more than looking at a list of smartphones and web browsers to ensure full digital test coverage for native and hybrid mobile apps, mobile web browsers and RWD. So organizations need to combine their existing customer analytics with a regularly updated test coverage index that reflects market adoption rates in various geographies. Another important metric to monitor is the status of legacy platforms that are still relevant enough to test against. For example, the Samsung Galaxy S3 is a leading legacy smartphone in most markets in the same way that we still see many Windows 7 machines even though Windows 8 and Windows 10 are widely available.

For more details on how to test for the full digital experience, download the free Digital Test Coverage Index.

Blog series: (1) Mobile Market Landscape: 2015 Highlights

As we wrap up another year, we thought it would be fun and informative to take a look back at the mobile devices, operating system updates and trends that hit the market this year.

Let’s start with this bird’s-eye view graphic of the important releases that made 2015 such an innovative year in mobile.


Mobile Market 2015 Retrospective Calendar (Source: Perfecto’s Digital Test Coverage Index report)

There were more than 30 significant smartphones and tablets released this year. Many of these devices, such as the iPhone 6S and 6S Plus, the Samsung Galaxy S6, the Galaxy Note 5, and the LG Nexus 5X, quickly became popular on a global scale. When putting together our quarterly Digital Test Coverage Index, we noticed that these newly minted devices ranked high in the indexes in both the U.S. and Europe.

But a device is nothing if it’s not running on an updated operating system — and this year brought 15 OS releases (major and minor), from Apple iOS 8.2 to 9.2, and Android 5.1.1 to Android 6.0 Marshmallow. Microsoft released what it hopes will be a mobile game-changer, Windows 10. All of these OS releases will have an impact on the already fragmented mobile space, keeping DevTest teams busy with more re-testing.

This year we also saw disruptive technologies take center stage, such as Apple’s Force Touch, Touch ID authentication, mobile payments, voice commands, and location-based contextual awareness apps.


Samsung Galaxy View Tablet

Looking ahead, it’s clear that 2016 is going to be just as hectic, fragmented, and EXCITING. In the coming year, enterprises can expect mobile and web user engagement to be key business drivers, forcing many organizations to do rigorous testing of new app features on mobile devices, OSes and web browsers to deliver a memorable user experience to customers.

As a sneak preview: in our latest Digital Test Coverage Index we’ve added a 2016 calendar, and we already see new trends emerging, such as growing tablet screen sizes that aim to replace today’s laptops (e.g. the Samsung Galaxy View tablet with its unique 18.4” screen, and Apple’s iPad Pro with its 12.9” screen).

For more details on important mobile and web test coverage trends, download the Digital Test Coverage Index.

Happy holidays to all. Here’s to successful digital test coverage in 2016!

The Keys To Mobile Test Automation That Works!

In the world of mobile app testing, a developer’s claim that “well, it works on my device” is not acceptable. Mobile is far more fragmented than desktop/web platforms due to the variety of device types, OS versions, and the environment in which apps operate (network impact, roaming, battery drain, etc.).

It is not sufficient to test on one or even a few devices; instead, test apps on a set of devices that adequately covers your end users’ device pool. How can you keep up with testing apps on so many devices, you ask? Easy: build a working mobile test automation strategy that runs the same script across all of your devices, making your Dev/QA teams confident that they are delivering the best app to end users.
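
As a minimal sketch of that idea, the same Appium script can be pointed at a pool of devices simply by swapping the device capability (device names, app path, and endpoint are illustrative):

```java
import java.net.URL;
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.remote.DesiredCapabilities;

public class DevicePoolRun {

    // Illustrative pool; in practice this would come from the lab inventory.
    static final String[] DEVICES = {"Galaxy S6", "Galaxy Note 5", "Nexus 5X"};

    public static void main(String[] args) throws Exception {
        for (String device : DEVICES) {
            DesiredCapabilities caps = new DesiredCapabilities();
            caps.setCapability("platformName", "Android");
            caps.setCapability("deviceName", device);
            caps.setCapability("app", "/builds/latest/app.apk");  // illustrative

            AndroidDriver driver =
                    new AndroidDriver(new URL("http://127.0.0.1:4723/wd/hub"), caps);
            try {
                // ... the same functional script runs unchanged on each device ...
            } finally {
                driver.quit();
            }
        }
    }
}
```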

From the experience Perfecto Mobile gathers every day, we came up with the must-haves for building a continuous, unattended mobile test automation strategy that provides immediate feedback to Dev/QA teams.

Relevant Device Selection

It is one of the toughest dilemmas in today’s market: identifying the right device/OS mix against which teams should develop and test their mobile app. Base your selection on the right criteria, such as:

    1. Target market, competition, and end users –> which devices and OS versions your end users actually use the most (based on organizational analytics data)
    2. Generic market statistics –> top-selling devices (smartphones and tablets) in the relevant regions. Such data can be retrieved from analyst research by IDC, Flurry, comScore, and others.
    3. Change rate and market dynamics –> as we all see, the mobile market is in constant change, and your teams need to “listen” to the trends and changes, adapt to them, and adjust the strategy accordingly

Unattended mobile app test automation that supports continuous integration and fast feedback:

    1. Mandates mobile OS system-level control – if the script cannot initiate events outside of the AUT (app under test), it cannot really test the app the way the end user will use it
    2. Incoming alerts and error handling also require system-level control outside of the AUT
    3. Hybrid object support – the automation framework should support both OS-level objects (the DOM for web and hybrid apps) and visual objects for full use-case support (see the sketch after this list)
    4. Device logs, rich media reports (video and visual), and vitals (memory status, battery, etc.) are critical evidence for Dev/DevOps teams to act upon as feedback
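
Here is a minimal sketch of item 3 above, using Appium’s context API to move between native objects and the web view’s DOM in a hybrid app (the element ids and webview name are illustrative):

```java
import io.appium.java_client.android.AndroidDriver;
import org.openqa.selenium.By;

public class HybridObjectsFlow {

    // Assumes an already-started AndroidDriver session against a hybrid app.
    public static void checkoutFlow(AndroidDriver driver) {
        // Native side: tap a native control by its resource id.
        driver.findElement(By.id("com.example.app:id/checkout")).click();

        // Hybrid side: switch into the web view and drive its DOM.
        for (String context : driver.getContextHandles()) {
            if (context.startsWith("WEBVIEW")) {
                driver.context(context);
                break;
            }
        }
        driver.findElement(By.cssSelector("input#card-number"))
              .sendKeys("4111111111111111");

        // Back to the native context for OS-level objects and dialogs.
        driver.context("NATIVE_APP");
    }
}
```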

Device Elasticity

As your mobile app project grows/shrinks based on your SDLC phase, your automation solution needs to be flexible enough to accommodate necessary changes and provide the needed support to your team (even more critical in cases of distributed teams).

Integration is Key for Cross Team Collaboration

Whether your Agile/Dev/QA teams are using ALM tools (HP, Microsoft, etc.) or OSS tools (Selenium, Robotium, JUnit), your automation framework should be able to connect to these tools and bridge any gaps in the daily continuous workflow. Use what your team knows – this is the best way to release mobile apps faster without compromising quality.

Real Devices

It is imperative to understand that running your critical automation scripts on anything other than real devices, as your mobile app evolves, is a huge risk. Only real devices running official OEM OS flavors against real networks reveal an app’s true functionality the way your end user will experience it.

To conclude, see below (Fig. 1) a checklist of the mobile testing pillars that need to be part of your mobile app testing plan.

Fig 1: The pillars of mobile app testing

Good Luck!

Leveraging Real Network Conditions As Part Of The Continuous Quality Processes

As the digital experience becomes key for organizations, offering end users a high-quality experience built on mobile moments requires understanding the environment in which mobile apps “live.”

We have taken to calling a mobile app’s environment a wind tunnel: many different conditions and “states” occur, causing mobile apps (web and native) running on various screens (smartphones, tablets, wearables) to behave differently from both a functionality and a performance point of view.

To extend a continuous quality process and enrich its coverage, mobile app quality efforts ought to include real network conditions in the test matrix as early as possible in the mobile app development lifecycle.

To achieve coverage of real network conditions, developers and network engineers should add the following scenarios to their test plan:

  • Network virtualization – what happens to your mobile app under various network conditions? Use profiles that can simulate poor, average, and good 2G/3G/4G/WiFi connections in various geographic locations (see Fig. 1 below). Understanding the behavior of your mobile app under degraded network conditions early on is key to driving release velocity as well as increasing app quality. (A minimal emulator-based sketch follows Figure 2 below.)

Figure 1: Simulated network condition profiles

  • Degraded application performance due to other processes, such as apps running in the background, can be detected through device vitals collection, log analysis, and other examinations (e.g. many apps today leverage third-party apps, such as social media apps, which can impact the performance of the app under test)
  • Degraded server performance due to load (light load, heavy load) – how does your mobile app perform, in terms of response time and availability, at peak times and under various network conditions?
  • Interoperability and the impact of events on mobile app functionality – how will your mobile app behave and respond when an incoming call, a pop-up, or another event arrives while it is running?
    • To measure this impact, mobile app vendors need full access to the mobile system (e.g. the iPhone SpringBoard) so they can trigger, control, and debug these events properly.


Figure 2: Various capabilities of the Perfecto Mobile Wind Tunnel
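
For teams without a lab-grade network virtualization capability like the one shown above, a rough approximation is possible on the Android emulator through its console’s network commands, driven here via adb emu (a sketch; the profile names follow the emulator console’s own vocabulary):

```java
public class NetworkProfileRun {

    // Sends an emulator console command through `adb emu`.
    static void emu(String... args) throws Exception {
        String[] cmd = new String[args.length + 2];
        cmd[0] = "adb";
        cmd[1] = "emu";
        System.arraycopy(args, 0, cmd, 2, args.length);
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        emu("network", "speed", "edge");  // throttle throughput to EDGE rates
        emu("network", "delay", "gprs");  // add GPRS-class latency

        // ... run the functional and response-time checks under this profile ...

        emu("network", "speed", "full");  // restore defaults afterwards
        emu("network", "delay", "none");
    }
}
```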

Some examples of additional mobile app testing interruptions (see the sketch after this list):

  • Incoming Call
  • Text message
  • Other app notifications
  • Storage/RAM low
  • Battery low
  • Battery dead
  • No storage
  • Airplane mode
  • Intermittent connectivity
  • Home screen jump
  • Sleep mode
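
Several of these interruptions can be injected on an Android emulator through its console commands (sent with adb emu); the sketch below triggers an incoming call and a text message (the phone number and message text are illustrative). On real devices, such events require system-level lab control instead.

```java
public class InterruptionRun {

    static void run(String... cmd) throws Exception {
        new ProcessBuilder(cmd).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // Incoming call while the app under test is in the foreground.
        run("adb", "emu", "gsm", "call", "5551234567");

        // ... assert the app pauses and resumes gracefully ...

        // End the call, then deliver a text-message notification.
        run("adb", "emu", "gsm", "cancel", "5551234567");
        run("adb", "emu", "sms", "send", "5551234567", "interruption test");
    }
}
```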


To read more about the continuous quality approach and the continuous quality lab that enables such capabilities, visit the Perfecto Mobile website: http://www.perfectomobile.com