Today, mobile applications have become an integral part of our daily lives and have changed the way we do our day-to-day work. Whether one wants to shop for clothes, book a cab, order food or read the news, there are mobile applications or responsive websites available for everything. Mobile applications are distributed through app stores: Google Play for Android users and Apple's App Store for iOS users. Mobile web applications, meanwhile, can be opened on smartphones using browsers like Chrome and Safari.
Recently, Statista published that "52.2 percent of all website traffic worldwide was generated through mobile phones and mobile currently accounts for half of all global web pages served". In another report, Statista mentioned that, at the end of the first quarter of 2018, "Android users were able to choose between 3.8 million applications and Apple's store remained the second largest app store with 2 million available applications". Statista also stated that "21 percent of applications downloaded by mobile app users worldwide were only accessed once during the first six months of ownership".
Customers stop using a mobile app or mobile website for reasons such as instability, poor user experience or basic functionality failures.
With umpteen mobile applications readily available to end users, it has become imperative for businesses to deliver high-quality mobile applications in order to survive in the market. This is where mobile application testing comes in. With companies aggressively releasing new features and updates to meet end users' expectations and stay ahead of the competition, Mobile Test Automation (MTA) becomes the need of the hour. This whitepaper illustrates how to achieve the very motive behind MTA, effectively.
Businesses want to ensure a timely, quality release of their mobile applications every time. They also aim to test more and test faster within shorter time spans. However, testing every feature of a mobile app or mobile website within squeezed timelines, across all possible combinations of mobile devices and operating system versions, would require a large team of manual testers and a huge array of physical devices, which is practically impossible to achieve.
The only way to speed up the testing process is to run automated tests. Automated mobile test execution in areas such as smoke testing and regression testing frees the manual test team to concentrate on new-feature testing. MTA also ensures maximum test coverage across numerous combinations of devices and OS versions, thereby providing quality control over the released product. It is also essential to have measurables in place for test automation projects; they help determine the success of MTA by showing the project's contribution to the overall quality of the product.
It is challenging but important to set realistic objectives for MTA to avoid getting burnt. Firstly, it is impossible to achieve 100% automation. Secondly, MTA cannot reduce the time needed to execute a particular test case: automation is no magic, and it performs the same activities a manual tester does, only in an automated fashion. It shortens the overall testing timeline by running tests in parallel across a vast number of devices and platforms. Last but not least, MTA does not deliver an immediate return on investment (ROI); the payback usually takes time and depends on multiple factors.
Businesses want to leverage the benefits of automation to expedite their mobile application testing but often fail to consider the best practices. Neglecting best practices and making common mistakes can eventually result in the failure of MTA. The earlier these are thought through and strategized, the better. This includes understanding the dos and don'ts, which is just the first step in a thousand-mile march.
| Do | Don't |
| --- | --- |
| Ensure the quality and completeness of manual test cases before initiating MTA | Fail to identify and address associated risks |
| Have a robust and scalable automation framework in place | Disregard test data requirements |
| Use appropriate tools | Neglect to strategize MTA execution over manifold hardware configurations, operating systems and their versions for mobile devices |
| Have an automated script review process in place | Forget the need for enhancement and maintenance of automated suites for frequent feature updates/changes |
| Consider testing on real mobile devices | Limit the automation framework from being DevOps ready |
How to measure the effectiveness of Mobile Test Automation (MTA)?
As mentioned before, it is essential to have measurables in place for test automation projects. Measurables help determine the success of MTA by showing the project's contribution to the overall quality of the product, and they equip managers to optimise MTA to the requirements of the organisation. There are numerous factors driving the effectiveness of MTA; a few of them are listed below:
To measure test automation cost-effectiveness, we need to know the cost of the automation effort, which includes the overall cost of resources and the total time taken to automate the tests. Cost-effectiveness is one of the major factors in deciding whether automation is required at all. Cost-effectiveness through MTA is not immediate; it is spread over a period of time and depends highly on the number of releases and testing cycles in which MTA is used. The more often the automated suites are run, the earlier the investment starts paying back.
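The break-even point described above can be sketched with a simple calculation. The figures below are purely hypothetical assumptions chosen for illustration, not benchmarks:

```python
import math

def breakeven_runs(automation_cost, manual_cost_per_run, automated_cost_per_run):
    """Number of test cycles after which automation becomes cheaper
    than repeated manual execution (all costs in the same unit,
    e.g. person-hours)."""
    saving_per_run = manual_cost_per_run - automated_cost_per_run
    if saving_per_run <= 0:
        return None  # automation never pays back under these costs
    # round up to the next whole cycle
    return math.ceil(automation_cost / saving_per_run)

# Hypothetical numbers: 400 hours to build the automated suite,
# 40 hours per manual regression cycle, 5 hours to run and
# analyse an automated cycle.
print(breakeven_runs(400, 40, 5))  # 400 / 35 -> 12 cycles
```

The takeaway matches the text: ROI depends directly on how many cycles the suite is actually run, so a suite used in only one or two releases rarely pays for itself.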
Automated scripts should give accurate results every time. The reliability of MTA can be measured through the percentage of tests that failed due to script errors, the number of additional iterations required due to script issues and the number of false negatives.
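These reliability metrics can be computed directly from per-test run results. The record layout below (`status` and `cause` fields) is an illustrative assumption, not a standard reporting format:

```python
def reliability_metrics(results):
    """Compute the reliability measures named above from a list of
    per-test outcome records: dicts with 'status' ('pass'/'fail')
    and 'cause' ('script_error', 'app_defect', 'false_negative', None)."""
    total = len(results)
    script_errors = sum(1 for r in results
                        if r["status"] == "fail" and r["cause"] == "script_error")
    false_negatives = sum(1 for r in results
                          if r["status"] == "fail" and r["cause"] == "false_negative")
    return {
        "pct_failed_by_script": round(100.0 * script_errors / total, 1),
        "false_negatives": false_negatives,
    }

# A small hypothetical run of four tests:
run = [
    {"status": "pass", "cause": None},
    {"status": "fail", "cause": "script_error"},
    {"status": "fail", "cause": "app_defect"},
    {"status": "fail", "cause": "false_negative"},
]
print(reliability_metrics(run))  # {'pct_failed_by_script': 25.0, 'false_negatives': 1}
```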
Automation results should make sense to manual testers. They should include the reason why a test failed or passed, along with runtime snapshots when a test fails. A typical technical error statement for a failing validation can be disconcerting for a manual tester, who might not be able to identify the actual cause of failure and will mark it as "failed for unknown reason". The usability of MTA can be measured by surveying manual testers on the time taken for result analysis, the time to find the root cause of a failure, the number of false negatives/positives and the number of test cases failing for unknown reasons.
Usability from the automation tester's point of view can be measured as the time taken for a new automation engineer with a similar skill set to understand the framework and become productive.
The framework should be hybrid, a mix of modular, data-driven and keyword-driven approaches, with a library architecture in which common functions (reusable code) are stored in a library. This makes the automation framework highly maintainable: whenever a functionality changes, only the affected areas need fixing, leaving the other parts untouched. Such frameworks can easily be scaled up by adding common functions to the library or adding keywords to the main test scripts as and when required.
This can be measured through a survey of the test automation engineers.
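The hybrid structure described above can be sketched in miniature: reusable actions live in one library, and tests are just keyword/data tables that drive them. The keywords, actions and app identifiers below are hypothetical examples:

```python
# Library architecture: common reusable actions registered in one place.
LIBRARY = {}

def keyword(name):
    """Decorator that registers an action under a keyword name."""
    def register(fn):
        LIBRARY[name] = fn
        return fn
    return register

@keyword("launch_app")
def launch_app(app_id):
    # In a real framework this would call the mobile driver.
    return f"launched {app_id}"

@keyword("tap")
def tap(element):
    return f"tapped {element}"

def run_test(steps):
    """Each step is (keyword, args), typically pulled from an external
    data source, so tests change without touching the library code."""
    return [LIBRARY[kw](*args) for kw, args in steps]

# Test definition kept outside the library: only this table changes
# when the scenario changes.
smoke_test = [("launch_app", ("com.example.shop",)),
              ("tap", ("login_button",))]
print(run_test(smoke_test))
```

Because the fix for a changed screen lives in one library function, every keyword table that uses it is repaired at once, which is exactly the maintainability property the measurable targets.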
Testers should be able to execute the scripts in different test environments with minimal changes. This can be measured by calculating the effort required to make the automated suite run in a new test environment or on a new hardware platform.
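One common way to achieve this portability is to isolate environment-specific settings in a single configuration mapping, so switching environments is a one-line change rather than a script edit. The environment names and URLs below are illustrative assumptions:

```python
# All environment-specific values in one place; scripts read from here
# instead of hard-coding URLs or timeouts.
ENVIRONMENTS = {
    "qa":   {"base_url": "https://qa.example.com",  "timeout_s": 30},
    "prod": {"base_url": "https://www.example.com", "timeout_s": 10},
}

def get_config(env):
    """Return the settings for the chosen environment."""
    return ENVIRONMENTS[env]

# The same suite targets another environment by changing only this value:
config = get_config("qa")
print(config["base_url"])  # https://qa.example.com
```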
Enablers of successful Mobile Test Automation (MTA)
The success of Mobile Test Automation (MTA) resides in the way it is implemented and driven. At the inception, it is important to identify and understand the Mobile Test Automation Lifecycle (MTALC). The components of MTALC, which are the pillars of implementing MTA successfully, are as under:
The selection of mobile test automation tools depends highly on the technology the mobile application is built on, and it is advisable to perform a tool feasibility study before finalising the automation tool. The basic features to look for in an MTA tool are record and replay, support for integration with automated execution triggers and bug-tracking tools (like JIRA, Mantis etc.), and the capability to execute tests in parallel.

While selecting an automation framework, the aforementioned measurable "Scalability and Maintainability" should be a key factor. With regard to the future and the latest trend in testing, DevOps, the framework should be DevOps ready from day one. The MTA tool and framework should be able to cater for test execution across multiple mobile devices with different screen sizes, hardware configurations, and operating systems and versions. The tool should support automation of mobile websites and hybrid/native mobile applications for both the Android and iOS platforms. It is good to choose a framework that can be scaled up to support test automation of APIs, websites and desktop applications if required.
Planning is the very foundation of any project. For MTA, the plan should include the effort required for the project, the risks foreseen and the strategy to execute it over the device matrix. A few prerequisites, listed below, need to be addressed before initiating MTA.
Automation engineers should follow industry standards while automating tests: following a modular approach, creating reusable components, keeping test data and the object repository out of the actual test cases, using consistent variable naming conventions, incorporating comments about the functionality a script addresses, and ensuring that validations are reported properly. This makes the scripts easier to maintain and enhance in the longer run. Further, the automated scripts should be mapped to manual test cases to ensure visibility and traceability.
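The practice of keeping the object repository and test data outside the test itself can be sketched as below. The locators, identifiers and credentials are purely hypothetical, and in a real project each mapping would live in its own file or data sheet:

```python
# Object repository kept separate from tests: a UI change means
# editing one entry here, not every script that touches the screen.
LOCATORS = {
    "username_field": ("id", "com.example:id/username"),
    "password_field": ("id", "com.example:id/password"),
    "login_button":   ("accessibility_id", "Log in"),
}

# Test data also kept separate (e.g. a CSV or sheet in practice).
TEST_DATA = {"username": "demo_user", "password": "demo_pass"}

def login_steps(locators, data):
    """Reusable component: the ordered actions for a login flow,
    with nothing hard-coded in the test body."""
    return [
        ("type", locators["username_field"], data["username"]),
        ("type", locators["password_field"], data["password"]),
        ("tap",  locators["login_button"], None),
    ]

steps = login_steps(LOCATORS, TEST_DATA)
print(len(steps))  # 3 actions, all driven by external locators and data
```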
For obvious reasons, automated tests cannot be executed on every device available in the market, but neither can they be limited to just a few mobile devices. Hence, it is practical to prepare a device matrix addressing the devices, OS versions and hardware configurations we aim to test. Depending on the matrix, the organisation can then decide whether to maintain a physical device lab for its testers, which has its own pros and cons, or to leverage cloud mobile device labs. Cloud mobile device labs enable users to perform tests from any location on real devices hosted in the lab. They offer a tremendous number of devices to choose from and provide APIs that can be integrated into automation frameworks, enabling automation scripts to be executed on cloud mobile devices directly.
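A device matrix is easiest to act on when it is kept as data that the execution layer can filter and plan over. The devices and versions listed below are examples only, not a recommendation:

```python
# Device matrix as data: each row is one device/OS combination
# the suite should cover. Entries are illustrative.
DEVICE_MATRIX = [
    {"device": "Pixel 2",   "os": "Android", "version": "9.0"},
    {"device": "Galaxy S9", "os": "Android", "version": "8.0"},
    {"device": "iPhone X",  "os": "iOS",     "version": "12.1"},
    {"device": "iPhone 7",  "os": "iOS",     "version": "11.4"},
]

def plan(matrix, os_filter=None):
    """Select the device/OS combinations a suite run should target,
    optionally restricted to one platform."""
    return [d for d in matrix if os_filter is None or d["os"] == os_filter]

print(len(plan(DEVICE_MATRIX)))             # all 4 combinations
print(len(plan(DEVICE_MATRIX, "Android")))  # the 2 Android rows
```

In practice the same matrix rows can be translated into the capability sets a cloud device lab's API expects, so the plan and the execution stay in sync.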
There are multiple factors, mentioned before, that lead to the success of MTA. However, the key to sustaining that success is maintaining and enhancing the automated scripts as and when required. The effectiveness and benefits of MTA fade over time if the automated suite is not maintained.
It is a common perception that MTA should find more bugs or improve the quality of the mobile application. However, businesses should understand that automation is only a means of executing tests. In general, mobile test automation is aimed at regression testing, ensuring that older functionality still works alongside new features and updates. There are multiple factors, mentioned below, that really help ensure your mobile test automation is actually capable of finding bugs:
Businesses often wish to leverage the benefits of automation but lack a team with the hard-core technical skills required for test automation. The respite is that there are service providers, like CresTech, with whom businesses can partner. CresTech not only brings the technical skill set to the table but also years of experience in handling similar projects. CresTech understands the complete testing cycle a product goes through and customises its solution around the product's requirements.
CresTech is a market leader in providing software quality management solutions and services. CresTech's solutions and services have helped organisations meet their project timelines, budgets and quality goals. With a commitment to offering the best, and experience in delivering quality solutions and services across industries, the company has 250+ test specialists with global delivery centres across Noida, Bangalore and the USA. What makes us different are our core expertise, futuristic vision and core values.
CresTech Software Systems
Archana Mehta has been with CresTech since 2013. Her core competencies include test automation framework design and development. She has played different roles in the QA industry and is currently responsible for understanding and analysing customer requirements, designing solutions and consulting. For more information, contact firstname.lastname@example.org