How to Build App Quality into Your App Lifecycle
Whether you’re just starting out or you already have numerous apps in the market, it’s important to be aware of how much quality can differentiate your product. In the increasingly competitive app ecosystem, quality really is key: only 16% of people say they will try a failing app more than twice. Not to mention that there are over a million other apps in the various markets for your customers to choose from.
It’s a competitive space, but it’s certainly possible to give yourself an edge. There are three key areas where quality control can be engineered into your app: testing, crash reporting, and feedback management. In this article you’ll get a high-level overview of each category, some examples of tools and services available, and why that specific area is important in building a quality experience for your customers.
Testing and Test Automation
If you’re using Test-Driven Development, Behavior-Driven Development, or any number of other strategies, you’re likely familiar with unit testing, a way of engineering code with testing in mind. For Android, JUnit is the weapon of choice, and it pairs well with Robolectric, a free, open-source project for running Android unit tests in a headless environment. This is great because it’s very fast (way faster than the notoriously slow Android emulator). For iOS, OCUnit is a popular choice, and, because the simulator runs natively on the host machine rather than emulating device hardware, it’s fast enough out of the box to run without the aid of a Robolectric-like harness.
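To make the idea concrete, here is a minimal, plain-Java sketch of the kind of pure logic unit tests target. The `TipCalculator` class and its values are hypothetical; in a real Android project this check would live in a JUnit `@Test` method, run under Robolectric if the code touched Android APIs:

```java
// Hypothetical example: a small piece of pure business logic that is
// easy to unit test because it has no Android dependencies.
public class TipCalculator {

    // Returns the tip for a bill, rounded to whole cents.
    public static long tipCents(long billCents, double rate) {
        if (billCents < 0 || rate < 0) {
            throw new IllegalArgumentException("bill and rate must be non-negative");
        }
        return Math.round(billCents * rate);
    }

    public static void main(String[] args) {
        // In JUnit these would be assertEquals calls inside @Test methods;
        // plain assertions keep the sketch self-contained.
        assert tipCents(2000, 0.15) == 300 : "15% of $20.00 should be $3.00";
        assert tipCents(0, 0.20) == 0;
        System.out.println("all checks passed");
    }
}
```

Keeping business logic in plain classes like this, separate from Activity and view code, is what makes fast headless unit testing possible in the first place.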
Functional tests, tests that exercise basic functionality and flows within an app, are prime candidates for automation. Given the thousands of device models on the market, manually running such tests is a daunting task, especially because these types of tests are most valuable when run on real, physical hardware. Tools like Robotium, a library that integrates with JUnit, and Calabash, a cross-platform framework, make it easy to write automated, scalable test scripts that can be run on emulators, simulators, and real hardware.
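The sketch below scripts a login flow against a hypothetical in-memory app model to show the shape of such a test: drive the flow, then assert on the screen the user lands on. Tools like Robotium and Calabash do the same thing against the real UI on a device; the `FakeApp` class, its screen names, and its methods here are invented purely for illustration:

```java
import java.util.Objects;

// Hypothetical stand-in for an app under test; a real functional test
// would drive the actual UI via Robotium or Calabash instead.
public class LoginFlowTest {

    // Minimal fake "app": tracks the current screen as a login flow runs.
    static class FakeApp {
        String screen = "login";

        void enterCredentials(String user, String pass) {
            // Hypothetical check that accepts one hard-coded account.
            if (Objects.equals(user, "demo") && Objects.equals(pass, "secret")) {
                screen = "home";
            } else {
                screen = "error";
            }
        }
    }

    // The test script: exercise the flow, then assert the outcome.
    public static void main(String[] args) {
        FakeApp app = new FakeApp();
        app.enterCredentials("demo", "secret");
        assert app.screen.equals("home") : "login should land on the home screen";

        FakeApp app2 = new FakeApp();
        app2.enterCredentials("demo", "wrong");
        assert app2.screen.equals("error") : "bad credentials should show an error";

        System.out.println("flow checks passed");
    }
}
```

Because the script only describes user actions and expected outcomes, the same test logic can run unchanged on an emulator or on hundreds of physical devices.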
A popular strategy is to run tests locally on emulators and simulators, then run them on local devices with tools like Spoon, and then run those same tests on hundreds of devices hosted by services like AppThwack. Rather than spend time building automation harnesses, device labs, and other necessary components of an end-to-end test environment, services like AppThwack remove the burden by providing cloud-based device labs for rapid, parallel test automation execution.
Exploratory and UX testing is very important as well, and this is where humans excel. A good strategy is to automate your mundane tests so your time can be spent exploring new functionality, conducting labs where you can observe real people as they interact with your app, and so on. Having a human touch is paramount in measuring how an app actually feels from a customer perspective.
Keeping Track of Errors
Let’s face it: no matter how well you code and how much you test, we all write bad code, and some errors are bound to make it through. In fact, with the speed of today’s development only increasing and the growing adoption of agile practices, it’s more necessary than ever to engineer in anticipation of unexpected errors.
There are numerous tools for tracking crash reports in the wild for both Android and iOS. For Android, ACRA is a popular free option. There are commercial services available that support multiple platforms and add bells and whistles to crash reporting, such as pattern identification, tracking issues over time, and analyzing performance metrics, but at their core the biggest value of any crash reporting library is to provide insight into how your app is behaving once it’s in the market.
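Under the hood, crash reporters on the JVM work by installing a default uncaught-exception handler that records the failure before the process dies. The plain-Java sketch below shows that mechanism; the in-memory report list is a stand-in for a real pipeline like ACRA’s, which would serialize the stack trace along with device and app metadata and upload it:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Minimal sketch of how crash reporting libraries hook crashes on the
// JVM: install a default uncaught-exception handler that records the
// failure. The in-memory list is a stand-in for a real reporting backend.
public class CrashHandlerSketch {

    static final List<String> reports = new CopyOnWriteArrayList<>();

    public static void install() {
        Thread.setDefaultUncaughtExceptionHandler((thread, error) ->
            // A real library would capture the full stack trace, device
            // model, OS version, and app version, then upload it later.
            reports.add(thread.getName() + ": " + error));
    }

    public static void main(String[] args) throws InterruptedException {
        install();
        Thread crashy = new Thread(() -> {
            throw new IllegalStateException("simulated crash");
        }, "worker-1");
        crashy.start();
        crashy.join();  // the handler runs as the thread dies
        System.out.println("captured: " + reports);
    }
}
```

On Android the principle is the same, which is why adding a reporter is typically a one-time setup in your Application class rather than per-screen work.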
Once you know how and where your app is crashing, you can more quickly fix the reported issues, avoiding uninstalls and bad reviews. While it’s best to catch bugs before customers encounter them, at least you won’t be in the dark until someone decides to tell you about a problem. For every one complaint you receive, there are 26 others you never hear about, which underscores just how important this step is.
Managing Customer Feedback
What’s more accurate: your own impression of how your app performs, or your customers’? The answer is obvious, but time and time again we, as developers, assume we know best. Your customers’ feedback should guide your decisions, whether that means acknowledging their comments and consciously dismissing them, or restructuring your app so it better fits their needs.
The obvious question, then, is how to interact with your customers so you get valuable information without seeming needy, overbearing, or paranoid. It’s important to open a dialog with your customers and encourage them to communicate with you, rather than create a confrontational relationship that leaves you both unhappy.
Services such as Apptentive do the work for you, inserting feedback prompts into your app in a way that gathers pertinent information and, ideally, avoids the most damaging form of feedback: poor reviews on the market.
By keeping these three categories in mind, building app quality into the app lifecycle should be a clear strategy going forward. Coding with testing in mind (unit tests), automating the tests that make sense (functional and performance tests), tracking app performance (crash reporting), and managing feedback (customer dialogs) will all help differentiate your app in the market. With more than 1.5 million apps across the various platforms, that kind of positive differentiation can’t hurt.
Trent Peterson is a founder at AppThwack, a fully automated service that helps developers and QA teams test their Android, web, and iOS apps on hundreds of real devices in minutes, gathering high-level results, low-level logs, pixel-perfect screenshots, and performance trends along the way. You can keep up with Trent and AppThwack on Google+ and Twitter.