
By Hitesh Umaletiya · April 2, 2026 · 6 min read
Last year, a fintech client shipped a build to the App Store with a payment flow that silently failed on iPhone SE (3rd gen). No crash, no error message — just a spinning loader that never resolved. It took 48 hours and 1,200 one-star reviews before anyone caught it. The fix was a three-line patch. The damage cost them an estimated $40K in refunds and months of trust rebuilding.
That bug would have taken ten minutes to catch with the right test matrix. This guide covers the process we use at Brilworks to make sure that doesn't happen — on React Native and FlutterFlow projects across iOS and Android.
Mobile testing breaks into a few distinct categories, and most teams only do the first one well.
- **Functional testing** checks whether features work as specified. Tap a button, get the right result. This is table stakes.
- **Performance testing** measures load times, memory usage, and battery drain under real conditions, not just on a developer's brand-new Pixel 9.
- **Usability testing** puts the app in front of actual humans who haven't memorized the navigation.
- **Security testing** looks for data leaks, insecure API calls, and local storage vulnerabilities.
- **Compatibility testing** runs the app across different devices, OS versions, and screen sizes.
Most teams nail functional testing and skip the rest. That's where the bugs that make it to production come from.
Manual testing catches things scripts can't: a confusing screen transition, a button that's technically tappable but feels too small, an animation that stutters on scroll. Automated testing catches things humans won't: the 200th regression case on a Tuesday afternoon when the tester's eyes have glazed over.
We run both, deliberately. Automated suites handle regression and smoke tests. Manual testers handle exploratory sessions and new feature validation. Splitting this early saves you from slow releases and from bugs that only a real finger on a real screen would catch.
Before writing a single test case, define what "done" looks like. QA scope creep is as damaging as dev scope creep, and without clear boundaries, teams either over-test low-priority screens or miss the flows that generate revenue.
List every path a user takes to complete a core action: sign up, make a purchase, submit a form, connect a payment method. Rank them P1 (blocks release if broken), P2 (degrades experience), or P3 (cosmetic, fix next sprint).
On a recent e-commerce project, our P1 list had exactly seven flows: sign-up, login, product search, add-to-cart, checkout, payment confirmation, and order tracking. Everything else was P2 or P3. That focus meant we could run a full P1 regression in under 90 minutes instead of the four-hour marathon the previous team had been doing.
| Priority | Example Flow | Why It Matters |
|----------|-------------|----------------|
| P1 | User registration and login | App is unusable without it |
| P1 | Core feature (e.g., checkout) | Direct revenue impact |
| P2 | Push notification opt-in | Reduced engagement |
| P3 | Profile settings update | Minor user friction |
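One way to keep this list actionable is to encode it as data your automated suite can filter on later. The sketch below is a hypothetical shape, with flows drawn from the table above; the names and structure are illustrative, not a prescribed format.

```typescript
// Hypothetical prioritized flow list; entries mirror the table above.
type Priority = 'P1' | 'P2' | 'P3';

interface Flow {
  name: string;
  priority: Priority;
}

const FLOWS: Flow[] = [
  { name: 'user registration and login', priority: 'P1' },
  { name: 'checkout', priority: 'P1' },
  { name: 'push notification opt-in', priority: 'P2' },
  { name: 'profile settings update', priority: 'P3' },
];

// The release-blocking regression run only needs the P1 subset.
export const P1_REGRESSION = FLOWS.filter((flow) => flow.priority === 'P1');
```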
Write these down before testing starts, not after. For example: no open P1 defects, every P1 flow passing on the target device matrix, and a crash-free rate on the release candidate above your alert threshold.
If your release criteria aren't written down, they don't exist. Someone will rationalize shipping with a known P1 at 11 PM on a Friday.
You can't test every device. You can test the right ones.
![Build a targeted device matrix](https://d14lhgoyljo1xt.cloudfront.net/assets/build-a-targeted-device-matriz-69ce4281923fb-1775125160129.jpg)
Pull your analytics data or your target market research. For most apps, 80% of users are on 6–10 device/OS combos. We typically test on eight configurations spanning flagship, mid-range, and budget hardware on both iOS and Android. That covers the range without burning a week.
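If you want your tooling to iterate over that matrix, one option is to encode it as data. This is a hypothetical sketch; the models listed are only illustrative, so substitute the devices your analytics actually show.

```typescript
// Illustrative device matrix; replace the entries with your real top devices.
type Tier = 'flagship' | 'mid-range' | 'budget';

interface DeviceConfig {
  platform: 'ios' | 'android';
  model: string;
  osVersion: string;
  tier: Tier;
}

export const DEVICE_MATRIX: DeviceConfig[] = [
  { platform: 'ios', model: 'iPhone 15 Pro', osVersion: '17', tier: 'flagship' },
  { platform: 'ios', model: 'iPhone SE (3rd gen)', osVersion: '16', tier: 'budget' },
  { platform: 'android', model: 'Pixel 9', osVersion: '15', tier: 'flagship' },
  { platform: 'android', model: 'Samsung Galaxy A54', osVersion: '14', tier: 'mid-range' },
  // ...extend to the 6-10 combos that cover ~80% of your users
];
```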
Wi-Fi in your office is not representative. Your users are on spotty LTE in a subway, on hotel Wi-Fi with a captive portal, or on 3G in a rural area.
Use network throttling tools (Charles Proxy, Android's built-in network emulator) to simulate slow 3G, high latency, packet loss, captive portals, and mid-session switches between Wi-Fi and cellular.
We once found a bug where an app silently dropped a user's form submission on a network switch from Wi-Fi to cellular. The data just vanished. No error, no retry. That one only showed up on a throttled connection test.
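If you want to reproduce that Wi-Fi-to-cellular handoff on demand, a small adb helper can force the switch during a test session. This is a rough sketch, assuming a Node environment and an Android emulator or device reachable over adb; the `svc` commands are standard Android shell commands, but behavior varies by device and OS version, so treat it as a starting point.

```typescript
// Force a Wi-Fi -> cellular handoff on a connected Android device or emulator.
// Some physical devices restrict these commands; emulators generally allow them.
import { execSync } from 'node:child_process';

const adb = (cmd: string): string =>
  execSync(`adb shell ${cmd}`, { encoding: 'utf8' });

export function simulateWifiToCellularHandoff(): void {
  adb('svc data enable');  // make sure cellular data is available first
  adb('svc wifi disable'); // drop Wi-Fi mid-session to trigger the switch
}

export function restoreWifi(): void {
  adb('svc wifi enable');
}
```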
Simulators are fine for early development. They're not fine for QA sign-off. Real devices catch things emulators miss: touch response delays, camera permission dialogs that render differently, hardware-specific rendering bugs, and battery-related throttling that kills background sync.
Each session follows a written test case. The tester knows the steps, the expected result, and how to flag a deviation. Free exploration has its place — in dedicated exploratory sessions with a time box and a theme, not during core regression.
A test case looks like this (an illustrative example; adapt the fields to your own template):
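- **ID:** TC-CHECKOUT-01
- **Flow:** Checkout (P1)
- **Preconditions:** Logged-in user with at least one item in the cart
- **Steps:** Open the cart → tap Checkout → apply a promo code → confirm payment
- **Expected result:** Order confirmation screen shows the correct total
- **Actual result:** Recorded by the tester; any deviation is flagged as a defect with device and build details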
A bug report that says "checkout broken on Android" is useless. A bug report that says "Checkout fails on Samsung Galaxy A54 running Android 14, build 2.3.1-rc2, after adding 3+ items with a promo code applied — payment button becomes unresponsive after 2 taps" gets fixed in a day.
Every defect log needs: device, OS version, app build, steps to reproduce, expected vs. actual result, and a screen recording if possible.
Once your core flows are stable and your test cases are documented, automate the repetitive ones.
Start with your P1 flows: the smoke tests that run on every build and the regression cases you repeat every sprint against stable UI elements.
Skip automating anything that changes frequently (UI-heavy features in active redesign) or requires subjective judgment (animations, "does this feel right").
| Automation Priority | Criteria |
|--------------------|----------|
| High | P1 flow, runs every sprint, stable UI elements |
| Medium | P2 flow, runs weekly, moderate UI stability |
| Low | Edge case, rarely executed, frequently changing UI |
Tools we use: Detox for React Native end-to-end tests, Maestro for cross-platform flow testing, Appium when we need broader device coverage through BrowserStack's device farm.
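As a concrete example, a minimal Detox smoke test for the login flow might look like the sketch below. The testIDs (`email-input`, `password-input`, `login-button`, `home-screen`) are placeholders you'd match to your own components; this is a sketch, not a drop-in suite.

```typescript
// Hypothetical Detox smoke test for the P1 login flow (Jest runner).
import { device, element, by, expect } from 'detox';

describe('P1 smoke: login', () => {
  beforeAll(async () => {
    // Start from a clean app instance so earlier tests can't leak state.
    await device.launchApp({ newInstance: true });
  });

  it('logs in with valid credentials and lands on the home screen', async () => {
    await element(by.id('email-input')).typeText('qa+smoke@example.com');
    await element(by.id('password-input')).typeText('not-a-real-password');
    await element(by.id('login-button')).tap();

    // Detox waits for the app to settle before asserting visibility.
    await expect(element(by.id('home-screen'))).toBeVisible();
  });
});
```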
Tests that don't run automatically don't run consistently. Wire your automated suite into your CI system (GitHub Actions, Bitrise, CircleCI) so every pull request triggers the smoke suite, and the full regression runs on a schedule.
Our default setup runs smoke tests on every PR (takes ~8 minutes) and full regression nightly (takes ~35 minutes across 6 devices via BrowserStack). A developer whose PR breaks a smoke test gets a Slack notification within 10 minutes.
Shipping isn't the finish line. The first 72 hours after a release are when real users hit edge cases your test matrix missed.
Monitor crash rates through Firebase Crashlytics or Sentry. Set alerts for crash-free rate dropping below 99.5%. Watch app store reviews daily for the first week — users report bugs there before they email support.
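If you're on Sentry, the wiring is a few lines in the app entry point. This is a minimal sketch assuming `@sentry/react-native`; the DSN and release names are placeholders, and the crash-free-rate alert itself is configured in the Sentry (or Crashlytics) dashboard rather than in code.

```typescript
// Minimal crash reporting setup for a React Native app (placeholder values).
import * as Sentry from '@sentry/react-native';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  release: 'my-app@2.3.1',  // map crashes back to the exact build under test
  dist: 'rc2',
  tracesSampleRate: 0.2,    // sample performance traces; tune for your traffic
});
```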
Keep running your automated regression suite on every release branch, not just during active sprints. When you catch a regression before it ships, you skip the emergency hotfix cycle entirely.
If your team doesn't have the QA bandwidth to maintain this after launch, that's worth fixing. Talk to the Brilworks team — we embed QA engineers who own quality from sprint one through post-launch monitoring.
How long does a full test cycle take? It depends on scope. A focused P1 regression on 8 devices takes 1–2 days. A full QA cycle including performance, security, and compatibility for a new release takes 1–2 weeks. Automation cuts regression time by 60–70% after the initial setup investment.
Can you rely on manual testing alone? Yes, but it doesn't scale. Manual-only testing works for early-stage apps with small feature sets. Once you're past 10–15 core flows and shipping biweekly, you need automation for regression or your release cycle will slow to a crawl.
How many devices are enough? Test on at least 6–8 devices covering both iOS and Android, spanning flagship, mid-range, and budget hardware. Pull your analytics to identify the actual devices your users have — don't guess.
Emulators or real devices? Both. Emulators for fast iteration during development, real devices for final QA sign-off. Budget devices in particular expose performance issues that emulators hide.