The Mobile Attack Surface

This article is part of the Security Tech Blog Series: Spring Cleaning for Security. My name is Azeem Ilyas and I’m part of the Product Security team here at Mercari.

I work closely with mobile teams here to ensure that our mobile clients are safe and secure for Mercari’s users.

As part of this blog post, I’ll be covering some common issues facing Android applications as well as how individuals/organizations can begin to understand and measure the security level of their applications. In a later post, I’ll cover the iOS mobile attack surface.

How secure is secure enough?

Not every application needs to be locked down like Fort Knox. Applications can have varying levels of security depending on their use case and it’s important to be able to recognize what level of security applies to your app.

OWASP provides a guide for independently reviewing mobile applications called the MASVS (Mobile Application Security Verification Standard).

MASVS defines two different levels of security for mobile applications, and an extra level for assuring resiliency.

The levels of MASVS, taken from OWASP MASVS

While the MASVS is great for larger applications, it might be overkill if all you're building is a simple alarm clock app. It becomes a great tool the moment your application starts handling sensitive information such as PII, or information that users may consider sensitive even if it doesn't strictly classify as PII.

Here are a few examples of how to apply the levels:

  • An application like Reddit might aim for L1, since it handles basic user information and user-generated content.
  • A multiplayer game like PUBG will aim for L1+R, since it also needs to prevent modding/tampering.
  • Finally, a mobile banking app will aim for L2+R, as banking apps often handle the most critical PII and the potential financial implications of security defects are huge.

Ultimately, it is up to the application owner to make a decision on what level of security they deem necessary, based on the amount and sensitivity of the data they’re handling.

Introducing the Battleground

Welcome to the battleground! Aka the devices of your end-users. The illustration below highlights the common issues found in Android applications that most attackers tend to exploit.

If you’d like to test for these issues manually, you can check out the OWASP Mobile Security Testing Guide. But, if you want a more efficient approach you can opt for automated tests.

To cover all of these issues would take too long, so in this article I’d like to cover some topics which we’ve found particularly interesting while securing the Mercari app:

  • Sensitive Data in Internal Storage (Files)
  • Insecure WebView

*These issues usually require elevated privileges. For access to internal storage, root is usually what's required, though there are exceptions where IPC mechanisms or WebView misconfigurations may expose internal storage to other applications.

**Requires a malicious application to specify the BIND_NOTIFICATION_LISTENER_SERVICE permission in the Android application manifest file.

Automated Security Testing

No matter how many testers you have, they can't test every change you make, especially in an environment built around modern agile approaches to application development. Imagine security blocking every little commit: it could grind the development process to a halt, and you'd likely have an angry mob of engineers at your door, just looking to get their jobs done.

On the other hand, conducting a full application test only once or twice a year could leave you with critical issues that may go unidentified and unaddressed for long periods of time.

For that reason, many companies, including Mercari, have turned to automating security testing throughout the CI/CD pipeline.

At Mercari, we’re currently using a paid solution called NowSecure. There are many other products out there that provide a similar service but most of them, if not all, are paid.

There are also open source alternatives which I’ll briefly mention here.


At Mercari, our engineers aim to release once a week. That's fast for a mobile client. If anything unexpected happens, a mobile application is not as easy to update as a web application. Web apps have the advantage that users always get the latest version; mobile applications have to be updated by the user or through the Play Store/App Store.

Therefore, it is imperative that critical issues are addressed before release.

Currently, we use NowSecure to run SAST (Static Analysis) and DAST (Dynamic Analysis) security testing against our application pre-release, quickly identifying any issues that might make their way to the release build. NowSecure finds a lot of the issues we mentioned in the battleground section by performing a series of static and dynamic tests.

Compared to other solutions, we found NowSecure’s integration to be very simple.

For example, when using CircleCI, it can be implemented as shown below:

      - build_apk:  # Your own APK/IPA build job defined in the config file
          flavor: prod
      - run:
          name: upload binary to nowsecure for analysis
          command: |
            # NOWSECURE_UPLOAD_URL stands in for your NowSecure upload endpoint
            curl -X POST --data-binary @/home/circleci/path/to/app.apk \
              -H "Authorization: Bearer ${TOKEN}" \
              "${NOWSECURE_UPLOAD_URL}"

Alternatively, you could also use their CircleCI orb if you are brave enough to pull another tool into your supply chain this way.

Mercari’s application is quite large (around 100MB), and the NowSecure assessment takes around 20 minutes. Considering that DAST scanners for web applications can take hours, that might sound fast for a complete app scan, but the time taken is exactly why we run this as a non-blocking CI/CD test. When developers want to move fast, even 20 minutes can be a huge blocker. For this reason, we’ve found it sufficient to run this particular task as a CircleCI cron job instead of as part of our main workflow.
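As a sketch of what that can look like, a CircleCI scheduled workflow triggers the scan on a cron schedule instead of on every push. The workflow and job names below are hypothetical, not our actual configuration:

```yaml
workflows:
  nightly-security-scan:
    triggers:
      - schedule:
          cron: "0 18 * * *"   # once a day, outside peak development hours (UTC)
          filters:
            branches:
              only:
                - main
    jobs:
      - build_apk:             # your own APK/IPA build job
          flavor: prod
      - upload_to_nowsecure:   # the curl upload step
          requires:
            - build_apk
```

Because the scan runs off the main workflow, a slow or flaky assessment never blocks a developer's merge, while findings still surface daily.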

And so far, we’ve been happy with NowSecure’s results. It helps us find issues early, often, and at scale, before they make their way into the release build.


Before we started using NowSecure, we actually used MobSF (Mobile Security Framework) for a while.

It’s open source and free to use. However, it’s clearly built for running ad hoc scans of Android and iOS applications. These kinds of ad hoc scans are less suitable for organizations, since the tool is designed to be operated manually and is thus not CI/CD friendly. Implementing MobSF into our CI took more work than it should have, and we also had to handle report formatting ourselves.

Here’s roughly what our MobSF CI integration looked like.

When we first started using MobSF, we actually combined it with OWASP Glue in order to alert only on new findings. In order to do so, we had to fork the repository and then make some large changes to the codebase.

Since then, MobSF has seen a number of changes, including a new command line tool and additional CI options, which may fare better than our setup did.

Overall, I think MobSF is great for one-off testing and will meet the needs of most use cases.

Now, back to the issues we mentioned in the previous section: Sensitive Data in Internal Storage (Files) and Insecure WebViews!

Sensitive Data in Internal Storage (files)

On Android, you’ll find an app’s internal storage at /data/data/<package-name>/. You’ll usually find a bunch of sensitive data stored here, such as access tokens.

MASVS, as well as other guides, usually determines that sensitive data stored in plaintext in internal storage is dangerous. They are right, but they don’t go into detail as to why.

In general, data stored in internal storage is relatively safe. Even privileged apps, which run as the system user, don’t have access to the internal storage of unprivileged applications.

A typical answer you’ll find online is: malware with root capabilities can access an application’s internal storage.

Against root-capable attackers though, encryption is more of a defense-in-depth measure. Why? Because a root-capable attacker can simply attach a debugger and copy the access token at runtime. Even if you added anti-debugging features, you’d still be putting the token in memory at some point, and nothing stops an attacker from taking a memory dump or reading the proc filesystem to analyze the application’s memory and extract the access token.

So why encrypt the sensitive data if the security benefit is minimal against root-capable attackers? There’s actually one more reason.

It’s not uncommon for IPC or platform mechanisms to accidentally expose internal files to other applications. In fact, sharing files is one of the features of a content provider, and internal files can also be exposed through WebViews, which I’ll talk about next.

In summary, while encrypting sensitive data provides a basic level of protection against root-capable actors, the prime reason you should be encrypting sensitive information is to prevent it from being leaked through IPC or platform mechanisms which don’t require privileged user levels to be abused.
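To make the encryption step concrete, here is a minimal JVM Kotlin sketch of encrypting a token with AES-GCM before it is written to internal storage. This is an illustration of the idea only: in a real Android app you would keep the key in the Android Keystore, or simply use Jetpack Security’s EncryptedSharedPreferences, rather than generating and holding a raw key in process like this.

```kotlin
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey
import javax.crypto.spec.GCMParameterSpec

// Sketch: encrypt a token with AES-GCM so the plaintext never lands on disk.
object TokenCrypto {
    private const val GCM_TAG_BITS = 128
    private const val IV_BYTES = 12

    fun generateKey(): SecretKey =
        KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    // Output layout: [12-byte random IV][ciphertext + 16-byte GCM tag]
    fun encrypt(key: SecretKey, plaintext: ByteArray): ByteArray {
        val iv = ByteArray(IV_BYTES).also { SecureRandom().nextBytes(it) }
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        return iv + cipher.doFinal(plaintext)
    }

    fun decrypt(key: SecretKey, blob: ByteArray): ByteArray {
        val iv = blob.copyOfRange(0, IV_BYTES)
        val cipher = Cipher.getInstance("AES/GCM/NoPadding")
        cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(GCM_TAG_BITS, iv))
        return cipher.doFinal(blob.copyOfRange(IV_BYTES, blob.size))
    }
}
```

Note that the IV is stored alongside the ciphertext: that’s safe, because GCM’s security depends on the key and on never reusing an IV with the same key, not on keeping the IV secret.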

Insecure Webviews

We already talked about how WebViews can so very easily lead to sensitive files being leaked, but that’s not the only issue plaguing WebViews.

Why do developers often use WebViews? Because it allows them to embed web content that can be very quickly updated without the need to push an update to the mobile app. But along with this convenience, WebView usage comes with a lot of security risks.

Exposing internal storage – WebViews have a setting called setAllowFileAccess which allows access to the local file system. Chain this file access setting with arbitrary URL loading in the WebView (with the file:// scheme allowed), and you have an application that grants any other application direct access to its internal storage.

webView = view.findViewById(R.id.webView)  // example view ID
webView.settings.apply {
    javaScriptEnabled = true
    allowFileAccess = true  // dangerous: exposes the local file system to the WebView
}

Allowing the loading of third-party websites is another issue. In Mercari, we had a deep link which looked like this: mercari://app/openUrl? It was supposed to be used to load campaign ads, our privacy policy, our ToS (Terms of Service), help center pages, and so on. Instead, a malicious third party could use it to load whatever website they chose. Consequently, this makes a great tool for phishing, but things can get worse when chained with other issues, as described next…
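One common fix is to validate any URL received through such a deep link against a strict allowlist before handing it to a WebView. A minimal sketch follows; the hosts and function name are hypothetical, not Mercari’s actual implementation:

```kotlin
import java.net.URI

// Hypothetical allowlist of first-party hosts the in-app WebView may load.
val allowedHosts = setOf("www.mercari.com", "help.mercari.com")

// Reject anything that isn't HTTPS to an exact, pre-approved host.
fun isAllowedUrl(raw: String): Boolean {
    val uri = try { URI(raw) } catch (e: Exception) { return false }
    return uri.scheme == "https" && uri.host != null && uri.host in allowedHosts
}
```

Matching on the exact host and requiring https avoids bypasses such as https://www.mercari.com.evil.example or file:// URLs pointing into internal storage.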

Attaching a native bridge is a process which allows the WebView and the native application to communicate through a JavaScript interface. It’s typically used to pass the user session from the native application to the web page, which then uses the session to render itself. Now, imagine chaining this with the previous issue of allowing third-party websites to load, and you’re essentially giving attackers a way to steal user sessions, simply by exposing the user to a malicious web page or an unprivileged malicious application.

Sometimes, the native bridge (JavaScript interface) even provides the functionality for web pages to directly control and act as the underlying native application.

The most straightforward way to solve these issues is to simply avoid the use of WebViews. Here are some other components you can use instead, and their caveats.

For loading sites in an in-app browser, it’s recommended that developers use Chrome Custom Tabs instead. They are faster, more customizable, and they share Chrome’s cookie jar, which means that if the user is already logged in on Chrome, they’ll also be logged in within the Custom Tab view. Finally, JavaScript executed in a Custom Tab runs in a separate process (Chrome’s), minimizing the potential damage to the native application.

For even stricter isolation, Android released Trusted Web Activities in 2020. They build on top of the Custom Tabs protocol but with greater restrictions, such as only loading web content from the same developer as the application (as verified by Digital Asset Links). Note that Trusted Web Activities are much more seamless than Custom Tabs and WebViews, since they appear full screen, much like other activities in Android. This may or may not be the user experience you’re looking for.

However, both of these options are still not a perfect solution for those who want their in-app browser to interact with their application.

At Mercari, we’ve actually minimized our use of WebViews where they aren’t needed and made the switch to Custom Tabs where possible.

In order to solve the native bridge issue, some of our engineers developed an internal library called Onix. Onix extends WebView functionality: it doesn’t require the application to expose a JavaScript interface, but instead uses the WebMessage API. On initializing the Onix client, a strict set of domains is whitelisted for communication over the WebMessage API, which lets us strictly control which domains can interact with the Mercari application. Maybe at some point it will be open sourced, but for now, it’s not too difficult to implement on your own.

In summary, try to stay away from WebViews and use Chrome custom tabs or Trusted Web Activities where possible. If you have to use WebViews, try to restrict their functionality as much as possible!


While I wish I could explain all the issues here, mobile security is too broad a topic to cover in one article. If you are interested in learning more, I recommend checking out the OWASP MSTG that I mentioned above if you get the chance. It’s a great starting point for testing. You can also employ the MASVS to define security requirements and review your application’s current security posture.

Finally, for full coverage, consider adding mobile SAST and DAST tools to your CI/CD pipeline. It’ll help you to catch issues earlier on in the development process and before they make it out to your release builds!

Thank you for reading if you got this far. The Product Security Team is looking for talented engineers to join our team! If you are interested in working with us, please see the job posting here.