Making Bubbles

Justin Richer
6 min read · Jun 24, 2024


About a year ago, I wrote about a concept I’d started to develop: a new way to look at how we handle account provisioning, and how we use federation technologies, especially in a world where the connection properties are always changing. I called this idea federation bubbles, and in the last year I’ve been privileged to talk to a lot of people about what it could mean, and I’ve even been able to prototype out some key pieces of the puzzle. I’ve gotten to present the idea at a few conferences, and I even recently did a whole podcast episode on the topic (with a video version!) for Identity At The Center.

Through all of that, several major concepts have risen to the surface, and I’ll be looking to tackle these in a few different posts — questions like “can’t we just copy the user store?” and “isn’t this just like an interfederation?” get brought up quickly each time. But I wanted to start with something very concrete before getting into the what and why: how would you make a bubble?

It’s Not New Technology

The central idea behind a bubble in this world is that it’s an internally-cohesive network of systems, with clear boundaries and processes to cross those boundaries. I think we can start building bubbles today out of tech that we’re already using for similar and related purposes.

[Image: Some of the technologies we can make bubbles out of]

An important corollary is that I deeply believe this concept does not call for a single technology stack. So many times through tech history, we’ve been told that if the whole world would just adopt this one particular way of doing things, then all the problems would be solved. This line is usually delivered by a person selling the new way of doing things, or at the very least the picks and shovels to make it happen.

For bubbles, though? I think we’ve got all the most important parts already. What’s fundamentally different is how we use everything, and the assumptions we make around the moving parts and how we stick them together.

Crossing The Borders

In order to create and update accounts in the bubble, we often want to pull that information from elsewhere. Whether it’s an authoritative source that created the bubble in the first place, or it’s a peer we’re bumping up against in the field, we want to be able to copy identifiers and other attributes into our local account.

For the more structured cases, SCIM gives us a really powerful system for propagating user objects across systems. X.509 certificates and Verifiable Credentials also give us a way to carry a stack of user information into our system, while also providing a convenient key-proofing mechanism with the delivery.
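As a concrete sketch of what that structured propagation looks like, here is a minimal SCIM 2.0 User resource (per RFC 7643) of the kind one system might push to another’s provisioning endpoint. The identifiers and attribute values are illustrative, and the `/scim/v2/Users` path is the conventional SCIM endpoint rather than anything bubble-specific:

```python
import json

# A minimal SCIM 2.0 User resource (RFC 7643). The externalId ties the
# local record back to the authoritative source that created it; all
# values here are illustrative placeholders.
scim_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "externalId": "hq-directory:jdoe",
    "userName": "jdoe",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
    "active": True,
}

# Serialized, this would be the body of a POST to /scim/v2/Users
# on the receiving system.
payload = json.dumps(scim_user, indent=2)
print(payload)
```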

But not everything is that structured, since we’ll also want to be talking to peers in the field about their accounts. We need to be able to do this without going all the way back up our peer’s hierarchy, so instead we can do just-in-time provisioning over a federation protocol as needed.
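The just-in-time path can be sketched simply: when a peer’s user shows up with a federation assertion (say, the claims from an OIDC ID token), we mint a local account on the spot if we don’t already have one. The claim names here are standard OIDC; the in-memory account store is a stand-in for whatever the bubble actually uses:

```python
# A sketch of just-in-time provisioning from federation claims.
# Accounts are keyed by (issuer, subject) so the same "sub" value
# from two different peers never collides.
accounts = {}

def jit_provision(id_token_claims):
    key = (id_token_claims["iss"], id_token_claims["sub"])
    if key not in accounts:
        accounts[key] = {
            "display_name": id_token_claims.get("name", "unknown"),
            "email": id_token_claims.get("email"),
            # Remember where this account came from.
            "source": id_token_claims["iss"],
        }
    return accounts[key]

acct = jit_provision(
    {"iss": "https://peer.example", "sub": "abc123", "name": "Jane Doe"}
)
print(acct["source"])
```

Note that a second login with the same issuer and subject finds the existing account rather than creating a duplicate, which is exactly the "create or update" behavior we want at the boundary.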

When that stack of user attributes gets to us, it becomes an input into an account management system — but just one input among potentially many. Instead of just overwriting or overriding a record we might already have, the incoming information feeds into the account in a way that makes sense for the local environment.

When we need to send updates about changes to others in the system, the Shared Signals and Events (SSE) framework gives us a really solid base to build on. But what SSE, CAEP, and RISC are missing is a semantic layer that can describe the kinds of dynamic accounts in distributed systems that we expect in a bubble environment.
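To make that concrete, here is a sketch of a Security Event Token payload (the SET format from RFC 8417 that SSE, CAEP, and RISC all build on) carrying an account-change signal between two bubbles. The `bubble-account-updated` event type URI is entirely hypothetical; it stands in for exactly the missing semantic layer described above, since real CAEP and RISC event types live under their own namespaces:

```python
import json
import time

# A sketch of a SET (RFC 8417) payload. The event type URI below is
# a hypothetical placeholder for bubble-specific semantics; the
# issuer/audience values are illustrative.
set_payload = {
    "iss": "https://bubble-a.example",
    "aud": "https://bubble-b.example",
    "iat": int(time.time()),
    "jti": "756E69717565",
    "events": {
        "https://bubble.example/events/bubble-account-updated": {
            "subject": {
                "format": "iss_sub",
                "iss": "https://bubble-a.example",
                "sub": "jdoe",
            },
            "changed_attributes": ["email"],
        }
    },
}
print(json.dumps(set_payload, indent=2))
```

In practice this payload would be signed as a JWT and delivered over one of the SSE transmission profiles, but the interesting part for bubbles is defining what goes inside `events`.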

It’s Familiar on the Inside

Within a bubble, everything is local. Because of that, we want to have a clear notion of a single account for each user. We can use federation technology like OpenID Connect (and the OAuth 2 it’s built on) to connect that one account to a variety of applications, devices, APIs, and whatever the bubbled system needs. This is a wholly separate federation protocol from the onboarding and outward facing processes we talked about above. We can also use SCIM to transmit internal user attributes and updates proactively, or we can just rely on the federation transactions to carry good-enough propagation of these attributes to our apps.

We aren’t going to be using external federation or similar technologies once the onboarding has taken place. For logging in to the IdP itself, we really should be using passkeys everywhere. Since we control the onboarding process, we get to control how we associate the accounts with the authenticators. Sometimes, this means we’ll hand someone their shiny new authenticator at the moment we make their account active. Sometimes, we’ll have them plug it in and bind it when we set things up. Sometimes, a record of their authenticator might come across the wire with them.

And if we’ve got applications, users, or circumstances that make some authenticators unworkable sometimes? Since the account is local, we have the ability to manage this in a way that makes sense in our environment. For example, a firefighter wearing heavy gloves is not going to be able to use a fingerprint reader in the field, but they could probably use one when back at HQ, not to mention all of the other users in the system that don’t have the same constraints. In other words, we can adapt as we need to because we are close to the environment that requires the adaptation.

Addressing The World

As we collect information about an account, we need to record not only what the information is, but also where we got it. Our view of that account is the amalgamation of all of our information sources, plus all of the local information about that account. In order for this view to make sense, we need to have a reasonable way to talk about where something came from.
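One way to sketch this is an attribute store where every value remembers which source asserted it, so the local view is a policy-driven amalgamation rather than a series of overwrites. All of the names and the "prefer local" policy here are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field

# A sketch of provenance-tagged attributes: each assertion keeps its
# value alongside the source that made it.
@dataclass
class Attribute:
    value: str
    source: str  # e.g. "local", a peer bubble, or an upstream authority

@dataclass
class Account:
    attributes: dict = field(default_factory=dict)

    def assert_attr(self, name, value, source):
        # Keep every assertion; local policy decides which one wins.
        self.attributes.setdefault(name, []).append(Attribute(value, source))

    def view(self, name, prefer="local"):
        # One possible policy: prefer locally-managed values, fall back
        # to the most recently asserted remote value.
        asserted = self.attributes.get(name, [])
        for attr in asserted:
            if attr.source == prefer:
                return attr.value
        return asserted[-1].value if asserted else None

acct = Account()
acct.assert_attr("email", "jdoe@peer.example", source="https://peer.example")
acct.assert_attr("email", "jdoe@local.example", source="local")
print(acct.view("email"))
```

Because nothing is thrown away, the bubble can later answer "who told us this?" for any attribute, which is exactly what makes the amalgamated view trustworthy.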

Traditional federation models like to use hostnames for this, but not everything in our environment is going to be addressable on a stable, publicly-accessible URL. We can’t rely on a common data fabric (e.g., assuming everyone uses the same blockchain), and we can also be pretty sure that keys will change over time for different parties and circumstances, so we can’t just use the keys directly when we need a record.

OpenID Connect Federation brings a solution that works well for the online, connected world, but would need to be adapted for a space where the federation domains and their availability are much more dynamic. The SPIFFE project also brings us the concept of trust bundles, which tie a set of keys to identifiers in a way that can be passed between different domains. While not an exact analogue to the more macro problem here, there are some key similarities to what we’re seeing in the workload space.
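The trust bundle idea can be sketched as a set of public keys bound to a trust-domain identifier, which one bubble can hand to another so that future assertions can be verified against the bundle rather than a hostname. This structure is loosely modeled on SPIFFE's JWKS-style bundles; the field names are illustrative and the key material is a truncated placeholder, not a real key:

```python
# A sketch of a SPIFFE-style trust bundle: public keys tied to a
# trust-domain identifier. The "x" and "y" coordinates are placeholder
# strings standing in for real EC key material.
trust_bundle = {
    "trust_domain": "spiffe://bubble-a.example",
    "keys": [
        {"kty": "EC", "crv": "P-256", "kid": "key-2024-06", "x": "...", "y": "..."},
    ],
}

def lookup_keys(bundles, trust_domain):
    # Resolve a claimed identity to the keys its domain has published.
    for bundle in bundles:
        if bundle["trust_domain"] == trust_domain:
            return bundle["keys"]
    return []

keys = lookup_keys([trust_bundle], "spiffe://bubble-a.example")
print(len(keys))
```

The point of keying on the trust domain rather than the key itself is the one made above: keys rotate, so the stable handle has to be the identifier, with the bundle as the changing map from identifier to current keys.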

Pulling it Together

The final solution isn’t going to be a single protocol, or even a single technology stack. Interoperability in this space is going to be defined by a complicated and contextual set of decisions. Two bubbles might not always be able to talk in a given dimension — one might speak OIDC outbound and another might only take in VCs — but it’s going to be important that they can still speak in other dimensions. In the end, it’s people who make the technologies work, and we need to embrace the dirt and uncertainty of the world if we want any hope of surviving in it.



Justin Richer is a security architect and freelance consultant living in the Boston area. To get in touch, contact his company: https://bspk.io/