XYZ: Compatibility With OAuth 2

Justin Richer
6 min read · Jun 16, 2020


This article is part of a series about XYZ and how it works, also including articles on Why?, Handles, Interaction, and Cryptographic Agility.

XYZ is a novel protocol, and one of its goals is to move beyond what OAuth 2 can easily enable. One of the first and most important decisions I made was not to be backwards compatible with OAuth 2.

This is a bold choice: OAuth 2 is absolutely everywhere, and deliberately breaking with it will put any new protocol at a distinct deployment disadvantage. However, I think that the advantages in flexibility, especially in the underlying models, are worth it. There are a ton of use cases that don’t fit well into the OAuth 2 world, and even though many of these square pegs have been crammed into the round hole, we can do better by taking a step back and designing something new.

Working With the Legacy

For people in these new use cases, this is a great opportunity. But what about everyone out there who already has OAuth 2 systems and wants to also support something new? XYZ doesn’t exist in a vacuum, and so as a solution it needs to take OAuth 2 developers into account in its design even while addressing the new world.

To do that, we’ve taken a page from some of the most successful backwards-compatible efforts of the past. Let’s look at Nintendo’s Wii console. When it was introduced, one of its most compelling features was a radically new control scheme, with wireless connections, IR-based pointers, switchable accessories, motion sensitivity, and even a built-in speaker. It was unlike anything else out there, and it was a particularly radical departure from Nintendo’s previous-generation console, the GameCube. Nintendo wanted to encourage existing GameCube owners to come along to the new system, not just by making the new features compelling in their own right (since without that, why move away from the GameCube?) but also by adding ports that allowed GameCube games and hardware to be used with the new system. If you had GameCube stuff, there was a place you could plug it in. It was on the side/top of the console and hidden under a flap, but it was there if you wanted to use it.

The ports are there, just hidden. Image from https://www.wikihow.com/Play-Gamecube-Games-on-Wii

Now if you wanted to play a Wii game, you needed to use the Wii controllers and hardware. And if you wanted to play a GameCube game, you could always just buy a GameCube. But what if you were interested in doing both, but didn’t want to have two systems sitting under your TV? This model of focusing on the new but allowing the old to be plugged in is a powerful one, and it was wildly successful.

Plugging OAuth 2 Into XYZ

XYZ’s core model is pretty different from OAuth 2. XYZ doesn’t have client IDs as part of the protocol, nor does it have scopes. Instead, clients are identified by their keys, and resources have a rich request language to describe them in multiple dimensions. You can even request multiple access tokens simultaneously. All of these are great for use cases like ephemeral and native clients, or rich APIs, but what about something where OAuth 2 already works ok?

Just like with the Wii, we’ve made sure that there’s a place to plug things in. In fact, this is one of the key features that polymorphic JSON brings to the party. Let’s say you’ve got an OAuth 2 client that has been assigned the client ID client1 and it asks for scopes foo and bar at its AS. This is pretty easy for a developer to plug in to the authorization endpoint (ignoring other optional parameters):

client_id=client1&scope=foo%20bar

Since we don’t have explicit client identifiers or scopes in XYZ, what are we supposed to do with these values? To understand that, we first need to take a step back and think about what these items each mean in the context of the overall protocol.

The client_id in OAuth 2 identifies the client, which should be no surprise. But why do we need to identify the client? For one, we want to make sure that when the client authenticates, it does so using the appropriate set of credentials. Since the browser can’t be trusted to handle shared secrets, we can’t pass those credentials through the front channel where OAuth 2 starts, so we need a separate identifier. If we start in the back channel, we don’t need an identifier separate from the credential.

More fundamentally, what does it mean to identify the client at all? The AS wants to know which client is asking so that it can apply the appropriate policies, including gathering consent from the user, and those policies drive the rest of the decision process. We can make that same set of decisions without a separate identifier by using the credential directly. XYZ allows us to send the credential by value, but it also allows us to send a reference to that credential instead. In this way, we can use our existing client_id to tell the AS which key we’re using to protect the request, by passing it in as the key handle in lieu of the key itself. The AS can associate the key, and therefore the policies, by looking up the handle.
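As a sketch of the two forms this gives us, here are the request fragments side by side. The top-level field names ("keys", "jwk") follow the XYZ drafts, but treat the exact shapes as illustrative assumptions rather than a spec:

```python
# A minimal sketch of the two ways an XYZ-style request can identify
# the client's key. Field names are assumptions based on XYZ drafts.

# Key by value: the client's full public key travels in the request.
request_by_value = {
    "keys": {
        "jwk": {
            "kty": "RSA",
            "e": "AQAB",
            "n": "kOB5rR4Jv0GM...",  # truncated example modulus
        }
    }
}

# Key by reference: the legacy OAuth 2 client_id stands in as a handle,
# and the AS looks up the registered key (and its policies) from it.
request_by_handle = {
    "keys": "client1"
}
```

Either form lands in the same "keys" field; the AS tells them apart by type, which is exactly the polymorphic JSON trick at work.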

The scope of OAuth 2 follows a similar journey. A resource in XYZ is described by a multi-dimensional and potentially complex object, with actions, locations, datatypes, and any number of API-specific components. But just like with client_id, we need to ask ourselves what the scope represents in OAuth 2. Functionally, it’s a shortcut for a particular slice of an API, defined by the API. In other words, it’s a predefined combination of actions, locations, datatypes, and other API-specific components: a shorthand for a concept analogous to one of the resource request objects in XYZ. Which means, of course, that we can use a resource handle to represent that predefined object.

What this means is that our client developer has a place in the protocol to plug in the values that they already have. They don’t plug into the same place, and they don’t go through quite the same circuitry, but they’re there, and they work.

{
  "keys": "client1",
  "resources": [
    "foo",
    "bar"
  ]
}

But here’s something really cool about this: once we’re plugging our values into the new protocol, we can start to use the new features alongside the old ones. Since we have the new system, we can play the new games as well as the old ones. That means that our client could, for example, ask for an additional detailed resource alongside its tried and true scopes.

{
  "keys": "client1",
  "resources": [
    "foo",
    "bar",
    {
      "actions": ["read", "write"],
      "datatypes": ["accounts", "history"]
    }
  ]
}

All of this is done without putting a burden on new apps to use any of these legacy mechanisms. Ephemeral apps don’t have to pretend to get a client ID, and resource servers don’t have to wrap their APIs into simple scope values. Those functions are there for the cases where they make sense and stay out of the way when they don’t.

Staying With OAuth 2

But what if you don’t want all the new features at all? What if you’re totally fine with OAuth 2 as it is today, or as it will be in a year with all of its extensions? In those cases, just keep using it. No, really. If it’s a good solution, you should keep using it, even if something fancier is on the horizon. OAuth 2 is a good solution for a lot of things, and we’ll continue seeing it for a long time to come.

OAuth 2’s got a lot of great extensions that let it do a lot of different things, including things like PAR and RAR that start to approach a very different protocol, one closer to XYZ than to classical OAuth 2. However, these extensions have to live in the world of OAuth 2, and the fit isn’t always that great. But for many developers, the cost of smushing these new functionalities into their existing system might be less than the cost of switching to a new system, and that’s a fine and reasonable approach.

But for the rest of the world, whether OAuth 2 was never a good fit or it simply doesn’t offer quite enough, it’s a good time to move forward in a clear and deliberate way.


Justin Richer is a security architect and freelance consultant living in the Boston area. To get in touch, contact his company: https://bspk.io/