Deployment and Hosting Patterns in OAuth
Back in 2006 (when OAuth started), if you wanted to run a server you basically had to do everything yourself. These days, just about everything is available as a service, from basic computation all the way to fully functional APIs and applications. This diversity of offerings brings a lot of tradeoffs in terms of cost, functionality, control, security, and risk. For many companies — and even individuals — it makes sense to outsource key functions to dedicated specialist providers in order to leverage their expertise and infrastructure. You can outsource all kinds of functionality, but the risk always comes back to you, since it’s your company or service on the line if things fail. The counterargument is that the experts are less likely to make mistakes or be caught unaware in the first place, since that’s their focus. But if there ever is a failure? You’re the one who’s going to bear the consequences.
Things get particularly interesting when you start to consider outsourcing some of your security functionality. Companies will happily sell you identity, authorization, and authentication services for all your users and applications, and as with anything else you’ve got your choice of where and how those services operate.
In the OAuth world, there are different common deployment patterns for standing up an authorization server. I recently took on a new client, Authlete, whose offering is a bit different from what I’ve seen others provide, and I’ll take some time to step through the basic pros and cons of each category. And before we get started, I’ll point out that this blog is and always will be my own independent opinion and stance as a professional in this field. Authlete didn’t ask me to write this, nor did they pay me for its content.
On-Premises
First off, we’ve got the traditional enterprise deployment where you get to build and deploy absolutely everything, from the network to the servers to the firewalls to everything else needed to make the service work. One could call this the original approach to getting a service up on the internet: if you wanted it, you built it. You would, of course, buy pieces of it and put it together, or maybe even buy a box with preinstalled software, but ultimately it’s your machine on your network providing your service. The early days of OAuth more or less assumed this model, where you’d have the authorization server and resource server running on the same physical box.
The biggest upside to this approach is control. There are no boundaries on what you can customize and build out, since the entire stack is under your control. Do you need a filter set up a particular way? Go for it. Want to have your brand injected into the HTTP headers? No problem, throw on a custom filter and call it a day. Want to upgrade your machine? Just head to the store and pick up some new parts to throw in there.
The downside is that with all of that control comes great responsibility. If the hard drive dies on your machine? Well, I hope you’ve got a backup ready. Did you lose power? I hope you’ve got an offsite hot spare someplace. Something broken at 3AM? Time to drive down to the office and see if you can figure out what’s going on before your customers notice. Need more bandwidth for a big marketing push? Better call your ISP and see if you can upgrade your connection. Is there a new zero-day exploit in some piece of software that comes prepackaged with your Linux distribution that you didn’t even realize was installed? Then I hope you’re really, really fast at patching and mitigating such bugs — which assumes you see the announcement at all.
Hosted Server
Another approach is to let somebody else keep your computer in their data center. The upside here is that you can give your box over to people who are really good at running data centers. They get to worry about things like the internet connection being up to snuff and the lights staying on. And back in the days of dialup internet access, this was perhaps the only way for small players to get a server with always-on availability on the internet.
But past that? It’s still up to you. You get to install the OS and software, you get to keep it patched, and you get to run whatever software you’re serving. You’ve still got a lot of control, but you’ve lost one key aspect of that control: physical access. If you can get physical access to a computer, you can override just about any software access controls, eventually. Are you storing anything sensitive in plain text on the hard drive? Now you’ve got to trust that someone else doesn’t just walk up to the server rack and walk off with a copy of your data. They’re also running the local network that your computer is attached to, and you need to trust that they’re not sitting on any connections and doing fishy things to your data streams.
OAuth tends to work in this space similarly to how it does in the on-premises world. You take your computer, or stack of computers, and set them up just as you would in your own network. Your OAuth server has its own accounts and controls, and everything is stored on your own set of drives.
There are, of course, companies that will take care of patching and updating the core system for you. You get less control over your instance here, and maybe you need to share some resources with other tenants, but you at least get to stop worrying about the OS and all its fiddly bits and start focusing on the product or service that you’re trying to provide.
I’ll even extend this category to include a hosted virtual server, with the benefit that it’s easier to scale and harder to walk off with a hard drive in that case. Still, if you’re paying someone for a computer that you access only remotely, then you’ve still got to make sure that everything in your stack is up to snuff.
Hosted Service
So you can outsource the network, or the computer, or both — why not just outsource the whole thing? Following the same line of thought above, why not just outsource the security entirely to someone else? Security protocols can be complicated and can have some strange consequences, so with this option you can let an expert handle it from end to end for you.
On the upside, you no longer have to manage the service or the infrastructure that it runs on. For OAuth, this usually means having someone run your authorization server for you. New features like PKCE or best practices like scrubbing Referer headers can be implemented and incorporated without any action on your part. Your systems and services can just use it and go on their way, and this layer of abstraction is part of the value proposition of standards.
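To make that concrete, here’s a minimal sketch of the kind of check a PKCE-capable authorization server performs when an authorization code is redeemed — exactly the sort of detail a hosted provider absorbs so that you never have to write or maintain it. The function name and standalone usage here are my own illustration; the underlying S256 rule comes from RFC 7636.

```python
import base64
import hashlib
import secrets

def verify_pkce_s256(code_verifier: str, stored_code_challenge: str) -> bool:
    """Check that the verifier presented at the token endpoint matches the
    challenge sent with the original authorization request (RFC 7636, S256):
    challenge = BASE64URL-ENCODE(SHA256(verifier)), without padding."""
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    computed = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    # Constant-time comparison to avoid leaking information through timing.
    return secrets.compare_digest(computed, stored_code_challenge)

# Example: generate the client-side pair that the check above would accept.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
challenge = base64.urlsafe_b64encode(
    hashlib.sha256(verifier.encode("ascii")).digest()
).rstrip(b"=").decode("ascii")
assert verify_pkce_s256(verifier, challenge)
```

In a fully hosted setup, this logic lives entirely on the provider’s side; when a new extension like this becomes a best practice, it simply shows up in their service.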
Conversely, the security of your application is now fully in the hands of someone else. Anyone who controls that remote authorization server could issue tokens for use with your APIs whenever and however they wanted to. When things are out of your hands and in someone else’s, you need to trust that they’re doing what they say they’re doing. On the whole, it’s difficult for you to verify that they’re doing things properly, but that might be just fine, because maybe you don’t have the expertise to verify they’re doing a good job anyway and you’re banking on their reputation and track record. After all, we don’t routinely dig around in our own bodies after a surgery to see if the doctor put things back together to our liking — we trust their expertise and reputation and, if we’re careful, make sure that the system is such that any one mistake won’t cascade catastrophically.
But if they’re doing their jobs well, then they’re going to take care of things for you. If there’s a zero-day against a piece of their infrastructure, they’re the ones that will need to patch it. Of course, you do have to wait for them to patch it, but since it is part of their core business to provide a secure system, they’re highly motivated to do so.
The troubling thing with security systems like OAuth is that now you’ve got an external party that needs to see and manage things like user accounts for you. If you’ve got a separate system for handling this already, that means you’re going to end up with password vaults and other undesirable setups. Ironically, OAuth and OpenID Connect are designed to allow exactly this type of connection to be trusted, but if you’re outsourcing that functionality to begin with, you can’t really be expected to use those. In this case, where does the trust come from?
The other thing that bothers me about systems like this is that there’s inevitably a proprietary connection to the hosted service for getting it to connect back to your own systems. Sure, you can have someone else handle the OAuth bits, but you’re now responsible for learning and implementing their custom APIs. This ties you to a specific vendor, unlike a standards-based approach.
You’re also still responsible for security. Any downstream systems that you run will still need to follow best practices, and you’ll still need to make sure all your connections to the external systems are protected. Furthermore, you’re going to need to make sure that you’re using the proprietary service API in a secure way, and the best practices for doing that might not be obvious or well documented.
Semi-Hosted Service
For a while, these were the options that I’d seen being deployed, but Authlete had a different take on the hosted service that I found interesting: instead of outsourcing the entire authorization server, you could instead outsource just the core functionality and implement a few of the simpler parts locally. In the Authlete model in particular, you handle user authentication on a local application that you run wherever you like, but then hand off the incoming request to a hosted service to say “what do I do with this request?” and respond accordingly. It’s the responsibility of the back end service to make sure that the request parameters are well-formed and compliant with the spec, and that server has to figure out what the right next course of action is. It’s the responsibility of the local application to interact with the user and process the results of the back end API’s response.
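To make that hand-off concrete, here’s a rough sketch of what the local, user-facing half might look like. The base URL, endpoint path, field names, and response values are hypothetical stand-ins rather than Authlete’s actual API; the point is just the shape of the delegation: forward the raw request, let the hosted back end validate it, and act on the verdict it returns.

```python
# Sketch of the local half of a semi-hosted authorization server.
# BACKEND_URL, the "/authorization" path, and the response fields are
# hypothetical; substitute whatever the hosted service actually exposes.
import requests
from flask import Flask, abort, render_template, request

app = Flask(__name__)
BACKEND_URL = "https://hosted-backend.example.com"
API_KEY = "..."  # credential for the trusted link to the hosted back end

@app.route("/authorize")
def authorize():
    # Hand the raw query parameters to the hosted back end and ask, in
    # effect, "what do I do with this request?"
    verdict = requests.post(
        f"{BACKEND_URL}/authorization",
        json={"parameters": request.query_string.decode()},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    ).json()

    if verdict["action"] == "INTERACTION":
        # Well-formed request: show our own login/consent page to the user.
        return render_template("login.html", ticket=verdict["ticket"])
    elif verdict["action"] == "BAD_REQUEST":
        # Malformed or non-compliant request: relay the error the back end built.
        abort(400, verdict["error_description"])
    else:
        abort(500)
```

Notice that the local code never parses `response_type`, `scope`, PKCE parameters, or any extension syntax; it only routes the back end’s decision.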
The upside, like with most hosted solutions, is immediate simplicity. You don’t have to deal with parsing the protocol parameters or supporting extensions. The hosted portion handles creating and managing tokens, and keeping client credentials safe.
Special to this model, you don’t need to hand off all of your user accounts to someone else for verification. Instead, your local app does a login just like it always has, and then it turns around and tells the remote server who logged in. The remote server takes that statement and issues all the right tokens and credentials as needed. Your users interact only with your local application’s pages — their browsers don’t even need to reach outside of your network. By doing this, you’re effectively separating the concerns of user authentication from authorization and federation with other apps and APIs.
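Continuing that sketch, once your own login page has authenticated the user, the local app reports only the outcome to the hosted service and relays whatever response comes back. As before, the endpoint and field names are made up for illustration.

```python
# Continues the sketch above: reuses app, BACKEND_URL, and API_KEY from there.
import requests
from flask import redirect, request

@app.route("/authorize/decision", methods=["POST"])
def decision():
    # Our own login form (rendered by the /authorize handler) has already
    # checked the user's credentials against our local user store; the
    # hosted back end never sees the password.
    subject = request.form["username"]
    ticket = request.form["ticket"]

    # Tell the back end who logged in. It builds the actual authorization
    # response (code, tokens, or error) and tells us what to send back.
    result = requests.post(
        f"{BACKEND_URL}/authorization/issue",  # hypothetical endpoint name
        json={"ticket": ticket, "subject": subject},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    ).json()

    # Typically a redirect back to the client's redirect URI with the code
    # or error parameters attached; the user's browser never left our pages.
    return redirect(result["redirect_to"])
```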
This solution inherits nearly all of the drawbacks of a fully-hosted solution: there’s a proprietary API to learn, you need to trust another party with your security components, and you’re still not fully off the hook in terms of security. The biggest drawback of this semi-hosted solution is that it’s incredibly difficult, if not impossible, for different components to verify what the party on the far end is doing. There’s a very explicit trust relationship set up between the local component and the hosted back end. As far as the back end service is concerned, the user-facing application needs to be behaving perfectly in order for the whole system to function properly.
Conclusions
I’m not writing this blog post to tell you what the right answer is. After all, that’s a complicated question, and it depends mostly on what you’re trying to do and where your own capabilities are best applied. Do you have great infrastructure teams and really want tight control over everything? Then sure, run it all in house. Do you have no idea what you’re doing? Maybe get someone else to run at least parts of this for you. Really, there is no one answer, but there are a lot of options.
As with any choice, there are trade-offs that you need to consider; but no matter what parts of your infrastructure or functionality you outsource, remember that you can never outsource your risk. In the end, all the fallout from an attack, outage, or bug will come back to you.