Could anyone explain what's good about OAuth2 and why we should implement it? I ask because I'm a bit confused about it — here are my current thoughts:
OAuth1 (more precisely HMAC) requests seem logical, easy to understand, easy to develop and really, really secure.
OAuth2, instead, brings authorization requests, access tokens and refresh tokens, and you have to make 3 requests at the very start of a session to get the data you're after. And even then, one of your requests will eventually end up failing when the token expires.
And to get another access token, you use a refresh token that was passed at the same time as the access token. Does that make the access token futile from a security point of view?
Plus, as /r/netsec has shown recently, SSL isn't entirely secure, so the push to get everything onto TLS/SSL instead of a secure HMAC confuses me.
The OAuth working group argues that it's not about 100% safety, but about getting the spec published and finished. That doesn't exactly sound promising from a provider's point of view. I can see what the draft is trying to achieve when it mentions the six different flows, but it's just not fitting together in my head.
I think it might be more my struggle to understand its benefits and reasoning than actually disliking it, so this may be a bit of an unwarranted attack, and sorry if this could seem like a rant.
Background: I've written client and server stacks for OAuth 1.0a and 2.0.
Both OAuth 1.0a & 2.0 support two-legged authentication, where a server is assured of a user's identity, and three-legged authentication, where a server is assured by a content provider of the user's identity. Three-legged authentication is where authorization requests and access tokens come into play, and it's important to note that OAuth 1 has those, too.
The complex one: three-legged authentication
A main point of the OAuth specs is for a content provider (e.g. Facebook, Twitter, etc.) to assure a server (e.g. a Web app that wishes to talk to the content provider on behalf of the client) that the client has some identity. What three-legged authentication offers is the ability to do that without the client or server ever needing to know the details of that identity (e.g. username and password).
Without getting too deep into the details of OAuth:
1. The client submits an authorization request to the server, which validates that the client is a legitimate client of its service.
2. The server redirects the client to the content provider to request access to its resources.
3. The content provider validates the user's identity, and often requests their permission to access the resources.
4. The content provider redirects the client back to the server, notifying it of success or failure. This request includes an authorization code on success.
5. The server makes an out-of-band request to the content provider and exchanges the authorization code for an access token.
The server can now make requests to the content provider on behalf of the user by passing the access token.
Each exchange (client->server, server->content provider) includes validation of a shared secret, but since OAuth 1 can run over an unencrypted connection, each validation cannot pass the secret over the wire.
That's done, as you've noted, with HMAC. The client uses the secret it shares with the server to sign the arguments for its authorization request. The server takes the arguments, signs them itself with the client's key, and is able to see whether it's a legitimate client (in step 1 above).
This signature requires both the client and the server to agree on the order of the arguments (so they're signing exactly the same string), and one of the main complaints about OAuth 1 is that it requires both the server and the clients to sort and sign identically. This is fiddly code: either it's right, or you get 401 Unauthorized with little help about what went wrong. This increases the barrier to writing a client.
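To make that fiddliness concrete, here is a rough sketch in Go of the kind of base-string construction and HMAC-SHA1 signing OAuth 1.0a requires. The parameter names, URL, and secrets are made up, and a real implementation must follow the spec's exact RFC 3986 percent-encoding and parameter rules (url.QueryEscape is close, but not identical):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha1"
	"encoding/base64"
	"fmt"
	"net/url"
	"sort"
	"strings"
)

// sign builds an OAuth 1.0a-style HMAC-SHA1 signature. Simplified sketch only:
// a compliant implementation must use strict RFC 3986 percent-encoding and
// include every oauth_* and query parameter of the request.
func sign(method, rawURL string, params map[string]string, consumerSecret, tokenSecret string) string {
	// Both sides must sort the parameters identically before signing,
	// otherwise the signatures will never match.
	keys := make([]string, 0, len(params))
	for k := range params {
		keys = append(keys, k)
	}
	sort.Strings(keys)

	pairs := make([]string, 0, len(keys))
	for _, k := range keys {
		pairs = append(pairs, url.QueryEscape(k)+"="+url.QueryEscape(params[k]))
	}

	// Signature base string: METHOD & encoded URL & encoded sorted parameters.
	base := strings.ToUpper(method) + "&" +
		url.QueryEscape(rawURL) + "&" +
		url.QueryEscape(strings.Join(pairs, "&"))

	// Signing key: consumer secret & token secret (never sent over the wire).
	key := url.QueryEscape(consumerSecret) + "&" + url.QueryEscape(tokenSecret)
	mac := hmac.New(sha1.New, []byte(key))
	mac.Write([]byte(base))
	return base64.StdEncoding.EncodeToString(mac.Sum(nil))
}

func main() {
	sig := sign("GET", "https://api.example.com/photos",
		map[string]string{"oauth_consumer_key": "key", "oauth_nonce": "abc", "size": "original"},
		"consumer-secret", "token-secret")
	fmt.Println(sig) // the server recomputes this signature and compares
}
```

If the client sorts, encodes, or joins anything even slightly differently from the server, the only feedback is a 401.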
By requiring the authorization request to run over SSL, OAuth 2.0 removes the need for argument sorting and signing altogether. The client passes its secret to the server, which validates it directly.
The same requirement is present on the server->content provider connection, and since that connection runs over SSL, it removes one barrier to writing a server that accesses OAuth services.
That makes things a lot easier in steps 1, 2, and 5 above.
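As an illustration (not part of the spec itself), here is roughly what steps 2 and 5 look like for the server, acting as an OAuth 2 client of the content provider, using Go's golang.org/x/oauth2 package. The endpoints, credentials, scopes, and state value below are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"golang.org/x/oauth2"
)

// Hypothetical client registration with the content provider.
var conf = &oauth2.Config{
	ClientID:     "my-client-id",
	ClientSecret: "my-client-secret",
	RedirectURL:  "https://myapp.example.com/oauth/callback",
	Scopes:       []string{"profile"},
	Endpoint: oauth2.Endpoint{
		AuthURL:  "https://provider.example.com/oauth/authorize",
		TokenURL: "https://provider.example.com/oauth/token",
	},
}

// Step 2: redirect the client's browser to the content provider.
func login(w http.ResponseWriter, r *http.Request) {
	// "state" should be a random value, checked again in the callback.
	http.Redirect(w, r, conf.AuthCodeURL("random-state"), http.StatusFound)
}

// Step 5: exchange the authorization code for an access token,
// out of band and over TLS. No argument sorting, no signing.
func callback(w http.ResponseWriter, r *http.Request) {
	code := r.URL.Query().Get("code")
	token, err := conf.Exchange(context.Background(), code)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	fmt.Fprintf(w, "access token expires at %v", token.Expiry)
}

func main() {
	http.HandleFunc("/login", login)
	http.HandleFunc("/oauth/callback", callback)
	http.ListenAndServe(":8080", nil)
}
```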
So at this point our server has a permanent access token which is a username/password equivalent for the user. It can make requests to the content provider on behalf of the user by passing that access token as part of the request (as a query argument, HTTP header, or POST form data).
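As a sketch (with a hypothetical API endpoint), the most common of those three styles, sending the token in the Authorization header, looks like this:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// callAPI sends the access token in the Authorization header. Query
// arguments or POST form fields work the same way, but leak the token
// into server logs more easily.
func callAPI(accessToken string) {
	req, err := http.NewRequest("GET", "https://provider.example.com/api/me", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+accessToken)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}

func main() {
	callAPI("the-access-token-from-the-exchange")
}
```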
If the content service is accessed only over SSL, we're done. If it's available via plain HTTP, we'd like to protect that permanent access token in some way. Anyone sniffing the connection would be able to get access to the user's content forever.
The way that's solved in OAuth 2 is with a refresh token. The refresh token becomes the permanent password equivalent, and it's only ever transmitted over SSL. When the server needs access to the content service, it exchanges the refresh token for a short-lived access token. That way all sniffable HTTP accesses are made with a token that will expire. Google is using a 5 minute expiration on their OAuth 2 APIs.
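Here is a sketch of that refresh exchange, again with golang.org/x/oauth2 and placeholder endpoints and credentials; the library's TokenSource performs the refresh-token-for-access-token exchange and refreshes again automatically once the short-lived token expires:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"golang.org/x/oauth2"
)

func main() {
	// Same hypothetical client configuration as before; only the token
	// endpoint matters for a refresh.
	conf := &oauth2.Config{
		ClientID:     "my-client-id",
		ClientSecret: "my-client-secret",
		Endpoint: oauth2.Endpoint{
			TokenURL: "https://provider.example.com/oauth/token",
		},
	}

	// The refresh token is the long-lived, SSL-only credential the server
	// keeps in its own storage.
	stored := &oauth2.Token{RefreshToken: "refresh-token-from-storage"}

	// TokenSource exchanges the refresh token for a short-lived access token.
	ts := conf.TokenSource(context.Background(), stored)
	tok, err := ts.Token()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("short-lived access token expires at", tok.Expiry)
}
```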
So aside from the refresh tokens, OAuth 2 simplifies all the communications between the client, server, and content provider. And the refresh tokens only exist to provide security when content is being accessed unencrypted.
Two-legged authentication
Sometimes, though, a server just needs to control access to its own content. Two-legged authentication allows the client to authenticate the user directly with the server.
OAuth 2 standardizes some extensions to OAuth 1 that were in wide use. The one I know best was introduced by Twitter as xAuth. You can see it in OAuth 2 as Resource Owner Password Credentials.
Essentially, if you can trust the client with the user's credentials (username and password), the client can exchange those directly with the content provider for an access token. This makes OAuth much more useful for mobile apps: with three-legged authentication, you have to embed an HTML web view in order to handle the authorization process with the content server.
With OAuth 1, this was not part of the official standard, and required the same signing procedure as all the other requests.
I just implemented the server side of OAuth 2 with Resource Owner Password Credentials, and from a client's perspective, getting the access token has become simple: request an access token from the server, passing the client id/secret as an HTTP Authorization header and the user's login/password as form data.
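A minimal sketch of that request from the client's side, with a hypothetical token endpoint and made-up credentials:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"strings"
)

func main() {
	// The user's credentials go in the form body...
	form := url.Values{}
	form.Set("grant_type", "password")
	form.Set("username", "alice@example.com")
	form.Set("password", "correct horse battery staple")

	req, err := http.NewRequest("POST", "https://provider.example.com/oauth/token",
		strings.NewReader(form.Encode()))
	if err != nil {
		log.Fatal(err)
	}
	// ...and the client id/secret go in the Authorization header (HTTP Basic auth).
	req.SetBasicAuth("my-client-id", "my-client-secret")
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // JSON containing access_token, expires_in, etc.
}
```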
Advantage: Simplicity
So from an implementor's perspective, the main advantages I see in OAuth 2 are in reduced complexity. It doesn't require the request signing procedure, which is not exactly difficult but is certainly fiddly. It greatly reduces the work required to act as a client of a service, which is where (in the modern, mobile world) you most want to minimize pain. The reduced complexity on the server->content provider end makes it more scalable in the datacenter.
And it codifies into the standard some extensions to OAuth 1.0a (like xAuth) that are now in wide use.
First, as is clearly stated in the OAuth community's guidance on user authentication:
OAuth 2.0 is not an authentication protocol.
Authentication in the context of a user accessing an application tells an application who the current user is and whether or not they're present. A full authentication protocol will probably also tell you a number of attributes about this user, such as a unique identifier, an email address, and what to call them when the application says "Good Morning".
However, OAuth tells the application none of that. OAuth says absolutely nothing about the user, nor does it say how the user proved their presence or even if they're still there. As far as an OAuth client is concerned, it asked for a token, got a token, and eventually used that token to access some API. It doesn't know anything about who authorized the application or if there was even a user there at all.
There is a standard for user authentication using OAuth: OpenID Connect, which is built on top of (and compatible with) OAuth 2.0.
The OpenID Connect ID Token is a signed JSON Web Token (JWT) that is given to the client application alongside the regular OAuth access token. The ID Token contains a set of claims about the authentication session, including an identifier for the user (sub), an identifier for the identity provider that issued the token (iss), and the identifier of the client for which the token was created (aud).
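For illustration only, here is a sketch that decodes an ID Token's payload to show where those claims live. A real client must verify the JWT's signature (for example with an OIDC library) rather than just decoding it, the token string below is a placeholder, and aud may also be a list:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"log"
	"strings"
)

// Claims mirrors the ID Token fields mentioned above.
type Claims struct {
	Sub string `json:"sub"` // the user
	Iss string `json:"iss"` // the identity provider that issued the token
	Aud string `json:"aud"` // the client the token was issued for
}

func main() {
	// A JWT is three base64url sections: header.payload.signature.
	// Placeholder only: use the real ID Token from the token response,
	// and never trust its contents without verifying the signature.
	rawIDToken := "header.payload.signature"

	parts := strings.Split(rawIDToken, ".")
	if len(parts) != 3 {
		log.Fatal("not a JWT")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		log.Fatal(err)
	}

	var c Claims
	if err := json.Unmarshal(payload, &c); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("user %q authenticated by %q for client %q\n", c.Sub, c.Iss, c.Aud)
}
```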
In Go, you can look at coreos/dex, an OpenID Connect (OIDC) identity and OAuth 2.0 provider with pluggable connectors.
I would answer this question slightly differently, and I will be very precise and brief, mainly because @Peter T answered it all.
The main gain that I see from this standard is to respect two principles:

- Separation of concerns.
- Decoupling authentication from the web application, which usually serves the business logic.
By doing so, you can:

- Implement an alternative to Single Sign-On: if you have multiple applications that trust one STS (Security Token Service), one username works across all of them.
- Enable your web application (the client) to access resources that belong to the user rather than to the web application itself.
- Delegate the authentication process to a third party that you trust, and never worry about validating the user's credentials yourself.