In this chapter, we will see how we can secure access to the APIs and web pages exposed by the edge server introduced in the previous chapter. We will learn how to use HTTPS to protect against eavesdropping on external access to our APIs, and how to use OAuth 2.0 and OpenID Connect to authenticate and authorize users and client applications to access our APIs. Finally, we will use HTTP Basic authentication to secure access to the discovery server, Netflix Eureka.
The following topics will be covered in this chapter:
An introduction to the OAuth 2.0 and OpenID Connect standards
A general discussion on how to secure the system landscape
Protecting external communication with HTTPS
Securing access to the discovery server, Netflix Eureka
Adding a local authorization server to our system landscape
Authenticating and authorizing API access using OAuth 2.0 and OpenID Connect
Testing with the local authorization server
Testing with an external OpenID Connect provider, Auth0
Technical requirements
For instructions on how to install the tools used in this book and how to access the source code for this book, see:
Chapter 21 for macOS
Chapter 22 for Windows
The code examples in this chapter all come from the source code in $BOOK_HOME/Chapter11.

If you want to view the changes applied to the source code in this chapter, that is, see what it took to secure access to the APIs in the microservice landscape, you can compare it with the source code for Chapter 10, Using Spring Cloud Gateway to Hide Microservices behind an Edge Server. You can use your favorite diff tool and compare the two folders, $BOOK_HOME/Chapter10 and $BOOK_HOME/Chapter11.
Introduction to OAuth 2.0 and OpenID Connect
Before introducing OAuth 2.0 and OpenID Connect, let's clarify what we mean by authentication and authorization. Authentication means identifying a user by validating credentials supplied by the user, such as a username and password. Authorization is about giving access to various parts of, in our case, an API to an authenticated user.
OAuth 2.0 is an open standard for authorization delegation, and OpenID Connect is an add-on to OAuth 2.0 that enables client applications to verify the identity of users based on the authentication performed by the authorization server. Let's look briefly at OAuth 2.0 and OpenID Connect separately to get an initial understanding of their purposes!
Introducing OAuth 2.0
OAuth 2.0 is a widely accepted open standard for authorization that enables a user to give consent for a third-party client application to access protected resources in the name of the user. Giving a third-party client application the right to act in the name of a user, for example, calling an API, is known as authorization delegation.
So, what does this mean?
Let's start by sorting out the concepts used:
Resource owner: The end user.
Client: The third-party client application, for example, a web app or a native mobile app, that wants to call some protected APIs in the name of the end user.
Resource server: The server that exposes the APIs that we want to protect.
Authorization server: The authorization server issues tokens to the client after the resource owner, that is, the end user, has been authenticated. The management of user information and the authentication of users are typically delegated, behind the scenes, to an Identity Provider (IdP).
A client is registered in the authorization server and is given a client ID and a client secret. The client secret must be protected by the client, like a password. A client also gets registered with a set of allowed redirect URIs that the authorization server uses, after a user has been authenticated, to send issued authorization codes and tokens back to the client application.
The following is an example by way of illustration. Let's say that a user accesses a third-party client application and the client application wants to call a protected API to serve the user. To be allowed to access these APIs, the client application needs a way to tell the APIs that it is acting in the name of the user. To avoid solutions where the user must share their credentials with the client application for authentication, an access token is issued by an authorization server that gives the client application limited access to a selected set of APIs in the name of the user.
This means that the user never has to reveal their credentials to the client application. The user can also give consent to the client application to access specific APIs on behalf of the user. An access token represents a time-constrained set of access rights, expressed as scopes in OAuth 2.0 terms. A refresh token can also be issued to a client application by the authorization server. A refresh token can be used by the client application to obtain new access tokens without having to involve the user.
The OAuth 2.0 specification defines four authorization grant flows for issuing access tokens, explained as follows:
Authorization code grant flow: This is the safest, but also the most complex, grant flow. It requires the user to interact with the authorization server using a web browser to authenticate and give consent to the client application. After a successful authentication, the authorization server sends an authorization code back to the client application via the web browser, using one of the registered redirect URIs. The client application then exchanges the authorization code, together with its client secret, for an access token in a server-to-server request. Since only the short-lived authorization code passes through the web browser, the access token itself stays protected.
Implicit grant flow: This flow is also web browser-based but intended for client applications that are not able to keep a client secret protected, for example, a single-page web application. The web browser gets an access token back from the authorization server instead of an authorization code. Since the implicit grant flow is less secure than the authorization code grant flow, the client can't request a refresh token.
Resource owner password credentials grant flow : If a client application can't interact with a web browser, it can fall back on this grant flow. In this grant flow, the user must share their credentials with the client application and the client application will use these credentials to acquire an access token.
Client credentials grant flow : In the case where a client application needs to call an API unrelated to a specific user, it can use this grant flow to acquire an access token using its own client ID and client secret.
The full specification can be found here: https://tools.ietf.org/html/rfc6749. There are also a number of additional specifications that detail various aspects of OAuth 2.0; for an overview, refer to https://www.oauth.com/oauth2-servers/map-oauth-2-0-specs/. One additional specification that is worth some extra attention is RFC 7636 – Proof Key for Code Exchange by OAuth Public Clients (PKCE), https://tools.ietf.org/html/rfc7636. This specification describes how an otherwise insecure public client, such as a native mobile app or desktop application, can utilize the authorization code grant flow in a secure way by adding an extra layer of security.
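To make the extra layer of security concrete: with PKCE, the client generates a random code verifier and sends its SHA-256 hash, Base64URL-encoded, as a code challenge when requesting the authorization code; the plain verifier is only revealed when exchanging the code for a token. A minimal shell sketch of the challenge derivation, using the official test vector from Appendix B of RFC 7636:

```shell
# Derive a PKCE code challenge (the S256 method from RFC 7636) from a
# code verifier. The verifier below is the specification's own test vector.
CODE_VERIFIER="dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"

# SHA-256 hash the verifier, then Base64URL-encode the hash without padding
CODE_CHALLENGE=$(printf '%s' "$CODE_VERIFIER" \
  | openssl dgst -sha256 -binary \
  | openssl base64 \
  | tr '+/' '-_' | tr -d '=\n')

echo "$CODE_CHALLENGE"   # E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

An attacker who intercepts the authorization code cannot redeem it without the matching code verifier, which never left the client.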
The OAuth 2.0 specification was published in 2012, and over the years a lot of lessons have been learned about what works and what does not. In 2019, work began to establish OAuth 2.1, consolidating all the best practices and experiences from using OAuth 2.0. A draft version can be found here: https://tools.ietf.org/html/draft-ietf-oauth-v2-1-01 .
In my opinion, the most important improvements in OAuth 2.1 are:
PKCE is integrated into the authorization code grant flow. The use of PKCE is required for public clients to improve their security, as described above. For confidential clients, where the authorization server can verify their credentials, the use of PKCE is not required, only recommended.
The implicit grant flow is deprecated and omitted from the specification, due to its less secure nature.
The resource owner password credentials grant flow is also deprecated and omitted from the specification, for the same reasons.
Given the direction in the upcoming OAuth 2.1 specification, we will only use the authorization code grant flow and the client credentials grant flow in this book.
When it comes to automating tests against APIs that are protected by OAuth 2.0, the client credentials grant flow is very handy since it doesn't require manual interaction using a web browser. We will use this grant flow later on in this chapter with our test script; see the Changes in the test script section.
Introducing OpenID Connect
OpenID Connect (abbreviated to OIDC) is, as has already been mentioned, an add-on to OAuth 2.0 that enables client applications to verify the identity of users. OIDC adds an extra token, an ID token, that the client application gets back from the authorization server after a completed grant flow.
The ID token is encoded as a JSON Web Token (JWT) and contains a number of claims, such as the ID and email address of the user. The ID token is digitally signed using JSON Web Signatures. This makes it possible for a client application to trust the information in the ID token by validating its digital signature using public keys from the authorization server.
Optionally, access tokens can also be encoded and signed in the same way as ID tokens, but it is not mandatory according to the specification. Also importantly, OIDC defines a discovery endpoint, which is a standardized way to establish URLs to important endpoints, such as requesting authorization codes and tokens or getting the public keys to verify a digitally signed JWT. Finally, it also defines a user-info endpoint, which can be used to get extra information about an authenticated user given an access token for that user.
For an overview of the available specifications, see https://openid.net/developers/specs/ .
In this book, we will only use authorization servers that comply with the OpenID Connect specification. This will simplify the configuration of resource servers by the use of their discovery endpoints. We will also use the optional support for digitally signed JWT access tokens to simplify how resource servers can verify the authenticity of the access tokens. See the Changes in both the edge server and the product-composite service section below.
This concludes our introduction to the OAuth 2.0 and OpenID Connect standards. Later on in this chapter, we will learn more about how to use these standards. In the next section, we will get a high-level view of how the system landscape will be secured.
Securing the system landscape
To secure the system landscape as described in the introduction to this chapter, we will perform the following steps:
Encrypt external requests and responses to and from our external API using HTTPS to protect against eavesdropping
Authenticate and authorize users and client applications that access our APIs using OAuth 2.0 and OpenID Connect
Secure access to the discovery server, Netflix Eureka, using HTTP basic authentication
We will only apply HTTPS for external communication to our edge server, using plain HTTP for communication inside our system landscape.
In the chapter on service meshes (Chapter 18 , Using a Service Mesh to Improve Observability and Management ) that will appear later in this book, we will see how we can get help from a service mesh product to automatically provision HTTPS to secure communication inside a system landscape.
For test purposes, we will add a local OAuth 2.0 authorization server to our system landscape. All external communication with the authorization server will be routed through the edge server. The edge server and the product-composite service will act as OAuth 2.0 resource servers; that is, they will require a valid OAuth 2.0 access token to allow access.
To minimize the overhead of validating access tokens, we will assume that they are encoded as signed JWTs and that the authorization server exposes an endpoint that the resource servers can use to fetch the public keys, also known as a JSON Web Key Set, or jwk-set for short, required to validate the signatures.
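For illustration only, a jwk-set document is a JSON structure listing the authorization server's public keys; a minimal made-up example (the kid and n values are placeholders, not real key material) could look like this:

```json
{
  "keys": [
    {
      "kty": "RSA",
      "kid": "example-key-id",
      "use": "sig",
      "alg": "RS256",
      "n": "<Base64URL-encoded RSA modulus>",
      "e": "AQAB"
    }
  ]
}
```

A resource server matches the kid claim in a JWT's header against this list to pick the key for verifying the token's signature.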
The system landscape will look like the following:
Figure 11.2: Adding an authorization server to the system landscape
From the preceding diagram, we can note that:
HTTPS is used for external communication, while plain text HTTP is used inside the system landscape
The local OAuth 2.0 authorization server will be accessed externally through the edge server
Both the edge server and the product-composite microservice will validate access tokens as signed JWTs
The edge server and the product-composite microservice will get the authorization server's public keys from its jwk-set endpoint and use them to validate the signature of the JWT-based access tokens
Note that we will focus on securing access to APIs over HTTP, not on covering general best practices for securing web applications, for example, managing web application security risks pointed out by the OWASP Top Ten Project. Refer to https://owasp.org/www-project-top-ten/ for more information.
With this overview of how the system landscape will be secured, let's start to see how we can protect external communication from eavesdropping using HTTPS.
Protecting external communication with HTTPS
In this section, we will learn how to prevent eavesdropping on external communication, for example, from the internet, via the public APIs exposed by the edge server. We will use HTTPS to encrypt communication. To use HTTPS, we need to do the following:
Create a certificate: We will create our own self-signed certificate, sufficient for development purposes
Configure the edge server: It has to be configured to accept only HTTPS-based external traffic using the certificate
The self-signed certificate is created with the following command:
keytool -genkeypair -alias localhost -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore edge.p12 -validity 3650
The source code comes with a sample certificate file, so you don't need to run this command to run the following examples.
The command will ask for a number of parameters. When asked for a password, I entered password. For the rest of the parameters, I simply entered an empty value to accept the default value. The certificate file created, edge.p12, is placed in the gateway project's folder src/main/resources/keystore. This means that the certificate file will be placed in the .jar file when it is built and will be available on the classpath at runtime at keystore/edge.p12.
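If keytool is not at hand, a functionally similar self-signed certificate can be produced with OpenSSL and bundled into a PKCS12 keystore. This is a sketch of an alternative, not the book's command; the subject, filenames, and password are assumptions chosen to match the keytool example:

```shell
# Generate a self-signed certificate and private key, valid for 10 years,
# then bundle them into a PKCS12 keystore comparable to the keytool output.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/CN=localhost" \
  -keyout edge-key.pem -out edge-cert.pem

openssl pkcs12 -export -name localhost \
  -in edge-cert.pem -inkey edge-key.pem \
  -out edge.p12 -passout pass:password
```

The resulting edge.p12 can be dropped into src/main/resources/keystore just like the keytool-generated one.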
Providing certificates using the classpath is sufficient during development, but not applicable to other environments, for example, a production environment. See the Replacing a self-signed certificate at runtime section below for how we can replace this certificate with an external certificate at runtime!
To configure the edge server to use the certificate and HTTPS, the following is added to application.yml in the gateway project:
server.port: 8443
server.ssl:
  key-store-type: PKCS12
  key-store: classpath:keystore/edge.p12
  key-store-password: password
  key-alias: localhost
Some notes from the preceding source code:
The path to the certificate is specified in the server.ssl.key-store parameter, and is set to classpath:keystore/edge.p12. This means that the certificate will be picked up on the classpath from the location keystore/edge.p12.
The password for the certificate is specified in the server.ssl.key-store-password parameter.
To indicate that the edge server talks HTTPS and not HTTP, we also change the port from 8080 to 8443 in the server.port parameter.
In addition to these changes in the edge server, changes are also required in the following files to reflect the changes to the port and protocol, replacing HTTP with HTTPS and 8080 with 8443:
The three Docker Compose files, docker-compose*.yml
The test script, test-em-all.bash
Providing certificates using the classpath is, as already mentioned previously, only sufficient during development. Let's see how we can replace this certificate with an external certificate at runtime.
Replacing a self-signed certificate at runtime
Placing a self-signed certificate in the .jar file is only useful for development. For a working solution in runtime environments, for example, for test or production, it must be possible to use certificates signed by authorized CAs (short for Certificate Authorities).
It must also be possible to specify the certificates to be used during runtime without the need to rebuild the .jar files and, when using Docker, the Docker image that contains the .jar file. When using Docker Compose to manage the Docker container, we can map a volume in the Docker container to a certificate that resides on the Docker host. We can also set up environment variables for the Docker container that point to the external certificate in the Docker volume.
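The environment variables work because of Spring Boot's relaxed binding: a configuration property can be overridden by an environment variable whose name is the property name uppercased, with dots and dashes replaced by underscores. A small sketch of the mapping (the to_env_var helper is just for illustration):

```shell
# Map a Spring Boot property name to its environment-variable form,
# following Spring Boot's relaxed binding rules: uppercase everything
# and replace '.' and '-' with '_'.
to_env_var() { printf '%s' "$1" | tr '[:lower:]' '[:upper:]' | tr '.-' '__'; }

to_env_var "server.ssl.key-store"; echo            # SERVER_SSL_KEY_STORE
to_env_var "server.ssl.key-store-password"; echo   # SERVER_SSL_KEY_STORE_PASSWORD
```

These are exactly the variable names used in the docker-compose.yml fragment shown below.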
In Chapter 15 , Introduction to Kubernetes , we will learn about Kubernetes, where we will see more powerful solutions for how to handle secrets, such as certificates, that are suitable for running Docker containers in a cluster; that is, where containers are scheduled on a group of Docker hosts and not on a single Docker host.
The changes described in this topic have not been applied to the source code in the book's GitHub repository; you need to make them yourself to see them in action!
To replace the certificate packaged in the .jar file, perform the following steps:
Create a second certificate and set the password to testtest when asked for it:
cd $BOOK_HOME/Chapter11
mkdir keystore
keytool -genkeypair -alias localhost -keyalg RSA -keysize 2048 -storetype PKCS12 -keystore keystore/edge-test.p12 -validity 3650
Update the Docker Compose file, docker-compose.yml, with environment variables for the location of and password for the new certificate, and a volume that maps to the folder where the new certificate is placed. The configuration of the edge server will look like the following after the change:
gateway:
  environment:
    - SPRING_PROFILES_ACTIVE=docker
    - SERVER_SSL_KEY_STORE=file:/keystore/edge-test.p12
    - SERVER_SSL_KEY_STORE_PASSWORD=testtest
  volumes:
    - $PWD/keystore:/keystore
  build: spring-cloud/gateway
  mem_limit: 512m
  ports:
    - "8443:8443"
If the edge server is up and running, it needs to be restarted with the following commands:
docker-compose up -d --scale gateway=0
docker-compose up -d --scale gateway=1
The command docker-compose restart gateway might look like a good candidate for restarting the gateway service, but it actually does not take changes in docker-compose.yml into consideration. Hence, it is not a useful command in this case.
The new certificate is now in use!
This concludes the section on how to protect external communication with HTTPS. In the next section, we will learn how to secure access to the discovery server, Netflix Eureka, using HTTP Basic authentication.
Securing access to the discovery server
Previously, we learned how to protect external communication with HTTPS. Now we will use HTTP Basic authentication to restrict access to the APIs and web pages on the discovery server, Netflix Eureka. This means that we will require a user to supply a username and password to get access. Changes are required both on the Eureka server and in the Eureka clients, described as follows.
Changes in the Eureka server
To protect the Eureka server, the following changes have been applied in the source code:
In build.gradle, a dependency has been added for Spring Security:
implementation 'org.springframework.boot:spring-boot-starter-security'
Security configuration has been added to the SecurityConfig class:
The user is defined as follows:
@Override
public void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth.inMemoryAuthentication()
        .passwordEncoder(NoOpPasswordEncoder.getInstance())
        .withUser(username).password(password)
        .authorities("USER");
}
The username and password are injected into the constructor from the configuration file:
@Autowired
public SecurityConfig(
    @Value("${app.eureka-username}") String username,
    @Value("${app.eureka-password}") String password
) {
    this.username = username;
    this.password = password;
}
All APIs and web pages are protected using HTTP Basic authentication by means of the following definition:
@Override
protected void configure(HttpSecurity http) throws Exception {
    http
        .authorizeRequests()
        .anyRequest().authenticated()
        .and()
        .httpBasic();
}
Credentials for the user are set up in the configuration file, application.yml:
app:
  eureka-username: u
  eureka-password: p
Finally, the test class, EurekaServerApplicationTests, uses the credentials from the configuration file when testing the APIs of the Eureka server:
@Value("${app.eureka-username}")
private String username;

@Value("${app.eureka-password}")
private String password;

@Autowired
public void setTestRestTemplate(TestRestTemplate testRestTemplate) {
    this.testRestTemplate = testRestTemplate.withBasicAuth(username, password);
}
The above are the steps required for restricting access to the APIs and web pages of the discovery server, Netflix Eureka. It will now use HTTP Basic authentication and require a user to supply a username and password to get access. The last step is to configure Netflix Eureka clients so that they pass credentials when accessing the Netflix Eureka server.
Changes in Eureka clients
For Eureka clients, the credentials can be specified in the connection URL for the Eureka server. This is specified in each client's configuration file, application.yml, as follows:
app:
  eureka-username: u
  eureka-password: p

eureka:
  client:
    serviceUrl:
      defaultZone: "http://${app.eureka-username}:${app.eureka-password}@${app.eureka-server}:8761/eureka/"
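Embedding the credentials in the URL like this makes the Eureka client send a standard HTTP Basic Authorization header, which is simply the Base64 encoding of username:password. A quick shell sketch of what ends up on the wire for the user u with password p:

```shell
# The HTTP Basic Authorization header value is "Basic " followed by
# Base64(username:password) - here for user "u" with password "p".
CREDENTIALS=$(printf '%s' 'u:p' | openssl base64 | tr -d '\n')
AUTH_HEADER="Authorization: Basic $CREDENTIALS"
echo "$AUTH_HEADER"   # Authorization: Basic dTpw
```

Note that Base64 is an encoding, not encryption, which is why HTTP Basic authentication should only be used over HTTPS or, as in our case, inside a trusted system landscape.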
This concludes the section on how to restrict access to the Netflix Eureka server. In the section Testing the protected discovery server , we will run tests to verify that the access is protected. In the next section, we will learn how to add a local authorization server to the system landscape.
Adding a local authorization server
To be able to run tests locally and fully automated with APIs that are secured using OAuth 2.0 and OpenID Connect, we will add an authorization server that is compliant with these specifications to our system landscape. Spring Security unfortunately does not provide an authorization server out of the box. But in April 2020, a community-driven project, Spring Authorization Server , led by the Spring Security team, was announced with the goal to deliver an authorization server. For more information, see https://spring.io/blog/2020/04/15/announcing-the-spring-authorization-server .
The Spring Authorization Server supports both the use of the OpenID Connect discovery endpoint and digital signing of access tokens. It also provides an endpoint that can be accessed using the discovery information to get keys for verifying the digital signature of a token. With support for these features, it can be used as the authorization server in local and automated tests that verify that the system landscape works as expected.
The authorization server in this book is based on the sample authorization server provided by the Spring Authorization Server project; see https://github.com/spring-projects-experimental/spring-authorization-server/tree/master/samples/boot/oauth2-integration/authorizationserver .
The following changes have been applied to the sample project:
The build file has been updated to follow the structure of the other projects' build files in this book.
The port is set to 9999.
A Dockerfile has been added with the same structure as for the other projects in this book.
The authorization server has been integrated with Eureka for service discovery in the same way as the other projects in this book.
Public access has been added to the actuator's endpoints.
WARNING : As already warned about in Chapter 7 , Developing Reactive Microservices , allowing public access to the actuator's endpoints is very helpful during development, but it can be a security issue to reveal too much information in actuator endpoints in production systems. Therefore, plan for minimizing the information exposed by the actuator endpoints in production!
Unit tests have been added that verify access to the most critical endpoints according to the OpenID Connect specification.
The username and password for the single registered user are set to "u" and "p" respectively.
Two OAuth clients are registered, reader and writer, where the reader client is granted a product:read scope and the writer client is granted both a product:read and a product:write scope. Both clients are configured to have the client secret set to secret.
Allowed redirect URIs for the clients are set to https://my.redirect.uri and https://localhost:8443/webjars/swagger-ui/oauth2-redirect.html. The first URL will be used in the tests described below and the second URL is used by the Swagger UI component.
The source code for the authorization server is available in $BOOK_HOME/Chapter11/spring-cloud/authorization-server.
To incorporate the authorization server in the system landscape, changes to the following files have been applied:
The server has been added to the common build file, settings.gradle
The server has been added to the three Docker Compose files, docker-compose*.yml
The edge server, spring-cloud/gateway:
A health check has been added for the authorization server in HealthCheckConfiguration.
Routes to the authorization server for URIs starting with /oauth, /login, and /error have been added in the configuration file, application.yml. These URIs are used to issue tokens for clients, authenticate users, and show error messages.
Since these three URIs need to be unprotected by the edge server, they are configured in the new class SecurityConfig to permit all requests.
Due to a regression in Spring Security 5.5, which is used by Spring Boot 2.5, the Spring Authorization Server can't be used with Spring Boot 2.5 at the time of writing this chapter. Instead, Spring Boot 2.4.4 and Spring Cloud 2020.0.2 are used. For details, see:
With an understanding of how a local authorization server is added to the system landscape, let's move on and see how to use OAuth 2.0 and OpenID Connect to authenticate and authorize access to APIs.
Protecting APIs using OAuth 2.0 and OpenID Connect
With the authorization server in place, we can enhance the edge server and the product-composite service to become OAuth 2.0 resource servers, so that they will require a valid access token to allow access. The edge server will be configured to accept any access token it can validate using the digital signature provided by the authorization server. The product-composite service will also require the access token to contain valid OAuth 2.0 scopes:
The product:read scope will be required for accessing the read-only APIs
The product:write scope will be required for accessing the create and delete APIs
The product-composite service will also be enhanced with configuration that allows its Swagger UI component to interact with the authorization server to issue an access token. This will allow users of the Swagger UI web page to test the protected API.
We also need to enhance the test script, test-em-all.bash, so that it acquires access tokens and uses them when it performs the tests.
Changes in both the edge server and the product-composite service
The following changes have been applied in the source code to both the edge server and the product-composite service:
Spring Security dependencies have been added to build.gradle to support OAuth 2.0 resource servers:
implementation 'org.springframework.boot:spring-boot-starter-security'
implementation 'org.springframework.security:spring-security-oauth2-resource-server'
implementation 'org.springframework.security:spring-security-oauth2-jose'
Security configurations have been added to new SecurityConfig classes in both projects:
@EnableWebFluxSecurity
public class SecurityConfig {

  @Bean
  SecurityWebFilterChain springSecurityFilterChain(ServerHttpSecurity http) {
    http
      .authorizeExchange()
        .pathMatchers("/actuator/**").permitAll()
        .anyExchange().authenticated()
        .and()
      .oauth2ResourceServer()
        .jwt();
    return http.build();
  }
}
Explanations for the preceding source code are as follows:
The annotation @EnableWebFluxSecurity enables Spring Security support for APIs based on Spring WebFlux.
.pathMatchers("/actuator/**").permitAll() is used to allow unrestricted access to URLs that should be unprotected, for example, the actuator endpoints in this case. Refer to the source code for URLs that are treated as unprotected. Be careful about which URLs are exposed unprotected. For example, the actuator endpoints should be protected before going to production.
.anyExchange().authenticated() ensures that the user is authenticated before being allowed access to all other URLs.
.oauth2ResourceServer().jwt() specifies that authorization will be based on OAuth 2.0 access tokens encoded as JWTs.
The authorization server's OIDC discovery endpoint has been registered in the configuration file, application.yml:
app.auth-server: localhost
spring.security.oauth2.resourceserver.jwt.issuer-uri: http://${app.auth-server}:9999
---
spring.config.activate.on-profile: docker
app.auth-server: auth-server
Later on in this chapter, when the system landscape is started up, you can test the discovery endpoint. You can, for example, find the endpoint that returns the keys required for verifying the digital signature of a token using the command:
docker-compose exec auth-server curl localhost:9999/.well-known/openid-configuration -s | jq -r .jwks_uri
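The discovery document itself is plain JSON, so individual endpoints can be picked out with jq, the same tool the command above and the test script use. A self-contained sketch against a hand-written sample document (illustrative field values, not output from a live authorization server):

```shell
# Extract well-known endpoints from a (sample) OIDC discovery document.
# The JSON below is a made-up illustration, not real server output.
DISCOVERY='{
  "issuer": "http://auth-server:9999",
  "authorization_endpoint": "http://auth-server:9999/oauth2/authorize",
  "token_endpoint": "http://auth-server:9999/oauth2/token",
  "jwks_uri": "http://auth-server:9999/oauth2/jwks"
}'

JWKS_URI=$(echo "$DISCOVERY" | jq -r .jwks_uri)
echo "$JWKS_URI"   # http://auth-server:9999/oauth2/jwks
```

This is exactly the mechanism that lets resource servers be configured with only the issuer-uri: everything else is looked up through the discovery document.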
We also need to make some changes that only apply to the product-composite service.
Changes in the product-composite service only
In addition to the common changes applied in the previous section, the following changes have also been applied to the product-composite service:
The security configuration in the SecurityConfig class has been refined by requiring OAuth 2.0 scopes in the access token in order to allow access:
.pathMatchers(POST, "/product-composite/**")
  .hasAuthority("SCOPE_product:write")
.pathMatchers(DELETE, "/product-composite/**")
  .hasAuthority("SCOPE_product:write")
.pathMatchers(GET, "/product-composite/**")
  .hasAuthority("SCOPE_product:read")
By convention, OAuth 2.0 scopes need to be prefixed with SCOPE_ when checked for authority using Spring Security.
A method, logAuthorizationInfo(), has been added to log relevant parts of the JWT-encoded access token upon each call to the API. The access token can be acquired using the standard Spring Security SecurityContext, which, in a reactive environment, can be acquired using the static helper method ReactiveSecurityContextHolder.getContext(). Refer to the ProductCompositeServiceImpl class for details.
The use of OAuth has been disabled when running Spring-based integration tests. To prevent the OAuth machinery from kicking in when we are running integration tests, it is disabled by a security configuration that is only used by the tests and permits access to all resources; refer to the test classes in the source code for details.
Changes to allow Swagger UI to acquire access tokens
To allow access to the protected APIs from the Swagger UI component, the following changes have been applied in the product-composite service:
The web pages exposed by the Swagger UI component have been configured to be publicly available. The following lines have been added to the SecurityConfig class:
.pathMatchers("/openapi/**").permitAll()
.pathMatchers("/webjars/**").permitAll()
The OpenAPI Specification of the API has been enhanced to require that the security schema security_auth is applied. The following line has been added to the definition of the interface ProductCompositeService in the API project:
@SecurityRequirement(name = "security_auth")
To define the semantics of the security schema security_auth, the class OpenApiConfig has been added to the product-composite project. It looks like this:
@SecurityScheme(
  name = "security_auth", type = SecuritySchemeType.OAUTH2,
  flows = @OAuthFlows(
    authorizationCode = @OAuthFlow(
      authorizationUrl = "${springdoc.oAuthFlow.authorizationUrl}",
      tokenUrl = "${springdoc.oAuthFlow.tokenUrl}",
      scopes = {
        @OAuthScope(name = "product:read", description = "read scope"),
        @OAuthScope(name = "product:write", description = "write scope")
      }
    )))
public class OpenApiConfig {}
From the preceding class definition, we can see:
The security schema will be based on OAuth 2.0
The authorization code grant flow will be used
The required URLs for acquiring an authorization code and access tokens will be supplied by the configuration using the parameters springdoc.oAuthFlow.authorizationUrl and springdoc.oAuthFlow.tokenUrl
A list of scopes (product:read and product:write) that Swagger UI will require to be able to call the APIs
Finally, some configuration is added to application.yml
:
springdoc:
  swagger-ui:
    oauth2-redirect-url: https://localhost:8443/webjars/swagger-ui/oauth2-redirect.html
    oauth:
      clientId: writer
      clientSecret: secret
      useBasicAuthenticationWithAccessCodeGrant: true
  oAuthFlow:
    authorizationUrl: https://localhost:8443/oauth2/authorize
    tokenUrl: https://localhost:8443/oauth2/token
From the preceding configuration, we can see:
The redirect URL that Swagger UI will use to acquire the authorization code.
Its client ID and client secret.
It will use HTTP Basic authentication when identifying itself to the authorization server.
The values of the authorizationUrl
and tokenUrl
parameters, used by the OpenApiConfig
class described above. Note that these URLs are used by the web browser and not by the product-composite
service itself. So they must be resolvable from the web browser.
To allow unprotected access to the Swagger UI web pages, the edge server has also been configured to allow unrestricted access to URLs that are routed to the Swagger UI component. The following is added to the edge server's SecurityConfig
class:
.pathMatchers("/openapi/**").permitAll()
.pathMatchers("/webjars/**").permitAll()
With these changes in place, both the edge server and the product-composite
service can act as OAuth 2.0 resource servers, and the Swagger UI component can act as an OAuth client. The last step we need to take to introduce the usage of OAuth 2.0 and OpenID Connect is to update the test script, so it acquires access tokens and uses them when running the tests.
Changes in the test script
To start with, we need to acquire an access token before we can call any of the APIs, except the health API. This is done, as already mentioned above, using the OAuth 2.0 client credentials flow. To be able to call the create and delete APIs, we acquire an access token as the writer
client, as follows:
ACCESS_TOKEN=$(curl -k https://writer:secret@$HOST:$PORT/oauth2/token -d grant_type=client_credentials -s | jq .access_token -r)
From the preceding command, we can see that it uses HTTP Basic authentication, passing its client ID and client secret as writer:secret@
before the hostname.
To verify that the scope-based authorization works, the following tests have been added to the test script:
assertCurl 401 "curl -k https://$HOST:$PORT/product-composite/$PROD_ID_REVS_RECS -s"
READER_ACCESS_TOKEN=$(curl -k https://reader:secret@$HOST:$PORT/oauth2/token -d grant_type=client_credentials -s | jq .access_token -r)
READER_AUTH="-H \"Authorization: Bearer $READER_ACCESS_TOKEN\""
assertCurl 200 "curl -k https://$HOST:$PORT/product-composite/$PROD_ID_REVS_RECS $READER_AUTH -s"
assertCurl 403 "curl -k https://$HOST:$PORT/product-composite/$PROD_ID_REVS_RECS $READER_AUTH -X DELETE -s"
The test script uses the reader client's credentials to acquire an access token, and the tests work as follows:
The first test calls an API without supplying an access token. The API is expected to return the 401 Unauthorized
HTTP status.
The second test verifies that the reader client can call a read-only API.
The last test calls an updating API using the reader
client, which is only granted a read
scope. A request sent to the delete API is expected to return the 403 Forbidden
HTTP status.
For the full source code, see test-em-all.bash
.
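The assertCurl helper used in the test snippets compares the HTTP status code returned by a command with an expected value. The following is a minimal sketch of how such a helper might be implemented; the actual implementation in test-em-all.bash may differ in its details:

```shell
# Minimal sketch of an assertCurl-style helper; the real test-em-all.bash
# implementation may differ. The command passed in is expected to print an
# HTTP status code (for example, via curl's -w '%{http_code}' option).
assertCurl() {
  local expectedHttpCode="$1"
  local curlCmd="$2"
  local result
  result=$(eval "$curlCmd")
  if [ "$result" = "$expectedHttpCode" ]; then
    echo "Test OK (HTTP Code: $result)"
  else
    echo "Test FAILED, EXPECTED HTTP Code: $expectedHttpCode, GOT: $result"
    return 1
  fi
}

# Offline usage example with a stubbed command in place of a real curl call:
assertCurl 200 "echo 200"
```

Because the command is passed as a string and evaluated, the same helper works for any curl invocation, with or without the $READER_AUTH header shown above.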
With the test script updated to acquire and use OAuth 2.0 access tokens, we are ready to try it out in the next section!
Testing with the local authorization server
In this section we will try out the secured system landscape; that is, we will test all the security components together. We will use the local authorization server to issue access tokens. The following tests will be performed:
First, we build from source and run the test script to ensure that everything fits together.
Next, we will test the protected discovery server's API and web page.
After that, we will learn how to acquire access tokens using OAuth 2.0 client credentials and authorization code grant flows.
With the issued access tokens, we will test the protected APIs. We will also verify that an access token issued for a reader client can't be used to call an updating API.
Finally, we will also verify that Swagger UI can issue access tokens and call the APIs.
Building and running the automated tests
To build and run automated tests, we perform the following steps:
First, build the Docker images from source with the following commands:
cd $BOOK_HOME/Chapter11
./gradlew build && docker-compose build
Next, start the system landscape in Docker and run the usual tests with the following command:
./test-em-all.bash start
Note the new negative tests at the end that verify that we get a 401 Unauthorized
code back when not authenticated, and 403 Forbidden
when not authorized.
Testing the protected discovery server
With the protected discovery server, Eureka, up and running, we have to supply valid credentials to be able to access its APIs and web pages.
For example, asking the Eureka server for registered instances can be done by means of the following curl
command, where we supply the username and password directly in the URL:
curl -H "accept:application/json" https://u:p@localhost:8443/eureka/api/apps -ks | jq -r .applications.application[].instance[].instanceId
A sample response is as follows:
Figure 11.3: Services registered in Eureka using an API call
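The jq expression in the command above navigates the nested structure of the Eureka response. To see what it does, here is an offline sketch that extracts the same instanceId values from a trimmed, made-up response, using grep and cut instead of jq so it runs without a live server:

```shell
# A trimmed, made-up Eureka response in the same shape as the real API returns.
EUREKA_RESPONSE='{"applications":{"application":[{"instance":[{"instanceId":"gateway-1"},{"instanceId":"product-composite-1"}]}]}}'

# Pull out the instanceId values; the real command uses jq for the same purpose.
printf '%s\n' "$EUREKA_RESPONSE" | grep -o '"instanceId":"[^"]*"' | cut -d'"' -f4
```

Each registered service instance appears as one line of output, which is exactly what the jq expression produces against the real server.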
When accessing the web page on https://localhost:8443/eureka/web
, we first have to accept an unsecure connection, since our certificate is self-signed, and next we have to supply valid credentials, as specified in the configuration file (u
as username and p
as password):
Figure 11.4: Eureka requires authentication
Following a successful login, we will see the familiar web page from the Eureka server:
Figure 11.5: Services registered in Eureka using the web page
After ensuring that access to the Eureka server is protected, we will learn how to issue OAuth access tokens.
Acquiring access tokens
Now we are ready to acquire access tokens using grant flows defined by OAuth 2.0. We will first try out the client credentials grant flow, followed by the authorization code grant flow.
Acquiring access tokens using the client credentials grant flow
To get an access token for the writer
client, that is, with both the product:read
and product:write
scopes, issue the following command:
curl -k https://writer:secret@localhost:8443/oauth2/token -d grant_type=client_credentials -s | jq .
The client identifies itself using HTTP Basic authentication, passing its client ID, writer
, and its client secret, secret
.
A sample response is as follows:
Figure 11.6: Sample token response
From the screenshot we can see that we got the following information in the response:
The access token itself.
The scopes granted to the token. The writer
client is granted both the product:write
and product:read
scope. It is also granted the openid
scope, allowing access to information regarding the user's ID, such as an email address.
The type of token we got; Bearer means that the bearer of this token should be given access according to the scopes granted to the token.
The number of seconds that the access token is valid for, 299
seconds in this case.
To get an access token for the reader
client, that is, with only the product:read
scope, simply replace writer
with reader
in the preceding command, resulting in:
curl -k https://reader:secret@localhost:8443/oauth2/token -d grant_type=client_credentials -s | jq .
Acquiring access tokens using the authorization code grant flow
To acquire an access token using the authorization code grant flow, we need to involve a web browser. This grant flow is a bit more complicated in order to make it secure in an environment that is partly unsecure (the web browser).
In the first unsecure step, we will use the web browser to acquire an authorization code that can be used only once, to be exchanged for an access token. The authorization code will be passed from the web browser to a secure layer, for example, server-side code, which can make a new request to the authorization server to exchange the authorization code for an access token. In this secure exchange, the server has to supply a client secret to verify its identity.
Perform the following steps to execute the authorization code grant flow:
To get an authorization code for the reader
client, use the following URL in a web browser that accepts the use of self-signed certificates, for example, Chrome: https://localhost:8443/oauth2/authorize?response_type=code&client_id=reader&redirect_uri=https://my.redirect.uri&scope=product:read&state=35725
.
When asked to log in by the web browser, use the credentials specified in the configuration of the authorization server, u
and p
:
Figure 11.7: Trying out the authorization code grant flow
Next, we will be asked to give the reader
client consent to call the APIs in our name:
Figure 11.8: Authorization code grant flow consent page
After clicking on the Submit Consent button, we will get the following response:
Figure 11.9: Authorization code grant flow redirect page
This might, at first glance, look a bit disappointing. The URL that the authorization server sent back to the web browser is based on the redirect URI specified by the client in the initial request. Copy the URL into a text editor and you will find something similar to the following: https://my.redirect.uri/?code=Yyr...X0Q&state=35725
Great! We can find the authorization code in the redirect URL in the code
request parameter. Extract the authorization code from the code
parameter and define an environment variable, CODE
, with its value:
CODE=Yyr...X0Q
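If you prefer not to copy the code out by hand, a small shell snippet can extract it from the query string. The redirect URL below is a made-up example with the same shape as the one returned above; substitute the URL from your own browser:

```shell
# Extract the authorization code from a redirect URL's query string.
# The URL below is a made-up example; paste in the one from your browser.
REDIRECT_URL='https://my.redirect.uri/?code=Yyr...X0Q&state=35725'
CODE=$(printf '%s' "$REDIRECT_URL" | sed -n 's/.*[?&]code=\([^&]*\).*/\1/p')
echo "$CODE"
```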
Next, pretend you are the backend server that exchanges the authorization code with an access token using the following curl
command:
curl -k https://reader:secret@localhost:8443/oauth2/token \
-d grant_type=authorization_code \
-d client_id=reader \
-d redirect_uri=https://my.redirect.uri \
-d code=$CODE -s | jq .
A sample response is as follows:
Figure 11.10: Authorization code grant flow access token
From the screenshot, we can see that we got similar information in the response as we got from the client credentials flow, with the following exceptions:
Since we used a more secure grant flow, we also got a refresh token
issued
Since we asked for an access token for the reader
client, we only got a product:read
scope, no product:write
scope
To get an authorization code for the writer
client, use the following URL: https://localhost:8443/oauth2/authorize?response_type=code&client_id=writer&redirect_uri=https://my.redirect.uri&scope=product:read+product:write&state=72489
.
To exchange the code for an access token for the writer
client, run the following command:
curl -k https://writer:secret@localhost:8443/oauth2/token \
-d grant_type=authorization_code \
-d client_id=writer \
-d redirect_uri=https://my.redirect.uri \
-d code=$CODE -s | jq .
Verify that the response now also contains the product:write
scope!
Calling protected APIs using access tokens
Now, let's use the access tokens we have acquired to call the protected APIs.
An OaAuth 2.0 access token is expected to be sent in the standard HTTP Authorization
header, where the access token is prefixed with Bearer
.
Run the following commands to call the protected APIs:
First, call an API to retrieve a composite product without a valid access token:
ACCESS_TOKEN=an-invalid-token
curl https://localhost:8443/product-composite/1 -k -H "Authorization: Bearer $ACCESS_TOKEN" -i
It should return the following response:
Figure 11.11: Invalid token results in a 401 Unauthorized response
The error message clearly states that the access token is invalid!
Next, try using the API to retrieve a composite product using one of the access tokens acquired for the reader
client from the previous section:
ACCESS_TOKEN={a-reader-access-token}
curl https://localhost:8443/product-composite/1 -k -H "Authorization: Bearer $ACCESS_TOKEN" -i
Now we will get the 200 OK
status code and the expected response body will be returned:
Figure 11.12: Valid access token results in a 200 OK response
If we try to access an updating API, for example, the delete API, with an access token acquired for the reader
client, the call will fail:
ACCESS_TOKEN={a-reader-access-token}
curl https://localhost:8443/product-composite/999 -k -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE -i
It will fail with a response similar to the following:
Figure 11.13: Insufficient scope results in a 403 Forbidden result
From the error response, it is clear that we are forbidden to call the API since the request requires higher privileges than what our access token is granted.
If we repeat the call to the delete API, but with an access token acquired for the writer
client, the call will succeed with 200 OK
in the response.
The delete operation should return 200
even if the product with the specified product ID does not exist in the underlying database, since the delete operation is idempotent, as described in Chapter 6 , Adding Persistence . Refer to the Adding new APIs section.
If you look into the log output using the docker-compose logs -f product-composite
command, you should be able to find authorization information such as the following:
Figure 11.14: Authorization info in the log output
This information was extracted in the product-composite
service from the JWT-encoded access token; the product-composite
service did not need to communicate with the authorization server to get this information!
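Since the access token is a self-contained JWT, you can decode its payload yourself and inspect the same information. The helper below performs the Base64 URL-safe decoding of the payload part; the sample token at the end is synthetic, so pass in a real access token to look at your own:

```shell
# Print the JSON payload (the second dot-separated part) of a JWT.
# Works offline; no call to the authorization server is needed.
decode_jwt_payload() {
  local payload
  payload=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')
  # JWTs use unpadded base64url encoding, so restore the padding before decoding
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d
}

# Synthetic example token; the header and signature parts are placeholders
SAMPLE_PAYLOAD=$(printf '{"scope":["product:read"]}' | base64 | tr -d '=\n' | tr '/+' '_-')
decode_jwt_payload "header.$SAMPLE_PAYLOAD.signature"
```

Note that decoding the payload does not verify the signature; the resource servers use the authorization server's public keys for that.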
With these tests, we have seen how to acquire an access token with the client credentials and authorization code grant flows. We have also seen how scopes can be used to limit what a client can do with a specific access token, for example, only use it for reading operations.
Testing Swagger UI with OAuth 2.0
In this section, we will learn how to use the Swagger UI component to access the protected API. The configuration described in the Changes in the product-composite service only section above allows us to issue an access token for Swagger UI and use it when calling the APIs from Swagger UI.
To try it out, perform the following steps:
Open the Swagger UI start page by going to the following URL in a web browser: https://localhost:8443/openapi/swagger-ui.html
.
On the start page we can now see a new button, next to the Servers drop-down list, with the text Authorize .
Click on the Authorize button to initiate an authorization code grant flow.
Swagger UI will present a list of scopes that it will ask the authorization server to get access to. Select all scopes by clicking on the link with the text select all and then clicking on the Authorize button:
Figure 11.15: Swagger UI asking for OAuth scopes
You will then be redirected to the authorization server. If you are not already logged in from the web browser used, the authorization server will ask for your credentials as in the Acquiring access tokens using the authorization code grant flow section.
Log in with username u
and password p
.
Next, the authorization server will ask for your consent. Select both scopes and click on the Submit Consent button.
Swagger UI will complete the authorization process by showing information about the completed grant flow. Click on the Close button to get back to the start page:
Figure 11.16: Swagger UI summarizing the OAuth grant flow
Now you can try out the APIs in the same way as described in Chapter 5 , Adding an API Description Using OpenAPI . Swagger UI will add the access token to the requests. If you look closely in the curl command reported below the Responses header, you can find the access token.
This completes the tests we will perform with the local authorization server. In the next section, we will replace it with an external OpenID Connect-compliant provider.
Testing with an external OpenID Connect provider
So, the OAuth dance works fine with an authorization server we control ourselves. But what happens if we replace it with a certified OpenID Connect provider? In theory, it should work out of the box. Let's find out, shall we?
For a list of certified implementations of OpenID Connect, refer to https://openid.net/developers/certified/ . We will use Auth0, https://auth0.com/ , for our tests with an external OpenID provider. To be able to use Auth0 instead of our own authorization server, we will go through the following topics:
Setting up an account with a reader and writer client and a user in Auth0
Applying the changes required to use Auth0 as an OpenID provider
Running the test script to verify that it is working
Acquiring access tokens using the following grant flows:
Client credentials grant flow
Authorization code grant flow
Calling protected APIs using the access tokens acquired from the grant flows
Using the user info endpoint to get more information about a user
Let us go through each of them in the following sections.
Setting up and configuring an account in Auth0
Most of the configuration required in Auth0 will be taken care of by a script that uses Auth0's management API. But we must perform a few manual steps up to the point where Auth0 has created a client ID and client secret we can use to access the management API. Auth0's service is multi-tenant, allowing us to create our own domain of OAuth objects in terms of clients, resource owners, and resource servers.
Perform the following manual steps to sign up for a free account in Auth0 and create a client that we can use to access the management API:
Open the URL https://auth0.com in your browser.
Click on the Sign up button.
Sign up with an email of your choice.
After a successful sign-up, you will be asked to create a tenant domain. Enter the name of the tenant of your choice, in my case: dev-ml.eu.auth0.com
.
Fill in information about your account as requested.
Also, look in your mailbox for an email with the subject Please Verify Your Auth0 Account and use the instructions in the email to verify your account.
Following sign-up, you will be directed to your dashboard with a Getting Started page.
In the menu to the left, click on Applications to get it expanded, then click on APIs to find the management API, Auth0 Management API . This API was created for you during the creation of your tenant. We will use this API to create the required definitions in the tenant.
Click on Auth0 Management API and select the Test tab.
A big button with the text CREATE & AUTHORIZE TEST APPLICATION will appear. Click on it to get a client created that can be used to access the management API.
Once created, a page is displayed with the header Asking Auth0 for tokens from my application . As a final step, we need to give the created client permission to use the management APIs.
Click on the tab Machine to Machine Applications , next to the Test tab.
Here we will find the test client, Auth0 Management API (Test Application) , and we can see that it is authorized to use the management API. If we click on the down arrow next to the Authorized toggle button, a large number of available privileges are revealed.
Click on the All choice and then on the UPDATE button. The screen should look similar to the following screenshot:
Figure 11.17: Auth0 management API client permissions
Click on the CONTINUE button, acknowledging that you now have a very powerful client with access to all management APIs within your tenant.
Now, we just need to collect the client ID and client secret of the created client. The easiest way to do that is to select Applications in the menu to the left (under the main menu choice Applications ) and then select the application named Auth0 Management API (Test Application) . A screen similar to the following should be displayed:
Figure 11.18: Auth0 management API client application information
Open the file $BOOK_HOME/Chapter11/auth0/env.bash
and copy the following values from the screen above:
Domain into the value of the variable TENANT
Client ID into the value of the variable MGM_CLIENT_ID
Client Secret into the value of the variable MGM_CLIENT_SECRET
Complete the values required in the env.bash
file by specifying an email address and password, in the variables USER_EMAIL
and USER_PASSWORD
, of a test user that the script will create for us.
Specifying a password for a user like this is not considered best practice from a security perspective. Auth0 supports enrolling users who will be able to set the password themselves, but it is more involved to set up. For more information, see https://auth0.com/docs/connections/database/password-change . Since this is only used for test purposes, specifying a password like this is OK.
We can now run the script that will create the following definitions for us:
Two applications, reader
and writer
, clients in OAuth terminology
The product-composite
API, a resource server in OAuth terminology, with the OAuth scopes product:read
and product:write
A user, a resource owner in OAuth terminology, that we will use to test the authorization code grant flow
Finally, we will grant the reader
application the scope product:read
, and the writer
application the scopes product:read
and product:write
Run the following commands:
cd $BOOK_HOME/Chapter11/auth0
./setup-tenant.bash
Expect the following output (details removed from the output below):
Figure 11.19: Output from setup-tenant.bash the first time it is executed
Save a copy of the export
commands printed at the end of the output; we will use them multiple times later on in this chapter.
Also, look in your mailbox for the email specified for the test user. You will receive a mail with the subject Verify your email. Use the instructions in the email to verify the test user's email address.
Note that the script is idempotent, meaning it can be run multiple times without corrupting the configuration. If running the script again, it should respond with:
Figure 11.20: Output from setup-tenant.bash the next time it is executed
It can be very handy to be able to run the script again, for example, to get access to the reader's and writer's client ID and client secret.
If you need to remove the objects created by setup-tenant.bash
, you can run the script reset-tenant.bash
.
With an Auth0 account created and configured, we can move on and apply the necessary configuration changes in the system landscape.
Applying the required changes to use Auth0 as an OpenID provider
In this section, we will learn what configuration changes are required to be able to replace the local authorization server with Auth0. We only need to change the configuration for the two services that act as OAuth resource servers, the product-composite
and gateway
services. We also need to change our test script a bit, so that it acquires the access tokens from Auth0 instead of acquiring them from our local authorization server. Let's start with the OAuth resource servers, the product-composite
and gateway
services.
The changes described in this topic have not been applied to the source code in the book's Git repository; you need to make them yourself to see them in action!
Changing the configuration in the OAuth resource servers
As already described, when using an OpenID Connect provider, we only have to configure the base URI to the standardized discovery endpoint in the OAuth resource servers.
In the product-composite
and gateway
projects, update the OIDC discovery endpoint to point to Auth0 instead of to our local authorization server. Make the following change to the application.yml
file in both projects:
Locate the property spring.security.oauth2.resourceserver.jwt.issuer-uri
.
Replace its value with https://${TENANT}/
, where ${TENANT}
should be replaced with your tenant domain name; in my case, it is dev-ml.eu.auth0.com
. Do not forget the trailing /
!
In my case, the configuration of the OIDC discovery endpoint will look like this:
spring.security.oauth2.resourceserver.jwt.issuer-uri: https://dev-ml.eu.auth0.com/
If you are curious, you can see what's in the discovery document by running the following command:
curl https://${TENANT}/.well-known/openid-configuration -s | jq
Rebuild the product-composite
and gateway
services as follows:
cd $BOOK_HOME/Chapter11
./gradlew build && docker-compose up -d --build product-composite gateway
With the product-composite
and gateway
services updated, we can move on and also update the test script.
Changing the test script so it acquires access tokens from Auth0
We also need to update the test script so it acquires access tokens from the Auth0 OIDC provider. This is done by performing the following changes in test-em-all.bash
:
Find the following command:
ACCESS_TOKEN=$(curl -k https://writer:secret@$HOST:$PORT/oauth2/token -d grant_type=client_credentials -s | jq .access_token -r)
Replace it with these commands:
export TENANT=...
export WRITER_CLIENT_ID=...
export WRITER_CLIENT_SECRET=...
ACCESS_TOKEN=$(curl -X POST https://$TENANT/oauth/token \
-d grant_type=client_credentials \
-d audience=https://localhost:8443/product-composite \
-d scope=product:read+product:write \
-d client_id=$WRITER_CLIENT_ID \
-d client_secret=$WRITER_CLIENT_SECRET -s | jq -r .access_token)
Note from the preceding command that Auth0 requires us to specify the intended audience of the requested access token, as an extra layer of security. The audience is the API we plan to call using the access token. Given that an API implementation verifies the audience field, this would prevent the situation where someone tries to use an access token issued for another purpose to get access to an API.
Set the values for the environment variables TENANT
, WRITER_CLIENT_ID
, and WRITER_CLIENT_SECRET
in the preceding commands with the values returned by the setup-tenant.bash
script.
As mentioned above, you can run the script again to acquire these values without risking any negative side effects!
Next, find the following command:
READER_ACCESS_TOKEN=$(curl -k https://reader:secret@$HOST:$PORT/oauth2/token -d grant_type=client_credentials -s | jq .access_token -r)
Replace it with this command:
export READER_CLIENT_ID=...
export READER_CLIENT_SECRET=...
READER_ACCESS_TOKEN=$(curl -X POST https://$TENANT/oauth/token \
-d grant_type=client_credentials \
-d audience=https://localhost:8443/product-composite \
-d scope=product:read \
-d client_id=$READER_CLIENT_ID \
-d client_secret=$READER_CLIENT_SECRET -s | jq -r .access_token)
Note that we only request the product:read
scope and not the product:write
scope here.
Set the values for the environment variables READER_CLIENT_ID
and READER_CLIENT_SECRET
in the preceding commands with the values returned by the setup-tenant.bash
script.
Now the access tokens are issued by Auth0 instead of our local authorization server, and our API implementations can verify the access tokens using information from Auth0's discovery service configured in the application.yml
files. The API implementations can, as before, use the scopes in the access tokens to authorize the client to perform the call to the API, or not.
With this, we have all the required changes in place. Let's run some tests to verify that we can acquire access tokens from Auth0.
Running the test script with Auth0 as the OpenID Connect provider
Now, we are ready to give Auth0 a try!
Run the usual tests, but this time using Auth0 as the OpenID Connect provider, with the following command:
./test-em-all.bash
In the logs, you will be able to find authorization information from the access tokens issued by Auth0. Run the command:
docker-compose logs product-composite | grep "Authorization info"
Expect the following outputs from the command:
From calls using an access token with both the product:read
and product:write
scopes, we will see both scopes listed as follows:
Figure 11.21: Authorization information for the writer client from Auth0 in the log output
From calls using an access token with only the product:read
scope, we will see that only that scope is listed as follows:
Figure 11.22: Authorization information for the reader client from Auth0 in the log output
As we can see from the log output, we now also get information regarding the intended audience for this access token. To strengthen security, we could add a test to our service that verifies that its URL, https://localhost:8443/product-composite
in this case, is part of the audience list. This would, as mentioned earlier, prevent someone from using an access token issued for another purpose to get access to our API.
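As an illustration of such an audience check, the following offline sketch tests whether a decoded token payload's aud claim contains our API's URL. The payload below is a hypothetical sample, not a real Auth0 token, and the grep-based matching is only a rough sketch; a real implementation would parse the JSON properly (for example, with jq or the JWT support in Spring Security):

```shell
# Hypothetical decoded token payload; a real Auth0 token would be decoded first.
PAYLOAD='{"aud":["https://localhost:8443/product-composite","https://dev-ml.eu.auth0.com/userinfo"]}'
EXPECTED_AUD='https://localhost:8443/product-composite'

# Reject the token if the expected audience is not listed in the aud claim.
# A simple string match is used here only for illustration.
if printf '%s' "$PAYLOAD" | grep -q "\"$EXPECTED_AUD\""; then
  echo "audience OK"
else
  echo "audience rejected"
fi
```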
With the automated tests working together with Auth0, we can move on and learn how to acquire access tokens using the different types of grant flow. Let's start with the client credentials grant flow.
Acquiring access tokens using the client credentials grant flow
If you want to acquire an access token from Auth0 yourself, you can do so by running the following command, using the client credentials grant flow:
export TENANT=...
export WRITER_CLIENT_ID=...
export WRITER_CLIENT_SECRET=...
curl -X POST https://$TENANT/oauth/token \
-d grant_type=client_credentials \
-d audience=https://localhost:8443/product-composite \
-d scope=product:read+product:write \
-d client_id=$WRITER_CLIENT_ID \
-d client_secret=$WRITER_CLIENT_SECRET
Set the values for the environment variables TENANT
, WRITER_CLIENT_ID
, and WRITER_CLIENT_SECRET
in the preceding commands with the values returned by the setup-tenant.bash
script.
Following the instructions in the Calling protected APIs using access tokens section, you should be able to call the APIs using the acquired access token.
Acquiring access tokens using the authorization code grant flow
In this section, we will learn how to acquire an access token from Auth0 using the authorization code grant flow. As already described above, we first need to acquire an authorization code using a web browser. Next, we can use server-side code to exchange the authorization code for an access token.
Perform the following steps to execute the authorization code grant flow with Auth0:
To get an authorization code for the default app client, use the following URL in the web browser: https://${TENANT}/authorize?audience=https://localhost:8443/product-composite&scope=openid email product:read product:write&response_type=code&client_id=${WRITER_CLIENT_ID}&redirect_uri=https://my.redirect.uri&state=845361.
Replace ${TENANT}
and ${WRITER_CLIENT_ID}
in the preceding URL with the tenant domain name and writer client ID returned by the setup-tenant.bash
script.
Auth0 should present the following login screen:
Figure 11.23: Authorization code grant flow with Auth0, login screen
Following a successful login, Auth0 will ask you to give the client application your consent:
Figure 11.24: Authorization code grant flow with Auth0, consent screen
The authorization code is now in the URL in the browser, just like when we tried out the authorization code grant flow with our local authorization server:
Figure 11.25: Authorization code grant flow with Auth0, access token
Extract the code and run the following command to get the access token:
CODE=...
export TENANT=...
export WRITER_CLIENT_ID=...
export WRITER_CLIENT_SECRET=...
curl -X POST https://$TENANT/oauth/token \
-d grant_type=authorization_code \
-d client_id=$WRITER_CLIENT_ID \
-d client_secret=$WRITER_CLIENT_SECRET \
-d code=$CODE \
-d redirect_uri=https://my.redirect.uri -s | jq .
Set the values for the environment variables TENANT
, WRITER_CLIENT_ID
, and WRITER_CLIENT_SECRET
in the preceding commands to the values returned by the setup-tenant.bash
script.
Now that we have learned how to acquire access tokens using both grant flows, we are ready to try calling the external API using an access token acquired from Auth0 in the next section.
Calling protected APIs using the Auth0 access tokens
We can use access tokens issued by Auth0 to call our APIs, just like when we used access tokens issued by our local authorization server.
For a read-only API, execute the following command:
ACCESS_TOKEN=...
curl https://localhost:8443/product-composite/1 -k -H "Authorization: Bearer $ACCESS_TOKEN" -i
For an updating API, execute the following command:
ACCESS_TOKEN=...
curl https://localhost:8443/product-composite/999 -k -H "Authorization: Bearer $ACCESS_TOKEN" -X DELETE -i
Since we have requested both scopes, product:read and product:write, both of the preceding API calls are expected to return 200 OK.
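Both calls succeed because the token's scope claim covers both scopes. As a purely offline illustration of that check (the scope string below is hard-coded, standing in for the scope claim of a decoded token), a simple shell sketch could look like this:

```shell
# Stand-in for the "scope" claim of a decoded access token (hard-coded sample).
SCOPES="openid email product:read product:write"

# Check that every scope the two API calls depend on is present.
for required in product:read product:write; do
  case " $SCOPES " in
    *" $required "*) echo "$required: granted" ;;
    *)               echo "$required: MISSING" ;;
  esac
done
```

If either scope were missing from the token, the corresponding API call would be rejected by the resource server rather than returning 200 OK.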
Getting extra information about the user
From the log output in Figures 11.21 and 11.22 in the section Running the test script with Auth0 as the OpenID Connect provider, we could not see any information about the user that initiated the API request. If you want your API implementation to know a bit more about the user, it can call Auth0's userinfo_endpoint. The URL of the user-info endpoint can be found in the response of a request to the OIDC discovery endpoint, as described in the section Changing the configuration in the OAuth resource servers. To get user info related to an access token, make the following request:
export TENANT=...
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://$TENANT/userinfo -s | jq
Set the value of the TENANT environment variable in the preceding commands to the value returned by the setup-tenant.bash script.
Note that this command only applies to access tokens issued using the authorization code grant flow. Access tokens issued using the client credentials grant flow don't contain any user information and will result in an error response if tried.
A sample response is as follows:
Figure 11.26: Requesting extra user information from Auth0
This endpoint can also be used to verify that the user hasn't revoked the access token in Auth0.
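As mentioned above, the user-info URL can be looked up in the OIDC discovery document rather than hard-coded. The following sketch shows that lookup with jq; the discovery document here is a hand-written sample containing only a few fields, and the dev-sample tenant domain is made up for illustration. In practice, you would fetch the real document with `curl -s https://$TENANT/.well-known/openid-configuration`.

```shell
# A hand-written sample of (part of) an OIDC discovery document.
# In practice: DISCOVERY=$(curl -s https://$TENANT/.well-known/openid-configuration)
DISCOVERY='{
  "issuer": "https://dev-sample.eu.auth0.com/",
  "userinfo_endpoint": "https://dev-sample.eu.auth0.com/userinfo",
  "jwks_uri": "https://dev-sample.eu.auth0.com/.well-known/jwks.json"
}'

# Pick out the user-info endpoint with jq.
USERINFO_URL=$(echo "$DISCOVERY" | jq -r .userinfo_endpoint)
echo "$USERINFO_URL"
```

The extracted URL can then be used in the curl command shown earlier, so the code never needs to know anything about the provider beyond its discovery endpoint.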
Wrap up the tests by shutting down the system landscape with the following command:
docker-compose down
This concludes the section where we have learned how to replace the local OAuth 2.0 authorization server with an external alternative. We have also seen how to reconfigure the microservice landscape to validate access tokens using an external OIDC provider.
Summary
In this chapter, we have learned how to use Spring Security to protect our APIs.
We have seen how easy it is to enable HTTPS to prevent third parties from eavesdropping on external communication. With Spring Security, we have also learned that it is straightforward to restrict access to the discovery server, Netflix Eureka, using HTTP Basic authentication. Finally, we have seen how Spring Security simplifies the use of OAuth 2.0 and OpenID Connect, allowing third-party client applications to access our APIs in the name of a user without requiring the user to share credentials with those client applications. We have learned how to set up a local OAuth 2.0 authorization server based on Spring Security, and how to change the configuration so that an external OpenID Connect provider, Auth0, can be used instead.
One concern, however, is how to manage the configuration required. Each microservice instance must be provided with its own configuration, making it hard to get a good overview of the current configuration. Updating configuration that concerns multiple microservices will also be challenging. Added to the scattered configuration is the fact that some of the configuration we have seen so far contains sensitive information, such as credentials or certificates. It seems like we need a better way to handle the configuration for a number of cooperating microservices and also a solution for how to handle sensitive parts of the configuration.
In the next chapter, we will explore the Spring Cloud Config Server and see how it can be used to handle these types of problems.
Questions
What are the benefits and shortcomings of using self-signed certificates?
What is the purpose of OAuth 2.0 authorization codes?
What is the purpose of OAuth 2.0 scopes?
What does it mean when a token is a JWT?
How can we trust the information that is stored in a JWT?
Is it suitable to use the OAuth 2.0 authorization code grant flow with a native mobile app?
What does OpenID Connect add to OAuth 2.0?