Real-World Android by Tutorials

Second Edition · Android 12 · Kotlin 1.6+ · Android Studio Chipmunk

Section I: Developing Real World Apps

Section 1: 7 chapters

4. Data Layer — Network
Written by Ricardo Costeira

Every software application needs data, and Android is no different. In fact, Android apps are almost always heavily dependent on data. That’s why it’s important to organize your data-centric code in its own layer, where you implement both data access and caching.

Creating this layer is a lot of work, so you’ll build yours across two chapters, starting with network access. In this chapter, you’ll learn why you need a data layer and how to:

  • Map data to the domain layer.
  • Connect to a network API.
  • Handle dependencies with Hilt.
  • Create and test network interceptors.

Now, it’s time to jump in.

What Is a Data Layer?

The data layer is where you put the code responsible for interacting with your data sources.

An app can have multiple data sources, and they can change over time. For instance, you might migrate from a REST server to a GraphQL server, or from a Room database to a Realm database (https://www.mongodb.com/realm/mobile/database). These changes only matter to the data-handling logic and shouldn't affect the code that consumes the data.

A data layer has two responsibilities. It:

  • Keeps your data I/O code organized in one place.
  • Creates a boundary between the data sources and their consumers.

The Repository Pattern

One way to create this boundary is by following the repository pattern. This is a popular pattern to use in Android because Google recommends it.

The repository is just an abstraction over the way you access data. It creates a thin layer over data sources — a class that wraps up calls to the objects that do the heavy lifting. While this sounds a bit redundant, it has its purposes. It lets you:

  • Swap data sources without affecting the rest of the app. Swapping sources is rare, but trust me, it happens. :]
  • Create the boundary between the data layer and the other layers that need to operate on data.
  • Orchestrate the different data sources to produce a result the domain expects, while keeping that orchestration logic hidden away.

You already took the first step in creating this boundary by creating the repository contract in the domain layer. You’ll now implement a repository that fulfills that contract. This makes the data layer depend on the domain layer, as the dependency rule demands.
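To make the dependency direction concrete, here's a minimal sketch using hypothetical names — the book's actual contract lives in the domain layer and differs in detail:

```kotlin
// Domain layer: the contract. The names here are illustrative only.
interface AnimalRepository {
    fun getAnimalNames(): List<String>
}

// Data layer: the implementation fulfills the domain contract, so the
// dependency points inward, from data to domain.
class NetworkAnimalRepository(
    private val fetchFromApi: () -> List<String> // stand-in for the real API data source
) : AnimalRepository {
    override fun getAnimalNames(): List<String> = fetchFromApi()
}
```

Consumers hold an `AnimalRepository` reference and never see `NetworkAnimalRepository`, so swapping the data source touches only the data layer.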

Figure 4.1 — Closed Arrows Represent Data Flow. The Open Arrow Means “Implements”.

You can have as many repositories as you want. A popular choice is to have one repository per domain entity type. This is a nice rule of thumb, but in the end, it’s up to you to decide what works best.

For instance, in this app, you’ll use only one repository to deal with both animal and organization entities. The latter just completes the former’s information, for now, so giving it its own repository isn’t worth it.

Before implementing your repository, you need data sources. You’ll start by working with the API. If you haven’t done so already, now’s a good time to look at PetFinder’s documentation at https://www.petfinder.com/developers/v2/docs/, which will help you understand some of the decisions you’ll make in this chapter.

Network Data Models

No, no, calm down, you’re not implementing any more data models! The models are already in the project, but they’re worth taking a look at.

Open the ApiAnimal.kt file in the common.data.api.model package.

You’ll see a bunch of different data classes. The first one is ApiAnimal. It corresponds to Animal in your domain, but is modeled exactly after the information the back end sends. The rest of the classes compose ApiAnimal, so they’re in the same file for convenience.

All classes follow the same building logic, so look at any of them to understand that logic. For instance, take ApiBreeds:

@JsonClass(generateAdapter = true) // 1
data class ApiBreeds(
    @field:Json(name = "primary") val primary: String?, // 2
    @field:Json(name = "secondary") val secondary: String?,
    @field:Json(name = "mixed") val mixed: Boolean?,
    @field:Json(name = "unknown") val unknown: Boolean?
)

Here you can see that:

  1. This annotation decorates every class. The app uses Moshi to parse the JSON from API responses. This annotation lets Moshi know it can create an object of this type from JSON data. Moshi will also automagically create an adapter if you set generateAdapter to true. It’ll then use it to create an instance of the class. Without this parameter, you’ll get a runtime error from Moshi, unless you create the adapter yourself.
  2. There are two different things to notice here. First, the Moshi annotation maps the JSON variable called primary to the code variable called primary. In this case, you didn’t need the annotation because the names are the same. Still, it’s there for consistency’s sake. Second, you used a nullable type. Long story short, never trust your backend. :] Using nullable types ensures that even if something goes wrong and you get unexpected nullable values in the response, the app won’t crash.

Next, you’ll see how to map these DTOs (data transfer objects) into your domain.

Mapping Data to the Domain

There are two typical ways of mapping data to the domain layer. One uses interfaces and independent classes, while the other uses static and/or member functions of the model. Here, you’ll use the former. You’ll try the other option later. :]

In the model package, expand mappers. You’ll see a lot of mappers there already, along with an ApiMapper interface:

interface ApiMapper<E, D> {

  fun mapToDomain(apiEntity: E): D
}

Having all the mappers follow this interface gives you the advantage of decoupling the mapping. This is useful if you have a lot of mappers and want to make sure they all follow the same contract.
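For instance, a mapper for a simple value type might look like the sketch below. `ApiColor` and `Color` are made-up types for illustration — the real project maps types like `ApiBreeds` instead:

```kotlin
// The ApiMapper contract from the project.
interface ApiMapper<E, D> {
    fun mapToDomain(apiEntity: E): D
}

// Hypothetical DTO and domain type, just to show the pattern.
data class ApiColor(val name: String?)
data class Color(val name: String)

class ApiColorMapper : ApiMapper<ApiColor, Color> {
    // Never trust the back end: a null name collapses to an empty string.
    override fun mapToDomain(apiEntity: ApiColor): Color =
        Color(name = apiEntity.name.orEmpty())
}
```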

Now, open ApiAnimalMapper.kt and remove the block comment. The class already has a few delegate methods for value objects and entities, using the appropriate mappers. The only thing missing is to fulfill the interface’s contract, which you’ll do by adding the following code below the add code here comment:

override fun mapToDomain(apiEntity: ApiAnimal): AnimalWithDetails {
  return AnimalWithDetails(
      id = apiEntity.id
        ?: throw MappingException("Animal ID cannot be null"),  // 1
      name = apiEntity.name.orEmpty(), // 2
      type = apiEntity.type.orEmpty(),
      details = parseAnimalDetails(apiEntity), // 3
      media = mapMedia(apiEntity),
      tags = apiEntity.tags.orEmpty().map { it.orEmpty() },
      adoptionStatus = parseAdoptionStatus(apiEntity.status),
      publishedAt =
        DateTimeUtils.parse(apiEntity.publishedAt.orEmpty()) // 4
  )
}

A few things worth noting here:

  1. If the API entity doesn’t have an ID, the code throws a MappingException. You need IDs to distinguish between entities, so you want the code to fail if they don’t exist.
  2. If name in the API entity is null, the code sets the name in the domain entity to empty. Should it, though? Can AnimalWithDetails entities have empty names? That depends on the domain. In fact, mappers are a good place to search for domain constraints. Anyway, for simplicity, assume an empty name is possible.
  3. details is a value object, so the code delegates its creation to an appropriate method. Clean code keeps responsibilities well separated.
  4. DateTimeUtils is a custom object that wraps java.time library calls. parse will throw an exception if it gets an empty string. This is also a domain constraint. There are future plans to order the animal list so the oldest ones in the system appear first, so the date can’t be empty.
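As an example of the kind of logic these parsing helpers hide, a function like parseAdoptionStatus might be sketched as follows. The enum cases and the fallback behavior are assumptions for illustration, not the project's exact code:

```kotlin
enum class AdoptionStatus { UNKNOWN, ADOPTABLE, ADOPTED }

// Falls back to UNKNOWN instead of crashing on unexpected API values —
// again, never trust the back end.
fun parseAdoptionStatus(status: String?): AdoptionStatus =
    AdoptionStatus.values()
        .firstOrNull { it.name.equals(status, ignoreCase = true) }
        ?: AdoptionStatus.UNKNOWN
```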

Now that the mapping is done, you’ll start implementing the API requests.

Connecting to the API With Retrofit

Retrofit is the go-to HTTP client for Android. It allows you to build an HTTP API in record time, even with almost no knowledge about HTTP. It’s especially powerful when coupled with OkHttp, which gives you more control over your requests.

In the api package, open PetFinderApi.kt. Retrofit lets you define your API as an interface. PetFinderApi is empty right now. Which methods should you add?

For now, you’ll focus only on the data needs for the Animals near you feature, leaving Search for later. That way, you’ll see how to develop a feature one layer at a time versus jumping around through the layers.

Animals near you needs to retrieve animal data from the API according to your postal code and the distance you specify. Knowing that, you’ll add the following method to the interface:

@GET(ApiConstants.ANIMALS_ENDPOINT) // 1
suspend fun getNearbyAnimals( // 2
    @Query(ApiParameters.PAGE) pageToLoad: Int, // 3
    @Query(ApiParameters.LIMIT) pageSize: Int,
    @Query(ApiParameters.LOCATION) postcode: String,
    @Query(ApiParameters.DISTANCE) maxDistance: Int
): ApiPaginatedAnimals // 4

Be sure to import Retrofit dependencies. Gradle already knows about them.

In this code:

  1. You tell Retrofit you want to perform a GET request through the @GET annotation, passing in the endpoint for the request.
  2. You add the suspend modifier to the method. A network request is a one-shot operation, so running it in a coroutine fits perfectly.
  3. You specify the request’s parameters through the @Query annotation. For instance, if you’re loading the first page of 20 items, the request will have parameters like page=1&limit=20.
  4. You return ApiPaginatedAnimals, which will map to the domain’s PaginatedAnimals.

The PetFinder server uses OAuth for authentication. OAuth works with access tokens. To get an access token, you have to send an authentication request with your API key and API secret. You then use the token you receive to authenticate your request, sending it as an authorization header.

You need a token for every request, except the authentication request itself. If the token expires, you have to request a new one.

In other words, for each request, you have to:

  1. Store the original request.
  2. Request a token if you don’t have one, or a new token if the current one has expired.
  3. Send a valid token in the header of the original request.

That’s a lot of work! Fortunately, OkHttp has a neat feature that can help: interceptors.

Interceptors

OkHttp lets you manipulate your requests and/or responses through interceptors, which let you monitor, change or even retry API calls.

Figure 4.2 — OkHttp Interceptors

OkHttp allows two types of interceptors:

  • Application interceptors: Act between your code and OkHttp. You’ll probably use these most of the time. They have access to the full request along with the already-processed response, and let you act on that data.
  • Network interceptors: Act between OkHttp and the server. Useful in cases where you have to worry about intermediate responses, like redirects. They give you access to the data in the raw format it’s sent to the server, and to the actual Connection object.
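To build an intuition for how a chain of interceptors behaves, here's a tiny plain-Kotlin model. OkHttp's real Interceptor and Chain types are far richer — this sketch, with made-up names, only mimics the flow: each interceptor can inspect or rewrite the request before delegating onward:

```kotlin
data class FakeRequest(val headers: Map<String, String> = emptyMap())
data class FakeResponse(val code: Int)

fun interface FakeInterceptor {
    fun intercept(request: FakeRequest, proceed: (FakeRequest) -> FakeResponse): FakeResponse
}

// Runs the request through every interceptor in order, then hits the "server".
fun executeChain(
    request: FakeRequest,
    interceptors: List<FakeInterceptor>,
    server: (FakeRequest) -> FakeResponse
): FakeResponse {
    fun proceedFrom(index: Int, req: FakeRequest): FakeResponse =
        if (index == interceptors.size) server(req)
        else interceptors[index].intercept(req) { next -> proceedFrom(index + 1, next) }
    return proceedFrom(0, request)
}
```

In this model, an "authentication" interceptor would copy the request with an extra header before calling proceed — which is essentially what the real one does below.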

Expand the interceptors package inside api. You’ll see three different interceptors already:

  • LoggingInterceptor: Logs request details to Android Studio’s Logcat.
  • NetworkStatusInterceptor: Uses ConnectionManager to check the internet connection, then either throws a custom NetworkUnavailableException or lets the request proceed.
  • AuthenticationInterceptor: Checks for token expiry, then requests a new one, if needed, and stores it. If a valid token already exists, it adds it to the request’s headers.

Before you continue, here are two things to consider about NetworkUnavailableException:

  1. The presentation layer needs to know about it in order to inform the user. However, the dependency rule states that dependencies flow inwards, not sideways. Since the data and presentation layers are at the same level, you want to keep them decoupled. So, the exception is modeled as a domain exception. This might seem awkward, but it’s conceivable for network unavailability to be part of an Android app’s domain. Plus, this keeps your dependencies clean with minimum effort.
  2. It extends IOException. This is where the boundary between the layers starts to blur. It extends IOException because Retrofit only handles IOExceptions. So, if NetworkUnavailableException extends from any other type, the app is likely to crash. This implicitly couples the domain layer to the data layer. If, someday, the app stops using Retrofit in favor of a library that handles exceptions differently, the domain layer will change as well.

You could invert the dependency by creating a domain interface for the exception and implementing it in the data layer, but is the extra code and work really worth it for such a simple case? This kind of situation is common when you’re trying to follow an architectural pattern — you’ll eventually break it for simplicity. :]

You’ll have to weigh the pros and cons of every option, then decide on one. The important thing is not to get stuck in analysis paralysis. You can always change things in the future. Refactoring is part of your job as a developer.

In this case, the decision is simple: It’s unlikely that the project will ever use an HTTP client other than Retrofit, so it should be safe to keep the exception in the domain layer. Even if you do change it, the only domain layer change will be the type your custom exception extends.
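A sketch of what such an exception can look like — the project's actual class may differ in message and details, but the key point is the IOException supertype:

```kotlin
import java.io.IOException

// Extending IOException lets Retrofit handle the exception gracefully
// instead of crashing the app. The default message is an assumption.
class NetworkUnavailableException(
    message: String = "No network available, please check your connection"
) : IOException(message)
```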

AuthenticationInterceptor

Open AuthenticationInterceptor.kt and take a closer look at intercept’s signature:

    override fun intercept(chain: Interceptor.Chain): Response

The method takes in a Chain and returns a Response. Chain is the active chain of interceptors running when the request is ongoing, while Response is the output of the request.

There’s some code missing here that you’ll add to help understand how interceptors work. It’s a fairly complex piece of code, so you’ll add it in parts.

Checking the Token

Delete all the code in the method, then add the following in its place:

val token = preferences.getToken() // 1
val tokenExpirationTime =
  Instant.ofEpochSecond(preferences.getTokenExpirationTime()) // 2
val request = chain.request() // 3

// if (chain.request().headers[NO_AUTH_HEADER] != null) return chain.proceed(request) // 4

Here’s what’s happening in this code:

  1. You get your current token from shared preferences.
  2. You get the token’s expiration time.
  3. You get your current request from the interceptor chain.
  4. This is a special case for requests that don’t need authentication. Say you have a login request, for instance. You can add a custom header to it in the API interface — like NO_AUTH_HEADER — then check if the header exists here. If so, you let the request proceed. You won’t need this logic in this case, but it’s good to be aware of it.
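The expiry comparison at the heart of this logic is plain java.time code. Isolated into a helper — a hypothetical name, for illustration — it looks like this:

```kotlin
import java.time.Instant

// A token stored with an epoch-second expiration time is valid only while
// that instant is still in the future.
fun isTokenStillValid(tokenExpirationEpochSecond: Long, now: Instant = Instant.now()): Boolean =
    Instant.ofEpochSecond(tokenExpirationEpochSecond).isAfter(now)
```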

You might find the access to preferences weird. Typically, a repository mediates between the different data sources, passing the other layers the data they need while keeping the sources unaware of each other.

In this case, though, all the action happens inside the data layer itself. You'd introduce accidental complexity by creating a circular dependency between the API and the repository code. Also, Preferences is an interface, so the implementation details are still decoupled. You must resist convention-triggered over-engineering. :]

Handling Valid Tokens

With that out of the way, add the next block of code below the one you just added:

val interceptedRequest: Request // 1

if (tokenExpirationTime.isAfter(Instant.now())) {
  interceptedRequest =
    chain.createAuthenticatedRequest(token) // 2
} else {

}

return chain
    .proceedDeletingTokenIfUnauthorized(interceptedRequest) // 3

In this code:

  1. You declare a new request value. You’ll assign the authenticated version of the original request to it.
  2. If the token is valid, you create an authenticated request through createAuthenticatedRequest. This function creates a request from the original one and adds an authorization header with the token.
  3. You tell the chain to proceed with your new request. proceedDeletingTokenIfUnauthorized calls proceed on the chain, which does all the HTTP magic and returns a response. If the response has a 401 code, proceedDeletingTokenIfUnauthorized deletes the token.

Good, you have the happy path implemented! As long as you have a valid token, your requests will go through. Now it’s time to cover the cases where the token is invalid or doesn’t exist yet.

Handling Invalid Tokens

Add the following block of code inside the empty else:

val tokenRefreshResponse = chain.refreshToken() // 1

interceptedRequest = if (tokenRefreshResponse.isSuccessful) { // 2
  val newToken = mapToken(tokenRefreshResponse) // 3

  if (newToken.isValid()) { // 4
    storeNewToken(newToken)
    chain.createAuthenticatedRequest(newToken.accessToken!!)
  } else {
    request
  }
} else {
  request
}

This is the most complex part. Here:

  1. You call refreshToken. This function does all the magic of fetching you a new token. It creates a whole new request pointing to the authentication endpoint and adds the necessary API key and secret to its body. It executes the request by calling proceedDeletingTokenIfUnauthorized, returning its response, then stores the response in tokenRefreshResponse.
  2. You set interceptedRequest with the result of the if-else condition. Remember that in Kotlin, if-else is an expression. You check if refreshToken was successful. If not, you return the original request.
  3. If refreshToken is successful, you have a new token to work with. But since the Moshi converter hasn’t run yet, you’re stuck with the JSON version of the response instead of an actual DTO. As such, you call mapToken to get the token DTO, ApiToken. Take a quick peek inside mapToken. This is what you’d have to do for each DTO if Moshi didn’t provide that handy generateAdapter parameter with the @JsonClass annotation. Plus, notice how it returns an invalid token when it can’t parse what comes from the network. This is the null object pattern.
  4. Finally, you check if the new token is valid — in other words, if the DTO values aren’t either NULL or empty. If so, you store the token in shared preferences and call createAuthenticatedRequest with it. If the token is invalid, you set interceptedRequest to the original request, since you still need one.
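The null object pattern mentioned above can be sketched like this. The property names here are assumptions — check the project's ApiToken for the actual shape:

```kotlin
// Instead of returning null when parsing fails, mapToken-style code returns
// an "invalid" token object that callers can interrogate safely.
data class Token(val accessToken: String?, val expiresInSeconds: Long?) {
    fun isValid(): Boolean =
        !accessToken.isNullOrEmpty() && expiresInSeconds != null && expiresInSeconds > 0

    companion object {
        // The null object: a real instance that always reports itself invalid.
        val INVALID = Token(accessToken = null, expiresInSeconds = null)
    }
}
```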

Build and run to make sure everything works. Whew! You now have a way of checking your token validity for every request and refreshing it if necessary. The only thing missing now is to pass the interceptor to the OkHttp instance for Retrofit to use.

You now need to add your interceptor to the dependency graph for the app.

Wiring Up the Interceptor

Expand the data.di package and locate and open the ApiModule.kt file. Focus on the provideOkHttpClient method, for now. This creates the OkHttp instance that Retrofit uses:

fun provideOkHttpClient(httpLoggingInterceptor: HttpLoggingInterceptor): OkHttpClient {
  return OkHttpClient.Builder()
      .addInterceptor(httpLoggingInterceptor)
      .build()
}

As you can see, the code already adds an interceptor. The parameter, HttpLoggingInterceptor, is an OkHttp class. Its instance comes from the method below, provideHttpLoggingInterceptor, which uses the LoggingInterceptor in the interceptors package to log the headers and body of both requests and responses.

Look at the code inside provideOkHttpClient. You use addInterceptor to add application interceptors. For network interceptors, you’d have to use addNetworkInterceptor.

Ordering the Interceptors

There’s an important detail you must consider before adding the other interceptors. Like Retrofit’s type converters, interceptors are called in order. So, if you do something like:

OkHttpClient.Builder()
    .addInterceptor(A)
    .addInterceptor(C)
    .addInterceptor(B)

The interceptors will run in that order: A → C → B.

With this in mind, replace the method with:

fun provideOkHttpClient(
    httpLoggingInterceptor: HttpLoggingInterceptor, // 1
    networkStatusInterceptor: NetworkStatusInterceptor,
    authenticationInterceptor: AuthenticationInterceptor
): OkHttpClient {
  return OkHttpClient.Builder()
      .addInterceptor(networkStatusInterceptor) // 2
      .addInterceptor(authenticationInterceptor) // 3
      .addInterceptor(httpLoggingInterceptor) // 4
      .build()
}

In this code, you add:

  1. The needed dependencies as parameters.
  2. networkStatusInterceptor first. If the device doesn’t have an internet connection, there’s no need to allow the request to go further.
  3. authenticationInterceptor after the network interceptor so the token refresh logic only executes if there’s a connection.
  4. httpLoggingInterceptor, which wraps LoggingInterceptor.

Is it weird to put httpLoggingInterceptor last? Should it be the first one to run, so it can log even authenticationInterceptor’s requests?

Nope! If you add it first, it’ll run while there’s still nothing to log. Interceptors work on the chain they receive, so you want the logging interceptor to get the final chain.

This concludes your work with the interceptors. Well done! The last thing missing before you proceed to tests is dependency management.

Managing API Dependencies With Hilt

Dependency injection is a great way to maintain a decoupled and testable architecture as your project grows in complexity — but it’s hard to do by hand. Using a DI framework like Dagger helps, but then you have to deal with Dagger’s own quirks.

Note: If you want to learn everything about Dagger, Hilt and dependency injection, Dagger By Tutorials, https://www.raywenderlich.com/books/dagger-by-tutorials, is the right place for you.

PetFinder uses Hilt, Google’s Android DI solution. Although it’s built on top of Dagger, it’s a lot easier to use.

Open ApiModule.kt again. Although ApiModule has the word Module in its name and is located in a di package, it’s not a Hilt module… Yet.

You’ll change that next.

Turning ApiModule Into a Hilt Module

Annotate ApiModule with @Module:

@Module
object ApiModule

Build the app and you’ll get a Hilt error. Unlike common Dagger errors, you can actually read and understand it!

The error states that ApiModule is missing an @InstallIn annotation. This relates to one of the best Hilt features. When you use Hilt, you don’t need to create Dagger components.

Hilt generates a hierarchy of predefined components with corresponding scope annotations. These components are tied to Android lifecycles. This makes it a lot easier for you to define the lifetime of your dependencies.

Define the component where you’ll install ApiModule by adding:

@Module
@InstallIn(SingletonComponent::class)
object ApiModule

You’re installing the module in SingletonComponent. This component is the highest in the component hierarchy — all other components descend from it. By installing ApiModule here, you’re saying that any dependency it provides should live as long as the app itself. Also, since each child component can access the dependencies of its parent, you’re ensuring that all other components can access ApiModule.

Defining Dependencies

With the module installed, you now need to define the dependencies it provides. Just like with Dagger, Hilt allows you to inject dependencies with a few annotations:

  • @Inject: Use in class constructors to inject code you own, such as the data mappers.
  • @Provides: Use in modules to inject code you don’t own, like any library instance.
  • @Binds: Use in modules to inject interface implementations when you don’t need initialization code. You’ll see an example later.

In this case, annotate every method with @Provides. For provideApi, add the @Singleton annotation as well:

@Provides
@Singleton
fun provideApi(okHttpClient: OkHttpClient): PetFinderApi

@Provides works as it does in traditional Dagger. @Singleton, on the other hand, is the scope annotation for SingletonComponent. You can only add annotations to a module that match the scope of the component. If you try to use other scope annotations, you’ll get a compile-time error. You won’t get any errors if you try that now though, because your code doesn’t request PetFinderApi yet.

@Singleton ensures that only one instance of PetFinderApi exists during the app’s lifetime. For a stateless class whose job is to make requests and return responses, that makes sense, especially if it’s supposed to work as long as the app lives. Having the @Singleton annotation reveals the intent of the class. Plus, there are two important details about OkHttp that you have to consider:

  • Each OkHttp instance has its own thread pool, which is expensive to create.
  • OkHttp has a request cache on disk. Different OkHttp instances will have different caches.

Of course, in some cases, it makes sense to have more than one instance of OkHttp. For example, if you need to connect with two APIs, you might have two Retrofit interfaces. If the APIs are different to the point where it doesn’t even make sense for them to share a cache, you might choose to have more than one OkHttp instance. In that case, however, you’d also have to distinguish the bindings with qualifiers. In the end, as always, it depends.

As a final note, you might wonder why ApiModule is an object. Well, it could be a class, or even an abstract class. The thing is, if a module only has @Provides and is stateless — as every module should be! — making it an object allows Hilt or, more specifically, Dagger, to provide the dependencies without incurring the costs of creating object instances. All this becomes irrelevant if you’re using R8, because that can turn providers that come from stateless module instances into static ones. Regardless, it’s a good practice.

Build and run to make sure everything works. You’re done with dependency management… For now. :] In fact, you’re almost done with the chapter. There’s only one thing missing: tests!

Testing the Network Code

There are a few things you can test at this point:

  • The data mappers
  • The interceptors

There’s no point in testing the API requests, since you’d be testing Retrofit itself, not your app.

You also won’t test the data mappers here, as testing an interceptor covers the same testing details and more. That doesn’t mean you shouldn’t test them in a real app, though! While most mappers start as simple builders, some evolve to contain real logic. In fact, the Enum mappers already have logic that checks whether the input can be translated into an Enum type.

Anyway, you’ll only test AuthenticationInterceptor. The package structure in test doesn’t exist yet. You’ll use a nifty Android Studio trick to create the whole thing automatically.

Expand api.interceptors, then open AuthenticationInterceptor.kt. Place the cursor on the class name and press Option-Enter on macOS or Alt-Enter on Windows. In the small context menu that appears, click Create test. In the window that opens, choose JUnit4 as the testing library. Finally, in the second window, choose the src/test directory under the unitTest package.

Figure 4.3 — Creating Tests With Android Studio’s Help.

Preparing Your Test

You need to create an instance of AuthenticationInterceptor for testing. Remember, the constructor requires an instance of Preferences. You have three options. You can provide either:

  1. A real Preferences instance using PetSavePreferences.
  2. A fake Preferences instance.
  3. A mock Preferences instance.

Providing a real one is out of the question, since you’d mess with the real shared preferences data. So you need to either fake it or mock it.

Fakes are useful whenever you need the dependency to have some sort of complex state. If that state varies a lot in your tests, it’s much easier to have a fake with a mutating state that you verify as the tests run.

With a mock, you have to define the behavior for each individual test, along with verifying all the calls you expect to happen. For this case, although Preferences is stateful — that is, it reads and writes API token info — you’ll go with a mock just to see how much work it takes, even for simple states.

To test the interceptor, you’ll need to add it to an OkHttp instance. You need a real instance to enqueue a request and run the interceptor on it. Connecting to a real API would make the test slow and flaky, so you’ll use MockWebServer to mock out the API.

Using MockWebServer

MockWebServer lets you test your network code without connecting to a real server. It creates a local web server that goes through the whole HTTP stack. You can use it like any other mocking framework and actually mock server responses.

There’s a mock response in src/debug/assets/networkresponses that mocks a server response for when you request a new token. It’s in the debug folder so instrumented tests can also access it in the future.

To access the file, you have to do some configuration work. Open the app module’s build.gradle. Add the following inside the Android block, just below buildFeatures:

testOptions {
  unitTests {
    includeAndroidResources = true
  }
}

Sync Gradle. Now, your unit tests can access all the resources, assets and manifests. Next, go to the utils package inside the api test package and open JsonReader.kt. You’ll use getJson in the object to read the mocked response in your test.

As you can see, it needs a Context:

val context = InstrumentationRegistry.getInstrumentation().context

In other words, your tests will need access to the Android framework. To avoid having to run them on the emulator, you’ll use Robolectric.

The way to do it is simple. Back in your test class, add the following annotations to the class:

@RunWith(RobolectricTestRunner::class)
class AuthenticationInterceptorTest

Testing

With that out of the way, you can start testing. In AuthenticationInterceptorTest, add the properties you’ll need in your tests:

private lateinit var preferences: Preferences
private lateinit var mockWebServer: MockWebServer
private lateinit var authenticationInterceptor: AuthenticationInterceptor
private lateinit var okHttpClient: OkHttpClient

private val endpointSeparator = "/"
private val animalsEndpointPath =
  endpointSeparator + ApiConstants.ANIMALS_ENDPOINT
private val authEndpointPath =
  endpointSeparator + ApiConstants.AUTH_ENDPOINT
private val validToken = "validToken"
private val expiredToken = "expiredToken"

You’ll test the valid token and expired token use cases. For both tests, you need to start MockWebServer, mock Preferences, and create the interceptor and OkHttp instances. To do so before every test, add the following below the properties:

@Before
fun setup() {
  preferences = mock(Preferences::class.java)

  mockWebServer = MockWebServer()
  mockWebServer.start(8080)

  authenticationInterceptor =
    AuthenticationInterceptor(preferences)
  okHttpClient = OkHttpClient().newBuilder()
      .addInterceptor(authenticationInterceptor)
      .build()
}

Pretty straightforward. @Before ensures that this runs before every test. The method creates the Preferences mock, starts MockWebServer on port 8080 and creates the interceptor and OkHttp instances. You also need to close the server at the end of each test, so add the following method as well:

@After
fun teardown() {
  mockWebServer.shutdown()
}

@After is the reverse of @Before, making this method run after every test.

Writing Your First Test

For the first test, you’ll check the valid token use case. Below teardown, add:

@Test
fun authenticationInterceptor_validToken() {
  // Given

  // When

  // Then
}

Having those comments is a neat way of keeping the code inside tests organized.

Replacing // Given

Next, below // Given, add:

`when`(preferences.getToken()).thenReturn(validToken)
`when`(preferences.getTokenExpirationTime()).thenReturn(
    Instant.now().plusSeconds(3600).epochSecond
)

mockWebServer.dispatcher = getDispatcherForValidToken()

The two when calls set what the mock should return for this test: a valid token and a time in the future when the token will expire. The last line is more interesting. MockWebServer can take a Dispatcher that specifies what to return for each request.
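The reason a future expiration time puts the interceptor on the “valid token” path is that it compares the stored expiration time with the current instant. That check can be sketched with plain java.time — the helper name here is hypothetical; the real check lives inside AuthenticationInterceptor:

```kotlin
import java.time.Instant

// Hypothetical helper mirroring the check the interceptor makes: a token is
// expired when its stored expiration time (in epoch seconds) is in the past.
fun isTokenExpired(expirationEpochSecond: Long, now: Instant = Instant.now()): Boolean =
    now.epochSecond >= expirationEpochSecond
```

With Instant.now().plusSeconds(3600).epochSecond as the stored value, this returns false, so the interceptor keeps the current token — which is the scenario this first test sets up.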

Below your test method, define getDispatcherForValidToken():

private fun getDispatcherForValidToken() = object : Dispatcher() { // 1
  override fun dispatch(request: RecordedRequest): MockResponse {
    return when (request.path) {  // 2
      animalsEndpointPath -> { MockResponse().setResponseCode(200) } // 3
      else -> { MockResponse().setResponseCode(404) } // 4
    }
  }
}

This method:

  1. Returns an anonymous MockWebServer Dispatcher.
  2. Checks for the path the request points to in the dispatch method override.
  3. If the path is the /animals endpoint, the method returns a 200 response code. That’s all you need for this test.
  4. For any other endpoint, it returns a 404 code which means that the resource is not available.

Replacing // When

Back to the test method. Below // When, add the OkHttp call:

okHttpClient.newCall(
  Request.Builder()
      .url(mockWebServer.url(ApiConstants.ANIMALS_ENDPOINT))
      .build()
).execute()

Here, you’re telling OkHttp to make a new request. You use MockWebServer to create the URL for it, passing in the /animals endpoint.

Replacing // Then

Finally, add the verifications below // Then:

val request = mockWebServer.takeRequest() // 1

with(request) { // 2
  assertThat(method).isEqualTo("GET")
  assertThat(path).isEqualTo(animalsEndpointPath)
  assertThat(getHeader(ApiParameters.AUTH_HEADER))
      .isEqualTo(ApiParameters.TOKEN_TYPE + validToken)
}

If the assertThat calls do not automatically resolve, add import com.google.common.truth.Truth.* at the top of the file.

This code:

  1. Awaits the next HTTP request. In this case, there should only be one request to begin with. takeRequest() is a blocking call, so if anything goes wrong and the request never executes, the code will block here. If you want a safety net, MockWebServer also has a takeRequest(timeout, unit) overload that returns null instead of blocking indefinitely.
  2. Scopes the request with with and checks a few of its parameters. The test passes if it’s a GET request, the path points to the /animals endpoint and it carries the authorization header.

Build and run your test. Everything should work!

Writing Your Second Test

Now, you’re ready to write the second test, following the previous format:

@Test
fun authenticationInterceptor_expiredToken() {
  // Given

  // When

  // Then
}

Replacing // Given

The // Given part is similar:

`when`(preferences.getToken()).thenReturn(expiredToken)
`when`(preferences.getTokenExpirationTime()).thenReturn(
    Instant.now().minusSeconds(3600).epochSecond
)

mockWebServer.dispatcher = getDispatcherForExpiredToken()

The difference is that preferences now returns expiredToken and an expired token time. This forces the interceptor to make an authentication request. Also, you’re setting MockWebServer to a different dispatcher.

Below the other dispatcher method, define getDispatcherForExpiredToken() as:

private fun getDispatcherForExpiredToken() = object : Dispatcher() {
  override fun dispatch(request: RecordedRequest): MockResponse {
    return when (request.path) {
      authEndpointPath -> {
        MockResponse()
            .setResponseCode(200)
            .setBody(JsonReader.getJson("networkresponses/validToken.json"))
      }
      animalsEndpointPath -> { MockResponse().setResponseCode(200) }
      else -> { MockResponse().setResponseCode(404) }
    }
  }
}

The difference from the other method is that this one returns a specific response for the authentication endpoint. Not only does it set the response code to 200, it also sets the body to the mocked token response. This allows the interceptor to proceed with the call to the /animals endpoint.
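JsonReader is a small helper from the project’s test sources that loads a JSON file from the test resources. Its behavior can be sketched roughly like this — the root parameter is an addition to keep the sketch self-contained; the project’s version resolves the file from the test classpath:

```kotlin
import java.io.File

// Rough, hypothetical sketch of a JsonReader-style test helper: given a
// relative path, read the matching file under the test resources directory
// and return its contents as a String.
object JsonReader {
    fun getJson(path: String, root: File = File("src/test/resources")): String =
        File(root, path).readText()
}
```

Keeping canned responses like validToken.json in test resources, rather than as inline strings, makes it easy to reuse the same payloads across tests and to keep them in sync with the real API’s response shape.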

Replacing // When

The // When part is exactly the same:

okHttpClient.newCall(
  Request.Builder()
      .url(mockWebServer.url(ApiConstants.ANIMALS_ENDPOINT))
      .build()
).execute()

This is where you’re actually sending the request to the mockWebServer.

Replacing // Then

The largest change is to the // Then part:

val tokenRequest = mockWebServer.takeRequest() // 1
val animalsRequest = mockWebServer.takeRequest() // 2

with(tokenRequest) { // 3
  assertThat(method).isEqualTo("POST")
  assertThat(path).isEqualTo(authEndpointPath)
}

val inOrder = inOrder(preferences) // 4

inOrder.verify(preferences).getToken()
inOrder.verify(preferences).putToken(validToken)

verify(preferences, times(1)).getToken() // 5
verify(preferences, times(1)).putToken(validToken)
verify(preferences, times(1)).getTokenExpirationTime()
verify(preferences, times(1)).putTokenExpirationTime(anyLong())
verify(preferences, times(1)).putTokenType(ApiParameters.TOKEN_TYPE.trim())
verifyNoMoreInteractions(preferences)

with(animalsRequest) { // 6
  assertThat(method).isEqualTo("GET")
  assertThat(path).isEqualTo(animalsEndpointPath)
  assertThat(getHeader(ApiParameters.AUTH_HEADER))
      .isEqualTo(ApiParameters.TOKEN_TYPE + validToken)
}

In this code, you:

  1. Await the next request. Since preferences returns an expired token, the first request coming in should be for a new token.
  2. Wait for the next request. If the code works, after the new token request, there should be a request on the /animals endpoint.
  3. Verify the token request by checking whether it’s a POST request and if it points to the authentication endpoint.
  4. Use Mockito to verify the actions on preferences. You check that getToken is called before putToken(validToken). That should be the normal workflow to invalidate the old token and get a new one.
  5. You use times(1) to check that each of the Preferences methods you expect to be called is called exactly once. Also, verifyNoMoreInteractions(preferences) ensures that no methods other than these are called. Note that putTokenExpirationTime can be called with any long value: the code creates a timestamp at the moment it’s called, so trying to match that exact time here could make the test fail randomly.
  6. Verify the animal request, just as you did in the other test.

If you were to use a fake Preferences instance instead of a mock, you’d only need to verify its final state. In the end, all you care about is that your code has the correct behavior to produce the correct state.

With Mockito, however, it’s easy to get carried away, as in the test above. In no time, you’ll be encoding implementation details in your tests through Mockito. Imagine that, in the future, the way the interceptor interacts with preferences changes but the end result remains the same. Your tests will fail!

Strive to test behavior and state instead of the implementation itself.

Again, mocks can be useful to mock boundary dependencies or objects you don’t own. They just require some discipline to use. In a case like this, a fake would be better. It’s more work at the beginning, but it pays off in the long run.
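To make the trade-off concrete, here’s what such a fake could look like. The interface below is a hypothetical stand-in that mirrors only the methods these tests exercise; the real Preferences contract lives in the project:

```kotlin
// Hypothetical stand-in for the project's Preferences contract, limited to
// the methods the tests in this chapter use.
interface Preferences {
    fun getToken(): String
    fun putToken(token: String)
    fun getTokenExpirationTime(): Long
    fun putTokenExpirationTime(time: Long)
    fun putTokenType(tokenType: String)
}

// An in-memory fake: instead of verifying every call like the Mockito test
// above, you'd run the interceptor and then assert on the fake's final state.
class FakePreferences : Preferences {
    private var token = ""
    private var expirationTime = 0L
    private var tokenType = ""

    override fun getToken(): String = token
    override fun putToken(token: String) { this.token = token }
    override fun getTokenExpirationTime(): Long = expirationTime
    override fun putTokenExpirationTime(time: Long) { expirationTime = time }
    override fun putTokenType(tokenType: String) { this.tokenType = tokenType }
}
```

With this fake, the expired-token test’s verification shrinks to a few state assertions — for instance, that getToken() now returns the new token — and it keeps passing even if the interceptor’s call sequence changes, as long as the end state is right.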

Build and run your tests to make sure everything works. And that’s it! You’re done with the network code at this point. Now that you can connect to an external data source, you need a way of saving the data you retrieve from it. In the next chapter, you’ll dive into caching.

Key Points

  • A data layer keeps your data I/O organized and in one place.
  • The repository pattern is great for abstracting data sources and providing a clear boundary around the data layer.
  • OkHttp’s interceptors are useful to fine tune requests.
  • When properly configured, dependency injection frameworks do a lot of the heavy lifting of managing dependencies for you.
  • MockWebServer allows you to create a test environment that’s close to the real thing.
© 2024 Kodeco Inc.