TypeScript and Async Redux Actions

Note: this is the third entry in a short series about bolstering a Redux application with TypeScript. Part one introduced a simple counter application, which we then dressed up with a React UI. Read on!

We’ve previously put TypeScript together with Redux, then added a toy React application on top of it. What we haven’t considered yet is the world outside. Even toys, however, may need to store their state for safekeeping and recall at a later date. If you’re just looking to see this in action, check out the example project over on github. For the play-by-play, though, read on!


Our Redux application won’t be making direct calls to the API. Instead, we’ll dispatch actions to indicate that certain conditions have happened:

  • an asynchronous API call has been requested (X_REQUEST)
  • an asynchronous API call has succeeded (X_SUCCESS)
  • an asynchronous API call has failed (X_ERROR)

So far, we’re straight out of the excellent redux documentation. The only tricky question is, “how should we represent these actions in TypeScript?” And with TypeScript 2.0, we’re edging closer to a satisfying answer: “as a discriminated union.”

Here’s the idea. When the same string literal is present across multiple types, TypeScript can use it to narrow down the shape of the type:

type Foo = { type: 'FOO', str: string }
type Bar = { type: 'BAR', num: number }

type Action = Foo | Bar

function handle (a: Action): string {
  switch (a.type) {
    case 'FOO':
      return a.str + ' is a string!'
    case 'BAR':
      return a.num.toString() + ' is a number!'
  }
}

The catch–and it’s a big one–is that string literals mean quite a bit of boilerplate upfront. We need to explicitly declare the set of actions (one string literal apiece), we won’t be able to set up dynamic (generic) action creators, and we’ll need to retype the string any time it turns up in a switch statement or conditional expression.

There will probably be ways around this in the future. There may be ways around it now (and if you have one, I would love to hear from you). But for the time being we’re going to be doling out some redundant code.

In return for all of that cutting and pasting (incidentally, something that IDEs are really good at), we’ll gain reasonable static guarantees throughout the async action flow.

Anyway, on to the actions themselves. Now that we’re using discriminated unions, all actions will be attached to a single type. Call it Action. Since we’ve already touched on the three events generally involved in asynchronous actions, we can extend the union describing our actions so far with their implementations.

export type Action =

// UI actions
   { type: 'INCREMENT_COUNTER',
     delta: number }
|  { type: 'RESET_COUNTER' }

// Async actions...
| ({ type: 'SAVE_COUNT_REQUEST',
     request: { value: number } })
| ({ type: 'SAVE_COUNT_SUCCESS',
     request: { value: number },
     response: {} })
| ({ type: 'SAVE_COUNT_ERROR',
     request: { value: number },
     error: Error })

There’s a general structure here that we’ll keep for all “groups” of actions describing asynchronous events.

  • every action includes a request field describing the original request
  • success actions include a response field to hold the asynchronous result
  • error actions include an error field to hold any errors that arise

It may make sense to structure these another way, depending on preference and application, but they should be consistent. As we’ll see in a moment, homogeneity here will simplify matters in other parts of the application.

Before we get there, note that we’ve already built up some of the boilerplate I promised. To keep things tidy as the list of actions grows, we can alias commonly-used types and use intersections to compose them.

type Q<T> = { request: T }
type S<T> = { response: T }
type E = { error: Error }

Here, Q<T> expresses actions containing requests, S<T> expresses those with responses, and E those that contain errors.

We can then add a few aliases for reused types of requests and responses.

type QEmpty = Q<null>
type QValue = Q<{ value: number }>

Here are the SAVE_COUNT_X actions rewritten using the tighter-if-marginally-more-opaque aliases. And since we’ll need them in a moment anyway, here are some additional actions (LOAD_COUNT_X) for comparison’s sake.

export type Action =
// ...
| ({ type: 'SAVE_COUNT_REQUEST' } & QValue)
| ({ type: 'SAVE_COUNT_SUCCESS' } & QValue & S<{}>)
| ({ type: 'SAVE_COUNT_ERROR'   } & QValue & E)

| ({ type: 'LOAD_COUNT_REQUEST' } & QEmpty)
| ({ type: 'LOAD_COUNT_SUCCESS' } & QEmpty & S<{ value: number }>)
| ({ type: 'LOAD_COUNT_ERROR'   } & QEmpty & E)

Async Action Creators

There’s an obvious relationship between the SAVE_COUNT_X actions, but we haven’t yet made it explicit to the type system. Let’s fix that.

export type ApiActionGroup<_Q, _S> = {
  request: (q?: _Q)         => Action & Q<_Q>
  success: (s: _S, q?: _Q)  => Action & Q<_Q> & S<_S>
  error: (e: Error, q?: _Q) => Action & Q<_Q> & E
}

export const saveCount: ApiActionGroup<{ value: number }, {}> = {
  request: (request) =>
    ({ type: 'SAVE_COUNT_REQUEST', request }),
  success: (response, request) =>
    ({ type: 'SAVE_COUNT_SUCCESS', request, response }),
  error: (error, request) =>
    ({ type: 'SAVE_COUNT_ERROR',   request, error }),
}

export const loadCount: ApiActionGroup<null, { value: number }> = {
  request: (request) =>
    ({ type: 'LOAD_COUNT_REQUEST', request: null }),
  success: (response, request) =>
    ({ type: 'LOAD_COUNT_SUCCESS', request: null, response }),
  error: (error, request) =>
    ({ type: 'LOAD_COUNT_ERROR',   request: null, error }),
}

Now, when we need to reference async actions, we can find them conveniently grouped within the saveCount and loadCount action groups.
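
Dispatching one of these grouped creators looks like any other dispatch. A quick sketch, assuming a store already configured as in the earlier entries of this series:

store.dispatch(saveCount.request({ value: 3 }))
store.dispatch(loadCount.request())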

We’re unfortunately packing on even more boilerplate. The action creators must return well-formed Actions using explicit string-literal types. We could conceivably work around this using dynamic action creators and a few generic-type hijinks, but as the code here is relatively simple and cut-and-paste operations are relatively cheap, it may not be worth the trouble.
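
For the curious, here is roughly what such a factory might look like. It is a sketch only, with a hypothetical apiActionGroup name, and it illustrates the problem as much as the workaround: the computed type strings widen to plain string, losing the literal types the Action union depends on.

function apiActionGroup<_Q, _S> (prefix: string) {
  return {
    request: (request?: _Q) =>
      ({ type: prefix + '_REQUEST', request }),
    success: (response: _S, request?: _Q) =>
      ({ type: prefix + '_SUCCESS', request, response }),
    error: (error: Error, request?: _Q) =>
      ({ type: prefix + '_ERROR', request, error }),
  }
}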


Our API will normally exist outside the redux application, or at least be the subject of a parallel design conversation. The persistence API for the counter has no surprises, however, so we’ve put off building it until now. We can use localStorage to achieve per-session persistence but still expose the same sort of promise-based API we might use to fetch data from a REST API or GraphQL server.

// ./api.ts
export const api = {

  // Save counter state
  save: (counter: { value: number }): Promise<null> => {
    localStorage.setItem('__counterValue', counter.value.toString())
    return Promise.resolve(null)
  },

  // Load counter state
  load: (): Promise<{ value: number }> => {
    const value = parseInt(localStorage.getItem('__counterValue'), 10)
    return Promise.resolve({ value })
  },
}

We’re leaning pretty heavily on the counter’s “toy” status to excuse the lack of validation, error handling, and formal request/response types. But, turning the focus back to the redux application, we now have something to call.


Remember how the core application only dispatches actions, never interacting directly with the API? That’s a job for middleware. This is where we’ll finally see the awesomeness of those discriminated unions at work–and even make some API requests while we’re at it.

We’ll start with the code:

// ./middleware/index.ts
import * as redux from 'redux'

import { api } from '../api'

import {
  Action,
  loadCount,
  saveCount,
} from '../actions'

export const apiMiddleware = ({ dispatch }: redux.MiddlewareAPI<any>) =>
  (next: redux.Dispatch<any>) =>
    (action: Action) => {
      switch (action.type) {
        case 'SAVE_COUNT_REQUEST':
          api.save(action.request)
            .then(() => dispatch(saveCount.success({}, action.request)))
            .catch((e) => dispatch(saveCount.error(e, action.request)))
          break

        case 'LOAD_COUNT_REQUEST':
          api.load()
            .then(({ value }) => dispatch(loadCount.success({ value }, action.request)))
            .catch((e) => dispatch(loadCount.error(e, action.request)))
          break
      }

      return next(action)
    }

This switch can be dressed up quite a bit; it may even go away entirely. We could build up a map of action groups to api methods, for instance, and iterate over them each time the middleware is called. But it helps to illustrate some of the magic that all of our legwork has been building towards.
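
Here’s a rough sketch of that direction. The handlers map below is an illustration rather than part of the example project, and it trades away some of the union’s narrowing by typing its values loosely:

const handlers: { [type: string]: (action: any) => Promise<Action> } = {
  SAVE_COUNT_REQUEST: (action) =>
    api.save(action.request)
      .then(() => saveCount.success({}, action.request))
      .catch((e) => saveCount.error(e, action.request)),
  LOAD_COUNT_REQUEST: (action) =>
    api.load()
      .then(({ value }) => loadCount.success({ value }, action.request))
      .catch((e) => loadCount.error(e, action.request)),
}

// ...and inside the middleware:
const handler = handlers[action.type]
if (handler) {
  handler(action).then(dispatch)
}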

Before, if we wanted to extract an action’s payload, we would need to switch on its type string and then assert the corresponding type:

// No longer needed!

type SaveCountRequestAction = {
  request: { value: number }
}

// ...
  api.save({ value: (action as SaveCountRequestAction).request.value })
    // ...

This required us to re-establish the association between the action’s type and various other fields: redundant, intuitively unnecessary, and prone to fat fingers.

But using the union type, we can now reference action.request without a type assertion! Both SAVE_COUNT_REQUEST and LOAD_COUNT_REQUEST actions have .request fields, and because TypeScript narrows the union in each case based on the action’s type, it also recognizes the shape of the corresponding action. If we wanted to, we could even extract the { value } attached to SAVE_COUNT_REQUEST without complaint from the compiler:

  const { value } = action.request
  api.save({ value })
    // ...

Pretty cool, eh?


From there on out, things go back to normal. We map the actions’ dispatches to components, add reducers to update state, and map changes back to the components. The humdrum details are all there in the example project.
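
For completeness, here’s a minimal reducer sketch in the same spirit. The CounterState shape and initial value are assumptions for illustration; the real reducer lives in the example project:

type CounterState = { value: number }

export function counter (state: CounterState = { value: 0 }, action: Action): CounterState {
  switch (action.type) {
    case 'INCREMENT_COUNTER':
      return { value: state.value + action.delta }
    case 'RESET_COUNTER':
      return { value: 0 }
    case 'LOAD_COUNT_SUCCESS':
      return { value: action.response.value }
    default:
      return state
  }
}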

But we’ve seen some pretty good stuff! We’ve set up asynchronous actions, and–in exchange for some boilerplate–gained reasonable static assurance that both the middleware and the core application have a handle on their shape and structure. The counter’s still just a toy, but the same strategies outlined here can be (in fact are being) used in much more sophisticated applications. Not a bad day’s work.

The completed counter project is available for reference on github: build, code, tests, and all. And I’m looking forward to your suggestions, experiences, and feedback over on twitter!

Happy dev-ing!

Posted 9/25/2016

RJ Zaworski writes, speaks, and advocates for sustainable development from beautiful Portland, Oregon.

Rocking the Whiteboard

I used to tour a workshop on technical interviewing around the Bay Area developer bootcamps. At first it opened with, “I’m RJ, and I’m here to talk about technical interviewing”, but that sucked and I threw it out. The next time, I brought in a short exercise to set the scene instead. It starts off innocuously enough:

Take a few minutes to write a program to shuffle a deck of cards.

If I’m lucky and participants are maintaining a reasonable blood:caffeine balance, they’ll fire back with a few clarifying questions.

  • What sort of deck?
  • Does it matter how it’s structured?
  • Should I optimize for X?

I’m happy to answer–52 cards are fine, dev’s choice, and maybe later. Five minutes of furious scribbling ensue before it’s pencils down and we check to see how everyone did.

Thumbs up if you’re feeling pretty good about your answer; thumbs down if it’s not so good?

A few thumbs go up, many waver in the middle, and many others point straight down. The tyranny of averages, but call it “not good.”

And why not?

  • You put us under pressure.
  • You didn’t tell us what you wanted.
  • You didn’t give us enough time.

And that’s exactly what I did! I’ve just done my best to make everyone feel like Will Smith. No introduction. No guidance. Not even the foggiest idea of why the test exists, or what it’s trying to achieve. I’m sure there are worse ways to do it. I’m not sure what they are.

The card-shuffling exercise is uncomfortable, high-pressure, vague, and–from the interviewer’s perspective–operationally useless. A deck-shuffling program (while convenient for a short workshop) has little to no utility on its own. We’re live-programming for the process, remember, and a cold test in isolated silence won’t tell us anything about it.

We spend the next few minutes talking about the hiring arc–what the stages are, who’s involved, what to expect, how to prepare, and so on–before getting back to the cards.

See, even if whiteboards are out of vogue, many teams still expect to see technical candidates program. Portfolios are great tools for representing overall ability, but they’re not as good at conveying process or the approach a candidate brings to new, less-familiar problems. Call it live-coding, pairing, or a take-home exercise. No matter the name–and no matter how others feel about it–the reality is that programming exercises maintain a significant presence in tech companies’ hiring processes.

So we take a second look at the cards.

Take Two

This time, we’ll focus on fixing the issues that were raised after our abortive first attempt. There’s more than one way to do this, but I’m particularly fond of a framework I stole from TDD. It should look pretty familiar:

  1. Write the spec – clarify problem and assumptions

  2. Propose a naive solution – discuss approach, revising assumptions as needed

  3. Implement it – take the spec’s ‘givens’ as input and return the expected value

  4. Validate – run the function (interpret yourself, if you have to) against the spec and fix any errors

Red, green, and–we’ll get to refactoring. But that’s the outline, and we spend the next part of the workshop doing just that.

1. Write the spec

It seems silly for something so simple as a deck of cards, but a spec ensures that our view of the world matches what the interviewer wants to see.

  • given: a deck of 52 cards
  • expect: the same deck with a different order

There are some assumptions, too, which we’ll also enumerate:

  • we can define our own data-structure (an array [0..51])
  • we can safely overlook the slim chance of a deck being shuffled back to its starting order

2. Propose a naive solution

Take a moment to think. If a first-pass solution isn’t forthcoming, it may be worth revisiting assumptions to try and simplify the problem. Once you’re ready with a naive answer, though, start talking through it:

I could shuffle by taking random cards out of the input deck and pushing them to the end of an output deck.

Fair enough! We can leave potential improvements for later; first, let’s get things working.

3. Implement it

Wrapping the spec up as a function will make it painfully clear what we’re out to solve, so let’s do that:

function shuffleCards (deck = []) {
  var newDeck = [];

  // leave space for the solution!

  return newDeck;
}

We have our input, our output, and plenty of room to act out the naive solution. Let’s add it:

function shuffleCards (deck = []) {
  var newDeck = [];
  while (deck.length > 0) {
    newDeck.push(spliceRandom(deck));
  }

  return newDeck;
}

We haven’t defined what spliceRandom is up to, but depending on whether the interviewer wants to see it, we may get away with an obvious name and the assumption that it will do what it claims to do. The shuffling itself is implemented–we just need to make sure that it works.
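
If pressed, one possible spliceRandom might look something like this (sketched here for completeness; the workshop usually leaves it as a named assumption):

// remove and return a random card from the deck
function spliceRandom (deck) {
  var index = Math.floor(Math.random() * deck.length);
  return deck.splice(index, 1)[0];
}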

4. Validate against the spec

Stepping through the implementation and comparing its behavior to the spec can help catch bugs or syntax errors. This is what interpreters are made for, but even on a whiteboard we can still interpret the code by hand. For instance,

function shuffleCards (deck = [1, 2, 3]) {  // [1, 2, 3]
  var newDeck = [];                         // []
  while (deck.length > 0) {                 // 3 > 0
    newDeck.push(spliceRandom(deck));       // [2]; deck == [1, 3]
  }
  return newDeck;
}

If anything failed, we would fix it and try again. But this looks pretty good: iterating through the loop twice more, we’ll see deck gradually empty and newDeck fill up with randomly-ordered cards. Given a deck (check) expect the same deck with a different order (check).

5. Next steps

We have a naive solution, but it’s likely not the only answer. Next, we could optimize the solution in time and space; we could discuss concepts like functional programming or design considerations around mutable data; or we could even dive into more complicated use-cases or data-structures.
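
As one sketch of where the optimization conversation might go (not something the workshop requires), an in-place Fisher-Yates shuffle trades the repeated splicing for a single pass; the shuffleInPlace name here is just for illustration:

function shuffleInPlace (deck = []) {
  // walk backwards, swapping each card with a random earlier (or same) position
  for (var i = deck.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var temp = deck[i];
    deck[i] = deck[j];
    deck[j] = temp;
  }
  return deck;
}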

There’s always more to talk about.

Wrapping up

The second pass is always more fun than the first. Workshop participants throw out ideas, challenge assumptions, validate solutions, and engage much more like they would within a real team. And that’s really the goal–not to challenge technical ability, but to ferret out what working together is really like. How do you approach new problems? What challenges are you able to recognize preemptively, and what challenges are you able to overcome later? It’s not technical ability–your portfolio can speak to that–it’s your thinking, approach, attitude, and communication.

So have a conversation! Whether it’s a whiteboard, a pairing session, or a take-home, talk through, think through, and communicate your process. The answer’s worth something, of course, but the Internet is full of answers: far more interesting is how you get there.

Posted 9/6/2016

