At this point in the book, you now have all the raw concepts in place for the foundations of FP that I call "Functional-Light Programming". In this chapter, we're going to apply these concepts to a different context, but we won't really present particularly new ideas.
So far, almost everything we've done is synchronous, meaning that we call functions with immediate inputs and immediately get back output values. A lot of work can be done this way, but it's not nearly sufficient for the entirety of a modern JS application. To be truly ready for FP in the real world of JS, we need to understand async FP.
Our goal in this chapter is to expand our thinking about managing values with FP, to spread out such operations over time. We'll see that Observables (and Promises!) are one great way to do that.
The most complicated state in your entire application is time. That is, it's far easier to manage state when the transition from one state to another is immediate and affirmatively in your control. When the state of your application changes implicitly in response to events spread out over time, management becomes exponentially more difficult.
Every part of how we've presented FP in this text has been about making code easier to read by making it more trustable and more predictable. When you introduce asynchrony to your program, those efforts take a big hit.
But let's be more explicit: it's not the mere fact that some operations don't finish synchronously that is concerning; firing off asynchronous behavior is easy. It's the coordination of the responses to these actions, each of which has the potential to change the state of your application, that requires so much extra effort.
So, is it better for you the author to take that effort, or should you just leave it to the reader of your code to figure out what the state of the program will be if A finishes before B, or vice versa? That's a rhetorical question but one with a pretty concrete answer from my perspective: to have any hope of making such complex code more readable, the author has to take a lot more care than they normally would.
One of the most important outcomes of async programming patterns is simplifying state change management by abstracting out time from our sphere of concern. To illustrate, let's first look at a scenario where a race condition (aka, time complexity) exists, and must be manually managed:
var customerId = 42;
var customer;

lookupCustomer( customerId, function onCustomer(customerRecord){
    var orders = customer ? customer.orders : null;

    customer = customerRecord;

    if (orders) {
        customer.orders = orders;
    }
} );

lookupOrders( customerId, function onOrders(customerOrders){
    if (!customer) {
        customer = {};
    }

    customer.orders = customerOrders;
} );
The onCustomer(..) and onOrders(..) callbacks are in a binary race condition. Assuming they both run, it's possible that either might run first, and it's impossible to predict which will happen.
If we could embed the call to lookupOrders(..) inside of onCustomer(..), we'd be sure that onOrders(..) was running after onCustomer(..). But we can't do that, because we need the two lookups to occur concurrently.
So to normalize this time-based state complexity, we use a pairing of if-statement checks in the respective callbacks, along with an outer lexically closed over variable customer. When each callback runs, it checks the state of customer, and thus determines its own relative ordering; if customer is unset for a callback, it's the first to run; otherwise it's the second.
This code works, but it's far from ideal in terms of readability. The time complexity makes this code harder to read. Let's instead use a JS Promise to factor time out of the picture:
var customerId = 42;

var customerPromise = lookupCustomer( customerId );
var ordersPromise = lookupOrders( customerId );

customerPromise.then( function onCustomer(customer){
    ordersPromise.then( function onOrders(orders){
        customer.orders = orders;
    } );
} );
The onOrders(..) callback is now inside of the onCustomer(..) callback, so their relative ordering is guaranteed. The concurrency of the lookups is accomplished by making the lookupCustomer(..) and lookupOrders(..) calls separately before specifying the then(..) response handling.
It may not be obvious, but there would otherwise inherently be a race condition in this snippet, were it not for how Promises are defined to behave. If the lookup of the orders finishes before ordersPromise.then(..) is called to provide an onOrders(..) callback, something needs to be smart enough to keep that orders list around until onOrders(..) can be called. In fact, the same concern could apply to customer being present before onCustomer(..) is specified to receive it.
That something is the same kind of time complexity logic we discussed with the previous snippet. But we don't have to worry about any of that complexity, either in the writing of this code or -- more importantly -- in the reading of it, because the promises take care of that time normalization for us.
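As an aside, since this particular snippet only needs both values together at the end, the same coordination could also be expressed with the built-in Promise.all(..). This is just an alternative sketch, not something the rest of the chapter relies on:

Promise.all( [ customerPromise, ordersPromise ] )
.then( function onResults([ customer, orders ]){
    // both lookups have finished by the time we get here,
    // regardless of which one finished first
    customer.orders = orders;
} );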
A Promise represents a single (future) value in a time-independent manner. Moreover, extracting the value from a promise is the asynchronous form of the synchronous assignment (via =) of an immediate value. In other words, a promise spreads an = assignment operation out over time, but in a trustable (time-independent) fashion.
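To make that concrete, here's a tiny sketch using only the standard Promise.resolve(..) and then(..); the value 42 and the variable names are just for illustration:

// immediate assignment: the value is there right now
var answer = 42;
console.log( answer );                      // 42

// promise "assignment": the value shows up whenever it's
// ready, but consuming it is just as trustable
var answerPromise = Promise.resolve( 42 );
answerPromise.then( function onAnswer(v){
    console.log( v );                       // 42
} );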
We'll now explore how we similarly can spread various synchronous FP operations from earlier in this book asynchronously over time.
Eager and lazy in the realm of computer science aren't compliments or insults, but rather ways to describe whether an operation will finish right away or progress over time.
The FP operations that we've seen in this text can be characterized as eager because they operate synchronously (right now) on a discrete immediate value or list/structure of values.
Recall:
var a = [1,2,3];

var b = a.map( v => v * 2 );

b;          // [2,4,6]
This mapping from a to b is eager because it operates on all the values in the a array at that moment, and produces a new b array. If you later modify a (for example, by adding a new value to the end of it), nothing will change about the contents of b. That's eager FP.
But what would it look like to have a lazy FP operation? Consider something like this:
var a = [];
var b = mapLazy( a, v => v * 2 );
a.push( 1 );
a[0]; // 1
b[0]; // 2
a.push( 2 );
a[1]; // 2
b[1]; // 4
The mapLazy(..) we've imagined here essentially "listens" to the a array, and every time a new value is added to the end of it (with push(..)), it runs the v => v * 2 mapping function and pushes the transformed value to the b array.
Note: The implementation of mapLazy(..) has not been shown because this is a fictional illustration, not a real operation. To accomplish this kind of lazy operation pairing between a and b, we'll need something smarter than basic arrays.
Consider the benefits of being able to pair an a and b together, where any time (even asynchronously!) you put a value into a, it's transformed and projected to b. That's the same kind of declarative FP power from a map(..) operation, but now it can be stretched over time; you don't have to know all the values of a right now to set up the mapping from a to b.
To understand how we could create and use a lazy mapping between two sets of values, we need to abstract our idea of list (array) a bit. Let's imagine a smarter kind of array, not one which simply holds values but one which lazily receives and responds (aka "reacts") to values. Consider:
var a = new LazyArray();

var b = a.map( function double(v){
    return v * 2;
} );

setInterval( function everySecond(){
    a.push( Math.random() );
}, 1000 );
So far, this snippet doesn't look any different than a normal array. The only unusual thing is that we're used to the map(..) running eagerly and immediately producing a b array with all the currently mapped values from a. The timer pushing random values into a is strange, since all those values are coming after the map(..) call.
But this fictional LazyArray is different; it assumes that values will come one at a time, over time; just push(..) values in whenever you want. b will be a lazy mapping of whatever values eventually end up in a.
Also, we don't really need to keep values in a or b once they've been handled; this special kind of array only holds a value as long as it's needed. So these arrays don't strictly grow in memory usage over time, an important characteristic of lazy data structures and operations. In fact, it's less like an array and more like a buffer.
A normal array is eager in that it holds all of its values right now. A "lazy array" is an array where the values will come in over time.
Since we won't necessarily know when a new value has arrived in a, another key thing we need is to be able to listen to b to be notified when new values are made available. We could imagine a listener like this:
b.listen( function onValue(v){
    console.log( v );
} );
b is reactive in that it's set up to react to values as they come into a. There's an FP operation map(..) that describes how each value transfers from the origin a to the target b. Each discrete mapping operation is exactly how we modeled single-value operations with normal synchronous FP, but here we're spreading out the sourcing of values over time.
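If it helps to see how the pieces might hang together, here's one plausible minimal sketch of the fictional LazyArray, supporting only push(..), map(..), and listen(..). This is purely illustrative, not a real or complete implementation:

function LazyArray() {
    var listeners = [];

    return {
        push(v){
            // notify every listener of the new value;
            // nothing is stored, so memory doesn't grow over time
            listeners.forEach( function each(listenerFn){
                listenerFn( v );
            } );
        },
        map(mapperFn){
            var target = new LazyArray();

            // whenever a value arrives here, push its mapped
            // counterpart into the target lazy array
            this.listen( function onValue(v){
                target.push( mapperFn( v ) );
            } );

            return target;
        },
        listen(listenerFn){
            listeners.push( listenerFn );
        }
    };
}

This sketch glosses over plenty (it silently drops values that arrive before any listener is attached, and it has no filter(..), reduce(..), etc.), but it shows the core reactive idea: each push(..) into a flows through the mapping function and shows up on b.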
Note: The term most commonly applied to these concepts is Functional Reactive Programming (FRP). I'm deliberately avoiding that term because there's some debate as to whether FP + Reactive genuinely constitutes FRP. We're not going to fully dive into all the implications of FRP here, so I'll just keep calling it reactive FP. Alternatively, you could call it evented-FP if that feels less confusing.
We can think of a as producing values and b as consuming them. So for readability, let's reorganize this snippet to separate the concerns into producer and consumer roles:
// producer:

var a = new LazyArray();

setInterval( function everySecond(){
    a.push( Math.random() );
}, 1000 );


// **************************


// consumer:

var b = a.map( function double(v){
    return v * 2;
} );

b.listen( function onValue(v){
    console.log( v );
} );
a is the producer, which acts essentially like a stream of values. We can think of each value arriving in a as an event. The map(..) operation then triggers a corresponding event on b, which we listen(..) to so we can consume the new value.
The reason we separate the producer and consumer concerns is so that different parts of our application can be responsible for each concern. This code organization can drastically improve both code readability and maintenance.
We're being very careful about how we introduce time into the discussion. Specifically, just as promises abstract time away from our concern for a single asynchronous operation, reactive FP abstracts (separates) time away from a series of values/operations.
From the perspective of a (the producer), the only evident time concern is our manual setInterval(..) loop. But that's only for demonstration purposes.
Imagine a could actually be attached to some other event source, like the user's mouse clicks or keystrokes, websocket messages from a server, etc. In that scenario, a doesn't actually have to concern itself with time. It's merely a time-independent conduit for values, whenever they are ready.
From the perspective of b (the consumer), we do not know or care when/where the values in a come from. As a matter of fact, all the values could already be present. All we care about is that we want those values, whenever they are ready. Again, this is a time-independent (aka lazy) modeling of the map(..) transformation operation.
The time relationship between a and b is declarative (and implicit!), not imperative (or explicit).
The value of organizing such operations-over-time this way may not be apparent yet. Let's compare it to how this same sort of functionality could have been expressed imperatively:
// producer:

var a = {
    onValue(v){
        b.onValue( v );
    }
};

setInterval( function everySecond(){
    a.onValue( Math.random() );
}, 1000 );


// **************************


// consumer:

var b = {
    map(v){
        return v * 2;
    },
    onValue(v){
        v = this.map( v );
        console.log( v );
    }
};
It may seem rather subtle, but there's an important difference between this more-imperative version of the code and the previous more-declarative version, aside from just b.onValue(..) needing to call this.map(..) itself. In the former snippet, b pulls from a, but in the latter snippet, a pushes to b. In other words, compare b = a.map(..) to b.onValue(v).
In the latter imperative snippet, it's not clear (readability wise) from the consumer's perspective where the v values are coming from. Moreover, the imperative hard coding of b.onValue(..) in the middle of producer a's logic is a violation of separation-of-concerns. That can make it harder to reason about producer and consumer independently.
By contrast, in the former snippet, b = a.map(..) declares that b's values are sourced from a, and treats a as an abstract event stream data source that we don't have to concern ourselves with at that moment. We declare that any value that comes from a into b will go through the map(..) operation as specified.
For convenience, we've illustrated this notion of pairing a and b together over time via a one-to-one map(..)ing. But many of our other FP operations could be modeled over time as well.
Consider:
var b = a.filter( function isOdd(v){
    return v % 2 == 1;
} );

b.listen( function onlyOdds(v){
    console.log( "Odd:", v );
} );
Here, a value from a only comes into b if it passes the isOdd(..) predicate.
Even reduce(..) can be modeled over time:
var b = a.reduce( function sum(total,v){
    return total + v;
} );

b.listen( function runningTotal(v){
    console.log( "New current total:", v );
} );
Since we don't specify an initialValue to the reduce(..) call, neither the sum(..) reducer nor the runningTotal(..) event callback will be invoked until at least two values have come through from a.
This snippet implies that the reduction has a memory of sorts, in that each time a future value comes in, the sum(..) reducer will be invoked with whatever the previous total was as well as the new next value v.
Other FP operations extended over time could even involve an internal buffer, like for example unique(..) keeping track of every value it's seen so far.
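To sketch that idea in terms of the fictional LazyArray above (again, hypothetical code just for illustration), a unique(..) helper might keep a Set of every value it has already passed along:

function unique(stream) {
    var seen = new Set();
    var out = new LazyArray();

    stream.listen( function onValue(v){
        // only pass along values we haven't seen before
        if (!seen.has( v )) {
            seen.add( v );
            out.push( v );
        }
    } );

    return out;
}

// usage (hypothetical):
// var b = unique( a );
// b.listen( v => console.log( "First sighting:", v ) );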
Hopefully by now you can see the importance of a reactive, evented, array-like data structure like the fictional LazyArray we've conjured. The good news is, this kind of data structure already exists, and it's called an Observable.
Note: Just to set some expectation: the following discussion is only a brief intro to the world of Observables. This is a far more in-depth topic than we have space to fully explore. But if you've understood Functional-Light Programming in this text, and now grasped how asynchronous-time can be modeled via FP principles, Observables should follow very naturally for your continued learning.
Observables have been implemented by a variety of userland libraries, most notably RxJS and Most. At the time of this writing, there's an in-progress proposal to add Observables natively to JS, just like Promises were added in ES6. For the sake of demonstration, we'll use RxJS-flavored Observables for these next examples.
Here's our earlier reactive example, expressed with observables instead of LazyArray:
// producer:

var a = new Rx.Subject();

setInterval( function everySecond(){
    a.next( Math.random() );
}, 1000 );


// **************************


// consumer:

var b = a.map( function double(v){
    return v * 2;
} );

b.subscribe( function onValue(v){
    console.log( v );
} );
In the RxJS universe, an Observer subscribes to an Observable. If you combine the functionality of an Observer and an Observable, you get a Subject. So, to keep our snippet simpler, we construct a as a Subject, so that we can call next(..) on it to push values (events) into its stream.
If we want to keep the Observer and Observable separate:
// producer:

var a = Rx.Observable.create( function onObserve(observer){
    setInterval( function everySecond(){
        observer.next( Math.random() );
    }, 1000 );
} );
In this snippet, a is the observable, and unsurprisingly, the separate observer is called observer; it's able to "observe" some events (like our setInterval(..) loop); we use its next(..) method to feed events into the a observable stream.
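The consumer side of this version can look the same as before. One detail worth noting (and a difference from the Subject form) is that an observable made with create(..) runs its onObserve(..) function once per subscription, so each subscriber here gets its own interval:

// consumer:

var b = a.map( function double(v){
    return v * 2;
} );

b.subscribe( function onValue(v){
    console.log( v );
} );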
In addition to map(..), RxJS defines well over a hundred operators that are invoked lazily as each new value comes in. Just like with arrays, each operator on an Observable returns a new Observable, meaning they are chainable. If an invocation of an operator determines that a value should be passed along from the input Observable, it will be fired on the output Observable; otherwise it's discarded.
Example of a declarative observable chain:
var b =
    a
    .filter( v => v % 2 == 1 )      // only odd numbers
    .distinctUntilChanged()         // only consecutive-distinct
    .throttle( 100 )                // slow it down a bit
    .map( v => v * 2 );             // double them

b.subscribe( function onValue(v){
    console.log( "Next:", v );
} );
Note: It's not necessary to assign the observable to b and then call b.subscribe(..) separately from the chain; that's done here to reinforce that each operator returns a new observable from the previous one. In many coding examples you'll find, the subscribe(..) call is just the final method in the chain. Because subscribe(..) is technically mutating the internal state of the observable, FPers generally prefer these two steps separated, to mark the side effect more obviously.
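As an aside, the running-total reduce(..) we sketched earlier with LazyArray corresponds roughly to RxJS's scan(..) operator, which emits the accumulation so far each time a new value arrives. Here's a rough equivalent, using an explicit seed of 0 to keep the semantics simple:

var runningTotal =
    a
    .scan( function sum(total,v){
        return total + v;
    }, 0 );

runningTotal.subscribe( function onTotal(total){
    console.log( "New current total:", total );
} );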
This book has detailed a wide variety of FP operations that take a single value (or an immediate list of values) and transform them into another value/values.
For operations that proceed over time, all of these foundational FP principles can be applied time-independently. Just as promises model single future values, we can model eager lists of values instead as lazy Observable (event) streams of values that may come in one at a time.
A map(..) on an array runs its mapping function once for each value currently in the array, putting all the mapped values in the outcome array. A map(..) on an Observable runs its mapping function once for each value, whenever it comes in, and pushes all the mapped values to the output Observable.
In other words, if an array is an eager data structure for FP operations, an Observable is its lazy-over-time counterpart.
Note: For a different twist on asynchronous FP, check out a library called fasy, which is discussed in Appendix C.