Creating DTOs from scratch, as well as maintaining them, is a tedious and error-prone process. I have seen many production bugs caused by a mismatch between the type describing the response of some endpoint and what that endpoint actually returned.
The earlier article already mentioned that it is desirable to eliminate the human factor from the process of creating and maintaining DTO types. This post is going to elaborate on this idea.
OpenAPI Specification is a way of describing APIs in a standardized format that can be easily parsed. With OpenAPI you can list all the endpoints in your API, describe the shape of requests and responses, list supported HTTP methods, etc. Given such a specification, you can generate documentation, client code or… types.
To generate TypeScript types from OpenAPI, you will need a tool that can read and parse the specification and turn it into type definitions. The one I’ve been using and have been happy with is called openapi-typescript.

openapi-typescript

Let’s see openapi-typescript in action. Here you can find a repository with an implementation of a simple React app that fetches and displays data from Spotify.
The app lets you search for albums, view album details, and create playlists.
Spotify exposes an OpenAPI specification under this URL. Our project sources contain a file called spotify.d.ts
which is based on this specification. Here is the command used to generate this file:
```
npx openapi-typescript https://developer.spotify.com/reference/web-api/open-api-schema.yaml -o src/spotify.d.ts
```
The project is structured in the following way:
- types/spotify.d.ts - type definitions generated by openapi-typescript
- types/dto.ts - type definitions for Data Transfer Objects - aliases for selected types from spotify.d.ts, defined purely for convenience
- types/domain.ts - type definitions for Domain Types
- services/conversion.ts - functions for converting DTOs to Domain Types and the other way around (if needed)
- services/api.ts - functions for making server requests - they operate only with DTO types
- services/auth.ts - authentication code, not relevant to this article
- components/* - React components that implement data fetching and the UI; they use api.ts functions to fetch DTOs and then convert them to Domain Types with conversion.ts functions

First, let’s take a look at the DTO type definition for response types:
```ts
import { paths } from "./spotify";
```
As you can see, we define the DTO types using the paths type imported from the generated file. The types form a nested structure allowing different response types for different HTTP methods, status codes, content types, etc. At the first level, the type is indexed by string literals representing all the available paths. In this particular example, we’re extracting the type for a JSON response to a successful GET request sent to the “/albums/{id}” endpoint.
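For reference, such a definition might look as follows - a hedged sketch, since the exact nesting mirrors whatever the generated spotify.d.ts contains:

```ts
import { paths } from "./spotify";

// Extracts the JSON body of a successful GET /albums/{id} response.
export type AlbumDetailsResponseDto =
  paths["/albums/{id}"]["get"]["responses"]["200"]["content"]["application/json"];
```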
You may point out that this type is rather verbose. There is a way to shorten it, although it’s undocumented. You can use the SuccessResponse utility type from the openapi-typescript-helpers package. With it, the type definition would look like this:

```ts
export type AlbumSearchResponseDto = SuccessResponse<
```
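Filled out, it might look as follows - a hedged sketch, as the helper’s exact signature can vary between versions of openapi-typescript-helpers:

```ts
import { SuccessResponse } from "openapi-typescript-helpers";
import { paths } from "./spotify";

// Extracts the body of any 2XX response of GET /search.
export type AlbumSearchResponseDto = SuccessResponse<
  paths["/search"]["get"]["responses"]
>;
```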
The above DTO definitions are used in the api.ts file, where they are passed as type arguments to axios calls:

```ts
export const getAlbum = async (id: string) =>
```
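A hedged sketch of the full function; the api axios instance and its configuration are assumptions:

```ts
import axios from "axios";

// Assumed axios instance - the real project presumably configures auth here too.
const api = axios.create({ baseURL: "https://api.spotify.com/v1" });

export const getAlbum = async (id: string) =>
  (await api.get<AlbumDetailsResponseDto>(`/albums/${id}`)).data;
```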
Let’s also take a look at an example of a request DTO type:
```ts
import { OperationRequestBodyContent } from "openapi-typescript-helpers";
```
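A hedged sketch of such a request DTO; the path comes from Spotify’s create-playlist endpoint:

```ts
import { OperationRequestBodyContent } from "openapi-typescript-helpers";
import { paths } from "./spotify";

// Extracts the request body type of POST /users/{user_id}/playlists.
export type CreatePlaylistRequestDto = OperationRequestBodyContent<
  paths["/users/{user_id}/playlists"]["post"]
>;
```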
Here we again leverage openapi-typescript-helpers for a terser type definition. As you can see, we can extract both request and response types from the OpenAPI-generated definitions. In fact, there is much more information encoded in these definitions - e.g. names of query parameters, path parameters, types of all possible error responses, etc. I encourage you to browse the spotify.d.ts file and explore on your own.
openapi-fetch
Now, let’s go one step further. If you look at the code in api.ts, you may notice that it is rather formulaic. Every function here consists of a single axios call. Its main role is to match the correct DTO type to the endpoint URL.

```ts
export const getAlbum = async (id: string) =>
```
For example, the getAlbum function asserts that the type of the response returned from /albums/${id} is AlbumDetailsResponseDto. This assertion is manual and therefore error-prone. What if the type system could figure it out on its own?
Enter openapi-fetch - a library built on top of openapi-typescript which exposes functions for making HTTP requests that can infer the type of the request, response, query params, etc. based on the provided URL. Let’s see it in action.
First, we need to create a client object. We provide the paths type as the type argument. This type is generated by openapi-typescript and is responsible for the magic of figuring out types based on URLs.

```ts
const spotifyClient = createClient<paths>();
```
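With imports, the client creation might look like this (a hedged sketch; the baseUrl value is an assumption):

```ts
import createClient from "openapi-fetch";
import type { paths } from "./spotify";

const spotifyClient = createClient<paths>({ baseUrl: "https://api.spotify.com/v1" });
```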
Next, let’s look at an openapi-fetch variant of the above getAlbum function (a full sketch follows the list below). There are several cool things about it:

- The "/albums/{id}" string is type-checked; if we made a typo, the compiler would complain:

```
Argument of type '"/albumz/{id}"' is not assignable to parameter of type 'PathsWithMethod<paths, "get">'.ts(2345)
```

- The path.id parameter is required - the compiler will therefore make sure that we provide it.
- The return type is inferred; the signature begins:

```ts
const getAlbum: (id: string) => Promise<{
```
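A hedged sketch of this variant; the error handling is simplified:

```ts
export const getAlbum = async (id: string) => {
  const { data, error } = await spotifyClient.GET("/albums/{id}", {
    params: { path: { id } },
  });
  if (error) throw error;
  return data;
};
```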
Let’s take a look at another example, where the library correctly figures out the request type.

```ts
export const createPlaylist = async (userId: string, name: string) =>
```
In this example we need to provide both the user_id path param, to identify the user for whom the playlist will be created, and the body of the request, containing the definition of the playlist.
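A hedged sketch of the full function; the path and field names follow Spotify’s create-playlist endpoint:

```ts
export const createPlaylist = async (userId: string, name: string) => {
  const { data, error } = await spotifyClient.POST("/users/{user_id}/playlists", {
    params: { path: { user_id: userId } },
    body: { name },
  });
  if (error) throw error;
  return data;
};
```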
Check out this link to see the full sources of the openapi-fetch-based implementation of the Spotify client app.
openapi-fetch
is very convenient to work with. You can easily customize it (see the above repo for an example of adding authorization) or build wrappers around it so that it’s usable with more advanced data fetching solutions such as react-query
or rxjs
. I’ll look into this in more detail in a separate article.
In this article we saw how TypeScript can be used to dramatically improve the safety of calling server endpoints. If you think about it, with the manual approach to typing DTOs, you can have a perfectly typed codebase but still see runtime errors related to backend calls. This is because when manually typing DTOs, you’re actually making implicit type assertions. The tools described above help you mitigate this risk almost completely.
We introduced AsyncResult to our codebase a few years back. The examples in this post are based on React, but these ideas can be used with any UI framework.
Complete examples available in this repo on GitHub.
Let’s take a look at a React component that fetches some data from the backend and displays the result. A typical (and somewhat naive) implementation could look like this:
```tsx
export const CatFact: React.FC = () => {
```
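A hedged reconstruction of such a naive component; the endpoint URL and response shape are assumptions:

```tsx
import React, { useEffect, useState } from "react";

export const CatFact: React.FC = () => {
  const [catFact, setCatFact] = useState<{ fact: string }>();

  useEffect(() => {
    fetch("https://catfact.ninja/fact")
      .then((response) => response.json())
      .then((data) => setCatFact(data));
  }, []);

  return <div>{catFact?.fact}</div>;
};
```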
There are several issues with this code:

- the in-progress state is not handled - the user sees nothing meaningful while the data is being fetched
- the failure state is not handled - if the request fails, the user is never informed
- the component state may get updated after the component is unmounted
The AsyncResult
pattern can help with the first two issues.
AsyncResult
The root cause behind the first two issues is that the author of the code forgot about these two cases. This is understandable - nobody is perfect and it’s easy to neglect such things when you’re focused on solving a complex problem or are simply very busy. Thankfully, we can employ the type system to remind us about handling these cases.
AsyncResult
is a simple type that represents the result of an asynchronous operation (a fetch call in our case). It’s defined as a discriminated union where each member represents a possible state in which an asynchronous operation could be: in progress, success or failure.
```ts
export type AsyncResult<TResult, TError = unknown> =
```
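A hedged sketch of the full definition; the discriminant property name is an assumption:

```ts
export type AsyncResult<TResult, TError = unknown> =
  | AsyncInProgress
  | AsyncSuccess<TResult>
  | AsyncFailure<TError>;

export interface AsyncInProgress {
  status: "inProgress";
}

export interface AsyncSuccess<TResult> {
  status: "success";
  value: TResult;
}

export interface AsyncFailure<TError> {
  status: "failure";
  error: TError;
}
```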
AsyncSuccess
stores the value returned by a successful operation. AsyncFailure
stores the error that caused the operation to fail. AsyncInProgress
doesn’t store any additional information.
In order to make working with this new type more convenient, it’s useful to define some utility functions for converting regular values to AsyncResult
:
```ts
export const asAsyncSuccess = <TResult>(
```
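Hedged sketches of these helpers, consistent with the union sketched above:

```ts
export const asAsyncSuccess = <TResult>(value: TResult): AsyncSuccess<TResult> => ({
  status: "success",
  value,
});

export const asAsyncFailure = <TError>(error: TError): AsyncFailure<TError> => ({
  status: "failure",
  error,
});

export const asyncInProgress: AsyncInProgress = { status: "inProgress" };
```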
Now let’s see how we can apply this new type to address the issues mentioned at the beginning.
AsyncResult
Let’s start by replacing the value stored in the component state with AsyncResult
:
```tsx
export const CatFact: React.FC = () => {
```
As we can see, TypeScript prevents us from accessing the value
property of catFact
. This is because we’re trying to access the result of an asynchronous operation yet we don’t even know whether it has already completed nor whether it completed successfully. In other words, the type system forces us to check the state of the operation before we access its result. While doing this, we get reminded that we should actually handle the other possible states.
```tsx
export const CatFactAsyncResult: React.FC = () => {
```
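A hedged sketch of the component with all three states handled, reusing the helpers above:

```tsx
export const CatFactAsyncResult: React.FC = () => {
  const [catFact, setCatFact] = useState<AsyncResult<{ fact: string }>>(asyncInProgress);

  useEffect(() => {
    fetch("https://catfact.ninja/fact")
      .then((response) => response.json())
      .then((data) => setCatFact(asAsyncSuccess(data)));
  }, []);

  switch (catFact.status) {
    case "inProgress":
      return <div>Loading...</div>;
    case "failure":
      return <div>Something went wrong</div>;
    case "success":
      return <div>{catFact.value.fact}</div>;
  }
};
```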
There is one missing piece - the code handles the failure state, but right now catFact will never be set to an instance of AsyncFailure. We can fix this by providing the onrejected callback to the Promise returned from fetch. The final code will look like this:
```tsx
export const CatFactAsyncResult: React.FC = () => {
```
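The relevant change is in the effect - a hedged sketch:

```tsx
useEffect(() => {
  fetch("https://catfact.ninja/fact")
    .then((response) => response.json())
    .then(
      (data) => setCatFact(asAsyncSuccess(data)),
      (error) => setCatFact(asAsyncFailure(error))
    );
}, []);
```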
What’s so great about this type is that it makes it explicit that an async operation does not always have to finish successfully. By doing so, it forces the developer to take care of all possible outcomes.
fetch utility

As you can see, this approach introduces a little bit of boilerplate. Thankfully, we can easily abstract it away in a generic utility hook for fetching data from the backend.

```ts
export const useGetResult = <TResult, TError = unknown>(url: string) => {
```
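A hedged sketch of the hook:

```ts
import { useEffect, useState } from "react";

export const useGetResult = <TResult, TError = unknown>(url: string) => {
  const [result, setResult] = useState<AsyncResult<TResult, TError>>(asyncInProgress);

  useEffect(() => {
    fetch(url)
      .then((response) => response.json())
      .then(
        (data: TResult) => setResult(asAsyncSuccess(data)),
        (error: TError) => setResult(asAsyncFailure(error))
      );
  }, [url]);

  return result;
};
```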
Now, the end result gets much cleaner:
```tsx
export const CatFactAsyncResultHook: React.FC = () => {
```
An additional bonus of using such an abstraction is that you can incorporate more good patterns into it. For example, you could leverage AbortController to address the third issue mentioned in the first paragraph: a possible state update after the component is unmounted.
In the above example we focused on a scenario where data fetching happens in a React component. In a real-life app you’d often use Redux for storing the results of backend calls or use a dedicated data fetching library. Let’s see if AsyncResult
can prove useful in such cases as well.
Redux
You can definitely make use of the AsyncResult
type in Redux. When storing fetch results in Redux, you may end up having fields like isLoading
or errorMessage
in the state. With AsyncResult
, you don’t need such fields as you can represent the status of the operation with a single field.
What’s more, the type allows you to simplify your actions. In a typical scenario, you’d have a separate action for request start, success and failure. With AsyncResult
a single action type is sufficient as you can convey all this information in the action’s payload.
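A hedged sketch of what that single action and a state slice might look like:

```ts
interface CatFactState {
  fact: AsyncResult<{ fact: string }>;
}

// One action type covers start, success and failure.
interface CatFactResultAction {
  type: "catFact/result";
  payload: AsyncResult<{ fact: string }>;
}
```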
Those of you who have used React Query may find the pattern described in this article familiar. In fact, the type returned by useQuery
is an extended version of the AsyncResult
type.
```ts
const result: UseQueryResult<number> = useQuery(["number"], () =>
```
Similarly, the type used to represent query result in RTK Query is also based on this pattern.
```ts
const { data, error, isLoading } = useGetPokemonByNameQuery("bulbasaur");
```
This only confirms that the AsyncResult pattern is useful. Data fetching libraries such as React Query provide utilities for fetching data that incorporate multiple best practices, and the AsyncResult pattern is one of them. Of course, these libraries do much more (caching, cancellation, etc.).
Is the AsyncResult type useful at all when working with these libraries, then? Actually, I think it is. One thing I miss in the types returned by these libraries is composability. However, we can easily add composability to the AsyncResult type. I’ll talk about some real-life applications of this approach in a future post.
In this post we learned about some of the benefits of the AsyncResult
type. This is yet another great example of how we can leverage the type system to enforce some rules that actually improve the overall user experience.
When working with existing large codebases, it’s not always easy to introduce a proper data fetching library. In such cases, you may use the AsyncResult pattern to get at least some of the benefits of these libraries without introducing any new dependencies. I encourage you to experiment with it in your project.
There are some use cases that are not covered by this approach, though. A good example is a tabbed UI where each tab has some input controls. As the user types into these controls and then switches between the tabs, it may sometimes be desirable to persist the state between the tab switches. However, if we simply stored the user input as local state of each tab component, it wouldn’t be persisted, as the component would get unmounted upon tab switching.
Take a look at the example here. If you type into the input in the Mario component, then change the tab to Luigi and then come back, your input disappears. This is because the Mario component got unmounted and its state was lost.
Let’s review some of the possible approaches to persisting the state between the tab switches:

- Lifting the state up to the parent component (the MarioTabs component). This would solve the problem. However, in this approach we abandon clear boundaries between components. The internal state of a child component is exposed to the parent component, which shouldn’t really know anything about it. What’s more, it has a performance impact - any update to Mario’s state would lead to rerendering of the whole component tree.

Instead, I’d like to propose a very simple approach in which the state is stored in a module-defined variable. This approach is similar to using a state management library, but without the ability to notify the consumers about the state being updated. It solves exactly one problem - persisting the state between remounts - and nothing else.
usePersistedState hook

In this approach we create a new hook called usePersistedState
. It has a similar signature to the regular useState
hook, but it additionally accepts an identifier - the key under which we will store this piece of state. The implementation of this hook is very straightforward - it wraps a regular useState
hook. On top of that, every time the state is changed, it updates the global dictionary where various pieces of state are stored under different keys. Finally, the global dictionary is read to get the initial state for the useState
hook.
```ts
const Store = new Map<string, unknown>();
```
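A hedged sketch of the whole implementation as described above:

```ts
import { useCallback, useState } from "react";

// Module-level dictionary - it survives component remounts.
const Store = new Map<string, unknown>();

export const usePersistedState = <TState>(id: string, initialState: TState) => {
  // Read the dictionary to restore state after a remount.
  const [state, setState] = useState<TState>(() =>
    Store.has(id) ? (Store.get(id) as TState) : initialState
  );

  const setPersistedState = useCallback(
    (newState: TState) => {
      Store.set(id, newState);
      setState(newState);
    },
    [id]
  );

  return [state, setPersistedState] as const;
};
```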
And that’s it. This simple solution is all you need to persist local component state between component remounts. The id parameter should uniquely identify the state.

One caveat of this approach is that you have to be careful when working with multiple instances of a given component. In such a case, you’ll need to make sure that each instance gets a different identifier.
Disclaimer: as I began writing this article, I realized that there is already a library that provides similar functionality and also uses the same name (usePersistedState) as I did for my custom hook. Still, I decided to write the article to document my thought process. You’d probably be better off using the library than my simple implementation.
Another caveat is that sometimes you may need to clear the state. One obvious example is unit tests - if you forget to clear the state after each test, you may get surprising results. Clearing the state is trivial - it’s sufficient to remove the value stored under a given identifier.

```ts
export const removeFromState = (id: string) => {
  Store.delete(id);
};
```
Another reason for clearing the state may be if MarioTabs
lived in another container component. In some scenarios we may want to clear the state when the container component is unmounted (e.g. if the container is a closeable tab and the user explicitly closes the tab).
This article demonstrates a minimal approach for persisting state between component remounts.
There are many ways in which you can extend this approach if needed:

- create a variant that wraps the useReducer hook
- group entries under an additional key - containerId - and allow clearing all entries within a given container

Let me know what you think in the comments.
Junior Engineers care about writing Software. They value code quality, employ best practices, and try to adopt cutting-edge technologies. They invest a lot of time into learning new technologies. To them, the ultimate goal is to create elegant, performant, maintainable software.
Senior Engineers care about building Systems. To them, creating software is just one of the steps. First of all, they question whether the software needs to be built in the first place. They ask what problems would it solve and why it’s important to solve them. They inquire who will be using the software and on what scale. They think about where the software would run and how they’re going to monitor whether it’s working properly. They also decide how to measure whether the software is actually solving the problems it was supposed to solve.
Building Systems is much harder than building Software. It may be even uncomfortable. As an engineer, it’s very tempting to stay in your cave and focus on polishing this beautiful piece of code. It’s tempting to think that determining requirements is the job of a Product Manager and deploying the software should be taken care of by the Operations team. However, you can bring a lot of value by being involved in these aspects of building a system. You’re the person who knows your software best and it’s you who know best how to run it, how to monitor it, how easy it is to extend it, etc. What’s more, your analytical mind and problem-solving skills make your insights about product requirements very valuable.
Technical expertise is of course very important. Elegant, performant, maintainable software is easier to run, breaks less often, is easier to extend and to reason about. However, it may solve a wrong business problem. Or maybe the customers don’t like it because of performance issues that you don’t even know about because you aren’t monitoring it.
Let’s have a deeper look at a (non-exhaustive) list of activities that are part of building a System:

- questioning whether the software needs to be built in the first place, and what problems it would solve
- gathering requirements and understanding who will use the software and on what scale
- designing and implementing the software itself
- deploying and running it
- monitoring whether it’s working properly
- measuring whether it’s actually solving the problems it was supposed to solve
I’ve met many engineers who were convinced that the only way to advance their career is by investing in their technical skills. While this is important, the only thing that matters to your company is how much impact you are making on the business. Shifting the focus from Software to Systems puts you in a much better position to increase it.
My colleague was implementing a polling mechanism using RxJS. His solution was similar to the one described in my blog post about polling. For the sake of this article, let’s assume he wrote the following code. It starts a long-running backend operation (an analysis) by sending a POST call and subsequently polls for the status of the operation (using GET
calls) every second. It stops polling as soon as the status returned by the last GET
call indicates success or failure. If you’re having trouble understanding the code below, please read the article first.
```ts
const startAnalysis = () => {
```
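A hedged reconstruction based on the description above; the AnalysisStatus type is an assumption, and url/message are assumed to be in scope:

```ts
import { repeat, switchMap, takeWhile } from "rxjs";

interface AnalysisStatus {
  inProgress: boolean;
}

const startAnalysis = () => {
  fetchWithObservable<{ id: string }>(`${url}/analyze`, { method: "POST" })
    .pipe(
      switchMap(({ id }) =>
        fetchWithObservable<AnalysisStatus>(`${url}/analyze/${id}`).pipe(
          repeat({ delay: 1000 }),
          takeWhile((status) => status.inProgress, true)
        )
      )
    )
    .subscribe();
};
```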
As you can see, instead of using ajax.getJSON
util provided by RxJS, it is using our custom wrapper around fetch
called fetchWithObservable
. We created it because at that time RxJS didn’t include ajax
utils. Here is the implementation of fetchWithObservable
that we were using at that time.
```ts
export const fetchWithObservable = <T>(
```
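A hedged reconstruction matching the description that follows - note where the AbortController is created:

```ts
import { Observable } from "rxjs";

export const fetchWithObservable = <T>(
  url: string,
  options?: RequestInit
): Observable<T> => {
  // The controller is created once, outside the Observable definition.
  const controller = new AbortController();
  const optionsWithAbort = { ...options, signal: controller.signal };

  return new Observable<T>((subscriber) => {
    fetch(url, optionsWithAbort)
      .then((response) => response.json())
      .then((result: T) => {
        subscriber.next(result);
        subscriber.complete();
      })
      .catch((error) => subscriber.error(error));

    // Called on unsubscription - abort the in-flight request.
    return () => controller.abort();
  });
};
```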
This function returns a new Observable in which we call fetch, emit the response (subscriber.next) and immediately complete the Observable (subscriber.complete). There is also some error handling (subscriber.error) and cancellation support. We implemented cancellation by creating an AbortController and passing controller.signal as one of the options to the fetch call. Then, we call controller.abort in the function returned by the function defining our Observable. This function will get called whenever the Observable is unsubscribed from. It makes sense, because if the caller unsubscribes, we don’t want to waste resources on waiting for and processing the HTTP response.
Can you spot a bug in this code? Pause here for a moment and give it a try.
So, we tried running this code but it didn’t behave as expected. We expected a single POST
call to the analyze
endpoint followed by multiple GET
calls. Instead, we saw a POST
call followed by a single GET
call. What’s more, we saw the following error in the console.
```
DOMException: Failed to execute 'fetch' on 'Window': The user aborted a request.
```
The exception stacktrace in the debugger indicates that the exception comes from the fetch
call. Because of the repeat
operator, fetch
will get called multiple times. It will always use the same optionsWithAbort
object along with the same controller
.
Since the error message mentions aborting, let’s put a breakpoint on controller.abort()
. It turns out that this line gets called multiple times. Each fetch
is followed by a call to controller.abort()
.
This means that we call fetch
with a controller
that has already been aborted. Let’s see what happens in such a case:
```ts
const controller = new AbortController();
controller.abort();
// Passing an already-aborted signal makes fetch reject immediately (hedged sketch):
fetch("https://example.com", { signal: controller.signal });
```

```
DOMException: Failed to execute 'fetch' on 'Window': The user aborted a request.
```
Bingo! We found the root cause - we’re reusing the AbortController for multiple fetch requests. It works the first time, because the controller is fresh, but it has already been aborted when passed to the subsequent calls.
Therefore, the correct implementation is as follows:
```ts
export const fetchWithObservable = <T>(
```
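A hedged sketch of the fix - a fresh AbortController is created for every subscription:

```ts
import { Observable } from "rxjs";

export const fetchWithObservable = <T>(
  url: string,
  options?: RequestInit
): Observable<T> =>
  new Observable<T>((subscriber) => {
    // A new controller per subscription - previous aborts can't leak in.
    const controller = new AbortController();

    fetch(url, { ...options, signal: controller.signal })
      .then((response) => response.json())
      .then((result: T) => {
        subscriber.next(result);
        subscriber.complete();
      })
      .catch((error) => subscriber.error(error));

    return () => controller.abort();
  });
```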
Let’s understand what exactly is happening here. The key piece of the puzzle is the repeat
operator. Let’s check its definition.
Returns an Observable that will resubscribe to the source stream when the source stream completes.
Fair enough. In our case, the source stream is the one returned by fetchWithObservable
. repeat
operator will subscribe to it every second, which will result in a new fetch
call at the same interval.
As we saw in the implementation of fetchWithObservable
, it emits a single value and then completes. However, when an Observable
completes, it unsubscribes all its subscribers. It means that controller.abort()
will get called after each fetch
and before the next fetch
. This results in the behavior we observed.
Here is the exact sequence of events:

1. The Observable gets subscribed. fetch gets called for the first time.
2. The Observable completes; the unsubscribe function gets called, which results in calling controller.abort().
3. The repeat operator waits 1000 ms and subscribes again.
4. This results in a new fetch call. However, an aborted controller is passed with the options. An exception is thrown.
utils. However, still thought it worthwhile to share this story, as it may help you understand how things work under the hood.
One of the most common things done by a web application is to call a server endpoint to invoke some action. The action could mean buying a product, registering an account or adding an article to the database. Most of the time, such actions are short running. By short running I mean that they complete within the duration of a single HTTP request and the server can send a response indicating whether the action succeeded.
However, not all actions are short running. Sometimes, the action may require some significant computations or even manual input. To name a few examples:

- analysing the sentiment of a long piece of text (the example we’ll use below)
- an action that requires manual review or approval by another person
In such cases, the action may not finish before the request times out. As a result, there is no way of informing the user whether the action succeeded or not (unless you notify them some other way, e.g. by e-mail).
There are several solutions to this problem, but in this post we’ll focus on polling. In order to implement polling, the API needs to be implemented in a special way. The endpoint to invoke the action should support at least two HTTP methods:

- POST for initiating the action
- GET for checking the status of the action (and getting the results if available)
- optionally, DELETE for cancelling an already started action

Given such an endpoint, polling can be described by the following steps:
1. Send a POST request to the endpoint to initiate the action. The server replies with an identifier representing the action.
2. Send a GET request to the endpoint, passing the identifier as a parameter. The server replies with the status of the action - either in progress or done.
3. Keep repeating the GET request every X milliseconds as long as the status is not done.
4. Once done, the response to the GET request should contain the actual results.

As you can see, the polling algorithm consists of several steps, all of them asynchronous. It is exactly the kind of scenario where RxJS shows its usefulness. Let’s see how to implement polling with RxJS in just a few lines of code.
In this example we’re working with an API for analysing the sentiment of a long piece of text. On a high level, given a string, the API returns a boolean value indicating whether the sentiment of the text is positive or negative. Since the analysis can take up to a few minutes, the API is asynchronous and requires polling. The exact shape of the API is as follows. You can find a very basic implementation of this API here.
- a POST request to /analyze with an example payload of { message: "text to be analyzed" } starts an analysis job and returns its identifier
- a GET request to /analyze/{jobId} returns the status of the job and, optionally, the result
- a POST request to /analyze/{jobId}/cancel cancels the job

Given such an API, let’s implement a simple polling flow with RxJS.
```ts
const startAnalysis = () => {
```
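A hedged reconstruction consistent with the step-by-step analysis below; the response types are assumptions, and url/message are assumed to be in scope:

```ts
import { ajax } from "rxjs/ajax";
import { repeat, switchMap, takeWhile } from "rxjs";

interface AnalysisJob {
  id: string;
}

interface AnalysisStatus {
  status: "inProgress" | "finished";
  result?: boolean;
}

const startAnalysis = () => {
  ajax
    .post<AnalysisJob>(`${url}/analyze`, { message })
    .pipe(
      switchMap((ajaxResponse) =>
        ajax
          .getJSON<AnalysisStatus>(`${url}/analyze/${ajaxResponse.response.id}`)
          .pipe(
            repeat({ delay: 1000 }),
            takeWhile((response) => response.status === "inProgress", true)
          )
      )
    )
    .subscribe((response) => console.log(response));
};
```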
Let’s analyze this code line by line:

1. First, we create an Observable using ajax.post. This Observable sends a POST request to ${url}/analyze when subscribed. It emits a single item - the response - and immediately completes.
2. For each item emitted by that Observable (and we know that there will be just one), we switch to a new Observable. This new Observable sends a GET request to ${url}/analyze/${ajaxResponse.response.id} and emits the response (just once).
3. We apply the repeat operator to that Observable. This operator resubscribes to the source Observable once it completes. Thanks to the delay parameter, it will only resubscribe after waiting 1 second. Each new subscription will cause the source Observable to send a new GET request.
4. These GET requests would keep happening forever, and we need to tell our Observable when to stop. The takeWhile operator will keep emitting for as long as the incoming items satisfy the passed condition. Once the condition is violated, it will emit one more item (thanks to true passed as the second parameter) and complete.
5. Finally, we subscribe to the resulting Observable. We expect this Observable to emit a response with inProgress status every second for some time, then emit a single response with finished status, and then complete.
and takeWhile
operators are in a “nested” pipe
instead of right after switchMap
. In other words, could we change this code to:
1 | .pipe( |
The way the repeat operator works is that it resubscribes to the source Observable once it completes. The source, in this case, is the Observable returned by the post function. Resubscribing to it would result in a new POST request to ${url}/analyze, which is not what we want. Therefore, we need to apply repeat to the Observable returned by ajax.getJSON.
Once you hit the Analyze button, you’ll observe the following requests in the Network tab of Chrome Dev Tools. Don’t worry about the double POST call - the first one is a preflight request.
This solution works well, but let’s see what happens when the user quickly clicks multiple times on the Analyze button.
It resulted in multiple polling sessions running concurrently. This is pretty bad, since each analysis is computationally costly and results in unnecessary load on the backend.
Let’s improve our solution to handle this scenario better. What’s needed here is cancellation. There are actually two aspects of cancellation:

- stopping the polling on the client side
- cancelling the operation itself on the server side
We’ll address the first aspect by introducing a cancelSubject
. We’ll emit from this subject whenever polling should be cancelled. The nested polling stream will complete whenever cancelSubject
emits. For the latter, we’ll need a dedicated endpoint for cancelling the operation. Unless it is exposed by the server, we won’t be able to address this concern.
```ts
// Make sure to memoize it if you're using React
```
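A hedged sketch of the cancellable variant; the jobId bookkeeping is an assumption:

```ts
import { Subject, takeUntil } from "rxjs";

const cancelSubject = new Subject<void>();
let currentJobId: string | undefined;

const cancelAnalysis = () => {
  // Completes the nested polling stream via takeUntil...
  cancelSubject.next();
  // ...and tells the server to stop the job.
  if (currentJobId !== undefined) {
    ajax.post(`${url}/analyze/${currentJobId}/cancel`).subscribe();
    currentJobId = undefined;
  }
};

const startAnalysis = () => {
  cancelAnalysis();
  ajax
    .post<AnalysisJob>(`${url}/analyze`, { message })
    .pipe(
      switchMap((ajaxResponse) => {
        currentJobId = ajaxResponse.response.id;
        return ajax
          .getJSON<AnalysisStatus>(`${url}/analyze/${ajaxResponse.response.id}`)
          .pipe(
            repeat({ delay: 1000 }),
            takeWhile((response) => response.status === "inProgress", true),
            takeUntil(cancelSubject)
          );
      })
    )
    .subscribe((response) => console.log(response));
};
```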
We introduced a new function - cancelAnalysis. It does two things:

- emits from cancelSubject, which will cause the nested polling stream to complete thanks to the takeUntil operator
- calls the dedicated server endpoint to cancel the ongoing job

cancelAnalysis is called from startAnalysis, but it can also be called explicitly, for example if we want to allow the user to stop the analysis at any point in time.
Now you can observe the flow of server requests in the “fast-clicking” scenario. We can see that there is only a single polling session at a time and the previous session always gets cancelled before starting a new one.
In this article I explained when to use polling and how to implement it in RxJS. This is a great example of a problem that can be easily implemented in RxJS but would require a complex solution if done imperatively.
REST APIs play a crucial role in modern web development. What’s more, in many projects frontend and backend development are separate. This leads to problems such as unnecessary coupling, backward compatibility issues, or runtime correctness. In this article, we’re going to explore some TypeScript patterns for interacting with REST endpoints that address these issues.
Let’s consider the following example. Your application issues an HTTP GET call to an endpoint that returns a list of User
s.
```ts
interface User {
```
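A hedged sketch of the initial type and fetch function; the endpoint URL is an assumption:

```ts
interface User {
  id: string;
  name: string;
  isAdmin: boolean;
  isMaintainer: boolean;
}

const fetchUsers = async (): Promise<User[]> => {
  const response = await fetch("/api/users");
  return response.json();
};
```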
Somewhere else in the code, some component displays the details about each user. The same User
type is being used to refer to the user object.
```ts
const [users, setUsers] = useState<User[]>([]);
```
Next, the team owning the endpoint decides to refactor it. They decide that instead of two boolean fields (isAdmin
and isMaintainer
), they will introduce a new enum role
field.
```ts
const enum Role {
```
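A hedged sketch; the exact enum members are assumptions:

```ts
const enum Role {
  Admin = "admin",
  Maintainer = "maintainer",
  Regular = "regular",
}

interface User {
  id: string;
  name: string;
  role: Role;
}
```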
As you update the types, you’ll notice compile errors in the component. This is because it is referring to the fields that no longer exist. In this simple case, you can just update the component and fix the errors. However, in a large application, the User
type may be used in multiple places. Updating all of the usages may not be a simple task, as some of the usages may depend heavily on the particular shape of the object. And you may be in a hurry, as your manager forgot to tell you about the upcoming endpoint refactor, and now it’s needed desperately.
Let’s make our application less vulnerable to such changes. Let’s introduce a new UserDTO
type.
```ts
interface UserDTO {
```
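A hedged sketch of the DTO type:

```ts
interface UserDTO {
  id: string;
  name: string;
  role: Role;
}
```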
DTO stands for Data Transfer Object. This is a pattern, widely used in the backend, in which you decouple types used for data transfer from the actual data model. In our example, the changes in the API will be reflected in the UserDTO
type. However, thanks to the User
type being completely separate, we won’t be forced to make any changes in it and consequently in its usages.
For this to work we need to define a mapping function that takes UserDTO
and returns a User
. We also need to call it in the fetchUsers
so that the type returned by this function is still User[]
.
```ts
const userFromDto = ({ id, name, role }: UserDTO): User => ({
```
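A hedged sketch of the mapping function and the updated fetch:

```ts
const userFromDto = ({ id, name, role }: UserDTO): User => ({
  id,
  name,
  role,
});

const fetchUsers = async (): Promise<User[]> => {
  const response = await fetch("/api/users");
  const dtos: UserDTO[] = await response.json();
  return dtos.map(userFromDto);
};
```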
This way we made the code less prone to sudden changes of API shape (at the cost of introducing some overhead). Once another API change is introduced, all you need to do is to adjust the UserDTO
type and the userFromDto
function. The change won’t affect any other layers of your app (unless you wish them to).
One other advantage of this approach is that it is very convenient when your app needs to be backward compatible. In other words, depending on how your application is being deployed, it might be the case that backend and frontend won’t be updated at the same time. In such a case, the frontend needs to support both the old and the new version of the API at the same time. DTO pattern is helpful here because it introduces the concept of a mapping function (userFromDto
) that encompasses the translation logic from the API world to your data model. This is exactly where you should put the code supporting both versions of the endpoint.
The simplest way to do this is to mark all of the fields in question as optional. Then userFromDto
would check for the presence of these fields and assume that either role
or both isAdmin
and isMaintainer
are defined.
```ts
interface UserDTO {
```
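A hedged sketch of the optional-fields variant:

```ts
interface UserDTO {
  id: string;
  name: string;
  role?: Role; // V2
  isAdmin?: boolean; // V1
  isMaintainer?: boolean; // V1
}

const userFromDto = (dto: UserDTO): User => ({
  id: dto.id,
  name: dto.name,
  role:
    dto.role ??
    (dto.isAdmin
      ? Role.Admin
      : dto.isMaintainer
      ? Role.Maintainer
      : Role.Regular),
});
```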
Another, more verbose (but less error-prone) way of doing this is to take advantage of Discriminated Union Types and type guards. This approach is particularly helpful when you’re working with complex types or when you need to support more than two API versions at the same time. It also makes it more explicit that two versions are supported and therefore makes it more likely that you will clean up this code once the need to support V1 disappears.
```ts
interface User {
```
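A hedged sketch of the discriminated-union variant; the type guard discriminates on the presence of role:

```ts
interface UserDtoV1 {
  id: string;
  name: string;
  isAdmin: boolean;
  isMaintainer: boolean;
}

interface UserDtoV2 {
  id: string;
  name: string;
  role: Role;
}

type UserDTO = UserDtoV1 | UserDtoV2;

const isV2 = (dto: UserDTO): dto is UserDtoV2 => "role" in dto;

const userFromDto = (dto: UserDTO): User =>
  isV2(dto)
    ? { id: dto.id, name: dto.name, role: dto.role }
    : {
        id: dto.id,
        name: dto.name,
        role: dto.isAdmin
          ? Role.Admin
          : dto.isMaintainer
          ? Role.Maintainer
          : Role.Regular,
      };
```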
The DTO type will usually have a lot of common fields with the domain type. It may seem very repetitive, especially in case of large types with tens of fields. TypeScript offers some mechanisms that may help with that.
```ts
type UserDTO = Omit<User, "isAdmin" | "isMaintainer"> & { role: Role };
```
Here we define UserDTO as the result of a transformation on the User type. More specifically, first we use the built-in Omit utility type to remove two fields from User. Next, we intersect the result with an object type containing only one field - role.
The main advantage of this approach is the lack of repetitiveness. It doesn’t seem to be a problem in our little example but I’ve worked with huge types containing tens of fields where such an approach made more sense.
On the other hand, in this approach we don’t have full decoupling between these types - the DTO depends on the domain type. For example, if we added a new field to User
, it will also appear in UserDTO
, which is not desirable. Therefore, my advice is to use this approach with caution.
As mentioned above, DTOs are particularly helpful in those projects where backend and frontend development are separate and you, as a frontend developer, don’t have much control over the APIs. In such environments, it may sometimes happen that your assumptions about the data structures returned by the APIs are wrong. For example, a backend team may unexpectedly change a field and forget to communicate it to you. This can also happen because of some bug on their side.
Such issues can be avoided in the world of DTOs. A DTO’s sole purpose is to describe the shape of the data returned from an endpoint. It’s very important that it does it correctly and that it’s kept in sync with reality. Let me describe two mechanisms that you can use to enforce this.
As you know, TypeScript is just a language that gets compiled into JavaScript. The consequence of this is that all information about types gets erased at the moment of compilation. Therefore, there is no automatic way of checking whether objects actually fit into their type definitions at runtime.
This can, however, be achieved with some effort. The simplest approach would be to manually write the code that checks for the existence of all fields declared in the type as well as verifies their types.
```ts
const assertUserDto = (data: any): UserDTO => {
```
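A hedged sketch of such a manual assertion:

```ts
const assertUserDto = (data: any): UserDTO => {
  if (
    typeof data !== "object" ||
    data === null ||
    typeof data.id !== "string" ||
    typeof data.name !== "string" ||
    typeof data.role !== "string"
  ) {
    throw new Error("Value does not match the UserDTO shape");
  }
  return data as UserDTO;
};
```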
As you can see, this approach can be very tedious. Fortunately, there exist libraries such as io-ts that can help automate the task.
Note that runtime assertions introduce some performance overhead, especially when dealing with large arrays of objects. It’s advisable to disable runtime assertions in production environments.
Another approach is to automatically generate DTO definitions based on endpoints. This can be easily achievable if the APIs you call use Swagger, which is the standard for describing and documenting APIs. Even if this is not the case, you can still look for tools that generate TypeScript types directly from the backend types (some time ago I created scala-ts for generating TS types from Scala types).
The only drawback of this approach is that it requires setting up some infrastructure that plugs into your build process and makes sure that you’re always working with the latest type definitions.
In this article, we discussed several patterns for dealing with REST endpoints in TypeScript.
Firstly, we introduced the concept of Data Transfer Objects that help you better manage changes in the endpoints and make your code less prone to changes. Secondly, we looked into how DTOs can help with ensuring backward compatibility. Lastly, we saw how to ensure that types are an accurate representation of the endpoints.
The advice in this article is mostly applicable to tech companies (especially Silicon Valley-like companies), but some of the ideas might also work in more traditional companies.
The career path of a Frontend Engineer is not very different from that of a backend engineer. It usually starts with the (Junior) Software Engineer level, followed by Senior Software Engineer. At this point, you decide whether you prefer to stay on the Individual Contributor path and become a Staff or Principal Engineer, or switch to engineering management.
Overall, the more senior you are, the more you are expected to solve problems beyond writing code. What’s more, you should also be able to identify problems, propose solutions, and make sure they’re solved. Another crucial aspect of seniority is visibility - in order to get promoted, you should make sure that people are familiar with and value your work. You won’t accomplish this just by solving the tasks that your manager assigns to you.
The subsequent paragraphs list a few ideas on improving your visibility and demonstrating seniority. I divided them into three areas: technical expertise, product/UX, and leadership. You should pick one of these areas and specialize in it. However, it usually makes sense to invest a bit in the remaining two areas as well.
Technical Lead is not just an excellent programmer but can also lead a project end-to-end. That involves smooth communication with the stakeholders (including the Product Manager) to gather the requirements, breaking down the work into smaller tasks, proposing architecture design and discussing it with the team, coordinating the implementation (if more devs are involved), and finally, rolling out the new feature/project.
Who’s the best person to improve Developer Experience if not the developers themselves? Being challenged by problems such as long builds or unstable tests every day, you know exactly how great an impact they have on developer productivity. It may sometimes be tricky to get your manager to prioritize such work. When selling your ideas to the leadership, try to quantify the productivity loss (e.g., days of developer’s time lost on waiting for the build to finish per month) and mention specific metrics that you want to improve (e.g., average build time).
Nowadays, UI performance is crucial to a great user experience. In some business domains, metrics such as page load time can have a direct impact on the sales of your company’s product. If your company is not doing it yet, championing UI performance monitoring is a great way to increase your impact on the whole organization.
Identify the key metrics that you want to track (e.g., FCP, TTI, or long tasks during interactions), start measuring them, and set up notifications for them. Present the monitoring framework to the leadership and explain how these metrics affect your company’s business. An example of third-party software that can help you achieve that is Sumo Logic’s Real User Monitoring (disclaimer: I work at Sumo Logic).
While the JavaScript framework landscape is becoming increasingly stable, many codebases went through one or two transitions in the past and still contain some traces of legacy frameworks (such as AngularJS). Such code is often a ticking bomb that nobody wants to approach. Coming up with a vision and strategy for the gradual removal of legacy code and selling the idea to the leadership is another great way of making a huge impact.
Working on the frontend brings you very close to the product. As a by-product of developing the UI, you’re constantly interacting with the product. This makes you a great source of ideas. Maintain a backlog of ideas and periodically discuss them with your PM. Focus on low-effort ideas instead of grand multi-quarter projects - it will be much easier to convince your PM to put them on the roadmap. Bring some data points to back your ideas - user requests, usage statistics, etc.
If you feel strongly about a good User Experience, you may be better equipped to focus on usability improvements instead of new features. Feel free to interview a few users of your company’s software - it’s especially relevant when the company has a strong dogfooding culture. Create a list of UX improvements that would address the biggest pain points and partner up with a UX Designer to propose solutions for them.
Having a good understanding of how your product is being used is critical to making good product decisions. You can greatly help your PM by gathering and presenting such data. Similar to how you can use Real User Monitoring to measure UI performance, you can leverage it to collect user behavior metrics. Examples of such metrics include: the number of visits to a specific route, time spent on a specific route, number of clicks on a specific button, etc. With tools such as Sumo Logic you can later create dashboards and reports with the data you collected.
This is a no-brainer. By volunteering to lead team meetings, you showcase and develop skills such as organization, mediation, and keeping everyone engaged. Don’t hesitate to ask your Engineering Manager to let you lead one of the meetings - they’d gladly shed the responsibility. Be sure to prepare for the meeting in advance. Create an agenda and share it with everyone beforehand. Make sure that you stick to the schedule and cut lengthy discussions short. Collect notes and action items and send them out after the meeting.
One of the areas that often consume a lot of the Engineering Manager’s time and attention is handling all the incoming bugs and urgent requests. By taking over this responsibility, you will get better at managing chaos. You’ll learn how to better assess the real priority of an ask and to push back on the ones that are not urgent. Start small - talk to your Engineering Manager and ask for a trial period where they’d review your choices on a daily basis. Gradually, you’ll both realize that less and less supervision is needed.
Another part of the Engineering Manager’s job that you can greatly help with is nudging the owners of the dependencies of the projects your team is working on. The dependencies can include UX designs, new API (or modifications to an existing API), security review, or gathering requirements from all stakeholders. Firstly, it’s important to identify the dependencies early so that other teams can plan the work in advance. Secondly, you should actively monitor progress to make sure that when you start working on the implementation, you won’t get blocked on some missing pieces.
While process may sound scary to you, it’s just a name for a set of instructions that tell everyone how to behave in a certain situation. It’s like programming, but with people instead of code :) Processes make the team better organized, help build good practices, and reduce ambiguity. You can propose a process for virtually anything: adding a new code dependency to the repository, handling customer escalations, onboarding a new team member, adding a new module to the repository. Create a document with a process description and share it with your colleagues so that they can provide their input. Design the process in a way in which it is easily enforceable. Once introduced, monitor whether the process is working as designed and look for room for improvement.
One of the most obvious responsibilities of a Senior Engineer is being able to grow the ones that you’re working with. Set up a 1-1 with another team member. Discuss their current challenges and where they’d like to be in a year from now. Brainstorm together on how they can get there. Make sure that the work they’re doing is visible to your Engineering Manager.
As mentioned in the beginning, these ideas assume that you are working at a tech company. At such companies, developers usually have a lot of autonomy and are expected to make an impact beyond writing code. You may encounter pushback when trying to implement some of these ideas in a traditional company where structures are more hierarchical and responsibilities are assigned to specific roles in a stricter way. However, don’t be discouraged. I managed to do a lot of these things as an Architect in a traditional company. Sometimes, it just takes a bit of convincing.
In this article, we discussed what it means to be a Senior Frontend Engineer. I mentioned three different areas where you can demonstrate your seniority and listed a few ideas in each category. Hopefully, you’ll find them useful! Importantly, most of these ideas will not only increase your chances of promotion but will also help you develop new skills. Let me know if you tried any of them and how they worked.
It’s been a long time since I wrote on this blog. One of the reasons is my transition to an Engineering Manager role, which happened about half a year ago. It’s been an interesting time, which certainly changed a lot in how I approach software development. In this article, I’d like to focus on how my approach to TypeScript changed during that time.
Being an Engineering Manager at the company I work in encompasses multiple responsibilities such as planning, coordinating with Product Manager and User Experience folks, taking major technical decisions, helping a Team member grow, etc. It’s a breadth-first role, as opposed to Software Engineer, which is a depth-first role. The natural consequence of this is that I have less time to focus on the technical aspects, such as static typing.
What’s more, as an EM I have to constantly balance how much effort is spent on the product backlog versus technical debt. I realized that even in a company with a strong engineering culture, prioritizing tech debt is tricky. There are always a lot of things to fix: increasing test coverage, fixing performance, refactoring the code after bad design choices. Improving static typing competes with a lot of other items, and sometimes compromises have to be made. As a result, I’m now definitely more cautious when evaluating whether it’s worth spending a lot of time on improving typing in an area of code.
On the other hand, as an EM I get more visibility into how static typing affects the Team’s performance. And I can confirm that the majority of my SWE-era hypotheses about static typing were correct. My Team works with different parts of the product, and we have to deal with noticeably more bugs in the areas that are poorly typed. What’s more, poorly typed code is far less readable and more error-prone. I get regular complaints from the Team when they have to deal with such code. Also, onboarding new Team members is far easier when the code is typed correctly.
One immensely helpful thing is building a Team culture that values static typing. Folks on my Team put a lot of focus on this when reviewing code, always asking why someone typed some variable as any (and they’d better have a really good reason) and suggesting more precise types whenever possible. This allows the team to minimize the tech debt created around static typing so that there’s less to clean up later.
Wrapping up, my perspective on static typing has certainly changed. I became more pragmatic about it, always evaluating the effort vs the benefit of complex type refactorings. However, fundamentally, I still believe in the importance of static typing and I would never encourage poor typing. Being an Engineering Manager allowed me to see the benefits of static typing at scale.
The concept of generics is not a very new one - it has been present in different programming languages (such as Java, C#, or C++) for a long time. However, for folks without a background in a statically typed language, generics might appear complicated. Therefore, I’m not going to make any assumptions and will explain generics completely from scratch.
Let’s say you are adding types to some JavaScript codebase and you encounter this function:
```js
function getNames(persons) {
```
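The full function might look like this - a hedged reconstruction; the surrounding text implies a manual loop that Array.map can replace:

```js
function getNames(persons) {
  const names = [];
  for (const person of persons) {
    names.push(person.name);
  }
  return names;
}
```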
Typing this function is straightforward. It accepts an array of person objects as a parameter and returns an array of names (strings). For the person object, you can either create a Person
interface or use one that you’ve already created.
```ts
interface Person {
  name: string;
  // ...plus whatever other fields your data has
}
```
Next, you notice that you don’t actually need this function. Instead, you can use the built-in Array.map
method.
```ts
const persons: Person[] = [
```
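Completed (a hedged sketch):

```ts
const persons: Person[] = [{ name: "John" }, { name: "Alice" }];

const names = persons.map((person) => person.name);
```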
Hmm, but what about types? You check the type of names and realize that it has been correctly inferred as string[]! How does TypeScript achieve such an effect?
To properly understand this, let’s try to type the following implementation of map
function.
```js
function map(items, mappingFunction) {
```
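A hedged reconstruction of the untyped implementation:

```js
function map(items, mappingFunction) {
  const results = [];
  for (const item of items) {
    results.push(mappingFunction(item));
  }
  return results;
}
```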
The main issue with typing map
is that you don’t know anything about the type of the elements of the array it will be called with. What makes map
so cool is that it works with any kind of array!
```ts
// Works with array of Persons
```
any!

As a first step, let’s try using the any type to type the map function.
```ts
function map(items: any[], mappingFunction: (item: any) => any): any[] {
```
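Completed (a hedged sketch):

```ts
function map(items: any[], mappingFunction: (item: any) => any): any[] {
  const results: any[] = [];
  for (const item of items) {
    results.push(mappingFunction(item));
  }
  return results;
}
```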
Let’s break this down. map
has two parameters. The type of the first one (items
) is any[]
. We tell the type system that we want items
to be an array, but we don’t care about the type of those items. The type of the second parameter (mappingFunction
) is a function that takes any
and returns any
. Finally, the return type is again any[]
- an array of anything.
Did we gain anything by doing this? Sure! TypeScript now won’t allow us to call map
with some nonsensical arguments:
```ts
// 🔴 Error: 'hello' is not an array
```
Unfortunately, the types we provided are not precise enough. The purpose of TypeScript is to catch possible runtime errors earlier, at compile-time. However, the following calls won’t give any compile errors.
```ts
// The second argument is a function that only works on numbers, not on `Person` objects.
```
How can we improve the typing of map so that the above examples would result in a compile-time error? Enter generics.

A generic function is (in this case) a way of saying “this function works with any kind of array” while maintaining type safety at the same time.
```ts
function map<TElement, TResult>(
```
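The full generic version (a hedged sketch):

```ts
function map<TElement, TResult>(
  items: TElement[],
  mappingFunction: (item: TElement) => TResult
): TResult[] {
  const results: TResult[] = [];
  for (const item of items) {
    results.push(mappingFunction(item));
  }
  return results;
}
```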
We replaced any
with TElement
and TResult
type parameters. Type parameters are like named any
s. Typing items
as TElement[]
still means that it is an array of anything. However, because it’s named, it lets us establish relationships between types of function parameters and the return type.
Here, we’ve just expressed the following relationships:

- mappingFunction takes anything as a parameter, but it must be the same kind of “anything” as the type of the elements of the items array
- mappingFunction can return anything, but whatever type it returns, it will be used as the type of the elements of the array returned by the map function

The picture below demonstrates these relationships. Shapes of the same color have to be of the same type.
You might have noticed the <TElement, TResult>
thing that we added next to map
. Type parameters have to be declared explicitly using this notation. Otherwise, TypeScript wouldn’t know if TElement
is a type argument or an actual type.
BTW, for some reason, it is a common convention to use single-character names for type parameters (with a strong preference for T). I’d strongly recommend using full names, especially when you are not that experienced with generics. On the other hand, it’s a good idea to prefix type parameters with T, so that they’re easily distinguishable from regular types.
How to call a generic function? As we saw, generic functions have type parameters. These parameters are replaced with actual types “when” the function is called (technically, it’s all happening at compile-time). You can provide the actual types using angle brackets notation.
```ts
map<Person, string>(persons, person => person.name);
```
Imagine that by providing type arguments TElement
and TResult
become replaced with Person
and string
.
```ts
function map<TElement, TResult>(
```
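Conceptually, after the substitution - a hedged illustration, not actual syntax the compiler produces:

```ts
function map(
  items: Person[],
  mappingFunction: (item: Person) => string
): string[] {
  const results: string[] = [];
  for (const item of items) {
    results.push(mappingFunction(item));
  }
  return results;
}
```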
Having to provide type arguments, when calling generic functions would be cumbersome. Fortunately, TypeScript can infer them by looking at the types of the arguments passed to the function. Therefore, we end up with the following code.
```ts
const names = map(persons, person => person.name);
```
Whoohoo! It looks exactly like the JavaScript version, except it’s type-safe! Contrary to the first version of map, the type of names is string[] instead of any[]. What’s more, TypeScript is now capable of throwing a compile error for the following call.

```ts
// 🔴 Error! Operator '+' cannot be applied to Person and 5.
```
Here is a very simplified sequence of steps that leads the compiler to throw an error.

1. The compiler looks at the type of persons. It sees Person[].
2. According to the definition of map, the type of the first parameter is TElement[]. The compiler deduces that TElement is Person.
3. The mapping function therefore maps Person to TResult. The compiler doesn’t know what TResult is yet.
4. It infers that the type of n is Person.
5. The mapping function tries to add 5 to n, which is of type Person. This doesn’t make sense, so it throws an error.
As exemplified by map
, functions that take arrays as parameters are often generic functions. If you look at the typings for lodash
library, you will see that nearly all of them are typed as generic functions. Such functions are only interested in the fact that the argument is an array, they don’t care about the type of its elements.
In React framework, Higher Order Components are generic functions, as they only care about the argument being a component. The type of the component’s properties is not important.
In RxJS, most operators are generic functions. They care about the input being an Observable, but they’re not interested in the type of the values emitted by the observable.
Wrapping up: type parameters are like named any types, except they can be used to express relationships between function parameters and the return type.

I hope this article helped you finally understand generic functions. If not, please let me know!
WE JUST MOVED OPTIONAL CHAINING IN JS TO STAGE 3 🎉🎉🎉🎉🎉🎉🎉
— Daniel Rosenwasser (@drosenwasser) July 25, 2019
Today I got to present nullish coalescing at TC39 and it progressed to stage 3! The cherry on top? @rkirsling already has a patch out for it in JavaScriptCore! https://t.co/o3jHs2Ieo9
— Daniel Rosenwasser (@drosenwasser) July 24, 2019
What it means to us, developers, is that two new exciting language features will soon become part of the ECMAScript standard.
Let’s have a quick look at these additions and see how to take advantage of them.
TC39 is a group of people that drives the development of the ECMAScript (the standard of which JavaScript language is an implementation). They meet regularly to discuss proposals of new language features. Every proposal goes through a number of stages. Once it reaches Stage 4, it is ready to be included in the next version of the ECMAScript standard.
When a proposal reaches Stage 3, it is already quite mature. The specification has been approved and is unlikely to change. There might already be some browsers implementing the new feature. While Stage 3 proposal is not guaranteed to become part of the standard, it’s very likely to.
The two proposals we’re looking at are:

- Optional chaining (the ?. operator)
- Nullish coalescing (the ?? operator)
Optional chaining aims to provide a nice and short syntax for a very common pattern: accessing a nested property of an object in a safe way.
```js
const customers = [
```
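A hedged sketch of such data - the second and third customers are missing parts of the nested structure:

```js
const customers = [
  { name: "Alice", company: { name: "Acme Inc." } },
  { name: "Bob", company: {} },
  { name: "Carol" },
];
```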
This array contains objects representing customers. They all follow a similar structure, but some of the properties are optional. Let’s say we’d like to iterate over the array and print the company name in upper case for each customer.
```js
for (let customer of customers) {
  console.log(customer.company.name.toUpperCase());
}
```
As you might have guessed, the above code is not safe. It will result in runtime errors for the second and the third array elements. We can fix it by using the following popular pattern.
```js
console.log(
```
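The full defensive chain might look like this (a hedged sketch):

```js
console.log(
  customer &&
    customer.company &&
    customer.company.name &&
    customer.company.name.toUpperCase()
);
```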
The logical and operator (&&) in JavaScript behaves differently than in most programming languages. It works on values of any type, not only booleans. a && b translates to: if a is falsy (can be converted to false), return a. Otherwise, return b.
Unfortunately, this solution is rather verbose. There is a lot of repetition and it gets worse the deeper the objects are nested. What’s more, it checks for a value to be falsy, not null
or undefined
. Therefore, it would return 0
for the following object, while it might be preferable to return undefined
instead.
1 | { |
Optional chaining comes to the rescue! With this new feature, we can shorten the above piece to a single line.
1 | customer?.company?.name?.toUpperCase(); |
The customer?.company expression will check whether customer is null or undefined. If this is the case, it will evaluate to undefined. Otherwise, it will return company. In other words, customer?.company is equivalent to customer != null ? customer.company : undefined. The new ?. operator is particularly useful when chained, hence the name (optional chaining).
Be careful when replacing existing && chains with the ?. operator! Bear in mind the subtle difference in treatment of falsy values.
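To make the difference concrete, here is a small illustration (my example, not from the original snippets) with a falsy but non-nullish value:
1 | const count: number = 0;
2 |
3 | count && count.toFixed(2); // evaluates to 0 - && bails out on any falsy value
4 | count?.toFixed(2);         // evaluates to "0.00" - ?. only bails out on null/undefined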
The second proposal introduces ??
operator, which you can use to provide a default value when accessing a property/variable that you expect may be null
or undefined
.
But hey, why not simply use || for this? Similarly to &&, logical or can operate on non-boolean values as well. a || b returns a if it’s truthy, or b otherwise.
However, it comes with the same problem as &&
- it checks for a truthy value. For example, an empty string (''
) will not be treated as a valid value and the default value will be returned instead.
1 | const customer = { |
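The snippet above is truncated in this version of the article; the idea it illustrates, reconstructed with an assumed object shape, is:
1 | const customer = { company: { name: "" } };
2 |
3 | customer.company.name || "no company"; // "no company" - || rejects the empty string
4 | customer.company.name ?? "no company"; // "" - ?? keeps it, since it's not null/undefined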
Nullish coalescing operator can be nicely combined with optional chaining.
1 | (customer?.company?.name ?? "no company").toUpperCase(); |
While the benefit of optional chaining is clear (less verbose code), nullish coalescing is a little bit more subtle. We’ve all been using ||
for providing a default value for a long time. However, this pattern can potentially be a source of nasty bugs, when a falsy value is skipped in favour of the default value. In most cases, the semantics of ??
is what you’re actually looking for.
Since those proposals have not reached Stage 4 yet, you need to transpile the code that uses them (for example with Babel). You can play with Babel’s on-line REPL to see what they get compiled to.
At the moment of writing, optional chaining is available in Chrome behind a feature flag.
Optional chaining will also be available in the upcoming TypeScript 3.7 release!
Recent ECMAScript versions didn’t bring many syntactical additions to the language. It’s likely to change with the next edition. Some people say that JavaScript is getting bloated. I personally think that these two pieces of syntactic sugar are long overdue, as they’ve been available in many modern programming languages and they address real-life, common development scenarios.
What do you think? 😉
In this article we will implement a common data fetching scenario with the useReducer hook. We will see how to take advantage of TypeScript’s discriminated unions to correctly type reducer’s actions. Finally, we will introduce a useful pattern for representing the state of data fetching operations.
This post was originally published on the SumoLogic company blog.
We will base our code on an example from the official React documentation. The demo linked from this article is a simple implementation of a very common pattern - fetching a list of items from some backend service. In this case, we’re fetching a list of Hacker News article headers.
What functionality is missing in this little demo? When fetching data from the backend, it’s useful to indicate to the user that an operation is in progress and, if the operation fails, to show the error to the user. Neither is included in the demo as coded.
The attached code uses useState
hook to store the list of items after it is retrieved. We will need two additional pieces of state to implement our enhancements - a boolean indicating whether an action is in progress and an optional string containing the error message.
We could use more useState
hooks to store this information. There’s a better way to do this, though. Notice that we’re modifying multiple pieces of state at the same time as a result of certain actions. For example, when data is retrieved from the backend, we update both the data piece and the loading indicator piece. What’s more, we’re modifying the state in multiple places. Wouldn’t it be cleaner and easier to follow if there was only a single place in the component where the state is updated?
We can achieve it by using the useReducer
hook. It will allow us to centralize all state modifications, making them easier to track and reason about.
useReducer
Let’s take a look at useReducer
’s type signature (I simplified the code slightly by taking the initializer out of the picture).
1 | function useReducer<R extends Reducer<any, any>>( |
Our hook is a function (yes, React hooks are just functions). It accepts two parameters: reducer
and initialState
.
The first parameter’s type must extend Reducer<any, any>
. Reducer
is just an alias for a function type that takes a state object and an action and returns an updated version of the state object. In other words, reducer describes how to update the state based on some action.
1 | type Reducer<S, A> = (prevState: S, action: A) => S; |
The second parameter allows us to provide the initial version of the state. ReducerState
is a conditional type that extracts the type of the state from Reducer
type.
Finally, useReducer
returns a tuple. The first element in the tuple is the recent version of the state object. We will render our component based on values contained in this object. The second item is a dispatch function. It is a function that will let us dispatch actions that will trigger state changes. Similarly to ReducerState
, ReducerAction
extracts action type from Reducer
.
Behold the power of static typing - reading a well-typed function’s signature is often enough to understand its purpose.
Now is the time to fill in the gaps and define types representing state and actions.
It was already mentioned that apart from the data received from the server, we’re also going to store a flag indicating whether we’re loading that data and an optional error message.
Therefore, the shape of state can be described with the following types:
1 | type State = { |
HNResponse
interface is based on the response received from https://hn.algolia.com/api/v1/search endpoint which we’re going to use in this example. It is a free service that returns headers of Hacker News articles.
Action is an object that represents some event in our application and results in a modification of the state. What kinds of actions are there in our app? There are three: a request action dispatched when fetching starts, a success action carrying the results received from the server, and a failure action carrying an error message.
How to represent the type of all these actions in TypeScript? We can take advantage of a very useful concept called discriminated union type.
1 | type Action = |
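The definition above is cut off in this extraction; based on the three actions described and the properties used later (action.results, the error message), it plausibly reads:
1 | type Action =
2 |   | { type: "request" }
3 |   | { type: "success"; results: HNResponse }
4 |   | { type: "failure"; error: string };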
Action is a union of three object types. What makes it special is the fact that all of those types have a common property called type. The type of this property in each interface is a different literal type. This lets us distinguish between those types.
Why is this useful? TypeScript creates automatic type guards for discriminated unions. This means that if we write an if statement in which we compare the type property of given Action object with a specific type (e.g. success
), the type of the object inside the statement’s body will be narrowed to the matching component of the union type.
For example, in the following code, accessing action.results will not cause a compile error because the type of action inside the body of the if statement will be appropriately narrowed!
1 | function display(action: Action) { |
We’re all good to implement the reducer. As already mentioned, it takes a state and an action and returns an updated state.
For request action, we’re going to set isLoading flag to true.
The success action will disable isLoading flag and also set data to the results received from the server.
Finally, failure action will also disable isLoading and set the error message.
1 | function reducer(state: State, action: Action): State { |
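The body is cut off above; a reducer matching the description (and the Action type sketched earlier) would look roughly like this:
1 | function reducer(state: State, action: Action): State {
2 |   switch (action.type) {
3 |     case "request":
4 |       return { ...state, isLoading: true };
5 |     case "success":
6 |       return { ...state, isLoading: false, data: action.results };
7 |     case "failure":
8 |       return { ...state, isLoading: false, error: action.error };
9 |   }
10 | }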
Thanks to discriminated unions, we can access action’s properties inside case blocks in a type-safe way.
All that is left is to pass our reducer to useReducer hook.
1 | const [{ data, isLoading, error }, dispatch] = useReducer(reducer, { |
I’m passing the reducer function to the hook along with the initial state, which has isLoading set to false and the remaining properties undefined. The result is a pair with the current state object as the first element (which I’m instantly destructuring) and the dispatch function as the second element.
Next, I need to update the usage of useEffect hook so that it dispatches relevant actions.
1 | useEffect(() => { |
Finally, we should update the JSX to take the new pieces of state into account and show the loading indicator and the error message when available.
1 | return ( |
The whole component implementation can be found here. Note that we’re still using useState
hook to store the query. This information is completely unrelated to the rest of the state, therefore there would be no advantage in including it in the state managed by useReducer
.
If we take a look at the interface representing the state of this component, we will notice that some combinations of properties are not valid.
For example, it is not possible that isLoading === true
while data
is not empty.
Similarly, error
and data
cannot be both defined at the same time.
How can we improve this? Let’s use discriminated unions again!
1 | type State = |
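The refactored definition is truncated above; one shape consistent with the discussion (the exact member names are assumed) is:
1 | type State =
2 |   | { status: "idle" }
3 |   | { status: "loading" }
4 |   | { status: "success"; data: HNResponse }
5 |   | { status: "failure"; error: string };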
Why is this approach better? Because it makes illegal states unrepresentable. The previous interface definition allowed certain combinations of property values even though we were sure that they would never occur in reality.
In a more complex component this could force us to add some type casts or handle impossible situations. Thanks to discriminated unions we can eliminate this issue.
It is generally a good idea to make your types match reality as closely as possible.
Please find the updated implementation here.
In this article we have looked at typing the useReducer React hook in a real-world scenario. What’s more, we’ve learned some patterns of typing state and actions using discriminated unions. Finally, we’ve seen how advanced TypeScript features can help make our types more precise.
Update: check out this blog post to see how to further improve the above solution using the AsyncResult
pattern.
However, there are some situations when deeper understanding of hooks’ types might prove very useful. In this article, we’re going to focus on the useState
hook.
I’m going to assume that you have a basic understanding of this hook. If this is not the case, please read this first.
Here is a quick preview of the useState typing patterns discussed below - an explicit type argument with no initial state, a union that includes undefined, and a union of literal types:
1 | const [state, setState] = useState<string>(undefined); |
1 | const [state, setState] = useState<string | undefined>("hello"); |
1 | const [state, setState] = useState<"hello" | "bye">("hello"); |
First of all, let’s take a look at the type signature of useState
. You’ll see how much information you can extract solely from types, without looking at the docs (or the implementation).
If you’re only interested in practical examples, skip to the next section.
1 | function useState<S = undefined>(): [ |
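The declaration above is cut off; for reference, the two overloads in the React type definitions look roughly like this:
1 | function useState<S = undefined>(): [
2 |   S | undefined,
3 |   Dispatch<SetStateAction<S | undefined>>
4 | ];
5 | function useState<S>(
6 |   initialState: S | (() => S)
7 | ): [S, Dispatch<SetStateAction<S>>];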
As you can see, there are two versions of useState
function. TypeScript lets you define multiple type signatures for a function as it is often the case in JavaScript that a function supports different types of parameters. Multiple type signatures for a single function are called overloads.
Both overloads are generic functions. The type parameter S
represents the type of the piece of state stored by the hook. The type argument in the second overload can be inferred from initialState
. However, in the first overload, it defaults to undefined
unless the type argument is explicitly provided. If you don’t pass initial state to useState
, you should provide the type argument explicitly.
useState Hook Parameters
The first overload doesn’t take any parameters - it’s used when you call useState
without providing any initial state.
The second overload accepts initialState
as a parameter. Its type is a union of S
and () => S
. Why would you pass a function that returns initial state instead of passing the initial state directly? Computing initial state can be expensive. It’s only needed when the component is mounted. However, in a function component, it would be calculated on every render. Therefore, you have an option to pass a function that calculates initial state - the expensive computation will only be executed once, not on every render.
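For instance (an illustrative sketch - buildInitialTodos is an assumed helper):
1 | declare function buildInitialTodos(): string[]; // assumed expensive helper
2 |
3 | // Eager: buildInitialTodos() runs on every render (its result is ignored after mount).
4 | const [todosEager] = useState(buildInitialTodos());
5 |
6 | // Lazy: the initializer function runs only once, when the component mounts.
7 | const [todosLazy] = useState(() => buildInitialTodos());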
useState Hook Return Type
Let’s move to the return type. It’s a tuple in both cases. A tuple is like an array that has a specific length and contains elements with specific types.
For the second overload, the return type is [S, Dispatch<SetStateAction<S>>]
. The first element of the tuple has type S
- the type of the piece of state. It will contain the value retrieved from the component’s state.
The second element’s type is Dispatch<SetStateAction<S>>
. Dispatch<A>
is simply defined as (value: A) => void
- a function that takes a value and doesn’t return anything. SetStateAction<S>
is defined as S | ((prevState: S) => S)
. Therefore, the type of Dispatch<SetStateAction<S>>
is actually (value: S | ((prevState: S) => S)) => void
. It is a function that takes either an updated version of the piece of state OR a function that produces the updated version based on the previous version. In both cases, we can deduce that the second element of the tuple returned by useState is a function that we can call to update the component’s state.
The return type of the first overload is the same, but here instead of S
, S | undefined
is used everywhere. If we don’t provide initial state, it will store undefined
initially. It means that undefined
has to be included in the type of the piece of state stored by the hook.
Most of the time you don’t need to bother with providing type arguments to useState
- the compiler will infer the correct type for you. However, in some situations type inference might not be enough.
The first type of situation is when you don’t want to provide initial state to useState
.
As we saw in the type definition, the type argument S
for the parameterless overload defaults to undefined
. Therefore, the type of pill
should be inferred to undefined
. However, due to a design limitation in TypeScript, it’s actually inferred to any
.
Similarly, setPill
‘s type is inferred to React.Dispatch<any>
. It’s really bad, as nothing would stop us from calling it with an invalid argument: setPill({ hello: 5 })
.
1 | export const PillSelector: React.FunctionComponent = () => { |
In order to fix this issue, we need to pass a type argument to useState. We treat pill as text in JSX, so our first bet could be string
. However, let’s be more precise and limit the type to only allow values that we expect.
1 | const [pill, setPill] = useState<"red" | "blue">(); |
Note that the inferred type of pill
is now "red" | "blue" | undefined
(because this piece of state is initially empty). With strictNullChecks
enabled TypeScript wouldn’t let us call anything on pill
:
1 | // 🛑 Object is possibly 'undefined'.ts(2532) |
…unless we check the value first:
1 | // ✅ No errors! |
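Spelled out, the guarded call looks like this:
1 | if (pill) {
2 |   pill.toUpperCase(); // ✅ pill is narrowed to "red" | "blue" here
3 | }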
Another kind of situation when you would provide a type argument to useState
is when initial state is defined, but you want to be able to clear the state later.
1 | export const PillSelector: React.FunctionComponent = () => { |
Since initial state is passed to useState
, the type of pill
gets inferred to string
. Therefore, when you try to pass undefined
to it, TypeScript will error.
You can fix the problem by providing the type argument.
1 | const [pill, setPill] = useState<"blue" | "red" | undefined>("blue"); |
Wrapping up, we’ve analysed the type definitions of useState
function thoroughly. Based on this information, we saw when providing the type argument to useState
might be necessary and when the inferred type is sufficient.
I like how hooks are great example of how much information can be read from type definitions. They really show off the power of static typing!
As it turns out, there are more cases in which propagation of generic type arguments is desirable. One of them is passing a generic component to a Higher Order Component in React.
.@orta Do you know how to Propagate Generics across higher-order-components?
— Frederic Barthelemy (@fbartho) July 26, 2019
I have a generic component A, whose props is type Props = PubProps<B> & InjectedProps
I have a bind()
-HoC that injects InjectedProps
I want return value from bind:
APrime<B> accepting PubProps<B>
The post is inspired by the problem Frederic Barthelemy tweeted about and asked me to have a look at.
I’m not going to give a detailed explanation, as there are already plenty to be found on the internet. Higher Order Component (HOC) is a concept of the React framework that lets you abstract cross-cutting functionality and provide it to multiple components.
Technically, HOC is a function that takes a component and returns another component. It usually augments the source component with some behavior or provides some properties required by the source component.
Here is an example of a HOC in TypeScript:
1 | const withLoadingIndicator = |
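The snippet is truncated in this version; an implementation consistent with the description below might look like this (a sketch, not necessarily the exact original code):
1 | const withLoadingIndicator = <P,>(
2 |   Component: React.ComponentType<P>
3 | ): React.ComponentType<P & { isLoading: boolean }> =>
4 |   (props) =>
5 |     props.isLoading ? <div>Loading...</div> : <Component {...props} />;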
As you can deduce from the type signature, withLoadingIndicator
is a function that accepts a component with P
-shaped properties and returns a component that additionally has isLoading
property. It adds the behavior of displaying loading indicator based on isLoading
property.
If you’re having trouble understanding this example, check out my explanation of TypeScript generics.
So far so good. However, let’s imagine that we have a generic component Header
:
1 | class Header<TContent> extends React.Component<HeaderProps<TContent>> {} |
…where HeaderProps
is a generic type that represents Header
‘s props given the type of associated content (TContent
):
1 | type HeaderProps<TContent> = { |
Next, let’s use withLoadingIndicator
with this Header
component.
1 | const HeaderWithLoader = withLoadingIndicator(Header); |
The question is, what is the inferred type of HeaderWithLoader
? Unfortunately, it’s React.ComponentType<HeaderProps<unknown> & { isLoading: boolean; }>
in TypeScript 3.4 and later or React.ComponentType<HeaderProps<{}> & { isLoading: boolean; }>
in previous versions.
As you can see, HeaderWithLoader
is not a generic component. In other words, the generic type argument of Header
was not propagated. Wait… doesn’t TypeScript 3.4 introduce generic type argument propagation?
Actually, it does. However, it only works for functions. Header
is a generic class, not a generic function. Therefore, the improvement introduced in TypeScript 3.4 doesn’t apply here ☹️
Fortunately, we have function components in React. We can make type argument propagation work if we limit withLoadingIndicator
to only work with function components.
Unfortunately, we cannot use FunctionComponent
type since it is defined as an interface, not a function type. However, a function component is nothing else but a generic function that takes props and returns React.ReactElement
. Let’s define our own type representing function components.
1 | type SimpleFunctionComponent<P> = (props: P) => React.ReactElement; |
By using SimpleFunctionComponent
instead of FunctionComponent
we lose access to properties such as defaultProps
, propTypes
, etc., which we don’t need anyway.
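With the alias in place, the restricted HOC can be typed along these lines (again a sketch, not the verbatim repository code):
1 | const withLoadingIndicator = <P,>(Component: SimpleFunctionComponent<P>) =>
2 |   (props: P & { isLoading: boolean }): React.ReactElement =>
3 |     props.isLoading ? <div>Loading...</div> : Component(props);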
Obviously, we need to change Header
to be a function component, not a class component:
1 | declare const Header: <TContent>( |
We wouldn’t be able to use FunctionComponent
here anyway, since Header
is a generic component.
Let’s now take a look at the inferred type of HeaderWithLoader
. It’s…
1 | <TContent>(props: HeaderProps<TContent> & { isLoading: boolean }) => React.ReactElement |
…which looks very much like a generic function component!
Indeed, we can use HeaderWithLoader
as a regular component in JSX:
1 | class Foo extends React.Component { |
Most importantly, HeaderWithLoader
is typed correctly!
As you can see, typing HOCs in React can get tricky. The proposed solution is really a workaround - ideally, TypeScript should be able to propagate generic type arguments for all generic types (not only functions).
Anyway, this example demonstrates how important it is to stay on top of the features introduced in new TypeScript releases. Before version 3.4, it wouldn’t be even possible to get this HOC typed correctly.
I present to you this list of high-level TypeScript best practices that will help you take advantage of TypeScript to the fullest possible extent.
This article is also available in Russian: 5 заповедей TypeScript-разработчика (by Vadim Belorussov).
Types are a contract. What does it mean? When you implement a function, its type is a promise to other developers (or to your future self!) that when they call it, it will return a specific kind of value.
In the following example, the type of getUser
promises that it will return an object that will always have two properties: name
and age
.
1 | interface User { |
TypeScript is a very flexible language. It’s full of compromises made in order to make its adoption easier. For example, it allows you to implement getUser
like this:
1 | function getUser(id: number): User { |
Don’t do this! It’s a LIE. By doing this, you LIE to other developers (who will use this function in their functions). They expect the object returned by getUser
to always have some name
. But it doesn’t! Then, what happens when your teammate writes getUser(1).name.toString()
? You know it well…
Of course, this lie seems very obvious. However, when working with a huge codebase, you will often find yourself in a situation when a value you want to return (or pass) almost matches the expected type. Figuring out the reason for type mismatch takes time and effort and you are in a hurry… so you decide to cast.
However, by doing this, you violate the holy contract. It’s ALWAYS better to take time to figure out why types do not match than to do the cast. It’s very likely that some runtime bug is lurking under the surface.
Don’t lie. Respect your contracts.
Types are documentation. When you document a function, don’t you want to convey as much information as possible?
1 | // Returns an object |
Which comment for getUser
would you prefer? The more you know about what it returns, the better. For example, knowing that it could return undefined
, you can write an if
statement to check if the value it returned is defined before accessing its properties.
It’s exactly the same with types. The more precise a type is, the more information it conveys.
1 | function getUserType(id: number): string { /* ... */ } |
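The more precise second version referred to below did not survive this extraction; it presumably returned a union of literal types, something like this (the exact strings are assumed):
1 | function getUserType(id: number): "admin" | "premium" | "standard" { /* ... */ }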
The second version of getUserType
is much more informative, and hence it puts the caller in a much better situation. It’s easier to handle a value if you know that it is for sure (contracts, remember?) one of the three strings than if it can be any string. For starters, you know for sure that the value is not an empty string.
Let’s see a more realistic example. State
type represents the state of a component that fetches some data from the backend. Is this type precise?
1 | interface State { |
The consumer of this type must handle some unlikely combinations of property values. For example, it’s not possible that both data
and errorMessage
will be defined (data fetching can either be successful or result in an error).
We can make the type much more precise with the help of discriminated union types:
1 | type State = |
Now, the consumer of this type has much more information. They don’t need to handle illegal combinations of property values.
Be precise. Convey as much information as possible in your types.
Since types are both contract and documentation, they’re great for designing your functions (or methods).
There are many articles on the internet that advise software engineers to think before they write code. I totally agree with this approach. It’s tempting to jump straight into code, but it often leads to some bad decisions. Spending some time thinking about the implementation always pays off.
Types are super helpful in this process. Thinking can result in writing down the type signatures of functions involved in your solution. It’s awesome because it lets you focus on what your functions do instead of how they achieve it.
React JS has a concept of Higher Order Components. They are functions that augment given component in some way. For example, you could create a withLoadingIndicator
Higher Order Component that adds a loading indicator to an existing component.
Let’s write the type signature for this function. It takes a component and returns a component. We can use React’s ComponentType
to indicate a component.
ComponentType
is a generic type parameterized by the type of properties of the component. withLoadingIndicator
takes a component and returns a new component that either shows the original component or shows a loading indicator. The decision is made based on the value of a new boolean property isLoading
. Therefore, the resulting component should require the same properties as the original component plus the new property.
Let’s finalize the type. withLoadingIndicator
takes a component of type ComponentType<P>
where P
denotes the type of the properties. It returns a component with augmented properties of type P & { isLoading: boolean }
.
1 | const withLoadingIndicator = <P>(Component: ComponentType<P>) |
Figuring out the type of this function forced us to think about its input and its output. In other words, it made us design it. Writing the implementation is a piece of cake now.
Start with types. Let types force you to design before implementing.
The first three points require you to pay a lot of attention to types. Fortunately, you are not alone in the task - TypeScript compiler will often let you know when your types lie or when they’re not precise enough.
You can make the compiler even more helpful by enabling --strict
compiler flag. It is a meta flag that enables all strict type checking options: --noImplicitAny
, --noImplicitThis
, --alwaysStrict
, --strictBindCallApply
, --strictNullChecks
, --strictFunctionTypes
and --strictPropertyInitialization
.
What do they do? In general, enabling them results in more TypeScript compiler errors. This is good! More compiler errors mean more help from the compiler.
Let’s see how enabling --strictNullChecks
helps you identify a lie.
1 | function getUser(id: number): User { |
The type of getUser
promises that it will always return a User
. However, as you can see in the implementation, it can also return an undefined
value!
Fortunately, enabling --strictNullChecks
results in a compiler error:
1 | Type 'undefined' is not assignable to type 'User'. |
TypeScript compiler detected the lie. You can get rid of the error by telling the truth:
1 | function getUser(id: number): User | undefined { /* ... */ } |
Embrace type checking strictness. Let the compiler watch your steps.
TypeScript language is being developed at a very fast pace. There is a new release every two months. Each release brings in significant language improvements and new features.
It is often the case that new language features allow for more precise types and stricter type checking.
For example, version 2.0 introduced Discriminated Union Types (which I mentioned in Be precise).
Version 3.2 introduced --strictBindCallApply
compiler option which enables correct typing of bind
, call
and apply
functions.
Version 3.4 improved type inference in higher order functions, making it easier to use precise type when writing code in functional style.
My point here is that it really pays off to be familiar with language features introduced in the latest releases of TypeScript. They can often help you adhere to the other four commandments from this list.
A good starting point is the official TypeScript roadmap. It’s also a good idea to check out TypeScript section of the Microsoft Devblog regularly as all release announcements are made there.
Stay up to date with new language features and make them work for you.
I hope you find the list useful. Like anything in life, these commandments shouldn’t be followed blindly. However, I firmly believe those rules will make you a better TypeScript programmer.
I’d love to hear your thoughts about it in the comments section.
Did you ever wonder where those names come from? While you might have some intuition about what a union of two types is, the intersection is usually not understood well.
After reading this article, you will have a better understanding of those types which will make you more confident when using them in your codebases.
Union type is very often used with either null
or undefined
.
1 | const sayHello = (name: string | undefined) => { /* ... */ }; |
For example, the type of name
here is string | undefined
which means that either a string
OR an undefined
value can be passed to sayHello
.
1 | sayHello("milosz"); |
Looking at the example, you can intuit that a union of types A
and B
is a type that accepts both A
and B
values.
This intuition also works for complex types.
1 | interface Foo { |
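The interfaces are cut off above; judging by the properties used throughout the example, they look like this:
1 | interface Foo {
2 |   foo: string;
3 |   xyz: string;
4 | }
5 |
6 | interface Bar {
7 |   bar: string;
8 |   xyz: string;
9 | }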
Foo | Bar
is a type that has either all required properties of Foo
OR all required properties of Bar
. Inside sayHello
it’s only possible to access obj.xyz
because it’s the only property that is included in both types.
What about the intersection of Foo
and Bar
, though?
1 | const sayHello = (obj: Foo & Bar) => { /* ... */ }; |
Now sayHello
requires the argument to have both foo
AND bar
properties. Inside sayHello
it’s possible to access both obj.foo
, obj.bar
and obj.xyz
.
Hmm, but what does it have to do with intersection? One could argue that since obj
has properties of both Foo
and Bar
, it sounds more like a union of properties, not intersection. Similarly, a union of two object types gives you a type that only has the intersection of properties of constituent types.
It sounds confusing. I even stumbled upon a GitHub issue in the TypeScript repository ranting about the naming of these types. To understand the naming better, we need to look at types from a different perspective.
Do you remember a concept called sets from math classes? In mathematics, a set is a collection of objects (for example numbers). For example, {1, 2, 7}
is a set. All positive numbers can also form a set (an infinite one).
Sets can be added together (a union). A union of {1, 2}
and {4, 5}
is {1, 2, 4, 5}
.
Sets can also be intersected. Intersection of two sets is a set that only contains those numbers that are present in both sets. So, an intersection of {1, 2, 3}
and {3, 4, 5}
is {3}
.
Let’s imagine two sets: Squares
and RedThings
.
The union of Squares
and RedThings
is a set that contains both squares and red things.
However, the intersection of Squares
and RedThings
is a set that only contains red squares.
Computer science and mathematics overlap in many places. One of such places is type systems.
A type, when looked at from a mathematical perspective, is a set of all possible values of that type. For example the string
type is a set of all possible strings: {'a', 'b', 'ab', ...}
. Of course, it’s an infinite set.
Similarly, number
type is a set of all possible numbers: {1, 2, 3, 4, ...}
.
Type undefined
is a set that only contains a single value: { undefined }
.
What about object types (such as interfaces)? Type Foo
is a set of all object that contain foo
and xyz
properties.
Armed with this knowledge, you’re now ready to understand the meaning of union and intersection types.
Union type A | B
represents a set that is a union of the set of values associated with type A
and the set of values associated with type B
.
Intersection type A & B
represents a set that is an intersection of the set of values associated with type A
and the set of values associated with type B
.
Therefore, Foo | Bar
represents a union of the set of objects having foo
and xyz
properties and the set of objects having bar
and xyz
. Objects belonging to such set all have xyz
property. Some of them have foo
property and the others have bar
property.
Foo & Bar
represents an intersection of the set of objects having foo
and xyz
properties and the set of objects having bar
and xyz
. In other words, the set contains objects that belong to sets represented by both Foo
and Bar
. Only objects that have all three properties (foo
, bar
and xyz
) belong to the intersection.
Union types are quite widespread so let’s focus on an example of an intersection type.
In React, when you declare a class component, you can parameterise it with the type of its properties:
1 | class Counter extends Component<CounterProps> { /* ... */ } |
Inside the class, you can access the properies via this.props
. However, the type of this.props
is not simply CounterProps
, but:
1 | Readonly<CounterProps> & Readonly<{ children?: ReactNode; }> |
The reason for this is that React components can accept children elements:
1 | <Counter><span>Hello</span></Counter> |
The children element tree is accessible to the component via children
prop. The type of this.props
reflects that. It’s an intersection of (readonly) CounterProps
and a (readonly) object type with an optional children
property.
In terms of sets, it’s an intersection of the set of objects that have properties as defined in CounterProps
and the set of objects that have optional children
property. The result is a set of objects that have both all properties of CounterProps
and the optional children
property.
That’s it! I hope this article helps you wrap your head around union and intersection types. As it’s often the case in computer science, understanding the fundamentals makes you better at grasping programming concepts.
This post looks at the strictFunctionTypes compiler flag. It helps you avoid another class of bugs, and it’s also an excellent opportunity to learn about some fundamental computer science concepts: covariance, contravariance, and bivariance.
Strict function type checking was introduced in TypeScript 2.6. Its definition in the TypeScript documentation refers to an enigmatic term: bivariance.
Disable bivariant parameter checking for function types.
What does strictFunctionTypes catch?
First of all, let’s see an example of a bug that can be caught by enabling this flag.
In the following example, fetchArticle
is a function that accepts a callback to be executed after an article is fetched from some backend service.
1 | interface Article { |
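The snippet is truncated; a minimal setup consistent with the description would be:
1 | interface Article {
2 |   title: string;
3 | }
4 |
5 | declare function fetchArticle(
6 |   onSuccess: (article: Article) => void
7 | ): void;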
Interestingly, TypeScript with default settings compiles the following code without errors.
1 | interface ArticleWithContent extends Article { |
Unfortunately, this code can result in a runtime error. The function passed as a callback to fetchArticle
only knows how to deal with a specific subset of Article
objects - those that also have content
property.
However, fetchArticle
can fetch all kinds of articles - including those that only have title
defined. In such a case, r.content is undefined, and a runtime exception is thrown.
1 | TypeError: undefined is not an object (evaluating 'r.content.toLowerCase') |
Fortunately, enabling strictFunctionTypes
results in a compile-time error. A compile-time error is always better than a runtime error, as it surfaces before users run your code.
1 | Argument of type '(r: ArticleWithContent) => void' is not assignable to parameter of type '(article: Article) => void'. |
If you just wanted to learn what strictFunctionTypes
does, you can stop reading right now. However, I encourage you to follow along and learn some background behind this check.
First, let’s introduce a type to represent generic single-argument callbacks.
1 | type Callback<T> = (value: T) => void; |
The reason fetchArticle
shouldn’t accept callback
is that the callback is too specific. It only works on a subset of things that can be fed into it.
The type of callback
should not be assignable to the type of onSuccess
parameter.
In other words, the fact that ArticleWithContent
is assignable to Article
does not imply that Callback<ArticleWithContent>
is assignable to Callback<Article>
. If such implication were true, Callback
type would be covariant.
In our case, the opposite is true - Callback<Article>
is assignable to Callback<ArticleWithContent>
. That’s because a callback that can handle all articles is also able to handle ArticleWithContent
. Therefore, Callback
is contravariant.
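In code, the two directions look like this (an illustrative sketch):
1 | declare const general: Callback<Article>;
2 | declare const specific: Callback<ArticleWithContent>;
3 |
4 | // OK: a callback that handles any Article can handle the more specific type too.
5 | const a: Callback<ArticleWithContent> = general;
6 |
7 | // Error under strictFunctionTypes: the specific callback may access
8 | // properties (like content) that a plain Article doesn't have.
9 | const b: Callback<Article> = specific;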
If both implications were true at the same time, then Callback
would be bivariant.
Let’s now revisit the definition of strictFunctionTypes
.
Disable bivariant parameter checking for function types.
Does it make sense now? With the check enabled, function type parameter positions are checked contravariantly instead of bivariantly.
On a side note, some function types are excluded from strict function type checks - e.g., function arguments to methods and constructors are still checked bivariantly.
Wrapping up, strictFunctionTypes
is a useful compiler flag that helps you catch a class of bugs related to passing function arguments, such as callbacks.
The concept behind this flag is contravariance, which is a property of a type (type constructor, strictly speaking) that describes its assignability with respect to its type argument.
In fact, Anders Hejlsberg himself is excited about this feature 😉
Really excited about this one…https://t.co/bbTF9bdFO5
— Anders Hejlsberg (@ahejlsberg) March 4, 2019
So, what’s the meaning of this enigmatic update in TypeScript?
Note: at the moment of writing TypeScript 3.4 is still a Release Candidate. You can install it by running npm install typescript@next
.
Let’s have a look at the following example. Imagine that you’re fetching a collection of objects from some backend service and you need to map this collection to an array of identifiers.
1 | interface Person { |
Next, you decide to generalize getIds
function so that it works on any collection of objects having the id
property.
1 | function getIds<T extends Record<'id', string>>(elements: T[]) { |
Fair enough. However, the code for this simple function is quite verbose. Can we make it more concise?
Sure, we can take advantage of a functional programming technique called pointfree style. Ramda is a nice library that will let us compose this function from other functions: map
and prop
.
1 | import * as R from 'ramda'; |
map
is partially applied with a mapper function prop
which extracts the id
property from any object. The result of getIds
is a function that accepts a collection of objects. You can read a more detailed explanation in my article about pointfree style.
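The pointfree definition is truncated above; it is essentially:
1 | import * as R from "ramda";
2 |
3 | const getIds = R.map(R.prop("id"));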
Sadly, TypeScript (pre 3.4) has bad news for you. The type of getIds
is inferred to (list: {}) => {}
which is not exactly what you’d expect.
You can explicitly type map
but it makes the expression really verbose:
1 | const getIds = R.map<Record<'id', string>, string>(R.prop('id')); |
This is where propagated generic type arguments come in. In TypeScript 3.4 the type of getIds
will correctly infer to <T>(list: readonly Record<"id", T>[]) => T[]
. Success!
Now that we know what propagated generic type arguments are about, let’s decipher the name.
R.map(R.prop('id'))
is an example of a situation when we pass a generic function as an argument to another generic function.
Before version 3.4, TypeScript did not propagate the type parameters of the inner function to the result type of the call.
Even if you’re not particularly excited about pointfree style programming, bear in mind that some popular libraries that rely on function composition and partial application will also benefit from this change.
For example, in RxJS it is possible to compose new operators from existing ones using pipe
function (as opposed to pipe
method). TypeScript 3.4 will certainly improve typing in such scenarios.
Other examples include Redux (compose
for middleware) and Reselect.
The introduction of propagated generic type arguments has significant consequences for pointfree style programming. Before this update, using libraries such as ramda
or lodash/fp
with TypeScript was really cumbersome - you had to explicitly provide type arguments to certain calls which made the code far less readable.
TL;DR: Propagated generic type arguments pave the way for wider adoption of functional programming techniques in TypeScript.
Cover photo: JamesDeMers from Pixabay.
I was surprised to learn that there are more such types and some of them seem to be undocumented. This article contains a list of all such types.
The list is based on what I could find in es5.d.ts
on github.
Partial
Partial<T>
returns a type that has the same properties as T
but all of them are optional. This is mostly useful when strictNullChecks
flag is enabled.
Partial
works on a single level - it doesn’t affect nested objects.
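For example (my illustration):
1 | interface Profile {
2 |   name: string;
3 |   address: { city: string };
4 | }
5 |
6 | // { name?: string; address?: { city: string } }
7 | // city stays required inside address - Partial is shallow
8 | type PartialProfile = Partial<Profile>;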
A common use case for Partial
is when you need to type a function that lets you override default values of properties of some object.
1 | const defaultSettings: Settings = { /* ... */ }; |
Update:
However, this technique is not 100% type-safe. As pointed out by AngularBeginner, if custom
has a property that has been explicitly set to undefined
, the result will end up having this property undefined as well. Therefore, its type (Settings
) will be a lie.
A more type-safe version of getSettings
would look like this:
1 | function getSettings(custom: Partial<Settings>): Partial<Settings> { |
Required
Required<T>
removes optionality from T
‘s properties. Again, you’ll most likely need it if you have strictNullChecks
enabled (which you should 😉).
Similarly to Partial, Required works on the top level only.
The example is somehow symmetrical to the previous one. Here, we accept an object that has some optional properties. Then, we apply default values when a property is not present. The result is an object with no optional properties - Required<Settings>
.
1 | function applySettings(settings: Settings) { |
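The body is cut off above; with an assumed Settings shape, the idea looks like this:
1 | interface Settings {
2 |   width?: number;
3 |   height?: number;
4 | }
5 |
6 | function applySettings(settings: Settings): Required<Settings> {
7 |   return {
8 |     width: settings.width ?? 800,   // fall back to a default
9 |     height: settings.height ?? 600,
10 |   };
11 | }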
Readonly
This one you probably have heard of. Readonly<T>
returns a type that has the same properties as T
but they are all readonly
. It is extremely useful for functional programming because it lets you ensure immutability at compile time. An obvious example would be to use it for Redux state.
Once again, Readonly
doesn’t affect nested objects.
Pick
Pick
lets you create a type that only has selected properties of another type.
An example would be letting the caller override only a specific subset of some default properties.
1 | function updateSize(overrides: Pick<Settings, 'width' | 'height'>) { |
Record
Record
lets you define a dictionary with keys belonging to a specific type.
JavaScript objects can be very naturally used as dictionaries. However, in TypeScript you usually work with objects using interfaces where the set of keys is predefined. You can work around this by writing something like:
1 | interface Options { |
Record
lets you do this in a more concise way: type Options = Record<string, string>
.
Exclude
Exclude
makes a lot of sense when you look at types in terms of sets of possible values. For example, number
type can be looked at as a set containing all numbers. A | B
is called a union because its set of possible values is a sum of possible values of A
and possible values of B
.
Exclude<T, U>
returns a type whose set of values is the same as the set of values of type T
but with all U
values removed. It is a bit like subtraction, but defined on sets.
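A quick illustration:
1 | type Letters = "a" | "b" | "c";
2 | type WithoutA = Exclude<Letters, "a">; // "b" | "c"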
A good example of using Exclude
is to define Omit
. Omit
takes a type and one of its keys and returns a type without this key.
1 | interface Settings { |
Omit<T, K>
can be defined by picking all keys from T
except K
. First, we’ll Exclude
K
from the set of keys of T
. Next, we will use this set of keys to Pick
from T
.
1 | type Omit<T, K extends keyof T> = Pick<T, Exclude<keyof T, K>>; |
Extract
Extract<T, U>
returns those types included in T
that are assignable to U
. You can say that it returns a common part of T
and U
. However, the types don’t have to be exactly the same - it suffices that a type from T
is assignable to U
.
For example, you can use Extract
to filter out function types from a union type:
1 | type Functions = Extract<string | number | (() => void), Function>; // () => void |
NonNullable
NonNullable<T>
removes null
and undefined
from the set of possible values of T
.
It is mostly useful when working with strictNullChecks
and optional properties and arguments. It has no effect on a type that is already not nullable.
1 | type Foo = NonNullable<string | null | undefined>; // string |
You can find a good usage example of NonNullable
in my previous article.
Parameters
This useful type returns a tuple with the types of the parameters of a given function.
1 | function fetchData(id: number, filter: string) { |
One interesting usage is typing wrapper functions without having to repeat the parameter list.
1 | function fetchDataLogged(...params: Parameters<typeof fetchData>) { |
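Filled out, the wrapper might read like this (a sketch, assuming fetchData from the previous snippet):
1 | function fetchDataLogged(...params: Parameters<typeof fetchData>) {
2 |   console.log("fetchData called with", params);
3 |   return fetchData(...params);
4 | }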
ConstructorParameters
ConstructorParameters
is exactly the same as Parameters
but works with constructor functions.
1 | class Foo { |
One caveat is that you have to remember to put typeof
in front of the class name.
ReturnType
The name is pretty self-explanatory - it returns the type returned by a given function. I found this type really useful.
One example is in Redux where you define action creators and reducers. A reducer accepts a state object and an action object. You can use ReturnType
of the action creator to type the action object.
1 | function fetchDataSuccess(data: string[]) { |
InstanceType
InstanceType
is an interesting one. You can say that it is complementary to the typeof
operator.
It accepts a type of a constructor function and returns an instance type of this function.
In TypeScript, class C defines two things:
- a constructor function C for creating new instances of class C
- a type C - the instance type
typeof C returns the type of the constructor function.
InstanceType<typeof C>
takes the constructor function and returns the type of the instances produced by this function: C
.
1 | class C { |
Cover photo: TeroVesalainen from Pixabay.
Strict null checking (the strictNullChecks compiler flag) is one of the best things that happened to TypeScript. Thanks to this feature you can make your code a lot safer by eliminating a whole class of bugs during compile time.
However, enabling strict null checks comes at a cost. Adding appropriate conditions might make your code more verbose. This is especially painful in the case of accessing deeply nested properties.
In this article, you’ll see how to take advantage of mapped types to deal with nested properties in an elegant, concise way.
Check out the source code with snippets used in this article here.
Many thanks to mgol for the inspiration for the idea behind this article.
Update 1: Check out an interesting discussion on the topic in this reddit thread.
Update 2: Many thanks to Useless-Pickles who pointed out some issues with the initial version of code in this post. Check out their implementation with improved type safety.
Imagine you’re working with the following interface:
1 | interface Customer { |
At some point, you might want to find out the city of the company of a given customer. Without strictNullChecks
, it would be pretty straightforward.
1 | const c: Customer = /* ... */; |
Of course, this is very unsafe. With strict null checking enabled, TypeScript forces you to ensure that an object is defined before accessing its property. The least verbose way of doing this is to use the &&
operator.
1 | const city = |
This is not bad, but can we do better?
What about lodash?
The Lodash library has a nice utility function get
. It lets you access a deeply nested property in a safe way. Basically, you can specify a path to the property. If any object on the path is undefined, the function will return undefined. Otherwise, it will return the value of the property.
1 | import { get } from 'lodash'; |
This code is pretty neat and concise. However, the problem with this approach is that it’s not type-safe. There is nothing stopping you from making a silly typo and then spending hours figuring that out:
1 | get(c, 'company.addres.city'); |
So, is there a way to make get
type-safe?
Let’s write our own version of get
. In the first iteration get
will only accept a single level of nesting (i.e. it will handle get(c, 'company')
properly).
1 | function get< |
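The implementation is truncated above; a version matching the signature described below could be:
1 | function get<T, P extends keyof NonNullable<T>>(
2 |   obj: T,
3 |   prop: P
4 | ): NonNullable<T>[P] | undefined {
5 |   return obj == null ? undefined : (obj as NonNullable<T>)[prop];
6 | }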
The function body is pretty straightforward. What’s interesting here is the type signature. get
is a generic function with two type parameters.
The first one (T
) is the type of object from which we want to read the property.
The second one (P
) is a type that is assignable to keyof NonNullable<T>
. What is keyof NonNullable<T>
? It returns a type that is a union of literal types corresponding to all property names of NonNullable<T>
.
For example, keyof Customer
is equal to "name" | "company"
.
Literal type is a type that only has a single possible value. In this instance, prop
would have to be a string that is equal to either "name"
or "company"
.
Finally, why do we use NonNullable<T>
instead of T
? T
can be any type, including one that accepts null
and undefined
values. We want to access a property of T
, so first we need to make sure that it is not null
nor undefined
. Hence, we wrap it with NonNullable
.
Thanks to this type signature, the compiler will make sure that we use a correct string when passing the prop
argument. Indeed, the following code returns a type error.
1 | get(c, 'kompany') |
This is cool, but how about deeper nesting?
This is going to be tricky. We need a way to say that the type of N-th argument somehow depends on the type of (N-1)-th argument.
In fact, it is not currently possible to do this for an arbitrary number of arguments in TypeScript. It is one of the limitations of its otherwise powerful type system.
Fear not, the hope is not lost yet! We can cheat a little. In practice, how many levels of nesting are you going to need? 3? 4? 10? The number is not big. We can take advantage of this fact and define a finite number of overloads for get
.
1 | function get< |
TypeScript lets us provide multiple type signatures for a function that can handle any number of arguments. We define one signature for each level of nesting that we want to support. For a given level of nesting N, we need to define a signature that takes the object and N property names. The type of each property name will have to be one of the keys of the previous property. Once you understand the mechanism, it’s pretty straightforward to create these overloads.
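Sketched out, the first two overloads follow this pattern (the real code simply adds more of them, one per supported depth):
1 | function get<T, P1 extends keyof NonNullable<T>>(
2 |   obj: T,
3 |   prop1: P1
4 | ): NonNullable<T>[P1] | undefined;
5 | function get<
6 |   T,
7 |   P1 extends keyof NonNullable<T>,
8 |   P2 extends keyof NonNullable<NonNullable<T>[P1]>
9 | >(
10 |   obj: T,
11 |   prop1: P1,
12 |   prop2: P2
13 | ): NonNullable<NonNullable<T>[P1]>[P2] | undefined;
14 | // ...and so on for deeper levels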
We now have a nice, type-safe way to access deeply nested properties!
1 | get(c, 'company', 'address', 'city') |
In fact, this technique is widely used in libraries and frameworks. Here, you can observe it being used in RxJS.
BTW, this type signature sometimes gives false negatives. If all properties and the source object itself are not nullable, then the result type should not include undefined
. Check out this implementation for more details.
In this article, you’ve seen how to solve a common problem of safely accessing deeply nested properties. On the way, you have learned about index types (the keyof
operator), literal types and the generic technique of typing functions that accept multiple arguments whose types depend on each other.
Please leave a comment if you enjoyed this article!
Cover photo: source.
You can play with the code here.
I love using React together with TypeScript. Being able to safely (type-wise) pass properties in JSX is a big win to me. However, once you want to do something non-standard, typing your code properly becomes less obvious.
Some time ago I was implementing a generic function that took a React component as a parameter and returned a type that was based on the type of the component’s props
. I needed a mechanism that would extract the type of props
from the component’s type. It turned out that this can be achieved with a direct application of TypeScript’s conditional types.
Conditional types bring conditional logic to the world of types. They allow you to define something like a function on types that takes a type as an input and based on some condition returns another type.
1 | type IsString<T> = T extends string ? "yes" : "no"; |
In this example IsString
takes T
and examines it. If T
is a string then the result would be a "yes"
literal type (a type whose only possible value is "yes"
string). Otherwise, it will be a "no"
.
1 | type A = IsString<number>; |
Look how similiar this is to a regular TypeScript function:
1 | const isString = (x: any) => typeof x === "string" ? "yes" : "no"; |
The difference is that a conditional type operates in the world of types while a regular function operates in the world of values.
How can we take advantage of this mechanism and use it to extract the type of React component properties? Let’s create a conditional type that checks whether a given type is a React component.
1 | type IsReactComponent<T> = |
We had to specify the type parameter for React.Component
so we used any
. However, TypeScript lets us do something cooler - we can use the infer
keyword instead.
1 | type IsReactComponent<T> = |
infer
creates a new type variable P
that will store the type parameter of T
if it indeed extends React.Component
. In our case, it will be exactly what we’re looking for - the type of props
! So, instead of returning "yes"
literal type, let’s simply return P
.
1 | type IsReactComponent<T> = |
Now, we can assume that this type will only be used with actual React components. We’d like the compilation to fail otherwise. Instead of returning "no"
, let’s return the never
type. It’s a special type that is intended exactly for such situations. We return never
when we don’t want something to happen. If a variable has never
type then nothing can be assigned to it.
1 | type PropsType<C> = |
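The definition is cut off above; putting together the pieces from IsReactComponent, it reads:
1 | type PropsType<C> =
2 |   C extends React.Component<infer P> ? P : never;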
And that’s it!
But hey, React is all about functional components now, isn’t it? Let’s adjust PropsType
to take this into account.
1 | type PropsType<C> = |
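Again the definition is truncated; one way to write the adjusted type (whether the original used FunctionComponent or a bare function type here is an assumption) is:
1 | type PropsType<C> =
2 |   C extends React.Component<infer P> ? P :
3 |   C extends React.FunctionComponent<infer P> ? P :
4 |   never;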
A piece of cake! Conditions in conditional types can be nested, just like a regular ternary operator. Also, note that extends
is not just about classes inheriting from other classes. The condition checks if a type is assignable from another type.
1 | const Header: React.FunctionComponent<{ text: string }> |
As pointed out in voreny‘s comment, there is a better way to handle both functional and class components. PropsType
can be defined using React.ComponentType
(see here for definition):
1 | type PropsType<C> = |
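The truncated definition presumably boils down to:
1 | type PropsType<C> =
2 |   C extends React.ComponentType<infer P> ? P : never;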
The only caveat is that it has to be used with typeof
keyword for classes:
1 | type ArticleProps = PropsType<typeof Article>; |
I hope this article convinced you of the usefulness of conditional types in TypeScript. This is just one of many applications of advanced types, so don’t hesitate to dive deep into this section of TypeScript docs!
I had known about the strictNullChecks flag in TypeScript for a long time. However, it wasn’t until recently that I had a chance to work with a huge codebase with strict mode enabled. Although I was aware of the benefits of this flag, I didn’t expect it to be as awesome as it turned out to be in reality.
Read on to learn why you should definitely enable this flag in the project you’re working on.
For those of you who know what strictNullChecks
does, you can safely skip to the next paragraph.
The purpose of strictNullChecks
is to help you write safer code. It achieves that by pointing out places where you could be forgetting that a variable is null
or undefined
.
Imagine this small example:
1 | const appDiv: HTMLElement = document.getElementById('app'); |
It looks fine. However, what if there were no element with id app
? In such case document.getElementById
would return null
and we would get a TypeError: appDiv is null
error. We would learn about the mistake only at runtime. But the purpose of TypeScript is to help you find mistakes during compile time!
Let’s enable strictNullChecks
. Now your IDE will highlight appDiv
with the following message:
1 | Type 'HTMLElement | null' is not assignable to type 'HTMLElement'. |
What this means is that the return type of document.getElementById
is HTMLElement | null
and we’re trying to assign it to a constant with type HTMLElement
. TypeScript doesn’t allow this because the target type is narrower than the source type.
Enabling strictNullChecks
changed the type of document.getElementById
to HTMLElement | null
instead of simply HTMLElement
. In other words, this type is now more honest and closer to the truth. It admits that a null
value can also be returned from this method. By doing this, it forces you to handle such a case.
If we change the type of appDiv
to HTMLElement | null
, the next line will compile with the following error:
1 | Object is possibly 'null'. |
We cannot access a property of appDiv
because it could be null
. TypeScript forces us to consider the null
case. How to address it? It depends. Maybe you need to create this app
element. Or maybe in such a case it's better to throw an exception? The simplest solution would be to do nothing in case of a falsy appDiv.
1 | const appDiv: HTMLElement | null = document.getElementById('app'); |
This works because of TypeScript’s great feature called type guards. The compiler is able to figure out that the type of appDiv
inside the if
is HTMLElement
instead of HTMLElement | null
thanks to some static code analysis.
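For illustration (the innerHTML assignment merely stands in for whatever we want to do with the element):
1 | const appDiv: HTMLElement | null = document.getElementById('app'); |
2 | if (appDiv) { |
3 |   // within this block appDiv is narrowed to HTMLElement |
4 |   appDiv.innerHTML = 'Hello!'; |
5 | } |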
What exactly changed that allowed the compiler to work this way?
By default (with strictNullChecks
disabled), the compiler behaves as if every type you use in your code was implicitly replaced with its union with null
and undefined
.
In other words, null
and undefined
values are part of every type. Therefore, typing something as HTMLElement | null | undefined
was redundant since HTMLElement
already included null
and undefined
.
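To illustrate:
1 | const appDiv: HTMLElement = null; // fine without strictNullChecks, an error with it |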
Conversely, with strictNullChecks enabled, null and undefined are no longer implicitly part of every type - if a value can be empty, you have to state it explicitly in the type (a related flag worth looking at is strictPropertyInitialization).
It's easiest to explain pointfree style by showing a small example. Let's say we have this array of objects representing mountains:
1 | const mountains = [ |
As you know, you can use array methods such as map
or reduce
to run some calculations on this array.
1 | const names = mountains.map(mountain => mountain.name); |
Both of these methods are higher-order functions - they accept a function as a parameter. In both cases we create an anonymous function using arrow function syntax and pass it as an argument. Now let's make those two lines more pointfree.
1 | mountains.map(prop('name')); |
What happened here? I replaced arrow functions with small, self-explaining functions such as prop
and add
. These functions come from a library called ramda, but you could easily write them yourself. As you can imagine, prop('name')
returns a function that reads name
property from given object and add
simply adds two numbers.
1 | prop('name')({ name: 'Kazbek'}); // 'Kazbek' |
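In case you'd rather avoid the dependency, hand-rolled versions could be as simple as this sketch (ramda's real implementations are more general and auto-curried):
1 | // reads a property from an object - curried so it can be partially applied |
2 | const prop = (propName) => (object) => object[propName]; |
3 | // adds two numbers |
4 | const add = (x, y) => x + y; |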
Pointfree style can be summarized as the following rule: never mention the data. In our example, data refers to parameters of anonymous functions. Instead of using anonymous functions, we use small, very generic functions. We build applications by composing these functions. This forces us to focus on data transformations instead of on the data itself.
BTW, point in pointfree doesn’t refer to .
that is part of method call syntax. It comes from category theory, where points are elements of sets. Pointfree in this context means that you focus on transformations (morphisms) between set elements (points) instead of on the points themselves.
There are two very important concepts that make pointfree style possible. The first one is called partial application. I’ve already used partial application in this article. It happened when I called prop('name')
. Logically, prop
takes two arguments: a property name and an object:
1 | prop('name', { name: 'Kazbek'}); |
However, we used it as if it was a single argument function: prop('name')
. By doing this, we’ve partially applied prop
. The result is a function that will wait for the second argument and only evaluate once it receives it.
But wait, if I only provided one argument to a two-argument function, it will still evaluate - undefined
will be passed as the second argument, right? Well, it’s true unless the function is curried. Currying is the process of taking a multi-argument function and transforming it into a single argument function.
Here is a simplified, multi-argument version of prop
- it can’t be partially applied:
1 | const prop = (propName, object) => object[propName]; |
And here is a curried version:
1 | const prop = (propName) => (object) => object[propName]; |
Do you see the difference? The second version is indeed a single argument function which returns another single argument function. You don’t need to curry your functions manually - you can use curry
function from ramda
instead.
1 | const prop = curry((propName, object) => object[propName]); |
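Functions wrapped with curry can then be called either way:
1 | prop('name', { name: 'Kazbek' });  // 'Kazbek' |
2 | prop('name')({ name: 'Kazbek' }); // also 'Kazbek' |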
Another concept essential for pointfree style is function composition. I wrote a separate article about it, so please take a look if you are not familiar with the concept.
Ok, I promised you some RxJS code 😉 Let’s start with a simple example. Upon clicking the breedsFetchEl
button we want to fetch some data from Dog API and present it in breedsListEl
div. Easy peasy.
1 | const click$ = fromEvent(breedsFetchEl, 'click'); |
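Fleshing out the rest of that snippet (the Dog API URL is an assumption based on the description):
1 | import { fromEvent } from 'rxjs'; |
2 | import { ajax } from 'rxjs/ajax'; |
3 | import { map, mergeMap } from 'rxjs/operators'; |
4 | const click$ = fromEvent(breedsFetchEl, 'click'); |
5 | const breeds$ = click$.pipe( |
6 |   mergeMap(() => ajax.getJSON('https://dog.ceo/api/breeds/list/all')), |
7 |   map(result => Object.keys(result.message).join(', ')) |
8 | ); |
9 | breeds$.subscribe(breeds => breedsListEl.innerText = breeds); |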
Let’s convert it to pointfree style. We will focus on the anonymous function inside map
. Let’s try to figure our what’s going on there:
- the message property is read from result
- the result is passed to the Object.keys function
- join is called on the value returned by that call

We don't like method calls in functional programming - they are not composable. Let's first replace the third step with a function call. We will use join from ramda.
1 | map(result => join(', ', Object.keys(result.message))) |
Property access also comes from the OOP world, so let's change it to a function call. This time, let's use prop from ramda.
1 | map(result => join(', ', Object.keys(prop('message', result)))) |
Now this looks familiar. We have three nested function calls. We call the first one and pass the result to the second one. The result of the second one is passed as an argument of the third one. This is exactly what function composition is. Let’s use pipe
to replace the anonymous function with a function composed from these three functions.
1 | map( |
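With that, the whole operator could read (a sketch):
1 | map(pipe(prop('message'), Object.keys, join(', '))) |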
But wait, prop
and join
are two-argument functions and now we partially apply them! How is this possible? The answer is that ramda
functions are really clever. They are curried by default. However, they do also work as expected when called with multiple arguments.
Let’s now take a look at a slightly more complex scenario. We have a text field (dogNameInputEl
) and we want to show dog images (inside dogImageEl
) based on breed names entered into this field. We want the image to update as you type, but we don't want to overload the server with requests. Hence, we will only send the request if the breed name's length exceeds 2 and we will debounce calls.
1 | const nameInput$ = fromEvent(dogNameInputEl, 'input'); |
Cool, let’s deal with this example line by line. The first operator maps input
events to actual values typed into the text field.
1 | map(() => dogNameInputEl.value) |
We need a function that ignores its input and always returns dogNameInputEl
. Then we could pass its result to prop('value')
. It turns out that there is such a function in ramda
. It’s called… always
!
1 | map(pipe(always(dogNameInputEl), prop('value'))) |
The next line filters input values so that we only process those that are longer than 2. It boils down to function composition again. First, we calculate the string length. You guessed right, there is a function called length
in ramda
that we can use. Second, we compare the result with 3. ramda
has a function called gte
. However, if we used it like this - gte(3) - then we would get a function that returns true if 3 is greater than or equal to its argument. Therefore, we need to swap gte
‘s arguments. flip
is a function which does exactly that!
1 | filter(pipe(length, (flip(gte))(3))) |
There is no anonymous function in debounce
, so we can skip it. Next, mergeMap
transforms the breed name into a response object retrieved from the server. Again, we can use function composition. First, we transform the name into a URL. Next, we pass the URL to ajax.getJSON. Transforming the name into a URL is achieved using template strings. We could compose this using ramda's functions but the result would not be readable at all. In such a case, it's better to create a helper function and simply use it inside pipe. It's not super pointfree, but code readability is more important (another approach would be to use an order-based string formatting library, such as sprintf-js
).
1 | const getImageUrl = |
Finally, the last line simply extracts the message
property from the result. I’m pretty sure you already know how to do that 😉 The end result will look like this:
1 | const image$ = nameInput$.pipe( |
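For reference, here is one way the whole stream could fit together (a sketch - the debounce duration, the getJSON wrapping and the final subscribe are my assumptions):
1 | import { fromEvent } from 'rxjs'; |
2 | import { ajax } from 'rxjs/ajax'; |
3 | import { map, filter, debounceTime, mergeMap } from 'rxjs/operators'; |
4 | import { pipe, always, prop, length, flip, gte } from 'ramda'; |
5 | const image$ = nameInput$.pipe( |
6 |   map(pipe(always(dogNameInputEl), prop('value'))), |
7 |   filter(pipe(length, flip(gte)(3))), |
8 |   debounceTime(500), |
9 |   mergeMap(pipe(getImageUrl, (url) => ajax.getJSON(url))), |
10 |   map(prop('message')) |
11 | ); |
12 | image$.subscribe((url) => dogImageEl.src = url); |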
What do you think about pointfree style? In my opinion, it’s a great programming practice, but one that has to be applied with caution. In general, it makes your code more concise and eliminates the boilerplate of anonymous functions. More importantly, it forces you to think about your program as a composition of small, generic building blocks (functions). Pointfree style lets you experience the full power of functional programming!
However, pointfree shouldn’t be applied at all cost. In some cases, it might make the overall readability of code much worse. What’s more, if you’re working in a team, you have to keep in mind that such style of writing code might be unfamiliar to others.
Don’t hesitate to share your opinion about pointfree style in comments section!
]]>I’ve been considering this idea for some time (shout out to Piecioshka whose article [PL] planted the seeds). Previously, my blog used to be based on Wordpress (to which I migrated from Blogger). Worpdress is a great platform that powers a lot of blogs and businesses but with time, more and more issues have been bugging me:
Hexo is a static site generator. It is based on the simple but clever idea that a blog doesn’t actually have to be dynamic. Why generate posts’ HTML over and over again while they look the same to every user?
With Hexo you write blog posts in markdown and the blog gets generated as a bundle of static HTML files. You can then very easily upload this bundle to Github Pages which will host your static files for free!
Although I had a lot of content to migrate, the process was made much easier thanks to hexo-migrator-wordpress
package.
With this setup, all of the issues I had with Wordpress are now addressed!
Please let me know how you like the new look of the blog! I would also be grateful for an e-mail if you find something that doesn't work properly.
]]>It’s common for a modern single-page application (SPA) to fetch data from the server via a REST API call. The vast majority of web applications do this. There are, though, many challenges related to this approach, one of which is handling long-running queries. In order to ensure a great user experience, we can’t have the user wait four or five minutes to see the results of an action.
This is often the case here at Sumo Logic, where, for instance, the user interface (UI) sends complex search queries to the backend. Depending on the query, processing might take a few minutes. In this article we will discuss different approaches for dealing with this issue. We’ll rely on the RxJS library to help us with this task because it’s perfect for dealing with complex, asynchronous flows.
There are multiple approaches that can be taken and in this article I’ll discuss three of them. The list is here mostly for inspiration, as the solution for your specific problem will very likely depend on your use case and the design of your API. Here’s a quick summary of the different approaches I will discuss throughout this post:
Code examples below are simplified; in reality you also need to take care of error handling and unsubscribing.
This is the most basic approach because we don’t really fix the problem, but rather simply improve user experience by indicating to the user that the query is being processed (or whatever long-running action is happening in your system). Let’s assume our task is to fetch a list of customers. Unfortunately, this API call is rather slow. In order to make sure that the user is aware of the fact that a query is being processed, we’ll show a loading spinner. Let’s say we already have the following function, which can fetch the list of customers from the backend. It returns an observable, which will emit once, when the server replies. If you’re using the fetch API, you can easily convert a promise to an observable using the from function.
1 | function fetchCustomers(): Observable<Customer[]> { ... } |
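For example, a fetch-based implementation converted with from could look like this (the endpoint URL is made up):
1 | import { from, Observable } from 'rxjs'; |
2 | function fetchCustomers(): Observable<Customer[]> { |
3 |   // '/api/customers' is a made-up endpoint used for illustration only |
4 |   return from(fetch('/api/customers').then(res => res.json() as Promise<Customer[]>)); |
5 | } |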
Fetching the list is very likely initiated by a user action such as clicking a button. Let’s create a click$
stream, which emits button clicks and then use switchMap
to transform it into customers$
stream, which will emit lists of customers retrieved from the server.
1 | const click$ = fromEvent(buttonEl, 'click'); |
As a next step, we’ll create a new stream that emits true whenever the loading spinner should be shown and false when it shouldn’t. We’ll do this by merging click$
and customers$
streams:
- an event on the click$ stream means that we should show the spinner, so we'll map it to true
- an event on the customers$ stream means that we should hide the spinner, so we'll map it to false

1 | const isLoading$ = merge( |
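A sketch of the combined stream:
1 | import { merge } from 'rxjs'; |
2 | import { mapTo } from 'rxjs/operators'; |
3 | const isLoading$ = merge( |
4 |   click$.pipe(mapTo(true)), |
5 |   customers$.pipe(mapTo(false)) |
6 | ); |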
Now, all that’s left is to subscribe to the stream and update the loading indicators visibility.
1 | isLoading$.subscribe(isLoading => |
The goal of this second approach is to improve the user experience by not making the user wait for the whole query to be processed, but rather to show something whenever some results are available. We’ll achieve this by splitting the long-running query into smaller queries. Of course this approach is based on some assumptions about our API:
What do I mean by splitting the query into smaller ones? For example, instead of fetching the full list of customers at once, we might decide to fetch small portions of the list and combine them in the UI. Let’s see the code, and assume that we’re now working with the following function that can be parameterized by some offset. This offset can be used to decide which part of the list to fetch. Let’s also assume that the function will always fetch a fixed number of matching customers (e.g. 100).
1 | function searchCustomersPaged(query: string, offset: number) |
We can start by creating an array of offsets and map it into queries. The first query will fetch customers from 0 to 99, the second will fetch customers from 100 to 199, and so on.
1 | const offsets = [0, 100, 200, 300]; |
Each stream will emit a null followed by the actual result. As a next step, we’ll combine those streams into a single stream which emits concatenated, non-empty results.
1 | const result$ = combineLatestFun(queries).pipe( |
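For reference, here is one way the whole flow could look; I'm assuming combineLatestFun wraps RxJS's combineLatest, and that startWith(null) produces the initial null emissions mentioned above:
1 | import { combineLatest } from 'rxjs'; |
2 | import { map, startWith } from 'rxjs/operators'; |
3 | const offsets = [0, 100, 200, 300]; |
4 | const queries = offsets.map(offset => |
5 |   searchCustomersPaged(query, offset).pipe(startWith(null)) |
6 | ); |
7 | const result$ = combineLatest(queries).pipe( |
8 |   map(pages => pages.filter(page => page !== null)), |
9 |   map(pages => pages.reduce((all, page) => all.concat(page), [])) |
10 | ); |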
We ended up with a single stream that will emit a growing list of customers, which we can show to the user in real time. This is a much nicer user experience than having to wait for the whole list to be fetched. Note: it's important to keep in mind that browsers put limitations on the number of concurrent queries made to the same domain. It doesn't make any sense to exceed this number.
Finally, in this approach we’re going to fire parallel queries aiming for different accuracy of the result. We’ll then wait and, after some fixed amount of time, return the best (most accurate) result of those received so far. Quick shout out goes to one of my colleagues, Omid Mortazavi, who came up with the idea for this third approach. How does this translate to the customer search scenario? Let’s say that the API includes a parameter for specifying the level (precision) of search accuracy. A customer search with a lower accuracy will be faster but not as exhaustive as a search with a higher accuracy. We want to present the user with the best result yet we don’t want them to wait too long. We’ll therefore trigger several searches, of varying precision, and only wait a fixed amount of time. Similar to the previous approach, let’s start by creating an array of different accuracy levels and mapping them into queries.
1 | const accuracyLevels = [5, 3, 1]; |
Next, let’s create a stream that will emit true after a fixed period of time elapses.
1 | const timeoutElapsed$ = timer(10000).pipe( |
Finally, we’ll combine all of the streams in queries with the timeoutElapsed$
stream. The combined stream will emit whenever any of the source streams emit. The second parameter of combineLatest
is a function in which we decide what to do when it happens. The logic is as follows:
1 | const result$ = timeoutElapsed$.pipe( |
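The decision logic itself isn't shown in the excerpt, so the following is just one possible interpretation, assuming queries is ordered from the most to the least accurate and that each query starts with null:
1 | import { combineLatest, timer } from 'rxjs'; |
2 | import { map, filter, mapTo, startWith } from 'rxjs/operators'; |
3 | const timeoutElapsed$ = timer(10000).pipe(mapTo(true), startWith(false)); |
4 | const result$ = combineLatest([timeoutElapsed$, ...queries]).pipe( |
5 |   map(([elapsed, ...results]) => { |
6 |     // the most accurate query (first in the list) finished - emit its result |
7 |     if (results[0] !== null) return results[0]; |
8 |     // once the timeout elapsed, settle for the best result received so far |
9 |     return elapsed ? results.find(r => r !== null) || null : null; |
10 |   }), |
11 |   filter(result => result !== null) |
12 | ); |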
Below you can find marble diagrams demonstrating this approach based on two concurrent queries.
Example 1: one query finishes before timeout elapses
Example 2: both queries finish before timer elapses
Example 3: neither query finishes before timeout elapses
One final thought: if the API provides such an option, cancel any pending searches once we’ve presented the result to the user to avoid unnecessary backend work or network traffic. In scenarios demonstrated by the diagrams above there is no cancellation at all. Therefore, the result$
stream emits multiple times, which might not be desirable.
We’ve discussed three different approaches to improving user experience when dealing with long running API calls. While this list is by no means exhaustive and these techniques might need some adjustments based on your specific situation, I hope you’ve seen some of the power of functional-reactive programming with RxJS and can see other areas of your applications which can benefit from the possibilities it enables.
Most importantly, you need to submit a lot. Of course, there are conferences where talks are chosen purely based on the abstract you provide. However, the majority will look at your speaking history, your presence on the web, etc. So unless you are a core contributor to a popular library or a published author, your chances of getting selected are relatively low (PaperCall.io mentions 1 in 10 on average). Therefore, you need to apply to a lot of conferences hoping to get selected for at least one. This takes time and effort, but it pays off. In 2018, I applied to almost 30 conferences and got selected for 5 of them.
Adjust the topic of your talk to the audience. It’s a cliche, but it has some important consequences. Your topic doesn’t have to be super advanced. When applying to general-purpose, multi-track conferences which don’t focus on a specific technology, it’s perfectly ok to submit an introductory level topic. One of my best-rated talks was an introduction to functional programming in JavaScript at a frontend track of 4 Developers 2018 - a large, multi-tech conference. I think it’s generally a good idea to begin with a less advanced topic as such presentation will be easier to deliver. Conversely, your talk should be a bit more in-depth if you are going to present it at a specialist conference dedicated to a specific tech.
Your day will be split into two parts: before and after your talk. The shorter the first part, the better - but it's usually something you don't have influence on. Anyway, I used to stress a lot before a talk, which made the first part of the day really unpleasant. However, with time and experience, the time before the talk becomes much less stressful. I know it's hard, but try not to give in to the stress. Talk to other speakers - those more experienced are usually very friendly. The inexperienced are in the same situation as you, so sharing your anxiety will make it easier to bear. There is a chance that fellow speakers will be sitting in the audience during your talk. You will feel better knowing there is at least one friendly person in the audience. Shout out to Asim Hussain who was really supportive and gave my talk a lot of visibility by tweeting it live at Refresh Conference 2018!
OMG. I finally understand what the pipe operator does and how the es2018 pipe operator and rxjs pipe are related, functional composition! Thanks @miloszpp and @REFRESHRocks #RefreshRocks pic.twitter.com/3zy85tpGHt
— Asim Hussain (@jawache) September 7, 2018
Don’t rehearse too much on the conference day. One run before leaving the hotel in the morning is enough. By this time, you should already have rehearsed enough.
Now you can enjoy the rest of the conference! Have some rest and relax a bit. If you still have some energy left, it's a good idea to approach some attendees and talk to them. Ask them how they like the conference in general. You don't have to directly ask about feedback regarding your talk. Most of the time, the topic will come up anyway. If it doesn't, it's still a great chance to learn the attendee's perspective and see what kind of talks they like. BTW, I don't feel comfortable approaching strangers at all. However, it seems to be a learnable skill (same as not getting freaked out before your talk is). I find this guide very helpful in dealing with this.
Wait a few days and send an email to the organizers asking about feedback with regards to your talk. Most conferences provide attendees with some way of rating talks. The feedback can be a great way to tell what needs improvement and how well the topic matched your audience. However, don't worry too much if you receive some negative, non-constructive remarks. There will always be some haters and it really doesn't mean anything when 2 out of 200 people didn't like your talk. Now you can decide whether you want to reuse your topic or come up with a new one. I think reusing the topic a few times is perfectly alright. However, I adjust and polish my talk before each conference - based on the feedback, target audience and desired talk duration.
Totally! I’ve already mentioned some benefits of public speaking in the previous article on the topic. It gets even better when you start going to conferences instead of meetups. When traveling abroad, you will often be reimbursed for the flight and a hotel. You get to meet some seriously experienced speakers and can get a lot of inspiration and advice from them. Most importantly, it’s really rewarding and gives you a sense of achievement.
Best conference speaker gift bag ever! Dzięki #programistok pic.twitter.com/trW2BmUAm2
— Milosz Piechocki (@miloszpp) September 30, 2018
So, if you’re wondering whether you’re ready for a conference talk - don’t hesitate and start applying!
JS Poland is an amazing event where you can:
Readers of codewithstyle.info have a unique chance to win a free ticket for the conference! Currently it's worth over 100 euros. In order to win the ticket, please share a link to one of the articles from my blog on Twitter and include the #jspolandwithstyle hashtag. Please remember about the hashtag, otherwise I won't be able to find your tweet. The post with the largest sum of retweets and likes wins! I will announce the winner on October 23.
Let me remind you that the end goal is not to convince you to ditch all JavaScript frameworks and build your own instead. My point is to explain the reasoning behind commonly used patterns and how they relate to functional programming. Source code for this article is available here.
Our little framework relies on view
function which translates state into a DOM tree. The function is invoked on every state update. This means that on every state change we need to re-create the whole DOM tree and have it re-rendered by the browser. In a complex application with multiple actions and a huge DOM tree this could severely hurt performance. Below you can find a screenshot illustrating the problem. The whole div
is updated even though clicking Complete should only affect two table rows.
Basically, we’d like to limit the amount of unnecessary DOM-related work. On the other hand, we want our code to stay declarative and functional so we have to avoid direct, imperative DOM manipulation.
Virtual DOM is the answer to our problems! It is a clever technique where instead of creating actual DOM objects you operate on virtual DOM elements. Operations on virtual nodes are much faster than on actual nodes since there is no browser API involved. Obviously, at some point we need to update the actual DOM tree. Here is how we will do this:
- the view function will return a virtual DOM tree
- on every state update (every app invocation) we will compare the result of the view call with the previous tree
- the comparison will produce a set of patches that represent minimal changes to the DOM
- patches can be applied to the actual DOM tree; only relevant parts of the DOM are updated, not the whole tree

Below you can see the difference after enhancing the application with virtual DOM. Note that only relevant parts of the DOM are highlighted.
Let’s rework our application to take advantage of virtual DOM.
The first step is to adjust the view
function to return virtual nodes instead of real DOM nodes. We are not going to implement the virtual DOM mechanism itself. It's a highly non-trivial task and not in the scope of this article. Instead, let's use one of the existing virtual DOM libraries. Our library of choice is simply called virtual-dom. The best thing about it is that it's compatible with hyperscript-helpers
. Remember how we wrapped hyperscript
with hyperscript-helpers
so that we were able to use functions such as div
, h2
, table
, etc.? This extra level of indirection will prove enormously useful now. Our view
function will continue using these functions. However, they will proxy to virtual-dom
instead of hyperscript
resulting in virtual DOM nodes instead of real DOM nodes.
1 | import h from 'virtual-dom/h'; |
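The re-wiring can be as small as this sketch:
1 | import h from 'virtual-dom/h'; |
2 | import hh from 'hyperscript-helpers'; |
3 | // same helpers as before, but now backed by virtual-dom's h |
4 | const { div, h2, table, tr, td, button } = hh(h); |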
That’s it! There are no more changes to view
function!
The next (and last) step is to adjust the app
function. It’s going to get a bit more complex. Before, all we had to do was to replace the old DOM tree with the new tree. However, view
returns a virtual tree now. We need to compare the new tree with the old one using the diff
function provided by the library. The comparison will return a set of patches
which can be later applied on the actual DOM tree. The above procedure describes what happens on state update. However, we need an initial DOM tree to begin with! We can get one from the initial virtual tree by calling createElement
, also provided by the library. Below you can find the updated code.
1 | import diff from 'virtual-dom/diff'; |
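A sketch reconstructing the described flow (names match the surrounding description; exact signatures in the repository may differ):
1 | import diff from 'virtual-dom/diff'; |
2 | import patch from 'virtual-dom/patch'; |
3 | import createElement from 'virtual-dom/create-element'; |
4 | function app(state, previousView) { |
5 |   const updatedView = view(dispatch, state); |
6 |   if (previousView === null) { |
7 |     // first render: materialize the virtual tree and attach it |
8 |     rootNode.appendChild(createElement(updatedView)); |
9 |   } else { |
10 |     // subsequent renders: apply minimal patches only |
11 |     patch(rootNode.firstChild, diff(previousView, updatedView)); |
12 |   } |
13 |   function dispatch(action) { |
14 |     app(reducer(state, action), updatedView); |
15 |   } |
16 | } |
17 | // initial call: app(initialState, null); |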
The app
function accepts a new parameter called previousView
. We need it to be able to compare the updated virtual DOM with the previous version. When previousView
is null
, it means that app
is called for the first time (with initialState
) and that there is no real DOM tree in place yet. Therefore, we call createElement
and append the result to rootNode
. When previousView
is not empty, we should compare it with the new virtual tree (updatedView
) and apply patches on rootNode
‘s first child (because we initially attached the whole tree to rootNode
). Obviously, this part is not functional code. Patching the DOM is an imperative operation with side effects. However, this part of the code wasn’t pure in the first place. What’s important is that we’ve managed to preserve the purity of the rest of the code.
The concept of virtual DOM is instrumental in the React framework. This is actually what made React famous for its performance. By the way, if you are familiar with React, you might have noticed similarities between our application and the framework. Indeed, our view
function is nothing less than a React functional component! It's interesting to see how other frameworks deal with limiting the amount of DOM operations while maintaining declarativeness. For example, Angular takes a different approach based on change detection. You can read more about it in one of my posts. This comparison between those two mechanisms seems very interesting to me and I asked a question on Reddit specifically about it. I've got an amazing reply from Rob Wormald which details the pros and cons of both approaches:
The tradeoff (and part of the philosophy Angular is built around) is that templating allows Angular to deeply understand a template, and generate highly optimized code, in both pure CPU cycles, but low memory consumption and garbage collection. The flexibility of being able to return whatever from a JSX-style
render()
function means the framework has to be able to handle whatever, and each time a new virtual DOM representation is created, it can consume a fair amount of memory - especially important on low end devices.
In this post you’ve learned what virtual DOM is and how frameworks can use it to optimize performance while remaining declarative. I believe that this example nicely illustrates how trying to build your own framework can push you to learn new things and understand how existing frameworks work underneath.
]]>I’m not going to use any framework but there will be some helper libraries involved. The source code is available here.
Every application has a state. It can be distributed across fields in multiple objects (components, controllers, etc.) or it can be centralized. In the centralized approach, the state is an object that stores the information the application needs to function. In our case, the state will contain an array of objects, each one representing a climb.
1 | export const initialState = { |
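For instance (a sketch - the climb names are made up; id and completed are the fields the rest of the article relies on):
1 | export const initialState = { |
2 |   climbs: [ |
3 |     { id: 1, name: 'Mont Blanc', completed: true }, |
4 |     { id: 2, name: 'Matterhorn', completed: false } |
5 |   ] |
6 | }; |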
Feel free to extend the list and add some more climbs!
Now that we know the structure of the state, it’s time to create a function that produces a DOM tree based on the state. Since we’re doing functional programming, this function should, of course, be pure. How to create the DOM tree in a functional way? The standard DOM API is not very functional - it relies on global objects and mutations. Instead, we’re going to use two libraries - hyperscript and hyperscript-helpers. Hyperscript makes it possible to create DOM trees in a declarative way. Instead of manipulating nodes you simply declare what you’d like the document to look like:
1 | var h = require('hyperscript') |
With hyperscript-helpers the task becomes even easier as instead of using h
function all the time you can use functions named after HTML tags
1 | h('div') ---> div() |
If you don’t feel like using JS functions instead of HTML, you can achieve the same results with JSX.
The role of the view function is simple - it has to translate state into a DOM tree by calling various hyperscript-helpers
functions. Most of these functions accept three parameters:
- a selector string with the class and id of the element
- an object with the element's properties
- an array of children

It's a good idea to break down the view
function into smaller functions representing different components of the UI. You might put the implementation of this function inside a separate view.js
file. Let’s take the top-down approach and start with the root function. We separate climbs into two arrays representing completed and remaining climbs. Next, we use the climblist
component to display them. The CSS classes refer to Bootstrap.
1 | import h from 'hyperscript'; |
Component climblist
is simply an HTML table. We map
every climb inside the state to climblistRow
component. As an exercise, you can add a static table header to this function!
1 | function climblist(climbs) { |
Finally, climblistRow
consists of cells containing data from climb
objects. The last column contains a button that will toggle “completeness” of a climb. For now, it will not do anything.
1 | function climblistRow(climb) { |
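One possible shape of this component (the cell contents are illustrative):
1 | function climblistRow(climb) { |
2 |   return tr([ |
3 |     td(climb.name), |
4 |     td(climb.completed ? 'completed' : 'not yet'), |
5 |     td([ button('Toggle') ]) |
6 |   ]); |
7 | } |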
You are now ready to put the view function into action. Create an index.html
file with the following body:
1 | <body> |
Now all you need to do is to run view
function on initialState
and attach the result to the root
node. Create an index.js
file with the following content:
1 | import { view } from './views'; |
To make this all work together you will need a module bundler. Check out the webpack config in the source code. When you launch the application you should see a table displaying climbs defined in the initial state.
So far our application is rather boring. Toggle buttons don't do anything interesting. It's time to change it! As you know, the most important part of the application is the state object which is the golden source of truth. So if we want something to happen then we need to modify the state. But hey, it's functional programming - we cannot mutate the state! Instead, we will return a fresh, updated copy of the state and re-generate the DOM based on it. Let's create a function called reducer
which takes a state object and an action object and returns a new state object. But what’s the action object? It’s simply a piece of data representing some event in the application. We will only have a single action that will be triggered when the user clicks on the toggle button. It has to store the id of the climb that is being toggled. Let’s create actions.js
file with the following contents:
1 | export const TOGGLE_COMPLETED_ACTION = 'TOGGLE_COMPLETED_ACTION'; |
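A sketch of the file; the toggleCompletedAction helper name is my own:
1 | export const TOGGLE_COMPLETED_ACTION = 'TOGGLE_COMPLETED_ACTION'; |
2 | export function toggleCompletedAction(climbId) { |
3 |   return { type: TOGGLE_COMPLETED_ACTION, payload: climbId }; |
4 | } |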
It’s a common convention to have these two fields inside action objects. type
determines what kind of action it is and payload
contains details of the event. In our case payload
is simply climbId
but it could be any object. Next, we will write the reducer
function. Given the state and the TOGGLE_COMPLETED_ACTION
action, it needs to return a new state object where the climb with provided id
is replaced with a new climb object with completed
field flipped.
1 | export function reducer(state, action) { |
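A sketch following that description:
1 | export function reducer(state, action) { |
2 |   switch (action.type) { |
3 |     case TOGGLE_COMPLETED_ACTION: |
4 |       return { |
5 |         ...state, |
6 |         climbs: state.climbs.map(climb => |
7 |           climb.id === action.payload |
8 |             ? { ...climb, completed: !climb.completed } |
9 |             : climb) |
10 |       }; |
11 |     default: |
12 |       return state; |
13 |   } |
14 | } |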
Note how we make sure not to mutate the state
object. The spread operator and the map
method are pretty useful in this scenario. Next, we need a way to trigger this new action. Let’s assume for now that view
function accepts another parameter called dispatch
. It will be a function and it will be used to trigger actions. We need to pass this parameter all the way down to climblistRow
component where it can be hooked to onclick
property of the toggle button.
1 | button( |
Make sure to put dispatch
as the first parameter of all view functions. It will allow you to do partial application using curry
function (which you can get from Ramda).
1 | ...climbs.map(curry(climblistRow)(dispatch)) |
It’s time to create some machinery so that action results in DOM updates. This is the only impure fragment of the codebase! Replace the contents of index.js
with the following code.
1 | const rootNode = document.getElementById('root'); |
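A sketch of this machinery (clearing innerHTML before re-appending is an assumption about how the old tree gets replaced; module paths are illustrative):
1 | import { view } from './views'; |
2 | import { reducer } from './reducers'; |
3 | import { initialState } from './state'; |
4 | const rootNode = document.getElementById('root'); |
5 | function app(state) { |
6 |   // dispatch applies an action via the reducer and re-renders |
7 |   const dispatch = (action) => app(reducer(state, action)); |
8 |   rootNode.innerHTML = ''; |
9 |   rootNode.appendChild(view(dispatch, state)); |
10 | } |
11 | app(initialState); |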
The app
function applies provided state object to the DOM. What’s more, it defines the dispatch
function which is used to trigger actions. This function uses reducer
to apply the action on existing state and recursively passes the new state object to app
. Finally, we call app
with initialState
. And that’s it! Run the application and try out toggle buttons - they should work now.
You’ve just created a working application which is as functional as it possibly can. Of course it has some impure parts - every application should have some side effects, otherwise, it wouldn’t be very useful! The trick is to minimize and isolate the impure parts. For those of you familiar with Redux or MobX this approach can feel very familiar. Indeed, we’ve implemented a simplistic version of a state management framework. I totally suggest using a real framework instead but for educational purposes, it’s a good idea to see how it works underneath. You might have concerns about performance. Re-creating the whole DOM on every tiny state change doesn’t sound optimal. The answer to this is virtual DOM. There are libraries that allow you to create virtual DOM nodes that can later be compared and only small, incremental changes to the DOM are executed. This is exactly how React works! hypescript-helpers
are compatible with some virtual DOM libraries. I hope the article was clear enough and that now it’s clearer how to apply functional programming principles in real life!
In this last episode, I look at the very interesting topic of how RxJS can be used to implement a state management technique known from libraries such as Redux. As always, you'll learn it based on a practical use-case - building a form undo mechanism! I hope you've enjoyed the course and it made you interested in reactive programming.
Let me know in comments if there is a topic that I haven’t covered that you’d like to watch a video about!
]]>We’re moving on to the topic of reactive forms which are another piece of reactive API in Angular. In this video, I’m implementing a form auto-save mechanism.
You can access the whole course from here. Let me know in comments if you liked it!
]]>Let’s start with some statistics!
These stats are not 100% complete because after a few months of blogging I changed the platform from Google to Wordpress. Interestingly, the best day in terms of views happened in the second month of blogging resulting in over 10,000 views (this post made it to the Hacker News front page). I've never managed to do this again.
The number divided by 30 gives a not-so-bad average of almost 2 posts a month. It’s far less than many experienced bloggers recommend (at least once a week or even daily). I’ve tried posting weekly but couldn’t keep up at all. Writing a post takes me at least several hours as it often requires researching a topic and writing some code. A possible solution to this is to introduce shorter, less thorough posts. Do you think it makes sense?
The number of monthly views is increasing steadily which I think is good as it indicates some progress. However, I'm not so much interested in quantity as in quality of visits. Google Analytics says over 87% bounce rate and only 52-second average session duration which is not satisfactory to me. I need to think about ways to increase engagement and make readers want to stay on the blog. Do you have any ideas? What's more, most of the traffic comes from Google search and does not hit posts related to functional programming. I decided to make it the main theme of the blog some time ago. Readers hitting posts about Firebase will not likely be interested in exploring the blog any further as the main theme doesn't correspond with what they were looking for. I'm not sure how to address it. The most viewed posts happen to be rather old so they had time to build a good rank in Google search. Maybe the functional programming stuff will also be reached this way in the future.
I think it’s a good number. When I imagine 200 people in a room, it’s quite a lot 😀 I have to admit that over 95% of subscribers were incentivized by the free ebook about functional programming in JavaScript. I’ve tried a pop-up once and it was working pretty well. However, I’ve decided it’s too annoying and disabled it. The open rate on the last newsletter was 43.8% while click rate was 7.3%.
This number is the one I’m the most dissatisfied with. Bear in mind that half of the comments are mine (I reply to almost every comment) and some of them are self-pingbacks. I would gladly trade most of the views for bigger engagement on the blog. There are many smaller blogs where you see heated discussions in comments. It never happens on my blog. I’ve tried directly asking questions in posts in order to increase engagement but it doesn’t seem to work. I’m thinking about writing more opinion-based posts so that at least I can get ranted about in the comments section 😀 Any other advice from you?
The blog has undergone some changes in the course of these 30 months. I’m learning all the time and try to make it better and better. Here are some most important events from the blog’s history.
I’d like to finish on a positive note so let me share how the blog influenced my life in a positive way. And hell, it did!
There is probably more. As you can see, there are many ways you can benefit from a blog apart from making it a source of passive income. So, if you are undecided then stop thinking about it and start writing!
This episode of the course covers the bufferTime operator. Enjoy and please remember to subscribe if you like what you see! The whole course is available here.
If you've used RxJS, you have most likely come across the pipe function. But do you really understand what it does? RxJS is often called a functional-reactive programming library. It should not come as a surprise that you will find many functional programming inspirations in it. One of them is the pipe
function. Take a look at the below piece of code:
1 | const getElement = (id) => document.getElementById(id); |
The logElementValue
function takes an id
and logs to the console the value of the element with provided id
. Can you see a pattern in this function’s implementation? Firstly, it calls getElement
with id
and stores the result in el
. Next, the result is passed to getValue
which produces a new result, el
. Finally, el
is passed to console.log
. What this function does is simply taking the result of a function and passing it as an argument to another function. Is there a better, more concise way to implement this function? Let’s say we just have two functions (getElement
and getValue
). We will implement a generic function called compose
that will pass the result of getElement
to getValue
.
1 | const compose = (f, g) => (x) => g(f(x)); |
The definition is very simple but may take a moment to parse. We’ve defined a function that takes two functions f
and g
(that would be getElement
and getValue
in our case) and returns a new function. This new function will take an argument, pass it to f
and then pass the result to g
. That’s exactly what we need! Now I can rewrite logElementValue
:
1 | function logElementValue(id) { |
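The completed function could read (a sketch):
1 | function logElementValue(id) { |
2 |   const getValueFromId = compose(getElement, getValue); |
3 |   console.log(getValueFromId(id)); |
4 | } |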
pipe
But, wait! Once we have the result of calling getValueFromId
we immediately pass it to console.log
. So it’s the same pattern here. We could write it like this:
1 | function logElementValue(id) { |
But life would be much simpler if compose
could take any number of functions. Can we do this? Sure:
1 | const composeMany = (...args) => args.reduce(compose); |
Another brain teaser! composeMany
takes any number of functions. They are stored in args
array. We reduce
over args
composing every function with the result of composing previous functions. Anyway, composeMany takes any number of functions and returns a function that will pass the result of the N-th
function to (N+1)-th
function. But what have we achieved by that?
1 | function logElementValue(id) { |
Which can be simplified even more:
1 | const logElementValue = composeMany(getElement, getValue, console.log); |
Isn’t that cool? We have significantly simplified the code. It’s now very clear what logElementValue
does. And by the way - composeMany
is just a name I came up with. The official name is pipe
!
1 | const logElementValue = pipe(getElement, getValue, console.log); |
pipe
Let’s take an example of pipe
method usage in RxJS.
1 | number$.pipe( |
We can also write it in a different way:
1 | const { pipe } = rxjs; |
And the result is exactly the same! As you can see, the pipe
function in RxJS behaves in exactly the same way that the pipe
function that we’ve defined in the first part of the article. It takes a number of functions and composes them by passing the result of a function as an argument to another function. You might say that this is different than the previous example because here we’re invoking map
and filter
and not simply passing them. Actually, both map
and filter
will return functions. We’re not composing map
and filter
themselves but rather the functions returned by invoking them. Each of the returned functions takes a source observable and returns an observable. Therefore, it’s possible to compose them.
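To make this concrete, here is a small sketch in the style of the snippets above - map(f) and filter(p) each return a function from observable to observable, and pipe composes them:
1 | const { pipe } = rxjs; |
2 | const { map, filter } = rxjs.operators; |
3 | const evensDoubled = pipe( |
4 |   filter(x => x % 2 === 0), |
5 |   map(x => x * 2) |
6 | ); |
7 | evensDoubled(number$).subscribe(console.log); |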
You can check out how RxJS implements pipe
function here.
The pipe function is such a useful concept that it might be added as a separate operator to the JavaScript language! It would mean that the example from earlier in this article could be written in an even simpler way:
1 | const logElementValue = getElement |> getValue |> console.log; |
You can see the details of the proposal here.
pipe - Summary
I hope this article helped you understand what the pipe
function is all about. You should now feel more comfortable using it! The fact that RxJS migrated from the traditional, object-oriented approach of applying operators to the pipeline approach shows how strong the influence of functional programming is nowadays. I think that’s great! Let me know in comments if you prefer pipe
function to traditional method chaining.
Please let me know your thoughts about this episode in the comments!
Having covered the Maybe type in part 1 and part 2, let's move on to another useful example of monads. In this article, we'll introduce a new type called Result
which is the functional programming’s answer to exceptions. You can find the source code for this article here. Exceptions are a very popular way of handling errors and unexpected situations in code. They are present in mainstream languages such as Java and C# and of course JavaScript. Interestingly, some new programming languages (such as Rust) deliberately didn’t introduce exceptions. Are exceptions compatible with functional programming? Unfortunately, not so much. For example, pure functions shouldn’t have side effects. Throwing an exception is actually kind of side effect - it can lead to termination of the program. Worse than that, exceptions introduce some unpredictability to the code.
1 | function divide(x: number, y: number): number { |
Although the type signature tells us that divide
returns a number
, this is not always the case. We have to be very careful and make sure that we remember to handle the error. However, there is nothing in the type system that will make sure that we don’t forget to do that.
How can we make it explicit that something can go wrong inside divide
? Let’s create a new type Result<TSuccess, TFailure>
. Remember how Maybe<T>
could either be Some
or None
? Similarly, Result<TSuccess, TFailure>
can either be Success
which represents the happy path or Failure
which means that something went wrong and TFailure
is the type of the error. In other words, instances of our new type can either contain a valid result or an information about what went wrong. In contrast to exceptions, it is now explicit that an error can happen. What’s more, we know exactly what kinds of errors we have to deal with (TFailure
tells us so).
Result
Let’s start with the following definition.
1 | export class Result<TSuccess, TFailure> { |
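A sketch of such a definition (the private fields mirror the description that follows):
1 | export class Result<TSuccess, TFailure> { |
2 |   private constructor( |
3 |     private value: TSuccess | null, |
4 |     private errorValue: TFailure | null |
5 |   ) {} |
6 |   static success<TSuccess, TFailure>(value: TSuccess): Result<TSuccess, TFailure> { |
7 |     return new Result<TSuccess, TFailure>(value, null); |
8 |   } |
9 |   static failure<TSuccess, TFailure>(errorValue: TFailure): Result<TSuccess, TFailure> { |
10 |     return new Result<TSuccess, TFailure>(null, errorValue); |
11 |   } |
12 | } |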
We’ve created a class that can only be constructed in two ways - via success
or failure
static methods. The class internally stores either a value
representing valid result or errorValue
containing information about what went wrong. Let’s start with a simple method that extracts a value from Result
. Remember that an instance of Result
can either be a Success
or a Failure
. Therefore, when extracting the value we always have to assume that an error could have occurred. We need to provide handleError
function which can deal with this error.
1 | get(handleError: (errorValue: TFailure) => TSuccess): TSuccess { |
Similarly to Maybe
, we need some operations to be able to conveniently work with Result
types. Let’s start with map
. In the happy scenario, it will take a function that will be applied to the value stored inside Result
. However, if Result
contains an error, it will simply ignore the provided function and return a failure
.
1 | map<R>(f: (wrapped: TSuccess) => R): Result<R, TFailure> { |
However, it might be the case that the operation that we want to perform on the value stored inside Result
returns a Result
itself! In such case, we need flatMap
.
1 | flatMap<R>(f: (wrapped: TSuccess) => Result<R, TFailure>): Result<R, TFailure> { |
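Sketches of both operations, following the descriptions above (the non-null assertion is a shortcut for brevity):
1 | map<R>(f: (wrapped: TSuccess) => R): Result<R, TFailure> { |
2 |   return this.errorValue !== null |
3 |     ? Result.failure<R, TFailure>(this.errorValue) |
4 |     : Result.success<R, TFailure>(f(this.value!)); |
5 | } |
6 | flatMap<R>(f: (wrapped: TSuccess) => Result<R, TFailure>): Result<R, TFailure> { |
7 |   return this.errorValue !== null |
8 |     ? Result.failure<R, TFailure>(this.errorValue) |
9 |     : f(this.value!); |
10 | } |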
Result
in practiceGreat, we’re now ready to put our new type to work. Let’s adjust the code from the previous posts so that instead of using Maybe
to represent a potentially empty result, it uses Result
to represent a potentially failed result.
1 | findById(id: number): Result<Employee, string> { |
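A sketch; this.employees stands for the hardcoded in-memory array, and the exact error message wording is mine:
1 | findById(id: number): Result<Employee, string> { |
2 |   const employee = this.employees.find(e => e.id === id); |
3 |   return employee !== undefined |
4 |     ? Result.success<Employee, string>(employee) |
5 |     : Result.failure<Employee, string>(`Employee with id ${id} does not exist`); |
6 | } |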
We’ve updated the findById
method so that it wraps the returned employee inside Result.success
, provided that it was available. Otherwise, it returns Result.failure
with an error message describing what went wrong. Therefore, TFailure
will be a string
in our case. Next, let’s update the model. Now Employee.supervisorId
is a Result
as well! We treat a situation where an employee does not have a supervisor as a kind of error.
1 | export interface Employee { |
Now we need to make some adjustments to the usages of the above code inside main.ts
file. Firstly, let’s change the event listener code to create a Result
instance based on the content of the HTML input. Next, the Result
is passed to getSupervisorName
function which will return a Result
as well (as we will see in a moment). Finally, when extracting the value from the Result
instance, we provide a callback to handle the potential error.
1 | findEmployeeButtonEl.addEventListener("click", () => { |
Finally, the getSupervisorName
function. And this is the most interesting part of the article because… the function looks almost exactly the same as in the case of Maybe
!
1 | function getSupervisorName(enteredIdResult: Result<string, string>): Result<string, string> { |
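A sketch of what the chain can look like (repository and safeParseInt come from the earlier parts of the series):
1 | function getSupervisorName(enteredIdResult: Result<string, string>): Result<string, string> { |
2 |   return enteredIdResult |
3 |     .flatMap(safeParseInt) |
4 |     .flatMap(id => repository.findById(id)) |
5 |     .flatMap(employee => employee.supervisorId) |
6 |     .flatMap(supervisorId => repository.findById(supervisorId)) |
7 |     .map(supervisor => supervisor.name); |
8 | } |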
The only adjustments are type signatures and the safeParseInt
function. It turns out that map
and flatMap
operations are so generic that they can handle two distinct scenarios with the same piece of code. I hope you can see the power of monads now! You can now run the program and enjoy nice error messages. Try out different scenarios such as providing a non-existent id, an id of an employee without a supervisor, a non-numeric id, etc.
In this article, we saw how to use monads to replace exceptions with a more functional-friendly approach. Thanks to the Result
type, we can make it explicit that a function can fail. What’s more, we force the caller to always assume that something could have gone wrong and provide an error handler.
Error handling in this example is rather simplified. We use strings to convey the error message. However, there is nothing stopping you from using more advanced types in order to pass more meaningful information. For example, you could use discriminated unions to represent different kinds of errors. What do you think about this approach to error handling? Do you think it’s more readable than traditional exceptions? Share your thoughts in comments!
Did you like the talk? Do you think it's a good way to explain monads? Let me know in comments below!
You can find all the code from the series in this repository. Check out different branches for code relevant to the specific part of the series.
Generator functions have been introduced to JavaScript as part of the ES6 standard. They are a special case of functions where it’s possible to pause execution in the middle of the function body. This might sound counter-intuitive, especially if you consider the fact that JavaScript is single-threaded and follows the Run-to-completion approach. However, with generators, the code is still executing synchronously. Pausing execution means giving the control back to the caller of the function. The caller can then resume execution at any point. Let’s see a simple example:
1 | function* numbers(): IterableIterator<number> { |
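A sketch of what the full generator might look like (the exact logging is illustrative):
1 | function* numbers(): IterableIterator<number> { |
2 |   console.log('Inside numbers'); |
3 |   yield 1; |
4 |   console.log('Inside numbers again'); |
5 |   yield 2; |
6 | } |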
There are two pieces of new syntax here. Firstly, there is a *
following the function
keyword. It indicates that numbers
is not a regular function but a generator function. Another new thing is the yield
keyword. It’s a bit like return
but it can be used multiple times inside the function’s body. By yielding a value, the generator returns a value to the caller. However, unlike return
, the caller may decide to resume execution and give control back to the function. When it happens, the execution continues from the latest yield
. First, we need to invoke numbers
to get a generator instance.
1 | const numbersGenerator = numbers(); |
At this point, not a single line of the numbers
function has been executed. For this to happen, we need to call next
. It will start executing the function until it encounters the first yield
. At this point, the control will be given back to the caller.
1 | console.log('Outside of numbers'); |
When we run these lines, we will get the following output:
1 | Outside of numbers |
As you can see, the execution jumps back and forth between numbers
and its caller. Each next
call returns an instance of IteratorResult
which contains the yielded value and a flag done
which is set to false
as long as there is more code to execute inside numbers
.
Generators are a very powerful mechanism. One of their uses is building lazy, infinite sequences. Another one is co-routines - a concurrency model where two pieces of code can communicate with each other.
As a reminder, we implemented the getSupervisorName
function in the following way:
1 | function getSupervisorName(maybeEnteredId: Maybe<string>): Maybe<string> { |
Obviously, the code is more readable than the original version which involved three levels of nesting. However, it looks very different from regular, imperative code. Can we make it look more like imperative code? As we know, generators allow us to pause execution of a function so that it can be later resumed. This means that we can inject some code to be executed between yield
statements of given function. We could try to write a function that takes a generator function (a function with some yield
statements) and injects the logic of handling empty results. Let’s start by writing an implementation of getSupervisorName
that will use yield
statements to handle empty results.
1 | function* () { |
Let’s assume that maybeEnteredId
is defined in the function’s clojure and that it’s type is Maybe<string>
. Now, the semantics of const enteredIdStr = yield maybeEnteredId
is:
- if maybeEnteredId contains a value, then assign this value to enteredIdStr
- otherwise, terminate the whole computation and return None
In other words, yield
works exactly like flatMap
, but the syntax is very different. It’s much more familiar to an imperative programmer.
run
This is not over yet. We still need a function that will be able to consume this generator function. In other words, we need something that will give meaning to all these yield
statements. We’ll call this function run
. It will take a generator function and produce a Maybe
instance containing the result of the computation. Let’s start with an imperative implementation of run
:
1 | static run<R>(gen: IterableIterator<Maybe<R>>): Maybe<R> { |
- The run function accepts a generator function gen. This function describes our computation.
- We call gen.next(lastValue). This call will cause control flow to enter gen and execute until the first yield (ignore lastValue for now).
- Control goes back to run. The value passed to yield is wrapped inside an IteratorResult and returned as the value of gen.next.
- result has a done flag. It indicates whether control flow inside gen has reached the end of its body (i.e. whether there's more code to execute).
- result.value holds the value returned by yield. It's an instance of Maybe. Therefore, we check if it's an empty result (a None). If this is the case, we return a None from the whole computation.
- Otherwise, we unwrap the Maybe and assign the inner value to lastValue.
- We loop back to the gen.next call - this time lastValue is not empty. It will be passed to gen as a result of calling yield.
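Pulling these steps together, the imperative implementation might look roughly like this. Note that some, none, isEmpty and getValue are stand-in names for whatever constructors and accessors our Maybe actually exposes:
1 | static run<R>(gen: IterableIterator<Maybe<any>>): Maybe<R> { |
2 |   let lastValue; |
3 |   while (true) { |
4 |     const result = gen.next(lastValue); |
5 |     if (result.done) { |
6 |       // the generator finished - wrap whatever it produced last |
7 |       return Maybe.some(lastValue); |
8 |     } |
9 |     if (result.value.isEmpty()) { |
10 |       // an empty result short-circuits the whole computation |
11 |       return Maybe.none(); |
12 |     } |
13 |     lastValue = result.value.getValue(); |
14 |   } |
15 | } |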
gen
sends a Maybe<T>
instance to run
by calling yield m
run
replies with an instance of T
by calling gen.next(lastValue)
The whole point of this is that we can hide the logic of empty result handling inside run
. From the caller’s perspective, it will look like this:
1 | function getSupervisorName(maybeEnteredId: Maybe<string>): Maybe<string> { |
What we’ve achieved is a clean and elegant implementation of getSupervisorName
with all the boilerplate hidden in run
. However, contrary to the flatMap
solution, this code looks more intuitive to an imperative programmer.
That’s nice, you might say, but what does it have to do with monads? We’re not taking advantage of the fact that Maybe
is a monad. Let’s fix that. You might’ve noticed some similarity between run
implementation and flatMap
. Both implementations have to deal with empty results and apply a similar logic: if a Maybe
instance is empty then return early with a None
. Otherwise, continue the computation with the unwrapped value. It turns out that we can implement run
using flatMap
!
1 | static run<R>(gen: IterableIterator<Maybe<R>>): Maybe<R> { |
This recursive implementation is much more in the spirit of functional programming.
step
function which takes an optional value
and passes it to gen.next
. As we know, this will cause execution to resume inside gen
, up to the nearest yield
.result.done
. If it’s false (there is still some code to execute), we simply call flatMap
on result.value
and recursively pass step
as a continuation function. flatMap
will take care of an empty result.None
, the recursive call to step
will run gen
up until the next yield
. And so on, and so on. The client code looks exactly the same.
In this post, we’ve looked at how generators can be leveraged to improve the experience of using monads. They make monadic code even cleaner and, what’s important when working in teams, easier to understand to programmers with imperative background (i.e. the vast majority). The idea of using generators in such a way is the basis of the async/await
mechanism. As you’ll learn in the future posts, Promise
is also a monad and async/await
is nothing more than a specialized variant of function*/yield
. But let’s not jump ourselves ahead :) Did you find this approach interesting? What is a more readable way to write monadic expressions - with or without generators? Share your thoughts in comments below!
Angular is a framework built around reactive APIs - many of its building blocks expose Observables. However, I've noticed that many Angular programmers don't take advantage of this reactiveness built into Angular. What's more, there aren't many examples available on the internet showing how to use reactive programming in Angular in more complex scenarios.
This is why I decided to create this video course series. I present to you Reactive Programming in Angular. The course is available for free on my YouTube channel! So far I’ve recorded three episodes which you can find below. Subsequent episodes will be published on this blog, so stay tuned!
If you’re enjoying the course, subscribe to my YouTube channel and let others know about it! If you have any questions or feedback, feel free to contact me at milosz@codewithstyle.info. What do you think about the course? Did it bring you some value? Let me know in comments!
I've already tackled this topic on the blog a few times (see monads in C# and monads in Scala) but this time I would like to explore how monads can be useful in the front-end world. One final word - I chose TypeScript over JavaScript because it's just easier to talk about monads in a strongly-typed language. You don't have to be a TypeScript expert to understand the article.
You can find all the code from the series in this repository. Check the commit history for code relevant to the specific part of the series.
Let’s get ready for our monadic journey!
We’re going to build a simple application that implements the following scenario: A company has a hierarchical employee structure (each employee can have another employee as a supervisor). As a user, I would like to be able to enter employee ID (a numeric value) and see their supervisor’s name. Let’s start with a plain, non-monadic implementation. Here is some HTML for the user interface:
1 | <body> |
The HTML consists of an input for the employee’s ID and a button to search for the employee’s supervisor’s name. And here comes the code that orchestrates this form:
1 | import { EmployeeRepository } from "./employee.repository"; |
Firstly, we get hold of some HTML elements. Next, we attach a click handler to the button. Inside the handler, we invoke the getSupervisorName function which holds all of the actual logic (we will get back to it soon). Then we update the p tag with the search results. Finally, let's have a quick look at the EmployeeRepository class:
1 | import { Employee } from "./employee.model"; |
It's just an in-memory storage of the employee hierarchy with some hardcoded values. The Employee interface could look like this:
1 | export interface Employee { |
As promised, let's focus on the getSupervisorName function.
1 | function getSupervisorName(enteredId: string) { |
As we can see, the function body involves several levels of nesting. This is because many things can go wrong during the search for the supervisor.
In other words, there are many operations involved and each of them can return an empty result (e.g. empty input field, empty search result, etc.). The function needs to handle all of these edge cases and hence the deep nesting of if statements. Is there anything wrong with it? I think yes:
Let’s see how to solve both of these problems.
Maybe
One way of simplifying code is to identify a pattern and create an abstraction that hides the implementation details of this pattern. The recurring theme in the getSupervisorName function is the nesting of if statements.
1 | if (result) { |
But how to create an abstraction over such a pattern? The reason we have to do these if checks is that the value stored inside result can be empty. We'll create a simple wrapper type that holds a value and is aware of whether the value is empty (i.e. null or undefined or an empty string) or not. Let's call this wrapper type Maybe.
1 | export class Maybe<T> { |
Instances of Maybe hold a value that can either be an actual value or null. Here, null is the internal representation of an empty value. The constructor is private, so you can only create Maybe instances by calling the some or none static methods. fromValue is a convenience method that transforms a regular value into a Maybe instance. Finally, getOrElse is a safe way of extracting the value contained by Maybe - the caller has to provide the default value that will be used in case the Maybe is empty. So far, so good. We can now explicitly say that the result returned by some method can be empty.
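To make this concrete, here is a minimal sketch of such a class, reconstructed from the description above rather than copied from the original listing:

export class Maybe<T> {
  // null is the internal representation of an empty value
  private constructor(private value: T | null) {}

  static some<T>(value: T): Maybe<T> {
    if (value === null || value === undefined) {
      throw Error("Provided value must not be empty");
    }
    return new Maybe(value);
  }

  static none<T>(): Maybe<T> {
    return new Maybe<T>(null);
  }

  // Convenience: turn a regular, possibly empty value into a Maybe
  // (a truthiness check also treats empty strings as empty, per the description)
  static fromValue<T>(value: T): Maybe<T> {
    return value ? Maybe.some(value) : Maybe.none<T>();
  }

  // Safe extraction: the caller provides a default for the empty case
  getOrElse(defaultValue: T): T {
    return this.value === null ? defaultValue : this.value;
  }
}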
Let's change the findById method on EmployeeRepository:
1 | findById(id: number): Maybe<Employee> { |
Note that the return type of findById is now more meaningful and better captures the programmer's intention. findById can indeed return an empty value if an employee with the given ID doesn't exist inside the repository. What's more, we can change the Employee interface to explicitly state the fact that supervisorId can be empty:
1 | export interface Employee { |
We'll now add some operations to make the Maybe type more useful. You know the map method that you can call on arrays, right? It applies a given function to every element of an array. If we look at Maybe as a special array that can have from zero to one elements, it turns out that defining map on it totally makes sense.
1 | map<R>(f: (wrapped: T) => R): Maybe<R> { |
Our map takes a function f that transforms the element wrapped by Maybe and returns a new Maybe with the result of the transformation. If the Maybe was a none then the result of map will also be an empty Maybe (just like calling map on an empty array would give you an empty array). R is the type parameter representing the type returned by the f transformation.
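A sketch of the method, consistent with the Maybe class described above:

map<R>(f: (wrapped: T) => R): Maybe<R> {
  if (this.value === null) {
    // An empty Maybe stays empty, like mapping over an empty array
    return Maybe.none<R>();
  }
  return Maybe.fromValue(f(this.value));
}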
But how is this map useful? The original version of the getSupervisorName function included the below if statement:
1 | const supervisor = repository.findById(employee.supervisorId); |
But findById returns a Maybe now! And we have the map operation available which, incidentally, has exactly the same semantics as the if statement above! Therefore, we can rewrite the above piece like this:
1 | const supervisor: Maybe<Employee> = repository.findById(employee.supervisorId); |
Didn't we just hide the if statement behind an abstraction? Yes, we did! However, we're not ready to rewrite the whole function in such style yet.
map, or maybe flatMap?
Using map works fine for transformations such as the one above. But how about this one?
1 | const employee = repository.findById(parseInt(enteredId)); |
We could try to rewrite it using map:
1 | const employee: Maybe<Employee> = repository.findById(parseInt(enteredId)); |
See the problem? The type of supervisor is Maybe<Maybe<Employee>>. This is because our transformation function now maps from a regular value to a Maybe (and previously it was mapping from a regular value to a regular value). Is there a way to transform Maybe<Maybe<Employee>> to a simple Maybe<Employee>? In other words, we would like to flatten our Maybe. Again, there is an analogy to arrays. You can flatten the nested array [[1, 2, 3], [4, 5, 6]] to [1, 2, 3, 4, 5, 6]. We'll add a new operation to Maybe and call it flatMap. It's just like map but it also flattens the result so that we don't end up with nested Maybes.
1 | flatMap<R>(f: (wrapped: T) => Maybe<R>): Maybe<R> { |
The implementation is pretty simple. If the given instance of Maybe is not empty then we extract the wrapped value, apply the provided function and simply return the result (which can either be empty or not empty). If the instance was empty, we simply return an empty Maybe. Note the type signature of f. Previously, it was mapping from T to R. Now, it's mapping from T to Maybe<R>.
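Spelled out as a sketch consistent with the description:

flatMap<R>(f: (wrapped: T) => Maybe<R>): Maybe<R> {
  if (this.value === null) {
    // An empty input short-circuits the whole chain
    return Maybe.none<R>();
  }
  // f already returns a Maybe, so no additional wrapping is needed
  return f(this.value);
}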
Thanks to the addition of flatMap, we can now rewrite the above code piece like this:
1 | const employee: Maybe<Employee> = repository.findById(parseInt(enteredId)); |
Now, we've got all we need to rewrite the getSupervisorName function.
1 | function getSupervisorName(maybeEnteredId: Maybe<string>): Maybe<string> { |
We've eliminated all of the nested if statements! The getSupervisorName function's body is now an elegant pipeline of transformations applied to the input value. We've hidden the details of handling empty results because they're just boilerplate and obfuscate the real intention of the code. They're now taken care of by Maybe. Note that if any of the operations inside flatMap returned a none, it would cause the whole thing to immediately return a none. This is actually the same behaviour that we had with nested if statements. Here is an example of how the function can be used inside the click handler:
1 | findEmployeeButtonEl.addEventListener("click", () => { |
And, guess what, Maybe is a monad! The formal definition of a monad is that it's a container type that has two operations:
return - which creates an instance of the type from a regular value (some and none in our case)
bind - which lets you combine monadic values (flatMap in our case)
There are also some monadic laws that every monad has to follow, but let's not dive into them yet. For now, you have to trust me that our Maybe implementation follows these laws :)
In this first post of the series, we've created our first monad. The purpose of the Maybe monad is to abstract away the handling of empty values. Thanks to the introduction of this type, we can now write code without having to worry about empty results. In the next article, we'll see how, thanks to TypeScript, we can write code that uses monads in an even more readable way. Do you find monads interesting? Do you feel like you understand the concept now or is it still a mystery? Please let me know in comments!
ng-packagr.
ng-packagr
First of all, when creating an Angular library, it's important to understand that you need to include a bit more than just plain TypeScript files inside the NPM package. It should contain JavaScript code that is ready to be consumed in various ways - e.g. by Webpack, Rollup or Angular CLI. There is a set of best practices describing what to include in an Angular NPM package - it's called the Angular Package Format. It's actually not trivial to produce a package that complies with the standard. Fortunately, we can use the excellent ng-packagr tool for that. The documentation for the package is pretty good and it's actually quite easy to get going with it.
It worked pretty well for me too, except for two issues on which I spent too much time. Both of them only manifested themselves when I tried to build the consuming project (the project to which my new common package is a dependency) using Angular Ahead-of-Time compilation (AOT). When trying to build my consumer project, I got the following error:
1 | ERROR in Error: Unexpected value 'SomeModuleName in ...........path-to-module.d.ts' imported by the module 'AppModule in ...............path-to-module.ts'. Please add a @NgModule annotation. |
After some digging, I found out that the root cause was that when building my package I wasn't generating appropriate metadata to be further used by the AOT compiler. This should normally be done by ng-packagr. It turned out that the tool could not generate the metadata properly because I was using invalid paths inside the public_api.ts file. Inside my package, I was making heavy use of index.ts files where I re-exported all the relevant symbols from the module. The file structure looked like this:
1 | + src |
And the public_api.ts was referencing the modules like this:
1 | import * from './src/module1'; |
It turns out that this is not enough to generate the metadata properly. I had to change the imports to include index.
1 | import * from './src/module1/index'; |
After this change, the AOT compilation started to work properly. The only place where I could find this information was a comment by JoeQueR on GitHub. Thanks, JoeQueR!
forRoot and AOT
Another problem I had isn't that much related to ng-packagr itself but rather to the NgModule.forRoot convention. forRoot is a static method which allows isolating services provided by a given module so that they are only provided once. The forRoot method should only be invoked when importing the module in the app module or in the core module. This approach helps you avoid issues with lazy loading. In my common package, I implemented a forRoot method which conditionally provided some additional services based on an argument passed to the method. It looked a bit like below:
1 | export class SomeModule { |
It turns out that you cannot do this! All the providers should be included in the generated metadata and in this case the final provider list will only be known at runtime, so it's not possible to figure it out at compile time. Therefore, it's not possible to implement such a scenario. The providers array has to be determined statically.
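A common way out - my sketch, not code from the original post, and all names here are illustrative - is to keep the providers array static and move the runtime decision into a factory driven by an injection token:

import { Injectable, InjectionToken, ModuleWithProviders, NgModule } from '@angular/core';

export interface SomeConfig {
  enableExtras: boolean;
}

export const SOME_CONFIG = new InjectionToken<SomeConfig>('SOME_CONFIG');

@Injectable()
export class ExtraService {}

@Injectable()
export class NoopExtraService extends ExtraService {}

// Factories must be exported, statically analyzable functions for AOT
export function extraServiceFactory(config: SomeConfig): ExtraService {
  return config.enableExtras ? new ExtraService() : new NoopExtraService();
}

@NgModule({})
export class SomeModule {
  static forRoot(config: SomeConfig): ModuleWithProviders {
    return {
      ngModule: SomeModule,
      // The providers array itself is static; the runtime decision
      // lives inside the factory, which AOT can cope with
      providers: [
        { provide: SOME_CONFIG, useValue: config },
        { provide: ExtraService, useFactory: extraServiceFactory, deps: [SOME_CONFIG] },
      ],
    };
  }
}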
By interpreter I mean a program that can execute source code of another program written in a different programming language. For example, JavaScript, PHP or Python are interpreted languages - you need an interpreter to run them. We will look into building an interpreter for a very simple language - algebraic expressions, such as:
1 | -5 * (1 + (3 / 6)) |
When building an interpreter (or a compiler) you need to build some data structures that represent the source code. These data structures form something called an Abstract Syntax Tree (AST). It's a tree-like structure because the source code usually involves lots of nesting. The AST of our example algebraic expression would look like this:
You can see that the tree structure corresponds to the order in which mathematical operations are applied. For example, 3 / 6 should be calculated first and that's why it's low in the tree. The multiplication will be evaluated at the very end and hence it's the tree's root. Discriminated unions are perfect for representing such a tree. Our AST has different kinds of nodes. Let's create some types that correspond to these kinds.
1 | interface BinaryOperatorNode { |
We've created the ExpressionNode discriminated union. It's a union of three node kinds:
NumberNode is the simplest type - it contains a numerical value
BinaryOperatorNode represents an operation on two expressions; the operator field determines what kind of operation it is; the left and right fields contain the arguments of the operation
UnaryOperatorNode is like BinaryOperatorNode but only contains a single argument
Interestingly, these types are recursive in a way. For example, UnaryOperatorNode has an inner property which can be any ExpressionNode - indeed, we can negate a number, but also a much more complex expression wrapped in parentheses.
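Spelled out, the definitions could look like this. This is a sketch based on the description; using kind as the discriminant and these exact literal values are assumptions informed by the switch branches shown below:

interface NumberNode {
  kind: "Number";
  value: number;
}

interface UnaryOperatorNode {
  kind: "UnaryOperator";
  operator: "+" | "-";
  inner: ExpressionNode;
}

interface BinaryOperatorNode {
  kind: "BinaryOperator";
  operator: "+" | "-" | "*" | "/";
  left: ExpressionNode;
  right: ExpressionNode;
}

type ExpressionNode = NumberNode | BinaryOperatorNode | UnaryOperatorNode;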
Now that we have the data structures in place, there are only two questions left:
We've already seen how to consume discriminated unions with a switch statement. This is no different here, with the exception that since our types are recursive, the evaluation function will be recursive as well.
1 | function evaluate(expression: ExpressionNode): number { |
The evaluate function takes an ExpressionNode object and evaluates it to a number. First, we check for an instance of NumberNode, in which case the expression will simply evaluate to the value held by the object.
1 | case "UnaryOperator": |
For UnaryOperatorNode we need to evaluate the value of the nested expression first. That's why we make a recursive call to evaluate itself. Once this is done, we either negate the result or simply return it as it is.
1 | case "BinaryOperator": |
BinaryOperatorNode has two nested expressions - left and right - so we need to evaluate both of them. Next, we use the relevant mathematical operation on them to calculate the final value. Here is the full source code of the evaluate function:
1 | function evaluate(expression: ExpressionNode): number { |
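Put together with the node definitions sketched earlier, the whole function could read like this (a sketch rather than the verbatim listing):

function evaluate(expression: ExpressionNode): number {
  switch (expression.kind) {
    case "Number":
      return expression.value;
    case "UnaryOperator": {
      // Evaluate the nested expression first, then apply the operator
      const inner = evaluate(expression.inner);
      return expression.operator === "-" ? -inner : inner;
    }
    case "BinaryOperator": {
      const left = evaluate(expression.left);
      const right = evaluate(expression.right);
      switch (expression.operator) {
        case "+": return left + right;
        case "-": return left - right;
        case "*": return left * right;
        case "/": return left / right;
      }
    }
  }
}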
Finally, we need an object instance for testing. Here is the (42 + 5) * -12 expression represented as an AST:
1 | const expr1: ExpressionNode = { |
Now we can test our evaluate function:
1 | console.log(evaluate(expr1)); |
Voila, we’ve just finished our first interpreter written in TypeScript using discriminated unions.
In this article, we’ve looked into taking advantage of discriminated unions in a non-trivial, real-world scenario - implementing a language interpreter. It’s easy to imagine how the types used to build an AST can be much more complex. For a real programming language, we would have to create types for statements, method calls, function declarations, etc. However, the principle would be roughly the same.
Let me start by showing you an example of a problem that can be solved with discriminated unions. You're working on an application which deals with the management of customers. There are two kinds of customers: individual and institutional. For each customer kind, you store different details: individual customers have a first and last name and a social security number. Companies have a company name and a tax identifier. You could model the above situation with the following types:
1 | enum CustomerType { |
Unfortunately, you have to make most of the fields optional. If you didn't, you would have to fill in all of the fields when creating an instance of Customer. However, you don't want to fill companyTaxId when creating an Individual customer. The problem with this solution is that it's now possible to create instances that don't make any sense in terms of the business domain. For example, you can create an object with too little info:
1 | const customer1: Customer = { |
…or one that has too much data provided:
1 | const customer2: Customer = { |
Wouldn’t it be nice if the type system could help us prevent such situations? Actually, this is what TypeScript is supposed to do, right?
With discriminated unions, you can model your domain with more precision. They are kind of like enum types but can hold additional data as well. Therefore, you can enforce that a specific customer type must have an exact set of fields. Let’s see it in action.
1 | interface IndividualCustomerType { |
We've defined two interfaces. Both of them have a kind property which is a literal type. A variable of a literal type can only hold a single, specific value. Each interface contains only fields that are relevant to the given type of customer. Finally, we've defined CustomerType as a union of these two interfaces. Because they both have the kind field, TypeScript recognizes them as discriminated union types and makes working with them easier. The biggest gain is that it's now impossible to create illegal instances of Customer. For example, both of the following objects are fine:
1 | const customer1: Customer = { |
…but TypeScript would fail to compile this one:
1 | // fails to compile |
Let's now see how to implement a function that takes a Customer object and prints the customer's name based on their type.
1 | function printName(customer: Customer) { |
As we can see, TypeScript is clever enough to know that inside the case "individual" branch of the switch statement customer.type is actually an instance of IndividualCustomerType. For example, trying to access the companyName field inside this branch would result in a compilation error. We would get the same behaviour inside an if statement branch. There is one more interesting mechanism called exhaustiveness checking. TypeScript is able to figure out that we have not covered all of the possible customer types! Of course, it would seem much more useful if we had tens of them and not just two.
1 | // fails to compile |
This solution makes use of the never type. Since case "institutional" is not defined, control falls through to the default branch, in which customer.type is inferred to be of type InstitutionCustomerType while being assigned to the never type, which of course results in an error.
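Here is a sketch of the pattern (the field names firstName and lastName are assumptions, not taken from the post):

// fails to compile - and that is the point of the exhaustiveness check
function printName(customer: Customer) {
  switch (customer.type.kind) {
    case "individual":
      console.log(`${customer.type.firstName} ${customer.type.lastName}`);
      break;
    default:
      // "institutional" is not handled above, so customer.type is narrowed
      // to the institutional interface here - not assignable to never,
      // which is exactly the compile error that flags the missing case
      const exhaustivenessCheck: never = customer.type;
      break;
  }
}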
Discriminated union types are pretty cool. As I mentioned, the whole point of TypeScript is to help us catch mistakes that we would make without having type checking. Discriminated unions help us model the domain in more detail, therefore making illegal instances impossible to create.
One could argue that the same thing could be achieved with inheritance (or interface extension in this case). And that’s true. Solving this with inheritance would be an Object Oriented Programming approach while discriminated unions are specific to Functional Programming. I think this approach makes more sense in the context of web applications where we often fetch data from some REST API which doesn’t support object inheritance. What’s more, exhaustiveness checking is not possible to achieve with object inheritance. It’s an example of the classical composition versus inheritance dilemma.
I will risk a statement that the choice of topics in this conference proves that frontend programming is getting more and more functional. At least four talks had something to do with functional programming!
OnPush change detection strategy (hence immutability)
Since I was a speaker I didn't have a chance to see all the talks, so the list may not be complete.
While it might seem that the talk was about using NGRX, it was actually more about how to build your own NGRX! NGRX is a Redux implementation with a reactive API which plays well with the rest of Angular. The talk included a large live coding session where Todd explained the Redux design pattern by building an application from scratch in vanilla JavaScript. I think it’s a great idea, especially given that we’re surrounded by so many frameworks and sometimes we use them without having understood how they work inside.
This short talk was given by the organiser of the conference. While it was not technical, I think it was really inspiring. Dariusz explained how to benefit the most from the conference by setting up clear learning goals.
A truly amazing talk, packed with tips and suggestions on how to tackle performance issues in Angular applications. First, Nir stated that optimizing Angular performance is mostly about shortening the change detection process. Next, he talked about many ways to achieve this including: using the OnPush change detection strategy, manually controlling change detection by detaching ChangeDetectorRef, using memoization for caching results of pure functions, using pure pipes, running code outside of NgZone, and many more.
Very insightful and well delivered talk about architecting authentication in SPA applications. Manfred presented many different options including using the OAuth2 standard and 3-rd party providers such as Firebase. He has also talked about popular security attacks and how to tackle them.
As mentioned, I will write a separate article with the summary of my talk.
The talk was delivered by an early adopter of Cloud Firestore - the new storage offering from Firebase - so we’ve been given some first-hand experiences. Since I was stuck with the Realtime Database, the presentation was perfect to me - it explained all of the advantages of the new Firestore solution. The live demo which involved hundreds of people interacting with the application real-time (and displayed on a huge cinema screen) was pretty impressive too!
#ngPolandConf This guy currently collect more emails than recruiters :D :D pic.twitter.com/CTyeD1nQ6G — Frantisek Ferko (@FrantisekFerko) November 21, 2017
Nice and succinct summary of the new features introduced in Angular 5. Bartłomiej tried to answer the question whether the new version is mature enough to upgrade. Additionally, he presented some cool statistics about the Angular repository.
Gerard talked about how to use the RxJS Marbles testing library to visualize and understand different RxJS operators. It allows you to describe test prerequisites and expectations using… ASCII art :) The library is actually used for testing the RxJS library - you can find some tests on GitHub. Great for experimenting and wrapping your head around hundreds of RxJS operators.
Let's think about how the process of change detection could look in the big picture.
Now we know when change detection should be performed - whenever there is a chance that some model will be modified. In the above example it’s a handler for a user-generated event that can make changes to the model. However, there are other situations that can result in model updates such as:
callbacks passed to the setInterval function
Wrapping up, model changes (hence change detection) always start with some sort of asynchronous event.
We already know when change detection should be called. However, we don't know how the framework ensures that it will be called. This is the first of the differences between Angular and AngularJS. In AngularJS change detection was invoked by calling $scope.$digest(). Most of the time we didn't have to call it manually because the framework did it for us. Directives such as ngClick or the $http service would make sure to run the digest cycle (i.e. change detection) after executing the handlers we've provided (that could potentially make changes to the model). The consequence of this was that in some rare cases you had to tell Angular to run the digest cycle by calling scope.$digest or scope.$apply manually. Angular takes a different, more robust approach. Instead of invoking CD manually, it takes advantage of a concept called zones. Zone.js is a library which allows you to inject your own code into some low-level calls. In particular, it allows Angular to run change detection code automatically after an asynchronous event is handled.
Let’s now focus on the change detection mechanism itself. It’s actually quite simple - the framework keeps track of values in the model before and after running your event handlers. Change detection simply looks at these values and compares them. Once it finds some differences, it’s able to tell that a change in the model has occurred. Such mechanism is called dirty checking.
Whenever we use data binding or pass an expression to ng-if or ng-repeat, we create a watcher. During the $digest call, AngularJS iterates over all watchers and compares old values of watched expressions with new values. The tricky part is that AngularJS allows two-way data binding. That means that after a change in the model is reflected in the view, it might turn out that the change in the view triggers another change in the model. Therefore, a single pass through the watcher array may not be enough. AngularJS repeats the process until it detects no more changes in the model. Unfortunately, it's quite easy to write code in a way that the model never stabilizes. What's more, having to iterate over the watchers so many times is not great in terms of performance.
1 | $scope.$watch("foo", function () { |
Above you can see an example of an infinite digest loop taken from the AngularJS documentation.
Angular is smarter about this. It doesn't support two-way data binding any more. Therefore, it can assume that a single pass of change detection will always be sufficient. This concept has a name - it's called unidirectional data flow. It's unidirectional because the data always flows from the model to the template. It's not possible for a view change resulting from change detection to trigger a change in the model - this would be data flowing in the other direction. Hey, but isn't [(ngModel)] a two-way data binding? Actually, it's not. It's just syntactic sugar for a simultaneous event binding and property binding. Importantly, it still works with single-pass change detection. Unidirectional data flow is a major simplification when compared with AngularJS. It makes your application more predictable. It's also much better in terms of performance.
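To illustrate the desugaring, consider this sketch (the component and selector names are made up; FormsModule must be imported for ngModel to work):

import { Component } from '@angular/core';

@Component({
  selector: 'name-editor',
  // [(ngModel)]="name" is only sugar for the pair of bindings below:
  // a property binding into the input plus an event binding out of it
  template: `<input [ngModel]="name" (ngModelChange)="name = $event" />`,
})
export class NameEditorComponent {
  name = '';
}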
Finally, AngularJS and Angular differ in how the dirty checking mechanism is implemented. In AngularJS the mechanism is dynamic. It means that watchers are created and added to the array during runtime, on demand. Angular takes a different approach. It generates a change detector for every single component. Such change detector is based on the template so it only compares the values of expressions used in property bindings. With Ahead of Time compilation enabled, Angular can generate change detectors during build time! Such change detectors are much easier to optimize for the JavaScript Virtual Machine than the dynamic mechanism used by AngularJS. We say that change detection in Angular is VM-friendly. This has major performance implications.
In this article I’ve explained basics of change detection in Angular. Having understood this it’s easier to understand some of the design choices taken in Angular versus AngularJS. To wrap up, here is a table that summarizes the major differences.
AngularJS | Angular
---|---
Change detection is invoked internally by the framework (by calling $digest). Sometimes it is necessary to trigger change detection manually. | Change detection is invoked automatically using zones. The framework hooks into internal browser API calls and performs CD after asynchronous events.
Two-way data binding is supported, which means that a single pass of change detection is not enough. AngularJS runs digest cycles until the model stabilizes. | Two-way data binding is not supported, hence a single pass of change detection is enough. Unidirectional data flow is enforced - the data flows from the model to the view, never the other way around.
Dirty checking is dynamic. Watchers are created at runtime. | Every component has its own custom change detector. With AOT enabled, change detectors can be generated at build time. Change detection code is VM-friendly.
Change detection in AngularJS is a topic that comes up often during interviews for AngularJS developer positions. If you’re interested, check out this article for more AngularJS interview questions.
This time we will focus on the essence of functional-reactive programming. Let's see how we can reinvent the way we look at how data flows in our program. Check out the first part of this article where I explain the AsyncPipe. Credits to Piecioshka whose questions inspired me to look into RxJS in more detail!
So far we've only seen one kind of Observable - the one returned by the HttpClient. As mentioned, it's not a particularly exciting Observable. It would only contain a single event - the arrival of the HTTP response from the server. However, one may argue that there are many other streams of events out there in a GUI application. Consider a button with some click handler. A sequence of button clicks can be looked at as a stream of events - hence, an Observable. It's very easy to create an observable from a button in Angular:
1 | ({ |
We've created a Subject and call next on it after every click. This makes subscribers run the handler on every click. The type of clickStream is Observable<any> because we're not interested in any data associated with the click but merely with the click itself. Nothing exciting here so far. We could've accomplished exactly the same with a regular event handler.
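The wiring described here might look roughly like this (a sketch; the selector, template and property names are made up, and the imports follow the RxJS 5 style of that time):

import { Component } from '@angular/core';
import { Subject } from 'rxjs/Subject';
import { Observable } from 'rxjs/Observable';

@Component({
  selector: 'posts-loader',
  template: `<button (click)="clickSubject.next()">Fetch posts</button>`,
})
export class PostsLoaderComponent {
  // Every click pushes a value into the subject, turning clicks into a stream
  clickSubject = new Subject<any>();
  clickStream: Observable<any> = this.clickSubject.asObservable();
}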
As the button caption suggests, let's fetch some data whenever the button is clicked. We could do it this way:
1 | ngOnInit() { |
But there are issues with this approach. It doesn't allow us to take advantage of the AsyncPipe we've talked about in the previous post. Besides, we would have to take care of unsubscribing from all the Observables created with button clicks. What we would much prefer is to somehow combine the click stream with the inner Observable. It turns out that RxJS lets us do exactly this!
1 | import "rxjs/add/operator/mergeMap"; |
We’ve used the mergeMap operator. It takes a function that for each value produced by an Observable creates a new Observable. It then merges all the resulting Observables into a single one which we can now safely use with the AsyncPipe!
On each click our clickStream Observable produces an (empty) value. We take this value and call httpClient.get which gives us an Observable that will produce a single value. If we merge these Observables we will get a stream of values returned from the server.
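In code, that could look like this (a sketch; the URL is illustrative and httpClient is assumed to be injected into the component):

import 'rxjs/add/operator/mergeMap';

ngOnInit() {
  // Each click is mapped to a fresh HTTP request; mergeMap flattens
  // the resulting Observables into a single stream of responses
  this.postStream = this.clickStream.mergeMap(() =>
    this.httpClient.get('/api/posts')
  );
}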
I hope you agree that the mergeMap approach is much more readable and concise than the nested subscribe approach. However, this is not the only benefit. Having our data fetching mechanism represented as a single observable allows us to use the whole arsenal of RxJS operators. A very common use case would be to make some sort of safeguard that would prevent the user from firing a bazillion HTTP requests by clicking furiously on the button. With RxJS it's super easy!
1 | // import "rxjs/add/operator/debounceTime"; |
The debounceTime operator waits 500 milliseconds after each button click. If a new click comes within this timespan, it drops the previous one. Thanks to that, we will only make the request for the last click. Imagine implementing this without RxJS… Another approach would be to use the switchMap operator.
1 | this.postStream = this.clickStream.switchMap(() => |
It works like mergeMap, but if there is a new click while the previous request is still not resolved, the previous request will be dropped (cancelled).
I wanted to show you that with RxJS we can change the way we think about the data flow in Angular applications. Doing this allows us to make use of some interesting RxJS operators but it also lets us eliminate mutable state from our component. Such components are easier to maintain and harder to break.
As a quick reminder, RxJS is a functional-reactive programming JavaScript library. It helps you write maintainable, readable code by allowing you to express your application as a manipulation of streams of events. If you have no experience with RxJS, I recommend reading the part of my course dedicated to it.
It’s especially easy to overlook RxJS support in Angular when you are coming from the AngularJS (1.x) background where asynchrony was based on promises. For example, the $http service in AngularJS returned a Promise which would resolve once the remote server responded to our request. In Angular (2+) it’s still possible to work in exactly the same way. The HttpClient service (Http service before Angular 4.3) returns Observables. However, Observables are easily convertible to Promises with the toPromise operator. In some cases, that’s ok. However, there are cases where we can benefit from sticking to Observables. What does it actually mean that HttpClient returns an Observable? An Observable represents a stream of events. Given a callback (provided as an argument to subscribe call) it will run it every time a new event is produced. When we make a remote server call our “stream of events” contains only a single event - the response coming back from the server. It’s therefore a very specific kind of Observable, but an Observable nonetheless.
Besides having Observables baked into the API, Angular also supports consuming them with the async pipe. Let’s see an example.
1 | ({ |
Pay attention to the template. We are binding to the bands property. Since it's an Observable, we can't bind to it directly. However, the AsyncPipe comes to the rescue. If it weren't for the AsyncPipe, we would need to manually subscribe to the Observable. What's more, we would need to remember to unsubscribe from it when the component is destroyed.
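For reference, a component along these lines might look like this (a sketch; the Band interface and the URL are made up):

import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs/Observable';

interface Band {
  name: string;
}

@Component({
  selector: 'band-list',
  // AsyncPipe subscribes for us and unsubscribes when the component is destroyed
  template: `
    <ul>
      <li *ngFor="let band of bands | async">{{ band.name }}</li>
    </ul>
  `,
})
export class BandListComponent implements OnInit {
  bands: Observable<Band[]>;

  constructor(private httpClient: HttpClient) {}

  ngOnInit() {
    this.bands = this.httpClient.get<Band[]>('/api/bands');
  }
}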
You need to be careful when using the Async pipe, though. Let’s have a look at the following example.
1 | ({ |
Here we are fetching a single post and want to display its details. Hence, we use the AsyncPipe twice. Surprisingly, if we check the Network tab in our browser's developer tools, we will discover that two HTTP requests have been made instead of one.
To explain this we need to understand the difference between hot and cold observables. Cold observables are the ones that “trigger” the observed stream when they are being subscribed to. HttpClient returns cold observables. We are using the AsyncPipe twice here which invokes the subscription twice. In consequence, an HTTP request is fired twice. On the other hand hot observables are ones that are already “triggered”. When we subscribe we will only see new events. The fact of subscribing has no side effects. It’s possible to fix our problem here by converting the cold observable to a hot one. However, it has its drawbacks too. The best option is actually to use good old Promises.
1 | this.post = this.httpClient |
In this post you have seen how to deal with Observables returned by the HttpClient service in Angular. I have also shown that it sometimes makes sense to fall back to plain Promises. In the next part we will unleash the real power of RxJS by combining our Observable with another event stream.
You need to be extra careful when using long-running jobs in connection with the DisableConcurrentExecution attribute.
Some time ago I ran into an interesting problem at work. I was using Hangfire to process requests from a queue. Users could add requests to the queue and then a Hangfire job would run every 5 minutes, take them off one by one and execute them. Processing of a single request was quite lengthy - it took about 2 minutes. The way I implemented it was to load all pending requests and execute them in a single run of the job. What's more, I wanted the requests to be processed sequentially. I applied the DisableConcurrentExecution attribute in order to make sure that there is only a single instance of the job running at a given time. The problem materialized itself when I added several hundred requests to the queue. After some time the job started throwing the following error:
1 | Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool |
What happened was the following:
It was a nasty issue and took some time to figure out. We ended up with a workaround and increased the connection pool size because we knew that this huge batch of requests was a one-off thing. However, the whole design turned out to be flawed. It would be a much better idea to have more fine-grained jobs and process a single request in a single job execution. And this is my key takeaway from this bug story.
Learn how Angular works under the hood and how to significantly decrease the size of the bundle with your Angular application. Enjoy!
Given that applications are becoming more and more complex, it might become tricky to manage these streams in a traditional way (with callbacks). Reactive programming is a programming paradigm in which streams of data are central and therefore it's much easier to work with them. RxJS is a functional reactive programming library. It means that it leverages functional techniques to facilitate dealing with event streams. In simple words, it lets you use the same operations that you learned to perform on arrays on event streams.
This post is part of the Functional Programming in JavaScript series.
RxJS introduces the concept of observable. An observable represents a stream of data (or events). Given an observable you can subscribe to it. When subscribing you provide a callback which will be fired every time a new item is popped out of the stream. Let’s take a text input field and create an observable that will represent the stream of keys typed into it.
1 | <script src="https://unpkg.com/@reactivex/rxjs@5.0.0-beta.12/dist/global/Rx.js"></script> |
1 | var textInput = document.getElementById("textInput"); |
As you can see, it’s pretty straightforward. RxJS allows us to easily convert classic JavaScript events to observables. Let’s now subscribe to this observable.
1 | keyStream.subscribe(e => console.log("Key pressed: ", e.key)); |
I promised that RxJS has something to do with functional programming. Do you remember the filter operation? It would take an array and a predicate function and filter out elements which don’t satisfy the predicate. With RxJS you can treat the stream of events as if it were a regular JavaScript array. Can you guess the result of applying filter to our keyStream?
1 | keyStream |
Calling filter on an observable creates a new observable which will only emit events that satisfy the predicate. In the above example we’re only passing on key presses if they are capital letters.
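For instance, a capital-letter filter could be sketched like this, using the global Rx object from the script tag above (the predicate is just one reasonable way to express the check):

const textInput = document.getElementById('textInput');
const keyStream = Rx.Observable.fromEvent<KeyboardEvent>(textInput, 'keydown');

keyStream
  // Only let capital letters through
  .filter(e => e.key >= 'A' && e.key <= 'Z')
  .subscribe(e => console.log('Capital letter pressed:', e.key));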
Other array operations that you’ve learned such as map or reduce can also be applied to observables.
Let’s say that every time a user types an upper case character we would like to perform a search using some REST API. We can do this elegantly using RxJS! We need to start thinking in terms of streams. We have a stream of key presses. Let’s transform it into a stream of search results coming from the REST service. What would we use if we wanted to transform an array-of-something to an array-of-something-else? We would use map. And this is exactly what we will use now.
1 | keyStream |
We use fetch to make the call to the backend server. However, fetch returns a promise, so what we've done so far is transform the stream of keys into a stream of promises. Not exactly what we wanted. Fortunately, it's trivial to convert a promise to an observable.
1 | keyStream |
There is one more problem with this though. At the moment we’re mapping each key press to an observable. Therefore, we’ve transformed an observable of keys to an observable of observables! In other words, for each key we will get a separate observable. It’s not perfect. We would much rather interact with a single observable instead. For that we need to combine the observables from the stream into a single one. Fortunately, there is an operator for that!
1 | keyStream |
flatMap takes a stream and a function mapping each event to another observable. Then it combines all of the resulting observables into a single one.
flatMap is an extremely important operation in the world of functional programming. It is a very high-level abstraction which allows you to combine structures. For example, there is a variant of flatMap for array operations in the lodash library. Its purpose is to take an array of arrays and combine the nested arrays into a single array. Can you see the pattern emerging?
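Putting the whole pipeline together (a sketch; the search URL is made up):

keyStream
  .filter(e => e.key >= 'A' && e.key <= 'Z')
  .flatMap(e =>
    // fetch returns a promise; fromPromise turns it into an observable,
    // and flatMap merges all those observables into one stream of results
    Rx.Observable.fromPromise(
      fetch(`/api/search?q=${e.key}`).then(response => response.json())
    )
  )
  .subscribe(results => console.log('Search results:', results));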
There are many other interesting operators in RxJS which are not rooted in functional programming but are definitely worth exploring. Let's finish off our example with one more improvement. At some point we might realize that calling the REST service after every key press is killing performance in our app. One possible solution is to only fire the request after some time has passed since the last key press. This would be a nightmare to implement with callbacks or promises. With RxJS it's a matter of a single line:
1 | keyStream |
debounceTime will do exactly what we want. It will wait 500 milliseconds after each key press and only emit the key press if no further key presses occur within those 500 milliseconds.
I’ve just scratched the surface of RxJS in this post. There are many other interesting operators which I encourage you to explore. Most importantly, it should all be much easier now once you are familiar with array operations. Once again you can see how universal the ideas in functional programming are. If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
This post is part of the Functional Programming in JavaScript series.
Every web application in JavaScript has state. State is the sum of all data stored in memory during execution of the application. Every list of objects that you fetch from some backend API is part of the state. Variables that indicate whether some UI component should be visible or not are part of the state. Information about the currently logged user is part of the state. In a traditional Single Page Application the state is distributed across different places and can be updated from many different places. Therefore, it’s quite easy to lose control over the state. At some point, when making another change to your application, you might be surprised to learn that this particular variable is being updated by some method that you totally forgot about.
Redux says that you should store the application state in a single object. It can be a complicated, deeply nested object. The important thing is that the state is stored in a single location (as opposed to being distributed across the whole application). Another rule imposed by Redux is that the state should be immutable - you should never change anything in it manually. Instead, whenever you wish to update something you should return a new copy of the state. Redux also says that all changes to the state should be initiated by actions. An action could be for example clicking on a button or receiving some data. The action can contain some additional data. So, you have a state and an action object. Given these two you should produce a new state object. Redux says that a function which takes a state and an action and produces a new state is called a reducer.
Let’s have a look at some real example. You are working on a bookstore application. Such application would store a list of books as part of its state. Therefore, the state object could look like this:
1 | const state = { |
One of the possible actions a user can perform is to buy a book. Let's see how an object describing such an action could look.
1 | const action = { |
Now we need to write a reducer - a function that will take the state and the action and produce a new state object. Given a list of books with quantities and an action object saying which and how many books are being bought, we need to find the book in the state and decrease its in-store quantity. Keep in mind that we need to return a new state object and not modify the existing one!
1 | function reduce(state, action) { |
Let's go through this code. First, we inspect the action's type - there will be more actions in our application so we need to differentiate between them. Next, we map the existing collection of books to a new collection of books. The new one will be very similar to the old one except it will have a decreased quantity for one of the books. That's exactly what the function passed to map does. For most of the books it will return an unchanged object. However, when it finds a book with a title corresponding to the one in the action, it will produce a new book object with a decreased quantity.
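A sketch of such a reducer, with the object shapes assumed from the description (the action type name "BUY_BOOK" and the amount field are illustrative):

// state.books is an array of { title, quantity } objects;
// the action carries a title and an amount
function reduce(state, action) {
  switch (action.type) {
    case "BUY_BOOK":
      return {
        ...state,
        // Produce a new array; only the matching book becomes a new object
        books: state.books.map(book =>
          book.title === action.title
            ? { ...book, quantity: book.quantity - action.amount }
            : book
        ),
      };
    default:
      return state;
  }
}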
Ok, but what are the benefits of Redux? So far it looks like a complicated way to do simple things. That's actually true. It doesn't make much sense to use Redux when your application's state is very simple. It shows its merit when you have to deal with a very complicated application that would normally be implemented with state distributed in many places. In such a case, an application written the Redux way would be much less prone to introducing new bugs. Representing your application's state as a single object gives you other interesting benefits. For example, you could store every action before applying it. You would then get a full history of your application's state. It would be very easy to travel back in time or see how your application would look if one of the actions hadn't happened (by applying all actions apart from that one). In fact, there are tools which allow you to do exactly that.
You might find the name reducer familiar. Indeed, reducers are very much related to the reduce higher order function which you learned about in one of the previous chapters. Reduce operates on an array and takes a function which accepts the existing accumulator and the next element of the array and returns the new accumulator. That's exactly what a reducer function in Redux does! If you were given an array of actions and a reducer function, then you could call reduce on that list and provide the reducer function as an argument. It's also worth mentioning that functions such as reducers have a special name in functional programming. They are called pure functions. A function is pure if, given some specific arguments, it will always return the same result. It means that the result doesn't depend on any mutable state, as is the case in Redux. Additionally, pure functions cannot cause any mutations themselves - in fact, they cannot produce any side effects at all (e.g. they cannot manipulate the DOM or print to console).
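In code, replaying a history of actions is literally a reduce call (a sketch building on the reducer above; the data is made up):

// Replaying a history of actions is just a left fold with the reducer
const initialState = {
  books: [{ title: "Some Book", quantity: 10 }],
};

const actions = [
  { type: "BUY_BOOK", title: "Some Book", amount: 1 },
  { type: "BUY_BOOK", title: "Some Book", amount: 2 },
];

const finalState = actions.reduce(reduce, initialState);
// finalState.books[0].quantity === 7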
Wait, wasn’t Redux supposed to be a library? The code above doesn’t include any non-vanilla JavaScript calls. That’s true - you can actually build applications the Redux way without using the Redux library! The library itself provides you with some utilities that make it easier to build applications. However, this article focuses on the concepts behind Redux and not on the library itself. Once you understood the concepts you will find learning the library very easy.
This chapter combines lots of ideas from the previous posts and shows you a really powerful concept in functional programming. Redux is a completely new way of building applications and definitely takes some time to get accustomed to, but it can make handling complexity much easier. Note the heavy use of concepts introduced in the previous parts of the course in the code - higher order functions, the object spread operator and immutability. It shows how interconnected the concepts in functional programming are. For me it feels like pieces of a puzzle coming together! If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
Have you noticed the abundance of tutorials, introductions and guides devoted to post-ES6 JavaScript? All these conferences, workshops, bootcamps where you get a chance to learn about async/await or Redux? The books and on-line courses? I think it shows how many people are struggling to grasp all these new JavaScript topics.
The point I’d like to make is that all of these new ideas in JavaScript are not new in terms of Computer Science. Therefore, you can really get ahead of others and future-proof your career by investing time into learning about existing, (superficially) less sexy, programming languages and frameworks.
Source: https://imgflip.com/i/1ar013
If you’ve been writing JavaScript for some time you must have noticed the revolution that is happening for some time. The dawn of ES6 (a.k.a. ECMAScript 2015) made JavaScript a much more complex language. Just look at the list of new language features at es6-features.org - it’s quite impressive given that you’re looking at a new version of existing language and not a completely new language. What’s more, you have to make more and more effort just to keep up with the language - ECMAScript standard will now be updated every year.
Although the 2016’s and 2017’s versions have not been as revolutionary as the 2015’s, the concepts introduced in them are definitely non-trivial. The explosion of new JavaScript frameworks doesn’t help. You can find all the new concepts introduced in Redux or RxJS intimidating even if you are a seasoned JavaScript developer.
There are far too many seemingly innovative concepts in JavaScript to mention all of them here. Instead, let me focus on a few examples just to give you the general idea.
Promises in JavaScript were introduced as a cure to the so-called callback hell. Asynchrony in JavaScript is unavoidable. Before promises the only way to handle asynchronous operations was to pass callbacks as arguments to asynchronous functions (such as loading some data from the backend). Multiple asynchronous operations often mean that you must have multiple levels of nesting of callbacks which makes the code less readable and difficult to maintain.
Promises provide abstraction over a computation that might not have finished yet. Having such an abstraction lets you elegantly chain and compose async operations. This amazing animation illustrates the concept (along with async/await):
The idea of promises is not new at all. In fact, a very similar concept (Tasks) has been present in C# since .NET Framework 4.0 (year 2010). Java had Futures long before that but they weren’t composable.
1 | task |
Above you can see some C# code which uses Tasks (the equivalent of Promises). You have to admit that there are similarities with the JavaScript version. Rest assured that having understood the idea behind Tasks, wrapping your head around Promises is trivial.
Reactive programming has been introduced into JavaScript in the form of the RxJS library. It became quite popular, even making its way into the Angular framework.
Illustration of event stream processing. Source: http://reactivex.io/documentation/observable.html
RxJS introduces the concept of Observables. It's another way to handle asynchrony. Instead of setting up callbacks that should fire when some event happens, you deal with a stream of events. It's very powerful because you can perform different operations on that stream of events - for example filter out some events or join two event streams. An example of an event is someone clicking something in your application. Reactive programming builds on the Observable pattern which has been with us since the dawn of Object Oriented Programming. I could find references to Rx.NET (the reactive extensions library for .NET) as early as 2010.
Besides, reactive operators (such as map, filter or reduce) are derived directly from functional programming and languages as old as Haskell. Below you can find a comparison between reactive code in JavaScript and .NET. Again, there are some minor differences but the concept is identical.
1 | observable.Select(n => n * n) |
The async/await syntax is one more way to deal with asynchrony. It lets you write asynchronous code which looks as if it was synchronous! Here is an example from the Mozilla docs:
1 | async function getProcessedData(url) { |
Async/await became part of JavaScript only recently - as part of the ES2017 standard. It has been present in C# since version 5 of the language (year 2012). Besides, async/await is strongly related to the concept of generators (another JavaScript novelty introduced in ES6). Generators were present in Python since version 2.2 (year 2001).
Finally, an example not related to asynchronous programming. Redux is a library for managing complex state. Instead of representing your state as a mutable object(s) which changes whenever the user interacts or some data comes from the backend, you can look at it as a stream of immutable objects. Each consecutive object represents the new version of the state. New versions are produced by a reducer function which takes the previous version and an event and computes how the event affects the new shape of the state.
This idea borrows heavily from architectural patterns such as event sourcing and CQRS (as admitted by Redux creators). This article from 2005 explains event sourcing in detail. I can’t point to any particular language here but chances are that if you have done any backend programming you might have heard about event sourcing long before it made its way into the JavaScript world.
These are just a few examples of seemingly new ideas in JavaScript which have actually been around for some time. I make a lot of references to .NET and C# because it is the language I know best but in fact C# is pretty progressive in terms of introducing new language features. I hope you feel convinced that spending some time to broaden your horizons and learn some backend language is totally worth it. If not, let me know your thoughts in the comments section! If you liked the article please consider sharing it.
Fortunately, Immutable.js comes to the rescue. It is a library which makes working with immutable objects much more natural. In this post we will learn how to use the library and how we can benefit from it.
This post is part of the Functional Programming in JavaScript series.
Immutable.js introduces a few immutable data structures. Let's focus on Map which is an immutable version of a plain JavaScript object. The name comes from the world of algorithms and data structures where we use it as a name for a structure which maps some keys to some values (just like a JavaScript object). It's very easy to initialize a Map from a plain JavaScript object:
1 | const product = Immutable.Map({ name: "Product", quantity: 10 }); |
We must not use the assignment operator with a Map object's properties. Instead, we should call the set method on it.
1 | const product = Immutable.Map({ name: "U-lock", quantity: 10 }); |
As expected, set will not mutate the original object. Instead, it will return a new, updated object. We can easily return to the world of plain JavaScript objects with toJS:
1 | updatedProduct.toJS(); |
There are many interesting operations available on Map. For example, setIn works great with deeply nested objects. Imagine having an object with several levels of nesting and having to update a property residing in a deeply nested object.
1 | const employee = { |
First of all, note that we've used fromJS instead of Map. This is because Map only works shallowly - when it sees a property that's a reference to an object, it doesn't convert it to a Map but just leaves it as it is. On the other hand, fromJS will always traverse all levels of nesting. As you can see, setIn takes an array of strings defining the path of properties leading to the value we are interested in. The second argument is the value we want to set the property to. Obviously, setIn doesn't mutate the object but returns a fresh copy instead.
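For example (a sketch; the object shape is made up, using the global Immutable object as in the snippets above):

const employee = Immutable.fromJS({
  name: "John",
  address: { city: "London", street: "Oxford Street" },
});

// Returns a new structure; the original employee is left untouched
const moved = employee.setIn(["address", "city"], "Manchester");

console.log(moved.getIn(["address", "city"]));    // Manchester
console.log(employee.getIn(["address", "city"])); // London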
We haven’t yet discussed immutability of arrays. In the previous chapters we’ve talked about how to avoid mutating objects. What about arrays? Basic array operations in vanilla JavaScript are mutable - they update the array in-place:
1 | const numbers = [1, 2, 3, 4]; |
An immutable version of push would have to clone the array first and add the element to the new copy. We’ve already discussed a similar approach when talking about sort in the chapter about lodash.
1 | function immutablePush(arr, newElement) { |
The ES6 spread operator lets us do this in a more concise way. It’s like the object spread operator - it “unwraps” array elements and lets you put them directly in another list.
1 | const newNumbers = [...numbers, 7]; |
But how about adding an element in the middle of an array? It gets a little messy.
1 | const newNumbers = [...numbers.slice(0, 2), 7, ...numbers.slice(3)]; |
Immutable.js simplifies immutable array operations by introducing a List object. List has methods similar to an array's, but all of them are immutable and return a fresh copy instead of mutating the list.
1 | const numbersList = Immutable.List([1, 2, 3, 4, 5]); |
List has many useful methods which make working with it in an immutable way easier than working with plain arrays.
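For instance, here is a small sketch of some common List operations:
1 | const numbersList = Immutable.List([1, 2, 3, 4, 5]); |
2 | const pushed = numbersList.push(6); // List [1, 2, 3, 4, 5, 6] |
3 | const inserted = numbersList.insert(2, 7); // List [1, 2, 7, 3, 4, 5] |
4 | const shorter = numbersList.delete(0); // List [2, 3, 4, 5] |
5 | numbersList.size; // 5 - the original List is never modified |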
We haven't touched on an important drawback of immutability yet - performance. Cloning objects in JavaScript is expensive. With immutability we have to clone a lot, since we aren't allowed to mutate objects. This is especially painful if you have to deal with large objects and only want to update a single property - in such a case you need to re-create the whole object anyway. Fortunately, Immutable.js tries to address the issue with various optimization techniques, including structural sharing. It stores Maps in such a way that when you update some property you get a reference to a new Map which shares some of the data with the old Map. Since Maps are immutable this isn't a problem - we don't need to worry that the shared part will be accidentally modified.
I have created a benchmark in order to measure the performance of Immutable.js. In this benchmark I create an array of 100000 moderately complex objects. Next, I compare the performance of changing a single property of such an object in one of 3 ways: by mutating it directly, by cloning it with the object spread operator, and by calling set on an Immutable.js Map.
We can see that all immutable options are much, much slower than simply mutating the object. This is because of the cost of cloning objects. However, we can also see that cloning with Immutable.js is two times faster than using the spread operator. The benchmark does not take into account the fact that in order to use Immutable.js our array of objects had to be converted into an array of Maps, which is quite an expensive operation. The results would be much worse if it did. Therefore, it makes sense to use Immutable.js all the way throughout your application in order to avoid converting between Immutable.js data structures and vanilla JavaScript data structures. The benchmark is publicly available. Feel free to play with it: https://jsperf.com/immutability-performance-nested-objects/1
This post was a quick introduction to Immutable.js. We've learned about immutable data structures such as Map and List, how to perform operations on them and how Immutable.js approaches the performance issues related to immutability. It's understandable if the posts about immutability seem a little abstract so far. In the next post we will fix that - you will see how useful it can be, using Redux as an example. If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
This post is part of the Functional Programming in JavaScript series.
Immutability is a fancy name for a specific rule for writing code. This rule says: never change a value or reference once it has been assigned. There are many reasons why you might decide to use immutability. Most importantly, it makes your code easier to reason about. When working with traditional code with mutations, every time you call some function and pass an object to it, you have to assume that the function might change some property of the object you've passed. Such changes might be surprising, especially when some other developer is working on that function.
If you decide to embrace immutability in your code base you can be sure that if you pass an object to a function, none of its properties will be changed. Less surprises and less possibilities of error. Let’s see an example:
1 | function setShipmentAddress(person, product) { |
Assume that we are using this function somewhere in our code. It’s part of a common library and another developer is responsible for it so we don’t bother looking at the source code. One day, the developer is told that the product’s address should always be in upper case. Let’s assume he’s not terribly careful and decides to implement it this way:
1 | function setShipmentAddress(product, person) { |
You don't bother to look at the source code and start using the function straight away. Initially, it looks good - the function has assigned an uppercase address to the product, just as required. However, after deployment to production you start receiving bug reports - why is the person's address displayed in upper case? Oh, wait…
All this mess could be avoided if you agreed on the immutability rule. In such a case the other developer would have to implement setShipmentAddress in such a way that it would not mutate any of the input objects. Instead, it would return a fresh product object with an updated address. You could then assume that this function will never change either product or person, so the above situation would never happen!
1 | function setShipmentAddress(product, person) { |
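One possible immutable implementation, sketched with Object.assign (which we will look at in a moment; the shipmentAddress property name is my assumption):
1 | function setShipmentAddress(product, person) { |
2 |   // return a fresh product; neither input object is touched |
3 |   return Object.assign({}, product, { |
4 |     shipmentAddress: person.address.toUpperCase() |
5 |   }); |
6 | } |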
You can see that with immutability we have a clear separation of a function's inputs and outputs. This is a very simplified example. In reality such unintentional side effects can be much more subtle and have much worse consequences. Let's see how to avoid them altogether thanks to immutability.
We will start small and have a look at simple variables. Variables in JavaScript are obviously mutable - once you assign an object or value to a given variable, you are free to change it at any point in the future. ES6 introduced a new keyword to the language: const. It is meant to be used in place of the var keyword. Declaring a variable as const means that you cannot change it in the future!
1 | const a = 10; |
Running the above code results in an error:
1 | Uncaught TypeError: Assignment to constant variable. |
You might be wondering why anyone would use such a thing. The main advantage is that it helps you avoid situations in which you accidentally change an existing variable. What's more, it expresses your intentions better. If you mark a variable as const and then another developer comes and wants to change your code in a way that would require reassigning that variable, they will realize that wasn't your intention. They will think twice before making the change.
Let’s look at a code piece from one of the previous chapters:
1 | var safetyProducts = _.filter(products, p => p.category === "Safety"); |
Here we declare three variables and each of them is used only once. It will never make sense to re-assign to them, so we can safely change them to const.
1 | const safetyProducts = _.filter(products, p => p.category === "Safety"); |
Note that this code piece was already written in functional style. Using const often makes sense when dealing with functional code. One final remark about const is that while it guarantees that the variable cannot be changed, it doesn’t say anything about the object (or array) assigned to that variable. Therefore, this code would be perfectly legal:
1 | const product = { |
This is because the variable stores a reference to an object. We only mark the reference as immutable but not the object.
How do we implement immutable objects in JavaScript, then? We can use the Object.freeze method. Its job is to mark all properties as read-only and to prevent adding new ones.
1 | const john2 = { |
As we can see, the age property was not changed. If we had enabled strict mode, this code would throw an error.
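A minimal demonstration of freeze in action:
1 | const john = Object.freeze({ name: "John", age: 30 }); |
2 | john.age = 31; // silently ignored (throws a TypeError in strict mode) |
3 | console.log(john.age); // 30 |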
OK, but how can we implement "changes" to a frozen object? Instead of mutating it, we will simply return a new copy with the changes applied. How do we implement this, though? One of the ways would be:
1 | function buyProduct(product) { |
However, imagine having an object with 50 properties. We would need to rewrite them all!
Object.assign
Let's have a look at some more clever ways to do this. The first option is to use the Object.assign method introduced in ES6. Object.assign takes a target object and one or more source objects and copies all properties from the source objects to the target object.
1 | const product = { |
This code will print the following result to the console:
1 | {name: "U-lock", quantity: 10, category: "Safety"} |
Wait, but we are mutating the product object here, right? That's correct. However, we can use a little trick to return a new object instead of mutating the existing one. We can simply specify a new, empty object as the target object:
1 | const categorizedProduct = Object.assign({}, product, { category: "Safety" }); |
Here we are providing two source objects and an empty target object. All of the properties from the source objects will be copied to the empty target object. As a result we get a fresh object with all the properties of product plus the new category property. It's important to note that Object.assign copies properties shallowly. If one of the sources contains a nested object, then it will not be cloned. Is it good or bad? It depends on our use case.
1 | const john = { |
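To make the shallowness concrete, here is a small sketch of the pitfall:
1 | const person = { name: "John", address: { city: "Warsaw" } }; |
2 | const copy = Object.assign({}, person); |
3 | copy.address.city = "Krakow"; |
4 | console.log(person.address.city); // "Krakow" - the nested object is shared! |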
Object spread operator
The object spread operator is actually not part of the JavaScript specification (yet). It is part of a proposal and will likely make it into one of the future JavaScript versions. However, it's already supported in most modern browsers. It is an even more convenient way of applying changes to objects without having to mutate them. Let's see an example:
1 | const alice2 = { ...john, name: "Alice" }; |
This new syntax can be roughly translated as: take all properties from the john object, combine them with the new name property and put it all in a new object. The three dots are applied to an object in order to "unwrap" its properties. Since such "unwrapped" properties cannot live on their own, they should always be put within curly braces. But curly braces always create a new object, hence we will get a fresh object with copied properties. We don't even need to add any new property - this line would simply clone john:
1 | const johnCopy = { ...john }; |
Just as in the case of Object.assign, the spread operator copies properties shallowly, so we have to pay special attention to nested objects.
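If we need a fresh copy along a nested path, each level has to be spread explicitly - a sketch:
1 | const johnMoved = { |
2 |   ...john, |
3 |   address: { ...john.address, city: "Krakow" } // clone the nested object too |
4 | }; |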
In this post we’ve discussed four techniques which can help us write immutable code: constants, Object.freeze, Object.assign and object spread operator. The last two techniques do not enforce immutability but rather make it easier to implement. I hope you agree with me that there are some benefits to using immutability. If you’re still not convinced, bear with me until we talk about Redux which unleashes the full potential of immutability. Before that, we will take a look at Immutable.js - a library which can make writing immutable code feel much more natural.
If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
This post is part of the Functional Programming in JavaScript series.
The best place to explore functions available in lodash is the documentation. You will notice that there are many expandable sections on the left hand side. For now let’s focus on Array and Collection.
Lodash methods are not available directly on the array object. This could be achieved with JavaScript's prototypal inheritance, but it's not considered good practice to extend native prototypes (actually, it's disputable, but lodash's creators decided not to do it). Therefore, in order to use a lodash method we need to call it on the global _ object. You may note that this makes chaining less convenient, but there is a cure for that - we'll look at it at the end of this post. Let's see a usage example:
1 | var numbers = [1, 2, 3, 4, 5]; |
The drop function takes an array and a number of elements to drop. It returns a new array that doesn't contain the first n elements. As a side note, it's actually not a higher order function since it doesn't take a function as an argument.
Let's consider the following requirement: we're running a bike parts shop. We are given a list of items in the customer's shopping cart. We should validate that the customer is ordering at least one piece of each item. All of the imperative approaches to this problem I can think of are a little clumsy. We could either iterate over the items while maintaining a boolean flag which we set whenever we encounter an item with zero quantity, or extract the loop into a helper function and return from it early once such an item is found.
Let’s see how we can use lodash in order to solve it in an elegant way:
1 | var basket = [ |
The every function takes a function that evaluates to true or false, applies it to all elements and returns true only if it returned true for all of them. In other words, it checks whether every element satisfies the given condition. In fact, what we want to do is to check if it is not true that every element satisfies the condition. Equivalently, we can check whether there are some elements that don't satisfy our condition (the condition being quantity greater than 0).
1 | if (_.some(basket, item => item.quantity == 0)) { |
This results in even cleaner and more readable code. Doesn't it feel a bit like writing natural language?
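Side by side, the two equivalent checks might look like this:
1 | if (!_.every(basket, item => item.quantity > 0)) { |
2 |   console.log("Some items have zero quantity!"); |
3 | } |
4 | // ...or, arguably more readable: |
5 | if (_.some(basket, item => item.quantity === 0)) { |
6 |   console.log("Some items have zero quantity!"); |
7 | } |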
Here comes another requirement! We’ve received a list of available products from some backend API. It would be nice to display them in separate boxes based on the category they belong to. The imperative solution is particularly verbose:
1 | var products = [ |
We create an empty object (productsByCategory) in which we will store the results - keys will represent categories and under each key we will store an array of products. Next, we iterate over the products. For each product we check whether we already have an entry for its category in the productsByCategory object. If we don't, we create it and initialize it with an empty array. Finally, we add the product to the list under its category. The result will look like this:
1 | {Safety: Array(2), Basics: Array(1)} |
I bet you’re expecting the functional version to be much simpler - and it is. Yes, it’s a single line again.
1 | var productsByCategory = _.groupBy(products, product => product.category); |
The function takes an array and a function which determines how to group the elements of that array. The grouping function is evaluated for each element. Elements for which it returns the same value are packed into the same group. Finally, an object is returned whose keys are the unique values returned by the grouping function applied to all of the elements.
The grouping function does not have to be a simple property selector - we can put any sort of expression in it. For example, it’s trivial to split products into groups based on the length of their names:
1 | var productsByNameLength = _.groupBy(products, product => { |
The last useful function I'd like you to look at is orderBy. As the name suggests, it lets you sort an array. Vanilla JavaScript already has a sort function built in. orderBy is more convenient to use and more functional in nature. Firstly, while sort orders the array in-place, orderBy doesn't modify the existing array but returns a fresh copy. Secondly, sort takes a comparator function - a function which takes two elements and compares them. orderBy is more consistent with the other functions we've looked at - it takes a function which selects the value to use for ordering. Let's say that we would like to have a fresh copy of our products array sorted by quantity. Without lodash we would need to do the following:
1 | var sortedBasket = basket.slice(0); |
In the first line we use slice in order to clone the array. slice is for cutting out slices of an array; we can tell it to cut out the whole array as a single slice, which gives us a copy of the array. Next we call sort, providing a comparator function. The comparator takes two elements and returns a negative number if the first element is smaller than the second, a positive number if the second is smaller, and 0 if the elements are equal. By subtracting the second element's quantity from the first's we achieve the desired behavior. Let's now see the lodash solution:
1 | var sortedBasket = _.orderBy(basket, item => item.quantity); |
Here we just specify that we want to use the quantity property for sorting. The original array remains as it was. You might be wondering why it can be important not to change the original array. One important aspect of functional programming is immutability. The idea is to never modify existing objects but instead return new copies. It might sound wasteful, but in fact it has many advantages. We will take a deeper look at immutability, along with practical examples of when it's required not to sort an array in-place, in the posts to come.
I've already mentioned that with the lodash syntax it's no longer as easy to chain method calls as with vanilla JavaScript. However, lodash has solved this problem. Let's see an example:
1 | var safetyProducts = _.filter(products, p => p.category === "Safety"); |
We have three consecutive lodash calls here. It would be nice to chain these calls so that the data flow would be more readable.
1 | var bigQuantities = _.chain(products) |
Here we use the chain method to wrap our array in a special object that knows about all the lodash methods. Next, we can simply call lodash methods directly on this object. At the end we need to unwrap our array by calling value.
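A complete chain might look roughly like this (the concrete transformations are only an example):
1 | var bigQuantities = _.chain(products) |
2 |   .filter(p => p.category === "Safety") |
3 |   .map(p => ({ name: p.name, quantity: p.quantity })) |
4 |   .filter(p => p.quantity > 5) |
5 |   .value(); // unwrap the result |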
And that's all about lodash. Of course, there is much, much more to it and I strongly encourage you to explore it using the documentation. Since you already know the concept of higher order functions, understanding the remaining functions will be much easier for you. I hope you find higher order functions on arrays useful and fun. They're really worth learning and understanding since the concept is very widespread. The investment will pay off in unexpected places (for example when learning RxJS).
How do you start using functional array operations? From now on, every time you find yourself writing a for loop, stop for a moment and try to figure out whether you can write it in a more functional style. The cases when it's not possible or beneficial are very, very rare. This post concludes the chapter about array operations. Next we will take a look at immutability in JavaScript!
If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
This post is part of the Functional Programming in JavaScript series.
I will describe a real-world problem which my colleague was dealing with at work. He was working on a filtering mechanism where the user could define conditions on a column in a table. The column would usually be mapped to a property of an object. However, it should also support nested properties. At some point in his code, he had to solve the following problem: given a string representing the path of nested properties in an object, return the value of that nested property. We can't assume anything about the length of the path - properties can be arbitrarily nested. For example, given the path author.address.city.size and the object:
1 | { |
This function should return “large”. As usual, let’s first approach the problem in a traditional, imperative way.
1 | var path = "author.address.city.size"; |
Firstly, we split the path so that instead of a single string we deal with an array where each element is a single property name of a consecutively nested object. Next, we initialize a helper variable current to the object which we wish to inspect (book in our case). This variable stores the currently nested object as we descend deeper and deeper into the structure. We iterate over the path parts and use each part to go one level deeper. Once we are done, we end up with the desired value. And here comes the functional version:
1 | pathParts.reduce((current, currentPart) => current[currentPart], book); |
A single line! Isn’t it awesome? Ok, I cheated a bit by omitting some code, but it’s still one line versus four.
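For completeness, the omitted code is just the path splitting; the whole solution might look like this:
1 | var path = "author.address.city.size"; |
2 | var pathParts = path.split("."); |
3 | var value = pathParts.reduce((current, part) => current[part], book); |
4 | console.log(value); // "large" |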
With the previous higher-order functions (forEach, map and filter) we passed a function to be applied to each element. This time it is a bit different. We pass a function which takes two arguments - an accumulator and the currently processed element. What is the accumulator? It is a value which represents the intermediate result of processing. Reduce looks at consecutive elements of the provided array. The accumulator should always hold a valid result for all elements processed so far. In other words, the accumulator accumulates the results of processing consecutive array elements.
It's easiest to understand when compared with the imperative code above. The current variable is actually an accumulator. In every loop step, we take the current level of nesting (stored in the accumulator) and use it to get to the next level of nesting. Then, we store the result as the new accumulator. Reduce follows exactly the same process, but it hides the details of looping and initializing the accumulator. Let's have a deeper look at how reduce should be called. It takes two parameters: the reducer function, which takes the accumulator and the current element and produces the new accumulator, and the initial value of the accumulator.
Let’s have a look at another example in which we will use reduce in a different way. Our use case is much simpler now: take an array of numbers and return a sum of its elements. Pause for a moment now and try to come up with a way to use reduce to solve this problem. Let’s try to figure out what should we store in the accumulator. I’ve already said that for every step the accumulator should store the valid result for the currently processed array elements. In our case the result is the sum of currently processed elements - that’s exactly what we will store in the accumulator. Next, let’s see what the reducer function should do.
As we know, it should take the current array element and the accumulator and produce the new accumulator. Since we agreed that the accumulator will store the sum of elements processed so far, in order to produce a new accumulator we simply need to add the current array element to the old accumulator! Finally, the initial accumulator value: since we're going to add array elements to it, it's best to initialize it simply to 0. Wrapping up, this is what our reduce call should look like:
1 | var numbers = [1, 2, 3, 4, 5]; |
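For reference, a minimal sketch of the full call:
1 | var numbers = [1, 2, 3, 4, 5]; |
2 | var sum = numbers.reduce((acc, n) => acc + n, 0); |
3 | console.log(sum); // 15 |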
In this article I've explained one of the most powerful concepts of functional programming - the reduce function. Reduce has an abundance of applications. Redux is based on exactly this concept. In Redux, we write reducer functions which take the current state and an action and produce a new state. Does it sound familiar now? I will dedicate a separate episode of this course to Redux, so stay tuned. Another great example is the MapReduce programming model used in parallel processing of large data sets. I hope I've got you at least a little bit excited about reduce :-)
I will conclude the part about array operations with a post about lodash - a library which extends the collection of higher order functions available in vanilla JavaScript with many more operations.
If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
This post is part of the Functional Programming in JavaScript series.
Arrays are a basic programming construct present in most programming languages. They let us deal with situations where we need to store or operate on multiple instances of some piece of data. As I already explained, imperative programming is all about executing instructions in a sequence. We can use various kinds of loops to deal with arrays. Usually, the loop body describes what to do with each element of an array. Let's have a look at the for loop in JavaScript. The below example iterates over an array and prints each element.
1 | var books = [ "Gone with the Wind", "War and Peace" ]; |
This code looks very familiar. Is there anything wrong with it? The problem that we are solving here can be stated as: print every element of the given array of books. However, there is much more going on in this piece of code: we declare an index variable and initialize it to 0, we check the loop condition before every iteration, we increment the index after every iteration and we access each element by its index.
Won't you agree that while our intent is very simple, the above process is quite complex? What's worse, there are many things that we can get wrong in this code - initialize the index to 1, use <= instead of <, forget about incrementing the index variable. This is because we have to be very specific about how to iterate over the array and it's quite easy to miss something. We could rewrite this code using the while loop. However, it wouldn't really buy us anything since the process would be roughly the same, only written in a different way. Actually, using the for…of construct from ES6 would be a big improvement in readability while still being imperative code. I'll talk about it in one of the future posts. For now, let's skip over it.
As I’ve already mentioned in the first post of the series, we can rewrite this code in a more functional way:
1 | books.forEach(function (book) { |
What's going on here? We're calling the built-in forEach method on the books array. forEach is kind of special because it accepts a function as one of its parameters. In functional programming, we have a special name for functions which accept other functions as parameters (or return them) - higher order functions. But what does forEach actually do? It runs the given function on every element of the array. So basically, it does exactly the same thing as the for loop above, but it hides the ugly details of having an index variable and incrementing it. Isn't that cool? Let's break it down a bit. Instead of using an anonymous function, let's use a named function:
1 | function printBook(book) { |
Now, calling books.forEach(printBook) is essentially equivalent to:
1 | printBook(books[0]); |
I hope you agree that the forEach loop improved our code on multiple levels. However, we can observe a much more spectacular improvement with the following example. Let’s have a look at the following use case: given an array of post ids, return an array of post objects. Let’s assume that we have a REST API available that will return a post object given a post id (we will use an actual API from https://jsonplaceholder.typicode.com).
1 | var webApiUrl = "https://jsonplaceholder.typicode.com/posts/"; |
In the above piece, we iterate over the array of ids. For each id we call the REST API using fetch and get a promise object in return. We push this object to a special array, postPromises, where we store the results. Finally, we call Promise.all which transforms the array of promises into a single promise of the desired array of posts (if you're not familiar with promises, here is a good read). That seems like a lot of code for such a simple thing! And we have even more "imperative code overhead" - not only do we need to maintain the index variable, but we also have to create an array to which we add the results of the fetch calls, one by one. Let's now see a functional solution to the same problem.
1 | var webApiUrl = "https://jsonplaceholder.typicode.com/posts/"; |
So what does map do? It takes an array and transforms it into another array by applying the given function to each element. Let's see how it is different from forEach: forEach runs the given function purely for its side effects and returns nothing, while map collects the values returned by the function into a brand new array, leaving the original array intact.
So far the functions we've discussed operated on all elements of an array. What if we would like to apply some operation only to some elements of an array? In other words, what if we wanted to filter out some elements? As usual, let's see an example of an imperative approach first. We will extend the example from the previous post: given an array of post ids, return an array of posts but only for even ids. In the imperative version, we can simply filter out odd ids by adding an if statement:
1 | var webApiUrl = "https://jsonplaceholder.typicode.com/posts/"; |
Let’s now have a look at a functional version:
1 | var webApiUrl = "https://jsonplaceholder.typicode.com/posts/"; |
We've added a call to the filter function. The filter function takes a filtering function and applies it to every element of an array. If the value returned by the filtering function is false, then the given element is not included in the resulting array. In other words, the filtering function decides which elements should stay and which should be filtered out.
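Assuming the same ids and webApiUrl as before, the filtering step might slot into the pipeline like this:
1 | var posts = Promise.all( |
2 |   ids |
3 |     .filter(id => id % 2 === 0) // keep only even ids |
4 |     .map(id => fetch(webApiUrl + id).then(response => response.json())) |
5 | ); |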
In the above example, you can already see how easily we can chain invocations of higher order functions. This is because every such function is invoked on an array and returns an array. With such code, the data flow is immediately visible and easy to understand - it's easy to see how the array is transformed, step by step.
In this post we've discussed three very useful functions which allow you to better articulate your intent with code. What's more, using these functions might result in less buggy code - they present fewer opportunities to introduce a bug than traditional, imperative code. The forEach example was basic and rather theoretical, while the other examples were taken from real-world situations. In the next post, we will continue exploring the world of higher order functions by looking at reduce.
If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
This post is part of the Functional Programming in JavaScript series.
Recently, we've observed a massive trend of embracing functional programming. Many object oriented languages started to incorporate functional programming features (e.g. streams in Java, pattern matching in C#, etc.). In the front-end world there is an abundance of examples of how functional programming concepts made their way into popular frameworks: Redux with its reducer functions and immutable state, RxJS with its streams of events, async/await built on top of generators, and more.
In my experience, the above topics are difficult to grasp for people who are not familiar with functional programming at all. Learning the functional ways gives you a great head-start when dealing with these concepts. And given that the trend is definitely in favor of FP, we may anticipate even more FP concepts coming into the JavaScript world. Obviously, there are more benefits to FP. Using functional techniques usually results in cleaner, less buggy, self-descriptive code. Besides, it is really fun and intellectually rewarding.
Whether you like it or not, you are probably going to use functional programming to some extent. Why not take initiative and learn it properly?
Many people associate programming with writing down instructions telling the computer what to do. This is just one of many approaches to programming. We call this approach imperative programming. In imperative programming, your primary building blocks are statements. There are many kinds of statements - variable assignments, if statements, for loops and more. Your program executes from top to bottom, taking the statements one by one and running them. Functional programming is different.
In functional programming, instead of specifying exactly what to do by providing a list of instructions, you define your program as a function which takes some input and produces some output. But how can a complex program fit into a function? The answer lies in composition. You break down your program into multiple smaller functions and compose them. As a result, when writing functional code, you don’t need to be as specific as while writing imperative code. Because of that, the code is more readable, less prone to bugs and mistakes and more self-describing.
The below example illustrates what I mean by that. In the imperative way, you need to be very specific about how to use an index variable to iterate over an array. You need to provide details such as how to initialize the variable and how to increment it. There are many places where you can get it wrong and the intent is not that clear by looking at the code. Compare it with the functional piece. We abstract away the details of how to iterate over an array. The code is very readable and catches the intent of the programmer.
1 | var books = [ "Gone with the Wind", "War and Peace" ]; |
Although JavaScript is far from being a pure functional language, it has some support for functional programming and makes writing functional code pretty natural.
Most importantly, functions are first-class citizens in JavaScript. It means that you can assign functions to variables and do with them anything you would do with any other value. In particular, you can pass functions as parameters to other functions or return them from functions. Thanks to that, JavaScript makes it possible to compose functions naturally.
1 | function updateHeader(updater) { |
In the above example, we can see that JavaScript allows us to treat functions as data. We pick one of the functions at random and pass it as an argument to another function.
Another aspect of JavaScript which makes it FP friendly is the support for anonymous functions. In functional programming we create functions all the time, so it wouldn't be very convenient if every function had to be named. Fortunately, in JavaScript we can create functions without naming them.
1 | updateHeader(function (element) { |
Actually, thanks to the ES6 standard we can write anonymous functions in a more concise way:
1 | updateHeader(element => { |
One very useful aspect of anonymous functions is closures. Thanks to the closure mechanism, it's possible to reference variables from outside of the function's scope inside the anonymous function's body. This mechanism is especially useful when returning functions.
1 | function getHeaderUpdater() { |
By returning a function which references a variable from the outer scope, we've created a closure. It means that the returned function has captured the headerEl variable - it will remain in memory even when the control flow exits getHeaderUpdater.
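A sketch of what such a function might look like (the DOM selector is only an example):
1 | function getHeaderUpdater() { |
2 |   var headerEl = document.querySelector("h1"); |
3 |   return function (text) { |
4 |     headerEl.textContent = text; // headerEl is captured by the closure |
5 |   }; |
6 | } |
7 | var update = getHeaderUpdater(); |
8 | update("Hello, closures!"); |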
As we will soon learn, immutability is an important aspect of FP. The idea of immutability is that you shouldn’t mutate (modify) objects. Once you assign some properties to an object, they should stay as they are until the very end. If you need to change some property, you should return a new copy of the object instead of modifying it.
Immutability makes your code much less prone to errors - you can make assumptions about your objects and you don't need to worry that they will be changed from a completely different place in your code. JavaScript doesn't support immutability very well - it was designed as a language for mutating the DOM model. However, ES6 adds support for destructuring, which can be helpful with regards to immutability. What's more, there are libraries which can help you enforce immutability, such as Immutable.js.
I hope that I’ve convinced you that it’s worth spending some time on learning functional programming. Since you already know JavaScript and at the same time the language is pretty well suited for writing functional code, there are really no excuses not to give it a try! Stay tuned and get ready for the next article in which we will talk about dealing with arrays the functional way.
If you have any issues understanding anything in this post or if you simply would like to provide feedback, please leave a comment below. I want this course to be as good as possible and I need your help for that! If you found this post helpful, please consider sharing it on Facebook or Twitter.
I really believe that getting into public speaking gives you enormous benefits. It helps you develop as a person and ultimately become more open and confident. I'm still at the beginning of this road. Nevertheless, I'd like to share some of my thoughts and hopefully inspire some of you to start speaking in public.
As I already mentioned, I believe learning to be a better speaker can be very beneficial for you. Here is exactly what I mean: it helps you develop as a person and become more open, more confident and a better communicator.
When asked why they don't try public speaking, many of my programming friends answer: I have no interesting topic to talk about. It's a nice excuse but entirely untrue in my opinion. The topic of your talk doesn't have to be sexy/revolutionary/something-nobody-else-talked-about. I think a great topic recipe is: pick a new framework/language/approach related to your area of expertise, learn it and explain it to people. Many people don't have the time or energy to stay up to date, so by explaining the stuff to them you will be providing value.
This might sound tough for a shy person but it's not really that bad. The keyword here is meetups. Use meetup.com or, if it's not popular in your town, just check out the student groups at your university. Approach the meetup organizer and offer to give a talk on your selected topic. If it sounds scary, you can write a personal message to the organizer in which you say that you attended the meetup a couple of times, really liked it and would like to contribute (I did exactly this).
Why meetups? They will provide you with a low-risk way to get started. The audience is not too large and the people who attend meetups are usually there to learn something and not to judge your speaking skills. This was very comforting to me.
You can find a ton of (often conflicting) advice on how to prepare your presentation. You must really try out some approaches and find what works best for you. What works for me is to sit down with a pen and a piece of paper and come up with a storyboard - a series of small rectangles with images and text. They don't necessarily have to map 1-1 to slides but rather provide a skeleton for your presentation. However, I'd rather focus on things related to reducing stress.
Most importantly - rehearse! From my experience, it’s the single most important way to reduce the stress during the actual talk. You have to build up the confidence that you know what you’re talking about and that you’ll not get lost in the middle of the talk. This doesn’t mean memorizing the whole script - in fact, I would discourage it. It’s more about remembering all the points you want to touch upon during your talk.
Get some audience for your rehearsals - a friend or a spouse. I know, it will feel totally awkward the first time you do it. But trust me, it’s worth it!
Although I would never memorize the whole script, I think it can be helpful to learn the first few lines by heart. It will give you the comforting feeling that no matter what you will not fail at the start of your presentation. Your rehearsals should be as close to the actual performance as possible. Practice your posture, body language, switching slides. Everything matters.
Finally, there is a chance that you’d like to include a demo/live coding in your session. As you may suspect, the possibility of failure during a live coding session is high and therefore this element introduces a lot of stress. Consider recording and playing a screencast - it might seem lame but for most people it won’t matter. And for you, it will be far less stressful.
Arrive early and double check all of your dependencies - internet connection, slide projector connection. Definitely get a wireless clicker/presenter - you don’t want to be physically tied to your laptop.
Familiarize yourself with the microphone - your voice will sound strange to you if you’ve never used it before. If you’re not confident about your body stance, the simplest thing to do is to just stand straight and hold your hands behind your back. It will automatically make your chest more open and you feel more confident. It sounds stupid but it really works! And no body language is better than wild, distracting gesticulation.
Enjoy the Q&A session! If you have difficulty approaching people on a daily basis, now is the moment when people are approaching you. Deliver the talk and enjoy the applause at the end!
In Part 1 of this series I described the architecture of my app. Let's focus on the part related to push notifications. We will use Firebase Cloud Messaging to implement notifications. Why do we need it? As the name suggests, push notifications are pushed directly to users' devices. If we wanted to implement this ourselves, we would need to somehow figure out how to find the device, connect to it and send data to it. FCM can do this work for us. We can tell it what the message is and who to deliver it to, and it will take care of the delivery.
However, we still need something to actually send the message (i.e. to tell FCM what should be delivered and to whom). FCM refers to the piece that pushes messages as the application server. Our setup is serverless, so we will not use a single, centralized server for pushing the notifications. Instead, we will use a function-as-a-service offering called Webtask.io. Webtask lets us run a piece of JavaScript code in the cloud, without having to care about where and how it's executed. BTW, you can use Firebase Cloud Functions instead of Webtask. I decided to use Webtask because when I was working on Friendtainer, Cloud Functions were not available yet.
Friendtainer has a single webtask that’s responsible for determining which users need to be shown a reminder. It’s automatically started every 24 hours. For each user that needs to be shown a reminder, the webtask tells FCM to deliver a notification to that user. FCM finds the user’s device and sends the notification. Since the device has a service worker installed and running in the background, it handles the notification and shows a notification banner.
As I said, FCM takes care of message delivery to a specific device. However, we are responsible for pairing users and devices. Although a user can have multiple devices (especially with a PWA that can run on desktop, smartphone, tablet, etc.), we are going to assume one device per user - it simplifies things a lot. There are three pieces required to get this working: a service worker which receives messages and shows notifications, client-side code which obtains an FCM registration token and associates it with the user, and server-side code which tells FCM to send a message to a given token.
Notifications only make sense if you are able to receive them even when the application is not in the foreground. This is possible in modern web applications thanks to service workers. A service worker is a piece of JavaScript code which can run in the background, independently of the website which loaded it. Currently, service workers are used mostly for two things: offline caching of static resources and handling push notifications.
If you are using Ionic, it has already generated the service worker for you. You will find that it's not empty - Ionic sets up offline caching of all static resources for you. Let's set up handling of FCM messages in our service worker:
1 | importScripts('https://www.gstatic.com/firebasejs/3.5.2/firebase-app.js'); |
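The core of such a service worker looks roughly like this (the handler name follows the Firebase 3.x messaging API; the sender id and payload shape are placeholders):
1 | importScripts('https://www.gstatic.com/firebasejs/3.5.2/firebase-app.js'); |
2 | importScripts('https://www.gstatic.com/firebasejs/3.5.2/firebase-messaging.js'); |
3 | firebase.initializeApp({ messagingSenderId: '<your sender id>' }); |
4 | const messaging = firebase.messaging(); |
5 | // show a notification banner when a message arrives while the app is in the background |
6 | messaging.setBackgroundMessageHandler(payload => |
7 |   self.registration.showNotification(payload.data.title, { body: payload.data.body }) |
8 | ); |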
Next, we need to extend the user interface of our app to allow users to register their intent to receive push notifications. I’ve created the below service.
1 | () |
Let me explain how it works. On the high level, our application needs to call Firebase in order to get a registration token. This token identifies the target that notifications will be pushed to. By target, I mean specific browser on a specific device. If you request permissions from Firefox and Chrome both installed on the same machine, you will get two different registration tokens.
When you call enableNotifications, the library checks if the browser permits showing notifications. If this is not the case, the user will be asked whether they wish to receive notifications. The promise will succeed only if they accept. Next, we call setupOnTokenRefresh in order to handle the situation when the token is updated. Finally, we call userService.setFcmKey(). You should implement the UserService yourself; this particular method should simply store the token in some user profile object (in the database).
Basically, we need to associate users with FCM tokens because when sending a message, we can't specify the targeted user but only their FCM token. You should consider associating multiple tokens with a single user: if you want to be able to push messages to all devices owned by a user, you need to associate all of them with the user's profile. Note that we also implemented disableNotifications. This method removes the association between the current user and the FCM token.
You should consider calling this method when the user logs out of the application - otherwise, you may end up in a situation where some other user using the same device receives notifications targeted at the previous user. Lastly, remember to pass messagingSenderId when initializing Firebase in the app and add the following values to your manifest file:
1 | "gcm\_sender\_id": "103953800507", // this value is always the same |
The final missing piece is sending the notification. In Friendtainer it's the responsibility of a webtask which is triggered on a schedule. You can use any FaaS provider for that (using Firebase Cloud Functions might be a good choice since then you will stay within the Firebase ecosystem). The actual sending is pretty easy. Let's see an example.
1 | const fcmOptions = { |
We need to retrieve the association between the user and the token from the database first. In the above code, it’s stored in the userData object. Next, we build an object describing what should go into the message and who to send it to. Finally, we make a simple web request to Firebase Cloud Messaging. It will then route our message and deliver it to the desired device.
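A sketch of such a webtask-side send, using the legacy FCM HTTP endpoint (the userData shape, the fcmServerKey variable and the availability of node-fetch are my assumptions):
1 | const fetch = require('node-fetch'); // assuming a Node-style FaaS environment |
2 | function sendReminder(userData, fcmServerKey) { |
3 |   const fcmOptions = { |
4 |     to: userData.fcmToken, // the token we stored for this user |
5 |     notification: { title: 'Friendtainer', body: 'Time to reach out to a friend!' } |
6 |   }; |
7 |   return fetch('https://fcm.googleapis.com/fcm/send', { |
8 |     method: 'POST', |
9 |     headers: { 'Authorization': 'key=' + fcmServerKey, 'Content-Type': 'application/json' }, |
10 |     body: JSON.stringify(fcmOptions) |
11 |   }); |
12 | } |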
In this series I talked about building a mobile app using the latest and greatest offerings of the PWA approach. We've covered how to make a web app look like a mobile app, how to make it installable on a mobile device and how to implement web push notifications which behave exactly like native notifications. I think that given all these features the web platform is now ready to compete with native applications, and that in the long run we will see the demise of many mobile apps.
It was a great experience. I've spent a lot of time practicing, trying to apply all the knowledge I've learned from various blog posts about public speaking and tech talks (I really recommend this and this). I had a lot of support from my contracting agency, for which I'm very grateful. It was a lot of hard work but it was totally worth it. The way you feel after the presentation, when people come over to thank you for a great talk, is the best reward you can get. Besides, I feel like the whole process made me more confident and a better communicator in general.
My talk included a demo in which I built a small web application. This application allowed the audience to ask questions in real-time (using their phones) which I answered at the end of the talk. This turned out to be a funny thing to do since people were posting anonymously and some of the questions were not related to the talk at all :). However, I’ve decided to pick some and post here along with some more elaborate answers.
What are the disadvantages of serverless?
I only mentioned vendor lock-in (the fact that it's difficult to move your stuff between different service providers; this is particularly true with BaaS) while in fact there are more points to consider. This article does a great job enumerating them. In short: you give up control over the infrastructure, testing, debugging and monitoring become harder, cold starts can introduce latency, and some logic may have to be repeated across different clients.
How do you authenticate users so only data they have permission to is pulled to their instance of the client?
In Firebase, Authentication is tightly integrated with the Realtime Database. This means that you can use a special variable called auth when creating validation rules. Let's say we store contacts in the database - for each user id we store a list of contacts.
1 | { |
With the below rule we can define that an authenticated user only has access to the data under their own user id.
1 | "rules": { |
Won’t such automation push the “regular programmers” out of the market at some point?
I think that serverless might remove some of the sysops jobs. It’s effectively a form of outsourcing and companies usually outsource stuff so that they don’t need to hire people full-time.
From the developer’s perspective, I think the best thing we can do is to stay up to date and learn serverless.
Can I do server processing doing some tasks without client-side?
The author clarified that what he meant was whether it's possible to execute serverless code in ways other than making a web request from some client. It totally is - for example, Webtask.io has a scheduled execution mode in which you can define that your task should be triggered on some sort of schedule.
Cloud Functions in Firebase have an even longer list of possible triggers. For example, you can set up your function to execute when something specific happens in your Realtime Database or when some condition related to Google Analytics is fulfilled.
How about access from a non-browser client, in the context of logic duplication?
That's a totally valid point. If you have clients developed on multiple platforms (e.g. JavaScript and .NET) then you will end up with duplicated logic.
On the other hand, serverless might help you avoid duplication in some other places. For example, you can have parts of your codebase shared between the client and your FaaS functions. During the demonstration in my talk I did exactly that - I reused the config file and the domain model in the webtask.
As I mentioned in the previous post, there are a couple of requirements that a web app has to comply with in order to be called a PWA. In this post we'll focus on a few of them: native-like look and feel, home screen installation and full-screen mode.
Ionic is a web/mobile application framework based on Angular. Its primary goal is to facilitate building hybrid mobile applications. Hybrid mobile apps consist of two parts - a native wrapper and a web app. The wrapper acts as a bridge between the web app and the device, providing an API to native functionality. Although we are not going to build a hybrid app but a PWA, we can still benefit from Ionic. It contains a rich library of UI components which help us build a web app that looks like a native app. What's more, Ionic detects the platform it's running on, and the components look different on iOS, Android and Windows Phone.
Friendtainer on Android
Friendtainer on iPhone
Ionic comes with its own command line tool which you can use to generate the project skeleton as well as individual pages.
In order to get started with Ionic you need to install it first:
1 | npm install -g ionic |
Now you can use the ionic command line tool. Run this command to generate the project skeleton:
1 | ionic start myApp sidemenu |
Now you can start adding Ionic pages like this:
1 | ionic generate page Contacts |
An Ionic page is basically an Angular component with some added functionality. Every Ionic app has a root component called AppComponent which hosts the navigation and the current page. Pages form a stack where the topmost page is the one that's currently visible to the user. It's super easy to create pages using the rich component library provided by Ionic. The documentation is great, providing live examples which demonstrate the look and feel on all supported platforms. Ionic makes it very convenient to develop and test your app. Just run the following command to run it locally in development mode:
1 | ionic serve |
Once your app is ready you need to deploy it. You need to build a package that contains optimized versions of all the JavaScript files and static assets. Ionic's build process is geared towards supporting different target platforms such as iOS or Android, but this only applies to hybrid apps. In our case, we want to produce a package that has no dependency on the bridging API. For that we need to specify browser as the target platform when building the package. Before deploying your app to production, it's highly recommended to optimize it. Optimization involves bundling and minifying the JavaScript source code, running Angular AOT (Ahead Of Time) compilation and setting the app to prod mode. As a result, the optimized package is much smaller and has much better performance. In order to build the app with the optimizations, use the --prod flag:
1 | ionic build --prod browser |
The build process can take some time. Once it's done, you can find your package inside the platforms/browser directory. It's just a bunch of static files, so you can simply drop it on an HTTP server. Or use Firebase Hosting, as I did.
So far, Ionic has helped us provide a native-like experience inside the app. How about some elements external to the app? Specifically, I mean home screen integration and full-screen mode. The web app manifest specification allows us to control these aspects of the user experience. The manifest file is basically a JSON file containing some metadata about your application. We can use this file to specify the name of the app, the icon to be used on the home screen and whether it should run full screen or not. Once we provide all this information, some modern browsers (including Chrome) will display a prompt suggesting that users visiting your app add its icon to their home screen.
Chrome home screen prompt
Ionic apps created using the CLI tool already include a manifest file, so you just need to modify it. You can find it under the following path: src/manifest.json. Below you can find the manifest file for Friendtainer. As you can see, it contains the app title, references to the icon image in several sizes and some info about colors.
1 | { |
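For reference, such a manifest might look roughly like this (the icon path and colors below are placeholders, not the actual Friendtainer values):
1 | { |
2 |   "name": "Friendtainer", |
3 |   "short_name": "Friendtainer", |
4 |   "start_url": "index.html", |
5 |   "display": "standalone", |
6 |   "icons": [ |
7 |     { "src": "assets/icon/icon-192.png", "sizes": "192x192", "type": "image/png" } |
8 |   ], |
9 |   "background_color": "#488aff", |
10 |   "theme_color": "#488aff" |
11 | } |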
Full-screen mode is achieved by setting the display option to standalone. Android phones will also provide a splash screen for your app based on the settings found in the manifest file. The splash screen is composed of the app title and the app image. Background and theme colors are also used when showing the splash screen.
Splashscreen based on the above manifest
The remaining two settings, starting with the gcm prefix, are related to Google Cloud Messaging. I will describe them in more detail once we discuss push notifications. To be continued…
I decided to create the app as a Progressive Web Application (PWA). What is a PWA? It's a new term coined by Google which basically means a web application which provides a native-like experience on mobile devices. There are a few indicators of whether a web app is a PWA: it looks and feels like a native app, it can be added to the home screen and launched in full-screen mode, it works offline thanks to a service worker, it supports push notifications and it's served over HTTPS.
What are the advantages of a PWA over a native mobile app? Discoverability! You don't need to put your app in the App Store or the Play Store. All your users need to do is go to a specific URL. Besides, you can use your web skills and don't need to learn platform-specific technologies such as Swift or the Android framework.
As you can see, there are quite a few requirements that the app should satisfy. Let’s take them one by one.
I wanted to develop my app fast and not spend time setting up infrastructure, so I decided to go for a serverless architecture with the Firebase Database instead of a dedicated backend. The Firebase Database is a NoSQL, real-time database that web applications can connect to directly. It is tightly integrated with Firebase Authentication and has a powerful validation rule engine. Therefore, it's possible to avoid having a separate application server.
The app will consist of the following components: an Ionic (Angular) web app, the Firebase Database together with Firebase Authentication, and a scheduled webtask responsible for sending reminders through Firebase Cloud Messaging.
If you are not that familiar with Firebase or Webtasks, take a look at my previous post where I talk more about it.
Check out Part 2: Ionic 2 and app manifest
Below is the code that caused issues. It is an interface declaration with two overloads of a single Get method.
1 | public interface IRepository<T> where T : class |
The code itself was fine. However, if we try to call it like this:
1 | repository.Get("some id"); |
strange things will happen. Under VS2013 the code will compile without issues. However, under VS2017 it will cause a compile error:
The call is ambiguous between the following methods or properties: IRepository<T>.Get(object, params Expression<Func<T, object>>[])
and IRepository<T>.Get(object, params string[])
Hmm, this totally makes sense. How is the compiler supposed to know which overload I mean? The solution is pretty simple - either don’t use method overloading here or provide a third overload that takes no parameter list.
1 | T Get(object id); |
I started to wonder which language feature introduced in C# 6.0 or C# 7.0 is responsible for this change of behaviour. After spending some time on fruitless thinking, I decided to ask a question on StackOverflow. Lasse in his elaborate answer enlightened me that this is not strictly a change introduced by one of the new language features but rather a stricter behaviour introduced by the Roslyn compiler which is shipped with Visual Studio starting from version 2015. I have later found this stated explicitly in Roslyn documentation.
I decided to solve the issue by adding a third method overload taking only the id parameter. In its implementation I picked one of the existing overloads randomly and called it with an empty parameter list:
1 | public T Get(object id) |
How surprised I was to find out that some of our unit tests started to fail. After another couple of hours, I found the reason. It turned out that one of the overloads of the Get method behaved differently with an empty parameter list (the first one would load all includes for an entity when given an empty list while the second one would load none).
This was the moment when I realized how dangerous the before-Roslyn behaviour was. Given a call that was in fact ambiguous, the compiler would choose one of the overloads in a way that is by no means clear or intuitive. If by luck it chose the same overload that you meant (as happened in our case), you were relying on a subtle, undocumented implementation detail of the compiler. The whole story taught me to be more careful when dealing with method overloading. The algorithm used for overload resolution is actually pretty complex and implements lots of rules (as exemplified here or in Jon Skeet's C# in Depth book). Always make sure that it's absolutely clear (both to you and the readers of your code) which method overload you mean.
]]>I proudly announce that my blog is partnering up with JS Poland 2017!
JS Poland is a promising conference where you will learn about the current state of ES6, Angular, React, Redux, TypeScript and many more.
The conference features 15 great speakers such as Luca Mezzalira (Google Developer Expert, running the London JavaScript Community), Nir Kaufman (whose talk at NgPoland got me into Redux) and Gil Fink (a Microsoft Most Valuable Professional).
There will also be 3 days of workshops. JS Poland will take place June 19 in Warsaw. If you aren’t from Poland, it’s an exciting opportunity to visit my beautiful country! The conference is planned to host 800 guests from all over the world!
As part of my partnership with JS Poland, I'm delighted to offer you discount codes and one free ticket. To get the 10% discount, register here and use the following code: codewithstyle. The code is valid until 2017-04-15 23:59.
Want to get the free ticket worth over 100 EUR? Share this post on Twitter and Facebook. The person with the highest count of likes under their share by the end of May will get a free ticket!
JS Poland is organized by my friend, Dariusz Kalbarczyk, who has already successfully organized the NgPoland 2016 conference, which gathered more than 700 developers from all over the world. See you at JS Poland!
]]>Code coverage is a metric which indicates what percentage of your source code is covered by your tests. It is certainly a good idea to have code coverage reports generated as part of Continuous Integration - it allows you to keep track of the quality of your tests or even to require a certain coverage for your builds. Code coverage in Visual Studio is only available in the Enterprise edition. Fortunately, thanks to OpenCover you can still generate coverage reports even if you don't have access to an Enterprise license. In this article I will show you how to configure a Build Definition on Team Foundation Server 2015/2017 to use OpenCover to produce code coverage reports.
UPDATE: The full script is available here.
UPDATE 2: Christian Klutz has created a VSTS task based on this article. You may want to check it out. Unfortunately, I won’t be able to offer any help on the topic since I haven’t been using TFS for some time.
We are going to put some files on TFS. We will need:
The last three items are available as NuGet packages. I suggest organizing all these files into the following directory structure:
1 | BuildTools |
Once done, check it in to your TFS instance. I’ve put the BuildTools directory on the top level of the repository. Next, I’ve added a mapping to my Build Definition in order to make that directory available during the build.
Let’s now write the PowerShell script. The script is going to perform a couple of steps:
The script will take a couple of parameters as input:
1 | Param( |
Next, let’s run the Find-Files utility to search against the pattern defined in $testAssembly. This code is copied from the original Run Visual Studio Tests task source code.
1 | \# load helper functions from the vsts-task-lib library |
We can finally run OpenCover. The command to do this is pretty complicated. OpenCover supports different test runners (VSTest being only one of them) so we need to specify the path to VSTest as one of the arguments. The path below (%VS140COMNTOOLS%..\IDE\CommonExtensions\Microsoft\TestWindow\vstest.console.exe) is valid for a Visual Studio 2015 installation. Another important argument is -mergebyhash. It forces OpenCover to treat assemblies with the same hash as one. I've spent a few hours figuring out why my coverage score was so low. It turned out that OpenCover analyzed a few copies of the same assembly.
1 | Start-Process "$PSScriptRoot\\..\\Packages\\OpenCover.4.6.519\\OpenCover.Console.exe" -wait -NoNewWindow -ArgumentList "-register:user -filter:""$OpenCoverFilters"" -target:""%VS140COMNTOOLS%\\..\\IDE\\CommonExtensions\\Microsoft\\TestWindow\\vstest.console.exe"" -targetargs:""$testFilesString /TestCaseFilter:$testFiltercriteria /logger:trx"" -output:OpenCover.xml -mergebyhash" -WorkingDirectory $PSScriptRoot |
Next, let’s convert the results generated by OpenCover to Cobertura format.
1 | Start-Process "$PSScriptRoot\\..\\Packages\\OpenCoverToCoberturaConverter.0.2.6.0\\tools\\OpenCoverToCoberturaConverter.exe" -Wait -NoNewWindow -ArgumentList "-input:""$PSScriptRoot\\OpenCover.xml"" -output:""$PSScriptRoot\\Cobertura.xml"" -sources:""$sourcesDirectory""" |
Finally, we will generate a HTML report based on the results from OpenCover.
1 | Start-Process "$PSScriptRoot\\..\\Packages\\ReportGenerator.2.5.6\\tools\\ReportGenerator.exe" -Wait -NoNewWindow -ArgumentList "-reports:""$PSScriptRoot\\OpenCover.xml"" -targetdir:""$PSScriptRoot\\CoverageReport""" |
And that’s it.
We will need to add three build steps to our Build Definition. If you have a Visual Studio Tests task in it, remove it - you will no longer need it.
1 | -sourcesDirectory "$(Build.SourcesDirectory)" -testAssembly "**\\*.Tests.dll;-:**\\obj\\**" -testFiltercriteria "TestCategory!=INTEGRATION" |
And that’s it! Run the build definition and enjoy your code coverage results. You can find the on the build summary page. The HTML report is available as one of the build artifacts.
]]>LINQ is a technology introduced in C# 3.0 and .NET 3.5. One of its major applications is processing collections in an elegant, declarative way. Here’s an example of LINQ’s select expression:
1 | var numbers = new[] { 1, 2, 3, 4, 5 }; |
Query expressions are one of the language features which constitute LINQ. Thanks to them, LINQ expressions can be written in a way that resembles SQL:
1 | var squares = from x in numbers select x * x; |
Before LINQ you would need to write a horrible, imperative loop which iterates over the numbers array and appends the results to a new array.
It’s pretty easy to understand what select expression does in the above example: it apples a given expression to each element of a collection and produces a collection containing the results. Let’s now imagine that instead of arbitrary collection, we are working with a special kind of collection - one that can have either one element or no elements at all. In other words, it’s either empty, or full. How should select expression act on such a collection? Exactly the same way that it works with regular collections. If our collection has one element than apply the given expression to it and return a new collection with the result. If the collection is empty, just return an empty collection. Note that such a special collection is actually quite interesting - it represents an object that either has a value or is empty. Let’s create such an object and call it Maybe.
1 | public class Maybe<TValue> |
Let’s create two factory methods to allow more convenient creation of instances of Maybe.
1 | public static class MaybeFactory |
Thanks to type inference in generic method calls and the static using feature we can now simply write:
1 | var some = Some(10); |
Since we’ve already discussed how select would work on Maybe, let’s implement it! Adding support for query expressions to your custom types is surprisingly easy. You just need to define a method which confirms to a specific signature (it’s an interesting design decision by C# creators which allows more flexibility than requiring the type to implement a specific interface).
1 | public Maybe<TResult> Select<TResult>(Func<TValue, TResult> mapperExpression) |
What’s going on here? Firstly, let’s take a look at the signature. Our method takes a function which transforms the value contained by Maybe to another type. It returns an instance of Maybe containing an instance of the result type. If it’s confusing, just replace Maybe with List or IEnumerable. _It makes perfect sense to write a _select expression which transforms a list of ints to a list of strings. It works the same way with our Maybe type. Now, the implementation. There are two cases:
Let’s give it a try:
1 | Maybe<int> age = Some(27); |
Nice! We can now use select expressions with the Maybe type.
Let’s now imagine that given an employee’s id, our goal is to return the name of theirs supervisor’s supervisor. A person can but does not have to have a supervisor. We are given a repository class with the following method:
1 | public Person GetPersonById(Guid id) { ... } |
And a Person class:
1 | class Person |
In order to find the person’s supervisor’s supervisor’s name we would need to write a series of if statements:
1 | public static string GetSupervisorSupervisorName(Person employee) |
Can we improve this code with our new Maybe type? Of course we can! First of all, since Maybe represents a value which may or may not exist, it seems reasonable for GetPersonById to return Maybe<Person> instead of Person.
1 | public static Maybe<Person> GetPersonById(Guid id) |
Next, let’s modify the Person class. Since a person can either have or not have a supervisor, it’s again a good fit for the Maybe type:
1 | class MonadicPerson |
Given these modifications we can now rewrite GetSupervisorSupervisorName in a neater and more elegant way:
1 | public static Maybe<string> GetSupervisorName(Maybe<MonadicPerson> maybeEmployee) |
Why is this better than the previous version? First of all, we explicitly represent the fact that given a person, the method might or might not return a valid result. Previously, the method always returned a string. There was no way to indicate that it can sometimes return null (apart from a comment). A user of such a method could forget to perform a null check and in consequence be surprised by a runtime error. What's more, we avoid the nesting of if statements. In this example we only go two levels deep. What if there were 5 levels? Code without these nested if statements is much cleaner and more readable. It focuses on the actual logic, not on the boilerplate of null-checking.
If you’re copying these snippets to Visual Studio, you might have noticed that the last one won’t compile. By implementing Select we told the compiler how to apply functions to values inside Maybe instances. However, here we have a slightly more complex situation. We take a value which sits inside a Maybe instance and apply a function to it. As a result we get another Maybe instance, so now we have a Maybe inside a Maybe. The compiler doesn’t know how to handle this situation and we need to tell it by implementing SelectMany.
1 | public Maybe<TResult> SelectMany<TIntermediate, TResult>( |
The first parameter to SelectMany is a function which takes a value (which sits inside Maybe) and returns a new Maybe. In our example, that would be a function which takes a Person and returns its ReportsTo property. The second parameter is a function which takes the original value and the value sitting inside the Maybe returned by the first parameter, and transforms both into a result. In our case that would be a function which takes a Person and returns its Name. Inside the implementation we have the nested if statements that we had to write when we didn't use the Maybe type. And this is the crucial idea about monads - they help you hide ugly boilerplate code and let the developer focus on the actual logic.
Monad is any generic type which implements SelectMany (strictly speaking, this is far from a formal definition, but I think it's sufficient in this context and captures the core idea). SelectMany is a slightly more general version of an operation which in the functional programming world is referred to as bind. Monadic types are like wrappers around some values. Binding monads is all about composing them. By wrapping and unwrapping the values inside monads, we can perform additional operations (such as handling empty results in our case) and hide them away from the user. Maybe is a classic example of a monad. Another great candidate for a monad is C#'s Task<T> type. You can think of it as a type that wraps some value (the one that will be returned when the task completes). By combining tasks you describe that one task should be executed after the other finishes.
I hope this article helped you understand what monads are about. If you find this interesting, check out the F# programming language where monads are much more common and feel more natural. Check out this excellent resource about F#: https://fsharpforfunandprofit.com/. It’s also worth mentioning that there exists an interesting C# library which exploits the concepts I described in this article: https://github.com/louthy/csharp-monad. Check it out if you’re interested.
]]>Side note: Firebase Authentication can be very useful when building a serverless application. For reference, here is a working example illustrating this article: https://github.com/miloszpp/angularfire-sdk-auth-sample.
Firebase offers two ways of implementing authentication:
We’ll implement three components:
We’ll use the excellent Angularfire2 library. It provides an Angular-friendly abstraction layer over Firebase. Additionally, it exposes authentication state as an observable, making it very easy for other components to subscribe to events such as login and logout.
To begin with, let’s install Angularfire2 and Firebase modules:
npm install firebase angularfire2
Next, we need to enable email/password authentication method in the Firebase console.
Finally, let’s load Angularfire2 in our app.module.ts:
1 | import { AuthProviders, AuthMethods } from 'angularfire2'; |
Firstly, let’s inject AngularFire into the component:
1 | public model: LoginModel; |
As you can see, this.af.auth is an observable. It fires whenever an event related to authentication occurs. In our case, it's either logging in or logging out. FirebaseAuthState stores information about the currently logged-in user. Next, let's add two methods for logging in and logging out:
1 | public submit() { |
As you can see, we simply propagate calls to the Angularfire2 API. When logging in, we need to provide email and password (encapsulated in model). Finally, we need some HTML to display the form:
1 | <form (ngSubmit)="submit()" *ngIf="!authState"> |
The form is only visible when the user is not logged in (authState will be undefined). Otherwise, we show the user name and the logout button.
We’ve allowed our users to logged in but so far there are no registered users! Let’s fix that and create a registration component. Firstly, we need to inject the AngularFire service just like we did in the login controller. Next, let’s create a method to be called when the user provides his registration details:
1 | public submit() { |
Finally, here is the HTML form:
1 | <form (ngSubmit)="submit()"> |
In this tutorial I showed you how to take advantage of Firebase Authentication and use it in an Angular 2 application. This example doesn’t exploit the full potential of Firebase Authentication - it can also do email verification (with actual email sending and customizable email templates), password recovery and logging in with social accounts (Facebook, Twitter, etc.). I will touch on these topics in the following articles. Let me know if you have any feedback regarding this post - feel free to post a comment!
]]>I just finished reading this must-read book for C# developers. I believe that it's very easy to learn a programming language to an extent that is sufficient for creating software. Because of that, one can easily lose motivation to dig deeper and gain a better understanding of the language. C# in Depth is proof of why one shouldn't stop at this point. There is a lot to learn by looking at the details of a language, how it evolved and how some of its features are implemented. I think the book is fantastic. I loved the author's writing style, which is very precise (very little hand waving) but not boring at the same time. It feels like he's giving you just the right amount of detail. Here are a couple of interesting things I learned about when reading the book. The list is by no means complete but it gives a taste of what's in the book.
To sum up, I totally recommend reading this book. It’s not a small time investment, but I think it’s totally worth it.
]]>We’re going to build (guess what?) a TODO app. The client (in Angular 2) will be calling a Webtask whenever an event occurs (task created or task marked as done). The Webtask will update the data in the Firebase Database which will be then synchronized to the client. Webtask is function-as-a-service offering which allows you to run pieces of code on demand, without having to worry about infrastructure, servers, etc. - i.e. serverless.
The full source code is available on Github.
UPDATE: recently I gave a talk on this topic during the #11 AngularJS Warsaw meetup. During the talk I built a slightly different demo application which additionally performs spell checking in the webtask. Check out the Github repo for the source code.
Let’s start with a very simple client in Angular 2. We will use Angular CLI to scaffold most of the code.
1 | npm install -g angular-cli |
It takes a while for this command to run and it will install much more stuff than we need, but it’s still the quickest and most convenient way to go. Let’s create a single component.
1 | cd serverless-todo |
Now, let’s create the following directory structure. We’d like to share some code between the client and the webtask so we will put it in common directory.
1 | src |
Let’s start with defining the following interfaces inside model.ts. The first one is a command that will be sent from the client to the webtask. The second one is the entity representing an item on the list that will be stored in the database.
1 | export interface AddTaskCommand { |
Finally, remember to add the Tasks component to app.component.html:
1 | <app-tasks></app-tasks> |
Before we proceed, you need to create a Firebase account. Firebase is a cloud platform which provides useful services for developing and deploying web and mobile applications. We will focus on one particular aspect of Firebase - the Realtime Database. The Realtime Database is a No-SQL storage mechanism which supports automatic synchronization of clients. In other words, when one of the clients modifies a record in the database, all other clients will see the change (almost in real time). Once you've created the account, let's modify the database access rules. By default, the database only allows authenticated users. We will change it to allow anonymous reads. You can find the Rules tab once you click on the Database menu item.
1 | { |
Firebase provides a generous limit in the free Spark subscription. Create an account and define a new application. Once you are done, put the following definition in config.ts:
1 | export const config = { |
If you cannot find your settings, here is a helper for you. If you are really lazy, you can use the following settings, although I cannot guarantee any availability. Let’s now add Firebase to our client. There is an excellent library called AngularFire2 which we are going to use. Run the following commands:
1 | npm install --save firebase |
Modify the imports section of AppModule inside app.module.ts so that it looks like this (you can import AngularFireModule from angularfire2 module):
1 | imports: [ |
Now you can inject the AngularFire object into the Tasks component (tasks.component.ts):
1 | public tasks: FirebaseListObservable<Task[]>; |
You will also need some HTML to display tasks. I will include the form for adding tasks as well (tasks.component.html):
1 | <h1>TODO list</h1> |
Our client is ready to display tasks, however there are no tasks in the database yet. Note how we can bind directly to FirebaseListObservable - Firebase will take care of all the updates for us.
Now we need to create the Webtask responsible for adding tasks to the list. Before we continue, please create an account on webtask.io. Again, you can use it for free for the purposes of this tutorial. The website will ask you to run the following commands:
1 | npm install wt-cli -g |
Creating Webtasks is amazingly easy. You just need to define a function which takes an HTTP context and a callback to execute when the job is done. Paste the following code inside webtasks/add-task.ts:
1 | import { config } from '../common/config'; |
The above snippet parses the request body (note how it uses the same AddTaskCommand interface as the client). Later, it creates a Task object and calls Firebase via the REST API to add the object to the collection. You could use the Firebase JavaScript client instead of calling the REST API directly, however I couldn't get it working in the Webtask environment. Obviously, in a production app you would perform validation here. Note that you need to define the firebaseSecret constant. You can find the private API key here:
Firebase complains that this is a legacy method but it’s simply the quickest way to do that. Why do we need to pass the secret now? That’s because we defined a database access rule which says that anonymous writes are not permitted. Using the secret key allows us to bypass the rule. Obviously, in a production app you would use some proper authentication. We are ready to deploy the Webtask. A Webtask has to be a single JavaScript file. Ours is TypeScript and it depends on many other modules. Fortunately, Webtask.io provides a bundler which can do the hard work for us. Install it with the following command:
1 | npm i -g webtask-bundle |
Now we can compile the TypeScript code to JavaScript, then run the bundler to create a single file and then deploy it using the Webtask CLI:
1 | tsc add-task.ts && \ |
Voila, the Webtask is now in the wild. The CLI will tell you its URL. Copy it and paste it inside config.ts:
1 | export const config = { |
There is just one missing part - we need to call the Webtask from the client. Go to the Tasks component and add the below method:
1 | addTask(content: string) { |
This function is already wired up in the HTML. Now, run the following command in the console and enjoy the result!
1 | ng serve |
In this short tutorial I showed how to quickly build a serverless web application using Webtasks. Honestly, you could achieve the same result without the Webtasks, by talking directly to Firebase from the client. However, having this additional layer allows you to perform complex validation or calculations. See Tradux for a nice example of a complex Webtask. You can also very easily use Firebase to deploy your app.
]]>The key takeaway for me is to definitely look into Redux (the presentation by Nir Kaufman). The framework introduces a great idea from functional programming to the frontend world. Redux allows you to express your application's logic as a set of reducer functions which are applied to the global, immutable state object in order to produce the "next version" of the state. Thanks to that, it's much easier to control and predict state transitions. The similarity to the State monad seems obvious.
Another very interesting point was the presentation by Acaisoft's founder, who showed a live demo of an online quiz app with real-time statistics. The application was implemented in Angular 2 with a serverless architecture (AWS IoT on the backend side), event sourcing and WebSockets. It was exciting to observe a live chart presenting the aggregated answers of 250 conference participants who connected with their mobiles.
Definitely the most spectacular talk was the one about using Angular to control hardware connected to a Raspberry Pi device (by Uri Shaked)! The guy built a physical Simon game that was controlled by an Angular app. Thanks to angular-iot he was able to bind LED lights to his model class. The idea sounds crazy but it's a really convincing demonstration that Angular can be useful outside of the browser. If you are interested, you can read more here.
Last but not least, I have to mention the workshop about TypeScript 2 (again by Uri) which I attended the day before. Although I knew TypeScript before, it was interesting to learn about the new features such as null strictness and async/await. Coming from a C# background, it's very easy to spot the Microsoft touch in TypeScript. I believe the language is evolving in the right direction and I'm happy to see more and more ideas from functional programming being incorporated in other areas.
Wrapping up, I think the conference was very convincing at demonstrating how much is happening around frontend development. I like the general direction in which it's evolving and I hope that I will have many opportunities to work with all the new stuff.
]]>Recently I decided to get into the habit of reading IT books regularly. To start with, I wanted to read something about building scalable architectures. I did a quick search on Amazon and chose Scalability Rules: 50 Principles for Scaling Web Sites by Martin L. Abbott and Michael T. Fisher.
Based on comments and reviews, it was supposed to be more on the technical side. I was slightly disappointed in this aspect. However, I think this is still a worthy read. The book is divided into 13 chapters. Each of the chapters contains several rules. What struck me is that these rules are very diverse.
We’ve got some very, very general advice that could be applied to any kind of software development (e.g. Don’t overengineer, Learn aggressively, Be competent). We’ve got stuff for CTOs or IT directors in large corporations (e.g. Have at least 3 data centers, Don’t rely on QA to find mistakes). There are also some specific, technical rules - what I was after in the first place. I’m not convinced mixing these very different kinds of knowledge makes sense since they are probably targeted to different audiences (which is even acknowledge by the authors in the first chapter).
Some of the rules felt like formalized common sense, backed with some war stories from the authors' experience (e.g. the AKF Scale Cube). However, some of the stuff was indeed new to me. It was also interesting to see the bigger picture and the business side of things (potential business impact of failures, emphasis on the costs of different solutions, etc.).
I think the book is a great choice if you are the CTO of a SaaS startup or a freshly promoted Architect without prior experience of building scalable apps (having the experience would probably teach you much more than the book). If you are a Developer who wants some very specific, technical advice, then the book will serve well as an overview of topics that you should learn more deeply from other sources (such as database replication, caching, load balancing, alternative storage systems). Nevertheless, I think the book is a worthy read that will broaden your perspective.
]]>Slick is a Functional Relational Mapper. You might be familiar with Object Relational Mappers such as Hibernate. Slick embraces Scala's functional elements and offers an alternative. Slick's authors claim that the gap between relational data and functional programming is much smaller than the gap between relational data and object-oriented programming. Slick allows you to write type-safe, SQL-like queries in Scala which are translated into SQL. You define mappings which translate query results into your domain classes (and the other way around for INSERT and UPDATE). Writing plain SQL is also allowed.
1 | // Example taken from docs |
Anorm is a thin layer providing database access. It is in a way similar to Spring’s JDBC templates. In Anorm you write queries in plain SQL. You can define your own row parsers which translate query result into your domain classes. Anorm provides a set of handy macros for generating parsers. Additionally, it offers protection against SQL injection with prepared statements. Anorm authors claim that SQL is the best DSL for accessing relational database and introducing another one is a mistake.
1 | // example taken from docs |
As mentioned, Slick's API is non-blocking. Slick queries return instances of the DBIO monad which can later be transformed into a Future. There are many benefits to a non-blocking API, such as improved resilience under load. However, you will not notice these benefits unless your web application is handling thousands of concurrent connections. Anorm, as a really thin layer, does not offer a non-blocking API.
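For illustration, here is a minimal sketch of that flow in recent Slick versions (the db instance and the people table query are assumed to be configured elsewhere):

```scala
import scala.concurrent.Future
import slick.jdbc.PostgresProfile.api._

// `people` and `db` are assumed to be defined elsewhere (schema + database config).
val action: DBIO[Seq[String]] = people.map(_.name).result // a pure description of the query
val names: Future[Seq[String]] = db.run(action)           // executes asynchronously
```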
Slick’s DSL is very expressive but it will always be less than plain SQL. Anorm’s authors seem to have a point that re-inventing SQL is not easy. Some non-trivial queries are difficult to express and at times you will miss SQL. Obviously, you can always use the plain SQL API in Slick but what’s the point of query type safety if not all of your queries are under control? Anorm is as expressive as plain SQL. However, passing more exotic query parameters (such as arrays or UUID s) might require spending some time on reading the docs.
One of huge strengths of Slick is query composability. Suppose you had two very similar queries:
1 | SELECT name, age, occupation, c.country |
In Slick, it’s very easy to abstract the common part into a query. In Anorm, all you can do is textual composition which can get really messy.
In Slick you can define two-way mappings between your types and SQL. Therefore, INSERTs are as simple as:
1 | authors += author |
In Anorm you need to write your INSERTs and UPDATEs by hand, which is usually a tedious and error-prone task.
Another important feature of Slick is query type safety. It's invaluable when performing changes to your data model. The compiler will always make sure that you won't miss any query. In Anorm, nothing will help you detect typos or missing fields in your SQL, which will usually make you want to write unit tests for your data access layer.
Slick seems to be a great library packed with very useful features. Additionally, it will most likely save your ass if you need to perform many changes to your data model. However, my point is that it comes at a cost - writing Slick queries is not trivial and the learning curve is quite steep. And you risk that the query you have in mind is not expressible in Slick. An interesting alternative is to use Slick's plain SQL API - it gives you some of the benefits (e.g. the non-blocking API) but without sacrificing expressiveness. As always, it's a matter of choosing the right tool for the purpose. I hope this article will help you weigh all the arguments.
]]>First of all, your project needs to be buildable with SBT. This can be achieved simply - any project that follows the standard structure can be built with SBT. Additionally, we are going to need a build.sbt file with the following contents at the top level:
1 | lazy val root = (project in file(".")). |
Note that we are using Scala version 2.10 despite the fact that, at the time of writing, 2.11 is available. That's because SBT 0.13 is built against Scala 2.10. You need to make sure that you are using matching versions, otherwise you might get compile errors.
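For reference, a minimal build.sbt for such a plugin project might look like the sketch below (the organization, name and version values mirror the ones used later in this post; sbtPlugin := true is what marks the project as a plugin):

```scala
lazy val root = (project in file("."))
  .settings(
    sbtPlugin := true,                   // marks this project as an SBT plugin
    name := "scala-ts",
    organization := "com.github.miloszpp",
    version := "0.2.0",
    scalaVersion := "2.10.6"             // SBT 0.13 plugins must target Scala 2.10
  )
```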
Our example plugin is going to add a new command to SBT. Firstly, let’s add the following imports:
1 | import sbt.Keys._ |
Next, we need to extend the AutoPlugin class. Inside that class we need to create a nested object called autoImport. All SBT keys defined inside this object will be automatically imported into projects using this plugin. In our example we are defining a key for an input task - a way to define an SBT command that can accept command-line arguments.
1 | object MySBTPlugin extends AutoPlugin { |
Now we need to add an implementation for this task:
1 | override lazy val projectSettings = Seq( |
And that’s it.
SBT lets us test our plugins locally very easily. Run the following command:
1 | sbt compile publishLocal |
Now we need an example project that will use our plugin. Let’s create an empty project with the following directory structure:
1 | src |
Inside plugins.sbt , let’s put the following code:
1 | addSbtPlugin("com.github.miloszpp" % "scala-ts" % "0.2.0") |
Note that this information needs to match the organization, name and version defined in your plugin. Next, add the following lines to build.sbt:
1 | import sbt.Keys._ |
Make sure that you use the fully qualified name of the plugin object. You can use a Scala version other than 2.10 in the consumer project. Now you can test your plugin. Run the following command:
sbt "hello Milosz"
Note the use of quotes - you are passing the whole command, along with its parameters to SBT.
If you would like to make your plugin available to other users, you can use OSS Repository Hosting. They host a public Maven repository for open source projects. Packages in this repository are automatically available to SBT users, without further configuration. The whole procedure is well described here. One of the caveats for me was to change the organization property to com.github.miloszpp (I host my project on GitHub). You can't just use any string here because you need to own the domain - if you don't own one, you can use the GitHub prefix.
]]>Currently we are using TypeScript for writing the frontend part of a web application which communicates with a backend in Scala. The backend part exposes a REST API.
One of the drawbacks of such a design is the need to write Data Transfer Object definitions for both the backend and the frontend and to make sure that they match each other (in terms of JSON serialization). In other words, you need to define the types of objects being transferred between backend and frontend in both Scala and TypeScript.
Since this is a rather tedious job, I came up with an idea to write a simple code generation tool that can produce TypeScript class definitions based on Scala case classes.
I’ve put the project on Github. It’s also available via SBT and Maven. Here is the link to the project: https://github.com/miloszpp/scala-ts
]]>Debugging exceptions in asynchronous programs is a pain. When issuing an asynchronous IO operation you provide a callback that should be executed when the operation returns. In most implementations, this callback might be executed on any thread (not necessarily the thread that invoked the operation). Since the call stack is local to the thread, the stacktrace that you get when handling an exception is not very informative. It will not trace back to the servlet, so you may have a hard time figuring out what actually happened.
1 | Exception in thread "main" java.lang.IllegalArgumentException |
Some libraries use a mechanism called ThreadLocal variables (available in Java and C#, in Scala known as DynamicVariable). By definition, these libraries do not work well with asynchronous code, for the same reason that we get poor stacktraces. I have already discussed one such situation on my blog. Another one is Mapped Diagnostic Context from the Logback framework. MDC is a nice mechanism that allows you to attach additional information to your logs. Since the information is contextual, it will be available even in logs written from within external libraries. However, as one might expect, MDC is implemented with thread-local variables. Therefore, it doesn't work well with Scala's futures. There is a way to get MDC working with Futures by writing a custom ExecutionContext (Scala's thread pool abstraction) that is aware of contextual data and propagates it across threads, as sketched below.
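A sketch of such a context-propagating ExecutionContext might look like this (assuming SLF4J/Logback's MDC is on the classpath):

```scala
import org.slf4j.MDC
import scala.concurrent.ExecutionContext

// Wraps any ExecutionContext and carries the submitting thread's MDC along with each task.
class MdcExecutionContext(underlying: ExecutionContext) extends ExecutionContext {
  override def execute(task: Runnable): Unit = {
    val context = MDC.getCopyOfContextMap // capture MDC on the submitting thread (may be null)
    underlying.execute(new Runnable {
      override def run(): Unit = {
        if (context != null) MDC.setContextMap(context) else MDC.clear()
        try task.run()
        finally MDC.clear() // don't leak context to the next task on this pooled thread
      }
    })
  }
  override def reportFailure(cause: Throwable): Unit = underlying.reportFailure(cause)
}
```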
Unless you are very careful, it is quite easy to not wait for a Future to complete but instead to fork execution into two branches. When an exception is thrown in a Future that nobody is waiting for, it will most likely just go unnoticed.
1 | def postData(url: String, data: String): Future[Unit] = // ... |
The above code will compile. However, saveToDb will most likely be called before postData returns, since execution has been forked. Any exception thrown inside postData will most likely be missed. The correct way to write the above code would be:
1 | postData("http://example.com", "message") flatMap { _ => |
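Spelled out in full, a sketch of the sequenced version (the saveToDb signature is an assumption mirroring the snippet above):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def postData(url: String, data: String): Future[Unit] = ??? // as declared above
def saveToDb(data: String): Future[Unit] = ???               // assumed signature

def postAndSave(data: String): Future[Unit] =
  postData("http://example.com", data).flatMap { _ =>
    saveToDb(data) // runs only after postData has completed; its failures are not lost
  }
```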
Caching gets more complicated in an asynchronous web application, unless the library you use for caching is designed to work with async code. One of the most common patterns in caching libraries is to let you provide a function that should be executed when a value in the cache is missing. See the below example of Guava Cache:
1 | cache.get(key, new Callable<Value>() { |
If doThingsTheHardWay returned a Future (was asynchronous) then you would have to block the thread and wait for the result. Mixing blocking and non-blocking code is generally discouraged and may lead to undesirable situations such as deadlocks.
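To make the mismatch concrete, here is a sketch of what you would be forced to write if the loader were asynchronous (loadValueAsync is hypothetical):

```scala
import java.util.concurrent.Callable
import com.google.common.cache.Cache
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

def loadValueAsync(key: String): Future[String] = ??? // hypothetical asynchronous lookup

def getCached(cache: Cache[String, String], key: String): String =
  cache.get(key, new Callable[String] {
    // The synchronous Callable forces us to block on the Future -
    // exactly the kind of waiting that asynchronous code tries to avoid.
    override def call(): String = Await.result(loadValueAsync(key), 5.seconds)
  })
```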
Asynchronous code adds complexity. In Scala, you need to use all sorts of Future combinators such as flatMap, map or Future.sequence in order to get your code to compile. The issue is partially addressed by async/await language extensions/macros (available for example in Scala and C#) but it can still make your code less readable and harder to reason about.
]]>The main goal of Entity Framework is to map an object graph to a relational database. Tables are mapped to classes. Relationships between tables are represented with navigation properties. The above example will be mapped to the following classes:
1 | public partial class Article |
The highlighted lines declare navigation properties. Thanks to navigation properties, it’s very convenient to access details of Article’s Author. However, it comes at a cost. Imagine the following code in the view:
1 | <table> |
Assuming that ViewBag.Articles is loaded with the below method, this code might turn out to be very slow.
1 | public List<Article> GetArticlesOlderThan(DateTime dateTime) |
Unfortunately, it will fire a separate SQL query to the database server for each element of the Articles collection. This is highly suboptimal and might result in long loading times.
The reason behind this behaviour is the default setting of Entity Framework which tells it to load navigation properties on demand. This is called lazy loading. One can easily overcome this problem by enabling eager loading:
1 | return context |
Eager loading will cause EF to pre-load all Authors for all selected Articles (effectively performing a join). This might work for simple use cases. But imagine that Author has 50 columns and you are only interested in one of them. Or that Author is a superclass of a huge class hierarchy modelled as table-per-type. Then the query built by EF would become unnecessarily large and it would result in transferring loads of unnecessary data.
One way to handle this situation is to introduce a new type which has all of Article's properties but additionally some of the related Author's properties:
1 | public class ArticleDto |
Now we can perform the projection in the query. We will get a much smaller query and much less data transferred over the wire:
1 | public List<ArticleDto> FastGetArticlesOlderThan(DateTime dateTime) |
We improved performance, but now the code looks much worse - it involves manual mapping of properties which is in fact trivial to figure out. What's more, we would need to change this code every time we add or remove a field in the Article class. A library called Automapper comes to the rescue. Automapper is a convention-based class mapping tool. Convention-based means that it relies on naming conventions of properties. For example, Author.FirstName is automatically mapped to AuthorFirstName. Isn't that cool? You can find it on NuGet. Once you add it to your solution, you need to create the Automapper configuration:
1 | static MapperConfiguration config = new MapperConfiguration(cfg => |
Here we declare that Article should be mapped to ArticleDto, meaning that every property of Article should be copied to the property of ArticleDto with the same name. Now, we need to replace the huge manual projection with Automapper’s ProjectTo call.
1 | public List<ArticleDto> AutomappeGetArticlesOlderThan(DateTime dateTime) |
You need to add one more line:
1 | using AutoMapper.QueryableExtensions; |
And that’s it. You’ve just improved readability of your code and made it less fragile to changes.
Automapper is a very flexible tool. You don't need to rely on naming conventions; you can easily declare your own mappings. Additionally, we have used just a specific part of Automapper - Queryable Extensions, which work with ORMs. You can also use Automapper on regular collections or just on plain objects.
I believe the problem I highlighted here is just a symptom of a much broader issue of incompatibility between the relational and object-oriented worlds. Although Entity Framework tries to address the issue by letting you choose between eager and lazy loading, I don't think it is a good solution. Classes managed by EF being elements of a public API are a big problem. As a user of such an interface you never know if a navigation property is loaded and whether accessing it will result in a DB query.
Therefore, I advocate the use of mapped DTOs. This approach reminds me slightly of an idea called Functional Relational Mapping adopted for example by the Slick framework for Scala. I believe it to be a great alternative to classic ORMs. Some references:
]]>Let me explain by giving you an example. If you have ever used a web framework you might have been wondering how it handles multiple concurrent requests from different users. The traditional approach is to spawn a new thread (or get one from a thread pool) for every request that comes in and release it once the request is served. The problem with this solution is that whenever those threads perform IO operations (such as talking to a database) they simply block and wait for the operation to finish. Therefore, we end up wasting precious CPU time by allowing our threads to be blocked on IO.
Instead of blocking threads on IO operations we could use an asynchronous database API. Such an API is non-blocking. However, running a database query using such an API requires you to provide a callback. A callback in this case would be a function that is invoked once the result is available. So, in the asynchronous model your thread serves the request, runs some computations and, when it needs to call the database, it initiates the call and then switches to do some other, useful work. Some other thread will continue execution of your request when the database returns.
The biggest pain of writing programs in the asynchronous model is the necessity of callbacks. Fortunately, in C# we have lambda functions which allow us to write callbacks with ease. However, even with lambdas we can end up with a lot of nesting. The key to asynchronous programming in C# is the Task class. A Task represents a piece of work that can be either blocking or heavy on the processor, so it makes sense to run it asynchronously.
1 | Task<HttpResponseMessage> getGoogleTask = client.GetAsync("http://google.com"); |
In the first line we create a task that fetches the Google main page. The task starts immediately on a thread from a default, global thread pool. Therefore, the call itself is not blocking. On the second line we attach a callback which defines what should happen once the result is fetched. As I said, it is easy to introduce nesting with callbacks. What if we wanted to visit Facebook, but only if we succeeded in fetching the Google page?
1 | client.GetAsync("http://google.com").ContinueWith(googleTask => |
This code isn’t very readable. Also, if we wanted to visit more websites, we could end up with even more levels of nesting. C# 5.0 introduced an excellent language feature that lets you write asynchronous code just as if it was synchronous: the async and await keywords. The above example can be rewritten as follows:
1 | var googleResponse = await client.GetAsync("http://google.com"); |
One caveat about async/await is that the method containing any await calls must itself be declared as async. Also, the return type of such a method must be a Task. Therefore, the asynchronous-ness always propagates upstream. This actually makes sense - otherwise you would need to synchronously wait for a task to finish at some point. Modern web frameworks such as ASP.NET MVC let you declare the methods that handle the incoming requests as asynchronous.
1 | public async Task<ActionResult> FetchWebsites() |
One more thing about C# tasks - with them, executing stuff in parallel is incredibly easy.
1 | string[] websites = new string[] { "http://google.com", "http://facebook.com" }; |
Task.WhenAll creates a task that will be finished when all tasks from the provided array are finished.
Let’s have a look at how Scala approaches the problem. One of the approaches to asynchronous programming is to use Futures. Future is a class that has very similiar semantics to C#’s Task. Unfortunately, there is no built-in asynchronous HTTP client in Scala, but let’s assume we’ve got one and it’s interface looks like this:
1 | trait HttpClient { |
We can write code that looks very similar to the C# example with flatMap:
1 | client.get("http://google.com").flatMap(googleResp => |
flatMap invoked on a future takes a callback that will be executed once the result of that future is available. Since that callback must return a Future itself, we must return an empty future (Future.successful) in the else branch of our if. When fetching the Facebook page, we use map instead of flatMap because we don't want to start another future inside the callback. Again, the main issue with this code is that it is nested. Very similarly to how Scala handles nested null checks with the Option monad, here we can again use the for-comprehension syntax to get rid of nesting!
1 | for { |
As you might have expected, parallel processing is also supported with Futures:
1 | val websites = List("http://google.com", "http://facebook.com") |
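For reference, a sketch of the parallel version in full (client is the hypothetical HttpClient assumed earlier):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

// All requests start immediately and run in parallel; Future.sequence
// flips List[Future[...]] into a Future that completes once every request has.
val websites = List("http://google.com", "http://facebook.com")
val allResponses = Future.sequence(websites.map(client.get))
```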
An example of a web framework that supports asynchronous request handlers is Scalatra.
As you can see, C# and Scala approach asynchronous programming similarly. What I find interesting here is how Scala handles callback nesting with the generic mechanism of for comprehensions, while C# introduces a separate language feature for it. This is exactly the same pattern as in the Option monad vs null-conditional operator comparison. To be honest, I find async/await overall a bit more awesome - it really makes you feel as if you were writing synchronous code.
Update: as pointed out by Darren and Yann in the comments, you can also do async/await in Scala thanks to this library. There is also a pending proposal to add it to the language, which admits that it's inspired by C#'s async/await syntax.
]]>The number looks good to me although it gets interesting if we look at the distribution of views over different posts:
So, most of the views are due to my latest post, Scala’s Option monad versus null-conditional operator in C#. I submit most of my posts to Hacker News and this is also the main source of hits.
The conclusion here is that the title of a blog post really matters. I am yet to discover why this particular one caught attention, but my suspicion is that functional programming being a hot topic nowadays might be the reason.
This is much worse than what I aimed for (which is at least one post per week). The primary reason is lack of time, since writing a longer piece takes me at least 2 hours. What I plan to do about it is write more short posts explaining solutions to interesting problems I encounter at work or while working on side projects (such as Accessing request parameters from inside a Future in Scalatra).
I chose Blogger following the advice of one of the other programming blogs. So far, I'm not totally happy with it and I kind of regret that I did not choose Wordpress. I once had a blog on Wordpress for a while, and what I liked there was that some of the traffic came from other Wordpress users thanks to its Discover and Recommendations features. I thought a similar thing would happen here with Google+ but it's not happening at all. Additionally, the choice of free templates is much poorer, the built-in editor is not very convenient and the statistics module is less fancy. Update: I decided to move the blog to Wordpress because of the reasons mentioned above.
]]>Imagine we have a nested data model and want to call some method on a property nested deeply inside an object graph. Let's assume that an Article does not have to have an Author, the Author does not have to have an Address and the Address does not have to have a City (for example, this data can be missing from our database).
1 | public class Address { |
This is very unsafe code since we are at risk of a NullReferenceException. We have to introduce some null checks in order to avoid the exception.
1 | if (article != null) { |
Yuck! So much boilerplate code to do a very simple thing. It's really unreadable and confusing. Fortunately, C# 6.0 introduces the null-conditional operator. The new operator is denoted ?. and can be used instead of the regular . whenever it is possible that the value on the left can be null. For example, the below piece can be read as "call ToUpper only if bob is not null; otherwise, just set bobUpper to null".
1 | var bob = "Bob"; |
Returning to our previous example, we can now safely write:
1 | Console.WriteLine(article?.Author?.Address?.City?.ToUpper()); |
Option type
As I explained in one of my previous posts, in Scala we avoid having null variables at all costs. However, we would still like to be able to somehow reflect the fact that a piece of data is optional. The Option[T] type can be used to explicitly mark a value as optional. For example, a value bob of type Option[String] means that bob can either hold a String value or nothing:
1 | val someBob: Option[String] = Some("Bob") |
Therefore, we can easily model the situation from the previous example as follows:
1 | case class Address(street: String, city: Option[String]) |
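For reference, the full model might look like this sketch (field names mirror the C# example above):

```scala
case class Address(street: String, city: Option[String])
case class Author(name: String, address: Option[Address])
case class Article(title: String, author: Option[Author])
```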
Notice how, compared to C#, Scala forces us to explicitly declare which field is and which field is not optional. Now, let’s look at how we could implement printing article’s author’s city in lower case:
1 | if (article.author.isDefined) { |
This naive approach is not a big improvement when compared to the C# version. However, Scala lets us do this much better:
1 | for { |
Although this version is not as short as the one with C#'s null-conditional operator, it's important that we got rid of the boilerplate nested if statements. What remained is a much more readable piece of code. This is an example of the for-comprehension syntax together with the monadic aspect of the Option type.
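Written out in full, it might look like this sketch (using the model above):

```scala
// `article` is an Article from the model sketched earlier; each <-
// silently short-circuits the whole computation when it meets a None.
val cityLowerCase: Option[String] = for {
  author  <- article.author
  address <- author.address
  city    <- address.city
} yield city.toLowerCase
```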
Option monad
Before I explain what exactly is going on in the above piece of code, let me talk more about the methods of the Option type. Do you remember the map method of the List type? It took a function and applied it to every element of the list. Interestingly, Option also has a map method. Think of Option as of a List that can have one (Some) or zero (None) elements. So, Option.map takes a function and, if there is a value inside the Option, it applies the function to the value. If there is no value inside the Option, map will simply return None.
1 | scala> val address = Address("street", Some("New York")) |
Now, can we somehow use it with our initial problem? Let’s see:
1 | val cityLowerCase = article.author.map { author => |
I think it looks slightly better than the nested if approach. The problem with this is that the type of cityLowerCase is Option[Option[Option[String]]]. The actual result is deeply nested. What we would prefer to have is an Option[String]. There is a method similar to map which gives us exactly what we want - it's called flatMap.
1 | val cityLowerCase: Option[String] = article.author.flatMap { author => |
Option.flatMap takes a function that transforms an element inside the option into another option and returns the result of the transformation (which is a non-nested option). The equivalent for List is List.flatMap, which takes a function that maps each element of the list to another list. At the end, it concatenates all of the returned lists.
1 | scala> List(1, 2, 3, 4).flatMap(el => List(el, el + 1)) |
The fact that Option[T] and List[T] have the flatMap method means that they can be easily composed. In Scala, every type with the flatMap method is a monad! In other words, a monad is any generic type with a type parameter which can be composed with other instances of this type (using the flatMap method). Now, back to for-comprehensions. The nice syntax which allows us to avoid nesting in code is actually nothing more than syntactic sugar for flatMap and map. This:
1 | val city = for { |
…translates into this:
1 | val cityLowerCase: Option[String] = article.author.flatMap { author => |
For-comprehensions work with any monad! Let's look at an example with lists:
1 | scala> for { |
For each element in the first list we produce a list ranging from 1 to this element. At the end, we concatenate all of the resulting lists.
My main point here is to show that both C# and Scala introduce language elements to deal with deep nesting. C# has the null-conditional operator, which deals with nested null checks inside if statements. Scala has a much more generic mechanism which allows you to avoid nesting with for-comprehensions and flatMap. In the next post I will compare C#'s async keyword with Scala's Future monad to show the similarities in how both languages approach the problem of nested code.
post and get handlers and Scalatra will automagically take care of them. Recently I encountered a minor issue with Scalatra's support for Futures - it is not possible to access params or request values from code inside a Future. The below code throws a NullPointerException.
1 | get("/someResource/:id") { |
Scalatra exposes access to contextual data such as the current user or request parameters via members such as params or request. These values are implemented as DynamicVariables. Dynamic variables are a Scala feature which allows a val to have different values in different scopes. The point is that the DynamicVariable implementation is based on Java's ThreadLocal. Therefore, when executing code in a Future you may not rely on these values since you might be on another thread! An obvious solution to this problem is to retrieve request parameters before entering the Future:
1 | get("/someResource/:id") { |
However, this is not always a very convenient solution. I came up with the following workaround:
1 | get("/someResource/:id") { |
Firstly, we take a copy of the current request. Later, inside the Future, we tell Scalatra to substitute the request dynamic variable's value with our copy. Therefore, the call to params will use the correct request and there will be no error.
As I recently learned, there is a much better way to solve this issue that is actually built into Scalatra. The way to go is using the AsyncResult class. Our example would look like this:
1 | get("/someResource/:id") { |
AsyncResult is an abstract class. We create an instance of an anonymous type that extends it and overrides its is value. AsyncResult takes copies of the current request and response values when created and makes them available to the code inside is. You can find more information here.
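For completeness, a sketch of that version (assuming the servlet mixes in FutureSupport; findById is a hypothetical asynchronous lookup):

```scala
get("/someResource/:id") {
  new AsyncResult {
    // `is` is evaluated while AsyncResult still holds copies of the request
    // and response, so reading `params` here is safe.
    val is: Future[_] = findById(params("id"))
  }
}
```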
UPDATE: This article was originally called Combining two objects in lodash. I've updated it to cover more ways of combining objects in JavaScript.
Suppose you want to build an object that combines the properties of two other objects. For example, given these two objects:
1 | const a = { |
…what can be done to avoid copying properties manually?
1 | const c = { |
Object.assign

Object.assign is a built-in method introduced to JavaScript as part of the ES6 standard. It allows you to copy properties from one or more source objects onto a target object. A possible solution to the above problem could be:
1 | Object.assign(b, a); |
This way b would have both its own properties and a's properties. However, we might want to avoid modifying b. In that case, we can introduce a new, empty object and copy properties from a and b to it.
1 | const c = Object.assign({}, a, b); |
If for some reason you cannot use ES6 language features in your application, you can resort to using the lodash library.
1 | const c = _.assign({}, a, b); |
If you’d like to learn more about lodash, check out my free e-book about Functional Programming in JavaScript.
Another solution, and my personal favourite, is to use the object spread syntax (standardized later than ES6, in ES2018).
1 | const c = { ...a, ...b }; |
The triple-dot operator unwraps an object and lets you put its properties into a new object. By unwrapping both a and b and putting them into a new object ({}), we end up with an object having both a's and b's properties. So, which way is the best? It depends on your preference and requirements. Just pick one and be consistent in your choice!
This post covers Scala's Option type and pattern matching.

null references

If you have programmed in C# (or Java, or any other language that supports null references) you must already know the pain of NullReferenceException. This exception is thrown whenever you expect a variable to point to an actual object while in reality it does not point to anything; calling a method on such a reference results in the exception. There is a famous quote from Tony Hoare, who introduced the concept of null references, claiming that it was his billion-dollar mistake:
I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn’t resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years.
The Option type

What does it mean when a NullReferenceException is thrown? As I said, it means that the CLR was expecting a reference to an object but found an empty reference and does not know what to do with it. In the majority of cases, it means that you as the programmer should have thought about it and checked for the null reference before doing anything with it. Unfortunately, it would require great discipline to keep track of all references that could become null and to take care of each and every one of them. The Option type comes to the rescue. The idea is to force the compiler to do the hard work for you. Option[T] is an abstract type which has two subclasses: Some[T] and None. For example, a value of type Option[Int] represents an object that can, but does not have to, hold some integer. If this Option is an instance of Some then the object holds some value. If it's None then it does not hold any value. So, None is like null, except we explicitly declare that an object can be None by making it an Option. If we decide to use Option types in our project we must forget about null references completely. Therefore, whenever we expect a value of type T to be optional, we must declare it as Option[T]. Thanks to that, the compiler will forbid us from writing such code:
1 | def makeUpper(text: Option[String]) = text.toUpperCase() |
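Instead, we have to handle both cases explicitly - a sketch of the safe version with pattern matching:

```scala
def makeUpper(textOpt: Option[String]): Option[String] =
  textOpt match {
    case Some(text) => Some(text.toUpperCase)
    case None       => None
  }
```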
Such code, although lengthier, is much, much safer than traditional code which allows the use of null references. Of course, the key thing is to make sure that there is never a null inside a Some value. However, this is easy to ensure as long as we decide not to use null references anywhere in the project.
The above code snippet introduces some new syntax. The match construct is Scala's syntax for pattern matching. It is a very powerful tool, common in functional programming. You can think of it as a much more advanced switch statement which, as a whole, always returns a value. In the above example, the value of textOpt is examined. It is an instance of the Option type and we know that it has two subclasses. Therefore, there are two case branches. The first branch demonstrates how the value contained inside Some[T] can be extracted. Pattern matching can also be used with simple types:
1 | x match { |
Additionally, pattern matching works very well with case classes which we discussed in the previous post.
1 | abstract class Animal |
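For illustration, a sketch with an assumed Animal hierarchy:

```scala
abstract class Animal
case class Dog(name: String) extends Animal
case class Cat(name: String, lives: Int) extends Animal

def describe(animal: Animal): String = animal match {
  // patterns both check the subclass and extract its fields
  case Dog(name)        => s"a dog called $name"
  case Cat(name, lives) => s"$name, a cat with $lives lives"
  case _                => "some other animal"
}
```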
C#'s creators were in a great position since they could learn from Java's mistakes. They didn't waste the opportunity and did the right thing. The main problem with Java's generics is type erasure. The term means that the information about the type parameter of a generic type is not available at runtime. In simple words, this:

1 | List<String> list = new LinkedList<String>(); |
…becomes this:
1 | List list = new LinkedList(); |
Type erasure makes writing generic types more difficult and less clean. For example, sometimes generic methods have to explicitly take a Class object representing the type parameter (like here). In C# this is not the case. You can easily access the type of the type parameter:
1 | class List<T> |
Not long ago I found this article on Hacker News. It discusses some of the new features of Java 8 such as lambdas, streams and functional interfaces. These things are called modern Java, whereas in C# they have been available for quite a long time (not to mention that they have been available in Haskell or OCaml for even longer). While not everyone has to agree about the superiority of functional over imperative programming, it's hard to disagree that processing collections with higher-order functions (such as map/Select or filter/Where) is cleaner, less error-prone and much more readable than doing it with loops. Even though Java has already adopted lambdas and higher-order functions, it seems that C# has better support for them. Examples? In Java, you first have to convert a collection to a Stream before calling map or filter. Type inference is a nice feature that allows you not to declare the type of a variable if it's being initialized on the same line. While it's not as great as in Scala or Haskell, it certainly lets you cut some boilerplate code. Java also has some type inference, but it is limited to generic methods. With type inference, the below declaration:
1 | Dictionary<int, List<Tuple<int, int>>> graph = new Dictionary<int, List<Tuple<int, int>>>(); |
…can be written as:
1 | var graph = new Dictionary<int, List<Tuple<int, int>>>(); |
C# 5.0 introduced excellent support for asynchronous programming. The async and await keywords let you replace callback-style programming with code that looks exactly as if it were synchronous. It makes the code much cleaner and far easier to read. The comparison with Java is especially striking if you look at pre-Java 8 code, where in order to execute a piece of code asynchronously you had to create an anonymous type with one method! Have a look at this usage of the AsyncHttpClient library:
1 | AsyncHttpClient asyncHttpClient = new AsyncHttpClient(); |
…and compare it with this C# code:
1 | async Task<int> AccessTheWebAsync() |
Value types are part of the reason why there is a _C_ in _C#_. There are two kinds of types in C# - value types and reference types. Value types differ from reference types mainly in assignment semantics. When you assign a reference to a new variable, this variable points to the same object. When you assign a value type to a new variable, the whole piece of memory holding the data is copied. This is great for lightweight objects representing data. In some situations it might save you from writing the equals and hashCode methods. What's more, value types cannot be null, which makes them safer than reference types. Finally, value types make primitive types such as int or double feel more natural. In Java, every user-defined type is a reference type.
Extension methods allow you to add functionality to an existing type (even if it has already been compiled). One of the cool uses of extension methods is providing a concrete method for an interface. Also, they allow better code reuse and make it easier to write fluent APIs. An example of an extension method:
1 | public interface Animal { |
Finally, there are many great features introduced in C# 6.0. The language seems to be gravitating towards functional programming, which I think is a good idea, but most of them do not require the programmer to learn a new paradigm. One of the most exciting features of C# 6.0 is the null-conditional operator (which plays a role similar to the Maybe monad).

I have named just a few of the language features of C# which I believe make it a superior language to Java. Obviously, there are many more things to look at when choosing a language than its features. However, I think it's worth mentioning that thanks to Mono, Xamarin and Microsoft's BizSpark program for startups, .NET became much more accessible to small companies and startups than it was a decade ago.
You are most likely familiar with lambda expressions in C#. A lambda expression is simply an anonymous function. Lambdas are useful when you want to pass a piece of code as a parameter to some other function. This concept is actually one of the cornerstones of functional programming. One great example of how useful lambdas are is operations on collections. The following piece of code takes a list of integers, filters out odd numbers and multiplies the remaining numbers by 5.
1 | var list = new List<int> { 1, 2, 3, 4, 5 }; |
In Scala, it would look surprisingly similar:
1 | val list = List(1, 2, 3, 4, 5) |
Scala uses the more traditional FP names map and filter, but apart from this the code looks very similar. In Scala, we can make it a bit tighter (and less readable):
1 | val list = List(1, 2, 3, 4, 5) |
As you can see, Scala allows you to use anonymous parameters inside anonymous functions. However, be careful when using the underscore notation. The (_ * 5) + _ expression does not translate into x => (x * 5) + x. Instead, the second underscore is assumed to be the second anonymous parameter of the lambda, so it means this: (x, y) => (x * 5) + y.
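A quick sketch to make the expansion visible (the ascribed function type is needed for this to compile on its own):

```scala
// each underscore introduces a fresh parameter, left to right
val f: (Int, Int) => Int = (_ * 5) + _
f(2, 3) // 13, i.e. (2 * 5) + 3, not (2 * 5) + 2
```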
C# not only allows functions which take functions as parameters but also functions that return other functions. In the following piece, the GetMultiplier function takes a single integer and returns a function that can multiply it by any other integer.
1 | static Func<int, int> GetMultiplier(int a) { |
Let's see how it would look in Scala:
1 | def getMultiplier(x: Int): Function1[Int, Int] = { |
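The full definition could look like this:

```scala
def getMultiplier(x: Int): Function1[Int, Int] = {
  (y: Int) => x * y // returns a function that closes over x
}

val times5 = getMultiplier(5)
times5(4) // 20
```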
Again, it looks fairly similar. Function1[Int, Int] has the same semantics as Func<int, int> - it represents a one-argument function that takes an integer and returns an integer. Interestingly, in Scala Function1[Int, Int] can also be denoted as Int => Int.
1 | def getMultiplier(x: Int): Int => Int = { |
We can go one step further and rewrite the above function as:
1 | def getMultiplier(x: Int)(y: Int) = x * y |
This certainly looks odd - our function now has two parameter lists! It is Scala's special syntax for functions returning functions. You can pass one integer to getMultiplier and what you get is a partially applied function. What is the type of getMultiplier now? It's Int => (Int => Int), which can also be written simply as Int => Int => Int. This technique is called currying. The idea of currying is that a function with multiple parameters can be treated as a function that takes the first parameter and returns a function that takes the second parameter, which returns a function… and so on.
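A quick sketch of partial application with the curried form (the expected function type triggers eta-expansion in Scala 2):

```scala
def getMultiplier(x: Int)(y: Int) = x * y

// supplying only the first parameter list yields an Int => Int
val times5: Int => Int = getMultiplier(5)
times5(4) // 20
```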
The syntax in Scala is indeed quite different from C# syntax. Let's have a look at this HelloWorld program in Scala.
1 | object HelloWorld { |
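Written out in full, the program might look like this:

```scala
object HelloWorld {
  def main(args: Array[String]): Unit = {
    println("Hello, world!")
  }
}
```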
First of all, the object keyword seems unfamiliar. In Scala, singleton objects are part of the language. It is like declaring a class and saying that there can be only one instance of it - and this instance is like a global variable, accessible by the name of the class. The concept is somewhat similar to static classes in C#, although a singleton object is a real instance of its class. Another difference is method declaration. As you can see, in Scala the type (of a method or variable) comes after the name, not before it. The def keyword marks a method declaration. The Unit type is a bit like void in C# - it is the return type of a method which does not return any sensible value. One more thing - the println call is not preceded by any class/object name. In Scala, objects can behave like namespaces in C#. It is possible to import all methods from an object. The using static members feature in C# 6.0 gives you the same behaviour. It is possible to write the above piece in a more compact way:
1 | object HelloWorld { |
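For example, the compact version could be:

```scala
object HelloWorld {
  def main(args: Array[String]) = println("Hello, world!")
}
```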
As you can see, the braces can be omitted for single-line methods. Also, the Unit type disappeared - now it is inferred by the compiler (similarly to how the var keyword works in C#). Again, C# 6.0 brings us something similar - expression-bodied members.
I have already introduced the object keyword. Let's now have a look at regular classes.
1 | class Ship(name: String, x: Double, y: Double) { |
1 | object HelloWorld { |
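A fuller sketch of what this pair might look like (the field names, initialization line and distance formula are my assumptions):

```scala
class Ship(name: String, x: Double, y: Double) {
  println(s"Creating ship $name") // initialization code lives directly in the body

  // two public, immutable fields; their types are inferred
  val positionX = x
  val positionY = y

  // the method returns its last expression; no `return` keyword needed
  def distanceFrom(other: Ship) =
    math.sqrt(math.pow(positionX - other.positionX, 2) +
              math.pow(positionY - other.positionY, 2))
}

object HelloWorld {
  def main(args: Array[String]): Unit = {
    val ship = new Ship("Black Pearl", 0.0, 0.0)
    println(ship.distanceFrom(new Ship("Interceptor", 3.0, 4.0))) // 5.0
  }
}
```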
Again, let's look at the differences, case by case. There is no separate constructor declaration here, as we know it from C#. The constructor arguments are written next to the class name and the initialization code lies directly in the class body. You could conclude that a class can only have one constructor in Scala. This is not true - additional constructors can be provided as secondary constructors. In the next lines there are two field declarations. Fields are public by default. The keyword used here is val, which means that the fields are immutable. Immutability is at the heart of functional programming. Immutable values are values that cannot be modified. It may seem counterintuitive at first, but immutable values can help you eliminate whole classes of errors from your programs. I will discuss immutability in more detail in one of the future posts. For now, I recommend this article. Member fields do not have to have type declarations - the compiler infers the correct types. The distanceFrom method declaration is pretty straightforward. You may notice that there is no return statement here. This is because in Scala a method by default returns the last expression in its body. In our case, there is only one expression. Class instantiation is very C#-like - we use the new keyword and provide constructor arguments.
Scala introduces a very useful concept called case class. Let’s see how we could rewrite the above code in a more succinct way.
1 | case class Ship(name: String, positionX: Double, positionY: Double) { |
With case classes, all constructor parameters automatically become members, hence no need for explicit member initialization. Also, the new keyword is no longer needed for creating new instances. Although very helpful, this is only one aspect of case classes. More importantly, case classes automatically provide value-based equals, hashCode and toString implementations. Additionally, a case class cannot be extended by another case class. In other words, case classes are perfect for creating immutable data types. Let's now compare C# and Scala implementations of a class representing a two-dimensional point, so that you can see for yourself how much nicer it is to write in Scala.
1 | case class Point2d(x: Double, y: Double) { |
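A sketch of the complete Scala version (the method body is assumed):

```scala
case class Point2d(x: Double, y: Double) {
  def distanceFrom(other: Point2d): Double =
    math.sqrt(math.pow(x - other.x, 2) + math.pow(y - other.y, 2))
}

// equality, hashing and printing come for free:
Point2d(1, 2) == Point2d(1, 2)  // true
Point2d(1, 2).toString          // "Point2d(1.0,2.0)"
```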
And now C#:
1 | using System.IO; |
As you may know, implementing equals correctly is not trivial, not to mention that we would also need to implement GetHashCode. In Scala we get the default implementation for free.
To make a column editable in ui-grid, you set enableCellEdit to true in its columnDef. What wasn't obvious to me is that for this to work you also need to load the ui.grid.edit module and add the uiGridEdit directive to the element on which you enabled uiGrid. Code example below.

1 | $scope.gridOptions.columnDefs = [ |
1 | <div ui-grid="gridOptions" ui-grid-edit class="grid"></div> |
Type variance is relevant not just to generics but also to inheritance of regular, non-generic classes. When overriding a method in a class you usually make sure that it has the same argument types and return type. Note that this is not always necessary. For example, it makes sense for the overriding method to return a subtype of the return type of the original method.
1 | class AnimalList { |
The caller of getAnimal will expect an instance of Animal. Returning something more derived (a Dog) is perfectly type safe. Therefore, we can say that the return type of an overriding method is covariant. Let's now look at argument types.
1 | class Animal {} |
AdvancedDogComparator is a specialized version of DogComparator. Just like DogComparator, it can compare two Dogs, but it can do more than that. So, AdvancedDogComparator.isLarger must accept at least a Dog, but it can also accept a supertype of Dog - an Animal. We can say that the parameter types of an overriding method are contravariant. You may see an analogy here to how we deduced in the first post that it should be possible to make MyList<T> covariant as long as it does not have the add method. Return type covariance is supported in both Java and C#. Argument type contravariance is supported in neither Java nor C#. One more interesting case - if you create a covariant generic interface with type parameter T, C# will not allow you to declare a method that takes T in it.
1 | interface IMyList<out T> { |
Compiler output:
1 | error CS1961: The covariant type parameter `T' must be contravariantly valid on `IMyList<T>.add(T)' |
This is actually related to the contravariance of argument types when overriding methods. Any subtype of IMyList would have to override add. Therefore, T would have to be contravariant, but it is declared as covariant (the out keyword), which makes a contradiction.
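As a side note, Scala's declaration-site variance enforces the same rule; a sketch:

```scala
class MyList[+T] {
  def get(i: Int): T = ???

  // Uncommenting this fails to compile:
  // "covariant type T occurs in contravariant position in type T of value t"
  // def add(t: T): Unit = ???
}
```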
So, what is this cryptic title about? Let me start with this classic example in Java.
1 | class Animal { } |
Would you expect this piece of code to compile? The answer depends on what operations are available on MyList. Let's assume that MyList is very similar to ArrayList and it allows you to get and add.
1 | class MyList<T> { |
Now, assuming that the questioned piece of code would compile, it would be perfectly valid to add a Cat to the list of Animals which is in fact a list of Dogs. This is not something we would want the compiler to allow.
1 | animals.add(new Cat()); |
In this case, MyList<Dog> is not (does not inherit from) MyList<Animal>. We call MyList invariant. This is the kind of behaviour that we get in Java. Let's now assume that MyList is read-only and does not have an add method.
1 | class MyList<T> { |
Now, the previous issue is no longer a problem. If we call animals.get() we can get either a Dog or a Cat, and we are OK with this. In such a case, it makes sense to allow the questioned piece to compile. Hence, MyList<Dog> is (does inherit from) MyList<Animal> and we call MyList covariant.
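As a side note, languages with declaration-site variance let you state this directly; a sketch in Scala:

```scala
class Animal
class Dog extends Animal

// read-only, hence safe to mark T covariant with +T
class MyList[+T](items: List[T]) {
  def get(i: Int): T = items(i)
}

val dogs: MyList[Dog] = new MyList(List(new Dog))
val animals: MyList[Animal] = dogs // compiles: MyList[Dog] is a MyList[Animal]
```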
As stated before, in Java the piece below would not compile. In other words, generic types in Java are invariant. This is quite limiting when compared to languages which allow you to specify variance for generics.
1 | MyList<Dog> dogs = new MyList<Dog>(); |
Compiler output:
1 | HelloWorld.java:22: error: incompatible types |
However, there is an interesting exception to generic’s invariance in Java. The below code will compile:
1 | Dog[] dogs = new Dog[10]; |
So, what happens when we try to add a Cat to an array of Dogs? Java gives us an ArrayStoreException (of course this happens at runtime, not at compile time). So, arrays are covariant in Java! This is not a very elegant situation and the reasons behind it are mainly historic. There is a good explanation of this on Stack Overflow.
Similarly to Java, C# would not allow us to compile the code below:
1 | class MyList<T> { } |
Compiler output:
1 | error CS0029: Cannot implicitly convert type `MyList<Dog>' to `MyList<Animal>' |
However, C# goes a step further and allows us to create variant generic interfaces. It is possible to mark a type parameter with the out keyword to make the generic interface covariant (the in keyword makes it contravariant).
1 | interface IMyList<out T> { } |
There are some nice examples of contravariance in C#. While covariance means that you can use a more derived type than the one specified by the type parameter, contravariance lets you use a more generic type than specified. It may seem a bit counterintuitive, but let's look at the Action<T> type, which represents a function that takes a parameter of type T and does not return anything.
1 | Action<Base> b = (target) => { /* do something with target */ }; |
In this case, it makes sense to say that Action<Base> is an Action<Derived>. An Action<Derived> will only ever receive arguments of type Derived, so a function that accepts something more generic (a Base) can safely stand in for it. In the next post I will look at how variance is exploited in inheritance.