Browser Performance

In this post we’ll go over the topic of browser performance. We’ll cover tools and techniques to identify, measure and improve performance on the client side of web applications.

Assumptions. Our tests use the Chrome browser, which means the tools likely only make sense for this specific browser (the general techniques are browser-agnostic). All the screenshots were taken either on a Linux machine running Gnome or the latest MacOS, though Chrome should look relatively similar on other platforms. The Chrome version used is v80.

Setup. I’ll be using simple standalone HTML/JavaScript code. I recommend downloading the folder browser-performance.

Analyzing Performance

FPS Meter

This widget tracks the number of frames per second (FPS). To enable it:

In the … icon menu, click on Show console drawer:

Screenshot from 2020-03-16 10-12-37

In the Rendering tab, check the box FPS meter:

Screenshot from 2020-03-16 10-17-55

And it should look like this:

Screenshot from 2020-03-16 09-46-38

The yellow bar represents 60 FPS, the threshold below which the page might look sluggish. The bar chart on the right is a histogram view of the visible line chart: bars at the bottom represent low FPS, bars at the top high FPS. The numbers at the top show the range (max/min FPS).

Initially the numbers will be low because there is no previous data: the widget only measures FPS when the page re-renders. To prime the data you need to manually trigger many visual updates, say by resizing the window or performing specific actions in your application:

Screenshot from 2020-03-16 10-26-35

Flame Chart

The flame chart is a very useful visualization to understand performance. It allows identifying expensive functions at a glance (long bars) and diving into sub-functions quickly.

To use this, download and open (locally) the on_click.html example. Click on the “Performance” tab, then the record button, then the “Click Me” button on the page. After a number is displayed, stop the recording. We should get a chart similar to the one below:

Screenshot from 2020-03-17 10-20-06

Flame chart view

We can see the area chart at the top, which is an overview of the processing happening below. The y-axis in that chart is the amount of CPU used.

The second chart is the flame chart. Its bars correspond to function calls (e.g. Event: click), and below each bar are the functions it called. We see a generic Function Call because we used an anonymous arrow function, along the lines of the sketch below (a reconstruction; element ids are assumptions, not the exact code from on_click.html):
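
function expensive() {
  let s = 0;
  for (let i = 0; i < 1e8; i++) s += i;  // burn CPU so the bar is visible
  return s;
}

document.getElementById("button").addEventListener("click", () => {
  document.getElementById("result").textContent = expensive();
});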

Another thing to notice is that if we zoom in enough, we see what looks like multiple calls to the expensive() function, but in our code it is a single continuous call. I’m not sure why Chrome does that, but I’m assuming it has to do with garbage collection kicking in.

Screenshot from 2020-03-17 10-26-02

There are other interesting alternative views at the bottom. The doughnut chart gives a breakdown of the types of work being done. In our case it’s just a mix of scripting and idle:

Screenshot from 2020-03-17 10-28-42

The Call Tree view is great for the details. It allows looking at specific call chains and how much each is contributing. In our example, we can see Event: click is an expensive operation but it’s the expensive() function that is the culprit (based on the Self time):

Screenshot from 2020-03-18 09-41-47

To find specific bottlenecks, the Bottom-Up view might be more interesting. Whereas the Call Tree shows a nested tree from the root, the Bottom-Up view shows the internal nodes, sorted by Self Time:

Screenshot from 2020-03-18 09-44-46

Performance Metrics

When we record a trace that includes a page load, it also includes some standard metrics like First contentful paint (FCP). The definitions of some of them are in [4].

Screenshot from 2020-03-18 10-00-29

It’s possible to write our own events that show up in Timings. We just need to add markers and measure durations using the performance API [6]. A sketch (the marker names are arbitrary):
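
performance.mark("expensive-start");
expensive();                      // the function from the flame chart example
performance.mark("expensive-end");
performance.measure("expensive", "expensive-start", "expensive-end");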

Then it shows up in the Timings row:

Screenshot from 2020-03-18 10-21-37

The result of performance.measure() contains useful information:

Screenshot from 2020-03-18 10-21-06

Network

When we first open the Network tab, it immediately starts recording the network activity. Since we want a full scan of the page requests, let’s stop it, clear the data, start recording again, reload the page and then stop recording to get a report.

The sample below is from imdb.com:

Screen Shot 2020-03-19 at 9.23.48 AM

Network tracing for imdb.com

Similar to the performance tab, this view provides a timeline which we can zoom in. The table below lists all network requests performed by the browser, including static resources like images, JavaScript files and async requests to fetch more data.

We can filter by request type at the top:

Screen Shot 2020-03-19 at 9.29.24 AM

Here we could look for oversized images (it’s unlikely you’ll need to show a gigantic high-resolution image by default, so look into sending a thumbnail with a link to the larger image) or slow async requests.

Async requests are of type XHR (XMLHttpRequest), so we can filter by that:

Screen Shot 2020-03-19 at 9.33.48 AM

The bars in the last column give an indication of not only the time a request took, but also its dependencies: sometimes an async request doesn’t start until a specific one finishes. If we hover over the bar of the render request, we see:

Screen Shot 2020-03-19 at 9.38.42 AM

It has a nice breakdown of the timings, including DNS lookup and SSL latency (authentication). It also includes queuing time, the time waiting for the first byte (TTFB) and the content download.

The queuing indicates that the request was ready to go but needed to wait on some other dependency or due to browser limits. Usually there’s a limit on the number of active XHRs per domain the browser can have (fewer than 10) [7].

TTFB means “Time to first byte”: the time it takes to start receiving data. This is indicative of how long the server took to process the request.

Content download is the time spent getting the resource/data. It’s correlated with the size of the data and the network bandwidth/speed.

Measuring Performance

Duration from start

The performance API keeps track of specific events in the page loading lifetime, so we can measure durations compared to a specific point in time.

performance.now() [9] provides the timestamp since the page started to load (in rough terms – see this more precise definition). For example:
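
// Sketch: measuring elapsed time relative to an arbitrary starting point.
const t0 = performance.now();
for (let i = 0; i < 1e7; i++);    // some work to measure
console.log(`took ${performance.now() - t0}ms`);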

Side note: browsers purposefully reduce the precision of these APIs to protect against attacks that rely on high-precision timing information [10].

Custom Durations

As we mentioned earlier (in measurement.js), we can measure custom durations by using performance.mark() and performance.measure() [5].

Benchmarking

We can use the APIs above to create simple benchmarks.

Before we start: always beware of over-optimizing code! You might end up spending a lot of time improving performance against a specific benchmark, browser or even JavaScript engine version. That’s part of the reason V8 retired the benchmark tool Octane.

That said, if you identify some code in the critical path that executes the same operation on the order of millions of times, it might be worth optimizing it. One example is changing from array methods like forEach() and reduce() to pure for-loops. We can perform several runs of the same function and get the average. I opted for using a random number generator to avoid any optimization related to caching. Below is a sketch of such a benchmark (a reconstruction; the exact code isn’t reproduced here):
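
// 10 million random floats, summed; each variant runs 10 times.
const data = Array.from({length: 1e7}, () => Math.random());

function benchmark(label, fn, runs = 10) {
  const start = performance.now();
  for (let i = 0; i < runs; i++) fn();
  console.log(label, (performance.now() - start) / runs, "ms");
}

benchmark("for loop", () => {
  let sum = 0;
  for (let i = 0; i < data.length; i++) sum += data[i];
  return sum;
});
benchmark("reduce", () => data.reduce((sum, x) => sum + x, 0));
benchmark("forEach", () => {
  let sum = 0;
  data.forEach((x) => { sum += x; });
  return sum;
});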

The function generates an array with 10 million random floats and calculates their sum, running the code 10 times. The results follow:

  • for loop: 12ms
  • reduce: 156ms
  • forEach: 208ms

On Firefox it yields:

  • for loop: 40ms
  • reduce: 48ms
  • forEach: 64ms

We can see a for loop is ~92% and ~16% faster than reduce on Chrome and Firefox respectively. Some takeaways:

  • It’s highly browser-dependent! Don’t over-optimize code for one browser.
  • Although the relative gains are substantial, the absolute improvements are on the order of 100ms for an array of 10 million items – how practical is this? The creation and allocation of the array probably takes more time than this.
  • This is pure computation with no DOM access needed, so consider using Web Workers.

Techniques

Here we discuss some techniques to improve performance. The first two are general-purpose optimizations, but we include them here for completeness.

Reduce Algorithmic Complexity

We’ll start with a quote from Donald Knuth [15]:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Before optimizing code, we should consider optimizing the algorithm, that is, reducing the big-O complexity. In a recent project, we found several wins by identifying hidden O(n^2) code and converting it to O(n).

Trade Memory for Computation

We often need to make this tradeoff: to improve speed of code (computation) we have to use more memory. This is actually how we improved the O(n^2) to O(n): by keeping a look-up hash table for fast access.

In JavaScript a quick look-up table can be implemented via Object, or ES6 Map and Set. Other techniques involve caching (memoization).
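
A minimal sketch of the idea – a Set turns a nested O(n^2) duplicate scan into an O(n) pass:

const items = ["a", "b", "a", "c", "b"];
const seen = new Set();
const duplicates = [];
for (const item of items) {
  if (seen.has(item)) duplicates.push(item);  // O(1) lookup
  seen.add(item);
}
console.log(duplicates); // ["a", "b"]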

Encode JSON Payloads

In [8] Addy Osmani and Mathias Bynens suggest that if you’re sending large JSON payloads over the network, it’s better to send them JSON-encoded instead of as JavaScript object literals:

Because the JSON grammar is much simpler than JavaScript’s grammar, JSON can be parsed more efficiently than JavaScript
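
A sketch of the suggestion – ship the payload as a JSON string and parse it, instead of embedding a large object literal in the script:

const data = JSON.parse('{"items": [1, 2, 3]}'); // faster to parse at scale
// vs. shipping: const data = {items: [1, 2, 3]};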

Reduce CSS selector complexity

In [17], Lewis suggests reducing the complexity of CSS selectors. Paraphrasing the article’s example, a rule like
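
.box:nth-last-child(-n+1) .title {
  /* styles */
}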

can be replaced with a single class applied directly to the element:
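
.final-box-title {
  /* styles */
}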

The reason being:

Roughly 50% of the time used to calculate the computed style for an element is used to match selectors (…)

Use Flexbox

In [18] Lewis suggests that flexbox is more efficient than other layout alternatives such as float positioning. One caveat made in the article is that it is not widely supported. As of 2020, however, that no longer seems to be the case.

Avoid reading DOM properties right after mutating them

In [18] Lewis explains that the Layout phase happens after the JavaScript runs:

Screen Shot 2020-03-28 at 11.51.13 AM

This means that changing a DOM property and reading it afterwards requires the browser to block and perform layout before returning the new value. If possible, do the read before the write. A sketch (.box is a hypothetical element):
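
const box = document.querySelector(".box");

// Bad: the write forces a synchronous layout before the read below.
box.style.width = "200px";
const afterWrite = box.offsetHeight;

// Better: read first, then write.
const height = box.offsetHeight;
box.style.width = "200px";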

Avoid some advanced CSS effects

This is more of a tradeoff between performance and design. In [19] Lewis mentions that properties such as box-shadow can be expensive.

It’s unclear however which specific CSS properties are expensive.

Use Layers (but sparingly)

In [19] Lewis suggests creating layers for elements that are updated very often, to prevent their updates from triggering work in the rest of the page. One way to force the creation of a layer is:
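
/* From [19]: hint the browser to promote the element to its own layer. */
.moving-element {
  will-change: transform;
}

/* Fallback for browsers without will-change support: */
.moving-element {
  transform: translateZ(0);
}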

These should only be created if there are clear improvements in performance since, like indexes in databases, they come with memory and management costs.

Use requestAnimationFrame()

Since JavaScript’s main thread is also used for rendering, running an expensive function might cause the UI to freeze. To schedule functions to run at a time that is friendly to rendering, we can use requestAnimationFrame(). According to [3]:

When visual changes are happening on screen you want to do your work at the right time for the browser, which is right at the start of the frame. The only way to guarantee that your JavaScript will run at the start of a frame is to use requestAnimationFrame

In [20], Lewis provides a more specific case for the use of requestAnimationFrame(): on scroll input handlers since they block the UI.

There’s one caveat however: input handlers execute before requestAnimationFrame(), so if a handler makes style changes and we then read them in the requestAnimationFrame() callback, we run into the problem described in Avoid reading DOM properties right after mutating them. A sketch of the scroll-handler pattern (updateVisuals() is a hypothetical function):
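
function updateVisuals() { /* style reads/writes go here */ }

let scheduled = false;
window.addEventListener("scroll", () => {
  if (scheduled) return;            // already queued for this frame
  scheduled = true;
  requestAnimationFrame(() => {     // runs at the start of the next frame
    updateVisuals();
    scheduled = false;
  });
});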

Web Workers

As we discussed previously, web workers run in a separate thread, so expensive computations won’t freeze your application by blocking the main thread. To understand why those freezes happen, it’s important to know the basics of how the JavaScript execution model works.

The major caveat is that web workers don’t have access to the DOM but they do have I/O access (like XHR).
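
A minimal sketch (worker.js is a hypothetical file):

// worker.js: onmessage = (e) => postMessage(e.data.reduce((s, x) => s + x, 0));
const worker = new Worker("worker.js");
worker.onmessage = (e) => console.log("sum:", e.data);
worker.postMessage([1, 2, 3]);  // the heavy work happens off the main thread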

Web Assembly

It’s possible to use lower level languages like C++ and Rust and run them on the browser by compiling them to an intermediate representation called Web Assembly (WASM).

This guide explains how to compile Rust code to Web Assembly, package it as an npm package and execute it in the browser via Webpack. I’ve tried it myself and the process is relatively straightforward.

Portability. Web Assembly’s official website [14] claims it is portable across different platforms.

Existing libraries. One of the major selling points of Web Assembly is being able to run existing libraries written in languages like C/C++. In [11] PSPDFKit’s team showcases their port of a PDF renderer that previously had to run on the server and can now run in the browser by shipping the binary as WASM. They performed some benchmarks, getting mixed results in terms of performance, but had several follow-ups with browser vendors and were able to improve performance (though no subsequent analysis was available in the post).

Web Assembly’s website [13] lists other use cases.

Performance. The main benefit I can see for starting a project from scratch in, say, C++ or Rust would be performance over JavaScript. JS vs WASM’s site [12] provides a set of computations in JavaScript and in WASM compiled from C++. The results are not promising: in most cases JavaScript is faster.

GPU via WebGL

WebGL (Web Graphics Library) is a JavaScript API available in most browsers, with the main purpose of rendering 2D and 3D graphics. But because it has access to the GPU (which JavaScript doesn’t have direct access to), it can be used for heavy but highly parallelizable computations such as Machine Learning [16].

If your application requires this sort of processing, WebGL can be an interesting idea.

Conclusion

This was another of the notes on Web Development. The other notes are listed in the Related Posts section below.

In this post we covered a wide variety of tools and techniques regarding performance of client-side applications. We didn’t delve into much detail, but hopefully some of the links will serve as starting pointers for further research.

For me it was particularly interesting to learn more about Web Assembly and I’m interested in studying it further, especially with Rust.

References

[1] Get Started With Analyzing Runtime Performance
[2] Rendering Performance
[3] Optimize JavaScript Execution
[4] User-centric performance metrics
[5] User Timing API
[6] StackOverflow: Using performance.mark() with Chrome dev tools performance tab
[7] StackOverflow: How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
[8] The cost of parsing JSON
[9] MDN: performance.now()
[10] Github w3c/hr-time: Reducing the precision of the DOMHighResTimeStamp resolution
[11] A Real-World WebAssembly Benchmark
[12] JS vs WASM
[13] WebAssembly: Use Cases
[14] WebAssembly: Portability
[15] Donald Knuth: Structured Programming with go to Statements
[16] TensorFlow.js: Machine Learning for the Web and Beyond
[17] Reduce the Scope and Complexity of Style Calculations
[18] Avoid Large, Complex Layouts and Layout Thrashing
[19] Simplify Paint Complexity and Reduce Paint Areas
[20] Debounce Your Input Handlers

Related Posts

  • Notes on JavaScript Interpreters – This post focuses on the detail of Chrome’s V8 interpreter. The relationship with performance is that understanding how JavaScript code is executed (e.g. event loop model, JIT) can give us insight on how to optimize it.
  • Web workers – a browser feature that allows running code outside of the main thread. This can speed up computation as long as no DOM access is needed.
  • Notes on how browsers work – provides a basic overview on the stages of executing a web page, from downloading HTML and Javascript, executing the scripts, constructing the DOM tree and laying out the page. Understanding these steps might give insights on how to better optimize the app.
  • Notes on Javascript Memory Profiling in Google Chrome – memory is related to performance because memory and CPU are the major bounded resources your app has to deal with. Not taking care of memory (e.g. memory leaks) could also exhaust the memory available for the execution engine to make the CPU-Memory tradeoffs we mentioned above.

Sockets

In this post we’ll discuss sockets, more precisely Network Sockets. It’s based on two excellent articles from Brian “Beej Jorgensen” Hall. We’ll first cover some basic OS and Network concepts and then go over some important functions needed to create a simple socket server that can handle requests from multiple clients.

This post assumes basic familiarity with the Linux operating system and the C programming language.

OS Basics

File Descriptors

A file descriptor is basically a numerical identifier (id) into a lookup table that each process maintains. It is used to model not only files, but also things like stdin (special id = 0), stdout (id = 1) and stderr (id = 2). Sockets are also represented as file descriptors, as we’ll see later.

Fork

fork() is a system call the current process can use to create copies of itself (known as children) that run the same program. One observation is that the child process gets a copy of the parent’s data.

This will be important for our example where the main process uses fork() to generate children to handle connections. The child needs to inherit some of the file descriptors from the parent.

fork() returns 0 in the child process and, in the parent, the process id (pid) of the child (or -1 on error). A common way to use fork() is the following (a sketch):
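
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        printf("in the child\n");
    } else if (pid > 0) {
        printf("in the parent; child pid is %d\n", pid);
    } else {
        perror("fork");  /* fork failed */
    }
    return 0;
}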

The return value distinguishes parent from child, hence we can have them execute different code.

Signal

Signals are one of the simplest ways to communicate with a process. We just send a code that the process knows how to handle. There are OS-level signals like SIGKILL and SIGTERM and user-defined ones such as SIGUSR1. It’s possible to override how a process handles specific signals via sigaction(), which takes a structure that points to a handler. A sketch:
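
#include <signal.h>
#include <unistd.h>

void handler(int sig) {
    /* only async-signal-safe work should happen here */
    (void) sig;
}

int main(void) {
    struct sigaction sa;
    sa.sa_handler = handler;    /* the structure points to our handler */
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);
    pause();                    /* wait for a signal to arrive */
    return 0;
}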

Network Basics

Beej describes the Layered Network Model but follows with this interesting observation:

Now, this model is so general you could probably use it as an automobile repair guide if you really wanted to. A layered model more consistent with Unix might be:

  • Application Layer (telnet, ftp, etc.)
  • Host-to-Host Transport Layer (TCP, UDP)
  • Internet Layer (IP and routing)
  • Network Access Layer (Ethernet, wi-fi, or whatever)

In this model, sockets are in the application layer since they rely on TCP or UDP on top of IP. We’ll go over these three next.

IP, the Internet Protocol

The guide discusses some details of the IP, including the IPv4 and IPv6 distinction and different address types. The flags AF_INET and AF_INET6 are part of the socket API and associated with these two types.

One interesting detail is the byte order (Little-Endian vs. Big-Endian): while different computer systems can represent data in different ways, the order is standardized for the internet as Big-Endian. This is also known as Network Byte Order.

UDP and TCP

The User Datagram Protocol is connectionless (stateless) and provides no guarantees on the order of the datagrams, their delivery or that duplicates are avoided. The messages sent via UDP are known as datagrams.

The Transmission Control Protocol relies on a connection (established via a 3-way handshake), guarantees order and performs retries. The messages sent via TCP are known as a data stream.

The socket types SOCK_STREAM and SOCK_DGRAM are associated with the TCP and UDP protocols respectively.

Socket API

Before we proceed with our example, we’ll cover the C functions in sys/socket.h that correspond to the Linux socket APIs:

getaddrinfo() is a relatively high-level function that is capable of resolving an address (e.g. google.com) to an actual IP and port. An example of use (a sketch; see the full code in the guide):
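
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void) {
    struct addrinfo hints, *servinfo;

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;  /* TCP */

    int status = getaddrinfo("google.com", "80", &hints, &servinfo);
    if (status != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
        return 1;
    }
    freeaddrinfo(servinfo);
    return 0;
}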

The variable servinfo is of type addrinfo, which is a node of a linked list (field order may vary slightly across platforms):
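
struct addrinfo {
    int              ai_flags;      /* e.g. AI_PASSIVE */
    int              ai_family;     /* AF_INET, AF_INET6, AF_UNSPEC */
    int              ai_socktype;   /* SOCK_STREAM, SOCK_DGRAM */
    int              ai_protocol;
    size_t           ai_addrlen;    /* size of ai_addr in bytes */
    struct sockaddr *ai_addr;
    char            *ai_canonname;
    struct addrinfo *ai_next;       /* next node in the linked list */
};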

getaddrinfo() returns a list of such values: all that match the criteria from the input parameters. Beej’s client code, as we’ll see, iterates over that list until it finds a set of parameters it can connect to.

socket() returns a file descriptor. It basically creates an entry with a given id in a table and returns that identifier, which we’ll use to establish a connection later. The interface is as follows:

int socket(int domain, int type, int protocol);

  • domain could be one of AF_INET and AF_INET6 (IPv4 / IPv6)
  • type is one of SOCK_STREAM or SOCK_DGRAM (TCP / UDP)
  • protocol could be one of PF_INET and PF_INET6

In practice AF_INET is the same as PF_INET. Beej says:

This PF_INET thing is a close relative of the AF_INET (…) they’re so closely related that they actually have the same value (…), it was thought that maybe an address family (what the “AF” in “AF_INET” stands for) might support several protocols that were referred to by their protocol family (what the “PF” in “PF_INET” stands for). That didn’t happen.

More conveniently, we can also use the results of getaddrinfo() to fill these in for us:
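
/* Continuing from the getaddrinfo() example above: */
int sockfd = socket(servinfo->ai_family, servinfo->ai_socktype,
                    servinfo->ai_protocol);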

bind() binds a socket to a specific hostname and port. It is not always needed (for example in the client case). The client doesn’t usually care which port is used on its own side, so it can let the OS choose. For the server case it’s important because it defines the IP address and port the socket will listen to.

This information can be more easily provided via the struct returned from getaddrinfo(). By providing NULL as the hostname to getaddrinfo() and AI_PASSIVE in ai_flags, we have this function fill in the IP in res for us (a sketch; the port number is arbitrary):
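
struct addrinfo hints, *res;

memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;          /* fill in my IP for me */

getaddrinfo(NULL, "3490", &hints, &res);

int sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
bind(sockfd, res->ai_addr, res->ai_addrlen);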

connect() is the function a client can use to indicate the desire to establish a connection with a given server. Similarly to bind(), we can use the results from getaddrinfo():
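
/* Sketch, with res obtained from getaddrinfo() as before: */
connect(sockfd, res->ai_addr, res->ai_addrlen);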

Note how we didn’t need to bind(). connect() filled in the local host and a random port for us.

listen() is the function a server calls to define how many pending connections can be queued on a specific socket. Beej’s article mentions a system limit of 20, but using 5 to 10 seems to work in practice.

accept() is the function that actually connects with a specific client. So that the original socket can keep listening for other incoming connections, accept() returns a new socket descriptor, which is used to send and receive messages. A sketch:
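
struct sockaddr_storage their_addr;
socklen_t addr_size = sizeof their_addr;

listen(sockfd, 10);  /* allow up to 10 pending connections */
int new_fd = accept(sockfd, (struct sockaddr *)&their_addr, &addr_size);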

send() / recv() are used to send and receive messages via the established connection. One important aspect is that while you specify the size of the data being sent, the API does not guarantee it will send all of it, so you need to write a loop to make sure all data is sent/received. A sketch for the sending side:
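
#include <sys/types.h>
#include <sys/socket.h>

/* Loop until the whole buffer is sent; send() may send fewer bytes. */
int send_all(int fd, const char *buf, size_t len) {
    size_t sent = 0;
    while (sent < len) {
        ssize_t n = send(fd, buf + sent, len - sent, 0);
        if (n == -1) return -1;  /* error */
        sent += (size_t) n;
    }
    return 0;
}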

Example

The high-level sequence of calls for the server is:

  • getaddrinfo()
  • socket()
  • bind()
  • listen()
  • accept()
  • recv()/send()

For the client we have:

  • getaddrinfo()
  • socket()
  • connect()
  • send()/recv()

Beej provides examples for server and client codes in C: server.c and client.c.

Conclusion

Beej has an entire guide dedicated to inter-process communication. The guide covers the basic concepts such as creating new processes via fork() and handling signals; synchronization mechanisms such as locks and semaphores; and communication mechanisms such as pipes, message queues and sockets. I like the conversational style of his writings.

I last wrote code using sockets a long time ago. I didn’t have time to dig deep into this subject, so I didn’t feel like I learned a ton. It was a good refresher nevertheless.

References

[1] Beej’s Guide to Unix Interprocess Communication
[2] Beej’s Guide to Network Programming

Related Posts

Python Coroutines

In this post we’ll cover Python coroutines. First we’ll study David Beazley’s great tutorial [1] to learn about generators and coroutines, and how to build a multitask system from the ground-up.

Goddess Durga – source: Flickr

We’ll then take a look at a Python module called asyncio which makes use of coroutines to enable asynchronous I/O.

Generators

Python generators are functions that can yield execution back to the caller such that, when called again, they resume from the same place. It’s the same concept as JavaScript generators, which we talked about before.

The syntax for yielding the execution back is via the yield keyword. Here’s an example of a custom implementation of range():
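
# A sketch of a custom range():
def my_range(n):
    i = 0
    while i < n:
        yield i   # hands execution back to the caller
        i += 1

for x in my_range(3):
    print(x)      # 0, 1, 2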

Because the execution can be yielded at any time in the code, we can use infinite loops:
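
# The caller decides when to stop pulling values:
def naturals():
    n = 0
    while True:
        yield n
        n += 1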

Another way to use generators is to “pipe” (in the Linux way) one generator into another, like in the example below, where we read lines from a dictionary file and find those ending in “ion” (a sketch; the file path is an assumption):
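
def get_words(filename):
    with open(filename) as f:
        for line in f:
            yield line.strip()

def filter_step(words):
    for word in words:
        if word.endswith("ion"):
            yield word

for word in filter_step(get_words("/usr/share/dict/words")):
    print(word)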

This also highlights one advantage of the generator abstraction: the lower memory footprint, since it can process one line at a time. Sure, we could achieve the same with regular functions, but they would need to be combined into a single block:
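
# The non-generator version fuses both steps into one loop:
def print_ion_words(filename):
    with open(filename) as f:
        for line in f:
            word = line.strip()
            if word.endswith("ion"):
                print(word)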

Which is efficient but doesn’t allow modularity.

Combined with the fact that generators can be infinite loops, one can model functionality like tail and grep as generators (e.g. tail -f | grep foo) which is the exact example Beazley provides in his presentation [1].

Coroutines

PEP 342 explains the motivation behind coroutines:

Coroutines are a natural way of expressing many algorithms, such as simulations, games, asynchronous I/O, and other forms of event-driven programming or co-operative multitasking. Python’s generator functions are almost coroutines — but not quite — in that they allow pausing execution to produce a value, but do not provide for values or exceptions to be passed in when execution resumes. They also do not allow execution to be paused within the try portion of try/finally blocks, and therefore make it difficult for an aborted coroutine to clean up after itself.

There are four main changes:

  • yield is an expression instead of statement, which means you can assign the return type of yield to a variable.
  • send() method, which can be used to send a value to the yield expression. send() is a generalized version of next(); calling next() is equivalent to .send(None).
  • throw() method which raises an exception at the place where the last yield was called.
  • close() method which raises a GeneratorExit at the place where the last yield was called. Effectively used to force terminating the generator.

Here’s a simple coroutine that just prints what it receives:
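
# A sketch:
def printer():
    while True:
        value = (yield)   # yield as an expression: send() delivers the value
        print(value)

p = printer()
p.send(None)              # "prime" the coroutine to the first yield
p.send("hello")           # prints "hello"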

Notice that we have to call an empty send() to “initialize” the coroutine. According to PEP 342:

Because generator-iterators begin execution at the top of the generator’s function body, there is no yield expression to receive a value when the generator has just been created. Therefore, calling send() with a non-None argument is prohibited when the generator iterator has just started (…). Thus, before you can communicate with a coroutine you must first call next() or send(None) to advance its execution to the first yield expression.

In the example below we highlight how this flow can be confusing. The first send() receives the value from yield, but it is only able to send a value through a subsequent send().
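
# A sketch of the handshake:
def echo():
    received = None
    while True:
        received = yield f"got: {received}"

gen = echo()
print(gen.send(None))  # runs up to the first yield; prints "got: None"
print(gen.send("a"))   # "a" is assigned to received; prints "got: a"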

Since most of the time the first send() is just to initialize the coroutine, Beazley provides a decorator to abstract that (paraphrased below):
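
def coroutine(func):
    def start(*args, **kwargs):
        cr = func(*args, **kwargs)
        next(cr)          # advance to the first yield automatically
        return cr
    return start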

Piping

We can chain multiple coroutines the same way we did with generators, but now the order is somewhat reversed. To see why, consider our generator pipe example rewritten with coroutines: here we pass print_step() to filter_step() and filter_step() to get_words(). Contrast this with the generator version, where we start with get_words(), pass its results to filter_step() and then to print_step(). A sketch of the coroutine version, using the decorator above:
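
@coroutine
def print_step():
    while True:
        word = (yield)
        print(word)

@coroutine
def filter_step(next_step):
    while True:
        word = (yield)
        if word.endswith("ion"):
            next_step.send(word)

def get_words(filename, next_step):
    with open(filename) as f:
        for line in f:
            next_step.send(line.strip())

get_words("/usr/share/dict/words", filter_step(print_step()))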

Event Handling

Beazley provides an example using an XML parser to manipulate XML and convert to JSON. The API of xml.sax relies on the visitor pattern, where at important steps of the AST traversal (e.g. startElement, character, endElement) it calls specific methods of a content handler.

The example uses an adapter layer (cosax.py) to allow using coroutines instead of the content handler. It also combines multiple smaller coroutines to produce more complex ones, demonstrating the composability of coroutines.

Coroutines as tasks

After a preliminary overview of Operating Systems, Beazley builds a prototype of a multitasking system by modeling coroutines as tasks (no threads or processes).

  • Task – wrapper around coroutine
  • Scheduler – can create new tasks and schedule tasks; it keeps a loop running as long as there are tasks. Existing tasks can cause new tasks to be scheduled, similar to the JavaScript event loop.

Yielding inside a task is like an OS interruption (trap).

System Calls

One of the interesting parts of the multitasking system’s implementation is system calls. The task/coroutine issues a system call by providing it as a value to the yield (e.g. yield GetTid()). This is received by the scheduler, which can provide the necessary context to the specific system call implementation. Paraphrasing the pattern from [1]:
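
class SystemCall:
    def handle(self):
        pass

class GetTid(SystemCall):
    def handle(self):
        # task and sched are attached by the scheduler before handle() runs
        self.task.sendval = self.task.tid
        self.sched.schedule(self.task)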

The tutorial then covers one of the most important parts of the multitasking system which is the ability to do asynchronous I/O operations (e.g. reading and writing to a file).

The idea is to introduce a “system call” corresponding to performing I/O. When a task invokes it, the task doesn’t get rescheduled but is instead put in a staging area (self.read_waiting in Beazley’s code). The scheduler then has a continuously running task that polls the OS to check if any of the file descriptors corresponding to the tasks in the staging area are ready, in which case the task is rescheduled.

Beazley uses that system to implement a simple server that receives data via a socket.

Trampolining

One of the limitations of the scheduler developed up to this point is that the yield has to happen at the top level of a coroutine, that is, it cannot invoke other functions that yield, which limits refactoring and modularization.

To overcome this limitation, the author proposes a technique called trampolining. Here’s an example (paraphrasing [1]) where the execution of add() is driven by the top-level code, not by main():
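
def add(x, y):
    yield x + y

def main():
    r = yield add(2, 2)   # yields the add() generator, not its result
    print(r)
    yield

m = main()
sub = m.send(None)        # main() hands us the add() generator
result = sub.send(None)   # we run the sub-generator ourselves: 4
m.send(result)            # and send its result back into main(); prints 4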

We can do a similar thing in the multitasking system. Because coroutines can recursively call other coroutines, we need to keep a call stack. We can do it inside Task by keeping an explicit stack (paraphrasing [1]; SystemCall is the class sketched earlier):
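
import types

class Task:
    taskid = 0

    def __init__(self, target):
        Task.taskid += 1
        self.tid = Task.taskid
        self.target = target   # the coroutine being wrapped
        self.sendval = None
        self.stack = []        # explicit call stack of suspended coroutines

    def run(self):
        while True:
            try:
                result = self.target.send(self.sendval)
                if isinstance(result, SystemCall):
                    return result
                if isinstance(result, types.GeneratorType):
                    self.stack.append(self.target)  # push the caller
                    self.sendval = None
                    self.target = result            # descend into the callee
                else:
                    if not self.stack:
                        return
                    self.sendval = result           # pass value back to caller
                    self.target = self.stack.pop()
            except StopIteration:
                if not self.stack:
                    raise
                self.sendval = None
                self.target = self.stack.pop()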

In the example above, when a coroutine that is wrapped in Task makes another coroutine call, for example:

client, addr = yield Accept(sock)

Within Task, result will be a generator, so the current coroutine will be put on the stack and the execution will pass to that generator instead:

self.target = result

When that sub-coroutine yields the execution, the original caller is resumed:

self.target = self.stack.pop()

Note that this all happens transparently to the scheduler itself.

This concludes the study of the multitask system from the ground up.

Modern Asynchronous I/O

The tutorial we just studied is from 2009 and, while still relevant, Python has gone through many changes since then.

We’ll now cover some of the development since the generator-based coroutines were introduced and that culminated in a standard module for handling asynchronous I/O.

Delegating to subgenerators: yield from

PEP 380 introduced a new syntax for delegating the execution to another generator until that generator is finished. Simeon [5] provides an example where using yield from simplifies the code. In the code below (paraphrased), generator() calls generator2() and generator3() in a loop to “exhaust” their execution:
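
def generator2():
    yield 1
    yield 2

def generator3():
    yield 3
    yield 4

def generator():
    for v in generator2():
        yield v
    for v in generator3():
        yield v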

The same can be accomplished with yield from, which does that implicitly:
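
def generator():
    yield from generator2()
    yield from generator3()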

Async / Await

PEP 492 introduces the async and await syntax. The motivation contains these two reasons among others:

It is easy to confuse coroutines with regular generators, since they share the same syntax; this is especially true for new developers.

Whether or not a function is a coroutine is determined by a presence of yield or yield from statements in its body, which can lead to unobvious errors when such statements appear in or disappear from function body during refactoring

To address these issues, a function declared with async def is considered a native coroutine (as opposed to a generator-based coroutine):
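
# A minimal sketch:
async def fetch():
    return 42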

A native coroutine can await generator-based coroutines, in which case it has the same behavior as yield from. A sketch (types.coroutine marks a generator-based coroutine as awaitable):
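
import types

@types.coroutine
def generator_based():
    yield 1
    yield 2

async def native():
    await generator_based()   # behaves like: yield from generator_based()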

The asyncio module

PEP 3156 describes a system to support asynchronous I/O, which is now known as the asyncio module. It proposes a coroutine-based multitasking system similar to the one Beazley describes. It has support for common I/O operations like those involving files, sockets, subprocesses, etc.

We won’t dive into much detail in this post, but we can see parallels to the scheduler we studied previously. Here’s an example (a sketch consistent with the output below) where asyncio.run() schedules the coroutine main(), which in turn executes the sub-coroutines sleep() one after another:
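
import asyncio

async def sleep(i):
    print(f"will sleep: {i}")
    await asyncio.sleep(i)
    print(f"slept: {i}")

async def main():
    await sleep(1)
    await sleep(2)

asyncio.run(main())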

This is not taking advantage of the non-blocking capabilities of asyncio. Why? Recall that await is equivalent to yield from, which causes the current coroutine to wait for the call to finish before it continues. If we run this code we get:

will sleep: 1
slept: 1
will sleep: 2
slept: 2

What we want is to schedule both sub-coroutines at the same time, and asyncio allows that via the gather() function:
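
async def main():
    await asyncio.gather(sleep(1), sleep(2))

asyncio.run(main())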

If we run this code we get:

will sleep: 1
will sleep: 2
slept: 1
slept: 2

This means that the first sleep() executed but yielded execution back to the scheduler at the asyncio.sleep() call, at which point the second sleep() ran, printing “will sleep: 2”.

Conclusion

I’ve used async/await and event loops in other languages such as Hack and JavaScript, but only recently ran into them in Python. I kept seeing mentions of coroutines and that led me to study them in more detail. Overall I felt like I learned a lot.

David Beazley’s tutorial is really good. It’s thorough and provides lots of analogies with operating systems concepts.

I also liked the presentation medium: the slides are self-contained (all the information is present as text) and he shows the whole code at first then repeats the code multiple times in subsequent slides, highlighting the important pieces in each of them. This is difficult to achieve in flat text like a blog post.

References

[1] David Beazley – A Curious Course on Coroutines and Concurrency
[2] PEP 342 – Coroutines via Enhanced Generators
[3] PEP 380 — Syntax for Delegating to a Subgenerator
[4] PEP 492 — Coroutines with async and await syntax
[5] Simeon Visser – Python 3: Using “yield from” in Generators – Part 1
[6] PEP 3156 — Asynchronous IO Support Rebooted: the “asyncio” Module

2019 in Review

This is a meta-post to review what happened in 2019.

Posts Summary

This year I continued my efforts to learn about Bioinformatics. I wrote about the DNA Assembly problem, Protein Fold prediction and Protein Design.

I covered some Computer Science topics such as Consistent Hashing, the Aho-Corasick algorithm and Constructing Trees from a Distance Matrix (as part of my studies in Bioinformatics).

For Web Development I learned about JavaScript Async Functions and CDNs in more details. I explored Observable, a JavaScript notebook.

As part of my resolution for 2019, I learned a bit more about Rust, in particular its memory management model which is the part I struggled (and still do) the most with. I also started revisiting Python because of work, and started with Python Type Hints.

I assembled my own Desktop which inspired me to learn more about the Von Neumann Architecture. I also went back to using Linux (after many years exclusive on MacOS).

The Blog in 2019

The most popular blog post of the year was Python Type Hints (10k views, 9k visits in just 2 days!). It was somehow featured on the front page of Hacker News, reaching position #8 at some point! The discussion was mostly about the merits of typed vs. untyped languages, but it was a nice surprise nevertheless.

With some delays here and there, I kept the resolution to post once a month on average. The blog completed 7 years with 92 posts.

Resolutions for 2020

My plan is to focus on learning Python, since I now have an opportunity to use it at work. I’ll keep Rust on the side and learn more about it if time permits.

My studies on Bioinformatics made me feel that the computational models are too simplistic to be useful in practice. I’d love to get exposed to real-world examples, even if they’re ugly heuristics, but for now I’m done with the theory. As I dived deeper into the subjects, I started seeing more and more mentions of quantum physics and quantum computing.

I studied QM back in college, but I cannot claim to understand even the basics of it. As for Quantum Computing, I know as much as Justin Trudeau ;) My focus will be on learning Quantum Computing.

I’m still interested in reading more papers, especially on distributed systems, AI and mobile development (I mention these for the sake of tradition at this point, since I never end up following up on them).

Personal

At the end of the year I like to look back and reflect on and remember all the things I’ve done besides work and the technical blog.

Trips

In summary, in 2019 I visited Israel, Egypt and India. Within the US, I had a chance to visit many national parks.

Early in the year, after a work trip, I went to Egypt and visited some sites in Cairo and Luxor.

egypt.jpg

1. Giza Pyramids, 2. Temple in Luxor, 3. Lanterns in the Khan el-Khalili

In November we attended a wedding in Hyderabad. We decided to tour around New Delhi, Jaipur and Agra before heading south to Hyderabad and Tirupati. From my selection below, one can see how much I admire the Mughal architecture.

india.jpg

Top: 1. Humayun Tomb (New Delhi), 2. Hawa Mahal (Jaipur), 3. Jal Mahal. Bottom: 1. Karauli Temple, 2. Taj Mahal (Agra), 3. Qtub Tombs (Hyderabad)

I had a chance to see many new National Parks and their natural wonders, as part of a quick trip to Seattle, Washington and a road trip in Southern Utah.

nps.jpg

Top: 1. Lake Diablo (North Cascades), 2. Angel’s Landing Overlook (Zion), 3. Canyons (Bryce). Bottom: 4. Rock formations along the road 5. Famous arch (Arches) 6. Islands in the sky (Canyonlands).

Books

Books I enjoyed this year:

  • Range: Why Generalists Triumph in a Specialized World – David Epstein makes a case that being a generalist can lead to a more successful path. He provided a bunch of examples including Roger Federer and Galileo. Engaging.
  • A History of Western Philosophy – Bertrand Russell covers many philosophers and schools of thought. Good perspective on Western Europe’s history. His accounts often expose his own beliefs and opinions.
  • Ten Drugs: How Plants, Powders, and Pills Have Shaped the History of Medicine – Thomas Hager describes the history of 10 drugs, from opioids to vaccines and high cholesterol drugs. Engaging.
  • The Book of Why – Judea Pearl describes some of his work such as Causal Models and Do Calculus in accessible language. The ideas are very clever and interesting, but the language of the book is a bit hyperbolic (e.g. terms like miracle, revolution and “scientists couldn’t have done this before“) which gets in the way.
  • Algorithms to Live By – Brian Christian and Tom Griffiths draw ideas from Computer Science (mostly combinatorial optimization) to solve problems from real life. Great to see many problems and algorithms from grad school in here.
  • Outliers – Malcolm Gladwell analyzes several success stories to make a compelling case that success is not just about genius and hard work. The environment where you were born and grew up matters a lot more than we think.
  • Educated: A Memoir – Intense and powerful story by Tara Westover. She grew up in an environment very hostile to her education but succeeded in achieving high goals against the odds. Really well written.
  • Shoe Dog: A Memoir by the Creator of NIKE – history of Nike from the perspective of its founder, Phil Knight. He is a good writer and Nike had a very interesting initial business model.
  • Mythos: The Greek Myths Retold – Stephen Fry provides a nice overview of the Greek Mythology focusing on the gods and semi-gods (not heroes). Casual writing style and interesting connections to the current world (via etymologies).

I’m looking forward to many more adventures, growth and learning in 2020!


Observable


Observable is a web-first interactive notebook for data analysis, visualization, and exploration [1].

It was created by Mike Bostock, the creator of d3.js, which we discussed previously way back in 2014 [2]. At some point, Mike stepped down from d3.js to focus on a new library called d3.express, which he presented in 2017 during the OpenVis conference [3] and which was still in the making at the time. d3.express eventually got renamed to Observable.

I’ve been experimenting with Observable and in this post I’ll cover some of my learnings while trying to build two visualizations.

One of the benefits of notebooks like Observable is to co-locate code with its output. This makes it much easier to explain code because in addition to markdown comments, we can modify the code and see the results in real-time. This means that a lot of the documentation of APIs available in Observable are notebooks themselves. For these reasons, this post consists of a series of links to notebooks with high-level comments.

Before we start, some assumptions I’m making are familiarity with imperative programming and the basics of JavaScript.

What is Observable?

Why Observable – this notebook explains the motivation behind Observable and how it works at a high level:

An Observable notebook consists of reactive cells, each defining a piece of state. Rather than track dependencies between mutable values in your head, the runtime automatically extracts dependencies via static analysis and efficiently re-evaluates cells whenever things change, like a spreadsheet

Five-Minute Introduction – provides a hands-on approach of the many topics, some of which we’ll cover in more detail in this post:

  • Real-time evaluation of JavaScript snippets
  • Cell references
  • Markdown
  • HTML/DOM display
  • Promises
  • Generators
  • Views
  • Importing from other notebooks

Let’s start by delving into the details of JavaScript snippets and cell references.

Observable vs. JavaScript

Observable’s not JavaScript – in this notebook Bostock explains the differences between JavaScript and Observable. Some notes I found interesting from that notebook:

  • A cell is composed by a cell name and an expression:
    • [cell name] = [expression]

It can be simple statements like 1 + 2, or multi-line statements like

Screenshot from 2020-01-04 09-56-25

  • The value of cell_name can be used in other cells, but by default is read-only.
  • Each cell can only have one cell_name assignment. In other words, it can only “export” one variable. It’s possible to cheat by exporting an object. Note that because curly brackets are used to denote code blocks, we need to wrap an object literal with parentheses:

Screenshot from 2020-01-04 10-14-58

  • Cells can refer to other cells in any part of the code – Observable builds a dependency DAG to figure out the right order. This also means dependencies cannot be circular. How Observable Runs explains this in more detail.
  • Constructs like async functions (await) and generators (yield) have special behavior in a cell. We’ll expand in the Generators section below.
  • Cells can be mutated if declared with a qualifier (mutable). We’ll expand in the Mutability section below.

Mutability

Introduction to Mutable State – cells are read-only by default but Observable allows changing the value of a cell by declaring it mutable. When changing the value of the cell elsewhere, the mutable keyword also must be used.

It’s important to note that the cell itself is immutable, but if the cell is a reference to a value, say an Object, then the value can be mutated. This can lead to unexpected behavior, so I created a notebook, Mutating references, to highlight this issue.

Markdown

Markdown summary – is a great cheat sheet of Markdown syntax in Observable. I find the Markdown syntax in Observable verbose, which is not ideal given how prevalent text is in notebooks:

md`Hello World`

More cumbersome still is typing inline code. Because it uses backticks, it has to be escaped:

md`Hello \`code\``

To be fair, most code will be typed as cells, so this shouldn’t be too common. Observable supports LaTeX via KaTeX, which is awesome. KaTeX on Observable contains a bunch of examples.

Generators

Generators are a JavaScript construct that allows a function to yield the execution back to the caller and resume from where it stopped when it’s called again. We talked about this in the Async Functions in JavaScript post.

Introduction to Generators explains how generators can be used in Observable. The combination of generators and delays (via promises) is a great mechanism to implement a ticker, which is the base of animation. Here’s a very simple way to define a ticker in Observable (remember that Promises are awaited without extra syntax):
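
// A sketch of a ticker cell that increments every second
// (Promises.delay is from Observable's standard library):
tick = {
  let i = 0;
  while (true) {
    yield i++;
    await Promises.delay(1000);
  }
}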

Views

Introduction to Views explains what views are and how to construct one from scratch. I like this recommendation:

If there is some value that you would like the user to control in your notebook, then usually the right way to represent that value is as a view.

Views are thus convenient ways to expose parameters via buttons and knobs such as text inputs, dropdowns and checkboxes.

I ran into a problem when using a checkbox with labels in it, like the cell below:

Screenshot from 2020-01-03 23-14-08

It does not yield true/false as I wanted. This notebook, Checkbox, discusses the problem and offers a solution. This is an example where great but imperfect abstractions make it hard to understand when something doesn’t work. It’s worth learning about how viewof is implemented behind the scenes.

In one of my experiments I needed to synchronize a view and a ticker. Bostock provides a neat abstraction in his Synchronized Views notebook.

Importing from other notebooks

Introduction to Imports shows how easy it is to import cells from other notebooks.

It also shows that it’s possible to override the value of some of the cells in that notebook:

Screenshot from 2020-01-03 23-28-09.png

This is neat, because the chart cell depends on the data, and by allowing data to be overridden, it effectively parameterizes chart. In other words, we import chart as a function as opposed to a value (the result of chart(data)).

Versioning. When importing a notebook we always import the most recent version, which means it could potentially break if the notebook’s contract changes. The introduction above mentions the possibility of specifying the version when importing but doesn’t provide an example of how to do so.

My strategy so far has been to clone a notebook if I’m really concerned about it changing, but that means one has to manually update / re-fork if they want to sync with the upstream changes.

React Hooks

I’m used to writing UI components with React, and luckily Jed Fox created a notebook which I forked into this React Notebook, which allowed me to use React Hooks to write a view.

Conclusion

I really like Observable and am looking forward to implementing more data visualizations in it. I experimented with 2 animations so far: a simplistic Solar System model and a spinning Globe.

One natural question that crossed my mind is whether I should have written this whole post as an Observable notebook. In the end I decided to stick with an old-school static post for two reasons:

  • Consistency: Observable only supports JavaScript, so if I’m writing a post about Python I’d need to fall back to a static post, and I wouldn’t want to have a mix of both.
  • Durability: Observable has a lot of potential but it’s still mostly experimental and I’m not sure I can rely on it sticking around for a long time.

References

Notebooks mentioned in the post

Other

[1] Observable – Why Observable?
[2] Introduction to d3.js
[3] OpenViz Conf – 2017
[4] Async Functions in JavaScript

Python Type Hints

Jukka Lehtosalo is a Finnish Software Engineer at Dropbox. He developed an optional type system for Python, mypy, during his PhD thesis in Cambridge. After meeting in a Python conference, Guido van Rossum (creator of Python) invited Lehtosalo to join him at Dropbox. They started adopting mypy in real use cases during a Hackathon, which led to mypy being one of the most popular Python type checkers [2].

In this post we’ll cover mypy in general terms as well many examples demonstrating the syntax and capabilities of this type checker.

Introduction

mypy is a static type checker. This means it does not run during the code execution, so it’s mostly useful during development, much like tests. It relies on manual annotations in the code called type hints, which identify the types of arguments, return types, internal variables and member variables.

One simple example of such annotations is for a function to compute the length of a string:
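
# A minimal version of such a function:
def length(s: str) -> int:
    return len(s)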

We are indicating that the input argument s is a string and that the function returns an integer corresponding to its length. The main job of a type checker is to find inconsistencies in the annotations in our code (or in the libraries we use). For example, here’s an incorrect type annotation:
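
def increment(n: int) -> str:
    return n + 1   # error: returns an int, but the annotation says str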

When both operands are integers, the operator + returns another integer, so n + 1 is an integer, which is a clear contradiction given we provided the return type as string.

Type annotations are free unit tests

This contrived example doesn’t make the usefulness of a type checker obvious, but imagine a case where we implicitly make assumptions about the argument type, for example a small program to compute how much an exchange house would give a user for their cash. An untyped sketch (the names are mine):
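
def exchange(amount, rate):
    return amount * rate

print(exchange(100, 3.7))    # 370.0
print(exchange("100", 3.7))  # TypeError at runtime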

If we don’t perform input validation correctly, the code above might work for the tested scenarios but would fail in production the moment a user provided a string. Of course good QA will cover scenarios like these, but this is the sort of guarantee one gets for free by making the types explicit:
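
def exchange(amount: float, rate: float) -> float:
    return amount * rate

exchange("100", 3.7)  # rejected by mypy: "str" is not compatible with "float"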

The previous code would now fail the type checker validation.

Type hints are optional and incremental

Most of the design of type annotations are well documented in PEP 484, the document claims:

Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.

This also seems to imply that Python type hints will always be partial / gradual, since requiring full typing would make the transition from non-typed to fully typed codebases prohibitive. Also, there are concrete benefits even with partial typing.

A partial type system makes it optional to add type annotations to variables, instead of it being fully mandatory (like Java or C++). The type checker then performs validation with whatever information it has in hands.

Incomplete typing can be dangerous if developers build trust in the type checker while it’s only performing partial checks due to incomplete information. Let’s consider an example (a reconstruction of the pitfall):
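
def expects_string(s: str):   # note: the return type is missing
    return s

def expects_int(n: int) -> int:
    return n + 1

def main():
    untyped = expects_string("a")
    expects_int(untyped)      # passes the type checker; fails at runtime

main()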

At first glance it seems a well-typed program, and if we run the type checker it will pass. But if we run main(), we’ll get a runtime error. The problem is that expects_string() is missing the return type, so in main() the type checker cannot infer the type of untyped, and hence it doesn’t perform any validation on the call to expects_int().

Local inference

The previous example also highlights an important aspect of the mypy type checker: it only performs inference at the function boundary. In theory it could infer that expects_string() returns str (because we return its argument) and then infer that untyped is a string.

This is fine in this example but it could be very expensive to make these inferences, especially if we need to consider branching and recursive calls to other functions. In theory the type checker could go only a few levels deep in the function call but this would make the behavior of the type checker very hard to reason about.

For that reason, the type checker will only consider the declared types of the functions being called. For example, it knows expects_string() expects a string and declares no return type, so that is what it will assign to untyped no matter what happens inside expects_string().

Now that we know the basics of the type checker, let’s cover some of the syntax and more advanced typing that mypy supports.

Examples

Setup

Before we start, it’s useful to be able to test the snippets. To do so, copy the code into a file, say example.py and run this command in the terminal:

mypy example.py

which will print any type errors that exist in example.py. mypy can be installed via Python’s packaging system, pip. Make sure to use Python 3:

pip3 install mypy

Primitive types

bool, int, str, float are the types one will most likely use in functions. As seen above, we can use these to type arguments and return types:
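
# A sketch:
def repeat(s: str, n: int, upper: bool = False) -> str:
    out = s * n
    return out.upper() if upper else out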

Composed types

We’ll look into generics later, but it should be straightforward to understand the typing of composed types like lists, dictionaries and tuples:
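
from typing import Dict, List, Tuple

def summarize(names: List[str], ages: Dict[str, int]) -> Tuple[int, int]:
    return len(names), sum(ages.values())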

It’s worth noting that these types need to be explicitly imported from the typing module.

None vs. NoReturn

None is used to indicate that no return value is to be expected from a function.

NoReturn is used to indicate the function does not return via the normal flow:
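
from typing import NoReturn

def log(message: str) -> None:       # returns normally, just no value
    print(message)

def fail(message: str) -> NoReturn:  # never returns via the normal flow
    raise Exception(message)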

Local Variables

The example below demonstrates the syntax for typing local variables. In general typing local variables is not necessary since their type can often be inferred from the assignment / initialization.
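
def count_up_to(n: int) -> int:
    total: int = 0   # usually redundant: mypy infers int from the assignment
    for i in range(n):
        total += i
    return total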

Optional and Union

Optional[T] is a generic type that indicates the type is either T or None. For example:
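
from typing import List, Optional

def first(xs: List[int]) -> Optional[int]:
    return xs[0] if xs else None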

Optional is a special case of a more general concept of Union of types:
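
from typing import Union

def to_int(x: Union[int, str]) -> int:
    return x if isinstance(x, int) else int(x)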

My personal take is that Union should be avoided (except for special cases like Optional) because it makes the code harder to deal with (having to handle multiple types) and it’s often better handled via inheritance (a base type representing the union).

Any vs. object

Any is equivalent to not providing the type annotation. On the other hand, object is the base of all types, so it would be more like a Union of all the types. A variable of type object can be assigned a value of any type, but the value of an object variable can only be assigned to other object variables. It’s possible to refine the type of a variable to a more specific one. See “Refining types”.

Classes

There are three main things we need to consider when annotating variables in a class context:

  • We can add types to the member variable (_n in this case).
  • We don’t type self: it’s assumed to be the type of the class where it’s defined.
  • The return type of __init__ is None
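
A minimal sketch illustrating all three:

class Counter:
    _n: int                      # typed member variable

    def __init__(self) -> None:  # __init__ returns None
        self._n = 0

    def inc(self) -> None:       # self is not annotated
        self._n += 1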

Callables

Callables can be used to type higher order functions. Here’s an example where we pass a function (lambda) as argument to a map function:
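
from typing import Callable, List

def map_ints(f: Callable[[int], int], xs: List[int]) -> List[int]:
    return [f(x) for x in xs]

map_ints(lambda x: x * 2, [1, 2, 3])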

The type of the function is Callable. The first element, [int] in the example above, is a list of the types of the arguments. The second argument is the return type.

As another example, if we want to define a reduce function on strings, our callback now has type Callable[[str, str], str] because it takes 2 arguments. A sketch:
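
def reduce_str(f: Callable[[str, str], str], xs: List[str], init: str) -> str:
    acc = init
    for x in xs:
        acc = f(acc, x)
    return acc

reduce_str(lambda a, b: a + b, ["a", "b", "c"], "")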

Generics: Type Variables

Type variables allow us to add constraints to the types of one or more argument and/or return types so that they share the same type. For example, the function below is typed such that if we pass List[int] as argument, it will return Optional[int]:
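
from typing import List, Optional, TypeVar

T = TypeVar('T')

def first(xs: List[T]) -> Optional[T]:
    return xs[0] if xs else None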

Note that the string passed to the TypeVar() function must match the name of the variable it is assigned to. This is an inelegant syntax, but I imagine it’s the result of working around the syntax limitations of Python (and the difficulties in changing the core Python syntax for annotations).

We can use multiple TypeVars in a function:
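
from typing import Tuple

K = TypeVar('K')
V = TypeVar('V')

def keys_of(pairs: List[Tuple[K, V]]) -> List[K]:
    return [k for k, _ in pairs]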

Constraints. According to [3] it’s also possible to limit the type var to be of a specific types:

TypeVar supports constraining parametric types to a fixed set of possible types (note: those types cannot be parametrized by type variables).

It also notes:

There should be at least two constraints, if any; specifying a single constraint is disallowed.

Which makes sense, if we were to restrict a TypeVar to a single type we might as well use that type directly.

In the example below we allow Tmix to be bound to either int or str. Note this is different from Union[int, str]: a parameter typed as the Union must accept both int and str, while the constrained TypeVar binds to exactly one of them, depending on how it’s called. The third call to fmix() below would be valid for a Union.
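A sketch consistent with that description (the body of fmix is arbitrary):

from typing import TypeVar

Tmix = TypeVar('Tmix', int, str)

def fmix(a: Tmix, b: Tmix) -> Tmix:
    return a

fmix(1, 2)      # ok: Tmix binds to int
fmix('a', 'b')  # ok: Tmix binds to str
fmix(1, 'b')    # error for the TypeVar; would be fine for Union[int, str]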

Parametrized Classes

We’ve just seen how to parametrize functions via TypeVar. We can also extend such functionality to classes via the Generic base class:
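(A minimal sketch; Stack is my own example.)

from typing import Generic, List, TypeVar

T = TypeVar('T')

class Stack(Generic[T]):
    def __init__(self) -> None:
        self._items: List[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()
s.push(1)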

Ignoring type hints

During the transition from untyped to typed code, it might be necessary to temporarily turn off type checking in specific parts of the code. For example, imagine a scenario where types were added to a widely used function but many existing callers are passing incorrect types.

The comment # type: ignore at the end of a line makes the type checker skip any error reported on that line. Here’s an obvious type violation that is ignored by the type checker:
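(A minimal sketch; inc is my own placeholder.)

def inc(x: int) -> int:
    return x + 1

inc('hello')  # type: ignore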

It’s also possible to turn off type-checking completely for the file:

A # type: ignore comment on a line by itself at the top of a file, before any docstrings, imports, or other executable code, silences all errors in the file
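A minimal sketch:

# type: ignore
import os  # nothing in this file is type checked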

Refining types: isinstance() and cast()

In some cases we might receive values from untyped code or with ambiguous types like Unions or object. Two ways of informing the type checker about the specific type are an explicit check via isinstance() or a cast(). The isinstance() check usually appears in an if clause:
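(A minimal sketch, using a hypothetical inc() function.)

from typing import Union

def inc(x: int) -> int:
    return x + 1

def process(x: Union[int, str]) -> None:
    if isinstance(x, int):
        inc(x)  # x has been refined to int here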

This allows the type checker to infer the type of x within the if block, so it won’t complain about the call to inc(x).

Another, more drastic, approach is to use cast([type], x). Note that cast() performs no runtime check: it simply returns x and tells the type checker to treat the value as having the given type from then on. Here’s an example:
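(A minimal sketch; load is a hypothetical function returning an untyped value.)

from typing import cast

def load() -> object:
    return {'timeout': 10}

config = cast(dict, load())  # no runtime check; the checker now treats config as dict
print(config['timeout'])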

It’s a bummer that the argument orders of cast([type], var) and isinstance(var, [type]) are inconsistent.

Arbitrary argument lists

It’s possible to type arbitrary argument lists, both positional and named. In the example below, args has type Tuple[str, ...] and kwds has type Dict[str, int]. Note that the ... in Tuple[str, ...] indicates an arbitrary-length tuple of strings. It’s unclear to me why it’s a tuple and not a list, but I’d guess it has to do with how fixed argument lists are represented (e.g. the arguments of foo(x: int, y: str) would be Tuple[int, str]).
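(A minimal sketch; foo is my own placeholder.)

def foo(*args: str, **kwds: int) -> None:
    print(args)  # has type Tuple[str, ...]
    print(kwds)  # has type Dict[str, int]

foo('a', 'b', x=1, y=2)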

Conclusion

I really like Python and it’s my go-to language for small examples and automation scripts. However, having had to work with a large code base before, I was really displeased by its lack of types.

I’m also a big proponent of typed code, especially for large, multi-person code bases, but I understand the appeal of rapid development for prototypes and Python now offers both.

As we saw in some examples, the syntax is cumbersome at times, but overall I find the mypy type checker pretty expressive.

References

[1] mypy: About
[2] Dropbox Blog: Thank you, Guido
[3] PEP 484 — Type Hints

Aho-Corasick

Alfred Aho is a Canadian computer scientist. He is famous for the book Principles of Compiler Design, co-authored with Jeffrey Ullman. Aho also developed a number of UNIX utilities that have widespread usage, including the AWK programming language (an acronym for Aho, Weinberger and Kernighan) and variants of grep.

Unfortunately, I couldn’t find much information about Margaret J. Corasick, except that she worked at Bell Labs and, together with Aho, developed the Aho-Corasick algorithm, which we’ll talk about in this post.

Multiple-string search

We have studied before the problem of finding a string in a text, for which the Knuth-Morris-Pratt algorithm is well-known.

Aho and Corasick studied the problem of finding a set of strings in a text. More precisely, given a body of text H and a set of search terms S, we want to find all occurrences of each term of S in H.

They came up with the Aho-Corasick algorithm, which is able to solve this problem in time linear in the size of H plus the size of the output (i.e. the total number of characters that need to be printed when matches are reported).

It consists of pre-computing a look-up structure from S and then scanning the text using it. Whenever it detects that a match was found, it prints the corresponding matches. Let’s first see how to construct the structure.

The look-up structure

The look-up structure is constructed in two phases, one for each of two sets of edges: the goto edges and the fail edges.

The goto edges are essentially the edges of a trie containing the entries in S. The trie can be seen as a state machine where each node represents a prefix of one of the entries in S and the transition function tells us which prefix to go to next given an input character. These are thus called goto edges. We denote by g(s, a) the node we navigate to if we’re at node s and receive input a.
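As a concrete sketch (my own, not the implementation linked further below), the goto edges can be stored as a trie of per-node dictionaries:

def build_goto(keywords):
    # each node is a dict mapping a character to a child node index
    goto = [{}]        # node 0 is the root (the empty prefix)
    entry_at = [None]  # the keyword ending at each node, if any
    for keyword in keywords:
        node = 0
        for ch in keyword:
            if ch not in goto[node]:
                goto.append({})
                entry_at.append(None)
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        entry_at[node] = keyword
    return goto, entry_at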

Given a text H = [a_1, a_2, …, a_n], we can start at the root and follow the edges labeled a_1, a_2 and so on. If we land on a node representing an entry in S, we print it.

Eventually though, we may reach a node s such that the next character a_j doesn’t have a corresponding edge g(s, a_j). We know that s is a suffix of [a_1, a_2, …, a_{j-1}], say s = [a_k, …, a_{j-1}]. We also know there’s no other unreported entry of S starting at index k, but there might be one starting at index k+1. We could restart the search at index k+1 from the root of the trie, but can we do better?

Because S is fixed, no matter what text H we have, by the time we encounter a dead end at a node r, we know that the last characters seen are [a_k, …, a_{j-1}] = r. Let s = r + a_j = [a_k, …, a_j]. Now suppose [a_{k+1}, …, a_j] happens to be in the trie. Then we can continue our search from it, without having to backtrack. If k+1 doesn’t work, we can try k+2, k+3, and so forth. More generally, say we eventually find the smallest index k* > k such that s* = [a_{k*}, …, a_j] is in the trie; in other words, s* is the longest proper suffix of s in the trie. When we fail to find s = r + a_j in the trie, we can resume the search at s*. This suggests adding a fail edge from s to s* in our trie, to quickly resume the search without backtracking.

Technically, s doesn’t exist in the trie, so we can’t add an edge there. However, we can still add the fail edge to r, and denote it as f(r, a_j) = s* for all labels a_j without a goto edge.

Given the goto and fail edges, we have a proper state machine which we can use to find matches in H. Let’s combine them into a single transition function e(s, a) (that is, e(s, a) = g(s, a) if it exists, otherwise f(s, a)).

We are still missing printing out the matches. Because of the shortcuts we take via the fail edges, we never explicitly visit the suffixes of a state s. For this reason, when visiting a state s, we need to print all of its suffixes that belong to S.
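The scan itself (the appendix refers to it as Algorithm 1) could then look like the sketch below, assuming e and output are callables derived from the precomputed structure:

def search(text, e, output):
    s = 0  # start at the root
    for j, a in enumerate(text):
        s = e(s, a)            # goto edge if it exists, otherwise fail edge
        for keyword in output(s):
            print(j, keyword)  # a match ending at position j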

We’ll now see how to construct the fail edges and the output.

Building the fail edges

We’ll show how to construct the fail edges efficiently. We define an auxiliary function ps(s), which gives the longest proper suffix of s that is in the trie. Note that f(r, a) = ps(r + a) when g(r, a) doesn’t exist.

The idea is that if we know ps(r) for all nodes r of depth up to l in the trie, we can compute ps(r + a), that is, ps for the nodes in the next level.

Consider each node r of depth l and each symbol a. Let s = r + a. We know ps(r) is in the trie, but is ps(r) + a in the trie? Not necessarily, but maybe a suffix of ps(r) is? We can try ps(ps(r)) and further suffixes until we find one that is in the trie, or we reach the empty string.

How can we make sure to process a node s only after all the edges from lower levels have been processed? We can process the nodes in breadth-first order, which can be achieved using a queue. The pseudo-code could be:
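(A sketch of that pseudo-code in Python, my own rendering, reusing the dictionary-based trie from the earlier sketch; node 0 is the root.)

from collections import deque

def build_ps(goto):
    root = 0
    ps = [None] * len(goto)
    queue = deque()
    for s in goto[root].values():
        ps[s] = root  # the only proper suffix of a single letter is the empty string
        queue.append(s)
    while queue:  # breadth-first traversal
        r = queue.popleft()
        for a, s in goto[r].items():
            # jump through suffixes of r via ps() until one has a goto edge on a
            t = ps[r]
            while t != root and a not in goto[t]:
                t = ps[t]
            ps[s] = goto[t].get(a, root)
            queue.append(s)
    return ps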

Note how, instead of searching for the longest suffix by removing one character at a time and checking if it exists in the trie, we’re “jumping” to suffixes via repeated application of ps(), which is what makes the algorithm efficient, as we shall see. You might be asking whether we’re missing any valid suffix by doing that, but no worries: we analyze the correctness of this in Lemma 1 (Appendix).

Building the output

If we compute output(s) in breadth-first order, we can assume we know output(r) for all nodes r at lower levels than s. Assuming we have already computed ps(s), we have output(s) = output(ps(s)) + {s} if s is in S:
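(A sketch of my own using the same breadth-first order; this version copies lists, while the linked-list optimization is discussed in the implementation section below.)

from collections import deque

def build_output(goto, ps, entry_at):
    root = 0
    output = [[] for _ in goto]
    queue = deque(goto[root].values())
    while queue:
        s = queue.popleft()
        output[s] = list(output[ps[s]])    # output(ps(s))...
        if entry_at[s] is not None:
            output[s].append(entry_at[s])  # ...plus {s} if s is in S
        queue.extend(goto[s].values())
    return output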

How do we know that by jumping through suffixes via ps(s) we are not missing any matches? We demonstrate that in Lemma 2 (Appendix).

Implementation in Python

I’ve implemented the pseudo-code proposed here in Python and made it available on Github. The main implementation aspect is how we compute the output list. To avoid storing the matches multiple times, we model it as a linked list. More precisely, when computing output(s) from ps(s), if ps(s) is itself a keyword, we point output(s) to ps(s); otherwise we point it to output(ps(s)).

When printing output(s) we just need to traverse the list:
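(A sketch of the traversal under that representation: out[s] points to the nearest proper suffix of s that is a keyword, and entry_at[s] holds the keyword ending at s, if any.)

def print_output(s, out, entry_at, j):
    if entry_at[s] is not None:
        print(j, entry_at[s])  # s itself is a keyword
    t = out[s]
    while t is not None:
        print(j, entry_at[t])  # every node on the chain is a keyword
        t = out[t]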

Conclusion

It was a lot of work to study Aho and Corasick’s paper [1], understand the proof and re-write it in my own words, but this gave me a much better intuition for the algorithm.

As with KMP, I had known about this algorithm before but never took the time to fully understand it.

My initial implementation was in Rust, but I got stuck when modeling the nodes because of self-references. This led me to this Stack Overflow answer, which I haven’t had time to study, but it seems like a good topic for a post.

Appendix

Correctness

Lemma 1. Let s be a prefix in the look-up structure. Then ps(s) is the longest proper suffix of s that exists in the look-up structure T.

Let’s prove this by induction on the length of s. The base case is for nodes of level 1, where ps(s) = root (the empty string), which is correct, since the only proper suffix of a single letter is the empty string.

Assume ps(s’) is the longest proper suffix of s’ that exists in the look-up structure for all |s’| < |s|; let’s prove this holds for ps(s) as well. Write s = r + a, where r is the parent of s and a is the label of the edge between them. Let’s first show that ps(s) is a proper suffix of s in T. We know that ps(r) is a suffix of r, and so is ps(ps(r)), since a suffix of a suffix of r is also a suffix of r, and so on. Define ps_k(r) = ps(ps_{k-1}(r)) and let n be the first index such that g(ps_n(r), a) exists. By construction we assigned ps(s) = g(ps_n(r), a). Since ps_n(r) is a suffix of r, and r + a = s, we know that ps_n(r) + a is a suffix of s. Next, let’s show there’s no other proper suffix of s in the look-up structure longer than ps(s). Suppose there is an s* in T such that |s*| > |ps(s)|. Let r* be such that s* = r* + a. Note that r* is a suffix of r and it exists in T (since s* is in T and T is a prefix tree). By our induction hypothesis, r* cannot be longer than ps(r), and r* has to be longer than ps_n(r), since |ps(s)| = |ps_n(r)| + 1, |s*| > |ps(s)| and |s*| = |r*| + 1.

The first observation is that r* != ps_k(r) for any k; otherwise the algorithm would have set ps(s) = g(r*, a) = s*. So there must be some k for which |ps_k(r)| > |r*| > |ps_{k+1}(r)|. This means that, for r’ = ps_k(r), there’s a suffix r* such that |r*| > |ps(r’)|. But this contradicts the induction hypothesis, which says that ps(r’) is the longest suffix of r’ in T.

Lemma 2. output(s) contains y if and only if y is in S and a suffix of s.

Let’s prove one direction: if output(s) contains y, then y is in S and is a suffix of s. This is easy to see by construction: output(s) contains s if s is in S, plus the contents of output(ps(s)), where ps(s) is a suffix of s; by induction the latter are all in S and are suffixes of ps(s), and hence of s.

Now suppose y is a keyword and a suffix of s. Let’s show that there’s a k for which ps_k(s) = y. If this were not the case, then, by an argument similar to the one in Lemma 1, there would be some ps_{k-1}(s) such that y is a suffix of it and |y| > |ps_k(s)|, which violates Lemma 1, since ps(ps_{k-1}(s)) = ps_k(s) would not be the longest possible suffix. Since ps_k(s) = y, by construction we know that output(y) will eventually be merged into output(s).

Lemma 3. At the end of each iteration of the loop in Algorithm 1, the state s is in the look-up structure T and it is the longest suffix of [a_1, a_2, …, a_j] that is in T.

This is true for a_1: the state would either be [a_1] in T (when it exists) or the empty state, both suffixes of [a_1]. Assume the hypothesis holds for j-1: we were in a state r before consuming a_j, and r is the longest suffix of [a_1, a_2, …, a_{j-1}] that is in T.

By construction we know that the end state s is still in T, since we only followed valid edges of T (gotos and fails). We need to show s is the longest such suffix of [a_1, a_2, …, a_j]. For the sake of contradiction, assume there’s an s* in T with |s*| > |s| that is a suffix of [a_1, a_2, …, a_j]. Let r* be such that s* = r* + a_j. Note that r* is in T and is a suffix of [a_1, a_2, …, a_{j-1}], so by the inductive hypothesis |r*| <= |r|. If we used a goto edge, then |s| = |r| + 1 >= |r*| + 1 = |s*|, which contradicts |s*| > |s|. If we used a fail edge, then s = f(r, a_j); moreover g(r, a_j) doesn’t exist, so r* != r and hence |r*| < |r|. Using the same argument from Lemma 1, r* is a suffix of r and r* != ps_k(r) for any k, so there must be some k for which, taking x = ps_k(r), we have |x| > |r*| > |ps(x)|, a contradiction, because r* is a proper suffix of x longer than ps(x).

Theorem 1. The Aho-Corasick algorithm is correct.

We need to show that every substring of the text which is in S will be reported. Any such substring ends at some index j of H, so it suffices to show that, for all j, we report every suffix of [a_1, a_2, …, a_j] that is in S.

Let s be the node we are at at the end of each iteration of the loop in Algorithm 1. Suppose there’s a suffix s* of [a_1, a_2, …, a_j] in S that is not in output(s). From Lemma 2, every keyword in S that is a suffix of s is in output(s); it follows that s* is not a suffix of s. Since two suffixes of the same string are nested, the shorter being a suffix of the longer, we must have |s*| > |s|, which cannot be, by Lemma 3.

QED

Complexity

Theorem 2. The look-up structure can be constructed in time linear in the number of characters in S.

Let’s define size(S) as the total number of characters in S, and A as the set of symbols of the alphabet.

It’s known that a trie can be constructed from the set of strings in S in time linear in size(S), so the goto edges are covered.

For the ps() function, the cost is associated with how many times we execute the inner loop. We visit each node once and, for each of the |A| symbols, we run the inner loop. Since |ps(s)| < |s|, the number of times we execute the inner-most loop for a given state is bounded by |s|, so for each state we execute the inner-most loop at most |s|*|A| times. Summing over all states, we get size(S)*|A|. Assuming |A| is a small constant, we can ignore it in the overall complexity. It’s worth noting that the bound in Aho and Corasick’s original paper doesn’t depend on |A|.

The cost of constructing the output() function is O(1) per node if we use a shared linked list: output(s) can be constructed as a single cell for s that points to the list output(ps(s)). So, again, this is bounded by size(S).

For the search algorithm, let’s ignore the cost of printing the output for now. We can follow an edge in O(1) time and do so exactly once per character.

In the worst case there could be a lot of matches at each position. For example, if S = {a, aa, aaa, …, a^(n/2)} and H = a^n, then for n/2 positions we need to output all entries in S, leading to a complexity of size(S)*n.

Since these values need to be printed, there’s no way to be faster than that, and this could be the dominating factor in the total complexity. In practice, however, the number of matches is much smaller.

References

[1] Efficient String Matching: An Aid to Bibliographic Search
[2] Alfred Aho – Wikipedia