# Protein Design

We previously learned about the problem of predicting the folding of a protein, that is, given a chain of amino acids, find its final 3D structure. This time we’re interested in the reverse problem, that is, given a 3D structure, find some chain of amino acids that would lead to that structure once fully folded. This problem is called Protein Design.

In this post we’ll focus on mathematical models for this problem, studying its computational complexity and discussing possible solutions.

### Mathematical Modeling

The 3D structure of a protein can be divided into two parts: the backbone and the side chains. In the model proposed by Pierce and Winfree [3], we assume the backbone is rigid and try to find amino acids for the side chains that minimize some energy function.

This means we’re not really trying to predict the whole chain of amino acids, but only the subset of amino acids that will end up on the side chains.

The amino acids on the side chains can assume specific orientations [2, 3], known as rotamers, which in turn can be represented by a single value, the dihedral angle. We can define some arbitrary order for these amino acids and label them with an index, which we call the position.

At each position there are multiple rotamers possible and we need to select them to minimize the overall energy function. More formally, for each position i, let $R_i$ be the set of rotamers available and $r_i \in R_i$ the chosen rotamer.

The model assumes the cost function of the structure is pairwise decomposable, meaning that we can account for the interaction of each pair independently when calculating the total energy:

$$E_{total} = \sum_{i < j} E(r_i, r_j)$$

Where $E(r_i, r_j)$ is the energy cost of the interaction between positions $i$ and $j$, assuming rotamers $r_i$ and $r_j$ respectively. The definition of $E$ can be based on molecular dynamics models such as AMBER.
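To make the model concrete, here is a minimal brute-force sketch in JavaScript. The rotamer sets and the energy function are made up for illustration; it simply enumerates every assignment and keeps the cheapest one:

```javascript
// Hypothetical rotamer sets: position 0 offers rotamers {0, 1}, etc.
const R = [[0, 1], [0, 1, 2], [0, 1]];

// Made-up pairwise energy: penalize two positions taking the same rotamer
// label. A real E would come from molecular dynamics models such as AMBER.
function E(ri, rj) {
  return ri === rj ? 1 : 0;
}

// Total energy of an assignment, summed over all pairs of positions.
function totalEnergy(assignment) {
  let total = 0;
  for (let i = 0; i < assignment.length; i++) {
    for (let j = i + 1; j < assignment.length; j++) {
      total += E(assignment[i], assignment[j]);
    }
  }
  return total;
}

// Exhaustive search over all rotamer assignments (exponential in the
// number of positions, as expected for an NP-hard problem).
function bestAssignment(rotamerSets) {
  let best = null;
  let bestCost = Infinity;
  const search = (i, current) => {
    if (i === rotamerSets.length) {
      const cost = totalEnergy(current);
      if (cost < bestCost) {
        bestCost = cost;
        best = current.slice();
      }
      return;
    }
    for (const r of rotamerSets[i]) {
      search(i + 1, current.concat([r]));
    }
  };
  search(0, []);
  return {best, bestCost};
}

console.log(bestAssignment(R)); // { best: [ 0, 2, 1 ], bestCost: 0 }
```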

In [3], the authors call this optimization problem PRODES (PROtein DESign).

### PRODES is NP-Hard

Pierce and Winfree [3] prove that PRODES is NP-Hard. We’ll provide an informal idea of the proof here.

First we consider the decision version of PRODES: given a value K, determine whether there is a set of rotamers such that the energy cost is less than or equal to K. We’ll call it PRODESd. We can then prove that PRODESd is NP-complete by showing that it belongs to the NP complexity class and by reducing, in polynomial time, some known NP-complete problem to it.

We claim that this problem is in NP because, given a candidate set of rotamers, we can verify in polynomial time whether it is a solution (i.e. whether the answer is “true”), since we just need to evaluate the cost function and check whether it’s at most K.

We can reduce the 3-SAT problem, known to be NP-complete, to PRODESd. The idea is that we can map every instance of the 3-SAT to an instance of PRODESd and show that the result is “true” for that instance of 3-SAT if and only if the result is “true” for the mapped instance of PRODESd.

Let’s start by formalizing PRODESd as a graph problem, with an example shown in the picture below:

a) has 3 positions with their sets of rotamers. b) each rotamer becomes a vertex grouped into sets. We pick exactly one vertex per set and try to minimize the total cost of the edges associated with the selected vertices. Image source: [3]

Now, given a 3-SAT instance, we create a vertex set for each clause $C_i$ (containing a vertex for each literal), and a vertex set for each variable $x_i$ (containing vertices T and F). For each positive literal $x_i$ we add an edge of weight 1 to the vertex F of the set corresponding to variable $x_i$. Conversely, for each negated literal, $\bar x_i$, we add an edge of weight 1 to the vertex T. All other edges have weight 0.

For example, the instance $(x_1 + \bar x_2 + x_3) \cdot ( \bar x_1 + x_2 + x_3) \cdot (\bar x_3)$ yields the following graph where only edges of non-zero weight are shown:

Source: [3]

We claim that the 3-SAT instance is satisfiable if and only if PRODESd answers true for K=0. The idea is the following: for each vertex set corresponding to a variable we pick either T or F, which corresponds to assigning true or false to the variable. For each vertex set corresponding to a clause we pick a literal that will evaluate to true (and hence make the clause true). It’s possible to show that if the 3-SAT instance is satisfiable, there’s a solution for PRODESd that avoids every edge of non-zero weight.
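To see the reduction in action, here is a small JavaScript sketch for the example formula above. The clause encoding and the cost function are my own framing of the construction: picking one literal per clause set and a truth value per variable set, and counting how many weight-1 edges the selection touches:

```javascript
// The example formula (x1 + !x2 + x3)(!x1 + x2 + x3)(!x3), encoded as
// clauses of literals [variable, negated?] (encoding made up for this sketch).
const clauses = [
  [[1, false], [2, true], [3, false]],
  [[1, true], [2, false], [3, false]],
  [[3, true]],
];

// Cost of a PRODESd selection: one literal picked per clause set plus a
// truth value per variable set. Weight-1 edges connect each literal to
// the "wrong" truth value of its variable.
function cost(pickedLiterals, truth) {
  let total = 0;
  pickedLiterals.forEach((litIdx, c) => {
    const [v, negated] = clauses[c][litIdx];
    // A positive literal has an edge to F; a negated one has an edge to T.
    const conflicts = negated ? truth[v] === true : truth[v] === false;
    if (conflicts) total += 1;
  });
  return total;
}

// Satisfying assignment x1=T, x2=T, x3=F; pick x1 in clause 1, x2 in
// clause 2 and !x3 in clause 3: no weight-1 edge is touched.
const truth = {1: true, 2: true, 3: false};
console.log(cost([0, 1, 0], truth)); // 0, so PRODESd is true for K = 0
```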

### Integer Linear Programming

Now that we know that PRODES is unlikely to have an efficient algorithm, we can attempt to obtain exact solutions using an integer linear programming (ILP) model. Let’s start with some definitions: as before, position $i$ has a set of available rotamers $R_i$, and $E(a, b)$ is the interaction energy between rotamers $a$ and $b$ at their respective positions.

We can define our variables as: $r_{i,a} \in \{0, 1\}$, indicating whether rotamer $a \in R_i$ is chosen at position $i$, and $x_{(i,a),(j,b)} \in \{0, 1\}$, indicating whether rotamers $a$ and $b$ are chosen simultaneously at positions $i$ and $j$.

The objective function of our model becomes:

$$\min \sum_{i < j} \sum_{a \in R_i} \sum_{b \in R_j} E(a, b) \, x_{(i,a),(j,b)}$$

Finally, the constraints are:

$$\sum_{a \in R_i} r_{i,a} = 1 \quad \forall i \quad (1)$$

$$\sum_{b \in R_j} x_{(i,a),(j,b)} = r_{i,a} \quad \forall i, j, a \in R_i \quad (2)$$

$$\sum_{a \in R_i} x_{(i,a),(j,b)} = r_{j,b} \quad \forall i, j, b \in R_j \quad (3)$$

Constraint (1) says we should pick exactly one rotamer for each position. Constraints (2) and (3) enforce that $x_{(i,a),(j,b)} = 1$ if and only if $r_{i,a} = r_{j,b} = 1$.


### Conclusion

The study of protein folding prediction led me to protein design, a problem that is much simpler to state, even though from the computational complexity perspective it’s still intractable.

The model we studied is very simple and makes a lot of assumptions, but it’s interesting as a theoretical computer science problem. Still, I’m not sure how useful it is in practice.

### References

[1] Wikipedia – Protein design
[2] Wikipedia – Rotamer
[3] Protein Design is NP-hard – Niles A. Pierce, Erik Winfree

# Protein Folding Prediction

In this post we’ll study an important problem in bioinformatics, which consists in predicting protein folding using computational methods.

We’ll cover basic concepts like proteins, protein folding, why it is important and then discuss a few methods used to predict them.

## Amino Acids and Proteins

Amino acids are molecules containing an amine group (-NH2), a carboxyl group (-COOH) and a side chain (denoted R), which is the part that varies between different amino acids [1].

Etymology. A molecule containing a carboxyl group (COOH) is known as a carboxylic acid, so together with the amine part, they give amino acids their name [2].

An amino acid where R attaches to the first carbon (see Figure 1) is known as an α-amino acid (read as alpha amino acid).

Figure 1: alpha and beta carbons in a carbonyl group

There are about 500 known amino acids, but only 22 of them are known to be present in proteins, and all of those are α-amino acids. Since we’re talking about proteins, we’ll implicitly assume amino acids are α-amino acids. See Figure 2.

Figure 2: Amino acid

Amino acids can be chained, forming structures known as peptides, which in turn can be chained, forming polypeptides. Some polypeptides are called proteins, usually when they’re large and have some specific function, but the distinction is not precise.

Etymology. Peptide comes from the Greek for “to digest”. According to [3], the scientists Starling and Bayliss discovered a substance called secretin, which is produced in the stomach and signals the pancreas to secrete pancreatic juice, important in the digestion process. Secretin was the first known peptide and inspired the name.

## Protein Folding

To be functional, a protein needs to fold into a 3D structure. The sequence of amino acids in a protein is believed [6] to be all that is needed to determine its final shape and hence its function. This final shape is also known as the native fold.

There are 4 levels of folding, conveniently named primary, secondary, tertiary and quaternary.

• Primary structure is the non-folded form, the “linearized” chain of amino acids.
• Secondary structure is formed when parts of the chain bond together via hydrogen bonds, and can be categorized into alpha-helix and beta-strands.
• Tertiary structure is when the alpha-helices and beta-sheets fold into globular structures.
• Quaternary structure is when 2 or more peptide chains that have already been folded combine with each other.

## Prediction

The prediction of protein folds usually involves determining the tertiary structure (the most common target) from the primary form. We’ve seen before in DNA Sequencing that reading amino acid sequences is easier when the protein is denatured into a linear chain. On the other hand, analyzing the intricate folded forms can be very hard [7], even with techniques such as X-ray crystallography.

## Why is this useful?

If the protein structure is known, we can infer its function based on structural similarity with other proteins we know more about. One can also predict which molecules or drugs can efficiently bind to the protein and how they will bind [7].

From Wikipedia’s Nutritious Rice for the World [8], protein prediction can also help understand the function of genes:

… prediction tools will determine the function of each protein and the role of the gene that encodes it.

## Computational Methods

In this section we’ll focus on the computational methods used to aid the prediction of protein folds. There are 2 main types: template-based and de novo.

Template-based methods rely on proteins with known folds, which can be used as templates for determining the fold of a target protein with unknown fold. This only works well if the target protein is similar enough to the template, or shares ancestry with it, in which case they are said to be homologous.

De novo methods do not rely on existing proteins as templates. They use known physico-chemical properties to determine the native structure of a protein. We can model the folding as an optimization problem where the cost function is some measure of structural stability (e.g. energy level) and steps are taken to minimize that function subject to the physico-chemical constraints. Molecules that have the same bonds but different structures are known as conformations, and they form the search space of our optimization problem.

Let’s now cover some implementations of de novo methods.

Rosetta

One implementation of a de novo method is the Rosetta method [9]. It starts off with the primary form of the protein. For each segment of length 9, it finds a set of known “folds” for that segment. A fold in this case is represented by torsion angles on the peptide bonds. The search consists in randomly choosing one of the segments, applying the torsions of one of its known folds to the current solution, and repeating.

Note that while this method does not rely on protein templates, it still assumes an existing database of folds for amino acid fragments. However, by working with smaller chains, the chances of finding a matching template are much higher.
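The fragment-assembly loop described above can be sketched in a few lines of JavaScript. The fragment library, energy function and greedy acceptance rule below are all made up for illustration; Rosetta’s actual scoring and Monte Carlo search are more sophisticated:

```javascript
// Hedged sketch of fragment-assembly search in the spirit of Rosetta:
// torsion angles per residue, repeatedly splicing in a known fragment
// for a random 9-residue window and keeping moves that lower the energy.
const FRAG_LEN = 9;

function fragmentSearch(numResidues, fragmentLibrary, energy, steps) {
  // Start from an extended chain (all torsions at 180 degrees).
  let torsions = new Array(numResidues).fill(180);
  let best = energy(torsions);
  for (let s = 0; s < steps; s++) {
    const start = Math.floor(Math.random() * (numResidues - FRAG_LEN + 1));
    const frags = fragmentLibrary(start); // known folds for this window
    const frag = frags[Math.floor(Math.random() * frags.length)];
    const candidate = torsions.slice();
    for (let k = 0; k < FRAG_LEN; k++) candidate[start + k] = frag[k];
    const e = energy(candidate);
    if (e < best) { // keep the move only if it lowers the energy
      best = e;
      torsions = candidate;
    }
  }
  return {torsions, energy: best};
}

// Toy usage: a made-up energy preferring torsions near -60 degrees, and
// a made-up library with two candidate fragments per window.
const toyEnergy = t => t.reduce((sum, a) => sum + Math.abs(a + 60), 0);
const toyLibrary = () => [
  new Array(FRAG_LEN).fill(-60),
  new Array(FRAG_LEN).fill(120),
];
console.log(fragmentSearch(20, toyLibrary, toyEnergy, 200).energy);
```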

AlphaFold

One major problem with de novo methods is that they’re computationally expensive, since they have to analyze many candidate solutions, a problem similar to the one chess engines face.

Google’s DeepMind made the news with the breakthrough performance of its Go engine, AlphaGo. They used a similar approach to create a solver for protein folding prediction, AlphaFold. This approach placed first in a competition called CASP.

CASP is a competition held annually to measure the progress of predictors. The idea is to ask competitors to predict the shape of a protein whose native fold is known but not public.

Crowdsourcing

Another approach to the computational costs involved in prediction is to distribute the work across several machines. One such example is Folding@home, which relies on volunteers lending their idle CPU time to perform chunks of work in a distributed fashion, with results sent back to the server.

An alternative to borrowing idle CPU power from volunteers’ machines is to borrow the power of their brains. Humans are good at solving puzzles, and software like Foldit relies on this premise to find suitable folds. According to Wikipedia:

A 2010 paper in the science journal Nature credited Foldit’s 57,000 players with providing useful results that matched or outperformed algorithmically computed solutions

## Conclusion

Here are some questions that I had before writing this post:

• What is protein folding? Why is predicting it important?
• What are some of the existing computational methods to help solving this problem?

Questions that are still unanswered include details of the algorithms, like the Rosetta method. I also learned about Protein Design in the process, which I’m looking into studying next. The approach used by DeepMind seems very interesting, but as of this writing, they haven’t published a paper with more details.

As a final observation, while researching for this post, I ran into this Stack Exchange discussion, in which one of the comments summarizes my impressions:

This field is pretty obscure and difficult to get around in. There aren’t any easy introductions that I know of.

I found that the information available is often vague and sparse. I’d love to find a good textbook reference to learn more details. One possible explanation is that this field is relatively new, and there’s a lot of proprietary information due to this being a lucrative field.

## References

[1] Wikipedia – Amino acid
[2] Wikipedia – Carboxylic acid
[3] The Research History of Peptide
[4] Wikipedia – Homology modeling
[5] Wikipedia – Threading (protein sequence)
[6] Wikipedia – De novo protein structure prediction
[7] Quora: Why is protein structure prediction important?
[8] Wikipedia – Nutritious Rice for the World
[9] The Rosetta method for Protein Structure Prediction
[10] AlphaFold: Using AI for scientific discovery

# Content Delivery Network

In this post we’ll explore the basics of a Content Delivery Network (CDN) [1].

### Outline

Here are the topics we’ll explore in this post.

• Introduction
• Definitions
• Caching
• Routing

### Introduction

A Content Delivery Network (CDN), in a simplistic definition, is like the cache of the internet. It serves static content to end users. This cache is a set of machines that are located closer to the users than data centers are.

Unlike data centers, which can operate relatively independently, the locations of CDNs are subject to the infrastructure of internet providers, which means a lot more resource and space sharing between companies is necessary and there are more regulations.

Companies often use CDNs via third-party companies like Akamai and Cloudflare or, as happens with large internet companies like Google, build their own solution.

Let’s see next the reasons why a company would spend money on CDN services or infrastructure.

CDNs are closer to the user. Due to smaller scale and partnerships with internet providers, CDN machines are often physically located closer to the user than a data center can be.

Why does this matter? Data has to traverse the distance between the provider and the requester. Being closer to the user means lower latency. It also means fewer hops to go through, requiring less bandwidth [2].

Redundancy and scalability. Adding more places where data lives increases redundancy and scalability. Requests will not all go to a few places (data centers) but will first hit the more numerous and distributed CDN nodes.

Due to shared infrastructure, CDN companies can shield small companies from DDoS attacks by distributing traffic among its own servers, absorbing the requests from the attack.

Improve performance of TLS/SSL connections. For websites that use secure HTTP connections, extra round trips are necessary: one to establish the TCP connection and more to perform the TLS handshake. Since CDNs must connect to the origin server to retrieve the static content anyway, they can leverage that to keep connections alive, so that subsequent requests from clients to data centers can skip those initial round trips [3].

Before we go into more details on how CDNs work, let’s review some terminology and concepts.

### Definitions and Terminology

Network edge. The part of the network a request traverses before reaching the data center. Think of the data center as a polygonal area and the edge as its perimeter/boundary.

Internet exchange points (IXP). A physical location containing network switches where Internet Service Providers (ISPs) connect with CDNs. The term seems to be used interchangeably with PoP (Point of Presence); according to this forum, the latter seems to be outdated terminology [4]. The map below shows the locations of IXPs in the world:

Locations of IXP in the world (source: Data Center Map)

Border Gateway Protocol (BGP). The BGP is used for routing traffic between different ISPs. To connect a source IP with its destination, the connection might need to rely on networks from a provider different from the original one.

CDN peering. CDN companies rely on each other to provide full world coverage, since the cost of a single company covering the entire world is prohibitive. This is analogous to connections traversing networks from multiple ISPs.

Origin server. To disambiguate between the CDN servers and the servers where the original data is being served from, we qualify the latter as origin server.

Modes of communication. There are four main modes [5]:

• Broadcast – one node sends a message to all nodes in the network
• Multicast  – one node sends a message to a subset of the nodes in the network (defined via subscription)
• Unicast – one node sends a message to a specific node
• Anycast – one node sends a message to another node, but there’s flexibility as to which node receives it.

ISP Tiers. Internet providers can be categorized into 3 tiers based on their network. According to Wikipedia [6] there’s no authoritative source defining to which tier networks belong, but the tiers have the following definition:

• Tier 1: can exchange traffic with other Tier 1 networks without paying for it.
• Tier 2: can exchange some traffic for free, but pays for at least some portion of it.
• Tier 3: pays for all its traffic.

Most ISPs are Tier 2 [7].

### Caching

One of the primary purposes of a CDN is caching static content for websites [8]. The CDN machines act as a read-through cache, that is, the CDN serves the data if cached, or else requests it from the origin server behind the scenes, caches it and returns it, making the actual data fetching transparent to the user. In this sense the CDN acts as a proxy server, an intermediary the client can talk to instead of the origin server.
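This caching behavior can be sketched in a few lines of JavaScript (the CdnNode class and the origin fetcher are hypothetical, of course; real CDN nodes also handle expiry, eviction and cache headers):

```javascript
// Minimal read-through cache sketch: the CDN node serves from its local
// cache, or fetches from the (hypothetical) origin, stores, and returns.
class CdnNode {
  constructor(fetchFromOrigin) {
    this.cache = new Map();
    this.fetchFromOrigin = fetchFromOrigin; // async (url) => content
  }

  async get(url) {
    if (this.cache.has(url)) {
      return this.cache.get(url); // cache hit: origin never sees the request
    }
    const content = await this.fetchFromOrigin(url); // cache miss
    this.cache.set(url, content);
    return content;
  }
}

// Usage: the second request is served without hitting the origin.
let originHits = 0;
const node = new CdnNode(async (url) => {
  originHits++;
  return `content of ${url}`;
});

node.get('/app.js')
  .then(() => node.get('/app.js'))
  .then(() => console.log(originHits)); // 1
```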

Let’s consider the details of the case in which the data is not found in the cache. The CDN will issue an HTTP request to the origin server. In the response, the origin server can indicate in the HTTP header whether the file is cacheable, with the following property:

Cache-Control: public

For example, if we visit https://www.google.com/, we can inspect the network tab and look at the HTTP response header of a random JavaScript file:

We can see the line cache-control: public. In fact, if we look at the origin of this file, we see it’s coming from the domain http://www.gstatic.com, which is Google’s own CDN.

#### Caching and Privacy

One of the important aspects of caching is making sure privacy is honored. Consider the case of a private photo that is cached by a CDN. We need to make sure that it’s highly unlikely that someone without access will be able to see it.

Since CDNs store the static content but not the logic that performs access control, how can we make it privacy-safe? One option is to generate an image path that is very hard to reverse-engineer.

For example, the origin server could hash the image content using a cryptographic hash function and use that as the URL. Then, the corresponding entry in the CDN will have that name.

In theory, anyone with the URL can see my private photo, but for that to happen I’d need to share the image URL, which is not much different from downloading the photo and sending it to them. In practice there’s an additional risk if one uses a public computer and happens to navigate to the raw image URL: the URL will be in the browser history even after the user logs out. For reasons like these, it’s best to use incognito mode in such cases.

To make it extra safe, the server could generate a new hash every so often, so that even if someone got hold of a URL, it would soon be rendered invalid, minimizing unintended leaks. This is similar to some strategies for two-factor authentication we discussed previously, where a generated code leaves the system vulnerable only very briefly.

### Routing

Because CDNs sit between the client and the origin server, and due to their positioning, both physical and strategic, they can provide routing as well. According to this article from Imperva [9], this can mean improved performance through better infrastructure:

Much focus is given to CDN caching and FEO features, but it’s direct tier 1 network access that often provides the largest performance gains. It can revolutionize your website page load speeds and response times, especially if you’re catering to a global audience.

CDNs often comprise multiple distributed data centers, which they can leverage to distribute load and, as mentioned previously, protect against DDoS. In this context, we covered Consistent Hashing as a way to distribute load among multiple hosts that are constantly coming in and out of the availability pool.

CDNs can also rely on advanced routing techniques such as Anycast [10], which performs routing at the network layer, to avoid bottlenecks and single points of failure in any given piece of hardware.

### Conclusion

I wanted to understand CDNs as something more than “the cache of the internet”. Some key concepts were new to me, including some aspects of routing and the ISP tiers.

While writing this post, I realized I know very little of the practical aspects of the internet: How it is structured, its major players, etc. I’ll keep studying these topics further.

### References

[1] Cloudflare – What is a CDN?
[2] Cloudflare – CDN Performance
[3] Cloudflare – CDN SSL/TLS | CDN Security
[4] Cloudflare –  What is an Internet Exchange Point
[5] Cloudflare – What is Anycast?
[6] Wikipedia – Tier 1 network
[7] Wikipedia – Tier 2 network
[8] Imperva – CDN Caching
[9] Imperva – Route Optimization

# Async Functions in JavaScript

Async functions are a JavaScript construct introduced in the ES2017 spec. Put simply, async functions allow writing asynchronous code using synchronous syntax.

In this post we’ll discuss async functions in JavaScript, covering other concepts such as iterators and generators, which can be used to implement async functions when they’re not supported by the runtime.

### A Glimpse

Before we start delving into the details, let’s get a sense of why async functions are useful, especially in the context of Promises.

In case you don’t know about Promises in JavaScript or need a refresher, I recommend checking that out first. If you’d like, we talked about Promises in a previous post which might serve as an introduction.

As we’ve learned, they’re very handy for reducing the so-called “callback hell”. Here’s a contrived example prior to Promises:

```javascript
fetchSomeStuff(
  /* onSuccess */ function (a) {
    fetchSomeStuffDependingOnA(
      a,
      /* onSuccess */ function (b) {
        fetchSomeStuffDependingOnB(
          b,
          /* onSuccess */ function (c) {
            /* Code goes on here */
          },
          /* onFailure */ function (ex) {
            handleException(ex);
          }
        );
      },
      /* onFailure */ function (ex) {
        handleException(ex);
      }
    );
  },
  /* onFailure */ function (ex) {
    handleException(ex);
  }
);
```

We saw that with Promises we can simplify this to:

```javascript
fetchSomeStuff().then(
  function (a) {
    return fetchSomeStuffDependingOnA(a);
  }
).then(
  function (b) {
    return fetchSomeStuffDependingOnB(b);
  }
).then(
  function (c) {
    /* Code goes on here */
  }
).catch(
  function (ex) {
    handleException(ex);
  }
);
```

Which is much cleaner. Now, the async syntax allows an even cleaner code by making it look synchronous:

```javascript
try {
  a = await fetchSomeStuff();
  b = await fetchSomeStuffDependingOnA(a);
  c = await fetchSomeStuffDependingOnB(b);
} catch (ex) {
  handleException(ex);
}
```

We can now proceed into some details on how async functions work. To start, let’s learn about intermediate concepts, namely iterators and generators.

### JavaScript Iterators

An iterator is basically an object that has a method next(), which returns an object with fields value and done, the latter being a boolean indicating whether there’s a next value to be iterated on. An iterator is more like a design or code pattern: it’s not explicitly supported by the JavaScript runtime in any special way.

```javascript
function makeIterator(n) {
  return {
    cnt: n,
    next: function () {
      this.cnt--;
      return {value: this.cnt, done: this.cnt < 0};
    },
  };
}

it = makeIterator(30);
while (true) {
  const {value, done} = it.next();
  if (done) {
    break;
  }
  console.log(value);
}
```

Iterable, on the other hand, is a contract that an object can be used as an iterator. To indicate this we add a special field keyed by Symbol.iterator, which maps to a function returning the object itself (similar to an interface in an OOP language), and this construct is handled as a special case by the runtime.

In the example below we create an example iterator and use it with the for-of construct:

```javascript
function makeIterator(n) {
  return {
    cnt: n,
    next: function () {
      this.cnt--;
      return {value: this.cnt, done: this.cnt < 0};
    },
    [Symbol.iterator]: function () {
      return this;
    },
  };
}

it = makeIterator(30);
for (value of it) {
  console.log(value);
}
```

### JavaScript Generators

Generators are syntax sugar for iterators in that they allow us not to keep track of “global” state (in the example above, via this.cnt). They do so by allowing the function to yield execution back to the caller and resume from where it stopped when called again. Behind the scenes, a generator creates an object with the same structure as the iterator object we defined above, namely with a next() method. It’s much clearer with an example:

```javascript
function* makeGenerator(n) {
  for (cnt = n - 1; cnt >= 0; cnt--) {
    yield cnt;
  }
}

it = makeGenerator(30);
while (true) {
  const {value, done} = it.next();
  if (done) {
    break;
  }
  console.log(value);
}
```

First, we indicate the function is a generator with the * modifier (i.e. function*). Here we don’t have to explicitly define the next() function and we don’t need to keep track of the variable cnt outside of the function – it will be resumed from the state it had when we called yield.

As with iterators, we can make generators iterable by implementing a contract. In this case we create an object with a special field containing Symbol.iterator which maps to the generator function:

```javascript
function makeGenerator(n) {
  return {
    *[Symbol.iterator]() {
      for (cnt = n - 1; cnt >= 0; cnt--) {
        yield cnt;
      }
    },
  };
}

it = makeGenerator(30);
for (value of it) {
  console.log(value);
}
```

### Async Functions <> Promises

We’re now ready to come back to async functions. We can think of async functions as syntax sugar for Promises. Suppose a function f() exists that returns a Promise. If we want to use the result of that Promise and return a new one, we could do, for example:

```javascript
function f() {
  return Promise.resolve(10);
}

function g() {
  return f().then(r => {
    return r + 1;
  });
}

g().then(r => console.log(r));
```

Instead, we could replace g() with an async function, which “understands” Promises and returns them, making it possible to easily mix with Promise code. The code above would look like:

```javascript
function f() {
  return Promise.resolve(10);
}

// Implicitly returns a Promise
async function g() {
  r = await f();
  return r + 1;
}

g().then(r => console.log(r));
```

Note how we swapped a Promise-based implementation with an async one without making any changes to the call stack that expected Promises throughout.

Handling errors. Async functions have a familiar syntax for error handling too. Suppose our function f() rejects with some probability:

```javascript
function f() {
  return Math.random() > 0.5 ?
    Promise.resolve(10) :
    Promise.reject(new Error('code x'));
}

function g() {
  return f().then(r => {
    return r + 1;
  }).catch(e => {
    return new Error('error: ' + e.message);
  });
}

g()
  .then(r => console.log(r))
  .catch(e => console.error(e));
```

If we are to replace g() with an async version, using the try/catch syntax:

```javascript
function f() {
  return Math.random() > 0.5 ?
    Promise.resolve(10) :
    Promise.reject(new Error('code x'));
}

async function g() {
  try {
    return await f();
  } catch (e) {
    throw new Error('error: ' + e.message);
  }
}

g()
  .then(r => console.log(r))
  .catch(e => console.error(e));
```

### Async Functions as Generators

As of this writing, most major browsers support async functions in their latest versions, with the exception of Internet Explorer. For a while though, developers who wanted to use async functions needed to rely on transpilation (i.e. translating their async-based code into browser-compatible code). One of the most popular tools for this is Babel, which transpiles code with async functions into code using generators and some helpers.

We can study that code to learn how to implement async-like functions using generators. Consider this simple example chaining two Promises using an async function.

```javascript
function f1() {
  return Promise.resolve(21);
}

function f2(x) {
  return Promise.resolve(x * 2);
}

async function g() {
  r = await f1();
  return await f2(r);
}

g().then(x => console.log(x));
```

If we translate it using Babel we get some generated code. I removed parts dealing with error handling and inlined some definitions to make it easier to read. Here’s the result:

```javascript
function _asyncToGenerator(fn) {
  return function () {
    var self = this;
    var args = arguments;
    return new Promise(function (resolve, reject) {
      // Instantiates the generator
      var gen = fn.apply(self, args);
      function _next(value) {
        // Next step of the generator
        var info = gen.next(value);
        var newValue = info.value;
        if (info.done) {
          resolve(newValue);
        } else {
          newValue.then(_next);
        }
      }
      // Calls the generator recursively until it's done
      _next(undefined);
    });
  };
}

function f1() {
  return Promise.resolve(21);
}

function f2(x) {
  return Promise.resolve(x * 2);
}

function g() {
  _g = _asyncToGenerator(function* () {
    r = yield f1();
    s = yield f2(r);
    return s;
  });
  return _g.apply(this, arguments);
}

g().then(r => console.log(r));
```

Let’s see what is happening here. First, we note that our async function got translated into a generator, basically replacing await with yield. Then it’s transformed via the _asyncToGenerator() function.

In _asyncToGenerator() we’re basically invoking the generator recursively (via gen.next()) and at each level we chain the Promise returned by a yield call with the result of the recursion. Finally we wrap it in a Promise which is what the async function does implicitly.

Intuition. Let’s try to gain a high-level intuition for what’s happening here. The ability to resume execution of a function at particular points (via yield in this case) is what enables us to avoid passing callbacks everywhere. Part of why we need to pass callbacks is to carry the “code” around, but having the runtime keep the code around solves this problem. For example, in a Promise world, code 1 and code 2 are wrapped in arrow functions:

```javascript
p.then(() => { /* code 1 */ }).then(() => { /* code 2 */ });
```

In a world where we can remember where we were when an async execution happened, we can in-line the code:

```js
/* code 1 */
/* async: yield execution to others */
/* ... */
/* return here */
/* code 2 */
```


This translation relies on generators being fully supported by the runtime. In a world where generators didn’t exist as first-class citizens, how could we implement them via helpers and transpilation? We could probably use some sort of iterator plus switch statements to simulate resuming execution at specific points in the code, but this is out of the scope of this post and left as food for thought.

### Conclusion

In this post we learned about some more language features that help with code authoring and readability, namely generators and async functions. These are very useful abstractions that end up being added to programming languages such as Python, C#, and Hack.

# Von Neumann Architecture

John von Neumann was a Hungarian-American mathematician, physicist, computer scientist and polymath, often regarded as the greatest mathematician of his time. He has contributed to a wide range of fields including quantum mechanics, geometry, topology, game theory, cellular automata, linear programming and computer architecture.

In this post we’ll discuss his contribution to the architecture of modern computers, known as von Neumann architecture (aka Princeton architecture).

### Historical Background

Von Neumann worked on the Manhattan Project, which required a lot of computation (in particular to solve differential equations). He got involved in the design of the EDVAC computer together with J. Presper Eckert and John Mauchly, and together they wrote a document titled First Draft of a Report on the EDVAC [1]. Unfortunately, the report circulated with only von Neumann’s name on it, which is why the architecture based on the report bears only his name [2].

Furthermore, around the same time Alan Turing, who proposed the concept of stored-programs in the form of theoretical Universal Turing Machines (in the paper On Computable Numbers, with an Application to the Entscheidungsproblem), also wrote a paper Proposed Electronic Calculator, discussing the practical aspects of constructing such machines.

These independent approaches led to a debate on whether stored-program machines should actually be referred to as von Neumann machines.

### Overview

von Neumann architecture diagram (source: Wikipedia)

The architecture consists of 5 specific parts [1]:

• (i) Central Arithmetic part (CA): an arithmetic logic unit (circuit capable of performing elementary arithmetic and bitwise operations) and a set of registers (small fast-access memory).
• (ii) Central Control (CC): a general purpose unit to carry out the execution of the instructions, to be stored elsewhere.
• (iii) Memory (M):  to store data during the program’s execution and also to store the program’s instructions.

The draft also specifies that there must be a way to connect these 3 parts. The analogy it draws to the human brain is interesting:

The three specific parts CA, CC (together C) and M correspond to the associative neurons in the human nervous system. It remains to discuss the equivalents of the sensory or afferent and the motor or efferent neurons.

The external world is represented by the external medium, called R (which is not considered a part of the machine).

• (iv) Input mechanism (I): a way to transfer information from R to C (CA + CC) and M.
• (v) Output mechanism (O): a way to transfer information from C and M to R.

The authors in [1] also pose an interesting question on whether information should be stored in M or in R, with R supposedly representing some sort of external memory. It resembles the more modern distinction between volatile memory (RAM) and persistent memory (disk).

### Modifications

One bottleneck of the original von Neumann machines is that both data and instructions go through the same bus. This imposes a potential limit on speed because data and instructions cannot be read in parallel.

The Harvard architecture doesn’t have this issue by separating the memory (or at least having separate channels of communication with the central unit) and was implemented in the Harvard Mark I computer [3].

Harvard architecture. (source: Wikipedia)

However, there might be advantages to treating instructions as data, since this allows concepts such as just-in-time compilation, where instructions may be written to memory at runtime and read back as data. Modern computers (ARM, x86) use the so-called Modified Harvard architecture [4], which overcomes the bottleneck of the von Neumann architecture by having dedicated memory with a copy of part of the program (in the form of a CPU cache), while instructions can still be read as data when needed.

### Limitations of classical architectures

We’ll now focus on the limitations of current implementations of the Modified Harvard architecture. It’s hard to make concrete estimates for the early architecture proposals because they’re very high-level and actual implementations can vary widely.

### Processing Power

In 1965 Gordon Moore, co-founder of Fairchild Semiconductor, wrote a paper predicting that the number of transistors in a CPU would double every year for the next decade. The industry was eager to follow this prophecy and the trend held for several decades (it was later adjusted to every 2 years), but it slowed down in the early 2010s. As of today, CPUs have on the order of tens of billions of transistors.

Moore’s Law: transistor count (log axis) over year. (source: Wikipedia)

It’s impractical to think that we’ll be able to put more transistors in a chip forever since there are physical constraints. The question is what kind of constraints are we going to hit?

To pack more transistors in a chip we can either increase the size of the chip or reduce the size of the transistor.

Increase the size of the chip. This incurs more power consumption and heat dissipation, and information has to travel longer distances, potentially making computation slower. Furthermore, large chips might be infeasible for small devices like smartphones.

Reduce the size of the transistor. For the current transistor design the size has a hard lower bound at the silicon atom, which is about 0.2nm [6], but before we get there, we have to figure out how to manufacture transistors with such precision in a cost-effective way. As of today, 14nm seems to be the smallest size that can be viably produced.

One question we need to ask ourselves is how a large number of transistors in a chip translates into computing power. Transistor count is a convenient proxy for CPU performance because it’s easy to measure, but what can we actually achieve with more transistors? They allow higher parallelism via multiple cores, and transistors can be used to build CPU caches, which improve the speed of common operations.

Another way to potentially increase processing power is to keep the number of transistors constant but reduce the chip size. This decreases the distance electrons need to travel, and by dissipating less heat it’s possible to increase the clock frequency.

Besides reducing the size of the transistor, other strategies are being explored to reduce the chip’s area: instead of the classic 2D square layout, chip manufacturers are exploring stacking approaches to reduce the overall size of the chip.

### Memory Bandwidth

The speed improvements of RAM haven’t kept pace with those of the CPU. The widening gap could become so large that memory speed becomes a bottleneck for CPU speed (known as the Memory Wall [8]).

To work around this limitation, CPU caches are currently used. Alternative solutions include adding in-chip memory to reduce the latency of transporting data.

### Conclusion

I didn’t have a good idea of what to write about, but I was interested in understanding how close we are to the practical limits of current computer architectures, so the idea was to go back to the early inception of computer architectures and learn about them.

We’ve made rapid progress in the early days and had steady progress for a long time, so it’s reasonable to be optimistic, but progress has been slowing down, at least in the general purpose single-node computation. We’ve seen specialized hardware take place with GPUs and TPUs, and also the increase of parallel, concurrent and distributed computing.

Quantum computing still seems a faraway dream. I wonder if there’s any value in rethinking the classical architecture model from scratch to see if we can escape a local minimum.

### References

[1] Introduction to “The First Draft Report on the EDVAC”
[2] Wikipedia – Von Neumann architecture
[3] Wikipedia – Harvard architecture
[4] Wikipedia – Modified Harvard architecture
[5] Wikipedia – Transistor count
[6] Is 14nm the end of the road for silicon chips?
[7] Intel demos first Lakefield chip design using its 3D stacking architecture
[8] Wikipedia – Random-access memory: memory wall

# Constructing Trees from a Distance Matrix

Richard Dawkins is an evolutionary biologist and author of many science books. In The Blind Watchmaker he explains how complex systems can exist without the need of an intelligent design.

Chapter 10 of that book delves into the tree of life. He argues that the tree of life is not arbitrary taxonomy like the classification of animals into kingdoms or families, but it is more like a family tree, where the branching of the tree uniquely describes the true ancestry relationship between the nodes.

Even though we made great strides in genetics and mapped the DNA from several different species, determining the structure of the tree is very difficult. First, we need to define a suitable metric that would encode the ancestry proximity of two species. In other words, if species A evolved into B and C, we need a metric that would lead us to link A-B and A-C but not B-C. Another problem is that internal nodes can be missing (e.g. ancestor species went extinct without fossils).

David Hillis’s tree of life based on sequenced genomes. Source: Wikipedia

In this post we’ll deal with a much simpler version of this problem, in which we have the metric well defined, we know the distance between every pair of nodes (perfect information), and all our nodes are leaves, so we have the freedom to decide the internal nodes of the tree.

This simplified problem can be formalized as follows:

Constructing a tree from its distance matrix. Given an n x n distance matrix D, construct a tree with n leaves such that the distance between every pair of leaves matches the corresponding entry of D.

To reduce the number of possible solutions, we will assume a canonical representation of a tree: a canonical tree doesn’t have any nodes of degree 2. We can always reduce a tree with degree-2 nodes to a canonical one. For example:

Nodes with degree 2 can be removed and the edges combined.

### Terminology

Let’s introduce the terminology necessary to define the algorithm for solving our problem. A distance matrix D is a square matrix where d_ij represents the distance between elements i and j. This matrix is symmetric (d_ij = d_ji), all off-diagonal entries are positive, the diagonal entries are 0, and every triplet (i, j, k) satisfies the triangle inequality, that is,

d_ik <= d_ij + d_jk

A distance matrix is additive if there is a solution to the problem above.

We say two leaves are neighbors if they share a common parent. An edge connecting a leaf to its parent is called a limb (edges connecting internal nodes are not limbs).

### Deciding whether a matrix is additive

We can decide whether a matrix is additive via the 4-point theorem:

Four-point Theorem. Let D be a distance matrix. D is additive if and only if, for every possible set of 4 indices (i, j, k, l), the following holds for some permutation of the indices:

(1) d_ij + d_kl <= d_ik + d_jl = d_il + d_jk

Sketch of proof. We can derive the general idea from the example tree below:

We can verify that (1) holds by inspecting the edges on the path between each pair of leaves. This will be our base case for induction.

Now, we’ll show that if we’re given a distance matrix satisfying (1), we are able to reconstruct a valid tree from it. In the example tree, leaves i, j, k, l have limbs of lengths a, b, c, d respectively, and the two internal nodes are joined by an edge of length e. We have that d_ik = a + e + c, d_jl = b + e + d, d_ij = a + b and d_kl = c + d. If we add the first two and subtract the last two, we get d_ik + d_jl - d_ij - d_kl = 2e, so we have

e = (d_ik + d_jl - d_ij - d_kl) / 2

We know from (1) that d_ik + d_jl >= d_ij + d_kl, so e is non-negative.

If we add d_ik and d_ij and subtract d_jk, we get d_ik + d_ij - d_jk = 2a, so

a = (d_ik + d_ij - d_jk) / 2

To show that a is non-negative, we need to remember that a distance matrix satisfies the triangle inequality, that is, for any three nodes x, y, z, d_xy + d_yz >= d_xz. In our case, this means d_ij + d_ik >= d_jk, hence a is non-negative. We can use analogous ideas to derive the values of b, c and d.

For the general case, if this property holds for every possible set of 4 leaves (i, j, k, l), then for each such set there’s a permutation of the four leaves such that the tree induced by the paths between each pair of leaves looks like the specific example we showed above.

For at least one of these quadruplets, i and j will be neighbors in the reconstructed tree. With the computed values of a, b, c, d, e, we can merge i and j into their parent and generate the distance matrix for n-1 leaves, which we can keep doing until n = 4. We still need to prove that the resulting (n-1) x (n-1) matrix satisfies the 4-point condition if and only if the n x n one does.
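The four-point condition can be checked directly by enumerating quadruplets and labelings. Below is a Python sketch (illustrative, not from [1]); the example matrix is built from a hypothetical four-leaf tree with limb lengths 1, 2, 3, 4 and an internal edge of length 5.

```python
from itertools import combinations, permutations

def is_additive(D):
    """Check the four-point condition (1) for every quadruplet of indices."""
    for quad in combinations(range(len(D)), 4):
        # Some labeling (i, j, k, l) must satisfy
        # d_ij + d_kl <= d_ik + d_jl == d_il + d_jk.
        if not any(
            D[i][j] + D[k][l] <= D[i][k] + D[j][l] == D[i][l] + D[j][k]
            for (i, j, k, l) in permutations(quad)
        ):
            return False
    return True

# Distances from a four-leaf tree with limbs 1, 2, 3, 4 and internal edge 5.
D = [
    [0, 3, 9, 10],
    [3, 0, 10, 11],
    [9, 10, 0, 7],
    [10, 11, 7, 0],
]
print(is_additive(D))  # True
```

Note this takes O(n^4) quadruplets, so it’s only a sanity check, not an efficient algorithm.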

### Limb cutting approach

We’ll see next an algorithm for constructing a tree from an additive matrix.

The general idea is that even though we don’t know the structure of the tree that will “yield” our additive matrix, we are able to determine the length of the limb of any leaf. Knowing that, we can remove the corresponding leaf (and limb) from the unknown tree by removing the corresponding row and column of the distance matrix. We can then solve the (n-1) x (n-1) case. Once we have the solution (a tree) for the smaller problem, we can “attach” the leaf back into the tree.

To compute the limb length and where to attach it, we can rely on the following theorem.

Limb Length Theorem: Given an additive matrix D and a leaf j, limbLength(j) is equal to the minimum of

(2) (d_ij + d_jk - d_ik)/2

over all pairs of leaves i and k.

The idea behind the theorem is that if we remove parent(j) from the unknown tree, it will divide it into at least 3 subtrees (one being leaf j on its own). This means there exist leaves i and k in different subtrees. The path from i to k then has to go through parent(j), and the paths from i to j and from j to k are disjoint except for j’s limb, so we can conclude that:

d_ik = d_ij + d_jk - 2*limbLength(j)

which yields (2) for limbLength(j). We can show now that for i and k on the same subtree d_ik <= d_ij + d_jk - 2*limbLength(j), and hence

limbLength(j) <= (d_ij + d_jk - d_ik)/2

This means that taking the minimum of (2) over all pairs of leaves yields exactly limbLength(j).
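The theorem translates directly into a minimization over pairs. Here is a Python sketch (illustrative; the example matrix comes from a hypothetical four-leaf tree whose limbs have lengths 1, 2, 3, 4 with an internal edge of length 5):

```python
def limb_length(D, j):
    """limbLength(j) = min over pairs of leaves i, k (both != j) of
    (d_ij + d_jk - d_ik) / 2, per the Limb Length Theorem."""
    n = len(D)
    return min(
        (D[i][j] + D[j][k] - D[i][k]) / 2
        for i in range(n)
        for k in range(i + 1, n)
        if i != j and k != j
    )

# Additive matrix from a four-leaf tree with limb lengths 1, 2, 3, 4
# and an internal edge of length 5.
D = [
    [0, 3, 9, 10],
    [3, 0, 10, 11],
    [9, 10, 0, 7],
    [10, 11, 7, 0],
]
print([limb_length(D, j) for j in range(4)])  # [1.0, 2.0, 3.0, 4.0]
```

Each call inspects every pair of entries, matching the O(n^2) cost discussed below.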

Attaching leaf j back in. From the argument above, there is at least one pair of leaves (i, k) yielding the minimum limbLength(j) that belong to different subtrees when parent(j) is removed. This means that parent(j) lies on the path between i and k. We need to plug in j at some point on this path such that computing the distance from j to i and from j to k yields d_ij and d_jk respectively. This might fall in the middle of an edge, in which case we need to create a new node. Note that as long as the edges all have positive lengths, there’s only one location within the path from i to k where we can attach j.

Note: There’s a missing detail in the induction argument here. How can we guarantee that no matter what tree is returned from the inductive step, it is such that attaching j will yield consistent distances from j to all other leaves besides i and k?

This constructive proof gives us an algorithm to find a tree for an additive matrix.

Runtime complexity. Finding limbLength(j) takes O(n^2) time since we need to inspect every pair of entries in D. We can generate an n-1 x n-1 matrix in O(n^2) and find the attachment point in O(n). Since each recursive step is proportional to the size of the matrix and we have n such steps, the total runtime complexity is O(n^3).

Detecting non-additive matrices. If we find a non-positive limbLength(j), that is sufficient to conclude the matrix is non-additive, since any corresponding tree would have to represent the length of j’s limb. However, is it necessary? It could be that we find a positive limbLength(j) but, when attaching j back in, the distances don’t match.

The answer to this question goes back to the missing detail in the induction step, and I don’t know how to answer it.

### The Neighbor-Joining Algorithm

Naruya Saitou and Masatoshi Nei developed an algorithm, called Neighbor Joining, that also constructs a tree from an additive matrix, but has the additional property that for non-additive matrices it serves as a heuristic.

The idea behind it is simple: it transforms the distance matrix D into another n x n matrix, D*, such that the minimum non-diagonal entry, say d*_ij, corresponds to neighboring leaves (i, j) in the tree, which is generally not true of a distance matrix.

The proof that D* has this property is non-trivial and we will not provide it here. Chapter 7 of [1] has more details and the proof.

Given this property, we can find i and j such that d*_ij is minimal, compute the limb lengths limbLength(i) and limbLength(j), replace the two leaves with a single leaf m, and solve the problem recursively. In the tree returned by the recursive step we can then attach i and j to m, which becomes their parent.
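To make the transformation concrete, here is a Python sketch of the standard D* construction, d*_ij = (n-2)·d_ij − TotalDistance(i) − TotalDistance(j), where TotalDistance(i) is the sum of row i (this follows the formulation in [1]; the code itself is illustrative):

```python
def neighbor_joining_matrix(D):
    """D*_ij = (n - 2) * d_ij - TotalDistance(i) - TotalDistance(j).

    The minimum off-diagonal entry of D* is attained by a pair of
    leaves that are neighbors in the underlying tree."""
    n = len(D)
    total = [sum(row) for row in D]
    return [
        [(n - 2) * D[i][j] - total[i] - total[j] if i != j else 0
         for j in range(n)]
        for i in range(n)
    ]

# Additive matrix of a hypothetical four-leaf tree in which leaves
# 0, 1 are neighbors and leaves 2, 3 are neighbors.
D = [
    [0, 3, 9, 10],
    [3, 0, 10, 11],
    [9, 10, 0, 7],
    [10, 11, 7, 0],
]
Dstar = neighbor_joining_matrix(D)
print(Dstar[0][1], Dstar[0][2])  # -40 -30
```

In this example the minimum entries of D* are exactly the neighboring pairs (0, 1) and (2, 3), even though d_01 is not the minimum of D itself.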

### Conclusion

In this post we saw how to construct a tree from the distances between its leaves. The algorithms are relatively simple, but proving that they work is not. I got the general idea of the proofs but didn’t follow them in 100% of detail.

The idea of reconstructing the genealogical tree of all the species is fascinating and is a very interesting application of graph theory.

### References

[1] Bioinformatics Algorithms: An Active Learning Approach – Compeau, P. and Pevzner, P. – Chapter 7

[2] The Blind Watchmaker – Richard Dawkins

# Consistent Hashing

Daniel Lewin was an Israeli-American mathematician and entrepreneur. He was aboard the American Airlines Flight 11, which was hijacked by al-Qaeda during the September 11 attacks.

Tom Leighton is a professor (on leave) of Applied Mathematics at CSAIL @ MIT and an expert on algorithms for network applications.

Together, Lewin and Leighton founded the company Akamai, which was a pioneer in the business of content delivery networks (CDNs) and is currently one of the top players in the segment. One of the key technologies employed by the company was the use of consistent hashing, which we’ll present in this post.

### Motivation

One of the main purposes of the CDN is to be a cache for static data. Due to large amounts of data, we cannot possibly store the cache in a single machine. Instead we’ll have many servers each of which will be responsible for storing a portion of the data.

We can see this as a distributed key-value store, and we have two main operations: read and write. For the write part, we provide the data to be written and an associated key (address). For the read part, we provide the key and the system either returns the stored data or decides it doesn’t exist.

In scenarios where we cannot make any assumptions over the pattern of data and keys, we can try to distribute the entries uniformly over the set of servers. One simple way to do this is to hash the keys and get the remainder of the division by N (mod N), where N corresponds to the number of servers. Then we assign the entry (key, value) to the corresponding server.

The problem arises when the set of servers changes very frequently. This can happen in practice, for example, if servers fail and need to be put offline, or we might need to reintroduce servers after reboots or even add new servers for scaling.

Changing the value of N would cause an almost complete redistribution of keys to different servers, which is very inefficient. We need to devise a way to hash the keys such that adding or removing servers only requires a few keys to change servers.
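To see how bad the churn is, here’s a quick Python simulation (a sketch with made-up keys; md5 stands in for a generic uniform hash): going from 10 to 11 servers reassigns roughly 10/11 of the keys.

```python
import hashlib

def server_for(key, n):
    # Hash the key and assign it to one of n servers via mod n.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % n

keys = [f"key-{i}" for i in range(1000)]
before = {k: server_for(k, 10) for k in keys}
after = {k: server_for(k, 11) for k in keys}  # one server added

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys changed servers")  # the vast majority
```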

### Consistent Hashing

The key idea of the consistent hashing algorithm is to hash the servers’ keys into the same space as the entries’ keys. A possible key for a server could be its IP address.

Say that our hash function h() generates a 32-bit integer. Then, to determine to which server we will send a key k, we find the server s whose hash h(s) is the smallest that is larger than h(k). To make the process simpler, we assume the table is circular, which means that if we cannot find a server with hash larger than h(k), we wrap around and start looking from the beginning of the array.

Big blue circles are servers, orange circles are keys. Right: If we remove server S3, only entries corresponding to keys K5 and K4 need to be moved / re-assigned.

If we assume that the hash distributes the keys uniformly, including the server keys, we’ll still get a roughly uniform distribution of keys across servers.

The advantage comes when adding and removing servers. When adding a new server sx to the system, its hash will fall between two server hashes, say h(s1) and h(s2), on the circle. Only the keys between h(s1) and h(sx), which belonged to s2, will move to sx. Conversely, when removing a server sx, only the keys assigned to it will need to go to a different server, namely the server that immediately follows sx.

How can we find the server associated with a given key? The naive way is to scan the hashes linearly until we find a server hash. A more efficient way is to keep the server hashes in a balanced binary search tree, so we can find the smallest hash larger than h(k) in O(log n), while adding and removing servers is also an O(log n) operation.
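Before the Rust version, here is a compact sketch of the same idea in Python (illustrative; it uses a sorted list plus binary search in place of a balanced tree, and md5 stands in for the uniform hash):

```python
import bisect
import hashlib

def h(s):
    # Map a string to a 32-bit integer, standing in for a uniform hash.
    return int(hashlib.md5(s.encode()).hexdigest(), 16) % (2 ** 32)

class Ring:
    def __init__(self, servers):
        # Sorted (hash, server) pairs represent the circular table.
        self.points = sorted((h(s), s) for s in servers)
        self.hashes = [p for (p, _) in self.points]

    def server_for(self, key):
        # Successor lookup: first server hash >= h(key), wrapping around.
        i = bisect.bisect_left(self.hashes, h(key))
        return self.points[i % len(self.points)][1]

ring = Ring(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
smaller = Ring(["10.0.0.1", "10.0.0.2"])  # server 3 removed

keys = [f"user-{i}" for i in range(100)]
moved = [k for k in keys if ring.server_for(k) != smaller.server_for(k)]
# Only the keys that lived on the removed server had to move.
assert all(ring.server_for(k) == "10.0.0.3" for k in moved)
```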

### Implementation in Rust

We will provide an implementation of the ideas above in Rust as an exercise. We define the interface of our structure as

```rust
pub struct ConsistentHashTable {
    containers: rbtree::RBTree,
    entries: HashSet,

    // The hash function must have the property of mapping strings to
    // the space of u32 numbers with uniform probability.
    hash_function: fn(&String) -> u32,
}
```

Note that we’ll store the list of servers (containers) and keys (entries) in separate structures. We can store the entries in a simple hash table since we just need efficient insertion, deletion and lookup. For the containers we need insertion and deletion, but also finding the smallest element larger than a given value, which we’ll call the successor. As discussed above, a balanced binary search tree allows all these operations in O(log n), for example a Red-Black tree. I found this Rust implementation of a Red-Black tree [1].

Finally, we also include the hash function as part of the structure in case we want to customize the implementation (handy for testing), but we provide a default implementation.

To “construct” a new structure, we define a method new() in the implementation section, and use farmhash as the default implementation for the hash function [2].

```rust
impl ConsistentHashTable {
    pub fn new() -> ConsistentHashTable {
        return ConsistentHashTable {
            containers: rbtree::RBTree::new(),
            entries: HashSet::new(),
            hash_function: hash_function,
        };
    }

    // ...
}

fn hash_function(value: &String) -> u32 {
    return farmhash::hash32(&value.as_bytes());
}
```

The insertion and removal operations are already provided by the underlying data structures and are trivial to wrap in ours. The interesting method is the one determining the server corresponding to a given key, namely get_container_id_for_entry().

In there we need to traverse the Red-Black tree to find the successor of our value v. The Red-Black tree’s API doesn’t have such a method, only one to search for an exact key. However, due to the nature of binary search trees, we can guarantee that the smallest element greater than the searched value v will be visited while searching for v.

Thus, we can modify the search algorithm to include a visitor, that is, a callback that is called whenever a node is visited during the search. In the code below we start with a reference to the root, temp, and in a loop we keep traversing the tree depending on comparison between the key and the value at the current node.

```rust
fn find_node_with_visitor(
    &self,
    k: &K,
    mut visitor: F,
) -> NodePtr
where
    F: FnMut(&K),
{
    if self.root.is_null() {
        return NodePtr::null();
    }
    let mut temp = &self.root;
    unsafe {
        loop {
            let key = &(*temp.0).key;
            let next = match k.cmp(key) {
                Ordering::Less => &mut (*temp.0).left,
                Ordering::Greater => &mut (*temp.0).right,
                Ordering::Equal => {
                    visitor(&key);
                    return *temp;
                }
            };
            visitor(&key);
            if next.is_null() {
                break;
            }
            temp = next;
        }
    }
    NodePtr::null()
}
```


Let’s take a detour to study the Rust code a bit. First, we see the unsafe block [3]. It can be used to dereference a raw pointer. A raw pointer is similar to a C pointer, i.e. it points to a specific memory address. When we dereference the pointer, we access the value stored at that memory address. For example:

```rust
let mut num = 5;
let r1 = &num as *const i32;

unsafe {
    println!("r1 is: {}", *r1);
}
```


The reason we need the unsafe block in our implementation is that self.root is a raw pointer to RBTreeNode, as we can see in lines 1 and 4 below:

```rust
struct NodePtr(*mut RBTreeNode);

pub struct RBTree {
    root: NodePtr,
    // ...
}

// in find_node_with_visitor()
let mut temp = &self.root;
let key = &(*temp.0).key;
```


The other part worth mentioning is the type of the visitor function. It’s defined as

```rust
fn find_node_with_visitor(
    // ...
    k: &K,
    mut visitor: F,
) -> // ...
where
    F: FnMut(&K),
{
    // ...
    visitor(&key);
    // ...
}
```


It relies on several concepts from Rust, including Traits, Closures, and Trait Bounds [4, 5]. The where clause indicates that the type of visitor must implement FnMut(&K), which in turn means a closure with a single parameter of type &K (K is the type of the keys of the RB tree). There are three traits a closure can implement: Fn, FnMut and FnOnce. FnMut allows closures that capture and mutate variables in their environment (see Capturing the Environment with Closures). We need this because our visitor will update a variable defined outside of the closure, as we’ll see next.

We are now done with our detour into the Rust features realm, so we can analyze the closure we pass as the visitor. The idea is simple: whenever we visit a node, we check if its key is greater than our searched value and closer to it than the best candidate found so far. It’s worth noticing that we define closest_key outside of the closure but mutate it inside:

```rust
let mut closest_key: u32 = std::u32::MAX;

self.containers.get_with_visitor(
    &target_key,
    |node_key| {
        if distance(closest_key, target_key) > distance(*node_key, target_key)
            && *node_key > target_key
        {
            closest_key = *node_key;
        }
    },
);
```


We also need to handle a corner case: if the hash of the key is larger than all of the containers’ hashes, we wrap around our virtual circular table and return the container with the smallest hash:

```rust
// Every container key is smaller than the target_key. In this case we
// 'wrap around' the table and select the first element.
if closest_key == std::u32::MAX {
    let result = self.containers.get_first();
    match result {
        None => {
            return Err("Did not find first entry.");
        }
        Some((_, entry)) => {
            let container_id = &entry.id;
            return Ok(container_id);
        }
    }
}
```


The full implementation is on Github and it also contains a set of basic unit tests.

### Conclusion

The idea of a consistent hash is very clever. It relies on the fact that binary search trees can be used to search not only exact values (those stored in the nodes) but also the closest value to a given query.

In a sense, this use of binary trees is analogous to a common use of quad-trees, which is to subdivide the 2D space into regions. In our case we’re subdividing the 1D line into segments, or more precisely, subdividing a 1D circumference into segments, since our line wraps around.

I struggled quite a bit with Rust’s strict typing, especially around passing lambda functions as arguments and also setting up the tests. I found the mocking capabilities of the Rust toolchain lacking, and decided to use dependency injection for the hash function to make it easier to test. I did learn a ton, though!

### References

[1] GitHub: /tickbh/rbtree-rs
[2] GitHub: seiflotfy/rust-farmhash
[3] The Rust Programming Language book – Ch19: Unsafe Rust
[4] The Rust Programming Language book – Ch13: Closures: Anonymous Functions that Can Capture Their Environment
[5] The Rust Programming Language book – Ch13: Traits: Defining Shared Behavior

# Rust Memory Management

Graydon Hoare is a Software Developer who created the Rust programming language while working at Mozilla Research [1]. He has an interesting presence on the internet, for example responding to this question and on Twitter.

In this post we’ll talk about one of the key features of Rust, which I find the hardest to wrap my head around, which is its memory management.

### Motivation

In most programming languages we either have to manage memory allocation ourselves or rely on a complex garbage collector over which we have limited control and which can lead to unpredictable performance bottlenecks.

Rust has a set of constraints around memory allocation that results in deterministic, automatic memory management.

We’ll now delve into what these constraints are and how they enforce the guarantees necessary for efficient memory management by the runtime.

### Ownership Rules

Rust has the following basic rules around ownership:

• Each value in Rust has a variable that’s called its owner
• There can only be one owner at a time
• When the owner goes out of scope, the value will be dropped

They’re very basic but have deep implications in how we think about programs. Let’s analyze some of them.

### Assignment transfers ownership

To conform to the second rule, whenever we assign one variable to another, we transfer the ownership from the first variable to the second. For example:

```rust
let mut vec1 = vec![1, 2, 3];

// Performing changes is okay. vec1 is mutable
vec1.push(4);

// Assignment -> transferring ownership from vec1 to vec2
// This is the end of scope for vec1.
let mut vec2 = vec1;

// Error: vec1 cannot be read from/written to
println!("{}", vec1.len());

// vec2 can still perform further mutations
vec2.push(5);
```


In the example above, vec1 transfers ownership of its data to vec2. This means we can neither read from nor write to vec1 anymore; it’s as if it were out of scope (unless it is later assigned ownership of some other data).

By having a single owner we don’t have to keep track of references to a given object, as garbage collectors do, to know when we are allowed to free the memory. For example, if we had:

```rust
let mut vec1 = vec![1, 2, 3];
{
    let mut vec2 = vec1;
    vec2.push(5);
}
```


Because vec2 owns the vector allocated initially and goes out of scope after line 4, the runtime can free the memory safely, since we know vec1 cannot access that data after line 3.

Similarly, when we use a variable as an argument to a function, its data is transferred to the parameter. In the example below, vec1’s data is transferred to vec in mutate_vec(), so we cannot access it on line 9.

```rust
fn mutate_vec(mut vec: Vec<i32>) {
    vec.push(4);
}

let mut vec1 = vec![1, 2, 3];
mutate_vec(vec1);
// Error: ownership transferred in function call
println!("{}", vec1.len());
```


One way to give the ownership back to vec1 is to have the function return the argument.

```rust
fn mutate_vec(mut vec: Vec<i32>) -> Vec<i32> {
    vec.push(4);
    return vec;
}

let mut vec1 = vec![1, 2, 3];
// Transferring ownership to the function
vec1 = mutate_vec(vec1);
println!("{}", vec1.len());
```

### References & Borrowing

To avoid transferring ownership on assignment, we can use references. In the example below, vec2 “borrows” the data from vec1 but there’s no ownership transfer: we obtain the reference with &vec1 and get read access to the data via vec2.

```rust
let vec1 = vec![1, 2, 3];
{
    // & indicates a reference, so we are not transferring
    // ownership, just borrowing
    let vec2 = &vec1;
    println!("{}", vec2.len());
}
// vec1 still has access to the data, since there was no
// ownership transfer
println!("{}", vec1.len());
```


If we want write access, we need to make vec1 mutable and obtain a mutable reference to it via the &mut operator, as in the example below:

```rust
let mut vec1 = vec![1, 2, 3];
{
    let vec2 = &mut vec1;
    // We can mutate vec1's data via a mutable reference!
    vec2.push(4);
    println!("{}", vec2.len());
}
println!("{}", vec1.len());
```


However, if we try to access the data via vec1 while vec2 has a mutable reference to it, we’ll get an error.

```rust
let mut vec1 = vec![1, 2, 3];
{
    let vec2 = &mut vec1;
    // We can mutate vec1's data via a mutable reference!
    vec2.push(4);
    // Error: cannot access data when a mutable
    // reference is in place
    println!("{}", vec1.len());
}
println!("{}", vec1.len());
```


We can take as many (non-mutable) references as we want:

```rust
let mut vec1 = vec![1, 2, 3];
{
    let vec2 = &vec1;
    let vec3 = &vec1;
    println!("{} {}", vec2.len(), vec3.len());
}
println!("{}", vec1.len());
```


But once we try to obtain a mutable reference, we get an error:

```rust
let mut vec1 = vec![1, 2, 3];
{
    let vec2 = &vec1;
    let vec3 = &vec1;
    println!("{} {}", vec2.len(), vec3.len());
    // Error: cannot obtain mutable reference once
    // other references are in place
    let vec4 = &mut vec1;
}
println!("{}", vec1.len());
```


Read and write locks. Rust enforces these constraints to prevent race conditions in multi-threaded applications. In fact, the borrowing mechanism via references is very similar to read locks and write locks.

To recall: if some data holds a read lock, we can acquire as many additional read locks as we want, but we cannot acquire a write lock. This prevents inconsistency between multiple reads, since no writes can mutate the data in the meantime. Conversely, if some data holds a write lock, we can acquire neither read nor write locks.

We can see that a regular borrow acts like a read lock, while a mutable borrow acts like a write lock.
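To make the analogy concrete, here’s a minimal sketch using std::sync::RwLock from the standard library, which enforces at runtime the same discipline the borrow checker enforces at compile time:

```rust
use std::sync::RwLock;

fn main() {
    // RwLock guards a value: many readers or a single writer
    let lock = RwLock::new(vec![1, 2, 3]);
    {
        // multiple read locks can coexist, like multiple & borrows
        let r1 = lock.read().unwrap();
        let r2 = lock.read().unwrap();
        assert_eq!(r1.len() + r2.len(), 6);
        // calling lock.write() here would block, just as taking a
        // &mut borrow is rejected while & borrows are alive
    }
    {
        // a write lock is exclusive, like a single &mut borrow
        let mut w = lock.write().unwrap();
        w.push(4);
    }
    assert_eq!(lock.read().unwrap().len(), 4);
}
```

The difference is that RwLock resolves conflicts by blocking at runtime, while the borrow checker rejects the program at compile time.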

Dangling references. One potential issue with references arises if we return a reference to a variable that has gone out of scope.

```rust
fn dangle() -> &Vec<i32> {
    let v = vec![1, 2, 3];
    &v
} // Error: v is dropped here, so the reference would be dangling
```


Luckily, the compiler prevents this case with a compile error. It’s not always the case that we cannot return a reference, though: if the reference we’re returning was passed in as an argument, it’s still a valid one, for example:

```rust
fn no_dangle(v: &Vec<i32>) -> &Vec<i32> {
    println!("{}", v.len());
    return v;
}
```


However, if the function takes two reference parameters, we run into a compile error:

```rust
// Error: missing lifetime specifier; the compiler cannot tell
// whether the returned reference borrows from v1 or v2
fn get_largest(v1: &Vec<i32>, v2: &Vec<i32>) -> &Vec<i32> {
    if v1.len() > v2.len() {
        return v1;
    }
    return v2;
}
```


To see why the compiler cannot guarantee this is memory-safe, consider the following code calling get_largest:

```rust
let vec1 = vec![1, 2, 3];
let result;
{
    let vec2 = vec![];
    result = get_largest(&vec1, &vec2);
}
println!("{}", result.len());
```


Here we’re passing references to both vec1 and vec2, and getting one of them back. If vec2 happened to be larger, result would refer to it, and we’d try to access the data after vec2 went out of scope.

However, if result is only used while both vec1 and vec2 are in scope, it should theoretically be safe to allow calling get_largest:

```rust
let vec1 = vec![1, 2, 3];
{
    let vec2 = vec![];
    let result = get_largest(&vec1, &vec2);
    println!("{}", result.len());
}
```

In fact, it’s possible, and we’ll see how next, but first we’ll need to introduce some new terminology.

The lifetime of a variable represents the duration in which the variable is valid. In the example below, the lifetime of a spans the entire outer block, b and c are only valid within the inner block, and d is valid from its declaration to the end of the outer block. Note that a’s lifetime contains all the other lifetimes and b’s lifetime contains c’s.

```rust
{
    let a = 1;
    {
        let b = 2;
        let c = 3;
    } // b and c go out of scope here
    let d = 4;
} // a and d go out of scope here
```


We can annotate a function, using a syntax similar to generics, to parametrize the arguments by their lifetimes. For example, to include the lifetimes of the arguments in get_largest(), we can do:

```rust
fn get_largest_with_lifetime<'a>(
    v1: &'a Vec<i32>,
    v2: &'a Vec<i32>,
) -> &'a Vec<i32> {
    if v1.len() > v2.len() {
        return v1;
    }
    return v2;
}
```

This essentially binds each reference to the lifetime specified by 'a. Note it forces both parameters to have the same lifetime, and the return type to have that same lifetime as well.

Now, if we replace get_largest() with get_largest_with_lifetime(), we won’t get compile errors. In the example below, result is constrained by the common lifetime of vec1 and vec2, which is vec2‘s (the shorter one). This means we’re fine using result within the inner block.

```rust
let vec1 = vec![1, 2, 3];
{
    let vec2 = vec![];
    let result = get_largest_with_lifetime(&vec1, &vec2);
    println!("{}", result.len());
}
```

### Conclusion

The Rust documentation is very detailed and well written, and all the concepts presented in this post are explained at length there. Here I presented them in my own words and with different examples (vec instead of string).

I also tried to group the topics around memory management, which differs from the documentation: there, lifetimes are not covered under ownership, and I skipped the discussion of generics.

### References

[1] Wikipedia – Rust (programming language)
[2] Rust Documentation: What is Ownership?
[3] Rust Documentation: References & Borrowing

# DNA Assembly

In this post we’ll discuss the problem of reconstructing a DNA segment from its fragments, also known as DNA assembly.

Context. When we first talked about DNA sequencing, we learned that there’s no good way to “scan” the whole series of nucleotides from a DNA with current technology. What can be done is to break the target segment into many small (overlapping) segments and then rely on computers to help with the task of reconstructing the original DNA sequence from these segments.

Let’s start by making some simplifying assumptions about the nature of the fragments: first, that every fragment has the same length, and second, that we have every possible fragment of that length.

For instance, if our sequence is TATGGGGTGC, all possible fragments of length 3 are: ATG, GGG, GGG, GGT, GTG, TAT, TGC, TGG.

Note that the fragments overlap with each other. For example, TAT and ATG share the overlap AT. This is crucial for solving the problem: if there were no overlaps, it would be impossible to order the fragments to obtain the original sequence, since there would be no “link” between any two fragments.

Let’s state the problem more formally given these constraints.

### The String Reconstruction Problem

Definitions. A k-mer of a string S is any substring of S with length k. The String Reconstruction Problem consists in, given the set of all k-mers of S, reconstructing the string S.

Reusing the example from above, if we are given ATG, GGG, GGG, GGT, GTG, TAT, TGC, TGG, we could reconstruct the string TATGGGGTGC. We’ll now see a way to solve this problem.
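To make the definition concrete, here’s a small sketch in Rust (the function name composition is my own) that computes the k-mer composition of a string; it assumes k is at most the string’s length, and sorting is just to give a canonical order:

```rust
// Returns the k-mer composition of s: all substrings of length k,
// sorted lexicographically. Assumes k <= s.len().
fn composition(s: &str, k: usize) -> Vec<&str> {
    let mut kmers: Vec<&str> = (0..=s.len() - k).map(|i| &s[i..i + k]).collect();
    kmers.sort();
    kmers
}
```

For the example, composition("TATGGGGTGC", 3) yields exactly the list above: ATG, GGG, GGG, GGT, GTG, TAT, TGC, TGG.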

Solution. Assuming a solution exists, it consists of an (ordered) sequence of the k-mers such that adjacent k-mers overlap in k-1 positions. For the example above, the permutation is

TAT, ATG, TGG, GGG, GGG, GGT, GTG, TGC

And two adjacent k-mers such as TGG and GGG, overlap in k-1 positions (GG).

We can model this as a graph problem: define a directed graph where each vertex corresponds to a k-mer, and an edge (u, v) exists if and only if the length-(k-1) suffix of u equals the length-(k-1) prefix of v, in other words, if u overlaps with v.

Now, if we can find a path visiting each vertex exactly once, it will spell a valid reconstructed string. This is known as the Hamiltonian path problem, which is NP-hard for general graphs.

Instead, we can model the problem with a different graph: for each k-mer, we have a vertex corresponding to its prefix of length k-1 and another corresponding to its suffix of length k-1. For example, for the k-mer TGG, there would be vertices TG and GG. There’s then an edge from u to v if overlapping u and v in k-2 positions yields a k-mer in the input. In the example above, there’s an edge from TG to GG because TGG is a k-mer. Note we can have repeated (multiple) edges.

In this new graph, if we can find a path visiting each edge exactly once, we’ll find a reconstructed string to the set of k-mers. To see why, we can observe that each edge in this new graph is a k-mer and two consecutive edges must overlap in k-1 positions (the overlap being the vertex that “links” these two edges). The graph for the example we discussed above can be seen below:

Graph representing the k-mer set: ATG, GGG, GGG, GGT, GTG, TAT, TGC, TGG

Note that this second graph is the line graph of the first one, in a similar fashion to how a de Bruijn graph of dimension n is the line graph of the one of dimension n-1. In fact, these graphs are subgraphs of de Bruijn graphs.

As we saw in our discussion of Eulerian Circuits, finding a path that visits every edge exactly once (an Eulerian path) is a much easier problem.
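Putting the two steps together, reconstruction amounts to building the graph above and finding an Eulerian path. Below is a minimal sketch in Rust (the function name reconstruct is my own), using Hierholzer’s algorithm and assuming the k-mer set admits a reconstruction, i.e. the graph has an Eulerian path:

```rust
use std::collections::HashMap;

// Reconstructs a string from its k-mer composition by building the
// de Bruijn-like graph ((k-1)-mers as vertices, k-mers as edges) and
// finding an Eulerian path via Hierholzer's algorithm.
fn reconstruct(kmers: &[&str]) -> String {
    // adjacency list: prefix -> list of suffixes (one per k-mer)
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    let mut in_deg: HashMap<&str, i32> = HashMap::new();
    let mut out_deg: HashMap<&str, i32> = HashMap::new();
    for kmer in kmers {
        let (prefix, suffix) = (&kmer[..kmer.len() - 1], &kmer[1..]);
        adj.entry(prefix).or_default().push(suffix);
        *out_deg.entry(prefix).or_default() += 1;
        *in_deg.entry(suffix).or_default() += 1;
    }
    // start at the vertex with out-degree - in-degree = 1, or at an
    // arbitrary vertex if all degrees are balanced (Eulerian circuit)
    let start = out_deg
        .keys()
        .find(|v| out_deg[*v] - in_deg.get(*v).unwrap_or(&0) == 1)
        .copied()
        .unwrap_or_else(|| *out_deg.keys().next().unwrap());
    // Hierholzer: follow unused edges, backtracking when stuck; the
    // reversed backtrack order is an Eulerian path
    let mut stack = vec![start];
    let mut path: Vec<&str> = vec![];
    while let Some(&v) = stack.last() {
        match adj.get_mut(v).and_then(|edges| edges.pop()) {
            Some(next) => stack.push(next),
            None => path.push(stack.pop().unwrap()),
        }
    }
    path.reverse();
    // spell the string: the first (k-1)-mer plus the last character
    // of each subsequent vertex
    let mut result = path[0].to_string();
    for vertex in &path[1..] {
        result.push(vertex.chars().last().unwrap());
    }
    result
}
```

For the running example, the Eulerian path TA → AT → TG → GG → GG → GG → GT → TG → GC spells TATGGGGTGC.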

#### Dealing with Ambiguity

Even if we are able to solve the String Reconstruction problem, we might not end up with the right DNA segment. In [1], the authors give the example TATGCCATGGGATGTT, which has the same 3-mer composition as TAATGGGATGCCATGTT. Let’s look at strategies employed to work around this problem.

### The String Reconstruction from Read-Pairs Problem

While it’s currently infeasible to generate longer fragments, which would reduce ambiguity, it is possible to obtain what are called read-pairs. These are pairs of k-mers that are separated by a distance of exactly d in the target segment.

For example, TGC and ATG are a pair of 3-mers separated by distance 1 in TATGCCATGGGATGTT. We refer to a pair of k-mers separated by distance d as a (k, d)-mer, or as (pattern1 | pattern2) when we omit the distance.

Solution. We can construct a de Bruijn-like graph for this problem too, which we’ll call the paired de Bruijn graph. First, let’s define the prefix and suffix of a (k, d)-mer. Given a (k, d)-mer of the form (a1, ..., ak | b1, ..., bk), its prefix is (a1, ..., ak-1 | b1, ..., bk-1) and its suffix is (a2, ..., ak | b2, ..., bk).
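These definitions transcribe directly into code. Below is a minimal sketch with a hypothetical PairedKmer type (the name and representation are my own, not from [1]): the prefix drops the last character of both reads, and the suffix drops the first.

```rust
// A (k, d)-mer: a pair of k-mers separated by distance d in the
// target segment (d is implicit here, since prefix/suffix preserve it)
#[derive(Clone, PartialEq, Debug)]
struct PairedKmer {
    first: String,
    second: String,
}

impl PairedKmer {
    // (a1..ak | b1..bk) -> (a1..ak-1 | b1..bk-1)
    fn prefix(&self) -> PairedKmer {
        PairedKmer {
            first: self.first[..self.first.len() - 1].to_string(),
            second: self.second[..self.second.len() - 1].to_string(),
        }
    }
    // (a1..ak | b1..bk) -> (a2..ak | b2..bk)
    fn suffix(&self) -> PairedKmer {
        PairedKmer {
            first: self.first[1..].to_string(),
            second: self.second[1..].to_string(),
        }
    }
}
```

For instance, the prefix of (AGC | CAT) is (AG | CA) and its suffix is (GC | AT).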

For every (k, d)-mer, we’ll have one vertex corresponding to the prefix and one to the suffix of this (k, d)-mer. There’s an edge from vertex u to vertex v if there’s a (k, d)-mer whose prefix is u and suffix is v.

Similar to the solution to the String Reconstruction problem, we can find an Eulerian path, but in this case it might not yield a valid solution. In [1] the authors provide an example:

Consider the set of (2, 1)-mers given by (AG|AG), (AG | TG), (CA | CT), (CT | CA), (CT | CT), (GC | GC), (GC | GC), (GC | GC), (TG | TG).

After constructing the graph, one of the possible Eulerian paths is (AG|AG) → (GC | GC) → (CA | CT) → (AG | TG) → (GC | GC) → (CT | CT) → (TG | TG) → (GC | GC) →  (CT | CA) which spells AGCAAGCTGCTGCA, which is a valid solution.

However, another valid Eulerian path, (AG|AG) → (GC | GC) →  (CT | CT) →  (TG | TG)  → (GC | GC) → (CA | CT) → (AG | TG) → (GC | GC) →  (CT | CA) does not yield a valid string.

In [1] the authors don’t provide an explicit way to overcome this issue but they go on to describe how to enumerate all Eulerian paths, which seems to suggest a brute-force approach.

### Practical Challenges

Missing fragments. One of our assumptions, that all fragments of the sequence are present, doesn’t hold for state-of-the-art sequencers.

A technique to address this issue is to break the fragments into smaller ones until we get full coverage.

Left: 10-mers not providing full coverage. Right: 5-mers obtained from 10-mers and having full coverage.

This trades off coverage against ambiguity, since smaller k-mers are more likely to contain repeats, which might not lead to a unique solution.

Typos. Another limitation of sequencers is that they can misread nucleotides. If we perform multiple reads – some containing the correct nucleotides, some not – we’ll end up with a graph where some paths are valid and some are not. It’s possible to remove them via heuristics but they’re not perfect and can lead to the removal of valid paths.

### Conclusion

While following the textbook [1], I felt a bit lost due to the many detours and kept losing track of the main problem being solved. The textbook is meant to be accessible to people without prior knowledge of Computer Science, so it does need to build up the basics of concepts such as Graph Theory.

One thing I missed from the content was a section on experimental results. Bioinformatics is a highly applied branch of Computer Science, and all of these methods are heuristics or based on approximate models. I’d be interested in knowing how well they perform in practice.

What I liked most about the presentation style is that it first provides a simple solution, describes its issues, and then provides a better one. This helps in understanding why a given algorithm is the way it is.

Reconstructing a string using (k,d)-mers in an efficient way seems like an open problem, given the solution presented requires brute force in the worst case. I wonder if there has been any progress since.

### References

[1] Bioinformatics Algorithms: An Active Learning Approach – Compeau, P. and Pevzner P.
[2] Wikipedia – Sequence Assembly

# 2018 in Review

This is a meta-post to review what happened in 2018.

## Posts Summary

This year I set out to learn about Bioinformatics. I completed the Bioinformatics class on Coursera. Under this umbrella I wrote about Cell Biology and DNA Sequencing. I’m on my way to write about DNA Fragment Assembly, but wanted to work out the theory behind it first, which led me to Eulerian Circuits and De Bruijn Graphs.

I was curious about current technologies such as Blockchain and Two-factor Authentication and wrote a bit about them.

One of my resolutions for last year was to learn the Rust programming language. I implemented the code from few of my posts using it, including the HyperLogLog data structure and a solver for a game called Bulls and Cows. I still ventured a bit with OCaml by learning BuckleScript (JavaScript with OCaml syntax).

I continued my slow progress in studying Distributed Systems. This year I read Google’s F1 Database paper and wrote about LSM Trees.

Besides BuckleScript, I haven’t dedicated much time to Web Development topics; the only other one was the Layout properties of CSS.

The most popular post is still the 2014 Introduction to the Parsec Library, with 1.3k visits. From this year, the recreational math problem, Bulls and Cows was most viewed. Overall the blog had a total of 9.6k visitors.

I kept the resolution to post once a month on average. The blog completed 6 years with 79 posts.

## Resolutions for 2019

I’ll repeat my resolutions from 2018 for 2019. I don’t think I’ve learned nearly enough Rust, especially around memory management, and I’ve only scratched the surface of Bioinformatics. Besides DNA analysis, I learned about other problems, like protein folding, that seem exciting.

I haven’t done any mobile projects and only read one paper, so I’ll put these on the bucket list as well.

## Personal

The end of the year is a good time to look back and remember all the things I’ve done besides work and the technical blog.

### Trips

I enjoy traveling and 2018 had plenty of trips. I had never been to Europe before, and this year I happened to go twice! Once for work, to England, and another time for pleasure, to Greece.

In England I explored mostly around London including the cities of Bath and Dover.

Top: Tower Bridge; Iconic double-decker bus; Rosetta Stone at the British Museum. Bottom: Roman Baths; Dover Cliffs; Windsor Castle.

The trip to Greece included Athens, Santorini and a train ride to Kalambaka, to see the Meteora monasteries.

Top: Athens seen from the Acropolis, the Parthenon, and Santorini. Bottom: Temple of Zeus in Athens, a monastery on top of a mountain and the Akrotiri museum.

There were also trips around the US, including Albuquerque in New Mexico, New Orleans in Louisiana and Los Angeles in California.

Top: Taos Pueblo near Santa Fe NM, Petroglyphs in Albuquerque NM, Venice Canals in Los Angeles. Bottom: Getty Museum in Los Angeles; Jackson Square in Louisiana; French Quarter in Louisiana.

There was also a trip to Montana, to the Glacier National Park. I really like National Parks and I’m glad to have visited this one, which is very beautiful.

Glacier National Park: Iceberg Glacier, Mountain Goat and Bearhat Mountain

### Books

This year I read a lot of non-fiction, especially science-related. My favorite science books were:

• The Blind Watchmaker by Richard Dawkins. He delves into Darwin’s theory of evolution and concludes it’s the most probable explanation for the existence of life on Earth.
• Genome: The Autobiography of a Species in 23 Chapters by Matt Ridley is a highly engaging and elucidating tour of our genome. Each chapter is dedicated to one chromosome, with an example of a trait or disease related to it.
• The Ghost Map: The Story of London’s Most Terrifying Epidemic – and How It Changed Science, Cities, and the Modern World by Steven Johnson. It describes some fascinating detective work by a doctor at a time when we knew a lot less about diseases.

In the realms of humanities,

• Enlightenment Now: The Case for Reason, Science, Humanism, and Progress by Steven Pinker is a thorough presentation of facts and data to back the claim that, despite localized setbacks and short-term regressions, the world has been becoming more progressive. The idea that stuck with me most is that each new generation tends to be more progressive than their parents, which shines an optimistic light on a more humane future.
• Why Nations Fail: The Origins of Power, Prosperity, and Poverty makes the claim that the success of a nation has nothing to do with geography, race or culture. There is a lot of world history in this book and I learned a bunch about different countries.

I also enjoyed some biographies,

• I Am Malala – the inspiring story of a Pakistani girl who went on to win the Nobel Peace Prize in 2014. A great overview of the history of Pakistan and of the life of a civilian under the Taliban regime.
• Born a Crime – the comedian Trevor Noah has had a pretty eventful life. The book covers the recent history of South Africa, especially Apartheid. He provides an interesting perspective on growing up during the later part of that regime as the son of a black mother and a white father.

and reading stuff related to trips,

• For Greece, I chose The King Must Die by Mary Renault. It is a fiction set in the mythical kingdom of Minos in Crete. I really like that it alludes to Greek myths while the story itself does not rely on supernatural elements.
• For Montana, I picked A River Runs Through It by Norman Maclean. It’s a short story set in rural Montana and a constant theme is family relations, fishing and the big questions of life.
• A Modern History of Japan by Andrew Gordon. I was in Japan in 2017, not in 2018, but I only managed to finish the book this past year. I learned a bunch about the recent Japanese history, but not in enough detail to change how I thought about the experiences from my trip.

### Movies

I haven’t watched many movies but really enjoyed Coco and Crazy Rich Asians.