# Browser Performance

In this post we’ll go over the topic of browser performance. We’ll cover tools and techniques to identify, measure and improve performance on the client side of web applications.

Assumptions. Our tests use the Chrome browser, so the tools described here are specific to it (the general techniques are browser-agnostic). All screenshots were taken on either a Linux machine running GNOME or the latest macOS, though things should look similar in Chrome on other platforms. The Chrome version used is v80.

## Analyzing Performance

### FPS Meter

This widget tracks the number of frames per second. To enable it,

In the … icon menu, click on Show console drawer:

In the Rendering tab, check the box FPS meter:

And it should look like this:

The yellow bar represents 60 FPS, the threshold below which the page might look sluggish. The bar chart on the right is a histogram view of the visible line chart: bars at the bottom represent low FPS and bars at the top high FPS. The numbers at the top show the range (max/min FPS).

Initially the numbers will be low because there is no previous data, since the meter only measures FPS when the page re-renders. To prime the data you need to manually trigger many visual updates, for example by resizing the window or performing specific actions in your application:

### Flame Chart

The flame chart is a very useful visualization for understanding performance. It allows identifying expensive functions at a glance (long bars) and drilling down into sub-functions quickly.

To use this, download and open (locally) the on_click.html example. Click on the “Performance” tab. Click the record button. Click the “Click Me” button on the page. After some number is displayed, click on Stop on the recording. We should get a chart similar to the one below:

Flame chart view

At the top is an area chart, an overview of the processing happening below; its y-axis is CPU usage.

The second chart is the flame chart. Its bars correspond to function calls (e.g. Event:click), and below each bar are the functions it called. We see a generic Function Call because we used an anonymous arrow function like below:
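The handler in on_click.html might look something like this (a hypothetical sketch — the function and element names are made up):

```javascript
// A deliberately expensive computation, so it shows up as a long bar
// in the flame chart.
function expensive() {
  let sum = 0;
  for (let i = 0; i < 1e7; i++) {
    sum += Math.sqrt(i);
  }
  return sum;
}

// Wired to the button with an anonymous arrow function, which is why
// the flame chart shows a generic "Function Call" (browser-only):
// document.querySelector('button').addEventListener('click', () => {
//   document.querySelector('#result').textContent = expensive();
// });
```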

Another thing to notice: if we zoom in enough, we see what look like multiple calls to the expensive() function, even though in our code it is a single continuous function. I'm not sure why Chrome does that, but I assume it has to do with garbage collection kicking in.

There are other interesting alternative views at the bottom. The doughnut chart gives a breakdown on types of work being done. In our case it’s just a mix of scripting and idle:

The Call Tree view is great for the details. It allows looking at specific call chains and how much each is contributing. In our example, we can see Event: click is an expensive operation but it’s the expensive() function that is the culprit (based on the Self time):

To find specific bottlenecks the Bottom-Up view might be more interesting. If the Call Tree shows a nested tree from the root, the Bottom-up view shows the internal nodes but sorted by Self Time:

### Performance Metrics

When we record a trace that includes a page load, it also includes some standard metrics such as First Contentful Paint (FCP). The definitions of some of them are given in [4].

It’s possible to write our own events that show up in Timings. We just need to add markers and measure durations using the performance API [6]:
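A sketch of how that might look (the marker names are arbitrary):

```javascript
// Some work we want to time.
function expensiveWork() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) sum += i;
  return sum;
}

performance.mark('expensive-start');
expensiveWork();
performance.mark('expensive-end');

// Creates an "expensive" entry spanning the two marks; it shows up in
// the Timings row of the Performance tab.
performance.measure('expensive', 'expensive-start', 'expensive-end');
const [measure] = performance.getEntriesByName('expensive');
console.log(measure.duration);  // duration in milliseconds
```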

Then it shows up in the Timeline row:

The result of performance.measure() contains useful information:

### Network

When we first open the Network tab it immediately starts recording network activity, but since we want a full scan of the page’s requests, let’s stop it, clear the data, start recording again, reload the page, and then stop recording to get a report.

The sample below is from imdb.com:

Network tracing for imdb.com

Similar to the performance tab, this view provides a timeline which we can zoom in. The table below lists all network requests performed by the browser, including static resources like images, JavaScript files and async requests to fetch more data.

We can filter by request type at the top:

Here we could look for oversized images (it’s unlikely you’ll need to show a gigantic high-resolution image by default – consider sending a thumbnail with a link to the larger image) or slow async requests.

Async requests are of type XHR (XmlHttpRequest), so we can filter by that:

The bars in the last column give an indication not only of the time a request took, but also of its dependencies. Sometimes an async request doesn’t start until a specific one finishes. If we hover over the bar of the render request, we see:

It has a nice breakdown on the timings, including DNS lookup and SSL latency (authentication). It also includes queuing time, the amount of time waiting TTFB, and then the content download.

The queuing time indicates that the request was ready to go but had to wait on some other dependency or on browser limits. Browsers usually cap the number of active XHRs per domain (fewer than 10) [7].

TTFB stands for “Time To First Byte”: the time it takes to start receiving data. It’s indicative of how long the server took to process the request.

Content download is the time spent fetching the resource/data. It’s correlated with the size of the data and the network bandwidth/speed.

## Measuring Performance

### Duration from start

The performance API keeps track of specific events in the page loading lifetime, so we can measure durations compared to a specific point in time.

performance.now() [9] provides the timestamp since the page started to load (in rough terms – see this more precise definition).
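For example (a minimal sketch):

```javascript
// performance.now() returns a high-resolution millisecond timestamp
// relative to the time origin (roughly when the page started loading).
const start = performance.now();
let sum = 0;
for (let i = 0; i < 1e6; i++) sum += i;
const elapsedMs = performance.now() - start;
console.log(`summing took ${elapsedMs} ms`);
```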

Side note: browsers purposefully reduce the precision of these APIs to protect against attacks that rely on high-precision timing information [10].

### Custom Durations

As we mentioned earlier (measurement.js), we can measure custom durations by using performance.mark() and performance.measure() [5].

### Benchmarking

We can use the APIs above to create simple benchmarks.

Before we start: always beware of over-optimizing code! You might end up spending a lot of time to improve performance against a specific benchmark, browser or even JavaScript engine version. That’s part of the reason V8 retired the benchmark tool Octane.

That said, if you identify code in the critical path that executes the same operation on the order of millions of times, it might be worth optimizing. One example is changing from array methods like forEach() and reduce() to plain for-loops. We can perform several runs of the same function and take the average. I opted to use a random number generator to avoid any caching-related optimizations.
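The benchmark can be sketched as follows (with a 1-million-element array here to keep the sketch quick; the measurements below used 10 million):

```javascript
// Compare summing strategies, averaging over several runs.
function benchmark(name, sumFn, runs = 10) {
  // Random data avoids caching-related optimizations.
  const data = Array.from({ length: 1e6 }, () => Math.random());
  const times = [];
  for (let r = 0; r < runs; r++) {
    const start = performance.now();
    sumFn(data);
    times.push(performance.now() - start);
  }
  const avg = times.reduce((a, b) => a + b, 0) / runs;
  console.log(`${name}: ${avg.toFixed(2)}ms`);
  return avg;
}

const forLoopAvg = benchmark('for loop', (a) => {
  let s = 0;
  for (let i = 0; i < a.length; i++) s += a[i];
  return s;
});
benchmark('reduce', (a) => a.reduce((s, x) => s + x, 0));
benchmark('forEach', (a) => {
  let s = 0;
  a.forEach((x) => { s += x; });
  return s;
});
```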

The function generates an array with 10 million random floats and calculates the sum. It then runs the code 10 times. The results follow:

• for loop 12ms
• reduce 156ms
• forEach: 208ms

On Firefox it yields

• for loop 40ms
• reduce 48ms
• forEach 64ms

We can see a for loop is ~92% and ~16% faster than reduce on Chrome and Firefox respectively. Some takeaways:

• It’s highly browser-dependent! Don’t over-optimize code for one browser.
• Although the relative gains are substantial, the absolute improvement is only ~150ms for an array of 10 million items – how practical is this? The creation and allocation of the array probably takes more time than that.
• This is pure computation with no DOM access, so consider offloading it to Web Workers

## Techniques

Here we discuss some techniques to improve performance. The first two are general purpose optimizations but we include here for completeness.

### Reduce Algorithmic Complexity

As Knuth famously put it [15]:

Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.

Before optimizing code, we should consider optimizing the algorithm, that is, reducing its big-O complexity. In a recent project, we found several wins by identifying hidden O(n^2) code and converting it to O(n).

We often need to make this tradeoff: to improve speed of code (computation) we have to use more memory. This is actually how we improved the O(n^2) to O(n): by keeping a look-up hash table for fast access.

In JavaScript a quick look-up table can be implemented via Object, or ES6 Map and Set. Other techniques involve caching (memoization).
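As a sketch of the pattern (the lists are hypothetical): replacing a nested scan with a pre-built Set turns an O(n^2) intersection into O(n), trading memory for speed.

```javascript
// O(n^2): for every element of listA, scan listB.
function intersectNaive(listA, listB) {
  return listA.filter((x) => listB.includes(x));
}

// O(n): build a Set once, then each membership check is O(1).
function intersectFast(listA, listB) {
  const setB = new Set(listB);
  return listA.filter((x) => setB.has(x));
}

console.log(intersectFast([1, 2, 3], [2, 3, 4]));  // [2, 3]
```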

In [8] Addy Osmani and Mathias Bynens suggest that if you’re sending large JSON payloads over the network, it’s better to send them JSON-encoded instead of as JavaScript object literals:

Because the JSON grammar is much simpler than JavaScript’s grammar, JSON can be parsed more efficiently than JavaScript
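In code, the suggestion looks roughly like this (the payload contents are made up):

```javascript
// Embedding the payload as an object literal: parsed as JavaScript.
// const data = { name: 'example', values: [1, 2, 3] };

// Embedding it as a JSON string: the simpler grammar parses faster
// for large payloads.
const data = JSON.parse('{"name":"example","values":[1,2,3]}');
console.log(data.values.length);  // 3
```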

### Reduce CSS selector complexity

In [17], Lewis suggests reducing the complexity of CSS selectors – for example, replacing a rule that relies on nested pseudo-class selectors with a single class applied directly to the target element. The reason being:

Roughly 50% of the time used to calculate the computed style for an element is used to match selectors (…)
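The snippets Lewis uses are roughly along these lines (a reconstruction from [17] – the class names may differ):

```css
/* Complex: to match, the browser must check whether an element with
   class .title has an ancestor that is the last child with class .box. */
.box:nth-last-child(-n+1) .title {
  /* ... */
}

/* Simple: one class applied directly to the target element. */
.final-box-title {
  /* ... */
}
```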

### Use Flexbox

In [18] Lewis suggests that flexbox is more efficient than other layout alternatives such as float positioning. One caveat made in the article is that it is not widely supported. As of 2020, however, that no longer seems to be the case.

### Avoid reading DOM properties right after mutating them

In [18] Lewis explains that the Layout phase happens after the JavaScript runs:

This means that changing a DOM property and reading it right afterwards forces the browser to block and perform layout before returning the new value. If possible, do the read before the write.
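A sketch of the rule (the #box element is hypothetical; a stub keeps the snippet self-contained outside a browser):

```javascript
// In a browser `box` would be a real DOM element.
const box = (typeof document !== 'undefined')
  ? document.querySelector('#box')          // hypothetical element
  : { style: {}, offsetWidth: 100 };        // stand-in for the sketch

// Bad: the write invalidates layout, so the read right after forces a
// synchronous layout pass.
box.style.width = '200px';
const w = box.offsetWidth;

// Better: read first, then write.
const width = box.offsetWidth;
box.style.width = `${width * 2}px`;
```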

### Avoid some advanced CSS effects

This is more of a tradeoff between performance and design. In [19] Lewis mentions that properties such as box-shadow can be expensive.

It’s unclear however which specific CSS properties are expensive.

### Use Layers (but sparingly)

In [19] Lewis suggests creating layers for elements that are updated very often, to prevent these updates from triggering repaints in the rest of the page. One way to force the creation of a layer is:
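For example, the will-change hint (with a no-op transform as an older fallback):

```css
/* Promotes the element to its own compositor layer. */
.moving-element {
  will-change: transform;
}

/* Older fallback: a no-op 3D transform forces layer creation. */
.moving-element {
  transform: translateZ(0);
}
```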

These should only be created if there are clear improvements in performance since, like indexes in databases, they come with memory and management costs.

### Use requestAnimationFrame()

Since JavaScript’s main thread is also used for rendering, running an expensive function might cause the UI to freeze. To prioritize rendering, we can schedule expensive visual work with requestAnimationFrame(). According to [3]:

When visual changes are happening on screen you want to do your work at the right time for the browser, which is right at the start of the frame. The only way to guarantee that your JavaScript will run at the start of a frame is to use requestAnimationFrame

In [20], Lewis provides a more specific case for the use of requestAnimationFrame(): on scroll input handlers since they block the UI.
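A common pattern is to coalesce scroll events into at most one visual update per frame (a sketch; requestAnimationFrame is stubbed so the snippet runs outside a browser):

```javascript
const raf = (typeof requestAnimationFrame !== 'undefined')
  ? (cb) => requestAnimationFrame(cb)
  : (cb) => setTimeout(cb, 16);  // stand-in outside a browser

let scheduled = false;
function onScroll() {
  // Many scroll events may fire per frame; schedule the expensive
  // visual work only once.
  if (scheduled) return;
  scheduled = true;
  raf(() => {
    scheduled = false;
    updatePositions();
  });
}

function updatePositions() {
  // In a real page: read layout values first, then write styles.
}

onScroll();
onScroll();  // coalesced: only one frame callback is pending
```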

There’s one caveat, however: input handlers execute before requestAnimationFrame() callbacks, so if a handler makes style changes that we then read inside requestAnimationFrame(), we run into the problem described in Avoid reading DOM properties right after mutating them.

### Web Workers

As we discussed previously, web workers run in a separate thread, which prevents your application from freezing because the main thread is blocked. To understand why this happens, it’s important to know the basics of how the JavaScript execution model works.

The major caveat is that web workers don’t have access to the DOM but they do have I/O access (like XHR).

### Web Assembly

It’s possible to use lower level languages like C++ and Rust and run them on the browser by compiling them to an intermediate representation called Web Assembly (WASM).

This guide explains how to compile Rust code to Web Assembly, package it as an npm package and execute it on the browser via Webpack. I’ve tried myself and the process is relatively straightforward.

Portability. Web Assembly’s official website [14] claims it is portable across different platforms.

Existing libraries. One of the major selling points of Web Assembly is being able to run existing libraries written in languages like C/C++. In [11] PSPDFKit’s team showcases their port of a PDF renderer that previously had to run on the server and now can run in the browser by shipping the binary as wasm. They performed some benchmarks, getting mixed results in terms of performance, but had several follow-ups with browser vendors and were able to improve it (though no subsequent analysis was available in the post).

Web Assembly’s website [13] lists other use cases.

Performance. The main benefit I can see for starting a project from scratch in, say, C++ or Rust, would be performance over JavaScript. JS vs WASM’s site [12] provides a set of computations in JavaScript and wasm compiled from C++. The results are not promising: in most cases JavaScript is faster.

### GPU via WebGL

WebGL (Web Graphics Library) is a JavaScript API available in most browsers, whose main purpose is rendering 2D and 3D graphics. But because it has access to the GPU (which JavaScript doesn’t have direct access to), it can also be used for heavy but highly parallelizable computations such as machine learning [16].

If your application requires this sort of processing, WebGL can be an interesting idea.

## Conclusion

This was another of the notes on Web Development. The other notes are listed in the Related Posts section below.

In this post we covered a wide variety of tools and techniques regarding performance of client-side applications. We didn’t delve into much detail, but hopefully some of the links will serve as starting pointers for further research.

For me it was particularly interesting to learn more about Web Assembly and I’m interested in studying it further, especially with Rust.

## References

[1] Get Started With Analyzing Runtime Performance
[2] Rendering Performance
[3] Optimize JavaScript Execution
[4] User-centric performance metrics
[5] User Timing API
[6] StackOverflow: Using performance.mark() with Chrome dev tools performance tab
[7] StackOverflow: How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
[8] The cost of parsing JSON
[9] MDN: performance.now()
[10] Github w3c/hr-time: Reducing the precision of the DOMHighResTimeStamp resolution
[11] A Real-World WebAssembly Benchmark
[12] JS vs WASM
[13] WebAssembly: Use Cases
[14] WebAssembly: Portability
[15] Donald Knuth: Structured Programming with go to Statements
[16] TensorFlow.js: Machine Learning for the Web and Beyond
[17] Reduce the Scope and Complexity of Style Calculations
[18] Avoid Large, Complex Layouts and Layout Thrashing
[19] Simplify Paint Complexity and Reduce Paint Areas

## Related Posts

• Notes on JavaScript Interpreters – This post focuses on the detail of Chrome’s V8 interpreter. The relationship with performance is that understanding how JavaScript code is executed (e.g. event loop model, JIT) can give us insight on how to optimize it.
• Web workers – is a browser feature that allows running code outside of the main thread. This can speed up computation as long as no DOM access is needed.
• Notes on how browsers work – provides a basic overview on the stages of executing a web page, from downloading HTML and Javascript, executing the scripts, constructing the DOM tree and laying out the page. Understanding these steps might give insights on how to better optimize the app.
• Notes on Javascript Memory Profiling in Google Chrome – memory is related to performance because memory and CPU are the major bounded resources your app has to deal with. Not taking care of memory (e.g. memory leaks) could also exhaust the memory available for the execution engine to make the CPU-Memory tradeoffs we mentioned above.

# Observable

Observable is a web-first interactive notebook for data analysis, visualization, and exploration [1].

It was created by Mike Bostock, the creator of d3.js, which we discussed previously way back in 2014 [2]. At some point, Mike stepped down from d3.js to focus on a new library called d3.express, which he presented in 2017 at the OpenVis conference [3] and which was still in the making. d3.express eventually got renamed to Observable.

I’ve been experimenting with Observable and in this post I’ll cover some of my learnings while trying to build two visualizations.

One of the benefits of notebooks like Observable is to co-locate code with its output. This makes it much easier to explain code because in addition to markdown comments, we can modify the code and see the results in real-time. This means that a lot of the documentation of APIs available in Observable are notebooks themselves. For these reasons, this post consists of a series of links to notebooks with high-level comments.

Before we start, some assumptions I’m making are familiarity with imperative programming and the basics of JavaScript.

## What is Observable?

Why Observable – this notebook explains the motivation behind Observable and how it works at a high level:

An Observable notebook consists of reactive cells, each defining a piece of state. Rather than track dependencies between mutable values in your head, the runtime automatically extracts dependencies via static analysis and efficiently re-evaluates cells whenever things change, like a spreadsheet

Five-Minute Introduction – provides a hands-on approach of the many topics, some of which we’ll cover in more detail in this post:

• Real-time evaluation of JavaScript snippets
• Cell references
• Markdown
• HTML/DOM display
• Promises
• Generators
• Views
• Importing from other notebooks

Let’s start by delving into the details of JavaScript snippets and cell references.

## Observable vs. JavaScript

Observable’s not JavaScript – in this notebook Bostock explains the differences between JavaScript and Observable. Some notes I found interesting from that notebook:

• A cell is composed of a cell name and an expression:
• [cell name] = [expression]

It can be a simple statement like 1 + 2, or a multi-line block wrapped in curly braces.

• The value of cell_name can be used in other cells, but by default it is read-only.
• Each cell can have only one cell_name assignment. In other words, it can only “export” one variable. It’s possible to work around this by exporting an object. Note that because curly brackets are used to denote code blocks, we need to wrap an object literal in parentheses:

• Cells can refer to other cells in any part of the code – Observable builds a dependency DAG to figure out the right order. This also means dependencies cannot be circular. How Observable Runs explains this in more detail.
• Constructs like async functions (await) and generators (yield) have special behavior in a cell. We’ll expand in the Generators section below.
• Cells can be mutated if declared with a qualifier (mutable). We’ll expand in the Mutability section below.
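As a plain-JS analogue of the object-export workaround above (the cell and field names are made up – in a notebook the left-hand side would be the cell name, with no const):

```javascript
// Without the parentheses, the braces would parse as a code block.
const point = ({
  x: 1,
  y: 2
});
console.log(point.x, point.y);  // 1 2
```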

## Mutability

Introduction to Mutable State – cells are read-only by default but Observable allows changing the value of a cell by declaring it mutable. When changing the value of the cell elsewhere, the mutable keyword also must be used.

It’s important to note that the cell itself is immutable, but if the cell is a reference to a value, say an Object, then the value can be mutated. This can lead to unexpected behavior, so I created a notebook, Mutating references, to highlight this issue.

## Markdown

Markdown summary – is a great cheat sheet of Markdown syntax in Observable. I find the Markdown syntax in Observable verbose, which is not ideal given how prevalent text is in notebooks:

`` md`Hello World` ``

More cumbersome still is typing inline code. Because it uses backticks, it has to be escaped:

``` md`Hello \`code\`` ```

To be fair, most code will be typed in cells, so this shouldn’t be too common. Observable supports LaTeX via KaTeX, which is awesome; KaTeX on Observable contains a bunch of examples.

## Generators

Generators are a JavaScript construct that allows a function to yield execution back to the caller and resume from where it stopped when called again. We talked about this in the Async Functions in JavaScript post [4].

Introduction to Generators explains how generators can be used in Observable. The combination of generators and delays (via promises) is a great mechanism to implement a ticker, which is the base of animation. Here’s a very simple way to define a ticker in Observable (remember that Promises are awaited without extra syntax):
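In plain JavaScript the same idea can be sketched with an async generator (in an Observable cell, the body alone would do, since yielded promises are awaited implicitly):

```javascript
// Yields 0, 1, 2, ... roughly once per interval.
async function* ticker(intervalMs = 1000) {
  let i = 0;
  while (true) {
    yield i++;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Usage: for await (const t of ticker()) { ... }
```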

## Views

Introduction to Views explains what views are and how to construct one from scratch. I like this recommendation:

If there is some value that you would like the user to control in your notebook, then usually the right way to represent that value is as a view.

Views are thus convenient ways to expose parameters via buttons and knobs such as text inputs, dropdowns and checkboxes.

I ran into a problem when using a checkbox with labels in it, like the cell below:

It does not yield true/false as I expected. This notebook, Checkbox, discusses the problem and offers a solution. This is an example where great-but-imperfect abstractions make it hard to understand why something doesn’t work. It’s worth learning how viewof is implemented behind the scenes.

In one of my experiments I needed to synchronize a view and a ticker. Bostock provides a neat abstraction in his Synchronized Views notebook.

## Importing from other notebooks

Introduction to Imports shows how easy it is to import cells from other notebooks.

It also shows that it’s possible to override the value of some of the cells in that notebook:
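The override syntax looks like this (the notebook and cell names here are hypothetical):

```
import {chart} with {data} from "@user/some-bar-chart"
```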

This is neat because the chart cell depends on data, and by allowing data to be overridden, it lets us parameterize chart. In other words, we import chart as a function rather than a value (the result of chart(data)).

Versioning. When importing a notebook we always import the most recent version, which means it could potentially break if the notebook’s contract changes. The introduction above mentions the possibility of specifying the version when importing but doesn’t provide an example of how to do so.

My strategy so far has been to clone a notebook if I’m really concerned about it changing, but that means manually updating / re-forking to sync with upstream changes.

## React Hooks

I’m used to writing UI components in React, and luckily Jed Fox created a notebook, which I forked into this React Notebook, that allowed me to use React Hooks to write a view.

## Conclusion

I really like Observable and am looking forward to implementing more data visualizations in it. I’ve experimented with two animations so far: a simplistic Solar System model and a spinning Globe.

One natural question that crossed my mind is whether I should have written this whole post as an Observable notebook. In the end I decided to stick with an old-school static post for two reasons:

• Consistency: Observable only supports JavaScript, so if I write a post about Python I’d need to fall back to a static post, and I wouldn’t want a mix of both.
• Durability: Observable has a lot of potential but it’s still mostly experimental and I’m not sure I can rely on it sticking around for a long time.

## References

• Introduction
• Mutability
• Markdown
• Views
• Importing
• Integration

### Other

[1] Observable – Why Observable?
[2] Introduction to d3.js
[3] OpenViz Conf – 2017
[4] Async Functions in JavaScript

# Notes on JavaScript Interpreters

In a previous post, Notes on how browsers work, we studied the high-level architecture of a browser, specifically the Rendering Engine. We used the following diagram,

In this post we’ll focus on the JavaScript interpreter part. Different browsers use different implementations of the interpreters. Firefox uses SpiderMonkey, Chrome uses V8, Safari uses JavaScriptCore, Microsoft Edge uses Chakra, to name a few. V8 is also used as a standalone interpreter, most notably by Node.js.

These interpreters usually comply with one of the versions of ECMAScript, a standardization effort for the JavaScript language; ECMA-262 is the document with the specification. As happens with other languages, from their first inception design flaws are identified and new development needs arise, so the language spec is always evolving. For that reason there are several versions of ECMAScript. Most browsers support the 5th edition, also known as ES5.

There’s already a 7th edition (as of 2016), but it takes time for browsers to catch up. Because of that, JavaScript programs capable of translating newer specs into ES5 were created, such as Babel. The advantage of this technique, also known as transpiling, is that developers can use newer versions of the spec, such as ES6, without depending on browser support. Disadvantages include the extra complexity of adding a new step to the deployment process, and harder debugging, since it’s difficult to map errors in the transformed code back to the original source.

Section 8.4 of the ECMA-262 describes the execution model of JavaScript:

A Job is an abstract operation that initiates an ECMAScript computation when no other ECMAScript computation is currently in progress. A Job abstract operation may be defined to accept an arbitrary set of job parameters.

Execution of a Job can be initiated only when there is no running execution context and the execution context stack is empty. A PendingJob is a request for the future execution of a Job. A PendingJob is an internal Record whose fields are specified in Table 25. Once execution of a Job is initiated, the Job always executes to completion. No other Job may be initiated until the currently running Job completes. However, the currently running Job or external events may cause the enqueuing of additional PendingJobs that may be initiated sometime after completion of the currently running Job.

MDN’s Concurrency model and Event Loop [2] describes this spec in a more friendly way. As in other programming environments such as C and Java, we have two types of memory available: the heap and the stack. The heap is the general purpose memory and the stack is where we keep track of scopes for example when doing function calls.

In JavaScript we also have the message queue, and for each message in the queue there is an associated function to be executed.

JavaScript Execution Model

### The event loop

The execution model is also called the event loop because of the high-level working of this model:

When the stack is empty, the runtime processes the first message from the queue. While executing the corresponding function, it adds an initial scope to the stack. The function is guaranteed to execute “atomically”, that is, until it returns. The execution might cause new messages to be enqueued, for example if we call setTimeout():
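For example (a sketch):

```javascript
// The callback is not pushed onto the current stack; it's enqueued as
// a new message, processed only after the stack unwinds.
const timer = setTimeout(() => console.log('from the queue'), 1000);
```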

The callback passed as the first argument to setTimeout() will be enqueued, not stacked. On the other hand, if we have a recursive function, such as a factorial:
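A sketch of such a factorial, with a setTimeout() sprinkled in to show the queueing:

```javascript
function factorial(n) {
  // Enqueued: runs only after the whole recursion finishes.
  setTimeout(() => console.log(`queued message for n=${n}`), 0);
  if (n <= 1) return 1;
  return n * factorial(n - 1);  // stays on the stack
}

console.log(factorial(5));  // prints 120 before any queued message
```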

The recursive calls all go onto the same stack and will be executed to completion, that is, until the stack is empty. The callbacks provided to setTimeout() get enqueued and are executed only after the value of the factorial is printed.

In fact, using setTimeout() is a common trick to control the execution order of functions. For example, say function A calls B, and B calls another function C. If B calls C directly, C has to finish before B returns. If instead we want A to finish executing first, B can call C via setTimeout(C, 0): the call is enqueued, so both B and A finish before a new message is processed from the queue.
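Both orderings can be sketched together (functions A, B and C are hypothetical; a log array records the order):

```javascript
const log = [];
function C() { log.push('C'); }
function B(deferred) {
  if (deferred) {
    setTimeout(C, 0);  // enqueue C: it runs after A's message completes
  } else {
    C();               // direct call: C finishes before B returns
  }
  log.push('B done');
}
function A(deferred) { B(deferred); log.push('A done'); }

A(false);  // log so far: ['C', 'B done', 'A done']
A(true);   // appends 'B done', 'A done'; C runs later from the queue
setTimeout(() => console.log(log.join(' → ')), 0);
```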

### Web Workers

We discussed Web Workers in a previous post, in which we mentioned that a worker is a separate thread that shares no memory with the main thread. In fact, according to [2], it has its own stack, heap and queue. This doesn’t violate the execution model of the main thread because communication must be done via a publisher/subscriber API, which means messages between the two threads go through the queue.

### V8

Beyond the spec, each JavaScript engine is free to implement features the way it wants. V8’s architecture is described in its wiki page [3], so we can study some of its key components.

#### Fast Property Access

In JavaScript, structures like Object can be mutated at runtime (properties added and removed). Implementing them as hash tables can lead to performance problems when accessing their properties. Compare that to compiled languages like Java, in which instances of a class are allocated with all members in a single chunk of memory, and accessing a property consists of adding an offset to the object’s pointer.

V8 optimizes Object allocation by creating hidden classes. It exploits the fact that properties are often assigned in the same pattern. For example, in:
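The canonical example is a constructor that always assigns properties in the same order:

```javascript
function Point(x, y) {
  this.x = x;  // transition C0 -> C1 (adds x)
  this.y = y;  // transition C1 -> C2 (adds y)
}

const p = new Point(1, 2);
const q = new Point(3, 4);  // reuses the same hidden-class chain
```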

In this case, we always insert property x and then y. V8 starts with an empty hidden class C0 when the object is first created. When x is assigned, it creates another class C1 with property x, pointing to C0. When y is assigned, it creates yet another class C2 with property y, pointing to C1.

When the Point constructor finishes, the object is an instance of C2, having transitioned through three hidden classes: C0, then C1, then C2.

Accessing property y is a single offset operation, and accessing property x two offset operations, but that is still faster than a table lookup. Other instances of Point share the same class C2. If for some reason we have a point with only x set, it will be an instance of class C1. This sharing of structure resembles the persistent data structures we studied previously.

Another interesting property of this method is that it doesn’t need to know in advance whether the code does have a pattern in mutating objects. It’s a sort of JIT optimization.

#### Dynamic Machine Code Generation

According to [3]:

V8 compiles JavaScript source code directly into machine code when it is first executed. There are no intermediate byte codes, no interpreter.

#### Efficient Garbage Collection

We wrote about V8’s memory management in a previous post.

### Conclusion

In this post we revisited a bit of the history of JavaScript and ECMAScript. We also studied the event loop model that is part of the spec and finally saw some aspects of one of the JavaScript engines, V8.

### References

• [1] ECMAScript 2016 – Language Specification
• [2] MDN: Concurrency model and Event Loop
• [3] V8 – Design Elements

# Emacs Lisp Introduction

Richard Stallman is an American programmer and software-freedom activist. He graduated from Harvard magna cum laude in physics and started a PhD in physics at MIT, but after about a year he decided to focus on programming at the MIT AI Laboratory. There he developed several programs, including Emacs, which we’ll talk about in this post.

Emacs is named after Editor MACroS, since it was originally a macro editor for another editor, called TECO.

I’ve been using Emacs since 2005, but never took the time to learn elisp, the Emacs dialect of Lisp, which Emacs uses to implement many of its functions and which lets users extend it.

In this post we’ll cover the basics of the elisp language. Most of it is based on “Introduction to Programming in Emacs Lisp” by Robert Chassell and the Emacs Lisp Manual.

### Setting up the environment

In the following sections we’ll go through a lot of examples, so let’s set up an environment to run them.

Interpreter. For an interactive way to test the code snippets, we can use the elisp interpreter, integrated with emacs, called ielm. To open it, just run M-x ielm.

Running from a text file. An alternative to using the interpreter is evaluating code directly from a buffer. Select a region and run eval-region, or place the cursor right after an expression and run eval-last-sexp (C-x C-e).

Now, let’s cover some of the basic aspects of emacs lisp.

### Basics

List Centered. Lisp stands for LISt Processing and, as the name suggests, the language is highly centered on lists. Usually we use lists to perform the evaluation of some expression (also referred to as a form). One common expression is a function call, in which case the first element of the list is the function and the remaining elements are its arguments. For example, we can call the function max:

(max 3 4) ;; returns 4


Similarly, for operators, we first provide the operator and then the operands. For example, to add two numbers:

(+ 2 3)


We can see then that arithmetic expressions use prefix (Polish) notation, instead of the infix notation found in languages such as C++, Python and Haskell.

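Expressions can be nested arbitrarily, with the operator always coming first at every level. For example:

```elisp
(+ (* 2 3) (- 10 4)) ;; (2 * 3) + (10 - 4) = 12
```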
To use an elisp list to represent an actual list of data, we prefix it with a single quote:

'(1 2 3)


Printing. For simple cases, printing can be handy. In elisp we can use the function message. Run the following in the interpreter, or place the cursor after it and run eval-last-sexp:

(message "hello world")


This will print the hello world string in the messages buffer, which can be viewed by running the switch-to-buffer command (C-x b) and going to the *Messages* buffer.
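The message function also accepts format specifiers, similar to printf in C, which is handy for printing variables:

```elisp
(message "x is %d and s is %s" 42 "abc")
;; prints "x is 42 and s is abc" in the *Messages* buffer
```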

Comments. In elisp, comments are inline and start after a semi-colon (unless it’s part of a string): ;.

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; This is a way to emphasize a comment
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;

(message "hello; world")  ; simple inline comment
; will print hello; world


### Variables

As in other languages, elisp has variables, but in elisp variable names can contain symbols that wouldn’t be allowed in more common languages, such as +, - and *.

That is possible because elisp has no infix notation, so a variable named a+b cannot be mistaken for a plus b, which has to be written as (+ a b).

To set a variable you can use the set function:

(set 'x '(1 2))


We quote both elements because we want their literal values, not the result of evaluating them. In most cases we want the left-hand side of the assignment to be a literal (the variable name), so we can use the shortcut setq:

(setq x '(1 2)) ;; Equivalent to (set 'x '(1 2))


Another option is to use defvar. The difference between defvar and setq is that defvar won’t set the variable again if it was already defined.

(setq x '(1 2))
(defvar x '(3 4)) ;; x still holds (1 2)


When we use set/setq/defvar, we create a variable in the global scope. To restrict variables to a local scope, we can use let.

(let (a (b 5))
  (setq a 3)
  (+ a b)) ;; returns 8


For this special form, the first argument is a list of the variables which will be in the local scope. You can initialize a variable with a value by providing a pair (variable value), like we did with b above.

To indicate that a variable should not be changed, we can use the defconst macro. Note that it doesn’t actually prevent the variable from being overridden with setq, though.
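A minimal sketch of that behavior (the name my-pi is just for illustration):

```elisp
(defconst my-pi 3.14159 "An approximation of pi.")
(setq my-pi 3) ;; not an error: my-pi is now 3
```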

Another form of setting variables is the special form defcustom. The first three parameters (name, default value and documentation string) are the same as defvar’s. Following those, we can provide any number of keyword arguments in the form

:<keyword> <value>

Common examples of keywords are type, option and group.

The type keyword accepts values such as string, integer, function, etc.

A custom group represents a series of custom variables and or other custom groups. Thus, one can organize groups in a tree hierarchy. When specifying the group keyword, we can provide an existing group such as applications, or define our own.

Let’s define our custom group for learning purposes:

(defgroup my-custom-group nil
  "This is my first custom group"
  :group 'applications)


In the definition above, we’re creating a group called my-custom-group. The second argument is the list of custom variables in the group, which in our case is empty. The third argument is a description of the group and then there’s a list of key-value pairs, as in defcustom. In the example above, we’re making our new group a subgroup of the existing applications group.

Now, let’s create a custom variable in that group:

(defcustom my-custom-var nil
  "This is my first custom variable."
  :type 'string
  :group 'my-custom-group)


Let’s add these definitions to our init file (e.g. .emacs or .emacs.d/init.el) and re-open emacs.

We can browse the custom variables by typing M-x customize to go to the *Customize Apropos* buffer. In this buffer, there’s the option to search for variable names or groups. Let’s search for my-custom-var in the search box:

Figure 1: Searching for my-custom-var (click to enlarge)

We can edit its value by changing from nil in the yellow box above. Let’s put an arbitrary string, such as "some-value" and then hit the [Save for future sessions] button.

This should change the init file with something like:

(custom-set-variables
 '(my-custom-var "some-value"))


### List Manipulation: cons, car and cdr

One distinct trait of Lisp and Emacs is their alternative naming conventions. For example, what most people refer to as cut and paste, emacs calls kill and yank.

Lisp, for its part, has two important list functions curiously named car and cdr. The function car stands for Contents of the Address part of the Register and cdr for Contents of the Decrement part of the Register.

The car function will retrieve the first element of the list and cdr will get the remaining part of it.

(car '(1 2 3)) ;; 1
(cdr '(1 2 3)) ;; (2 3)


Another important function with a more intuitive meaning is cons, short for constructor. It takes an element and a list, prepends the element to the list and returns a new list:

(cons 1 '(2 3)) ;; (1 2 3)


For those familiar with Haskell, cons is like the (:) operator, car is head and cdr is tail.
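Putting the three together, we can build a list element by element with cons and then take it apart again with car and cdr:

```elisp
(setq lst (cons 1 (cons 2 (cons 3 '())))) ;; builds (1 2 3)
(car (cdr lst)) ;; 2, the second element
```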

### Conditionals

In elisp we can use the if special form, which takes 2 or 3 arguments:

(if <condition> <then clause> <optional else clause>)

The first argument is a predicate: anything that, after being evaluated, returns true or false. In elisp, nil and the empty list (which are the same object) are considered false. Anything else is true.

If the condition evaluates to true, the second argument is evaluated and returned. If it’s false and the third argument is provided, the third argument is evaluated and returned. If it’s false and no third argument is provided, it just returns nil.

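One consequence worth noting is that values like 0 and the empty string, which are falsy in many languages, count as true in elisp:

```elisp
(if 0 "yes" "no")   ;; "yes" -- only nil is false
(if '() "yes" "no") ;; "no"  -- '() is the same as nil
```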
Let’s do a dummy example. Suppose we have an existing variable x and we want to print a message depending on whether it’s positive:

(if (> x 0)
    (message "x is positive")
  (message "x is negative"))


Block of code. If besides printing a message we want to do some calculations, we would need to evaluate multiple statements, but each branch of if takes a single expression. To solve this, we can use the progn function, which combines a list of statements into a single expression.

In our example above, if besides printing a message when the number is negative we also wanted to make x positive, we could do something like:

(if (> x 0)
    (message "x is positive")
  (progn
    (message "x is negative")
    (setq x (* -1 x))))


### Loops

We can write a loop with the while function. It takes a predicate as the first argument and one or more body arguments, which are executed while the predicate evaluates to true. Another simple example:

;; Print numbers from 0 to 9
(let ((x 0))
  (while (< x 10)
    (message (number-to-string x))
    (setq x (+ x 1))))


For the example above, we can alternatively use the dotimes function, which executes a series of statements a specified number of times, much like a for loop in other languages.

(dotimes (x 10 nil)
  (message (number-to-string x)))


It takes two or more parameters. The first is a list of three elements: the first element is the counter variable (which starts at 0), the second is the number of repetitions, and the third is evaluated after the loop is over, giving the result. The remaining parameters form the body of the loop.

Another handy function is dolist, which iterates through a list for us. For example:

(dolist (elem '("a" "b" "c") nil)
  (message elem))


It takes two or more parameters. The first is a list of three elements: a variable which will hold an element at each iteration; the list to be iterated over; and an expression to be evaluated when the loop finishes. The remaining parameters form the body of the loop.

### Functions

To define a function, we can use the special defun form.

(defun add (a b)
  "This simple function adds two numbers"
  (+ a b))


It takes several arguments. The first is the name of the function, the second is a list of the function’s parameters, and the third, optional, is a string describing what the function does. The remaining arguments form the body of the function: they are evaluated in order when the function is called, and the value of the last one is returned.
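Calling the function defined above works as expected:

```elisp
(add 3 4) ;; returns 7
```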

Anonymous functions. We can also create an anonymous function (lambda). The example above could be written as:

(setq add (lambda (a b)
            (+ a b)))


But this time we need to call this function through funcall. This is because setq stores the lambda in the variable’s value cell, not its function cell, so add cannot be invoked directly as a function.

(funcall add 3 4)


Recursion. We can also define a function in terms of itself to implement recursion. The classical factorial example can be done as:

(defun factorial (n)
  "Computes the factorial of n"
  (if (<= n 1)
      1
    (* n (factorial (- n 1)))))


Commands. Within emacs, it’s handy to ask for input parameters from the user. For that we can use the interactive special form. Using the example above, we can ask for the input of the factorial function:

(defun printFactorial (n)
  (interactive "nEnter a number: ")
  (message (number-to-string (factorial n))))


To run this, execute execute-extended-command (M-x) and then type printFactorial. After hitting enter, you’ll be prompted for a number, which is fed into printFactorial.

### Conclusion

In this post, we’ve learned a bit about elisp. In future posts, we’ll cover some of the API emacs provides for extending it.

Robert Chassell says “An Introduction to Programming in Emacs Lisp” is a book for non-programmers, but it digs into function definitions and implementation details that would be uninviting for people learning how to program.

### References

[1] An Introduction to Programming in Emacs Lisp
[2] Emacs Lisp Reference Manual (v 24.3)
[3] Customization Types

# An Introduction to Agda

Per Martin-Löf is a Swedish logician, philosopher and mathematical statistician. Since 1970 he has been mainly publishing on logic.

Martin-Löf has been active in developing intuitionistic type theory as a constructive foundation of mathematics and his work on type theory has influenced computer science.

He completed his PhD under the supervision of Andrey Kolmogorov and is currently Professor at Stockholm University and a member of the Royal Swedish Academy of Sciences.

In this post we’ll talk about assisted theorem proving, more specifically using a tool named Agda, which has its syntax based on Haskell and an engine based on Per Martin-Löf’s intuitionistic type theory.

We’ll first talk about the basic concepts such as assisted theorem proving and dependent type theory and then present a very brief introduction to Agda based on a tutorial from Ulf Norell and James Chapman [3].

### Assisted Theorem Proving

We can classify theorem proving in the following categories:

1) Manual
2) Manual with machine support
3) Interactive
4) Automated

(1) is the oldest, best known and most widely used method by mathematicians. (2) Since the introduction of computers, we can have machines verify syntax and make manual proofs less error prone. (3) We can also write proofs in a way the machine understands, so that it can perform some logical deduction steps for us. (4) At the other extreme, we can let the machine do all the work and find proofs by itself.

### Dependent Types

Roughly speaking, a dependent type is a type that depends on a value. In typed programming languages we can have a type like vector T, representing a vector containing elements of type T, but in a dependently typed language we could also have vector T n, representing vectors with elements of type T and length n. We’ll see an example of this type in the next section using Agda.

### Agda

Agda is a dependently typed language based on intuitionistic type theory, introduced by Martin-Löf. It was first implemented by Catarina Coquand (who is married to Thierry Coquand, after whom the Coq proof assistant is named). It was later rewritten as Agda 2 by Ulf Norell, one of the authors of the tutorial we’re going to follow.

#### Installing

The first thing we want to do is get Agda running. This can be done using cabal, the Haskell package manager:

> cabal install Agda


On Mac, this will install some binaries to ~/Library/Haskell. We want to use some of those binaries, so let’s include that directory in our PATH:

export PATH="$HOME/Library/Haskell/bin:$PATH"


Agda has a very good integration with Emacs. On Mac we can make use of that integration in Aquamacs. To set up the right configuration into the .emacs file, we can now run:

> agda-mode setup


#### Main Concepts

Latex and Unicode. Agda makes use of unicode characters to make the code more readable and accepts LaTeX syntax to input them. For example, we can type \rightarrow, which will be transformed into the → character. There are also shortcuts to avoid typing a lot (for example, \r expands to \rightarrow).

Delimiters. Agda is very flexible in naming, considering only very few characters as delimiters (including space, parentheses and curly brackets). This means we can have names like :<a*, but then we have to be careful to add a space whenever we need to separate two names.

Modules and name convention. Agda files have the extension .agda. For our tutorial, let’s create one called basic.agda. The first thing we want to do is to define the module, which is used to manage namespaces. By convention, the module name has to match the file name, so we write:

module basic where


Data Types. Our basic building blocks will be types, and we can define them via data types, which have a syntax similar to data types in Haskell. For example, we can model the boolean type, which we call Bool. In the following code, the type Bool is a member of Set (pre-defined) and we provide two constructors, both of type Bool:

data Bool : Set where
  true  : Bool
  false : Bool


In the same manner we can define the set of natural numbers. Since it’s impossible to explicitly list all values like we did for booleans, we do so via a constructor suc defined recursively, which takes a Nat and returns its successor, like in the following example:

-- Natural numbers
data Nat : Set where
  zero : Nat
  suc  : Nat -> Nat


Note that we are saying the constructor suc represents a function that receives a Nat and returns a Nat, but we are not providing an implementation for it.

So let’s write some functions to work with the Nat type. More specifically, addition and multiplication.

Functions. In Agda, when we write a function surrounded by underscores, like _+_, we’re defining a function that takes two arguments using infix notation. Let’s define the addition operation:

-- Addition
_+_ : Nat -> Nat -> Nat
zero  + m = m
suc n + m = suc (n + m)


Note how we define everything in terms of the basic elements of Nat, that is, zero and suc; we have no notion of literal values. This is important because if we want to encode proofs as programs, we have to be generic and ‘value agnostic’. In this function we pattern match on the first argument: if it’s zero, we are in the base case; otherwise the first argument must be the successor of some other value n, so we recurse on n until we hit the base case.

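As a sanity check, we can trace the evaluation of 2 + 1 step by step under this definition:

```agda
-- suc (suc zero) + suc zero
--   = suc (suc zero + suc zero)    -- second equation
--   = suc (suc (zero + suc zero))  -- second equation again
--   = suc (suc (suc zero))         -- first equation, i.e. 3
```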
Multiplication is analogous, but here we use the _+_ operator to define the second matched pattern:

-- Multiplication
_*_ : Nat -> Nat -> Nat
zero  * m = zero
suc n * m = m + n * m


The code above will cause an error because Agda cannot disambiguate expressions that mix operators whose relative priorities haven’t been declared.

We can assign priorities to these operators using infixl, which takes a priority value and a function (and also makes the operator left-associative). The higher the value, the higher the priority. Since we want multiplication to bind tighter than addition, we assign it a higher number, for example:

-- Defining priority
infixl 6 _*_
infixl 4 _+_


Parametrized types. We can define a type that depends on other types, like parametrized types in Haskell or templates in C++. In this tutorial, we’ll create a list of elements of a given type A, as in the following data type:

-- Definition of a list with parametrized type
data List (A : Set) : Set where
  -- empty list
  []   : List A
  -- cons (prepend) operator
  _::_ : A -> List A -> List A


Let’s write the map function over List. Differently from Haskell, Agda doesn’t perform type inference by default, so we need to explicitly provide the types when defining/calling a function that depends on other types. In the following example, the types A and B must be provided as parameters:

map1 : (A : Set) -> (B : Set) -> (A -> B) -> List A -> List B
map1 A B f [] = []
map1 A B f (x :: xs) = f x :: map1 A B f xs


We have the following syntax sugar to combine two or more types that are subset of the same type:

(A : Set) -> (B : Set) is equivalent to (A B : Set)

Implicit Types. We have the option to make types implicit and let Agda figure out the right one. The syntax is to use curly brackets instead of parentheses:

map2 : {A B : Set} -> (A -> B) -> List A -> List B
map2 f [] = []
map2 f (x :: xs) = f x :: map2 f xs

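With the implicit version, a call site no longer mentions the element types; Agda infers them. The names below are just for illustration, and the parentheses are needed because we haven’t declared a fixity for _::_:

```agda
oneTwo : List Nat
oneTwo = zero :: (suc zero :: [])

-- Agda infers A = Nat and B = Nat
twoThree : List Nat
twoThree = map2 suc oneTwo
```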

Dependent Types. So far the types and operations we discussed don’t seem to show the real power of dependent types, so let’s define a new version of List, enhanced with the length of the list, which we’ll call Vector:

data Vec (A : Set) : Nat -> Set where
  []   : Vec A zero
  -- Make the length type implicit
  _::_ : {n : Nat} -> A -> Vec A n -> Vec A (suc n)


Here the data type is indexed by a Nat (the Nat -> Set part of the declaration). Notice also how, when consing onto a vector, we update the length index n properly.

One nice thing about this is that, when defining a head function, we don’t need to handle the empty case as long as we restrict the type to non-empty vectors:

-- Doesn't need to check for empty lists. Restriction is encoded in
-- the function type!
head : {A : Set}{n : Nat} -> Vec A (suc n) -> A
head (x :: xs) = x

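As another sketch along the same lines, a tail function can require a non-empty vector in its type and return a vector whose length index is decremented:

```agda
-- Again no empty case is needed; the type rules it out
tail : {A : Set}{n : Nat} -> Vec A (suc n) -> Vec A n
tail (x :: xs) = xs
```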

Proofs as types. We are now going to write an example of how to use dependent types to encode proofs as types.

Let’s start by defining types to represent false and true propositions. False has no constructors, so it can never be proved; True, following the common encoding, is an empty record, trivially inhabited by record {}:

data False : Set where
record True : Set where


Now we define a function to convert boolean values into these types:

isTrue : Bool -> Set
isTrue true = True
isTrue false = False


Let’s, for instance, define a function to be used as a predicate, in this case the < operator:

-- Definition of the < operator
_<_ : Nat -> Nat -> Bool
_ < zero = false
zero  < suc n = true
suc m < suc n = m < n


We will need the following function, that returns the length of a list, in our final example:

-- Definition of the length function
length : {A : Set} -> List A -> Nat
length [] = zero
length (x :: xs) = suc (length xs)


Finally, we can write a sample function that encodes a predicate as a type. In the following example, the function lookup requires as its third argument a proof that n is less than the length of the list.

-- Function that looks up the i-th element in a list
-- param 1: list
-- param 2: look up index
-- param 3: proof
lookup : {A : Set}(xs : List A)(n : Nat) -> isTrue (n < length xs) -> A
lookup [] n ()
lookup (x :: xs) zero p = x
lookup (x :: xs) (suc n) p = lookup xs n p

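To see the proof obligation in action, here is a hypothetical call (the names are just for illustration, and we assume the encoding of True as an empty record): since 1 < 2 reduces to true, the required proof type reduces to True, whose trivial element is record {}:

```agda
nums : List Nat
nums = zero :: (suc zero :: [])

-- looks up index 1; the proof type isTrue (suc zero < length nums)
-- normalizes to True, so record {} suffices
second : Nat
second = lookup nums (suc zero) record {}
```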

This concludes our brief introduction to Agda syntax and dependent types. In following posts we’ll learn more about writing proofs and have Agda perform logical derivations for us.

### Conclusion

I have close to zero knowledge of formal systems. I’ve been briefly exposed to them by reading parts of Gödel, Escher, Bach, and the subject sounds very interesting. I based most of the content of this post on a few introductory tutorials [3-6], but I still don’t have a good grasp of the power of systems like Agda.

I was initially interested in learning Coq, but due to some problems setting it up I ended up looking for alternatives and found Agda. Coq is implemented in OCaml while Agda is implemented in Haskell, and since I’ve been focused on learning Haskell, it was a pleasant surprise. Agda also has good integration with Emacs, which is my favorite editor, so that was a plus.

### References

[1] Per Martin-Löf – Wikipedia
[2] Proof Assistant – Comparison of Systems – Wikipedia
[3] Dependently Typed Programming in Agda – Ulf Norell and James Chapman
[4] Interactive Theorem Proving, Lecture Notes 0 – Anton Setzer
[5] Interactive Theorem Proving, Lecture Notes 1 – Anton Setzer
[6] Interactive Theorem Proving, Lecture Notes 2 – Anton Setzer