Finda Websocket Architecture

Finda lets you search and instantly switch between tabs in browsers like Firefox and Chrome and editors like Sublime Text and Visual Studio Code.

It does this by communicating with these programs over local websocket connections.

This article describes the design tradeoffs of this system, focusing on Rust implementation details that may be helpful to others building similar systems.


Consider the following design requirements:

  1. Communication must be bidirectional: programs report their tabs to Finda, and Finda tells them which tab to switch to.
  2. Overhead must be minimal, so that results appear within a few milliseconds of each keystroke.
  3. The protocol must be easy to speak from browser extensions and editor plugins.

Websockets fit these requirements quite well: They’re bidirectional, low-overhead, and well supported in the relevant ecosystems — natively by the browsers, and with well-tested packages in Node.js (VS Code) and Python (Sublime Text).

Push or Pull?

Since tabs exist within separate programs, how should Finda search them?

One option is for Finda to “pull” for results when it needs them. On every keystroke, Finda would ask each program: “Which of your tabs match this query?”.

However, this “pull” approach has several downsides.

First, it requires duplicating query logic in every client, which would be prone to inconsistency (especially since different clients have not only different plugin APIs, but often entirely different programming languages).

This might be avoided by having all open tabs sent to Finda, which would then filter them.

However, the larger issue is that it’s not clear whether a pull approach would fit within the 5 millisecond performance window. My concern was not with the network overhead (which is across localhost), but rather with the scheduler and memory cache. Pulling the tabs on demand might require “spinning up” in memory a JavaScript VM in Chrome, a separate JavaScript VM in Firefox, and a Python VM in Sublime Text — all of which would need to fetch tabs from the program’s internal data structures, serialize them to JSON, and transmit back to Finda in under 5 milliseconds.

Both options for handling late responses — either dropping them entirely or updating the results when they arrive — would make the app feel unpredictable, inhibiting habituation and muscle memory.

So, rather than having Finda “pull” tabs when they’re needed, let’s have programs “push” tabs to Finda when they first connect and again afterwards whenever their tabs change (open, close, move to a new file/webpage, etc.). When Finda receives tabs, it stores them in its own memory.

This design decouples inter-process communication overhead from querying, which means that queries consistently finish in under a millisecond.

Then, when a tab is selected, Finda focuses the relevant program’s window and sends a message back over the appropriate websocket connection telling the program to switch to that tab.
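Concretely, the push-and-store flow might look something like this sketch (the type names, message format, and `switch-to:` prefix here are hypothetical illustrations, not Finda’s actual code):

```rust
use std::collections::HashMap;
use std::sync::mpsc;

// Hypothetical sketch: each program pushes its full tab list, which
// replaces whatever was previously stored for that connection.
struct Connection {
    tabs: Vec<String>,           // tab titles pushed by the program
    ws_tx: mpsc::Sender<String>, // clone of the connection's sending channel
}

struct TabStore {
    connections: HashMap<u64, Connection>,
}

impl TabStore {
    // Programs push their full tab list on connect and whenever it changes.
    fn push_tabs(&mut self, id: u64, tabs: Vec<String>, ws_tx: mpsc::Sender<String>) {
        self.connections.insert(id, Connection { tabs, ws_tx });
    }

    // Queries are a pure in-memory scan -- no inter-process round trips.
    fn query(&self, q: &str) -> Vec<(u64, &str)> {
        let q = q.to_lowercase();
        self.connections
            .iter()
            .flat_map(|(id, c)| c.tabs.iter().map(move |t| (*id, t.as_str())))
            .filter(|(_, t)| t.to_lowercase().contains(&q))
            .collect()
    }

    // When a result is selected, tell the owning program to switch to it.
    fn select(&self, id: u64, tab: &str) {
        if let Some(c) = self.connections.get(&id) {
            c.ws_tx.send(format!("switch-to:{}", tab)).ok();
        }
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let mut store = TabStore { connections: HashMap::new() };
    store.push_tabs(1, vec!["Rust Book".into(), "Hacker News".into()], tx);

    let hits = store.query("rust");
    println!("{} match(es)", hits.len());

    store.select(hits[0].0, hits[0].1);
    println!("sent: {}", rx.recv().unwrap());
}
```

Because the scan touches only Finda’s own memory, query latency is independent of how quickly (or whether) the connected programs could respond.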

Websockets in Rust

Finda implements its websocket server using the excellent websocket crate.

Unlike JavaScript’s callback-based APIs, the Rust websocket API is synchronous (blocking), which means we must explicitly manage our own threads and message queues rather than relying on an event loop.

Finda spins up two threads per connection: one that receives incoming messages from the program, and another that sends messages back out to it.

(Unlike a public web app, where there might be thousands of simultaneous requests, here the number of incoming connections is bounded by how many instances of Chrome, Firefox, Visual Studio Code, etc. someone might be running — we don’t need to worry about how many threads Finda spawns, since their overhead is negligible compared to the resources consumed by running these programs in the first place.)

Here’s an outline of the code:

use websocket::OwnedMessage;
use websocket::sync::Server;
use std::net;
use std::thread;
use std::sync::mpsc;

let server = Server::bind("").unwrap();

for request in server.filter_map(Result::ok) {

    // Spawn the new connection's receiving thread
    thread::spawn(move || {
        let mut client = request.accept().unwrap();

        let ip = client.peer_addr().unwrap().ip();

        // Only allow connections from localhost
        if ip != net::Ipv4Addr::new(127, 0, 0, 1) {
            return;
        }

        let (mut receiver, mut sender) = client.split().unwrap();

        let (ws_tx, ws_rx) = mpsc::channel();

        // Spawn the new connection's sending thread
        thread::spawn(move || {
            for ws_msg in ws_rx.iter() {
                sender.send_message(&ws_msg).ok();
            }
        });

        for message in receiver.incoming_messages() {
            match message {
                Ok(OwnedMessage::Text(s)) => {
                    // (not shown) Deserialize tab JSON, update Finda's internal data structure.
                    // This data structure includes a clone of the `ws_tx` mpsc channel, so messages can be sent back out.
                }

                Ok(OwnedMessage::Close(_)) => {
                    // (not shown) Remove all tabs from Finda's internal data structure.
                    // Doing so will drop all clones of `ws_tx`, closing `ws_rx`, exhausting the sending thread's `for` loop and causing the thread to cleanly shut down.

                    // Cleanly shut down the receiving thread
                    return;
                }

                Ok(OwnedMessage::Ping(ping)) => {
                    ws_tx.send(OwnedMessage::Pong(ping)).ok();
                }

                _ => {}
            }
        }
    });
}

When things go wrong

The problem of coordinating tabs between applications is inherently distributed: Finda must keep its internal tab list synchronized with a half dozen independent applications.

Fortunately, since everything is running on localhost rather than across unreliable networks, we don’t need to worry about issues like latency, lost/duplicate messages, etc.

The one issue we do have to worry about, though, is application crashes. Consider the following sequence:

  1. Chrome starts and loads “”.
  2. The Finda chrome extension connects to Finda and sends it one tab, “”.
  3. Finda receives this message, and updates its open tab list.
  4. Chrome crashes suddenly.

Since Chrome crashed suddenly, it never sent a Close websocket message, which means that Finda still thinks a “” Chrome tab is open.

So how can we keep tabs and buffers from crashed browsers and editors out of Finda’s search results?

Ping / Pong (heartbeating)

One option would be to implement “heartbeating”: Occasionally, Finda (the server) would send a “hey, are you okay?” message to each client and then close any connections that don’t reply within a certain time frame. (The websocket protocol specifies special ping and pong frames specifically to support this behavior.)

However, the websocket crate doesn’t implement heartbeating automatically, so we’d have to build it ourselves. This would require quite a bit more code — maybe a dedicated thread and mpsc channel for heartbeating, with a second thread blocking on the timeout?

Not only is the implementation non-obvious, the design isn’t obvious either. When should the server send out heartbeat messages?

It could send the heartbeat every time someone opens Finda to switch, but this leads to all the same problems we discussed earlier with the “pull” query model.

It could send heartbeats on a fixed interval; every 5 seconds, say. However, this means that Finda would consume resources even when it wasn’t actively being used. These resources wouldn’t just be from Finda, but from all of the connected browsers and editors, which every 5 seconds would receive the “are you okay?” messages and have to spin up their respective VMs to respond “yeah, I’m cool”.
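For a sense of what the fixed-interval approach entails, here’s a rough sketch (simplified: a real implementation would send websocket Ping frames and track reply timeouts, both of which this sketch omits):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Send `count` pings down a connection's outgoing channel on a fixed
// interval, returning how many the (simulated) sending thread saw.
fn send_pings(count: usize) -> usize {
    let (ws_tx, ws_rx) = mpsc::channel::<&'static str>();

    let pinger = thread::spawn(move || {
        for _ in 0..count {
            thread::sleep(Duration::from_millis(10));
            // In the real system this would be an OwnedMessage::Ping frame.
            if ws_tx.send("ping").is_err() {
                break; // connection is gone
            }
        }
    });

    // The connection's sending thread would forward each of these to the
    // client; a timeout elsewhere would drop clients that never answer.
    let received = ws_rx.iter().count();
    pinger.join().unwrap();
    received
}

fn main() {
    println!("sent {} pings", send_pings(3));
}
```

Even this stripped-down version needs an extra thread per connection, and it wakes up every client on every tick — exactly the idle resource cost described above.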

This cost could be reduced by extending the interval — checking every minute, say — but that increases the risk of someone seeing a stale result. Which, come to think of it, is a fairly high risk anyway: If someone has internalized using Finda to switch between programs, then odds are that person is going to immediately open Finda as soon as something crashes, so that they can switch to another application or relaunch the crashed program.

So, heartbeating would be difficult to implement, excessively consume resources, and probably not work anyway. Can we do better?

Not quite distributed

Heartbeating would be the only solution in a truly distributed system, but that’s not quite our situation: Finda only communicates with other programs on the same machine.

This means Finda can rely on the operating system kernel to clean up and reclaim resources from crashed processes. Specifically, if Chrome crashes, then the OS X kernel will close any sockets that Chrome was using, including the socket chatting with Finda.
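You can see this kernel behavior from plain Rust, without any websockets involved. Dropping a TCP stream closes the underlying socket and triggers the same FIN the kernel sends on behalf of a crashed process, so a drop serves as a stand-in for a crash here:

```rust
use std::io::Read;
use std::net::{TcpListener, TcpStream};
use std::thread;

// Accept one connection whose peer vanishes without a goodbye, and return
// how many bytes the subsequent read yields.
fn read_after_peer_drop() -> usize {
    let listener = TcpListener::bind("127.0.0.1:0").unwrap();
    let addr = listener.local_addr().unwrap();

    // "Client" that connects and then disappears: dropping the stream
    // closes the socket, and the kernel sends a FIN -- just as it does
    // when a process is killed outright.
    thread::spawn(move || {
        let stream = TcpStream::connect(addr).unwrap();
        drop(stream);
    });

    let (mut conn, _) = listener.accept().unwrap();
    let mut buf = [0u8; 64];
    // The read doesn't block forever: the kernel's FIN surfaces as EOF
    // (a read of zero bytes).
    conn.read(&mut buf).unwrap()
}

fn main() {
    println!("read returned {} bytes", read_after_peer_drop());
}
```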

You can actually run an experiment to verify this:

  1. Run a websocket client/server on port 12345 (Clone my code example if you don’t want to write this yourself)

  2. Run sudo tcpdump -i any port 12345 to log all packets on that port.

  3. Force the client to exit immediately with killall -9 client.

As soon as the client exits, you’ll see a line like this in the tcpdump output:

18:37:51.381945 IP localhost.57108 > localhost.12345: Flags [F.], seq 162, ack 130, win 9178, options [nop,nop,TS val 1010281688 ecr 1010273400], length 0

This is a packet sent by the kernel from the client’s socket to the server, where the F flag means “Finish”.

This packet isn’t a proper websocket close message, but it does mean that no more packets are coming. The rust-websocket crate interprets this as an incoming message error, which means we can handle this case within Finda’s message loop and simply clear out all the results from that connection:

for message in receiver.incoming_messages() {
    match message {
        Ok(OwnedMessage::Text(s)) => {...}
        Ok(OwnedMessage::Ping(ping)) => {...}

        // A clean websocket close.
        Ok(OwnedMessage::Close(_)) => {...}

        // A forced websocket close (client probably crashed, and the kernel cleaned up the socket).
        Err(_) => {...}
    }
}

Wrap up

Finda’s websocket architecture leverages the fact that all communication occurs between processes on the same machine. Finda doesn’t need to worry about an unreliable network that might lose or duplicate messages, which keeps the problem simple enough that it can be solved using a blocking API and spinning up threads for every connection.

This “trust the kernel to clean things up” trick won’t work on real distributed systems, but I’ll certainly take advantage of it when I can =)

If you like the idea of searching all of your files, tabs, and other computer bits with a blazing fast, keyboard-based tool, you can download Finda for free.


Thanks to Julia Evans for suggesting I use tcpdump to see what’s going on — you should definitely check out her zine about tcpdump!