Crunchy Bridge's Ruby Backend: Sorbet, Tapioca, and Parlour-Generated Type Stubs

Brandur Leach

When we started building Crunchy Bridge two years ago, we chose Ruby as the language for our database state machine and control plane API. Ruby may not have been the most popular language choice in 2020, but we picked it anyway. A major reason is that everyone on the team already knew it well and liked it. Its terse, elegant syntax is perfect for expressing our database state machine logic. Another reason is that Ruby lets us run a REPL in production so we can carry out flexible operational work, and do so expediently thanks to that same terse syntax. Ruby is so efficient for managing a big fleet of servers that it feels a bit like cheating compared to clunky admin dashboards and CLIs.

But Ruby by itself has a major challenge in that it lacks any kind of built-in mechanism for expressing variable and method type signatures. Every one of us had managed large Ruby codebases in the past and wanted to avoid the quagmire of uncertainty around what the types of anything are supposed to be, which makes code hard to reason about and dangerous to change.

That's why we chose to type everything with Sorbet, a Ruby type annotation library and static analysis tool written by Stripe. Sorbet isn't the world's most polished software, and presents some occasional challenges, but overall it's been a huge success for us. Sorbet has helped us find problems, making development considerably faster, and making refactoring (which is otherwise a very risky endeavor in Ruby) considerably safer.

There's already been a lot written about the basic use of Sorbet, so here we're just going to touch upon some of the less obvious things that we're doing with it in our codebase.


Tapioca

While you'll be writing Sorbet type signatures for your own code, you don't want to have to write them for the third party gems that you're importing. That's where Shopify's Tapioca comes in. With a simple tapioca init followed by bin/tapioca gems, it'll bootstrap Sorbet on a new project, then generate an RBI file for each gem to be used during type checking.

Sorbet has a built-in mechanism to do the same thing, but it doesn't work as well, and was broken for a long time in the official release. Tapioca currently has a number of advantages over Sorbet's built-in generation: it tries very hard to discover all gem dependencies regardless of their group or how they're loaded, and it parses their source extensively to generate a complete API.

The important thing to note here is that with Sorbet you get type checking not only on the signatures that you write, but also against the RBIs generated for third party code, which most Ruby projects use a lot of (a huge ecosystem is one of the language's core strengths). This adds a lot of confidence that the code you write is correct, and makes Tapioca an essential part of the Sorbet toolchain.
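
For a sense of what these look like, a gem RBI is just a Ruby file of empty method stubs that describes the gem's interface to the type checker. A hypothetical excerpt (illustrative only, not real Tapioca output for any particular gem; the gem name and path are made up):

```ruby
# typed: true

# sorbet/rbi/gems/some_gem@1.2.3.rbi (hypothetical gem and path)
module SomeGem
  class Client
    # Stub bodies are empty: only the shapes matter for static analysis.
    def initialize(url, timeout: nil); end
    def get(path); end
  end
end
```

Sorbet reads these files during `srb tc` but they're never loaded at runtime, which is why empty bodies are fine.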


Parlour

Parlour is a project that enables easy generation of RBI/RBS files (RBS being a newer format from Ruby core designed to supersede what Stripe did with RBI) in cases where code generation is a better fit than writing definitions by hand. It has a plugin system that lets any number of independent generators stay modular and live easily side-by-side.

We use it in a number of places, including generating RBI for our API endpoints and state machine strategies. For these we have mini in-house frameworks to cut down on repetitive boilerplate, which works well. However, since Sorbet's type checking happens before these frameworks generate methods at runtime, there would be no way without Parlour for Sorbet to know about those methods. This is an interesting contrast to the programming language Crystal, where macro expansion, which is nearly as powerful as Ruby's metaprogramming, happens before type checking. We use Crystal in many places where we don't need Ruby's REPL, but that's a story for another blog post.

But our most important use of Parlour is generating type-safe shims for database models.

We use the Sequel gem as our ORM for all database operations. Like its more popular cousin ActiveRecord, it doesn't require the fields of each table to be manually defined, instead introspecting the state of the database when it connects and using Ruby's broadly permissive runtime to dynamically define a getter and setter for everything it finds.
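
As a sketch of the mechanism (not Sequel's actual implementation), here's how an ORM can turn a schema discovered at connect time into accessors. `FAKE_SCHEMA` stands in for what Sequel reads from Postgres, and `set_dataset` mirrors the spirit of Sequel's macro of the same name:

```ruby
# Stand-in for the column information an ORM introspects from the database.
FAKE_SCHEMA = {
  clusters: %i[id name created_at],
}.freeze

class FakeModel
  # For each column in the table, define a getter and a setter at runtime.
  def self.set_dataset(table)
    FAKE_SCHEMA.fetch(table).each do |col|
      define_method(col) { @values[col] }
      define_method("#{col}=") { |value| @values[col] = value }
    end
  end

  def initialize
    @values = {}
  end
end

class Cluster < FakeModel
  set_dataset :clusters
end

cluster = Cluster.new
cluster.name = "my-cluster"
cluster.name # => "my-cluster"
```

Because these methods only come into existence when the program runs, a static analyzer that never connects to a database has no way of seeing them on its own.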

That works, but presents a challenge when it comes to type checking since none of that information is available in time for static analysis, which is where Parlour comes in. We have a plugin that examines the database schema and generates an RBI definition for our models. Some partial code:


class Plugin < Parlour::Plugin
  def generate(root)
    root.create_class(model_name, superclass: "Sequel::Model") do |k|
      model_schema = model.send(:get_db_schema)

      if model.primary_key.is_a? Symbol # exclude `Sem` which has multi-column pk
        pk_class = ruby_type model_schema[model.primary_key]
        args_types = [HSU]
        args_types << "Eid" if pk_class == "Eid"
        args_types << pk_class

        # primary key lookup, e.g. `Cluster[id]`
        k.create_method("[]",
          parameters: [Parlour::RbiGenerator::Parameter.new("args", type: "T.any(#{args_types.join(", ")})")],
          returns: "T.nilable(#{model_name})",
          class_method: true)
      end
    end
  end

  def ruby_type(attr)
    case attr[:db_type]
    when "platform", "replica_flavor", "cluster_flavor", "failover_flavor", "resize_flavor", "configuration_parameter_flavor", "role_flavor"
      # (mappings for our custom enum types elided)
    when "cidr"
      "T.any(NetAddr::IPv6Net, NetAddr::IPv4Net)"
    when "inet"
      "T.any(NetAddr::IPv6, NetAddr::IPv4)"
    when "timestamp with time zone"
      "Time"
    when "boolean"
      "T::Boolean"
    when "jsonb"
      # (mapping elided)
    when "uuid"
      "Eid"
    end
  end
end

RBI for a model comes out looking like this:

class Cluster < Sequel::Model
  class << self
    Elem = type_member(fixed: Cluster)
  end

  sig { returns(T::Array[Cluster]) }
  def self.all; end

  sig { returns(Sequel::Postgres::Dataset) }
  def self.dataset; end

  sig { returns(Cluster) }
  def self.first; end

  sig { returns(T.self_type) }
  def save; end

  sig { returns(Time) }
  attr_accessor :created_at

  sig { returns(Eid) }
  attr_accessor :id

  sig { returns(String) }
  attr_accessor :name
end


Now if I try to access a field that's not on a model, static analysis catches the problem immediately:

$ bundle exec srb tc
app/web/endpoints/get_cluster.rb:8: Method does_not_exist does not exist on Cluster
     8 |      cluster.does_not_exist
  Got Cluster originating from:
     7 |    def call(cluster)
Errors: 1

Sorbet also has a great LSP server that editors like VSCode and Neovim can easily use, so a mistake like this shows up immediately as a squiggly underline in your editor as well.

Custom types

Another advantage of generating type scaffolding for models is that it lets us use custom Ruby types where appropriate. We map most Postgres types to what you'd expect to find in Ruby (boolean becomes T::Boolean, text becomes String, etc.), but we have a few more exotic ones as well. For example, we map primary keys in our database which have an underlying type of uuid to a custom Eid class in Ruby -- this makes our code more type-safe (e.g. you can't accidentally compare an Eid to a String), and the class intelligently knows how to encode itself to its public-facing form if it's included in a string interpolation or sent through JSON.generate.
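
To make the idea concrete, here's a hypothetical sketch of what an Eid-style wrapper might look like (the real class is internal to our codebase, and the `eid_` prefix and method names here are assumptions for illustration):

```ruby
require "json"
require "securerandom"

# Hypothetical sketch of a uuid-wrapping identifier type. It never
# compares equal to a plain String, and it knows how to render its
# public-facing form for interpolation and JSON encoding.
class Eid
  attr_reader :uuid

  def initialize(uuid)
    @uuid = uuid
  end

  def self.generate
    new(SecureRandom.uuid)
  end

  # An Eid only ever equals another Eid, never a raw String.
  def ==(other)
    other.is_a?(Eid) && other.uuid == uuid
  end

  # Public-facing form, used automatically by string interpolation.
  def to_s
    "eid_#{uuid.delete("-")}"
  end

  # Hooks into JSON.generate so the public form is emitted automatically.
  def to_json(*args)
    to_s.to_json(*args)
  end
end
```

Compared to passing bare strings around, the wrapper turns "compared an id to the wrong thing" from a silent logic bug into a type error.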

Well-defined structures

One of the most annoying omissions from Ruby core is the lack of a data type that lets you define a strict set of fields that are available and what data types they're expected to be. Luckily, Sorbet does have this in the form of T::Struct which we make heavy use of.
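
As a rough stdlib analogy for the fixed-field half of what T::Struct provides (Ruby's Struct does no type checking; that part is what Sorbet adds), a keyword-initialized Struct rejects unknown fields instead of silently swallowing them the way a Hash would. The struct name and fields here are illustrative:

```ruby
# Rough stdlib analogy only: a fixed field set, but no type checks.
PortConfig = Struct.new(:host, :port, keyword_init: true)

config = PortConfig.new(host: "example.com", port: 5432)
config.port # => 5432

begin
  PortConfig.new(host: "example.com", prot: 5432) # typo'd field name
rescue ArgumentError
  # Unknown fields raise at construction time rather than vanishing.
end
```

T::Struct goes further by also validating each field's declared type at construction, which is what makes it a real answer to this gap.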

In many Ruby frameworks like Rails, it's common convention to access input parameters through a hash like request.params[:foo] and to send responses back as a Hash that's then encoded to JSON. That's workable for simple applications, but it doesn't play nicely with code that wants stronger type checking.

We implement every API endpoint with structs that represent its request and response. This gives us errors in the event of accessing or setting the wrong field, and the structs also act as a convenient reference for exactly what each request and response is supposed to look like.

module Endpoints
  class CreateLogger < Endpoint
    body_struct CreateLoggerRequest

    sig { params(cluster: Cluster).returns(Response) }
    def call(cluster)
      # ... look up or create `log_destination`; `created` indicates a new record (elided) ...

      respond(body: LoggerResponse.from_log_destination(T.must(log_destination)),
        status: created ? 201 : 200)
    rescue Sequel::ValidationFailed => e
      respond_error "Error creating logger: #{e.model.errors.full_messages.join("; ")}."
    end
  end
end
A sample request struct:

class CreateLoggerRequest < T::Struct
  const :id, T.nilable(Eid)
  const :host, String
  const :port, Integer
  const :template, String
  const :description, String
end

And a sample response struct:

class LoggerResponse < T::Struct
  extend T::Sig

  const :id, Eid
  const :host, String
  const :port, Integer
  const :template, String
  const :description, String
  const :cluster_id, Eid
  const :team_id, Eid

  sig { params(ld: LogDestination).returns(T.attached_class) }
  def self.from_log_destination(ld)
    new id: ld.id,
      host: ld.host,
      port: ld.port,
      template: ld.template,
      description: ld.description || "",
      cluster_id: ld.cluster_id,
      team_id: ld.cluster.team_id
  end
end

Avoiding monolithic updates with automation

A GitHub Actions job running on a once-a-week cron upgrades every gem in the project and opens a pull request for human review:

name: Update Gems

on:
  schedule:
    - cron: '0 17 * * 1' # 10am pdt / 9am pst

jobs:
  update-gems:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # (Ruby setup and app-token generation steps elided)

      - name: update
        run: bundle update

      - name: remove rbi
        run: rm sorbet/rbi/gems/*

      - name: regenerate rbi
        run: bin/tapioca gem --all --no-doc

      - name: Create Pull Request
        uses: peter-evans/create-pull-request@v3
        with:
          token: ${{ steps.generate-token.outputs.token }}
          title: "Update gems"
          body: "Update gems"
          branch: "update-gems"
          commit-message: "Update gems"
          author: "Bot <>"
          committer: "Bot <>"
          delete-branch: true
          base: main

Along with keeping us up to date on gems and any security fixes they might have received, the automated update also keeps fixes that have to be made for Sorbet's sake small and incremental rather than monolithic.

It's not unusual for a Sorbet update or an update to one of the Tapioca shims to introduce some minor compatibility problem that someone needs to resolve by hand. Because the updates are automated and land every week, these problems tend to stay small instead of a whole bunch of breakages piling up into a multi-week project that somebody has to plow through once a year.

Some tail risk

We should caveat that there is some amount of long term risk to using something like Sorbet given that it's a project that's almost entirely maintained by a single company, and one that seems to be investing more into Java toolchains these days at that. As much as we like Sorbet, we have to consider that there's a chance that it falls into disrepair as Stripe shifts focus away.

The good news is that Sorbet has been picked up by other external users since going open source, including some big ones like Shopify. We'd certainly prefer that Ruby's built-in type checking story were better so that we didn't have to rely on third party tools for such a core function, but we've decided to be optimistic: even if Stripe were to stop working on Sorbet, there'd likely be enough of a community to take over maintenance, or enough demand from ex-Sorbet users to collectively invest in an equivalent replacement with similar syntax.

One final concern with Sorbet is that it really feels like a Stripe project first and foremost, and that external users like us are merely tolerated. Take for example the previously mentioned T::Struct: it has many hidden features that you really need if you're going to use it seriously, but they remain undocumented and unsupported, likely because the class was lifted straight out of the main Stripe codebase when Sorbet was open sourced.

But while all of these things are less than ideal, Sorbet is still more useful than not, and we do get a lot of value out of it.

Sorbet makes change possible

Refactoring in Ruby is usually a risky prospect. You never have enough test coverage to be fully confident that you're not going to break anything, so the traditional strategy is to refactor slowly in small increments, or even to protect new code paths behind feature flags so they can be vetted with a small percent of traffic before taking them live. It works, but it's slow and a lot of work.

Sorbet with nearly every file opted into strict typing, along with a policy of 100% branch test coverage, allows us to make aggressive refactors to our production systems that would ordinarily just not be feasible on the vast majority of Ruby projects. By making large refactors possible, it means that we spend less time on this sort of project (i.e. it can be done in a single large change instead of ten smaller ones over a week), and more importantly, that we'll be more likely to engage in one -- when refactoring is difficult and risky, the overwhelming default will be that it doesn't happen. By refactoring as often as it's needed, we keep our code quality high, and in such a state where it's easier to add new features.

November 2, 2022