Assumed audience: People interested in learning—especially but not only math, programming languages, music, or engineering.
For the past week or so, I have been working my way through large swaths of the Rust async ecosystem, trying to orient myself somewhat as I prep to write a new chapter of The Rust Programming Language. In particular, I have been working through every part of the Tokio tutorial, actually implementing every single part of its example code myself and compiling and running it. Before that, I spent a good amount of time over the past five months working through Programming Languages: Application and Interpretation and Crafting Interpreters, along with parts of other texts about building programming languages. In all of these cases, I have made a point to type out every part of every example I work through in the text.1
The reason I do this is simple: it is the only way I actually learn this kind of material — the same as actually doing the work by hand on paper is the only way I learned physics or math or music analysis in my undergraduate studies. There are many fields where the only way to actually learn the subject is to do it. You can and should read books which are about the practice, too; you will learn different things from those. But very often, literally making your hands do the relevant physical motions is a key part of making your brain do the relevant intellectual motions.2
I was reminded of this last night, while teaching someone how binary and other counting systems work, as well as how and why binary is useful in computing. The person I was teaching was initially struggling to understand the counting system (0, 1, 10, 11, 100, …). Their existing mental model for base 10 number systems was misleading, because it was incomplete: They had never fully internalized what the “places” actually meant. They also had never learned that our choice of base 10 is arbitrary in a purely mathematical sense. (Both of these are true for most math students who haven’t done either programming or college-level math work!) That made it easy for them to jump to wrong conclusions about what should happen in a different base numbering system.
The way forward, I found, was to have them just write out the counts in different base systems: “normal” counting in base 10; 0, 1, 2, 3, 10, 11, 12, 13, 20… for base 4; and so on. Any time they made a mistake in one of the non-base-10 systems, I would gently stop them and point out the mismatch between what they were doing in that system vs. what they would have done at the same point in base 10. That in turn allowed us to reinforce, slowly but steadily, what the “places” are (2’s place, 4’s place, 10’s place, etc.) and therefore the idea that we are “spelling” the same count in different ways depending on how many digits we allow ourselves in our number system. Writing it out — and more than once — was the key ingredient that made the explanation stick.
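To make the “places” concrete, here is the kind of thing we ended up writing out — the same number, thirteen, “spelled” in three different bases (my illustration here, not a transcript of the lesson):

$$
\begin{aligned}
13_{10} &= 1 \cdot 10 + 3 \cdot 1 \\
31_{4}  &= 3 \cdot 4 + 1 \cdot 1 \\
1101_{2} &= 1 \cdot 8 + 1 \cdot 4 + 0 \cdot 2 + 1 \cdot 1
\end{aligned}
$$

All three right-hand sides come to the same count; only the number of digits available — and therefore what each place is worth — changes.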
That somewhat mechanical approach might seem unintuitive if you have not taught a subject like math or programming before: Would it not make more sense to explain from first principles? Sometimes, for some learners, yes, that can be helpful — but usually, in my experience, only once there is already enough intuition built up from correctly-targeted practice. Many programmers pride themselves on learning things “bottom-up”, from first principles or seeing how the implementation actually works — and indeed many of us do learn well that way. Very few of us started that way, though. Most of us started by typing things into a computer and seeing what worked — even if following some programming book.
You might think this comes down to the difference between a true beginner and a false beginner (a fairly standard idea in pedagogy; you can hear a great discussion of it on a recent episode of Software Unscripted). To some extent, that is on the right track: The true beginner has no mental models for the thing, and in my experience the only way to build a mental model like that is by experience. The false beginner, by contrast, already has some mental models for the domain and will lean on them when building up a new skill. When those mental models are wrong, though, they can be deeply misleading. Quite often the best way to unlearn a wrong mental model is likewise to combine instruction on the correct (or at least a more correct) mental model with actual practice that can help expose the gaps or misalignments in the existing one.
Combining concrete practice with explanations of mental models is almost unreasonably effective. This is why memorizing and practicing your multiplication tables (or, for that matter, basic addition and subtraction) is actually helpful, if also insufficient, for developing other mathematical skills. Growing up, I learned most of my math up through high school geometry from Saxon — much beloved of homeschoolers everywhere. It uses the time-hallowed tradition of starting with rote repetition of important skills followed by explanation of the principles. As a result, I had the quadratic formula memorized “by accident” by the time the explanation arrived, and the explanation stuck because I was not trying to memorize the formula and what it meant at the same time. You can sometimes flip the order around or approach model and mechanics together. You really do need both, though.3
Returning to the examples I opened with: this is why I typed out every single line of code in PLAI and Crafting Interpreters. I knew that I could read about programming language interpretation as much as I wanted, but it would not stick in a practical way — that is, in a way that is actually usable in the real world — unless I actually typed it out.
Given how I approached those books, that has meant doing it in three different languages: Racket, TypeScript, and Rust. ↩︎
This is no less true of programming and its use of the keyboard than of writing longhand, though I also find that different kinds of thinking are best done in different media. I find paper preferable for working out algorithms, or working out the ideas for essays, or writing poetry. I do not find paper preferable for designing data structures, or for writing the substance of long-form essays. Much the inverse is true of typing. ↩︎
People raised purely on rote repetition sometimes complain about Common Core/“new math”, and sometimes that complaint is justified, though sometimes it is just down to the unfamiliarity of the approach. In my experience working with both of my daughters as they have gone through elementary school, though, I have been delighted to find at least one Common Core-targeted curriculum with a solid balance of work. Some of the exercises are designed to build mental models; others are the kind of rote repetition that is necessary to “drill home” those models. ↩︎
Assumed audience: Other people in the Rust ecosystem (or interested lookers-on), especially those in a position to improve on the current baseline.
Epistemic status: Experiential.
I have spent a good part of the last week or so trying to get my head around as much of the Rust async ecosystem as I can, as part of working on a new chapter for The Rust Programming Language on the subject (!). There is a lot to like. There are also some bumps and hurdles, though. It feels harder to learn than other parts of Rust — not in the sense that the ideas themselves are harder than the other parts, but in that there is not a single coherent thing to learn. Instead, there is a core set of ideas and then a total mishmash of an ecosystem which is required to use those ideas.
This claim from the async-std book is interesting in this context:
Rust Futures have the reputation of being hard. We don’t think this is the case. They are, in our opinion, one of the easiest concurrency concepts around and have an intuitive explanation.
However, there are good reasons for that perception. Futures have three concepts at their base that seem to be a constant source of confusion: deferred computation, asynchronicity and independence of execution strategy.
One thing this misses: it is also hard because there are a lot of options in the space. “Just use Tokio” is a really good and reasonable default as far as I can tell, but the lack of opinions and clear documentation on what to do from the Rust project (as well as the mixed story around maturity/stability from many of these) makes it substantially harder for people to get their heads around.
Here’s a prime example: you cannot do non-blocking I/O without adding some library:
- tokio::fs
- async-fs, also available via the smol subcrate (smol::fs)
- async_std::fs
This also causes significant complexity for people learning: there is no “progressive disclosure of complexity” here. Instead, you have to get your head around a bunch of pieces before you can do anything meaningful. At a minimum, you have to pick a runtime, and that immediately prompts you to ask: “But what runtime do I pick? What are the differences?” That in turn immediately exposes you to all of the complexity in the space.
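To make “pick a runtime” concrete, here is a minimal sketch of about the smallest useful async program, assuming Tokio as the runtime (with tokio = { version = "1", features = ["full"] } in Cargo.toml); nothing here compiles without choosing some such crate:

use tokio::fs;

// The runtime choice, baked in right at the entry point.
#[tokio::main]
async fn main() -> std::io::Result<()> {
    // Non-blocking file I/O: not possible with the standard library alone.
    let contents = fs::read_to_string("Cargo.toml").await?;
    println!("{contents}");
    Ok(())
}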
I actually agree with the claim from the async-std book that the basic design for Future is reasonably straightforward and makes a lot of sense.1 However, there is no other part of Rust where the standard library gives you so little to work with. This is particularly noteworthy in contrast with other languages working on concurrency (especially structured concurrency, as in Swift), where there is a built-in “runtime”.2 Rust has a good reason to support multiple runtimes, but the difficulty level is dramatically increased for someone just getting started.
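It helps to see just how small that core really is. This is the actual definition of the trait from std::future, lightly abridged (the real source carries a few extra attributes):

use std::pin::Pin;
use std::task::{Context, Poll};

pub trait Future {
    type Output;

    // Called by the executor; returns Poll::Pending until the value is
    // ready, then Poll::Ready(value) exactly once.
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output>;
}

Everything else — spawning, timers, I/O, even running a future to completion — lives outside the standard library.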
Rust’s general approach is not to paper over complexity, but to try to (a) expose it in reasonable ways, (b) improve on the state of the art where possible, and (c) provide a good developer experience. Here, I think Rust is succeeding reasonably well at (a), and in part at (b) in terms of mechanics and implementation — but not much at all on (c), especially the “out of the box” usability. At the end of the day, this comes down to the fact that there is no default.
I understand why. What defaults do you pick, and why, and how does that impact the ecosystem? (Does picking something imply other approaches are bad — even if no one intends that?) But: it is still a problem from the perspective of someone trying to learn this part of the language.
The difference between can choose and must choose is really, really significant. We should figure out what it would look like to ship a reasonable default executor so that the out of the box experience is good, and people can opt into other choices when they need them.
Of course, having a good chapter in the official book might also help, so I will now get back to working on that.
I have some still-developing thoughts on how the laziness of the Future type is related to some of the difficulties people have, even though that laziness is well-motivated. Much of Rust is an eager language, but there are key exceptions like Iterator, and people do not seem to struggle with those. More to mull on here. ↩︎
Swift implemented support for custom executors in its most recent release, Swift 5.9, with similar aims to how Rust has approached the space, but notably Swift ships a default executor out of the box, and only allows opt-in custom executors for performance. This is exactly the right choice for Swift, and not a perfect match for what Rust would need, but it is suggestive of the right direction. ↩︎
Assumed audience: Other Christians interested in church music.
I have been mulling on this post about church music culture since a friend sent it to me a couple days ago, and… I have some disagreements. Buckle up; this is an old-school fisking/bloggy rant. (You should read that post first; none of the rest of this will make any sense otherwise.)
First, for the really snarky take: if you want to know the result of following Ahern’s advice, go to any megachurch and sit through the service. You’ll get a coherent musical culture that hasn’t spent its time and energy trying to connect with the musical traditions of the past! That is: Bethel and Hillsong and Chris Tomlin. That is not my taste, and I suspect it isn’t the author’s, either… but it is exactly what he asked for!
For the less snarky take, though, a few points:
First, what he’s suggesting is simply not possible in practice, even if it were good (on which see below), given that we live in the age and era we do. We cannot create out of thin air a culture that doesn’t imbibe from all the surrounding and historical context, in a world which is saturated by that context. Trying to shut out the surrounding world to create our own little culture by avoiding pulling on it simply will not work: that is not how artistic creation ever works. Even the attempt would be a sort of LARPing of the thing. This is true of attempts to recover the premodern worldview as a person living in modernity in general: the conscious attempt to do now what was unconsciously done then is itself expressly and inescapably modern.
Moreover, to the degree it was possible for the folks he mentioned, it was possible mostly because of the constraints they were operating under. They simply did not have access — much though they often wished they did! — to the music of earlier generations. It took the advent of print to start changing that, and generations of the existence of print before it fully altered the way people learn music. Bach would almost certainly have loved to have access both to a deeper historical well and to a broader set of musical forms and influences to draw on, if his extensive use of contemporaneous forms is anything to go by. (The author half-acknowledges the latter, but since it militates quite strongly against his point, breezes by it — rather too quickly!)
The example he gives is profoundly flawed, for exactly this reason. The judgment comparing it to music from 40 years earlier is almost certainly in the “largely meritless and almost certainly spoken out of relative ignorance” bucket, given the extremely sharp limits of musical transmission across generations in the pre-print era. Elizabeth Eisenstein’s The Printing Press as an Agent of Change is very, very good on the way the press impacted all these sorts of things because of its impact on material culture, and also does a great job tracing how very long the reverberations took to shake out in culture at large.
Second, the example he gives is… honestly kind of silly in terms of supporting his point. It is extremely neat from a musical technique point of view, but has more or less nothing to do with musical culture. Notice that he describes not what the members of the congregation would be doing but what the professional musicians would be doing. It is sort of akin to someone writing with astonishment, in some hypothetical (horrific) future where all music is electronically generated, of the incredible fact that back in the past [our time] four people could read dots and lines on a page and from them play instruments with strings they mash down with their fingers and rub horse hair over them, in ways that the congregation found delightfully beautiful, all without any kind of director or computerized time-keeping. (That is: a string quartet.) Well… when you put it that way, yes, it is indeed astonishing, but then professional musicians, or even good amateur musicians, do always tend to be, if only we stop to notice. We only miss it because of the fact that it is our musical culture, and so we are inured to how very remarkable it often is. Nor is the fact that people in the Middle Ages liked having professional musicians, or at least good amateurs, particularly surprising. After all, so do we.
Third, both his example of those professional musicians and his reference to Bach are woefully misguided when it comes to the idea that we should want a musical culture that is intentionally trying to tune out the history or the surrounding world. The list of latter examples he gives makes this point for me: what exactly is Gospel music if not the express synthesis of earlier cultural forms? Pitting local and contemporary against global and past makes much the same kind of mistake as many critiques of “cultural appropriation”: it fails to understand that while yes, we should value local culture, it is always — always — in conversation with and borrowing from and remixing artistic moves from elsewhere and elsewhen. That does not mean that hyper-globalized, de-localized, de-contextualized art is all we should aspire to (though I think those descriptors often hide more complexity than critics who throw them around might grant). But there is no local, contextual art that is not in conversation with other places and times.
Long story short, I think the musical abilities described in the piece are delightful and a wonderful historical curiosity. It would even be interesting to see some group revive the technique! But that has nothing whatsoever to do with what the church ought to do or even can do in its practices.
I had a great time talking with Adam Gordon Bell over on CoRecursive about my time at LinkedIn — and why I left. Obviously we had to cut tons of details for time, but I think Adam did a great job cutting to the heart of it. What we build, why we build it, and yes, how we build it all really matter.
Assumed audience: People interested in contemporary classical music. Perhaps extra interesting if you are curious about the compositional process.
I am delighted to announce that you can listen to a new piece of music I wrote: The Desert, for solo piano:
“The Desert” took me just under 48 hours. I have never done anything quite like it before.
On Sunday, February 18, I had the idea for a simple piece for the Christian fast of Lent, recalling the story of Jesus’ spending 40 days and nights fasting in the desert. I sat down at the piano and found the core musical motif — a descending tritone, followed by a resolution by a further half step down, very spread out in time. The basic effect is lonely: a great sonic emptiness due to the long-lingering notes, and not exactly a traditional tonal move, but beautiful.
The next evening (Monday, February 19), I wrote the rest of it. The core musical idea was incredibly simple: 40 key strokes for the 40 days of Jesus’ fasting in the desert, and thus for the 40 days of Lent. I say “key strokes” rather than “notes” because there are a handful of two-note chords in the piece — but only a handful. My process was a simple cycle: playing, listening to the effect, and, when I was satisfied with both the notes and their duration, entering them into Dorico. Throughout, I kept circling back to the tritone, though in a few cases switching to a rising rather than falling tritone, but also pulled in a variety of other similarly open-and-empty-sounding motions. This cycle of playing-and-listening was, in a way that is rather unusual for how I have historically worked, rather meditative. It reminded me of — and so I began expressly drawing on — the “holy minimalism” of composers like Arvo Pärt and John Tavener. This is not directly like anything either of them wrote, but it is in much the same vein. With that experience in mind, I added a playing instruction to the finished score:
Played as though listening for what the next note should be.
I effectively wrote this in “free time”: no time signature. I gave each note the number of beats it sounded like it should have. Tempo-wise, I simply chose a fairly slow tempo by feel when I started. There is, quite intentionally, little sense of rhythmic structure to the piece. It moves forward, but in the way that 40 days in the wilderness might pass: never quickly, but sometimes more quickly and sometimes less. Most of the notes are quarter notes, sometimes staccato and sometimes not. Occasionally they stretch out to be as long as held whole notes. The sustain pedal is held throughout save for one series of moving lines near the middle of the piece. The interaction between the pedal and how long you hold the keys is a subtle effect on a piano, but a real one: while the hammers are no longer hitting the strings, neither do you hear the dampers come back to their resting position.1 I went back and forth while writing whether to notate in free time or measured time, and I actually ultimately produced both versions of the score. The measured version — with its constant shifting around 10/4, 8/4, 9/4, etc. — is far easier to perform because the counts between notes are very obvious. At the same time, the free time version does a better job of indicating the feel of the piece.
(Just so we are extra clear, since I cannot stop you from downloading these PDFs: you are free to play the piece for friends and family yourself. You are not free to perform it for money, or record it without sub-licensing it from me. However, if you would like to do either, get in touch and I will be happy to work something out!)
If you’d like a deep dive on it musically, I have you covered there:
I got up the next morning (Tuesday, February 20), and after helping my wife get our girls out the door, set up a little home recording “studio”. Scare quotes because the studio consisted of a microphone at our upright piano in our living room. I ran the microphone straight into my Mac, and set up Logic Pro with a tempo map produced by exporting a MIDI file from Dorico. Then I did four or five takes, until I had one I was satisfied with.2 That afternoon, I hopped on a call with my friend and former colleague Bryan Levay, and he showed me how he approaches mixing and mastering. With a finished recording in hand, I uploaded it for distribution.
And here we are today: with the music live in the world. Please listen to it, and if you like it, share it with others who might enjoy it as well — I would love to hear from you, too!
There are also very subtle differences in the actual sound of the notes, due to the different placement of the hammers within the body of the piano and the tiny changes that produces in the resonant chamber that is a piano body. But those are very subtle. ↩︎
I tried using Logic’s support for takes, but it ended up not working very well for this kind of very ambient music — at least, with my skill level in both playing and recording. ↩︎
Gut feelings are not a particularly good way of evaluating the outside world. Gut feelings are a reflection of all our biases and intuitions and heuristics. Those all grow, largely unconsciously, out of our experiences. Our experiences are our own; they do not generalize. The phrase “the plural of anecdote is not data” describes all the reasons gut feelings are not reliable for evaluating the world around us. Although we do pick up on things subconsciously, our intuitive responses are also easily misled. Wisdom demands we discount those gut feelings fairly sharply — though certainly not entirely — against whatever actual facts we gather.
Gut feelings are still useful, though. They tell us about ourselves. When you get cold feet about something, that is really useful information about your own thinking and — especially — your own emotions. The same when you get almost-unreasonably enthusiastic about something. When evaluating a choice where the facts alone do not make an obvious choice, gut responses are helpful not in spite of but precisely because they tell us about our biases and intuitions and heuristics. That kind of information about ourselves is often hard to surface purely by introspection. Here, they are more valuable, though still far from infallible.
A gut feeling does not resolve a hard question, of course. It can, however, be a good prompt for, and provide helpful insight for, further consideration.
Assumed audience: Other serious (“recreationally competitive”) athletes interested in Garmin’s training analysis features.
I picked up a Garmin Forerunner 255 last year,1 and have kept a curious eye on its Training Status feature while training for a few races over that time. Unfortunately, I do not find it particularly useful. There are several significant problems with how it interprets training data in my experience to date.
First, over the same span, I also started using the 80/20 Endurance training plans, which emphasize an 80%/20% mix of foundation work to higher-intensity work.2 Theoretically, Garmin’s Training Status feature ought to work really well with this. It breaks down your training load into one of three categories: anaerobic, high aerobic, and low aerobic. The anaerobic and high aerobic buckets should both map directly to 80/20’s high-intensity zones (X, 3, Y, 4, and 5), with the low aerobic bucket mapping to the low-intensity zones (1 and 2). In practice, the Training Status report gets very confused by the actual runs I do using the 80/20 plans.
As far as I can tell, this is because Training Status is designed to give every run a single score, and only breaks down the effect of the run between aerobic and anaerobic. That is: if you take a run that is a mix of sprint-type intervals and a long, slow warm-up and cool-down, Garmin might characterize that as contributing to both aerobic and anaerobic buckets. What it will not do is characterize a single run as contributing to both low and high aerobic buckets, even though many runs are designed to do just that. The 80/20 plans in particular nearly always include a mix of easy Zone 1 warm-ups and Zone 2 base mileage before switching over to tempo work, intervals, etc., and often also include a long cooldown. That kind of run is doing multiple things for your system in terms of physical performance, but Garmin seems to treat the whole run as “high aerobic” once it crosses some (relatively low!) percentage threshold of the run in Zone 3 or higher. Once it does that, the entire run is treated as increasing high aerobic load and high aerobic load only. The 80/20 plans incorporate foundation work into nearly every run, though, rather than having specific runs which are purely for speed or tempo work. Net, Garmin substantially mischaracterizes the runs in a way it would not if I broke them into three separate runs for the warm-up, tempo or interval work, and cooldown. I am not doing that because it would be a huge pain.3
Second, it is weirdly inconsistent about how it characterizes runs and rides. I regularly see it categorize an hour-long ride where I spent the whole time solidly in Zone 2 for power and much of it in Zone 1 for heart rate as a tempo ride. From time to time I have seen it do the same with long runs with similar heart rate and power zone breakdowns — including times where I have never even come close to the top of Zone 2 for heart rate. Despite the fact that power is all over the place — there is literally nowhere I can run around here that does not include 200+ feet of climb and descent, and keeping running power perfectly “in zone” on those kinds of hills is difficult to say the least — I have seen those results even on runs where I carefully kept my power output consistently in Zone 2 as well. I have repeatedly checked that the Garmin heart rate and power zones are set reasonably, but… it does not matter. This results in Garmin consistently mis-categorizing even more of my runs as being in its “high aerobic” bucket.
Finally, Garmin simply does not seem to adjust for the impact of things like temperature on performance. My Training Status reports have asserted that I am “Maintaining” or “Unproductive” since the start of January, despite the fact that I have made significant progress on benchmark exercises in ways that are really obvious to me. The reason? It has been really dang cold out, because I live in Colorado. Oftentimes, there has been snow on the ground which I have been dodging. I am “not improving” on Garmin’s estimation of my V̇O2 max… because I am spending an enormous amount of my physical energy staying warm, carrying around a bunch of extra pounds of gear to help me literally not freeze, and dodging ice. It turns out that my runs under those conditions do not in fact get faster compared to a baseline of runs in beautiful weather.
I wish Training Status understood that many workouts have multiple phases to them. In many cases, much of the underlying data is already there, given I am using “structured workouts” which carry along automatic sectioning data. Likewise, I wish it adjusted for weather, and had the ability to supply information about conditions like ice and snow and slush4 — the first part of which is also already possible, since Garmin pulls weather data to associate with activities already! I wish, in sum, that Training Status were smart enough to be useful to me — but as of today, I just ignore it.
Annoyingly, I picked it up only a month or so before the Forerunner 265 came out, and I would definitely prefer the 265… but I was not about to turn around and buy a new watch again right after upgrading for the first time in almost five years! ↩︎
This is the same basic approach to training I have used for many years, but (a) with a bit more rigor than the Maffetone age-based heart-rate guide and (b) some nice structured plans I have really enjoyed after more than a decade of building my own plans. I highly recommend the 80/20 materials overall. My one caveat is that their lactate threshold heart rate test formula for a 20-minute test seems to be pretty substantially wrong. Do the test the way they say… and then do the traditional 95% adjustment to get a much more correct number. ↩︎
I have seen other athletes I follow on Strava doing this, though I suspect that is mostly for ease of their own personal analysis. ↩︎
You can work around this to a degree by using the “Trail Running” mode, or a mode copied from it, since Garmin does not factor those activities into its V̇O2 max calculations… but that is a hack, and it should not be necessary. ↩︎
Assumed audience: People, especially engineers, who might be unpersuaded about the value of investing in improving their personal productivity and velocity on things like writing.
I have been thinking about Dan Luu’s post Some reasons to work on productivity and velocity for years.
I certainly agree that working on the right thing is important, but increasing velocity doesn’t stop you from working on the right thing. If anything, each of these is a force multiplier for the other. Having strong execution skills becomes more impactful if you’re good at picking the right problem and vice versa.
An example:
One thing writing quickly unlocks for me is the ability to use code review as a major teaching tool. At a former job, some colleagues would actually go stalk my GitHub to learn from my reviews! That was because those reviews were not just “LGTM” or “Please do __ instead.” They came with explanations of why something was better, or questions about someone’s goals. I often outlined tradeoffs in approach. I would leave links to other relevant materials for them to read.
If that sounds like it would take far too long, well… no, because I have practiced writing quickly and clearly for literally decades now. I could review a non-trivial change and give it non-trivial feedback in far less time than you might expect.
Takeaway — engineers, do yourself and all of your teammates a huge favor and learn how to write quickly and cogently. That means practicing it! But the dividends are huge.
I posted a version of this to social media yesterday. But social media is ephemeral, so I always try to pull anything substantive like this back over into my website.
Assumed audience: People who have worked with Git or other modern version control systems like Mercurial, Darcs, Pijul, Bazaar, etc., and have at least a basic idea of how they work.
Jujutsu is a new version control system from a software engineer at Google, where it is on track to replace Google’s existing version control systems (historically: Perforce, Piper, and Mercurial). I find it interesting both for the approach it takes and for its careful design choices in terms of both implementation details and user interface. It offers one possible answer to a question I first started asking most of a decade ago: What might a next-gen version control system look like — one which actually learned from the best parts of all of this generation’s systems, including Mercurial, Git, Darcs, Fossil, etc.?
To answer that question, it is important to have a sense of what those lessons are. This is trickier than it might seem. Git has substantially the most “mind-share” in the current generation; most software developers learn it and use it not because they have done any investigation of the tool and its alternatives but because it is a de facto standard: a situation which arose in no small part because of its “killer app” in the form of GitHub. Developers who have been around for more than a decade or so have likely seen more than one version control system — but there are many, many developers for whom Git was their first and, so far, last VCS.
The problems with Git are many, though. Most of all, its infamously terrible command line interface results in a terrible user experience. In my experience, very few working developers have a good mental model for Git. Instead, they have a handful of commands they have learned over the years: enough to get by, and little more. The common rejoinder is that developers ought to learn how Git works internally — that everything will make more sense that way.
This is nonsense. Git’s internals are interesting on an implementation level, but frankly add up to an incoherent mess in terms of a user mental model. This is a classic mistake for software developers, and one I have fallen prey to myself any number of times. I do not blame the Git developers for it, exactly. No one should have to understand the internals of the system to use it well, though; that is a simple failure of software design. Moreover, even those internals do not particularly cohere. The index, the number of things labeled “-ish” in the glossary, the way that a “detached HEAD” interacts with branches, the distinction between tags and branches, the important distinctions between commits, refs, and objects… It is not that any one of those things is bad in isolation, but as a set they do not amount to a mental model I can describe charitably. Put in programming language terms: One of the reasons the “surface syntax” of Git is so hard is that its semantics are a bit confused, and that inevitably shows up in the interface to users.
Still, a change in a system so deeply embedded in the software development ecosystem is not cheap. Is it worth the cost of adoption? Well, Jujutsu has a trick up its sleeve: there is no adoption cost. You just install it — brew install jj will do the trick on macOS — and run a single command in an existing Git repository, and… that’s it. (“There is no step 3.”) I expect that mode will always work, even though there will be a migration step at some point in the future, when Jujutsu’s own, non-Git backend becomes a viable — and ultimately the recommended — option. I am getting ahead of myself though. The first thing to understand is what Jujutsu is, and is not.
Jujutsu is two things:
It is a new front-end to Git. This is by far the less interesting of the two things, but in practice it is a substantial part of the experience of using the tool today. In this regard, it sits in the same notional space as something like gitoxide. Jujutsu’s jj is far more usable for day to day work than gitoxide’s gix and ein so far, though, and it also has very different aims. That takes us to:
It is a new design for distributed version control. This is by far the more interesting part. In particular, Jujutsu brings to the table a few key concepts — none of which are themselves novel, but the combination of which is really nice to use in practice.
The combo of those means that you can use it today in your existing Git repos, as I have been for the past six months, and that it is a really good experience using it that way. (Better than Git!) Moreover, given it is being actively developed at and by Google for use as a replacement for its current custom VCS setup, it seems like it has a good future ahead of it. Net: at a minimum you get a better experience for using Git with it. At a maximum, you get an incredibly smooth and shallow on-ramp to what I earnestly hope is the future of version control.
Jujutsu is not trying to do every interesting thing that other Git-alternative DVCS systems out there do. Unlike Pijul, for example, it does not work from a theory of patches such that the order changes are applied is irrelevant. However, as I noted above and show in detail below, jj does distinguish between changes and revisions, and has first-class support for conflicts, which means that many of the benefits of Pijul’s handling come along anyway. Unlike Fossil, Jujutsu is also not trying to be an all-in-one tool. Accordingly: It does not come with a replacement for GitHub or other such “forges”. It does not include bug tracking. It does not support chat or a forum or a wiki. Instead, it is currently aimed at just doing the base VCS operations well.
Finally, there is a thing Jujutsu is not yet: a standalone VCS ready to use without Git. It supports its own, “native” backend for the sake of keeping that door open for future capabilities, and the test suite exercises both the Git and the “native” backend, but the “native” one is not remotely ready for regular use. That said, this one I do expect to see change over time!
One of the really interesting bits about picking up Jujutsu is realizing just how weirdly Git has wired your brain, and re-learning how to think about how a version control system can work. It is one thing to believe — very strongly, in my case! — that Git’s UI design is deeply janky (and its underlying model just so-so); it is something else to experience how much better a VCS UI can be (even without replacing the underlying model!).
Time to become a Jedi Knight. Jujutsu Knight? Jujutsu Master? Jujutsu apprentice, at least. Let’s dig in!
That is all interesting enough philosophically, but for a tool that, if successful, will end up being one of a software developer’s most-used tools, there is an even more important question: What is it actually like to use?
Setup is painless. Running brew install jj did everything I needed. As with most modern Rust-powered CLI tools,1 Jujutsu comes with great completions right out of the box. I did make one post-install tweak, since I am going to be using this on existing Git projects: I updated my ~/.gitignore_global to ignore .jj directories anywhere on disk.2
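That tweak is a single line, for reference (standard gitignore pattern syntax, which matches the directory at any depth):

# in ~/.gitignore_global
.jj/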
Using Jujutsu in an existing Git project is also quite easy.3 You just run jj git init --git-repo <path to repo>.4 That’s the entire flow. After that you can use git and jj commands alike on the repository, and everything Just Works™, right down to correctly handling .gitignore files. I have since run jj git init in every Git repository I am actively working on, and have had no issues in many months. It is also possible to initialize a Jujutsu copy of a Git project without having an existing Git repo, using jj git clone, which I have also done, and which works well.
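Spelled out, the two on-ramps look like this (the clone URL is just a placeholder):

# take over an existing Git repo, in place
$ cd my-project
$ jj git init --git-repo .

# or clone a Git remote directly
$ jj git clone https://github.com/example/project.git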
Once a project is initialized, working on it is fairly straightforward, though there are some significant adjustments required if you have deep-seated habits from Git!
One of the first things to wrap your head around when first coming to Jujutsu is its approach to its revisions and revsets, i.e. “sets of revisions”. Revisions are the fundamental elements of changes in Jujutsu, not “commits” as in Git. Revsets are then expressions in a functional language for selecting a set of revisions. Both the idea and the terminology are borrowed directly from Mercurial, though the implementation is totally new. (Many things about Jujutsu borrow from Mercurial — a decision which makes me quite happy.) The vast majority of Jujutsu commands take a --revision/-r option to select a revision. So far that might not sound particularly different from Git’s notion of commits and commit ranges, and they are indeed similar at a surface level. However, the differences start showing up pretty quickly, both in terms of working with revisions and in terms of how revisions are a different notion of change than a Git commit.
The first place you are likely to experience how revisions and revsets are different — and neat! — is with the log command, since looking at the commit log is likely to be something you do pretty early in using a new version control tool. (Certainly it was for me.) When you clone a repo and initialize Jujutsu in it and then run jj log, you will see something rather different from what git log would show you — indeed, rather different from anything I even know how to get git log to show you. For example, here’s what I see today when running jj log on the Jujutsu repository, limiting it to show just the last 10 revisions:
> jj log --limit 10
@ ukvtttmt hello@chriskrycho.com 2024-02-03 09:37:24.000 -07:00 1a0b8773
│ (empty) (no description set)
◉ qppsqonm essiene@google.com 2024-02-03 15:06:09.000 +00:00 main* HEAD@git bcdb9beb
· cli: Move git_init() from init.rs to git.rs
· ◉ rzwovrll ilyagr@users.noreply.github.com 2024-02-01 14:25:17.000 -08:00
┌─┘ ig/contributing@origin 01e0739d
│ Update contributing.md
◉ nxskksop 49699333+dependabot[bot]@users.noreply.github.com 2024-02-01 08:56:08.000
· -08:00 fb6c834f
· cargo: bump the cargo-dependencies group with 3 updates
· ◉ tlsouwqs jonathantanmy@google.com 2024-02-02 21:26:23.000 -08:00
· │ jt/missingop@origin missingop@origin 347817c6
· │ workspace: recover from missing operation
· ◉ zpkmktoy jonathantanmy@google.com 2024-02-02 21:16:32.000 -08:00 2d0a444e
· │ workspace: inline is_stale()
· ◉ qkxullnx jonathantanmy@google.com 2024-02-02 20:58:21.000 -08:00 7abf1689
┌─┘ workspace: refactor for_stale_working_copy
◉ yyqlyqtq yuya@tcha.org 2024-01-31 09:40:52.000 +09:00 976b8012
· index: on reinit(), delete all segment files to save disk space
· ◉ oqnvqzzq martinvonz@google.com 2024-01-23 10:34:16.000 -08:00
┌─┘ push-oznkpsskqyyw@origin 54bd70ad
│ working_copy: make reset() take a commit instead of a tree
◉ rrxuwsqp stephen.g.jennings@gmail.com 2024-01-23 08:59:43.000 -08:00 57d5abab
· cli: display which file's conflicts are being resolved
Here’s the output for the same basic command in Git — note that I am not trying to get a similar output from Git, just asking what it shows by default (and warning: wall of log output!):
> git log -10
commit: bcdb9beb6ce5ba625ae73d4839e4574db3d9e559 HEAD -> main, origin/main
date: Mon, 15 Jan 2024 22:31:33 +0000
author: Essien Ita Essien
cli: Move git_init() from init.rs to git.rs
* Move git_init() to cli/src/commands/git.rs and call it from there.
* Move print_trackable_remote_branches into cli_util since it's not git specific,
but would apply to any backend that supports remote branches.
* A no-op change. A follow up PR will make use of this.
commit: 31e4061bab6cfc835e8ac65d263c29e99c937abf
date: Mon, 8 Jan 2024 10:41:07 +0000
author: Essien Ita Essien
cli: Refactor out git_init() to encapsulate all git related work.
* Create a git_init() function in cli/src/commands/init.rs where all git related work is done.
This function will be moved to cli/src/commands/git.rs in a subsequent PR.
commit: 8423c63a0465ada99c81f87e06f833568a22cb48
date: Mon, 8 Jan 2024 10:41:07 +0000
author: Essien Ita Essien
cli: Refactor workspace root directory creation
* Add file_util::create_or_reuse_dir() which is needed by all init
functionality regardless of the backend.
commit: b3c47953e807bef202d632c4e309b9a8eb814fde
date: Wed, 31 Jan 2024 20:53:23 -0800
author: Ilya Grigoriev
config.md docs: document `jj config edit` and `jj config path`
This changes the intro section to recommend using `jj config edit` to
edit the config instead of looking for the files manually.
commit: e9c482c0176d5f0c0c28436f78bd6002aa23a5e2
date: Wed, 31 Jan 2024 20:53:23 -0800
author: Ilya Grigoriev
docs: mention in `jj help config edit` that the command can create a file
commit: 98948554f72d4dc2d5f406da36452acb2868e6d7
date: Wed, 31 Jan 2024 20:53:23 -0800
author: Ilya Grigoriev
cli `jj config`: add `jj config path` command
commit: 8a4b3966a6ff6b9cc1005c575d71bfc7771bced1
date: Fri, 2 Feb 2024 22:08:00 -0800
author: Ilya Grigoriev
test_global_opts: make test_version just a bit nicer when it fails
commit: 42e61327718553fae6b98d7d96dd786b1f050e4c
date: Fri, 2 Feb 2024 22:03:26 -0800
author: Ilya Grigoriev
test_global_opts: extract --version to its own test
commit: 42c85b33c7481efbfec01d68c0a3b1ea857196e0
date: Fri, 2 Feb 2024 15:23:56 +0000
author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
cargo: bump the cargo-dependencies group with 1 update
Bumps the cargo-dependencies group with 1 update: [tokio](https://github.com/tokio-rs/tokio).
Updates `tokio` from 1.35.1 to 1.36.0
- [Release notes](https://github.com/tokio-rs/tokio/releases)
- [Commits](https://github.com/tokio-rs/tokio/compare/tokio-1.35.1...tokio-1.36.0)
---
updated-dependencies:
- dependency-name: tokio
dependency-type: direct:production
update-type: version-update:semver-minor
dependency-group: cargo-dependencies
...
Signed-off-by: dependabot[bot]
commit: 32c6406e5f04d2ecb6642433b0faae2c6592c151
date: Fri, 2 Feb 2024 15:22:21 +0000
author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
github: bump the github-dependencies group with 1 update
Bumps the github-dependencies group with 1 update: [DeterminateSystems/magic-nix-cache-action](https://github.com/determinatesystems/magic-nix-cache-action).
Updates `DeterminateSystems/magic-nix-cache-action` from 1402a2dd8f56a6a6306c015089c5086f5e1ca3ef to eeabdb06718ac63a7021c6132129679a8e22d0c7
- [Release notes](https://github.com/determinatesystems/magic-nix-cache-action/releases)
- [Commits](https://github.com/determinatesystems/magic-nix-cache-action/compare/1402a2dd8f56a6a6306c015089c5086f5e1ca3ef...eeabdb06718ac63a7021c6132129679a8e22d0c7)
---
updated-dependencies:
- dependency-name: DeterminateSystems/magic-nix-cache-action
dependency-type: direct:production
dependency-group: github-dependencies
...
Signed-off-by: dependabot[bot]
What’s happening in the Jujutsu log output? Per the tutorial’s note on the log command specifically:

By default, jj log lists your local commits, with some remote commits added for context. The ~ indicates that the commit has parents that are not included in the graph. We can use the -r flag to select a different set of revisions to list.
What jj log does show by default was still a bit non-obvious to me, even after that. Which remote commits added for context, and why? The answer is in the help output for jj log’s -r/--revisions option:

Which revisions to show. Defaults to the ui.default-revset setting, or @ | ancestors(immutable_heads().., 2) | heads(immutable_heads()) if it is not set
I will come back to this revset in a moment to explain it in detail. First, though, this shows a couple other interesting features of Jujutsu’s approach to revsets and thus the log command. First, it treats some of these operations as functions (ancestors(), immutable_heads(), etc.). There is a whole list of these functions! This is not a surprise if you think about what “expressions in a functional language” implies… but it was a surprise to me because I had not yet read that bit of documentation. Second, it makes “operators” a first-class idea. Git has operators, but this goes a fair bit further:
- It includes - for the parent and + for a child, and these stack and compose, so writing @-+-+ is the same as @ as long as the history is linear. (That is an important distinction!)
- It supports union |, intersection &, and difference ~ operators.
- A leading :: means “ancestors”. A trailing :: means “descendants”. Using :: between commits gives a view of the directed acyclic graph range between two commits. Notably, <id1>::<id2> is just <id1>:: & ::<id2>.
- There is also a .. operator, which also composes appropriately (and, smartly, is the same as .. in Git when used between two commits, <id1>..<id2>). The trailing version, <id>.., is interesting: it is “revisions that are not ancestors of <id>”. Likewise, the leading version ..<id> is all revisions which are ancestors of <id>.
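A few concrete invocations, to give the flavor (ordinary jj log queries; the branch name is hypothetical):

$ jj log -r '::@'        # every ancestor of the working copy
$ jj log -r '@-'         # just the parent of the working copy
$ jj log -r 'main..@'    # what @ has that main does not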
Now, I used <id> here, but throughout these actually operate on revsets, so you could use them with any revset. For example, ..tags() will give you the ancestors of all tags. This strikes me as extremely interesting: I think it will dodge a lot of pain in dealing with Git histories, because it lets you ask questions about the history in a compositional way using normal set logic. To make that concrete: back in October, Jujutsu contributor @aseipp pointed out how easy it is to use this to get a log which excludes gh-pages. (Anyone who has worked on a repo with a gh-pages branch knows how annoying it is to have it cluttering up your view of the rest of your Git history!) First, you define an alias for the revset that only includes the gh-pages branch: 'gh-pages' = 'remote_branches(exact:"gh-pages")'. Then you can exclude it from other queries with the ~ negation operator: jj log -r "all() ~ ancestors(gh-pages)" would give you a log view for every revision with all() and then exclude every ancestor of the gh-pages branch.
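Spelled out as configuration plus usage — a sketch, assuming the alias lives under [revset-aliases] in your jj config, which is where such aliases go:

[revset-aliases]
'gh-pages' = 'remote_branches(exact:"gh-pages")'

$ jj log -r 'all() ~ ancestors(gh-pages)'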
Jujutsu also provides a really capable templating system, which uses “a functional language to customize output of commands”. That functional language is built on top of the functional language the whole tool uses for describing revisions (described in brief above!), so you can use the same kinds of operators in templates for output as you do for navigating and manipulating the repository. The template format is still evolving, but you can use it to customize the output today… while being aware that you may have to update it in the future. Keywords include things like description and change_id, and these can be customized in Jujutsu’s config. For example, I made this tweak to mine, overriding the built-in format_short_id alias:
[template-aliases]
'format_short_id(id)' = 'id.shortest()'
This gives me super short names for changes and commits, which makes for a much nicer experience when reading and working with both in the log output: Jujutsu will give me the shortest unique identifier for a given change or commit, which I can then use with commands like jj new. Additionally, there are a number of built-in templates. For example, to see the equivalent of Git’s log --pretty you can use Jujutsu’s log -T builtin_log_detailed (-T for “template”; you can also use the long form --template). You can define your own templates in a [templates] section, or add your own [template-aliases] block, using the template language and any combination of further functions you define yourself.
That’s all well and good, but even with reading the docs for the revset language and the templating language, it still took me a bit to actually quite make sense out of the default output, much less to get a handle on how to customize the output. Right now, the docs have a bit of a flavor of explanations for people who already have a pretty good handle on version control systems, and the description of what you get from jj log is a good example of that. As the project gains momentum, it will need other kinds of more-introductory material, but the current status is totally fair and reasonable for the stage the project is at. And, to be fair to Jujutsu, both the revset language and the templating language are far easier to understand and work with than the corresponding Git materials.
Returning to the difference between the default output from jj log and git log, the key is that unless you pass -r, Jujutsu uses the ui.default-revset selector to provide a much more informative view than git log does. Again, the default is @ | ancestors(immutable_heads().., 2) | heads(immutable_heads()). Walking through that:
- The @ operator selects the current head revision.
- The | union operator says “or this other revset”, so this will show @ itself and the result of the other two queries.
- The immutable_heads() function gets the list of head revisions which are, well, immutable. By default, this is trunk() | tags(), so whatever the trunk branch is (most commonly main or master) and also any tags in the repository.
- Appending .. to the first immutable_heads() call selects revisions which are not ancestors of those immutable heads. This is basically asking for branches which are not the trunk and which do not end at a tag.
- ancestors(immutable_heads().., 2) requests the ancestors of those branches, but only two deep.
- Finally, heads() gets the tips of all branches which appear in the revset passed to it: a head is a commit with no children. Thus, heads(immutable_heads()) gets just the branch tips for the list of revisions computed by immutable_heads().5

When you put those all together, your log view will always show your current head change, all the open branches which have not been merged into your trunk branch, and whatever you have configured to be immutable — out of the box, trunk and all tags. That is vastly more informative than git log’s default output, even if it is a bit surprising the first time you see it. Nor is it particularly possible to get that in a single git log command. By contrast, getting the equivalent of git log is trivial.
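And since that default is just the ui.default-revset setting, you can swap in your own. For instance (purely illustrative), this override makes the default log view show all ancestors of the working copy — the git log equivalent described below:

[ui]
default-revset = '..@'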
To show the full history for a given change, you can use the :: ancestors operator. Since jj log always gives you the identifier for a revision, you can follow it up with jj log --revision ::<change id>, or jj log -r ::<change id> for short. For example, in one repo where I am trying this, the most recent commit identifier starts with mwoq (Jujutsu helpfully highlights the segment of the change identifier you need to use), so I could write jj log -r ::mwoq, and this will show all the ancestors of mwoq, or jj log -r ..mwoq to get all the ancestors of the commit except the root. (The root is uninteresting.) Net, the equivalent command for “show me all the history for this commit” is:
$ jj log -r ..@
Revsets are very powerful, very flexible, and yet much easier to use than Git’s operators. That is in part because of the language used to express them. It is also in part because revsets build on a fundamentally different view of the world than Git commits: Jujutsu’s idea of changes.
In Git, as in Subversion and Mercurial and other version control systems before them, when you finish with a change, you commit it. In Jujutsu, there is no first-class notion of “committing” code. This took me a fair bit to wrap my head around! Instead, Jujutsu has two discrete operations: describe and new. jj describe lets you provide a descriptive message for any change. jj new starts a new change. You can think of git commit --message "something I did" as being equivalent to jj describe --message "something I did" && jj new. This falls out of the fact that jj describe and jj new are orthogonal, and much more capable than git commit as a result.
The describe command works on any commit. It defaults to the commit that is the current working copy. If you want to rewrite a message earlier in your commit history, though, that is not a special operation like it is in Git, where you have to perform an interactive rebase to do it. You just call jj describe with a --revision (or -r for short, as everywhere in Jujutsu) argument. For example:
# long version
$ jj describe --revision abcd --message "An updated message."
# short version
$ jj describe -r abcd -m "An updated message."
That’s it. How you choose to integrate that into your workflow is a matter for you and your team to decide, of course. Jujutsu understands that some branches should not have their history rewritten this way, though, and lets you specify what the “immutable heads” revset should be accordingly. This actually makes it safer than Git, where the tool itself does not understand that kind of immutability and we rely on forges to protect certain branches from being targeted by a force push.
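The knob for that is, at the time of writing, just another revset alias override in your config. A sketch — trunk() | tags() is the default the docs describe, and the extra release branch here is purely hypothetical:

[revset-aliases]
'immutable_heads()' = 'trunk() | tags() | remote_branches(exact:"release")'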
The new command is the core of creating any new change, and it does not require there to be only a single parent. You can create a new change with as many parents as is appropriate! Is a given change logically the child of four other changes, with identifiers a, b, c, and d? jj new a b c d. That’s it. One neat consequence that falls out of this: a merge in Jujutsu is just jj new with the requirement that it have at least two parents. (“At least two parents” because having more than two parents for a merge is not a special case as it is with Git’s “octopus” merges.) Likewise, you do not need a commit command, because you can describe a given change at any time with describe, and you can create a new change at any time with new. If you already know the next thing you are going to do, you can even describe it by passing -m/--message to new when creating the new change!6
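For instance, with a, b, c, and d standing in for real change IDs, and a placeholder message:
# a four-parent change, which is also a merge
$ jj new a b c d
# start the next change and describe it up front
$ jj new -m "wire up the new API"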
Most of the time with Git, I am doing one of two things when I go to commit a change:
- Committing everything in my working copy: git commit --all7 is an extremely common operation for me.
- Committing just part of it — not by using -p to do it via that atrocious interface, but instead opening Fork and doing it with Fork’s staging UI.
In the first case, Jujutsu’s choice to skip Git’s “index” looks like a very good one. In the second case, I was initially skeptical. Once I got the hang of working this way, though, I started to come around. My workflow with Fork looks an awful lot like the workflow that Jujutsu pushes you toward with actually using a diff tool. With Jujutsu, though, any diff tool can work. Want to use Vim? Go for it.
What is more, Jujutsu’s approach to the working copy results in a really interesting shift. In every version control system I have worked with previously (including CVS, PVCS, SVN), the workflow has been some variation on: make a bunch of changes, then create a commit and write a message to describe it.
With both Mercurial and Git, it also became possible to rewrite history in various ways. I use Git’s rebase --interactive command extensively when working on large sets of changes. (I did the same with Mercurial’s history rewriting when I was using it a decade ago.) That expanded the list of common operations to include two more: amending an existing commit’s message or contents, and rewriting or reordering a whole sequence of commits.
Jujutsu flips all of that on its head. A change, not a commit, is the fundamental element of the mental and working model. That means that you can describe a change that is still “in progress” as it were. I discovered this while working on a little example code for a blog post I plan to publish later this month: you can describe the change you are working on and then keep working on it. The act of describing the change is distinct from the act of “committing” and thus starting a new change. This falls out naturally from the fact that the working copy state is something you can operate on directly: akin to Git’s index, but without its many pitfalls. (This simplification affects a lot of things, as I will discuss further below; but it is especially important for new learners. Getting my head around the index was one of those things I found quite challenging initially with Git a decade ago.)
When you are ready to start a new change, you use either jj commit to “finalize” this commit with a message, or jj new to “Create a new, empty change and edit it in the working copy”. Implied: jj commit is just a convenience for jj describe followed by jj new. And a bonus: this means that rewording a message earlier in history does not involve some kind of rebase operation; you just jj describe --revision <target>.
What is more, jj new lets you create a new commit anywhere in the history of your project, trivially:
-A, --insert-after
Insert the new change between the target commit(s) and their children
[aliases: after]
-B, --insert-before
Insert the new change between the target commit(s) and their parents
[aliases: before]
You can do this using interactive rebasing with Git (or with history rewriting with Mercurial, though I am afraid my hg is rusty enough that I do not remember the details). What you cannot do in Git specifically is say “Start a new change at point x” unless you are in the middle of a rebase operation, which makes it inherently somewhat fragile. To be extra clear: Git allows you to check out any point in your commit graph and make a new change there, but it creates a branch at that point, and none of the descendants of that original point in your commit graph will come along without explicitly rebasing. Moreover, even once you do an explicit rebase and cherry-pick in the commit, the original commit is still hanging out, so you likely need to delete that branch. With jj new -A <some change ID>, you just insert the change directly into the history. Jujutsu will rebase every child in the history, including any merges if necessary; it “just works”. That does not guarantee you will not have conflicts, of course, but Jujutsu also handles conflicts better — way better — than Git. More on that below.
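A sketch of what that looks like in practice; the change ID and message here are purely illustrative:
# insert a brand-new change between qpvuntsm and its children;
# all descendants get rebased automatically
$ jj new -A qpvuntsm -m "refactor: extract the parser module"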
I never use git reflog so much as when doing interactive rebases. Once I got the hang of Jujutsu’s ability to jj new anywhere, it basically obviates most of the places I have needed Git’s interactive rebase mode, especially when combined with Jujutsu’s aforementioned support for “first-class conflicts”. There is still an escape hatch for mistakes, though: jj op log shows all the operations you have performed on the repo — and frankly, is much more useful and powerful than git reflog, because it logs all the operations, including whenever Jujutsu updates its view of your working copy via jj status, or when it fetches new revisions from a remote.
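As a sketch of that escape hatch (the operation ID is whatever jj op log shows you):
$ jj op log
$ jj op restore <operation ID>   # roll the whole repo back to that point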
Additionally, Jujutsu allows you to see how any change has evolved over time. This handily solves multiple pain points in Git. For example, if you have made changes in your working copy, and would like to split it into multiple changes, Git only has a binary state to let you tease those apart: staged, or not. As a result, that kind of operation ranges in difficulty from merely painful to outright impossible. With its obslog command,8 Jujutsu allows you to see how a change has evolved over time. Since the working copy is just one more kind of “change”, you can very easily retrieve earlier state — any time you did a jj status check, or any other command which snapshotted the state of the repository (which is most of them). That applies equally to earlier changes. If you just rebased, for example, and realize you moved some changes to code into the wrong revision, you can use the combination of obslog and new and restore (or move) to pull it back apart into the desired sequence of changes. (This one is hard to describe, so I may put up a video of it later!)
This also leads to another significant difference with Git: around breaking up your current set of changes on disk. As I noted above, Jujutsu treats the working copy itself as a commit instead of having an “index” like Git. Git really only lets you break apart a set of changes with the index, using git add --patch. Jujutsu instead has a split command, which launches a diff editor and lets you select what you want to incorporate — rather like git add --patch does. As with all of its commands, though, jj split works exactly the same way on any commit; the working copy commit gets it “for free”.
Philosophically, I really like this. Practically, though, it is a slightly bumpier experience for me than the Git approach at the moment. Recall that I do not use git add --patch directly. Instead, I always stage changes into the Git index using a graphical tool like Fork. That workflow is slightly nicer than editing a diff — at least, as Jujutsu does it today. In Fork (and similar tools), you start with no changes and add what you want to the change set you want. By contrast, jj split launches a diff view with all the changes from a given commit present: splitting the commit involves removing changes from the right side of the diff so that it has only the changes you want to be present in the first of two new commits; whatever is not present in the final version of the right side when you close your diff editor ends up in the second commit.
If this sounds a little complicated, that is because it is — at least for today. That qualifier is important, because a lot of this is down to tooling, and we have about as much dedicated tooling for Jujutsu as Git had in 2007, which is to say: not much. Qualifier notwithstanding, and philosophical elegance notwithstanding, the complexity is still real here in early 2024. There are two big downsides as things stand. First, I find it comes with more cognitive load. It requires thinking in terms of negation rather than addition, and the “second commit” becomes less and less visible over time as you remove it from the first commit. Second, it requires you to repeat the operation when breaking up something into more than two commits. I semi-regularly take a single bucket of changes on disk and chunk it up into many more than just two commits, though! That significantly multiplies the cognitive overhead.
Now, since I started working with Jujutsu, the team has switched the default view for working with these kinds of diffs to scm-diff-editor, a TUI which has a first-class notion of this kind of workflow.9 That TUI works reasonably well, but is much less pleasant to use than something like the nice GUIs of Fork or Tower.
The net is: when I want to break apart changes, at least for the moment I find myself quite tempted to go back to Fork and Git’s index. I do not think this problem is intractable, and I think the idea of jj split is right. It just — “just”! — needs some careful design work. Preferably, the split command would make it straightforward to generate an arbitrary number of commits from one initial commit, and it would allow progressive creation of each commit from a “vs. the previous commit” baseline. This is the upside of the index in Git: it does actually reflect the reality that there are three separate “buckets” in view when splitting apart a change: the baseline before all changes, the set of all the changes, and the set you want to include in the commit. Existing diff tools do not really handle this — other than the integrated index-aware diff tools in Git clients, which then have their own oddities when interacting with Jujutsu, since it ignores the index.
Another huge feature of Jujutsu is its support for first-class conflicts. Instead of a conflict resulting in a nightmare that has to be resolved before you can move on, Jujutsu can incorporate both the merge and its resolution (whether manual or automatic) directly into commit history. Just having the conflicts in history does not seem that weird. “Okay, you committed the text conflict markers from Git, neat.” But: having the conflict and its resolution in history, especially when Jujutsu figured out how to do that resolution for you, as part of a rebase operation? That is just plain wild.
A while back, I was working on a change to a library I maintain10 and decided to flip the order in which I landed two changes to package.json. Unfortunately, those changes were adjacent to each other in the file, so flipping the order they would land in seemed likely to be painfully difficult. It was actually trivial. First of all, the flow itself was great: instead of launching an editor for interactive rebase, I just explicitly told Jujutsu to do the rebases: jj rebase --revision <source> --destination <target>. I did that for each of the items I wanted to reorder and I was done. (I could also have rebased a whole series of commits; I just did not need to in this case.) Literally, that was it: Jujutsu agreed with me that JSON is a terrible format for changes like this, committed a merge conflict for the first rebase, resolved that conflict as part of the next rebase command, and simply carried on.
At a mechanical level, Jujutsu will add conflict markers to a file, not unlike those Git adds in merge conflicts. However, unlike Git, those are not just markers in a file. They are part of a system which understands what conflicts are semantically, and therefore also what resolving a conflict is semantically. This not only produces nice automatic outcomes like the one I described with my library above; it also means that you have more options for how to accomplish a resolution, and for how to treat a conflict. Git trains you to see a conflict between two branches as a problem. It requires you to solve that problem before moving on. Jujutsu allows you to treat a conflict as a problem which must be resolved, but it does not require it. Resolving conflicts in merges in Git is often quite messy. It is even worse when rebasing. I have spent an incredible amount of time attempting merges only to give up and git reset --hard <before the merge>, and possibly even more time trying to resolve a conflict in a rebase only to bail with git rebase --abort. Jujutsu allows you to create a merge, leave the conflict in place, and then introduce a resolution in the next commit, telling the whole story with your change history.
Likewise with a rebase: depending on whether you require all your intermediate revisions to build or would rather show a history including conflicts, you could choose to rebase, leave all the intermediate changes conflicted, and resolve the conflicts only at the end.
Conflicts are inevitable when you have enough people working on a repository. Honestly: conflicts happen when I am working alone in a repository, as suggested by my anecdote above. Having this ability to keep working with the repository even in a conflicted state, as well as to resolve the conflicts in a more interactive and iterative way, is something I now find difficult to live without.
There are a few other niceties which fall out of Jujutsu’s distinction between changes and commits, especially when combined with first-class conflicts.
First up, jj squash takes all the changes in a given commit and, well, squashes them into the parent of that commit.11 Given a working copy with a bunch of changes, you can move them straight into the parent by just typing jj squash. If you want to squash some change besides the one you are currently editing, you just pass the -r/--revision flag, as with most Jujutsu commands: jj squash -r abc will squash the change identified by abc into its parent. You can also use the --interactive (-i for short) argument to move just a part of a change into its parent. Using that flag will pop up your configured diff editor just like jj split will and allow you to select which items you want to move into the parent and which you want to keep separate. Or, for an even faster option, if you have specific files to move while leaving others alone, and you do not need to handle subsections of those files, you can pass them as the final arguments to the command, like jj squash ./path/a ./path/c.
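Collected in one place, with abc and the paths as placeholders:
$ jj squash                      # squash the working copy into its parent
$ jj squash -r abc               # squash change abc into its parent
$ jj squash -i                   # pick hunks interactively in your diff editor
$ jj squash ./path/a ./path/c    # move only these files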
As it turns out, this ability to move part of one change into a different change is a really useful thing to be able to do in general. I find it particularly handy when building up a set of changes where I want each one to be coherent — say, for the sake of having a commit history which is easy for others to review. You could do that by doing some combination of jj split and jj new --after <some change ID> and then doing jj rebase to move around the changes… but as usual, Jujutsu has a better way. The squash command is actually just a shortcut for Jujutsu’s move command with some arguments filled in. The move command has --from and --to arguments which let you specify which revisions you want to move changes between. When you run jj squash with no other arguments, that is the equivalent of jj move --from @ --to @-. When you run jj squash -r abc, that is the equivalent of jj move --from abc --to abc-. Since it takes those arguments explicitly, though, move lets you move changes around between any changes. They do not need to be anywhere near each other in history.
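Spelled out, with abc and xyz standing in for arbitrary change IDs:
$ jj move --from @ --to @-       # what plain jj squash does
$ jj move --from abc --to abc-   # what jj squash -r abc does
$ jj move --from abc --to xyz    # but any two changes will do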
This eliminates another entire category of places I have historically had to reach for git rebase --interactive. While there are still a few times where I think Jujutsu could use something akin to Git’s interactive rebase mode, they are legitimately few, and mostly to do with wanting to be able to do batch reordering of commits. To be fair, though, I only want to do that perhaps a few times a year.
Branches are another of the very significant differences between Jujutsu and Git — another place where Jujutsu acts a bit more like Mercurial, in fact. In Git, everything happens on named branches. You can operate on anonymous branches in Git, but it will yell at you constantly about being on a “detached HEAD”. Jujutsu inverts this. The normal working mode in Jujutsu is just to make a series of changes, which then naturally form “branches” in the change graph, but which do not require a name out of the gate. You can give a branch a name any time, using jj branch create. That name is just a pointer to the change you pointed it at, though; it does not automatically “follow” you as you do jj new to create new changes. (Readers familiar with Mercurial may recognize that this is very similar to its bookmarks, though without the notion of “active” and “inactive” bookmarks.)
To update what a branch name points to, you use the branch set command. To completely get rid of a branch, including removing it from any remotes you have pushed the branch to, you use the branch delete command. Handily, if you want to forget all your local branch operations (though not the changes they apply to), you can use the branch forget command. That can come in useful when your local copy of a branch has diverged from what is on the remote, and you do not want to reconcile the changes but just want to get back to whatever is on the remote for that branch. No need for git reset --hard origin/<branch name>; just jj branch forget <branch name>, and the next time you pull from the remote, you will get back its view of the branch!
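The full lifecycle, sketched with a placeholder branch name:
$ jj branch create my-feature    # name the current change
$ jj branch set my-feature       # point the name at the current change again later
$ jj branch delete my-feature    # remove it locally and, on push, from the remote
$ jj branch forget my-feature    # drop local branch state; the next fetch restores the remote's view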
Jujutsu’s defaulting to anonymous branches took me a bit to get used to, after a decade of doing all of my work in Git and of necessity having to do my work on named branches. As with so many things about Jujutsu, though, I have very much come to appreciate this default. In particular, I find this approach makes really good sense for all the steps where I am not yet sharing a set of changes with others. Even once I am sharing the changes with others, Git’s requirement of a branch name can start to feel kind of silly at times. Especially for the case where I am making some small and self-contained change, the name of a given branch is often just some short, snake-case-ified version of the commit message. The default log template shows me the current set of branches, and their commit messages are usually sufficiently informative that I do not need anything else.
However, there are some downsides to this approach in practice, at least given today’s ecosystem. First, the lack of a “current branch” makes for some extra friction when working with tools like GitHub, GitLab, Gitea, and so on. The GitHub model (which other tools have copied) treats branches as the basis for all work. GitHub displays warning messages about commits which are not on a branch, and will not allow you to create a pull request from an anonymous branch. In many ways, this is simply because Git itself treats branches as special and important. GitHub is just following Git’s example of loud warnings about being on a “detached HEAD” commit, after all.
What this means in practice, though, is that there is an extra operation required any time you want to push your changes to GitHub or a similar forge. With Git, you simply git push after making your changes. (More on Git interop below.) Since Git keeps the current branch pointing at the current HEAD, Git aliases git push with no arguments to git push <configured remote for current branch> <current branch>. Jujutsu does not do this, and given how its branching model works today, cannot do this, because named branches do not “follow” your operations. Instead, you must first explicitly set the branch to the commit you want to push. In the most common case, where you are pushing your latest set of changes, that is just jj branch set <branch name>; it takes the current change automatically. Only then can you run jj git push to actually get an update. This is only a paper cut, but it is a paper cut. It is one extra command every single time you go to push a change to share with others, or even just to get it off of your machine.12 That might not seem like a lot, but it adds up.
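Concretely, the one-extra-command dance looks like this (the branch name is a placeholder):
$ jj branch set my-feature   # point the branch at the current change
$ jj git push                # then push as usual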
There is a real tension in the design space here, though. On the one hand, the main time I use branches in Jujutsu at this point is for pushing to a Git forge like GitHub. I rarely feel the need for them for just working on a set of changes, where jj log and jj new <some revision> give me everything I need. In that sense, it seems like having the branch “follow along” with my work would be natural: if I have gone to the trouble of creating a name for a branch and pushing it to some remote, then it is very likely I want to keep it up to date as I add changes to the branch I named. On the other hand, there is a big upside to not doing that automatically: pushing changes becomes an intentional act. I cannot count the number of times I have been working on what is essentially just an experiment in a Git repo, forgotten to change from the foo-feature branch to a new foo-feature-experiment branch, and then done a git push. Especially if I am collaborating with others on foo-feature, now I have to force push the branch back to its previous state to reset things, and let others know to wait for that, etc. That never happens with the Jujutsu model. Since updating a named branch is always an intentional act, you can experiment to your heart’s content, and know you will never accidentally push changes to a branch that way. I go back and forth: maybe the little bit of extra friction when you do want to push a branch is worth it for all the times you do not have to consciously move a branch backwards to avoid pushing changes you are not yet ready to share.
(As you might expect, the default of anonymous branches has some knock-on effects for how it interacts with Git tooling in general; I say more on this below.)
Jujutsu also has a handy little feature for when you have done a bunch of work on an anonymous branch and are ready to push it to a Git forge. The jj git push subcommand takes an optional --change/-c flag, which creates a branch based on your current change ID. It works really well when you only have a single change you are going to push and then continually work on, or any time you are content that your current change will remain the tip of the branch. It works a little less well when you are going to add further changes later, because you then need to actually use the generated branch name with jj branch set push-<change ID> -r <revision>.
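A sketch of that flow; note that the exact shape of the generated branch name is configurable and may differ from what I describe above:
$ jj git push --change @   # create and push a branch named for the current change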
Taking a step back, though, working with branches in Jujutsu is great overall. The branch command is a particularly good lens for seeing what a well-designed CLI is like and how it can make your work easier. Notice that the various commands there are all of the form jj branch <do something>. There are a handful of other branch subcommands not mentioned so far: list, rename, track, and untrack. Git has slowly improved its design here over the past few years, but still lacks the straightforward coherence of Jujutsu’s design. For one thing, all of these are subcommands in Jujutsu, not like Git’s mishmash of flags which can be combined in some cases but not others, and have different meanings depending on where they are deployed. For another, as with the rest of Jujutsu’s CLI structure, they use the same options to mean the same things. If you want to list all the branches which point to a given set of revisions, you use the -r/--revisions flag, exactly like you do with any other command involving revisions in Jujutsu. In general, Jujutsu has a very strong and careful distinction between commands (including subcommands) and options. Git does not. The track and untrack subcommands are a perfect example. In Jujutsu, you track a remote branch by running a command like jj branch track <branch>@<remote>. The corresponding Git command is git branch --set-upstream-to <remote>/<branch>. But to list and filter branches in Git, you also pass flags, e.g. git branch --all is the equivalent of jj branch list --all. The Git one is shorter, but also notably less coherent; there is no way to build a mental model for it. With Jujutsu, the mental model is obvious and consistent: jj <command> <options> or jj <context> <command> <options>, where <context> is something like branch or workspace or op (for operation).
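Side by side, with placeholder branch names (the point is the shape of the commands, not their length):
# Jujutsu: consistent subcommand and option shapes
$ jj branch track main@origin
$ jj branch list --all
# Git: flags whose meanings shift depending on context
$ git branch --set-upstream-to origin/main
$ git branch --all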
Jujutsu’s native backend exists, and every feature has to work with it, so it will some day be a real feature of the VCS. Today, though, the Git backend is the only one you should use. So much so that if you try to run jj init without passing --git, Jujutsu won’t let you by default:
> jj init
Error: The native backend is disallowed by default.
Hint: Did you mean to pass `--git`?
Set `ui.allow-init-native` to allow initializing a repo with the native backend.
In practice, you are going to be using the Git backend. In practice, I have been using the Git backend for the last seven months, full time, on every one of my personal repositories and all the open source projects I have contributed to. With the sole exception of someone watching me while we pair, no one has noticed, because the Git integration is that solid and robust. This interop means that adoption can be very low friction. Any individual can simply run jj git init --git-repo . in a given Git repository, and start doing their work with Jujutsu instead of Git, and all that work gets translated directly into operations on the Git repository.
Interoperating with Git also means that there is a two-way street between Jujutsu and Git. You can do a bunch of work with jj commands, and then if you hit something you don’t know how to do with Jujutsu yet, you can flip over and do it the way you already know with a git command. When you next run a jj command, like jj status, it will (very quickly!) import the updates from Git and go back about its normal business. The same thing happens when you run commands like jj git fetch to get the latest updates from a Git remote. All the explicit Git interop commands live under a git subcommand: jj git push, jj git fetch, etc. There are a handful of these, including the ability to explicitly ask to synchronize with the Git repository, but the only ones I use on a day to day basis are jj git push and jj git fetch. Notably, there is no jj git pull, because Jujutsu keeps a distinction between getting the latest changes from the server and changing your local copy’s state. I have not missed git pull at all.
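The replacement for git pull is thus two explicit steps, sketched here with a placeholder branch:
$ jj git fetch         # update the repo's view of the remote
$ jj new main@origin   # then choose, explicitly, to build on what arrived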
This clean interop does not mean that Git sees everything Jujutsu sees, though. Initializing a Jujutsu repo adds a .jj directory to your project, which is where it stores its extra metadata. This, for example, is where Jujutsu keeps track of its own representation of changes, including how any given change has evolved, in terms of the underlying revisions. In the case of a Git repository, those revisions just are the Git commits, and although you rarely need to work with or name them directly, they have the same SHAs, so any time you would name a specific Git commit, you can reference it directly as a Jujutsu revision as well. (This is particularly handy when bouncing between jj commands and Git-aware tools which know nothing of Jujutsu’s change identifiers.) The .jj directory also includes the operation log, and in the case of a fresh Jujutsu repo (not one created from an existing Git repository), is where the backing Git repo lives.
This Git integration currently runs on libgit2, so there is effectively no risk of breaking your repo because of a buggy reimplementation of Git’s own behavior.
Unsurprisingly, given the scale of the problem domain, there are still some rough edges and gaps. For example: commit signing with GPG or SSH does not yet work. There is an open PR for the basics of the feature with GPG support, and SSH support will be straightforward to add once the basics land, but landed they have not.13 The list of actual gaps or missing features is getting short, though. When I started using Jujutsu back in July 2023, there was not yet any support for sparse checkouts or for workspaces (analogous to Git worktrees). Both of those landed in the interval, and there is consistent forward motion from both Google and non-Google contributors. In fact, the biggest gap I see as a regular user in Jujutsu itself is the lack of the kinds of capabilities that will hopefully come once work starts in earnest on the native backend.
The real gaps and rough edges at this point are down to the lack of an ecosystem of tools around Jujutsu, and the ways that existing Git tools interact with Jujutsu’s design for Git interop. The lack of tooling is obvious: no one has built the equivalent of Fork or Tower, and there is no native integration in IDEs like IntelliJ or Visual Studio or in editors like VS Code or Vim. Since Jujutsu currently works primarily in terms of Git, you will get some useful feedback. All of those tools expect to be working in terms of Git’s index and not in terms of a Jujutsu-style working copy, though. Moreover, most of them (unsurprisingly!) share Git’s own confusion about why you are working on a detached HEAD nearly all the time. On the upside, viewing the history of a repo generally works well, with the exception that some tools will not show anonymous branches/detached HEADs other than one you have actively checked out. Detached heads also tend to confuse tools like GitHub’s gh; you will often need to do a bit of extra manual argument-passing to get them to work. (gh pr create --web --head <name> has been showing up in my history a lot for exactly this reason.)
Some of Jujutsu’s very nice features also make other parts of working on mainstream Git forges a bit wonky. For example, notice what each of the operations described above — re-describing an earlier change, inserting a new change in the middle of history, squashing, splitting, and moving changes around — has in common: they are all changes to history. If you have pushed a branch to a remote, doing any of these operations with changes on that branch and pushing to a remote again will be a force push. Most mainstream Git forges handle force pushing pretty badly. In particular, GitHub has some support for showing diffs between force pushes, but it is very basic and loses all conversational context. As a result, any workflow which makes heavy use of force pushes will be bumpy. Jujutsu is not to blame for the gaps in those tools, but it certainly does expose them.14 Nor do I blame GitHub for the quirks in interop, though. It is not JujutsuLab after all, and Jujutsu is doing things which do not perfectly map onto the Git model. Since most open source software development happens on forges like GitHub and GitLab, though, these things do regularly come up and cause some friction.
The biggest place I feel this today is in the lack of tools designed to work with Jujutsu around splitting, moving, and otherwise interactively editing changes. Other than @arxanas’ excellent scm-diff-editor, the TUI which Jujutsu bundles for editing diffs on the command line, there are zero good tools for those operations. I mean it when I say scm-diff-editor is excellent, but I also do not love working in a TUI for this kind of thing, so I have cajoled both Kaleidoscope and BBEdit into working to some degree. As I noted when describing how jj split works, though, it is not a particularly good experience. These tools are simply not designed for this workflow. They understand an index, and they do not understand splitting apart changes. Net, we are going to want new tooling which actually understands Jujutsu.
There are opportunities here beyond implementing the same kinds of capabilities that many editors, IDEs, and dedicated VCS viewers provide today for Git. Given a model in which rebasing, merging, and re-describing changes are all normal and easy operations, GUI tools could make all of those much easier. Any number of the Git GUIs have tried, but Git’s underlying model simply makes it clunky. That does not have to be the case with Jujutsu. Likewise, surfacing things like Jujutsu’s operation and change evolution logs should be much easier than surfacing the Git reflog, and provide easier ways to recover lost work or simply to change one’s mind.
Jujutsu has become my version control tool of choice since I picked it up over the summer. The rough edges and gaps I described throughout this write-up notwithstanding, I much prefer it to working with Git directly. I do not hesitate to recommend that you try it out on personal or open source projects. Indeed, I actively recommend it! I have used Jujutsu almost exclusively for the past seven months, and I am not sure what would make me go back to using Git other than Jujutsu being abandoned entirely. Given its apparently-bright future at Google, that seems unlikely.15 Moreover, because using it in existing Git repositories is transparent, there is no inherent reason individual developers or teams cannot use it today. (Your corporate security policy might be a different story.)
Is Jujutsu ready for you to roll out at your Fortune 500 company? Probably not. While it is improving at a steady clip — most of the rough edges I hit in mid-2023 are long since fixed — it is still undergoing breaking changes in design here and there, and there is effectively no material out there about how to use it yet. (This essay exists, in part, as an attempt to change that!) Beyond Jujutsu itself, there is a lot of work to be done to build an ecosystem around it. Most of the remaining rough edges are squarely to do with the lack of understanding from other tools. The project is marching steadily toward a 1.0 release… someday. As for when that might be, there are, as far as I know, no plans: there is still too much to do. Above all, I am very eager to see what a native Jujutsu backend would look like. Today, it is “just” a much better model for working with Git repos. A world where the same level of smarts being applied to the front end goes into the backend too is a world well worth looking forward to.
As alluded to above, I have done my best to make it possible to use Kaleidoscope, my beloved diff-and-merge tool, with Jujutsu. I have had only mixed success. Here is the setup that gives the best results so far:
Add the following to your Jujutsu config (jj config edit --user) to configure Kaleidoscope for the various diff and merge operations:
[ui]
diff-editor = ["ksdiff", "--wait", "$left", "--no-snapshot", "$right", "--no-snapshot"]
merge-editor = ["ksdiff", "--merge", "--output", "$output", "--base", "$base", "--", "$left", "--snapshot", "$right", "--snapshot"]
I will note, however, that I have still not been 100% successful using Kaleidoscope this way. In particular, jj split does not give me the desired results; it often ends up reporting “Nothing changed” when I close Kaleidoscope.
When opening a file diff, you must Option⎇-double-click, not do a normal double-click, so that it will preserve the --no-snapshot behavior. That --no-snapshot argument to ksdiff is what makes the resulting diff editable, which is what Jujutsu needs for its just-edit-a-diff workflow. I have been in touch with the Kaleidoscope folks about this, which is how I even know about this workaround; they are evaluating whether it is possible to make the normal double-click flow preserve the --no-snapshot behavior in this case so you do not have to do the workaround.
Yes, it is written in Rust, and it is pretty darn fast. But Git is written in C, and is also pretty darn fast. There are of course some safety upsides to using Rust here, but Rust is not particularly core to Jujutsu’s “branding”. It was just a fairly obvious choice for a project like this at this point — which is exactly what I have long hoped Rust would become! ↩︎
Pro tip for Mac users: add .DS_Store to your ~/.gitignore_global and live a much less annoyed life — whether using Git or Jujutsu. ↩︎
I did have one odd hiccup along the way due to a bug (already fixed, though not in a released version) in how Jujutsu handles a failure when initializing in a directory. While confusing, the problem was fixed in the next release… and this is what I expected of still-relatively-early software. ↩︎
The plain jj init command is reserved for initializing with the native backend… which is currently turned off. This is absolutely the right call for now, until the native backend is ready, but it is a mild bit of extra friction (and makes the title of this essay a bit amusing until the native backend comes online…). ↩︎
This is not quite the same as Git’s HEAD or as Mercurial’s “tip” — there is only one of either of those, and they are not the same as each other! ↩︎
If you look at the jj help output today, you will notice that Jujutsu has checkout, merge, and commit commands. Each is just an alias for a behavior using new, describe, or both, though:
- checkout is just an alias for new
- commit is just a shortcut for jj describe -m "<some message>" && jj new
- merge is just jj new with an implicit @ as the first argument.
All of these are going to go away in the medium term with both documentation and output from the CLI that teach people to use new instead. ↩︎
Actually it is normally git ci -am "<message>", with -a for “all” (--all) and -m for the message, smashed together to avoid any needless extra typing. ↩︎
The name is from Mercurial’s evolution feature, where it refers to changes which have become obsolescent; thus obslog is the “obsolescent changes log”. I recently suggested to the Jujutsu maintainers that renaming this might be helpful, because it took me six months of daily use to discover this incredibly helpful tool. ↩︎
They also enabled support for a three-pane view in Meld, which allegedly makes it somewhat better. However, Meld is pretty janky on macOS (as GTK apps basically always are), and it has a terrible startup time for reasons that are unclear at this point, which means this was not a great experience in the first place… and Meld crashes on launch on the current version of macOS. ↩︎
Yes, this is what I do for fun on my time off. At least: partially. ↩︎
For people coming from Git, there is also an amend alias, so you can use jj amend instead, but it does the same thing as squash, and in fact the help text for jj amend makes it clear that it just is squash. ↩︎
If that sounds like paranoia, well, you only have to lose everything on your machine once due to someone spilling a whole cup of water on it at a coffee shop to learn to be a bit paranoid about having off-machine backups of everything. I git push all the time. ↩︎
I care about this feature and have some hopes of helping get it across the line myself here in February 2024, but we will see! ↩︎
There are plenty of interesting arguments out there about the GitHub collaboration design, alternatives represented by the Phabricator or Gerrit review models, and so on. This piece is long enough without them! ↩︎
Google is famous for killing products, but less so for developer tools. ↩︎
Thanks: Waleed Khan (@arxanas), Joy Reynolds (@joyously), and Isabella Basso (@isinyaaa) all took time to read and comment on earlier drafts of this mammoth essay, and it is substantially better for their feedback!
A few weeks ago, Jaimie and I were slated to sing and play piano respectively for our church music team. Our music director has been asking me for a while to consider writing new music for the sung parts of the liturgy. I had the time, and the inclination, so I sat down the first few days of January and worked one out: a Sanctus intended to be sung especially during Epiphany.
We recorded one of our rehearsals that Sunday morning, and after a tiny bit of cleanup, I am sharing it with you! Here it is, with Jaimie singing and me playing on a digital piano.1
If you like it, please use it! The music is freely available under a license which lets you do whatever you like with it (including remixing it, recording it, even selling it) as long as you let others do exactly the same with anything you do with it. This should make it easy for churches to use (no licensing deal required) and to share with others.
Here’s what I shared with our congregation about the piece:
As I was composing the new Sanctus for Epiphany, I was thinking about three things. The first was the normal “tools” in a composer’s toolbox that one uses for setting a text like this well: making sure that any mentions of God are musically prominent, and making sure that texts like “Hosanna in the highest” don’t go “downward” tonally. The second was keeping the lines singable for a congregation. The final consideration was how to fit those things together with Epiphany! God has appeared! Epiphany reminds us — from Jesus’ circumcision, through the visit of the Magi, to his baptism — of how God appeared. In Jesus, God is truly with us, among us, was truly one of us — and this was not only for Jews, but for Gentiles like me. This is a cause for rejoicing: the holiness of God drawn near, for all of us. Soli deo gloria!
I have never written a piece like this before. It was a fun new challenge musically to write something with all my composition training in mind, but in a style and mode that suits our congregation. It has been a real joy to share with our church. Getting to stand with the congregation and sing it as someone else played it yesterday (the second Sunday in Epiphany) was really special. Hopefully I will be writing more music like this in the months and years ahead!
Once our piano is tuned, I hope to make a somewhat better recording than this, with none of the background chatter, a cleaner take of Jaimie’s voice, and the glorious sound of a real piano. ↩︎
I really only have one problem with my Kobo Aura ONE. It is fully seven years old1 and works extremely well — in fact, it works better now than when I got it, apart from the very slight (honestly, unnoticeable) decrease in battery life. It has exactly zero problems. That’s the problem.
You see, I would really like a Kobo Sage. The physical page-turn buttons seem attractive, and since we bought our daughters each a Kobo Clara 2E for their Christmas present this year, I can see that the screen technology has gotten a bit better over the years. A faster and more responsive screen would be nice. I might even take advantage of the stylus integration. All in all, it looks like a great product and a really solid upgrade.
But I have a device I bought seven years ago which is going strong, has no issues, and continues to be one of my favorite pieces of technology to use day to day. I really cannot justify buying something new when there is absolutely nothing wrong and indeed many things very right with the tool I am currently using.
Drat — consumerism foiled by a company doing really good work!
I know where I will shop when, eventually, something goes wrong with the Aura ONE, though.
I opened the box on January 17, 2017! ↩︎
I am excited to announce that I will be giving not one but two talks at LambdaConf 2024! These two talks represent some of the most important parts of my thinking and work over the past several years. They are also profoundly different from one another: one sweepingly philosophical, the other deep in the weeds of a specific technical challenge. They have in common, though, a concern for the human dynamics of software development — the social and cultural realities of the field. I look forward to sharing them with you!
When I first read Peter Naur’s Programming as Theory-Building, it summed up beautifully one of the deep tensions I have seen in the industry throughout my career so far: between how “management” wants software development to work and how it actually does work. When I read James C. Scott’s increasingly-influential book Seeing Like a State, I immediately started mulling on applications to software: the same kinds of limits that modernist schemes face in trying to bend the world into a legible, “rationalized” shape apply to this discipline in particular. This talk about the intersection of those two themes has been bubbling ever since.
Engineering is building systems which both reduce mistakes and safely absorb the mistakes which do occur. In software engineering specifically, we should apply test-driven development (TDD) and adopt domain-driven design (DDD), use type systems to make illegal states unrepresentable, model what kinds of effects are legal in languages like Unison, and formally model and even prove how parts of our systems work: all tools for more effective “theory-building”, to borrow Peter Naur’s term.
But we also have two serious problems: (1) the limits of translating our thoughts into code; and (2) the limits of translating any human activity into software systems. Many “schemes to improve the human condition have failed”, as James C. Scott observes. We should learn from those failures.
We should not give up on software — or on software engineering! But great engineers know not only when and how to apply their tools, but also the limits of their tools and their whole trade.
The second talk pulls together many of the threads of my work at LinkedIn on versioning constraints for TypeScript (including the SemVer for TS Types spec I developed) with more general considerations about the state of the art in this problem space.
Versioning sits at the intersection of contract and communications: what our languages and tools can enforce, and what we want to tell the people using our software.
Semantic Versioning is one answer to this: a sociotechnical contract which lets us define breaking changes. Because it is social as well as technical, though, it is ambiguous. Members of the Rust and TypeScript communities offer one kind of answer to this challenge: specification and tooling. Elm has offered another: baking it into a language-aware package manager. Unison lets evolution and versioning coexist. One group of researchers even baked type evolution into a functional programming language. Other languages and ecosystems have just thrown up their hands in the face of the inevitable edge cases and failure modes. We can do better!
How can new languages include versioning semantics in their design constraints? What tooling should we build for existing ecosystems? Above all, what should you do in your own projects?
A bit of context: For many years now, I have made it my habit to write up one of these summaries. In this case, I have tried to make it a bit more digestible by breaking it into smaller chunks. You can find them all here.
This entry in my 2023 in review is a smörgåsbord, covering everything I have not covered in the other parts of the series: music, photography, health and fitness, and church.
The major public bit of musical work I did this year was publishing a recording of “Fanfare for a New Era of American Spaceflight”. I hoped originally to find a label to publish that, and it might have been possible, but I was so overwhelmed by the difficult start to the year at work that I just wanted to get the thing out into the world. It has gotten only a few dozen plays all year, but then I did not do the things which would have helped, like contacting people to get it on playlists in Spotify. Lessons learned for next time! I am enormously proud of it, though. It is a good little fanfare.
(In case you missed it: you can listen to it on the streaming service of your choice, or watch a video of the orchestra playing it!)
I did a lot of musical work beyond that in 2023. It just is not public yet, and I expect most of what I did this year will not be until at least 2025, perhaps even later. When you are working on what I will continue to describe publicly only as “a large-scale symphonic work”, emphasis on “large-scale”, it takes a while to get it across the finish line. My goal in this case is to finish it by the time I am 40, some 3½ years from now, and I started it around the time I turned 34, already 2½ years ago. Working on this project was one of several major side project emphases of my sabbatical. Courtesy of the extra time and focus, I was able to get through a major milestone on the work, and I estimate I am probably about halfway through the composition in terms of the total duration of the finished work, and hopefully somewhat more than that in terms of the time it will take to write it.
I also wrote a much smaller piece a few weeks ago, a little chamber-scale work as a bit of a palate cleanser after getting across that big milestone in the symphonic work. I will have more to share on that front in the next month or two, I think!
Finally, I am on deck to write a congregation-friendly Sanctus for our church, which I need to do… tomorrow. I am playing piano for church on January 7, the first day of Epiphany, and we like to use a different Sanctus for each season in the church calendar. The ones we were looking at using were not particularly singable, and so I volunteered to write one. (Did our music director just troll me into doing this by picking music that would annoy me? No, that is not remotely her style. Did I troll myself into doing it by way of being a complete snob about text setting? Maybe.) I will share that once it is done, perhaps even recording it nicely with Jaimie singing and self-publishing it. These sorts of things will likely crop up more and more going forward. While I do not expect church music ever to be my primary musical context, it is good to put my abilities at the service of the church.
This was a slow-but-steady kind of year for photography. I kept sharing regularly to Glass. I caught up on a large chunk of my backlog of unedited photos from years past. I also kept working on my editing skills, from learning how to use masks to building a good-enough preset that I can use for the vast majority of the photos I take of family events. That was good learning and saves me a ton of time, given I usually come out of an evening like our extended family Christmas get-together with at least 150 photos.
On the other hand, I do not feel like I particularly improved in my skill at taking photographs this year. When I look back through the year, there are a fair number of photos I like, some of them a lot. There are none, though, that I could not have taken a year or two ago. I am considering ways to improve this in 2024, including the venerable “take and share on a schedule” plan. The main thing, though, is that I did improve on at least one axis this year.
Despite experiencing some excruciating back issues in the spring (I know now what the phrase “threw out my back” means from direct experience) and spraining my ankle on a trail in Sequoia National Park in July, I had the best year I have had since 2015. It felt even better by way of contrast with 2022, which I described as “terrible-against-my-baseline”. That goes double given the injuries. I spent some time at the start of the year studying up more on running and cycling training, leading to my adopting a new set of training plans, and it really paid off.
I set out to run a sub-1:30:00 half marathon again. I did so twice, with a 1:28:36 (6:45/mile pace) in the Colfax Half in May and then a 1:26:41 (6:37/mile pace) at the Colorado Springs Half in September. The latter was almost 1,000 feet higher than the former, so running it 2 minutes faster felt particularly satisfying. Adjusted for altitude, that is likely my best-ever performance; I have run faster only twice before, and then at sea level.
Both races were extra fun for having friends at them. I got three of my best friends — none of whom had ever run a half marathon before! — to do Colfax in the spring, and one of them to do Colorado Springs with me in the fall. I have never before had a “team” to train alongside, even virtually, and it was great. Encouraging each other along the way, through the various kinds of injuries that start hitting you in your mid-30s as we all are now, helped us all. Despite being hours apart across the Colorado Front Range, we even managed to run together a couple times this year. I definitely hope to repeat parts of this in 2024.
I also set out to do both of the 40-mile rides at this year’s Courage Classic, which I did, and I felt much stronger and more capable than when I last did the Courage Classic back in 2018. Biking made up much more of my training this year than any previous year, in part because of those injuries and in part because I was able to pick up a smart trainer at the start of the year. When I could not run for the sake of my sprained ankle, for example, I could still put an hour on the bike. That made all the difference when it came to being able to turn in a good half marathon time in September despite not being able to run for half of July and half of August.
Strength training remains a mixed bag for me. I simply do not like it. It is a useful complement to running and cycling and makes me better at both. I can acknowledge that in the abstract. Knowing I need more strength training in the mix, I also acquired some weight training equipment mid-summer. I have used it some, but not as much as I would like. I was more consistent about regular push-up work; I managed to maintain that habit most of the year despite the setbacks from my pulled back in the spring. Strength training is definitely my biggest area to improve fitness-wise in 2024, though.
On a happier note, this was the first year I managed to be where I wanted weight-wise since 2015. I have been at healthy and reasonable weights since 2010, for which I am grateful. The last 7 years, though, I have been sitting just a bit higher than the ranges where I feel best and perform best. I also see, looking around at folks in the decades ahead of me, how easy it is to gain just a little weight each year and have it never come back off. Getting back to my target range was really satisfying, and I could definitely see the impact on my performance in the September half marathon. Now to stay here!
All together, then, a good year on this front, albeit one with a clear path of where and how to improve in 2024.
Where last year was very transitional for us as we joined a new church and found our footing in a new tradition, this year was the year we started stepping back into actively serving. I described my feeling about the year to our senior pastor a few months ago, when still recovering from the sprained ankle I described above: When you have done something like that sprain, you really need to stay off the injured part of the body as much as possible for a while to let it heal. There comes a time, though, when you actually need to start putting weight back onto it and working it normally, so that it will finish healing. Last year and early this year, I was slowly coming back into a healthy spot in my faith. By mid-year, it was clear it was time to start working again.
The aforementioned writing of church music is one part. Closely connected is another, and the way I ended up in that boat in the first place: agreeing to serve monthly in the music ministry by playing piano (with Jaimie singing!). Before jumping back in this year, it had been a full decade since I had played music for a church. It has been good to knock the rust1 off my fingers for piano-playing, and it has been a real joy to serve this way again. “Serve” is the right word, though: it is not a small amount of time or work to prepare even just monthly. There is a lot of music in an Anglican service like ours — and I mean a lot.
Going forward, I may find ways to serve beyond that, in terms of teaching and preaching in particular. Much as I love those, though, I do need to prioritize carefully: with finding a new job and giving multiple talks and several essays committed and that big orchestra piece in flight, I already have much to do. I am leery of overcommitting, especially as I see that we will not have all that many years with our girls at home (they are almost 12 and 10!).
There will be years for all these things, Lord willing. This is my new refrain. There will be years for all these things.
And with that, a happy close to 2023! Many more words here, no doubt, in 2024. Godspeed.
A bit of context: For many years now, I have made it my habit to write up one of these summaries. In this case, I have tried to make it a bit more digestible by breaking it into smaller chunks. You can find them all here.
This year was wild for me professionally. I started the year cautiously optimistic:
I will be very curious to see how I feel come the end of 2023! For now, I will simply close by noting how delightful it is to be a matter of weeks from my 4-year anniversary of starting at LinkedIn: this is already substantially the longest I have stayed at any job. I no longer feel like I am “just getting started” but I do still feel like there is a lot of opportunity ahead, and indeed more opportunity than there was when I started, thanks to the work my colleagues and I have done over the years in between. That is a really nice spot to be.
Eight months later, I quit LinkedIn, having barely avoided both rage-quitting in the early spring and burning out in the fall. It turned out that when I returned to work two weeks after publishing those words, I dropped right into a fire that got significantly worse before it got better: memory leaks in server infrastructure related to the web client. I spent the first three months of the year leading a team of a dozen engineers to fix that — and, critically, fixing it far more robustly than similar efforts in the past had. We added layers of resilience and improved the observability of the server components of the system. We implemented lints in our client-side code to rule out whole swaths of failure modes we had identified. Our retro/postmortem was incredibly thorough and robust.
And I came out of it (a) exhausted by the work itself, which was not the kind of work I love, and (b) frustrated by the way the broader organization failed to recognize, still less to value appropriately, the work we had done. That combination was, to say the least, demoralizing. When the company decided shortly thereafter to subject my team to a fairly nasty re-org, then to embark on what I still judge to be one of the more obviously wrongheaded technical directions I have seen in my nearly 15 years of writing software professionally — and I have seen some things before this — well…
I also had an illuminating conversation with my now-former boss (hey Adam!) at my mid-year review. He noted that despite my oft-stated interest in our being more innovative, I was unenthusiastic about a lot of notionally innovative efforts floating around the company — e.g. the various big pushes for LLM-based features. As I put it to him then: I am motivated specifically by innovations which raise the quality floor, ceiling, or preferably both. I care, as I put it in Next: Role?, about ratchets. None of the things on offer at LinkedIn looked like ratchets.
Put those pieces together, and there is a reason I started allocating budget early in the year to afford myself a sabbatical and some time to look for a job which is a better fit next time around.
Reviewing my work notebook over the past hour was quite illuminating.
As I wrote in those pages one day in May:
What the heck do I even want to do? What actively brings me joy in programming? I like building things, but not “features” so much as libraries. I like teaching. I like FP. I like trying to improve the craft of our discipline. I like having time to think deeply about problems and solve them well — so they stay solved. I like caring about and being able to celebrate & support the product I work on. I like learning. I like type systems. I like helping mainstream/popularize newer ideas more than trying to push forward JS/Java/etc. I dislike management-heavy/-run orgs. I dislike pure profit-maximizer companies. I dislike places which reward political savvy over competence in the domain and excellence in the craft.
(Yes, too much emphasis. But that’s what I wrote.)
As I put it to a friend a few days ago, there are limits to how good we can make things with technologies like Java or JavaScript. That does not mean they are not worth investing in. But their technical limitations and affordances produce both cultural and creative limitations and affordances. We can improve them to a real degree — but only so far. There is a real shift in going from C++ to Rust, or from C♯ to F♯, in terms of our ability to raise the floor or the ceiling quality-wise.1 Floor-and-ceiling-raising is what gets me up in the morning, though. As I put it in that quote from my notebook above: solving problems so they stay solved.
This was the year we — at last — published stable types for Ember.js. Once I got clear of the memory leak mess, I spent a non-trivial chunk of my time getting that capability across the line in Ember. That entailed everything from massive internal refactors to the framework (needful for half a decade or more) to building an absolutely horrific pile of nightmare fuel as a “build script” for the types. But it is a well-documented horrific pile of nightmare fuel, which will be easy to delete if Ember ever gets around to paying off the related tech debt which necessitated it. (If I had tried to fix that, I would still be working on it.) Finishing that work felt incredible.
When I started working on TypeScript support for Ember all the way back in December 2016, I was distinctly not “qualified” for the job. I had been working in Ember for less than a year, and I had been writing TypeScript at all for a whopping two or three months. I was not coming from nowhere, to be fair: unlike many folks in the front-end world, I had spent a lot of time over the first five years of my career writing a mix of typed languages (including C, C++, and Fortran — yes, Fortran), and since 2015 had been mucking around with Rust, Haskell, Elm, PureScript, F♯, and so on. Types were not new to me. But TypeScript was!
It was not ego but desperation driving me, though. I knew — from Rust and Elm especially — that there were many, many bugs in our app that simply did not have to be there. I also knew, from my attempt starting a few months earlier to add types to our app using Flow, that these languages could catch a lot of low-hanging fruit in terms of those kinds of bugs.2 I was sick to death of the bugs that I knew Flow or TypeScript could catch. So I just… started.
I had no idea — really, no idea — how much work I had signed myself up for. I also had no idea just how satisfying it would be, or how much it would change the trajectory of my career. I ended up on the Ember Framework team largely because of that work (and the related efforts it motivated). I ended up at LinkedIn because of that work, and it was a significant part of the case for my promotion to Senior Staff in 2021. I learned an incredible amount technically, plumbing depths of type systems arcana I could not have imagined when I started. I also learned a huge amount about what it means to lead in the kind of incoherent, ad hoc community that any open source project ultimately is.
I am incredibly glad to have done it. I am also, honestly, relieved to have it behind me. I left the Ember Framework team in May and the Ember TypeScript team in September. I do not expect to use Ember going forward, nor to contribute to it in any way. Although I am grateful for all I learned in that little ecosystem, I am very much ready to focus my attention and efforts elsewhere. I still check in on the Discord and on GitHub every once in a while to see if there is any area where my specific knowledge is needed, but less and less frequently. A chapter closed.
I feel good about the work I did in 2023, both internally and in open source. I also feel good about the decision to leave LinkedIn. Not happy, to be clear — but good. I wanted LinkedIn to be a place I could stay longer and grow more. Alas! I am proud of my tenure there, proud especially of the number of engineers who told me that I inspired them or helped them grow, proud most of all of the recurring refrain that I had made the technical culture of “the big app” far friendlier and more welcoming — a place where anyone could safely ask a question and not be made to feel dumb but rather encouraged and helped.
And now? Something new. Here’s to 2024.
I wrote another version of this take all the way back in 2018. I am nothing if not consistent! ↩︎
I started out looking at Flow instead of TypeScript because, when I started looking at it, Flow had features TS did not that were hard necessities for Ember support — and because, unlike TS, it did actually aim to be a sound type system. But I concluded, once TS got the relevant features, that between Microsoft and Facebook, Microsoft was the team to bet on in terms of successfully driving a programming language forward. Facebook’s track record was already a bit spotty on that front, whereas Microsoft had decades of experience in exactly that space. That choice looks obviously correct in retrospect — but it was distinctly not obvious at the time: Flow was a real competitor and some of its choices really pushed TypeScript to reevaluate some of its earlier decisions. ↩︎
A bit of context: For many years now, I have made it my habit to write up one of these summaries. In this case, I have tried to make it a bit more digestible by breaking into smaller chunks. You can find them all here.
This year saw me publish about 70,000 words on this site (including this absurd ~7,500-word end-of-year extravaganza). Unlike 2022, I published almost nothing elsewhere this year — no magazine work, and only a couple of posts over on the Ember.js blog. The latter included a capstone for a whole phase of my career: Stable TypeScript Types in Ember 5.1. I have more to say on this in the Professional post, and as a piece of writing per se it was not particularly interesting, so I will leave it at that.
I am fine with relatively little external publishing, though. I probably could have found some opportunities to publish elsewhere, but this was really not the year for it. Happily, I already have a couple writing projects outside this site in the hopper for 2024, on which more once they arrive.
Here on my own turf, I did a solid majority of my writing in three distinct stints:
Of those, the ones from this year I most want people to read are (ordered alphabetically):
Competencies — on how political competency comes to dominate other competencies at the upper echelons of large organizations.
The Good News That God Does Not Change — a short little reflection on Malachi 3:6, the kind of writing I would like to get back into the habit of doing as I once did more often. (Also an exercise in trying to write at a non-technical, very-accessible level: which is hard!)
How to Do a TypeScript Conversion — just what it says, but with some advice that a lot of teams need to hear, because the advice I offer is very different from a lot of conventional wisdom out there.
Je ne sais quoi — thinking out loud about “taste” and everything from Leica cameras to Mac computers to programming languages.
Next: Role? — part of which, on the metaphor of ratchets and levers in software development, I really need to extract into its own standalone essay.
Sermon Notes — explaining why I do not take them. Extracted from a reply on a LinkedIn post, of all things!
Stocks, Flows, Resiliency, and Layoffs — an almost-essay riffing on a couple key quotes from Donella Meadows’ Thinking in Systems.
Tools for Thought, Not Shortcuts for Thinking — a small push against a very common way of describing and thinking about computers.
Unmeasurable Costs and Benefits — extracted from a Hacker News discussion (again: of all places!) but representing an important part of my thinking about the common idea that we can “just come up with a way to measure it” for everything we do in software. Pairs well with Be a quality detector, again from Donella Meadows’ Thinking in Systems.
Where DRY Applies — a simple post on perhaps the single-most-repeated programming principle, itself extracted from another good, but far less general post.
The Wizardry Frontier — arguing that given how we have raised the level of both abstraction and of quality in software development, we can do so again, even in areas that today look far out of reach.
I set out at the start of the year with a goal of giving at least two talks. I gave zero. I submitted a couple talk ideas I liked to a couple of conferences, and got rejected, and was mildly disappointed — but not terribly disappointed; this is the usual way things go. I also already have two talks lined up for 2024 (more on that in this space shortly!), and am tentatively hoping to give a few other talks as well.
I missed giving talks, even if I was able to take it in stride. Speaking feels like an effective way to get more eyes on some of the interesting work I have been doing over the past few years. Doing the work of writing and polishing a talk also serves very effectively to help me “tune up” my thinking on various subjects. It acts as an excellent complement to public writing in that regard: writing helps me be precise on what I think; speaking helps me be pithy and persuasive. And: I just enjoy doing it!
I did appear on a podcast episode again this year: talking about SemVer, TypeScript, Rust, programming language design tradeoffs, and more with Richard Feldman on Software Unscripted. I love these kinds of discussions, and hope very much to do more of this kind of thing in 2024. Podcast chats sit in a very different spot than prepared talks, but they also prompt some of my best thinking. That was true in this particular conversation, and it was true back in the Winning Slowly era, and I expect it will remain so going forward. There is a certain kind of thinking which only happens in conversation with other smart and careful people.
Finally, I also published a whole boatload of videos to YouTube. Most of them were part of a series of “deep dives” I put together with my friend and long-time collaborator Dan Freeman to help people in the Ember.js community (and beyond!) understand how the Glint tools work. Those were a lot of fun to record, and I used them to learn a little bit about video editing and Final Cut Pro. I also put up one short video on using Dorico, and plan to do more of that as makes sense.
As I have been publishing to YouTube, I have been thinking hard about how much of a presence I want to establish there. On the one hand, it is undeniably the best place to get reasonable “reach” for this kind of video content. (If the algorithm favors you, at least.) Plenty of people have built an entire career on that. I do not want to do that, though, for two reasons. First, I do not want to be catering to the whims of the algorithm or whatever kind of audience one acquires on YouTube: generally, the same kind one might acquire on Twitter, which is to say, one more interested in Takes™ than in learning deeply. There are certainly exceptions to that, but they are rare. Second, and equally importantly, I never want to be beholden to any platform like that for the entirety of some part of my professional presence. Should YouTube go away, or pivot, or change its business relationship to “content creators”, I want to be independent and able to sail on.
I have also been mulling, in a more general sense, on the role I want public speaking to have in my life. On the one hand, I am very comfortable with public speaking and enjoy it immensely, and I regularly get feedback that I give very good talks. That means it is potentially a very helpful road to go down in terms of impact. On the other hand, there is a real danger in pursuing a public persona as a regular conference speaker. The temptation is to end up speaking because you are a conference speaker, rather than because you have something to say. That way lies Thought Leadership™ and Content Creation™. That way lies folly. But I repeat myself.
I want to build public speaking into my career going forward… but if and only if, when and only when, I have something to say that I do not see others saying. You might observe that much the same applies to writing. So it does. If I ever find myself writing for the sake of writing, rather than to inform, to challenge, to build up, to educate — I will stop.
A bit of context: For many years now, I have made it my habit to write up one of these summaries. In this case, I have tried to make it a bit more digestible by breaking into smaller chunks. You can find them all here.
This year I finally read all the books I planned to — and then some! By the end of the year, I had finished 31 works of nonfiction and 12 of fiction. I write “finished”, rather than “read”, in the name of pedantic accuracy. Most of the books on this year’s list were books I read start to finish this year, but not all: a fair number were books I had started before this year — in two cases as far back as 2020! — and only got across the finish line in 2023. I was glad to get them there, though! For a few weeks in November and December, I was texting one of my friends an updated count every few days of how many books I was still “actively” reading. The fact that my count is now 8 is a triumph: at various points in the year it was close to 20!
The other win, perhaps strangely, is that I committed myself to just tabling books I do not intend to keep reading at this point. Sometimes you get everything you need out of the first three chapters of a book. Sometimes you make it halfway through and realize it just is not for you. Sometimes you realize two chapters into a novel that it is terrible dreck that is not worth your time. In any and every one of those cases, it is okay to stop reading it. I have always had a completionist bent to my reading, and while I would very occasionally give up on a book before finishing it, the final months of this year saw me actually commit to a habit of making that call and moving on.
I am even beginning to think about books as starting out neutral and having to earn their way to completion. We will see if that mentality sticks, but life is too short to waste it on bad books, or even on books which are good but not particularly profitable at this point in my intellectual and spiritual journeys.
What about the good books, though? Well, I read a lot of those, thankfully! Listed below are the best books in each of the categories I read through. Notably missing is poetry — a gap I hope to address in 2024!
Broken down by the major “segments” of reading I generally do. Not listed here: the many hundreds of articles and essays I read throughout the year. Perhaps next year I will keep an eye on those, too!
These are books I read in conjunction with business and tech… even if they are not themselves explicitly addressing the subject.
The truth is that the standout novels are nearly the whole list from this year — the only novels I read which do not appear here were rereads or are filed under “still pretty good, just not amazing”:
A bit of context: For many years now, I have made it my habit to write up one of these summaries. In this case, I have tried to make it a bit more digestible by breaking into smaller chunks. You can find them all here.
I wrote up both the background of this little break as well as my goals and hopes for it in some detail back in early November, so I will not rehash those here. Suffice it to say that it has been wonderful. I am much refreshed, and very much looking forward to doing the work of finding my next role over the next few months.
I had a few hopes at the start of my sabbatical:
- make a bunch of progress on the large orchestral work I started back in 2021;
- learn Racket, including Typed Racket, and use it to learn how to build programming languages, starting by working through PLAI;
- finish a couple in-flight essays;
- write a decent bit of code, e.g. for my next-generation personal website builder and perhaps some related to one or another of those in-flight essays;
- and read a lot of books.
But as I (foot)noted there:
I write “hope” rather than “plan” because the meta goal for this sabbatical is to rest and reset. Holding any of those plans too tightly would be a recipe for ending up frustrated if I failed to end up accomplishing them: rather the opposite of the point of a sabbatical!
Happily, on that key point the sabbatical has been a smashing success — and a few of the hopes, too:
I have made significant progress on that orchestral work (and I mean significant progress), and also wrote another short work for a much smaller ensemble. More on both of these in my post on the rest of life.
I did learn enough Racket and Typed Racket to finish PLAI.
I made some progress on my site-builder and wrote a bunch of code in the course of working through parts of both Functional Programming in Lean and Crafting Interpreters.
As for reading a lot of books? Oh yes. See the Reading post!
I have not made progress on my in-flight essay/posts (though perhaps I still will in the next few days after I publish this!), but that is okay. The point, again, was to rest — and so I have. The essays were the thing I chose not to prioritize given relatively limited time.
On that last note: three months is both a very long time off and not all that much time at all. It has gone very quickly! It has been the right length of time, though. I feel very ready to get back to work in 2024, and that absolutely was not the case when I started this sabbatical.
One thing I hoped to get to but did not mention in that post was redoing my office at last. I have joked often for the past few years that my office was the room where we ran out of steam when moving into our house almost six years ago. It was not entirely a joke, though. My home office is a repurposed bedroom, and the closet has for all those years been full of boxes of books — ready to be unpacked, but not unpacked.
It stayed that way for the next half decade. I never loved the space or the layout, but I had never really had the time to sit with the space and consider how else it might work. In late 2021, I replaced the iMac with a maxed-out MacBook Pro, which handles all my personal work far better than the iMac could, and is portable to boot. That might have been a good time to revisit the setup, but I once again did not really feel I had the mental space to tackle it. I had a handful of sketches from a couple years ago when I first started thinking about how I might improve things, but that was as far as I had ever gotten. Instead, I used some home office budget to provision another big display, roughly replicating the old setup with a dedicated 5K display for each machine.
But then I left LinkedIn at the start of October, and had some time on my hands! I started mulling on how I could use the time to make my office someplace really nice. Back in late November, I sat down one Sunday evening and took those initial sketches, iterated on them extensively with input from an architect friend, and ultimately turned them into an actual plan. With Jaimie’s help, I have actually accomplished that plan over the past month.
First and foremost, I changed the desk layout and reoriented it in the room. I laid out the desk so that when I have a work computer again, I can simply plug it into the same single cable that I plug my personal machine into and be off to the races — monitor, camera, microphone, even headphones. I sold off my now-extraneous second monitor.1 With the extra monitor gone, I aligned the desk with the window and closet here, though leaving enough room to access both. This means I now have a lot more working space on the desk, even with a 61-key MIDI keyboard on the window-facing part of it.
That change also makes for a really nice active transition from looking at the computer to looking out the window. In the previous layout, I had a 45º angle between computer screen and window. In some ways, that was nice, because it meant looking out the window was just a glance to the side. I have found over the past few weeks that the 90º turn to the window and open desk space is even better, though. A head turn is still possible, but when I want to switch away from the computer to just thinking or to working purely in paper and pen, having a dedicated space backed by the window view is much better.
Second, we tackled the rest of the room. We bought some more bookshelves and unpacked those final boxes at last. We added a reading chair by a table and lamp in the corner. We hung a bit of art around the room. With the backdrop for the computer now being a pair of sliding closet doors, we bought a curtain rod and hung some blue velvet curtain panels there — which not only improves the view but also substantially decreases the reverberance of the room. We replaced the ceiling light with one that is both brighter and much more visually stylish and interesting.2
It is amazing the difference this fairly simple set of changes has made. Everyone who walks in notices immediately that it feels both cozier and more spacious. Quite a trick! It is now also the kind of place my daughters like to come hang out, enjoying the comfy reading chair (or, hilariously, the floor!). That chair is also a place I can come sit to read for hours on end purely for pleasure — or where I can transition to for dedicated study of something for work, while staying in my office. The location of the MIDI keyboard makes it easy to use for working out ideas, both just messing around on the keys and jotting down notes with pencil and staff paper.
All told, this office update may end up being one of the most significant parts of my sabbatical!
A bit of context: For many years now, I have made it my habit to write up one of these summaries. In this case, I have tried to make it a bit more digestible by breaking into smaller chunks. You can find them all here.
I start every year with a set of overall hopes and aspirations and expectations, and then get to see over the course of the year what reality has in store. This year, though, I came into the year making a conscious effort to be more flexible with those expectations over the course of the year. The past few years, I have set myself up for emotional distress late in the year as hopes and goals became increasingly unattainable. I recognized, in doing my usual end-of-year review last year, how that put me in a counterproductive place: the discouragement of missed goals left me that much less motivated to try to make progress where I could.
Starting out this year, then, I did set some goals — but with two explicit changes to my thinking about those aims:
First, it is important to readjust goals proactively throughout the year. On the “hitting my goals” side, this feels obvious: If I am having a particularly productive year and hit all my annual goals by the end of May, adding more goals for the rest of the year is fine. Rationally, the same is true of the “not hitting my goals” side of things as well: If, in a less productive year, I come to the end of May and am clearly not on track, it is fine to cut back on goals for the rest of the year. The challenge is proactively addressing a negative mismatch, because the associated emotions are so much less pleasant. Doing so is very helpful emotionally in the medium term, though, because holding onto unreasonable goals just feeds frustration as the tension between goal and reality mounts. Biting the bullet lets me deal with the emotional setback once — and then move on.
Second, setting ambitious goals inherently means I might not meet all of them. That is an inversion of my normal feelings on the matter! If I easily accomplish every goal I set, that means my goals were probably too modest, and I likely did not grow much. If I cannot accomplish any of my goals, I was unreasonable at the outset. If, however, I accomplish most of my goals, but cannot quite finish or succeed at one or two of them — particularly those I identify as being a stretch — that is healthy. It indicates I picked goals which were a good fit for my current abilities and circumstances: with room to learn or to fail, but also the chance to succeed.
I have (as ever) written many more words about the details, but I am happiest about having succeeded here: in adapting throughout the year as I needed to. It is a really good feeling to come to the end of the year satisfied with what I did, and not because it had fewer ups and downs — to the contrary, in fact.
In a variety of ways, and for reasons not totally clear to me, 2023 feels like a turning point in several of my public endeavors, setting me up for what I hope will be a really good 2024 — and many years to follow. I touch on this most directly in my notes on work already lined up for 2024 in the Writing and Speaking post, but I have the same general sense overall.
Assumed audience: People who care about art.
I spent the past few hours listening to and reading the score for the entirety of James MacMillan’s “Triduum”. It is not an easy work. No part of it is traditionally “beautiful”. The few moments which come close to that are immediately shattered by harsh blasts of sound. Dissonance reigns. That is fitting for the subject matter of the work. It is a meditation on the span from Maundy Thursday through Holy Saturday. Those three days of Christian history are the days when it seemed Jesus was losing — seemed God was losing. And so the work is difficult.
I am glad to have done the work of sitting down to listen to it and to read its score. I learned quite a bit musically from the experience. I was challenged by the work itself. “Challenged” is the right word. At one point I thought: I could not write this piece. It is not only that I would not want to — true though that is, at least at this juncture — but that I could not even if I did want to. More than that, though: listening to it was hard work. I would not think of trying to play it for my daughters, and Jaimie might appreciate it with some effort but would not, I think, “like it”. This is fair. I am not sure the piece is intended to be “liked”. It is meant to be difficult.
In that it is like many works of the 20th-century repertoire (albeit with a better motivation than much of that music). I do not listen to this kind of music for “fun” exactly, but to learn, to grow, and yes, to be challenged. Good art can be challenging; the best art always challenges us, though not always in the same ways. Some books, some music, some poetry are difficult merely for the sake of being difficult, and I have little positive to say about those works. Some art, though, is difficult because its subject matter demands it. Speaking truthfully — in verse or prose or harmonic structure — of the Son of God undergoing trial, crucifixion, death, and a long lonely day in the tomb: this requires hard material to go with the hard matter at hand.
Not every work ought to be difficult. But some should.
Assumed audience: People at least vaguely aware of the “alternative computing” ideas represented by e.g. OpenDoc.
Epistemic status: Thinking out loud, old-school blogging, “sketch” level.
This morning I came across Apps considered harmful, by Duncan Cragg.
There is a lot of this sentiment floating around (this one posted in the Future of Coding space, but the rhetoric is extremely common more generally). A lot of the work out of the Ink & Switch folks is adjacent to it (and less rhetorically heated!), for example.
I… understand the frustration and the things it wants to solve. I get why folks were and are drawn to the ideas behind OpenDoc.
And yet, something does not “ring true” to me for it. For one, I do not share the “it is just because capitalism is bad and people are trying to take advantage of you to steal your moneys!” take that a lot of folks seem to gravitate toward. But for another, it reminds me at a “feel” level (so this is not well fleshed out for me) of the way many well-intended folks seem to think about free-and-open-source software: surely the only reason it has not caught on is Big Corporate Tech/The Man being against it! Well… no. There are other, deeper reasons Linux software is not as good, and many of them have to do with trade-offs around specialization.
There is, it seems to me, a fairly deep tension here between how deep and specialized a program can be vs. how easy it can be to “integrate” with other programs. I think about Dorico, for example: it is not — as far as I can see! — something which would even be possible to build with the “simple collection of components” model from tools like OpenDoc etc. Its entire value proposition is that there is a coherent whole to the application and its model of music-making. (The same goes for its various competitors.) Apps can do things by dint of specialization and focus that are harder and in some cases likely impossible in “inverted” models where the user “just” (notice that word!) composes things together from “components”. A “component” of Dorico rich enough to be useful would just be… an app, if perhaps one with different up-front extensibility and embed-ability models.
There is definitely value in figuring out (a) ways to make interactions between apps better and easier and (b) which kinds of things are better handled by user-constructed documents built of “components”. Likewise, many of the little app sandboxes out there are not taking advantage of the specialization that applications can afford. After all, the fact that apps can pull off things via specialization does not mean all or even very many apps do so. I do not think we are at a global maximum in software design.
I do think that it is easy for folks dissatisfied with the limitations of apps to overlook the many powerful affordances they offer, though, and for computing aficionados to overestimate the degree to which regular users even want to “compose together components”; the simplicity of the “app” model is one of its great strengths, even as it (at least in its current form) also hobbles “power usage” even for those regular users.
More, perhaps, as my thoughts develop.
Assumed audience: People who already have a decent baseline understanding of Rust and of the idea of type-based overloading/dispatch. (I won’t be explaining either in any detail!)
I was talking with a friend yesterday about a particular API in a Rust library I was working with, and he was noting that it would be nice to have support for overloading based on types, arity, etc. — something you see in Java, C#, C++, Swift, and even (depending on how you squint) JS and TS and the like. You can imagine how that might be nice! These days, you often end up with something like this:
struct Foo { ... }
struct Bar { ... }
struct Baz { ... }
impl Foo {
    fn new() -> Foo { ... }
    fn new_with_bar(bar: Bar) -> Foo { ... }
    fn new_with_baz(baz: Baz) -> Foo { ... }
    fn new_with_everything(bar: Bar, baz: Baz) -> Foo { ... }
}
Invoking this can get pretty noisy:
let foo1 = Foo::new();
let foo2 = Foo::new_with_bar(some_bar);
let foo3 = Foo::new_with_baz(some_baz);
let foo4 = Foo::new_with_everything(some_bar, some_baz);
If you had overloading, instead of having a bunch of different functions with names indicating their differences, you could have something where the types informed what to call:
impl Foo {
    fn new() -> Foo { ... }
    fn new(bar: Bar) -> Foo { ... }
    fn new(baz: Baz) -> Foo { ... }
    fn new(bar: Bar, baz: Baz) -> Foo { ... }
}
Then invoking it could be a bit nicer:
let foo1 = Foo::new();
let foo2 = Foo::new(some_bar);
let foo3 = Foo::new(some_baz);
let foo4 = Foo::new(some_bar, some_baz);
This is not exactly novel technology: as noted above, lots of languages do this! So… why not Rust?
I was thinking on that question on my run today, and one particular reason not to do that stood out to me. If you have type-directed dispatch to different overloads, the simple cases above would be nicer, yes, but then you would lose a key signal that really matters in Rust: “What are the ownership semantics of this?” Rust today requires you to have separate functions for borrowing, borrowing mutably, or moving an item — and each of those is a different type.
struct TheseAreDifferentTypes;

impl TheseAreDifferentTypes {
    fn do_something_borrow(&self) { ... }
    fn do_something_borrow_mutably(&mut self) { ... }
    fn do_something_move(self) { ... }
}
I used self here, but the same goes for any kind of overload-by-type:
fn do_something_borrow_foo(foo: &Foo) { ... }
fn do_something_borrow_foo_mutable(foo: &mut Foo) { ... }
fn do_something_move_foo(foo: Foo) { ... }
If we had type-based overloading in Rust, it would be really tempting to write those like this instead:
impl TheseAreDifferentTypes {
    fn do_something(&self) { ... }
    fn do_something(&mut self) { ... }
    fn do_something(self) { ... }
}
fn do_something(foo: &Foo) { ... }
fn do_something(foo: &mut Foo) { ... }
fn do_something(foo: Foo) { ... }
At first blush, that might also seem really convenient. After all, it would just be up to the caller, who would still have to specify how an item is being passed:
do_something(&foo);
do_something(&mut foo);
do_something(foo);
Unfortunately, it would make it really easy to end up in situations where you get compiler errors far away from the actual mistake you made. Notably, that is true even in a fairly simple case, where you have local ownership of a type. More importantly, it would make it so that changes in one spot could cascade in surprising ways and then produce those errors-far-from-the-change. Consider a function like this:
fn look_at_the_arg(foo: &Foo) {
    do_something(foo);
    do_something_else(foo);
}
Assume that do_something is exactly like I defined it above, and that do_something_else has the same kinds of overloads. As it stands, we have a function which borrows foo — that is, takes it by immutable reference — and then calls do_something(foo) and do_something_else(foo) directly. Implied here is that do_something and do_something_else both take foo by reference as well: &Foo. What if we change the function signature for look_at_the_arg to take foo by move instead of by reference?
fn look_at_the_arg(foo: Foo) {
    do_something(foo);
    do_something_else(foo);
}
Now we have a type error on the call to do_something_else:
error[E0382]: use of moved value: `foo`
 --> src/main.rs:3:23
  |
1 | fn look_at_the_arg(foo: Foo) {
  |                    --- move occurs because `foo` has type `Foo`, which does not implement the `Copy` trait
2 |     do_something(foo);
  |                  --- value moved here
3 |     do_something_else(foo);
  |                       ^^^ value used here after move
  |
note: consider changing this parameter type in function `do_something` to borrow instead if owning the value isn't necessary
 --> src/main.rs:6:22
  |
6 | fn do_something(foo: Foo) {}
  |    ------------ ^^^ this parameter takes ownership of the value
  |    |
  |    in this function
Yes, we can fix that, and yes it is fairly obvious what is going on in this very short function. But even here, it is an annoying little paper cut that the error message shows up at the call site for do_something_else when the reality is that you almost certainly wanted to continue calling both do_something and do_something_else with a reference to foo. That is, it is very unlikely you wanted to move it into do_something simply because look_at_the_arg now takes Foo by value. By making one change, you got a significant and invisible change in the behavior and semantics of the program somewhere else. This is generally not great! In a longer function, that type error could be much harder to figure out, too.
This could also change other semantics in surprising ways. For example, it could invisibly change the timing of Drop calls. Consider, again, this function definition:
fn drop_example(foo: &Foo) {
    do_something(foo);
    // other stuff...
}
Again, assume that do_something is overloaded as shown above. What happens if we change drop_example here to take Foo instead of &Foo? Now do_something also takes ownership of Foo, and therefore the Drop implementation for Foo runs as soon as do_something ends. Depending on what you are doing in the rest of drop_example and how expensive the Drop implementation for Foo is, that might be fine — or it might be surprising and very unwanted! Again, you may or may not have wanted to change the semantics of the call to do_something just because you changed the semantics of drop_example… but if we had this kind of type-based overloading, that would be exactly what you are saddled with.
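To make that Drop timing difference concrete, here is a small, runnable sketch of my own. It uses today’s Rust, so the hypothetical overloads are spelled as two separately named functions, and Foo here is just a stand-in type with a noisy Drop implementation:

struct Foo;

impl Drop for Foo {
    fn drop(&mut self) {
        println!("dropping Foo");
    }
}

// Takes ownership, so `Foo` is dropped when this function returns.
fn do_something_move(_foo: Foo) {}

// Only borrows, so no drop happens here.
fn do_something_borrow(_foo: &Foo) {}

fn main() {
    let foo = Foo;
    do_something_borrow(&foo);
    println!("still alive after the borrowing call");
    do_something_move(foo);
    println!("dropped before this line prints");
}

Run it and “dropping Foo” prints between the two status lines: switching a call site from borrowing to moving silently moves the drop earlier, which is exactly the kind of change type-based overloading would hide.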
What about the original example, though? In that case I was discussing overloads that differed on entirely different kinds of arguments passed to them, not on reference type or mutability. Well, it turns out that these two examples are actually closely related: mutability and reference type are not only a matter of “passing semantics” as they might be in other languages in Rust’s space. References and mutability are part of the type in Rust. That said, they sit in a different part of the type checking space than the type of the thing being handed around by reference or moved etc. This means that you could imagine a version of type-based overloading that supports the original example, but does not support type-based overloading where ownership and mutability semantics are concerned. That would work… but for experienced Rust developers, it would immediately raise the question of overloading on ownership. It would leave the language in a slightly weird spot where you can overload on one useful axis, but not another.
That is not 100% conclusive, of course. You could make the case that Rust does lots of things “implicitly” rather than “explicitly” for the sake of convenience to the user — and that this should be one of them. In particular, this does not introduce any new safety hazards: the compiler would still catch all the same kinds of ownership errors it does today. The only kinds of changes it would introduce would be the sort described above. What is more, you would not even need an Edition change to support it, because the behavior here would be purely additive and always something authors would have to opt into.
I think that is a perfectly reasonable argument, but I ultimately do not agree with it here. Mutability and ownership are very core to the basic mental model for Rust. Having to differentiate between do_something and do_something_mut seems net good to me. It is a small extra bit of overhead when initially writing some code, but pays for itself substantially when either reading or especially editing code later. In general, it is preferable that the distance — the actual distance on screen, but also the distance in “program time” — between an edit and any changes it results in be as small as possible. Adding type-based overloading to Rust would significantly violate that heuristic.
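As an aside: if all you want is the call-site ergonomics of the original Foo::new example, today’s Rust can already approximate type-directed dispatch with a generic parameter and a helper trait, while leaving the ownership distinctions explicit. Here is a minimal sketch of my own; IntoFoo and the empty types are hypothetical names for illustration, not an established API:

struct Bar;
struct Baz;
struct Foo;

// One trait implementation per "overload" we want to accept.
trait IntoFoo {
    fn into_foo(self) -> Foo;
}

impl IntoFoo for () {
    fn into_foo(self) -> Foo { Foo }
}

impl IntoFoo for Bar {
    fn into_foo(self) -> Foo { Foo }
}

impl IntoFoo for Baz {
    fn into_foo(self) -> Foo { Foo }
}

impl IntoFoo for (Bar, Baz) {
    fn into_foo(self) -> Foo { Foo }
}

impl Foo {
    // A single entry point; the argument type picks the "overload".
    fn new(args: impl IntoFoo) -> Foo {
        args.into_foo()
    }
}

fn main() {
    let _foo1 = Foo::new(());
    let _foo2 = Foo::new(Bar);
    let _foo3 = Foo::new(Baz);
    let _foo4 = Foo::new((Bar, Baz));
}

Notice that ownership stays visible: into_foo takes self by move in every implementation, and offering borrowing variants would still require differently named methods. The trick buys back some call-site convenience without hiding ownership.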
Thanks: Rob Jackson got me thinking about this in the first place. Larry Garfield provided really helpful feedback on the version I originally published.
Assumed audience: People with an interest in, and at least a “lay-level” understanding of, programming languages.
One way of describing the history of programming languages over the past 75 years is: They have steadily raised the baseline of what “normal” programmers can express in their languages. I say “steadily” and not “consistently” because there have been plenty of false starts along the way and many a popular language that is a sidestep at best. Even so, “steadily” captures the reality well. We can and occasionally still do write machine code or assembly, when appropriate. In general, though, we tend to write at a level of abstraction that would have been astonishing to the first few generations of programmers, courtesy of advances in programming language design and implementation. That is, advanced type systems and high-performance garbage collectors, safe dynamic programming and JITs — and so on.1
Consider this comment from Niko Matsakis in his post on Rust design axioms (which is worth reading the rest of if you are interested in either Rust or programming language design in general):
Consider reliability: that is a core axiom of Rust, no doubt, but is it the most important? I would argue it is not. If it were, we wouldn’t permit unsafe code, or at least not without a safety proof. I think our core axiom is actually that Rust [is] meant to be used, and used for building a particular kind of program.…
When it comes to safety, I think Rust’s approach is eminently practical. We’ve designed a safe type system that we believe covers 90-95% of what people need to do, and we are always working to expand that scope. [To] get that last 5-10%, we fallback to unsafe code. Is this as safe and reliable as it could be? No. That would be requiring 100% proofs of correctness. There are systems that do that, but they are maintained by a small handful of experts, and that idea — that systems programming is just for “wizards” — is exactly what we are trying to get away from.

To express this in our axioms, we put accessible as the top-most axiom. It defines the mission overall. But we put reliability as the second in the list, since that takes precedence over everything else.
Let me say up front that I think this was and is the right design choice for Rust. What follows is not about Rust at all. Instead, I am interested in thinking about an implication of Niko’s point here for languages in general — specifically, on that contrast between the axioms of accessibility and reliability.
When Niko writes that “we are trying to get away from” the idea “that systems programming is just for ‘wizards’” I vigorously agree with him. Folks in the Rust community sometimes like to say that one of its key ideas is that “we can have nice things”, that performance and usability can go together. That framing for Rust is, as far as I know, originally from me! Sign me up, then, for getting away from “only for wizards” framings. There is more to say, though.
Grant that proving safety is only accessible to “wizards” today. Twenty years ago, though, most of Rust’s type system was reserved for the “wizards” who worked with Haskell or OCaml or other “esoteric” or research languages. That certainly includes the affine types that make up the ownership system, but also its deployment of sum types and Hindley-Milner-style inference. Now, many of those ideas are table stakes for new programming languages, and are even being back-ported to very mainstream, rather staid languages like Java. Even the most cutting-edge bits around affine types look reasonably likely to become far more widespread in new languages in the decade ahead. Rust itself is increasingly popular among not only people who particularly need low-level code but also people building regular applications, tools for other languages, you name it. It has “gone mainstream”.
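For readers nearer the “lay level” this post assumes: a sum type is a type which is exactly one of a fixed set of alternatives, and Hindley-Milner-style inference lets the compiler work out types without annotations. A tiny Rust sketch of my own to illustrate both:

enum Shape {
    Circle { radius: f64 },
    Square { side: f64 },
}

fn area(shape: &Shape) -> f64 {
    // Pattern matching over a sum type: the compiler checks that
    // every variant is handled.
    match shape {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Square { side } => side * side,
    }
}

fn main() {
    // The type of `shapes` is inferred; no annotation needed.
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Square { side: 2.0 },
    ];
    for shape in &shapes {
        println!("{}", area(shape));
    }
}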
You can see the same dynamic play out in other languages that came up in the 2010s. TypeScript in particular is a good example of a language whose type system can of necessity perform some extraordinary contortions (“because JavaScript”). Some of its type system features, especially conditional types, its very advanced approach to “type narrowing”, and other forms of type-level programming, otherwise exist only in far more obscure languages like Haskell, Racket, and Idris. If you had told anyone a decade ago that many ordinary front-end web developers would be writing conditional types, verging on a subset of dependent typing, you would have been laughed out of the room. Yet here we are.
Proving the safety guarantees of the things Rust requires us to wrap in unsafe blocks might be wizardry today, reserved for the handful of half-programmer-half-mathematician magicians who can roll up their sleeves and work through proof tactics in F* or Lean. Yes, accessibility and reliability are still in tension. It need not stay that way, though. It has not stayed that way over the past fifty years; we have made it far better. We can follow the same hard path we have so many times before and make the tools more usable, the knowledge more comprehensible and accessible, the applications more achievable. We can figure out what it would mean to merge progressive disclosure of complexity with proof tactics.
Things that were Merlin-class exploits yesterday are now part of the undergrad curriculum. We can move the wizardry frontier.
Some of the abstraction that comes along with this leads some folks to decry a corresponding degradation of performance — a somewhat legitimate critique. The reasons are complicated, though, and do not merely come down to too much abstraction but rather the wrong abstractions and misaligned incentives. But that is for another post. ↩︎
Assumed audience: People interested in typography. Especially if you have a musical background!
In his excellent Practical Typography, Matthew Butterick writes in the chapter “Grids”:
Fans of mathematical ratios in grids (also known as modular scales) sometimes compare them to music. For instance, Robert Bringhurst says “a modular scale, like a musical scale, is a prearranged set of harmonious proportions”. (The Elements of Typographic Style, p. 166.)
As a musician, I find this metaphor incomplete. Sure, music is written on a grid of harmony and rhythm. But performers don’t rigidly adhere to these grids. Indeed, music that was locked perfectly to a grid would sound sterile and boring. Just as the performer’s ear is the ultimate judge of the music, the typographer’s eye is the ultimate judge of the page.
I actually agree with much of Butterick’s point here (indeed: if the only thing that comes of my writing this piece is that you go read and learn from Practical Typography, I will consider this a win). However, his comment about musical grids gets a couple things quite wrong in my view as a composer of music — and the ways it goes wrong are themselves illuminating:
The performer’s ear is not the ultimate judge of music — at least, not of all kinds of music. Butterick may be thinking here of styles heavier on improvisation and alteration: styles I like and play a fair bit myself, but not all styles. As a composer, I would find it quite offensive if a performer decided to ignore the melodic and harmonic “grids” I chose. So would most audiences for “classical” music (whenever it was composed), where the composers’ intent is highly prioritized.
That being said, the composer’s ear is in most cases the ultimate judge. Sometimes the right harmony for a given moment in a piece is not the “normal” one for the scale and progression; indeed, sometimes the surprise of doing something a little bit weird is the point. You can only tell that something is a “little bit weird” if there is a coherent baseline to refer to, though!
The exceptions to the second point are in the context of other compositional choices, e.g. to use a tone row, or more generally to stay strictly within a given tonal structure — from constructed tonal systems in modern music1 to the strict counterpoint works of historical masters like Palestrina and de Victoria. To reiterate, though, this is still a conscious choice.
Finally, even in highly-structured composed music, there is considerable freedom to the performer (including a conductor): How are tempo and dynamics to be interpreted? Just how much emphasis does a marcato imply? How tightly or loosely should this follow the rhythmic markings here to convey the musical sense of the section?
With those points in mind, I offer an alternative formulation of the same closing paragraph:
As a composer, I think this metaphor is quite apt — as long as we understand its limits. Musicians think hard about scales and harmony. These kinds of “grids” serve as starting points, a framework on which to hang the rest of the musical decisions. In some music, that framework is a starting point from which performers are free to make many modifications. Even in other, more strict kinds of performances, the composer herself may both choose an underlying harmonic and rhythmic structure and then consciously deviate from it to achieve a desired effect — or she may use the constraints of a grid to force her to accomplish musical interest in other ways. Rare — but sometimes quite excellent — is the musical piece which has no “grid” and is still musically interesting and beautiful. The exact same dynamic holds for typographical scales: they can be useful starting points on which to “riff”. Knowing when and how to deploy them is a matter of taste. Just as the composer’s musical sense is the ultimate judge of the music, the typographer’s eye is the judge of the page.
An addendum: It is true that advanced composers or performers can go far away from harmonic scales and metrical grids to accomplish their musical ends. But it is also true that beginning composers do not start that way. The course of study begins with very rigorous structure, most often following the course of musical history. Formal instruction in composition, for example, involves exercises with extremely constrained rules for both metrical and harmonic structure, standard forms to compose, standard ensembles to write for, and so on. There are differences for musicians whose focus is performance over composition — but the best musicians, across genres and forms, are also students of what has come before and what works well and what does not. Mature composers and performers can go “off the rails” because, again, they know where the rails are and what they are for. Exactly the same applies to typographers and designers.
This has historically been one of my favorite ways to end up in really interesting places tonally for chamber music in particular. My Single Movement for Clarinet and Cello, for example, works with a constructed “mode” and I had a lot of fun with it. ↩︎
Thanks: Social media image credit Marius Masalar.
Assumed audience: People interested in the contours of a career in general, and specifically those who care about why I left LinkedIn earlier this fall.
Part of a series on what’s next for me:
When I came to LinkedIn at the start of 2019, I told people that I hoped it would be a
This change was, by the time it happened, not a surprise. I did not rage-quit my job during an incredibly high-pressure period back in the spring (though I wanted to, at times!), and I also did not burn out by staying too long. No, I kept a promise I made to myself when I chose to come to LinkedIn five years ago: not to burn out, by keeping a close eye on how well my values stayed aligned with company priorities, and leaving before things went awry.
Reflecting on 2022, I wrote:
I find that when I end up spending a majority of my time on writing documents to articulate strategies, or dealing with team dynamics, or otherwise away from building, I am least satisfied. Those are all good kinds of work and they all need doing, and I don’t mind spending some of my time on them, but they never energize me the way that both building and teaching do.
…I am looking forward to having some space to do more R&D and exploration around how to continue making an impact on developers’ ability to develop quality software quickly.
[The] year felt odd: while we made a little bit of progress on things in that space of “developers’ ability to develop quality software quickly”, basically all of the work we were doing felt like catch-up work — both in terms of getting LinkedIn’s web stack caught up with capabilities its mobile stacks have had for a while with types, and in terms of getting LinkedIn’s web stack at least in the ball park of where other front end stacks have been for a while. Catching up is good… but it is generally hard to advance the field from behind.
…
I will be very curious to see how I feel come the end of 2023! For now, I will simply close by noting how delightful it is to be a matter of weeks from my 4-year anniversary of starting at LinkedIn: this is already substantially the longest I have stayed at any job. I no longer feel like I am “just getting started” but I do still feel like there is a lot of opportunity ahead, and indeed more opportunity than there was when I started, thanks to the work my colleagues and I have done over the years in between. That is a really nice spot to be.
“How I feel” is a pretty open-and-shut case here late in the year: Unfortunately, the opportunities I hoped for did not materialize, my own best efforts notwithstanding. Worse, the horizons for what I could work on narrowed substantially. Client development at LinkedIn (web, iOS, and Android alike) is embarking on a course that — in my judgment — offers me no levers for the kind of work I care about.1 (More on “the kind of work I care about” in Next: Role?) There would have been a lot of code in the mix, but not code I believed in. In sum, the work in front of me no longer aligned well with my own values and priorities, so — after much consideration and discussion with my wife and close friends — I resigned.
Life is short, and our work is a massive part of it. Given the considerable blessing of financial stability and margin, the question became: Spend the next few years of my life working on things I do not really believe in? Or… do something else? The answer was easy. Not everyone has the opportunity to make that call, I recognize. Sometimes, you just have to do annoying or terrible work to put food on the table for your family, and doing so is its own act of responsibility. For those of us who do have that opportunity, though, it matters that we take it: It matters that we use financial stability as a lever to make sure our work actually moves the world — however fractionally — in the directions we believe in.
I remain very glad to have gone to LinkedIn. I developed some great relationships there — made friends who have since flown across the country to spend time in our home! — and did really good work. I grew and learned a great deal through working at significant scale, both technically and organizationally. But now it’s time for something new.
Bonus: a few months after publishing this, I recorded an interview with Adam Gordon Bell on CoRecursive about my time at LinkedIn, including my decision to leave. If you want a little more detail on what I wrote above, check it out — it has a full transcript as well as the audio recording.
My most common description of that roadmap has been: building mediocre developer experiences with mediocre technologies to deliver mediocre user experiences in the name of product velocity — and, worse, doing so as a technical band-aid over other, far more fundamental, cultural and organizational issues. The details are not public, and people could reverse course, so I will not belabor the point here. But my judgment stands unless and until they do. ↩︎
Assumed audience: People interested in how I am spending this sabbatical, or interested in tools and approaches for making a sabbatical like this most helpful.
Part of a series on what’s next for me:
Since leaving my job at LinkedIn at the start of October, I am taking a proper sabbatical through the end of 2023. I am spending these months resting, composing, writing, reading, learning things I have not had a chance to learn on the job, and hanging out with my family. Over the course of the rest of the year, I hope1 to:
That might not sound like your idea of a sabbatical, but it absolutely sounds like mine. For one thing, I find myself most able to rest, in the sense of getting back energy, when I am learning and exploring. For another, parts of that will not be work-like at all. Last but not least: I started by taking all of the first week to “veg”, and all of the second week entirely away from computers: only books, paper journals, and the like.
My goal in starting out with that kind of basic physical and mental rest as a foundation was to make the weeks which followed that much more energizing. So far, that has proven out! I am just about ⅔ of the way through PLAI and having tons of fun with it. I am also most of the way through a draft of one section of that symphonic work. Along the way, I have also written a bunch of shorter blog posts, made some tweaks to my current website, and started planning in earnest for the next iteration of the website.
At the halfway point, I feel much, much less mentally and emotionally tired and stressed than I did starting out. The space has also yielded some of the fruit I hoped for in terms of thinking about my next role: it is hard for me to really imagine something new while deep in the grind of a current role. As I came into November, though, I found myself imagining a lot of fun alternative approaches to my chosen career. Whatever comes of those, it has been helpful to have the space to come up with new ideas, and to set aside the assumption that things must be as they have always been.
Over the rest of this sabbatical, I also intend to spend some time actively working through the exercises in Designing Your Life, which I listened to on audiobook over the summer. I am hoping that, in conjunction with the considerable amount of reflection I have done as I wrapped up at LinkedIn and the conversations I have been having with Jaimie and other family and friends, those exercises will help me chart out a good next step career-wise.
I also reserve the right to look at any given day or week in that stretch and nope right out of doing projects (in addition to my standard rule of not doing work — whatever that means at any given time — on Sundays)! There has been a day each of the past few weeks where that was what I needed to do, and I am really glad that I made the commitment ahead of time to allow myself that kind of flexibility on this. It makes it easier to really push hard on doing the work on the working days to know that on days when I want rest, I can just rest.
One additional note: for the sake of making the most of this time, I have been leaning heavily on Freedom to structure my time. I recognized that it was easy for me to end up spending hours chatting with folks on Discord, or browsing various news aggregators. Given the incredible blessing of being able to take three months off and spend time on side projects and learning I would not normally be able to do, though, I do not want to spend hours and hours on chat or social media or Hacker News. A little goes a long way. Accordingly, I have set up a fairly aggressive schedule for access to those during the work week, in addition to my existing habit of blocking all social media on the weekends and additionally blocking email and chat on Sundays. At present, I am only allowing myself unrestricted access from
I do miss having more of the social-intellectual interactions from some of those chat contexts. Unfortunately, I feel quite keenly the tradeoff here: I have limited time on this sabbatical, and every second spent in a chat is a second not spent writing orchestra music or learning how to build programming languages. I think I need the intellectual community provided by some of those groups in my life; I also know that it is good to have seasons where I step away and “put my head down” to get things done in a more focused way.
This experience has also highlighted for me just how much I would like to do something like the in-person version of Recurse Center at some point in the future. Learning Racket and using it to build programming languages is good; it would be much better if I were not largely alone doing it. My experience is that being around other programmers is really valuable for things like this, even when they do not have any experience in the specific thing I am working on. (Rubber duck debugging!) Likewise, I am feeling more and more that I need to find some kind of community for feedback on my music: there is only so far I can go working totally alone and without robust input from other musicians.
All of this has me mulling on how valuable in-person community is for these kinds of things, and on the intersection of that reality with jobs. I remain deeply committed to remote work, not least because I am able to work so much more effectively from my home office than anywhere else when I really need to concentrate and get some hard problem solved. At the same time, I feel the lack of even the kinds of social interaction I had via Slack and Zoom calls (!) at work; I feel even more the lack of the quarterly meet-ups. Embracing flexible and remote work cultures does not mean abandoning in-person interactions; it does not, that is, require a kind of remote work maximalism.
I am, to say the least, grateful for the opportunity to take this sabbatical. It has been wonderful. I encourage you to figure out how to make it work if you can. (Really: employers should build in regular sabbaticals as part of a retention plan!) I am excited to see where the second half of this takes me, as I am to see where I go with my next role in 2024.
I write “hope” rather than “plan” because the meta-goal for this sabbatical is to rest and reset. Holding any of those plans too tightly would be a recipe for frustration if I failed to accomplish them: rather the opposite of the point of a sabbatical! ↩︎