Daniel Compton

The personal blog of Daniel Compton

Understanding DisplayLink, multiple displays, and M1 Macs

Introduction

I needed to buy a new Mac recently. I couldn’t bring myself to buy a legacy Intel Mac, but the new M1 Macs only support up to two displays. For the MacBook Air, MacBook Pro, and iMac, that means you can connect one additional display along with the built-in display. For the Mac Mini, you can add two displays. I wanted a MacBook Air, and I already had two external displays that I wanted to keep using.

DisplayLink

What are your options if you want to run more than two displays? Enter DisplayLink. DisplayLink (not to be confused with DisplayPort) is a technology created by a company of the same name. It lets you send a video signal to a display over USB or Wi-Fi instead of via DisplayPort or HDMI. DisplayLink technology was first used in laptop docking stations, but can now be found in other video-related products, including adapter cables and monitors.

DisplayLink has two components: a software driver installed on your computer and a hardware chip in the dock or adapter. The software driver presents itself to the computer as one or more displays. The computer sends pixel data to the driver, which compresses the data and sends it over USB. The DisplayLink chip decompresses the data and sends the display signal to the (real) display.

I was pleasantly surprised to see that DisplayLink’s drivers have been updated for Apple Silicon and don’t require a kernel extension. From what I can tell, DisplayLink seems to be publishing regular updates to their drivers and is actively working on new features.

DisplayLink Limitations

Sending compressed video over USB is a pretty neat trick! However, DisplayLink has quite a few downsides to be aware of.

An uncompressed 4K60 video signal requires 12 Gbps of bandwidth. The latest DisplayLink chips run over USB 3.0 which has a bandwidth of 5 Gbps. Even a single 4K signal is too big, let alone multiple displays. This is why DisplayLink needs to compress the captured screen.
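As a rough sanity check on that figure (assuming 3840×2160 at 60 frames per second and 24 bits per pixel, and ignoring blanking and protocol overhead):

3840 × 2160 pixels × 60 frames/s × 24 bits/pixel ≈ 11.9 Gbps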

For desktop browsing, email, software development, and other general computer tasks where the display doesn’t change much from frame to frame, DisplayLink compression can be very efficient and is unlikely to be noticeable. DisplayLink claims “Pixel-perfect graphics”, and “Compression tuned for video content and high-quality graphics” on their newer chipsets. If you’re a video/photo/graphics professional, you’d probably want to check and see if this works for you.

It doesn’t sound like it will be as good for gaming though:

While DisplayLink technology is targeted primarily to productivity and video applications, it is suitable for the casual gamer. If you’re a “power gamer”, looking for every edge possible over your opponents, you might want to go another route.

Another downside to DisplayLink is that you need to install a separate, optional program to show the Lock Screen on a DisplayLink monitor. Otherwise, the Lock Screen will only show up on the built-in display.

Unlock with Apple Watch doesn’t work when using DisplayLink. Apple disables Unlock with Apple Watch with the message “Unlocking with Apple Watch is not available while your screen is being shared”.

DisplayLink has a few other limitations on Macs: HDCP is not supported, so you may not be able to watch Netflix, Hulu, iTunes Movies, etc.; only four external displays are supported; screen rotation is not supported; and clamshell mode is only partially supported. Manufacturers publish blog posts announcing that new DisplayLink drivers have been validated with their hardware, so it’s probably good to check with your manufacturer before updating.

There are quite a few limitations to using DisplayLink to extend your displays. However, it’s also currently the only way to run more than two displays on an M1 Mac, so if you need that, you’ll need to look at a DisplayLink-powered dock.

Choosing a Dock

If you already have a dock without DisplayLink, your cheapest option may be to purchase a DisplayLink USB adapter. You could connect your monitor to the adapter, and then the adapter to a free USB port. DisplayLink keeps a list of USB adapters you can purchase. Check compatibility, as not all adapters list support for macOS, although sometimes the reviews will mention that they do still work.

DisplayLink sells chipsets to dock manufacturers. These chipsets combine the DisplayLink hardware with support for other ports like USB, Ethernet, and audio. This means that products from different manufacturers like Targus, Dell, Alogic, Plugable, and Kensington can end up with very similar capabilities, because they are all using the same electronics.

Unfortunately, all of the M1-compatible docks I saw had a maximum of one USB-C port. If you have a lot of USB-C devices, you’ll either need to purchase USB-A to USB-C adapters, or buy a non-DisplayLink dock and a DisplayLink adapter.

If you’re investigating docks, you may see references to USB-C Alternate Mode. This is a way for a USB-C port to carry non-USB signals like DisplayPort and Thunderbolt(!). Alternate Mode doesn’t let you run any more displays than you could otherwise run natively.

Dock Options

DisplayLink maintains a list of DisplayLink docks. As a starting point, here’s a list of docks that the manufacturers have said are supported on M1 Macs:

Plugable has a few options:

Note that Plugable mentioned that their Ethernet ports are limited to 300 Mbps on macOS due to driver issues. I’m unsure if that extends to all DisplayLink docks.

Alogic also has a few options:

Targus has several docks with DisplayLink:

Kensington has four docks with DisplayLink:

StarTech has four docks with DisplayLink:

Dell has some docks with DisplayLink technology, but they don’t list compatibility with Macs, so it’s probably safest to avoid them unless you’re able to try them first.

CalDigit docks are highly regarded but don’t contain DisplayLink chips. This means you can’t use them to run more displays than your Mac natively supports. However, they seem to work if you only need a single external display on an M1 Mac, or if you add a DisplayLink USB adapter. There is a video of someone using a CalDigit TS3 Thunderbolt dock with 4 DisplayLink USB adapters.

Disclaimer

I’m not an expert on any of this; I’ve just read a lot and compiled it into one place. Please let me know if I’ve missed something or if you have any suggestions.

Summary

There are lots of moving parts involved with picking a USB-C dock, let alone one also running DisplayLink. You need one which:

As you can tell from the number of limitations mentioned throughout this post, picking a dock that works with your setup is far from a simple task. I recommend checking the specs closely, checking out reviews, and ideally trying it out at home before committing.

Building Stable Foundations - Heart of Clojure 2019 talk

This was a talk I gave at the first Heart of Clojure in 2019. I wrote more about why I enjoyed it so much at the time.

You can also see the talk on YouTube.

Good morning. I was going to ask if everyone’s feeling awake, but after that karaoke, I think everyone’s pretty well awake by now.

This talk is called Building Stable Foundations, and it’s about building stable, long-lasting foundations for the Clojure community.

[00:00:30] Bozhidar has already introduced me slightly, but I’ll go back over it. My name’s Daniel Compton, I’m an open-source project maintainer. I am an administrator at Clojars. I write and record a Clojure podcast and newsletter at therepl.net, and I am working at a startup called Falcon.

Falcon is hiring for Clojure developers, remote Clojure developers. If you’re sort of in the U.S.-ish region, or you can work [00:01:00] on a similar kind of time zone, check out falconproj.com [Ed: Falcon shut down 😢] or come and talk to me, and I can tell you more about what we’re working on.

The other thing that you may know me from is an organization called Clojurists Together. With the help of our members, we fund and support open-source Clojure projects. The question is, well, why is that important? Firstly, who has heard of Clojurists Together?

Great. All right. Just about everyone. And who [00:01:30] here is a member, or whose company is a member, of Clojurists Together? Great. The reason why Clojurists Together is important, or I think it’s important, is that Clojure is foundational, and open-source Clojure is foundational to what we work on. Of course, the Clojure language is open-source, but the vast majority of tools, and libraries, and services that we all use as day-to-day Clojure programmers come from open-source software.

It’s important, not just for [00:02:00] us as a community, but open-source Clojure is really foundational for businesses too. Businesses are investing lots and lots of money into the Clojure ecosystem, and they’re relying on these investments lasting a long time. I think it’s really important to remember that context of the money being invested into Clojure, and to make sure that it is well-spent and is going to be [00:02:30] a long-lasting investment.

Who here is lucky enough to get paid to work with Clojure? All right. Of those of you who had your hands up, who uses open-source Clojure code in your day job? It’s a bit of a silly question, isn’t it? The idea that as a working Clojure programmer you wouldn’t be using a significant amount of open-source code as part of your work. Maybe there are some people, I suspect there are a couple, but for most of us a lot of the work we do is built on top of these really important open-source foundations. Open-source creates an incredible amount of value for everybody that gets to use it. It’s a shared resource that doesn’t run out when somebody uses it; one person using it doesn’t mean anyone else gets less of it. This is a really amazing property of software and open-source software.

But a few years ago, I started to notice [00:03:30] developers talking about burning out, in the Clojure community and also outside of it. This was kind of worrying to me, because a lot of these people were really important to the Clojure community. They were doing really important work.

I started to see these kinds of things being said. People saying things like, “Any tips for a post burnout return to open-source? I’m exhausted, but want to keep going.”

This [00:04:00] really broke my heart, to see these people who I cared about, many of them friends, not feeling supported. They were thinking of leaving, and I was really worried about this. We have this problem that the foundations of our community are perhaps not as strong as they could be, or they’re not being supported as well as they could be.

Before we talk more about our open-source foundations, I think we can learn some things from building [00:04:30] foundations. This is what a building site looks like before you start building on it. The first thing you have to do when you’re building is to prepare the foundations; everything else goes on top of them, so you have to get this right first.

When you’re building foundations, there are a few things you need to do. The first thing you need to do is survey the land. You need to find out: do you have the rights to work on this land that you think you own, and is the structure that you’re going to [00:05:00] be building allowed to be built on this land? This is really important to get right, because if you get it wrong, you might invest all of this money into building something which you then need to take back down again.

The next thing you need to do is check under the ground. What is going on underneath the surface? You can see what’s on top of the ground, but it’s really important to find out what’s going on underneath there before you start building, so that you don’t end up being surprised by things later on. The last thing, and this is the most important thing and [00:05:30] what we most often think of when we’re thinking of building foundations, is reinforcing the ground. A fully built house can weigh 50 to a hundred tons or more. That’s a lot of weight to be putting into the ground, and so you really want to make sure that those foundations are going to be able to hold up to that kind of weight.

This is quite similar to what we do when we’re picking dependencies. The first thing we do is check the license. I hope we check the license, and [00:06:00] we also will usually want to review the codebase. Is it well-written Clojure? Does it have tests? Is it actively maintained? These are all important things to be aware of before you pick a library. The last thing: if this is going to be used in the critical path of your production software, you probably want to test it, find out how fast it is. Can it stand up to the full weight of production? Are there memory leaks, that sort of thing?

Foundations are really important [00:06:30] to building a house, but it turns out that they’re actually pretty cheap. In the U.S., on a $400,000 house build, they cost about $10,000, or two and a half percent, so not really that much of the cost of the build. This is because it’s a very well understood process. It’s been done many times every day, and for the most part, it goes pretty smoothly.

Has anyone ever seen a house like this before? [00:07:00] What’s happening here is that the foundation on this house wasn’t built correctly the first time. It’s a little bit hard to tell, but this house has been lifted right up off the ground, quite a few meters up, and stacked up on these Jenga-like blocks so they could fit machinery underneath to repair the foundations. Then they’re going to slowly lower the house back down.

If you look closely, I’m not sure if it’s going to come through very well here, but there are some people standing underneath this house, [00:07:30] very brave people. Does anyone relate to this picture? Has anyone had to fix a foundational dependency in your codebase and felt like there’s a lot of things bearing down on top of you? As you can imagine, this is not a cheap process. It can cost $20,000 to $100,000 to do this kind of repair, which is two to ten times more than the cost of building the foundations correctly in the first place.

[00:08:00] Yesterday, Tiago talked to us about resilience, and I’m glad he really dug into this, because I don’t need to go too deeply, but we’ll remember resilience as the ability to recover from a stress or a change. This is what you’re going for when you’re building foundations: you want the foundations to be able to stand up to the elements and all of the things that a house is going to face over a hundred-year lifetime. What we can see here is two houses. If you look at them, superficially they kind of look the same. They’re [00:08:30] both held up by wooden poles. They’ve both got thatched roofs. They’re both orangey. They’re both roughly the same size. But the one on the left is not very resilient at all, and the one on the right is incredibly resilient, even though it’s over the water.

This house on the right is, well, it’s a community. Firstly, it’s a community of houses. It’s not just a single house, and it’s also held up by many stilts, so that even if you lost one, or two or even a dozen of these stilts, [00:09:00] the whole structure would still be sound. Whereas on the left, you’ve got just two poles holding this house up. And if one of those two goes down, the whole thing is coming down. If you agree with me that open-source software is the foundation of the Clojure community, then we need to be asking this question, how resilient are our foundations? A few years ago, as I was looking around the Clojure community, it seemed to me that we were perhaps closer to the left-hand side than [00:09:30] the right-hand side.

We had a few people doing a tremendous amount of work for the Clojure community. People were depending on their work every day, they were using it. It was really important, but they were feeling unappreciated, perhaps burnt out, and there was this refrain of, “I’m exhausted, but want to keep going.” I thought they were at severe risk of burning out. If we lost these people, it would be somewhat equivalent to that house we saw having to be lifted up on the blocks, [00:10:00] and it would take a huge investment. We would lose a huge amount if we lost some of these people, and it would cost us a lot of time and effort to regain that knowledge and hard-won wisdom.

This led me and others to create an organization called Clojurists Together. We have a very simple mission: we want to fund and support open-source software, infrastructure, and documentation that’s important to the Clojure and ClojureScript community.

We [00:10:30] fund three kinds of projects. The first kind is maintenance projects, and these are projects that are often not very interesting to fund. Maybe they’re very stable and used by lots of people, but over time, software needs maintenance, and if people aren’t being paid, or aren’t getting dedicated time to work on it, these maintenance tasks can pile up. This is a place where I think Clojurists Together is really well suited to help, [00:11:00] because it’s work that sometimes wouldn’t get done otherwise, or would take a lot longer. We really like being able to fund this kind of low-level, boring work, because these are really important improvements to make.

The next kind of work we like to fund is new development. This might be a project that is already well understood and used by many people, and the maintainers have an idea for a new feature, or a new release, or something else that they would like to do, and they need some dedicated [00:11:30] time to work on it. We also fund projects doing this kind of work.

The last kind of project is new, fledgling projects. We fund a few of these, where someone comes to us with the seed of an idea, and perhaps they’ve already proved out the idea, but they would like some funding to take it to the next level. We’ve been able to fund some of these projects, and they have [00:12:00] really surpassed my expectations. They’ve done amazing work, so this is another kind of project we like to fund.

We launched in October 2017, and in 2018 and 2019, we’ve funded 11 projects.

Who in the audience has used one of these projects? All right. What about three projects? Who’s used three or more of these projects? Wow. Okay. Who’s used five or more of these projects? Great. All right. Is there anybody here who has not [00:15:00] used a single one of these projects?

All right. Well, that’s really good to see. [Nobody put their hand up]. I’m glad that this has benefited people.

I’m really excited to be able to announce the next funding round here today; this Heart of Clojure conference has been timed really well for announcing it on stage. We’re going to be funding four projects at $9,000 each over the next three months.

The first project we are funding [00:15:30] is Calva. Calva is a VS Code extension for Clojure. VS Code has been gaining in popularity hugely over the last few years. I think it’s one of the sort of surprise sleeper hits of the programming community, so we’re really excited to be able to fund Calva.

We’re also funding Thomas Heller to work on Shadow CLJS again. He’s been doing incredible work for the Clojure and ClojureScript community on Shadow CLJS. I hear lots of people talking about it and using [00:16:00] it, and it’s a really great tool, and those improvements are benefiting lots of people.

Another project we are funding is Meander. Meander is a really interesting Clojure/ClojureScript data transformation library. I probably can’t do it justice here, but Michiel Borkent [Ed: this was Timothy Pratley, sorry Michiel!] wrote a really great blog post about it, so if you find him here at the conference, ask him to tell you more about Meander, because it’s a really cool project for transforming data and maps, which [00:16:30] we do every day.

The last project is CIDER. I would say CIDER needs no introduction, because many, many people use CIDER, and many more people also use Bozhidar’s work on Orchard and nREPL and all of the other foundational things that sit beneath CIDER, which we as a community get to take advantage of. I’m really excited to be able to fund these four projects, thanks to the [00:17:00] support of our members.

The way Clojurists Together works is that we have members sign up throughout the year, and then every quarter we go to them and say, “Hey, what do you think we should be working on? What’s interesting to you? What do you think is useful? What areas do you think need support?” Then we take that information and create a call for proposals. We say to the wider Clojure community, “Hey, here’s what our members would like us to fund. Please [00:17:30] give us some proposals,” and every quarter we get an amazing number of proposals, more than we could pick. The quality of these submissions is really high. Then the committee members vote on the projects that we want to fund, and we fund those projects for three months. At the end of those three months, we turn around and do it all again, so it’s pretty straightforward.

Clojurists Together has been getting noticed and has had influence beyond [00:18:00] just the Clojure community. This is my friend, Devon, who works at GitHub, speaking at GitHub Satellite earlier this year about GitHub Sponsors. Clojurists Together was mentioned as an influence on GitHub Sponsors. GitHub Sponsors is a way to directly fund the projects that you use on GitHub. I’m really excited to see this. GitHub clearly has a really wide reach in the development community, and they’re helping to [00:18:30] change attitudes around open-source funding, both in the Clojure community and in the wider community. This is something I’m really looking forward to.

Devon has this quote, “Our goal is for open-source to be a serious career path people can set up.”

This is, I think, the next frontier for the Clojure community too. We have a bunch of projects that are really important, that people are using, and I would really like to see [00:19:00] people be funded to work full-time, or nearly full-time, on these projects for the good of the community. Clojurists Together can’t do it all. We’re set up to do a particular kind of funding, and so what I’d like to see is the community as a whole starting to build this open-source middle class of individuals or small teams being funded in a meaningful way, and being appreciated by the community, so that they can do this [00:19:30] work that benefits everybody.

Part of this is that we’ve just launched a page on our site called Beyond Clojurists Together, where we are collecting Clojure projects and Clojure programmers who are accepting money through Patreon, Open Collective, GitHub Sponsors, or whatever other tools they have, so they can be funded directly. If you’re interested in funding projects and you don’t really know where to start, this could be a really good place to start looking and to find [00:20:00] some of the dependencies that you or your company are using, so you can fund them directly.

How do we get there? How do we get to this glorious future where all of our dependencies are supported and maintained? It needs two things: time and money. Sometimes people and companies have the capacity for one or the other. We have a lot of people, people in companies too, investing time in open-source [00:20:30] projects, open-sourcing internal things, or maintaining projects. Other times companies don’t really have the time or the capacity to work directly on open-source projects, but they have some money available that they could put into maintaining them instead.

The other side of the equation is money, and companies spend money on lots of things. They spend it on our salaries, software licenses, cloud hosting, hopefully [00:21:00] green cloud hosting after that talk yesterday, and offices and snacks.

Companies are used to spending money on lots of things, but I think open-source projects often aren’t shaped very well for companies to give money to, and companies need two things. They need value. They need to be able to say to the decision-makers, to the finance department, “We are getting something of value when we are giving money to this project.” That doesn’t mean [00:21:30] that you need to sell out to the man. There are lots of different ways that you can provide value to companies, ways that are valuable to the companies, that fit the skills and capacity you have, but most importantly, still meet the values that you hold as a person and the values of your project.

This could be things like consulting, feature development, long-term support, and maintenance, [00:22:00] things that oftentimes you may be doing anyway. But if you’re able to package this up and sell it to companies in a way that they are able to understand, and to pass on to their decision-makers, this can be a really powerful thing.

The other thing that companies need is invoices, and not just invoices, but all of the other financial infrastructure that goes along with them. Companies are not used to using PayPal [00:22:30] donation links to pay for their office rent. They pay money to a bank account, and so this is something where open-source projects, again, often aren’t shaped very well to accept this money.

This is where I’m really excited to see platforms like Open Collective, Patreon, and GitHub Sponsors, which are providing the shared, common infrastructure for projects to offer the kinds of things that companies [00:23:00] need before they can give money.

The question now is, are we investing in stable foundations? Are we building stable, resilient foundations that are going to stand the test of time? To answer that question, we need to ask: compared to what? The main input to open-source projects is labor, people spending their time on them. I thought perhaps a good comparison [00:23:30] would be how much businesses are spending on Clojure developer salaries. To answer that we need two numbers: how many Clojure developers there are, and how much they’re getting paid.

Estimates vary from perhaps 20,000 to 50,000 working Clojure developers. We want to be conservative here, so we will just take 20,000. Then we need to figure out how much they’re getting paid. The Stack Overflow survey for 2019 put the average Clojure developer salary, worldwide, [00:24:00] at 90,000 U.S. dollars. Clojure’s been at the top for the last three years, which is pretty good, but I’ve heard people quibble with this number and say that maybe it’s a bit inflated or unrepresentative in some way, so let’s take it back a bit more to be really conservative and say $80,000 a year.

If you multiply these two numbers together, you end up with $1.6 billion a year being spent on Clojure developers. This doesn’t even include all [00:24:30] of the other expenses that go around Clojure that you need to run a business, and it doesn’t even include the full cost of hiring an employee, but I think this is a really good number to compare to and to think about.

If we were to spend a relatively small percentage of what we spend on Clojure developers on supporting the foundations, that could be a meaningful amount of money. Say, two and a half percent of $1.6 billion would be [00:25:00] $40 million a year. I can’t even really imagine quite what that would look like. It’s just such a long way from where we are currently. But what I’ve seen from funding projects, even at a very small level far below this, is that the return on investment has been incredible.
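[Ed: for reference, the arithmetic: 20,000 developers × $80,000/year = $1.6 billion/year, and 2.5% × $1.6 billion = $40 million/year.]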

I think we would see some really exciting things start to happen, even more than currently, [00:25:30] if we were able to even approach this level of funding, which itself is, I think, very small compared to the value that we as a community get from these open-source foundations.

We come back to this image of the houses on the water, and this image really appeals to me, because I see some similarities here with the Clojure community. These are not always the prettiest houses, and some of [00:26:00] those stilts down below look a little bit rickety, but as a whole, they come together to form a really solid, stable foundation. This is what I’m hoping we can build together for the Clojure community. To be clear, this was already happening well before Clojurists Together; this was a really strong community before that, and I’m excited to see where we can go in the future.

If this idea appeals to you, here are some things that you could [00:26:30] do.

I want to acknowledge many people here. Firstly, Heart of Clojure, Arne and Martin. They’ve put on a really great conference here, so I really appreciate them bringing me along to speak.

[00:28:30] I’d also like to thank Ruby Together, and André Arko in particular. Ruby Together is the organization that we modeled ourselves after. They’ve done a lot of trailblazing work here, and André Arko, in particular, has given us a lot of his time so that we can learn from some of the things that they’ve done.

I’d also like to thank the Software Freedom Conservancy. I saw someone here had a [00:29:00] Software Freedom Conservancy shirt on yesterday. The Software Freedom Conservancy, if you’re not aware, is our non-profit parent. They handle all of the accounting, payments, and other financial and legal things that companies need and that we need to have happen, but that we can rely on them to provide for us. They’ve done an amazing amount of work for us, and so I really appreciate them.

The next group of people I’d like to thank is [00:29:30] the projects that have applied. We have more projects apply every quarter than we can fund, and as I read over the applications, I’m really excited, because there are so many good projects out there. So I’d like to thank everyone who has applied.

I’d also like to thank the projects that we funded. At times in this talk, I might’ve slipped and said that we did this work, but I wasn’t programming on CIDER, Bozhidar was. So I’d like to thank the projects that we funded, who have done this work. I know that often they’ve taken [00:30:00] time off paid work, or taken holiday time. While we’ve been able to fund them to a certain level, I know that often still means a pay cut for them to be able to do this work, so I really appreciate that.

Also, I’d like to thank the rest of the Clojurists Together team. We have these board members.

On the left we have Maria Geller, Daniel Solano Gómez, Larry Staton Jr., Nola Stowe, Fumiko Hanreich, and Laurens Van Houtven. These are all current board members. We also have [00:30:30] board members from the past: Bridget Hillyer, Toby Crawley, Devin Walters, and Rachel Magruder. Rachel is our admin assistant. If you’re a Clojurists Together member, you might’ve got some stickers from us recently, and if you’re wondering why they came from Spain, it’s because Rachel, our admin assistant, lives in Spain.

I’d also like to thank our members. We’ve seen many of these names around the conference already: Pitch, Nubank, JUXT, Metosin, Adgoji, and Funding Circle. [00:31:00] These companies have all been a huge help to Clojurists Together, and have funded us in a really big way that has let us do all of the work that we’ve been able to do.

Also, I’d like to thank these other company members. I don’t have time to go over all of their names, but many of them will be familiar to you, and some of them are at the conference as well.

This is probably my favorite slide of the talk, because this is the names of the 200-plus developer members of [00:31:30] Clojurists Together. Yeah, there are far too many here to name them all. I tried to make the text bigger so that you could read the names, but then it just went on for slides, and slides, and slides.

I can’t name them all, but I’d really like to say thanks to the 204 developer members, the 34 company members, and also to the rest of the Clojure community.

The Clojure community is a [00:32:00] really special place. It’s warm. It’s very small, and I think we punch above our weight in terms of the impact that we can have, in terms of the tooling and libraries that we create. That’s in large part due to the contributions of all of the Clojure community coming together to build things for everybody. I think a good chunk of the Clojurists Together members are here at this conference, and so again, I especially want to say thanks to you for your support.

Staff

I’m thrilled to announce that I have finally attained the coveted blue GitHub “Staff” badge!

GitHub staff badge for @danielcompton

I’m joining the Social Coding team at GitHub as a product manager. For the last six years I’ve been writing Clojure professionally; product management is going to be quite different, but I’m looking forward to it. There are a few reasons why I’m looking forward to working at GitHub:

GitHub is the world’s most important social network for developers, and I’m thankful for the opportunity to help build it.

Deciphering IRD’s new acronyms - IIT return and ITN return

This tax season I got an email from IRD reminding me I had to file my return. This happens every year and is usually not a surprise. However, this year the returns were slightly different. They said:

The 31 March 2019 IIT return for {person} is due 8 July 2019.

and

The 31 March 2019 ITN return for {company} is due 8 July 2019.

I’d never heard of an IIT return or an ITN return. I searched around and couldn’t get any results on Google or IRD’s website search. However, my best guess at the acronyms is:

I think IRD has renamed these in their recent system upgrades and I suspect they’ve renamed all of their returns. If you come across any other acronyms, let me know and I’ll add them here to help others out.

Improving videoconferencing audio quality for remote workers

I’ve been working remotely full-time for about five years. Over that time I’ve talked with colleagues pretty much every work day, so being able to communicate clearly by audio has been crucial. Bad quality audio can quickly turn a good conversation into a frustrating one when you struggle to hear the speaker, or keep needing to ask them to repeat themselves. Videoconferencing is lower bandwidth than in-person communication, so I’ve tried to get the best quality I can when videoconferencing, to catch as much as possible of what the other person is communicating.

I studied as a musician, and accumulated several pieces of audio gear and knowledge about working with audio which have been very useful for working remotely. A friend asked me for advice about getting some audio gear so I sent him an email. That turned into a post on our company wiki, and now I’m posting it here publicly.

In my experience, for remote videoconferencing to work well, the following things are very beneficial:

Improving audio quality when video conferencing

Here are some tips in rough order of importance for improving audio quality when videoconferencing. You can keep going down the list with diminishing returns of audio quality, stop whenever you and/or your team is happy with the quality you’re getting.

Why Heart of Clojure was special

A few weeks ago I got to attend and speak at Heart of Clojure. I met lots of online friends in person for the first time, and made some new ones too. I’ve thought a lot about how to describe it since then, and every time I come back to the word special.

Others have also posted their thoughts on Heart of Clojure: Fork This Conference, The people’s conference, The hallway track conference, Community with lots of heart, and A courageous conference?.

Here were some things that I think made Heart of Clojure so special. If you’re running a conference, consider stealing some of these ideas.

Heart of Clojure was a very special event. I can’t imagine how much work it was for Arne Brasseur, Martin Klepsch, and all of the other helpers before, during, and after the event but it paid off. Heart of Clojure felt like a very polished event, not something being put on for the first time. Thanks to everyone who organised, helped out, spoke, sponsored, and attended the conference. I hope there is another Heart of Clojure in the future, if you get a chance to go, I highly recommend it.

What do :project/dev and :profiles/dev mean in a Leiningen project?

A few years ago I came across a Leiningen project that defined profiles for :project/dev, :profiles/dev, :project/test, and :profiles/test. It took me a little bit of digging, but eventually I discovered what was happening. This is a convention that I think originated with James Reeves. I’m reposting my issue comment here, so it can be more accessible for searchers.

If you see profiles like this in a project.clj, here is what is happening:

:dev  [:project/dev  :profiles/dev]
:test [:project/test :profiles/test]
:profiles/dev  {}
:profiles/test {}
:project/dev  { ... }
:project/test { ... }

Leiningen has a feature called Composite Profiles. On startup, if Leiningen sees a vector of keywords for a profile, it will look up each keyword as a profile and merge them together. For the :dev profile, Leiningen will merge the values of :project/dev and :profiles/dev.

If you want to add any custom settings or dependencies for your own use, you can place them into the :profiles/dev or :profiles/test profiles in your ~/.lein/profiles.clj. If either of these is set in the user’s profiles.clj, it will override the empty :profiles/dev map specified in the project.clj. You need an empty map for :profiles/dev {} in the project.clj, because otherwise Leiningen will complain about a missing profile.
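For example, here’s a minimal sketch of a ~/.lein/profiles.clj using this convention (the dependency and JVM option are hypothetical placeholders, not recommendations):

;; ~/.lein/profiles.clj
;; The top-level map is a map of profiles. These keys fill in the
;; :profiles/dev and :profiles/test lookups in the composite profiles above.
{:profiles/dev  {:dependencies [[hashp "0.2.2"]]} ; personal dev-only tools
 :profiles/test {:jvm-opts ["-Xmx2g"]}}           ; personal test settings

With this in place, running Leiningen with the :dev profile merges the project’s :project/dev with your personal :profiles/dev.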

Playing Apple Music from multiple devices on the same Apple ID with a family subscription

When I first started using Apple Music I signed up for a solo subscription. During the day, I would listen to music on my Mac, and my family would listen to music on the iPad. If we listened at the same time, we would get the error:

“Looks like you’re listening to music on another device.”

Eventually this error became too annoying and I looked at upgrading to a Family subscription. The Apple Music Family subscription allows up to six people that are part of the same Family Sharing group to play music at the same time.

It sounded like this was probably what I needed, but in all of the documentation I read, it wasn’t clear whether two devices that were logged into the same Apple ID would be able to play music at the same time. I talked with Apple’s support team and it still was unclear so I went ahead and upgraded to try it for myself.

After upgrading I tested playing on multiple devices, and it worked fine. If you have an Apple Music Family subscription, you’re able to play on multiple devices that are logged into the same Apple ID.

Announcing defn-spec, a library to create specs inline with your defn

I’m pleased to announce the initial release of defn-spec, a library to create specs inline with your defn.

[net.danielcompton/defn-spec-alpha "0.1.0"]

A quick peek at defn-spec:

(ds/defn to-zoned-dt :- ::zoned-date-time
  [instant :- ::instant
   zone-id :- ::zone-id]
  (ZonedDateTime/ofInstant instant zone-id))
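;; Assuming a namespace along these lines (a sketch; the ::instant,
;; ::zone-id, and ::zoned-date-time specs are defined elsewhere):
;; (ns my.ns
;;   (:require [net.danielcompton.defn-spec-alpha :as ds])
;;   (:import (java.time ZonedDateTime)))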

One of the features in Schema that I always appreciated was inline function schemas, using a schema.core/defn macro.

When spec was released it had many similarities to Schema, but one thing it didn’t have was a way of expressing specs inline with your function definition. Spec only supported defining function specs separately with fdef. This does have some advantages: it forces you to think carefully about changing your specs, and to be aware of possibly breaking consumers. While this is valuable, not all code is written under these constraints, and I found having fdefs separate from the function definition had a number of downsides.

When writing Clojure, I noticed that I often resisted writing specs for functions. After thinking about it, I realised that I didn’t want to duplicate information from the defn into the fdef. It’s not a huge deal, but it was enough to deter me from writing specs for code that was being heavily modified. This is a really useful time to have basic specs on your functions, so that you can catch refactorings gone wrong early.
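To illustrate the duplication, here’s a sketch of the separate-fdef style for the example function above (assuming the same specs and imports):

;; The argument names and order from the defn are repeated in the fdef,
;; and the two can silently drift apart as the function is refactored.
(require '[clojure.spec.alpha :as s])

(defn to-zoned-dt [instant zone-id]
  (ZonedDateTime/ofInstant instant zone-id))

(s/fdef to-zoned-dt
  :args (s/cat :instant ::instant
               :zone-id ::zone-id)
  :ret ::zoned-date-time)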

I created defn-spec to increase the locality of the spec definitions, and to reduce the activation energy to start adding specs to your codebase. defn-spec copies the syntax (and implementation) of Schema’s defn macro. This has the advantage of adopting a proven design, familiarity for many Clojurists, and the ability to work with existing tooling that understands the Schema defn macro.

Benefits and tradeoffs

Like all things in life, defn-spec has benefits and tradeoffs:

Benefits

Tradeoffs

This is similar to Orchestra’s defn-spec macro, but it allows you to optionally spec only part of the function, and it matches the well-known Schema defn syntax. Try them both out though, and see which one works best for you. defn-spec is still missing some features from the defn macro, like destructuring, but they are on the roadmap to be added soon. I’m releasing this early to get feedback from other users.

defn-spec will follow Spec’s release cycle, and there will be a new set of namespaces and artifacts for spec-alpha2 and beyond. If you have features/bug reports, feel free to post them on GitHub.

State of Clojure Survey 2019 Analysis

Cognitect have recently released the results of their State of Clojure Survey for 2019. For the last three Clojure surveys, I have reviewed the free-form answers at the end of the survey and tried to summarise how the community is feeling. This year I’m repeating the exercise, keeping the same categories as before. If you’d like to see all of the comments, I’ve published them in rough groupings.

Some comments have been lightly edited for spelling, clarity, and brevity.

Update: Alex Miller has posted a few responses to some of the comments I highlighted below, and some suggestions for the next steps that people can take to help.

Error messages

Error messages have been a top complaint in Clojure for a long time, I think since the very first survey. CLJ-2373 introduced improvements to error messages in Clojure 1.10. A number of comments complained about error messages this year, but none of the complaints mentioned 1.10’s changes. Given that 1.10 was released on 17 December 2018, and the survey ran from 7-22 January 2019, it seems likely to me that many of the people complaining haven’t tried the latest improvement to Clojure’s error messages.

Spec

Spec has been around for two and a half years, but is still in alpha. A number of comments referenced Rich’s recent Maybe Not talk about coming changes to Spec. It was great to see Rich putting into words some of the problems I had felt. It feels like the community is in a transitional phase with spec, where people are still not quite sure where things are going to land.

Docs

Documentation has continued to be a sore spot for beginners. API docstrings were often cited as being too hard to understand; instead, people preferred to see examples at clojuredocs.org. The documentation at clojure.org has grown this year, with a number of great guides added, particularly Programming at the REPL.

Startup time

Startup time has been a perennial complaint, and there haven’t been any major changes here.

Marketing/adoption/staffing

Clojure continues to grow in business adoption, but hiring developers has been one of the top three things preventing people from using Clojure for the last three years. I’ll add that one way companies could address the mismatched supply and demand of Clojure developers is to be more open to remote workers. Update: Alex Miller also suggested training people in Clojure, which I thought about when preparing the post but forgot to include in the final copy.

A new option in the survey this year was “Convincing coworkers/companies/clients”; it was ranked number one in preventing people from adopting Clojure. In my opinion, this is a significant issue to address if Clojure wants to grow its market share further. This probably needs more research to find out what the barriers to convincing people are. Elm is proof that exotic languages can be adopted widely; we should learn from what they’ve done.

Language

There were again more suggestions for improvements to existing language features, or development of new ones.

Language Development Process

2018 had a lot of discussions about Clojure’s development process, and the relationship between the core team and the community. I was very curious to see how widely those feelings were reflected in the free-form comments from respondents. After compliments (~230), this was the most common comment type, with ~70 responses (positive, negative, and in-between) on Clojure’s language development process (out of 563 total comments). While many people had issues with the development process, there were also many people who were supportive of how Clojure is currently developed.

Community

People seem to mostly enjoy the Clojure community, but others have had negative experiences, often citing an elitist attitude from some community members. Losing the history from Clojurians Slack came up several times. I’m not sure if everyone is aware of the Clojurians Slack Log?

Libraries

People are still looking for a curated on-ramp into Clojure web development, a ‘Rails for Clojure’. There are a number of frameworks and templates here, but they don’t seem to be hitting the spot. I’m not sure whether that is because of limited mindshare/marketing, limited documentation, or the scope and quality of the frameworks. Having used Clojure for many years, I no longer find it difficult to find and assemble the pieces that I need for projects, but I definitely remember finding this difficult when I started. Data science and ML were places where people saw a niche for Clojure. Several people hoped for improvements to core.async; it still has a number of rough edges.

Other compilation targets

People have been experimenting with the Graal compiler to produce static binaries without needing a JVM. At this point that seems like the strongest option for people wanting to use Clojure without the JVM. Better Android support was also requested.

Typing

For the past few years people have been less and less worried about types. I suspect this is mostly due to spec.

ClojureScript

Shadow CLJS was mentioned by many people as having been a great part of their workflow. For those not using Shadow, externs remain challenging.

Setup/tooling

Clojure’s tooling has continued to improve. This year lots of work has been done adding plugins or expanding functionality of the new official Clojure tools. clj on Windows was asked for by many people. Maintainer burnout seems like a significant risk to the Clojure tooling ecosystem.

Compliments

As always, the most common type of comment was compliments to Clojure and the core team.

Conclusion

Clojure is a great language, and people are very enthusiastic about it. Its adoption in business continues to grow. There are a number of areas for improvement still, especially if the Clojure community wants to grow further. Not all of this work needs to go through Core though, particularly in areas of documentation and guides, libraries, and tooling.

I worry about the Clojure community losing key contributors. There are a few linchpins holding a lot of things together; if we lose them, it will be hard to come back from. If you don’t want to see this happen then please support the libraries and tools you rely on by contributing code, documentation, issue triage, or money.

On Abstraction

Almost two years ago, there was a GitHub issue on reagent (a ClojureScript React wrapper), suggesting that Preact be added as a substitute for React. I wrote up a fairly long comment about why I didn’t think this was a great idea (at least not immediately). React’s recent announcement of the new Hooks feature made me think about it again. I’ve republished it here with a few edits for context and time.


Introduction

In principle, I’m not opposed to the idea of Reagent using Preact. It has some cool features and I like that it is small (although in comparison to the total compiled size of a normal CLJS app it’s probably a wash). If Preact worked 100% out of the box with Reagent with no code changes required then I would have no issues with someone swapping out the React dependency for a Preact one and calling it a day. If there are only a few minor tweaks to Reagent required to pass the tests, then again I don’t really have any issues with that. I suspect that even if you have 100% of the tests passing, there will still be issues, as Reagent was built around React, and may not have tests that would cover the difference in behaviour between React and Preact.

Abstraction

It looks like it may not be a pure lift and shift to support Preact. If that’s the case then we run into a bigger issue: abstraction. Reagent was written and built around the ideas and in the context of React. There are assumptions (probably tens or hundreds) built around React’s API and possibly implementation details too.

Adding abstraction adds a large cost because you can no longer program against a concrete API and implementation; you now have to consider two. There are three numbers in computer science: 0, 1, and many. We would be moving from 1 to many, and that takes work.

An aside: recently at work, we were looking at moving a legacy system from only supporting dates in the past to also supporting dates in the future. This should be straightforward, right? We talked to the programmers responsible for it, and they couldn’t guarantee that it would work, nor say whether supporting future dates would be easy or hard. In the building of that (or any) system, hundreds of simplifying assumptions are made around the context that the system is going to be built in.

“It is a very common pattern to have different backends and I don’t see any downsides to it.”

I can’t think of a single example of a system with multiple backends that didn’t have any downsides to it, e.g. ORMs, HTML/CSS/JS, Java. There may be some, but they would be the exceptions that prove the rule. Everything has a cost, the question is whether there is a benefit that outweighs the cost. It is much harder to remove something from software than to add it, which is why we should be certain that the benefits outweigh the costs.

While Preact strives to be API-compatible with React, portions of the interface are intentionally not included. The most noteworthy of these is createClass() … https://preactjs.com/guide/switching-to-preact#3-update-any-legacy-code

Reagent currently uses createClass. There are workaround options provided, but this is an example of some of the API differences between React and Preact which you need extra compatibility layers to support. Do we know if the compatibility layer works 100% correctly?

A possible future if Preact support is merged now

As a thought experiment, let’s assume that Preact is in Reagent with some kind of compatibility shim. Preact already has several performance optimisations that people can take advantage of:

customizable update batching, optional async rendering, DOM recycling and optimized event handling via Linked State. - (from Preact homepage)

Wouldn’t you want to be able to take advantage of those in your application? I certainly would. Now to do so, you may run into issues because the compatibility shim layer that was written was encoded around default assumptions of React, and they may not apply to Preact. Do we have to rework the shim layer, or lower level Reagent API stuff? Who is going to do that work? Who is going to review it and merge it?

Let’s consider the reverse. Perhaps in some new React version, Facebook comes out with a new API which is faster or better suited to Reagent’s style of rendering, so we want to switch to that [since writing this they came out with Hooks. These may be added to Preact also, but this isn’t certain and neither is the time-frame]. However that new model may not work with Preact. Again, we’re in a bit of a pickle: Preact users want to be carried along with Reagent and get the benefits of new Reagent work, but it may not be easy or possible to support the new API for them. Now what?

Consider everyday development on Reagent. Reagent’s source code is built around a very detailed understanding of React and is highly optimised. If Preact was supported too, then developers would probably need to gain an understanding of Preact too.

At the moment, Preact has one main contributor, and it has been around for 1.5 years. React has many contributors; I’d estimate there are 100+ people with very deep knowledge of React. It’s been around (in public form) for 3.5 years. In general, the JavaScript community does not have a reputation for long-term support of projects. What happens if development slows or stops on Preact and the compatibility layer isn’t kept up to date? It is much harder to remove something than it is to add it. Who decides when/if to remove Preact from Reagent at a future date?

These are all hypotheticals, but I hope this demonstrates that the extra abstraction provided by supporting two VDOM layers doesn’t come for free. At the very least, it consumes extra brain cycles when testing and developing Reagent, extra support and documentation costs from users wanting to use one or the other, as well as extra indirection when running and debugging apps using Reagent.

The Innovator’s Dilemma

If you haven’t already, I highly recommend reading “The Innovator’s Dilemma” by Clayton Christensen. One of the key points he makes in that book is the difference between integrated and modular products, and when to develop each kind.

CHRISTENSEN: When the functionality of a product or service overshoots what customers can use, it changes the way companies have to compete. When the product isn’t yet good enough, the way you compete is by making better products. In order to make better products, the architecture of the product has to be interdependent and proprietary in character.

In the early years of the mainframe computer, for example, you could not have existed as an independent contract manufacturer of mainframe computers, because the way they were made depended upon the art that was employed in the design. The way you designed them depended upon the art that you would employ in manufacturing. There were no rules of design for manufacturing.

Similarly, you could not have existed as an independent maker of logic circuitry or operating systems or core memory because the design of those subsystems was interdependent. The reason for the interdependence was that the product wasn’t good enough. In every product generation, the engineers were compelled by competition to fit the pieces of the system together in a more efficient way to wring the maximum performance possible out of the technology that was available at the time. This meant that you had to do everything in order to do anything. When the way you compete is to make better products, there is a big competitive advantage to being integrated. … In order to compete in that way, to be fast and flexible and responsive, the architecture of the product has to evolve toward modularity. Then, because the functionality is more than good enough, you can afford to have standard interfaces; you can trade off performance to get the advantages of speed and flexibility. These standard interfaces then enable independent providers of pieces of the system to thrive, and the industry comes to be dominated by a population of specialized firms rather than integrated companies.

I would argue that we are still very much at the point where the current VDOM libraries aren’t good enough yet. They aren’t yet ready to be commoditised, and the best option is to tightly integrate.

Options from here (with some conjecture)

  1. Someone can make a PR to Reagent to add support for Preact. It will probably take a while to get merged because it is a significant change. Once it is merged and released, there will probably need to be several rounds of revisions before it is ready to go. Because Reagent moves relatively slowly, this will take a while.

    Reagent also has a large number of production users, so new releases need to be well tested and stable. Adding Preact to the mix is going to slow this down further.

  2. Someone can make a fork of Reagent (let’s say it’s called Preagent). You can experiment freely with the best way to use Preact in Preagent, take advantage of all of the great features Preact has, and have a much faster turnaround time for releasing and using it. You will be able to work out the right API and integration points for Preact because you have room to experiment, without the weight and responsibility of bringing the rest of the Reagent users along with you.

    At some point in the future, you could review merging Preagent back into Reagent, given all that you now know. You would also have the weight of evidence on your side where you can demonstrate the benefits of Preact and can show how many users want Preact. This would let you make a much better case for including Preact, give you what you want in the meantime, and likely provide a higher quality integration in the future.

    Alternatively, you may decide that Preagent is better served going its own way and integrating more closely with Preact. This is also a good option.

Abstraction is not free

The point I have been trying to drive home throughout this post is that abstraction is not free. Over-abstraction is a common anti-pattern, and it saps productivity. I had a friend who recently left a Clojure job and started a Java one. He quipped to me about how he’d forgotten what it was like to trace code through five layers of abstraction to get to the concrete implementation. As programmers, we’re trained to solve problems by adding layers of abstraction, but that isn’t always the best way to solve the problem.

Lastly, this isn’t a personal attack on you or your ideas. I’m all for innovation in ClojureScript webapps and I think that it is worth investigating Preact and how it could work in the ClojureScript ecosystem 😄. I’m not against Preact. I would consider using it if there was a measurable benefit. I’m just suggesting that the best way to go about this is probably not to integrate it into Reagent as the first step.

Announcing The REPL podcast

I sent the first newsletter for The REPL two years ago, and I have really enjoyed writing it. I’ve learnt a lot from what people have written and created to share with the Clojure community. Sometimes when I’m writing the newsletter I’ve thought “That’s fascinating, I’d love to hear more about the technical details of that”. Now I have an outlet for doing just that.

I first had the idea of producing a Clojure podcast around January 2018. I didn’t have the time for it then, but the idea kept swirling around the back of my brain. When Michael Drogalis’s Pyrostore was acquired, I wanted to hear more about the acquisition, so I contacted Michael to see if he wanted to talk about it. That became the first episode.

My goal for The REPL is to talk with people about the technical details of the projects they are working on. The Clojure community has a ton of interesting people doing really creative work with Clojure. I’ve really enjoyed talking with a bunch of people already, and I’m looking forward to talking to more of them in the future. You can find it at therepl.net. It’s available on Apple Podcasts, via RSS, and in all of the other common podcasting apps.

State of Clojure Survey 2018 Analysis

Cognitect has recently released the results of their State of Clojure Survey for 2018. For the last two Clojure surveys, I have reviewed the free-form answers at the end of the survey and tried to summarise the zeitgeist of community feeling. I enjoy it each time, and so here’s this year’s analysis.

Some comments have been lightly edited for spelling, clarity, and brevity.

Error messages

Error messages have been one of the top complaints about Clojure since the surveys started, and this year they have gotten worse with the introduction of specs on core macros. I’m not the only one who has noticed this either. In 2015 Colin Fleming gave a talk on improving Clojure’s error messages with grammars. In previous surveys there was hope that spec would be able to use this approach to improve error messages; Ben Brinckerhoff has recently shown a proof-of-concept in Expound that gives similar (excellent) error messages.

Spec

Comments ranged from loving spec to disliking it. There were also requests for more guidance on how to apply spec to real-world use-cases and how to integrate it into whole programs.

Docs

Documentation in all its forms has been another perennial complaint, but this is starting to improve. clojure.org continues to add more guides, this year on Programming at the REPL and Higher Order Functions, among other things. Martin Klepsch’s cljdoc holds a lot of promise for improving the state of third-party documentation by automatically generating and publishing docs for published JARs.

Startup time

Startup time continues to be an issue for people. As serverless computing becomes more mainstream, this may end up pushing people from Clojure to ClojureScript, or elsewhere.

Marketing/adoption/staffing

There were lots of comments this year in this category. Even after 10 years, Clojure is still unknown to many people, or seen as extremely niche, especially among business stakeholders. Hiring seems to be less of an issue than in years past. There is still a large contingent of programmers looking to work with Clojure in remote positions, although it’s hard to say whether this proportion is any higher than in other language communities.

Language

Language Development Process

This hasn’t changed in a long time, and I don’t see it changing in the future, but it has been persistently highlighted by many people as a big issue.

Community

By and large, people found the community welcoming, and personally I have found it to be one of the best things about working with Clojure. However, there is a persistent undercurrent of elitism from some Clojure programmers which really puts people off the language.

Libraries

More guidance on how to select and compose libraries remains a common issue for people in this section, as does improving the state of library documentation.

Other targets

As in previous years, there was a smattering of people asking for alternative platforms to target other than JS and the JVM. LLVM, Go, and WebAssembly all attracted interest.

Typing

Requests for static typing were down from previous years. I attribute that mostly to spec gaining wider adoption and understanding.

ClojureScript

ClojureScript continues to improve, although getting the tooling set up remains a pain point, as does integrating NPM modules. There are recent improvements on that front though, so this may not be such an issue in 12 months’ time. I have heard lots of good things about shadow-cljs; it seems to be a strong tool to investigate for ClojureScript usage, especially if you work with a lot of NPM modules.

Setup/tooling

Tooling on the Clojure side seems to be somewhat less of an issue this year, but setting up ClojureScript projects with all the things that you need remains an issue for many people.

Compliments

As always, the survey was dominated by positive comments. Here are some of my favourites.

Conclusion

When I went to write this year’s summary, I re-read the previous posts to remind myself of where we were a few years ago. While some areas have improved, I was struck by how many of the comments could have been lifted from previous years’ survey results. There was a worrying trend in the community and adoption sections that people perceive Clojure to be shrinking. Perception doesn’t always match reality, and it may be (as a commenter noted) that Clojure is entering the trough of disillusionment phase of the Gartner Hype Cycle. However, I think there is a lot of low-hanging fruit that could significantly improve the experience and ecosystem around Clojure without changes to the core language.

How to serve ClojureScript files in development

When I develop ClojureScript projects, I almost always use Figwheel. It’s a great tool, but sometimes my app ended up using stale files. This led to some very confusing debugging sessions. It only happened some of the time, and was always fixed after a hard refresh. I thought about just disabling the browser cache, but I didn’t like ignoring the issue. After seeing colleagues struggle with stale caching too, I decided to figure out what was going on, and fix it once and for all.

Cache-Control rules everything around me

The first thing to do was to add a Cache-Control: no-cache header to all static file responses. Despite the name, no-cache tells the browser that it can cache files, but that it must always validate them with the server before using them. If the browser’s cached version is up-to-date, a compliant HTTP server should return a 304 Not Modified response; otherwise it serves the new file.

If you don’t provide a caching header to an HTTP response, the browser can choose its own caching behaviour. The browser’s caching heuristics are much more aggressive than you want in development, and lead to the weird caching behaviour I was seeing.
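
In a Ring application, adding this header only takes a few lines of middleware. Here’s a minimal sketch (wrap-no-cache is a made-up name, not something from a library):

(defn wrap-no-cache
  "Adds Cache-Control: no-cache to every response, so the browser
  always revalidates with the server before using its cached copy."
  [handler]
  (fn [request]
    (when-let [response (handler request)]
      (assoc-in response [:headers "Cache-Control"] "no-cache"))))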

I thought this had fixed the issue, but occasionally I would still notice stale files were being used. After looking closely at the compiled output files, I made a surprising discovery.

ClojureScript copies file modification times

ClojureScript (as of March 2018) copies the last-modified date of ClojureScript source files to the compiled JavaScript target files. This is so that the compiler can detect changes to source files. JavaScript from the Closure compiler (e.g. goog.base) gets a modification time that matches the time it was compiled.

Neither of these dates is particularly useful as a Last-Modified header for caching purposes.

To avoid these issues, I recommend removing the Last-Modified header from the response when in development.
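
This is another couple of lines of Ring middleware. Again, a sketch with a made-up name:

(defn wrap-remove-last-modified
  "Strips the Last-Modified header, whose date comes from the copied
  source-file timestamps and is misleading for caching."
  [handler]
  (fn [request]
    (when-let [response (handler request)]
      (update response :headers dissoc "Last-Modified"))))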

ETags

To knock both problems on the head once and for all (hopefully), I added a CRC32-checksum-based ETag to static file responses. I packaged this up in a library, ring-etag-middleware, so that other projects could also use it.

Serve 304 Not Modified responses

At this point the browser will check with the server for every ClojureScript file on every page load. However, without one final piece the server will still send the full file in response to every check, even if the file hasn’t changed. The last step is to add Ring’s ring.middleware.not-modified/wrap-not-modified middleware. This returns a “304 Not Modified” response if the ETag provided in the If-None-Match request header matches the ETag header in the response.
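
Putting the pieces together, a development middleware stack looks roughly like this. It’s a sketch: static-file-handler stands in for whatever serves your compiled files, and I’m assuming ring-etag-middleware exposes a wrap-file-etag function in the co.deps.ring-etag-middleware namespace (check the library’s README for the exact names). Ring responses flow from the innermost middleware outwards, so the ETag is added before wrap-not-modified inspects the response:

(require '[co.deps.ring-etag-middleware :refer [wrap-file-etag]]
         '[ring.middleware.not-modified :refer [wrap-not-modified]])

(def dev-handler
  (-> static-file-handler        ;; whatever serves your compiled files
      wrap-file-etag             ;; add checksum-based ETags
      wrap-remove-last-modified  ;; drop the misleading dates
      wrap-not-modified          ;; reply 304 when the ETag matches
      wrap-no-cache))            ;; make the browser always revalidate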

Summary

As best as I can tell, this has completely solved all of the odd caching issues that I was seeing, while still keeping the app snappy to load by reusing as much of the cache as possible. If you are serving ClojureScript files in development and not using Figwheel, I recommend you follow these four steps:

  1. Set a Cache-Control: no-cache header
  2. Add an ETag to your static file responses
  3. Remove the Last-Modified header
  4. Wrap your responses in ring.middleware.not-modified/wrap-not-modified or the equivalent in your Clojure web framework.

Adding Context to CockroachDB’s Article “Your Database Should Work Like a CDN”

I was excited this morning when I checked Hacker News and saw an article from Cockroach Labs titled “Your Database Should Work Like a CDN”. I’m a big fan of CockroachDB and enjoy their blog posts. I clicked on this one, but came away disappointed. I don’t think it fairly contrasted CockroachDB with its competitors, and it made some incomplete arguments instead of selling CockroachDB on its merits.

In this article I will analyse sections of the post and add more context where I think some was left out.

Analysis

Availability

To maximize the value of their services, companies and their CTOs chase down the elusive Five Nines of uptime

Only companies with sophisticated operations teams can seriously set an SLA of five nines. It’s doable, but it comes with a heavy lift if your service does anything non-trivial. It’s certainly not the default position in the industry, as far as I can tell. The Google SRE book has a good section on this.

For many companies the cost of moving from a single region deployment to a multi-region one is too great and doesn’t provide enough benefits. Some services don’t actually need to be available to five nines, and if not done well, moving to multi-region deployments may make your system less fault-tolerant, not more.

This is particularly crucial for your customer’s data. If your data’s only located in a single region and it goes down, you are faced with a “non-zero RPO”, meaning you will simply lose all transactions committed after your last backup.

If a whole region dropped into a sinkhole in the earth, then you would lose all transactions after your last backup. However, that’s not a failure scenario that we tend to worry about (although maybe we should?). Every time a major cloud provider has had a zone/region go down, data was partially/wholly unavailable during the outage, but no committed data was lost once service was restored (that I know of?).

…and Bureaucracy?

This impending legislation requires that businesses receive explicit consent from EU users before storing or even processing their data outside the EU. If the user declines? Their data must always (and only) reside within the EU. If you’re caught not complying to GDPR? You’ll face fines of either 4% of annual global turnover or €20 Million, whichever is greater.

I’m not a GDPR expert, but this seems like an incomplete summary of the GDPR rules around processing, especially as the processing rule applies only to the processing of an EU user’s personal data (AFAICT), and it conflates requiring consent for storing/processing personal data with storing the data outside of the EU. From Article 6:

Processing shall be lawful only if and to the extent that at least one of the following applies:

a) the data subject has given consent to the processing of his or her personal data for one or more specific purposes;

b) processing is necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract;

c-f) [Public/government interest exceptions]

If you have entered into a contract with the data subject, and that data is necessary for the contract, then I think you are fine (IANAL). As far as I can tell, transfers to Third Countries are OK as long as there are appropriate safeguards.

This is the part of my critique that I am least confident about; I’d welcome links confirming or rebutting it.

When you take GDPR in the context of the existing Chinese and Russian data privacy laws––which require you to keep their citizen’s data housed within their countries […]

If you follow the links in the article, you see this for China:

Does the Cybersecurity Law require my company to keep certain data in China?

[…] To that end, the Cybersecurity Law requires “critical information infrastructure” providers to store “personal information” and “important data” within China unless their business requires them to store data overseas and they have passed a security assessment. At this point, it remains unclear what qualifies as “important data,” although its inclusion in the text of the law alongside “personal data” means that it likely refers to non-personal data. […]

“Critical Information Infrastructure” providers are defined a bit more narrowly, but the law still casts a fairly wide net. […] the law names information services, transportation, water resources, and public services, among other service providers, as examples.

and this for Russia:

3.6 Generally, to transfer personal data outside the Russian Federation, the operator will have to make sure, prior to such transfer, that the rights of personal data subjects will enjoy adequate and sufficient protection in the country of destination.

Some companies will need to store data on Chinese or Russian servers, but the laws here are far narrower than “if you have any data from a Chinese or Russian person or company, you must store it in their country”.

Managed & Cloud Databases

Managed and Cloud databases often tout their survivability because they run in “multiple zones.” This often leads users to believe that a cloud database that runs in multiple availability zones can also be distributed across the globe.

It might be an assumption you would make if you have no experience with databases, but it’s not one that I’ve ever seen a software engineer make.

There are caveats to this, of course. For example, with Amazon RDS, you can create read-only replicas that cross regions, but this risks introducing anomalies because of asynchronous replication: and anomalies can equal millions of dollars in lost revenue or fines if you’re audited.

I’m not sure how asynchronous replication lag could result in failing an audit and incurring millions of dollars of fines. I spent a few minutes trying to come up with a scenario and couldn’t. Losing revenue from users also seems speculative; I’m not really clear how this would happen.

Designing a system to run across multiple regions with asynchronous replication is certainly not trivial, but people do it every day. If they were losing millions of dollars from it, they would probably stop.

In addition, this forces all writes to travel to the primary copy of your data. This means, for example, you have to choose between not complying with GDPR or placing your primary replica in the EU, providing poor experiences for non-EU users.

Again, I don’t think GDPR requires this.

NoSQL

For example, NoSQL databases suffer from split-brain during partitions (i.e. availability events), with data that is impossible to reconcile. When partitions heal, you might have to make ugly decisions: which version of your customer’s data do you choose to discard? If two partitions received updates, it’s a lose-lose situation.

This paints all NoSQL databases with a broad brush. While some NoSQL databases are (were?) notorious for losing data, that is not inherent to NoSQL databases.

Certainly, if you are using an AP NoSQL database, you need to design your application to correctly handle conflicts, use CRDTs, or make idempotent writes. Partitions do happen, and it’s not trivial to handle them correctly, but neither is it the case that you always need to discard your customers’ data.

Sharded Relational Databases

Sharded Relational databases come in many shapes and suffer from as many different types of ailments when deployed across regions: some sacrifice replication and availability for consistency, some do the opposite.

I assume the author is referring to systems like Citus. I don’t have enough experience with systems like this to judge the assertion, but this seems fair.

Conclusion

If you do need more reliability/availability than is possible from a single region, then Cockroach is a strong option to consider. I think a far better argument for CockroachDB and against NoSQL, Replicated SQL, and Sharded Relational DBs is minimising complexity and developer time. It is possible for developers to design their application for the nuances of each of these databases, but it’s certainly not easy or cheap, especially if you want it to be correct under failure. The reason Google created Spanner (the inspiration for Cockroach) was that developers found it hard to build reliable systems with weak consistency models.

[…] It is much easier to reason about and write software for a database that supports strong consistency than for a database that only supports row-level consistency, entity-level consistency, or has no consistency guarantees at all. - Life of Cloud Spanner Reads & Writes

CockroachDB provides consistent reads and writes, supports SQL, and can be deployed multi-region in any datacenter. That is a compelling set of features. If your application can handle the latency and performance tradeoffs that it makes (which are getting better all the time), then it will let you write software against a consistent datastore without spending as much time reasoning about hard consistency problems. CockroachDB is a great product, and I think it stands well on its merits.

How to upgrade Terraform provider plugins and modules

Since Terraform v0.10, Terraform providers are distributed separately from the Terraform binary. This lets them update at different paces, and allows a wider group of people to collaborate on the providers. This is mostly good, but it does introduce a new step for upgrading providers. It is slightly counterintuitive, but to upgrade your providers, run

terraform init -upgrade

To upgrade your modules, run

terraform get -update

Fixing Ansible error: “SSH Error: data could not be sent to remote host "127.0.0.1". Make sure this host can be reached over ssh”

I was building VM images for Google Cloud with Packer, and provisioning them with Ansible. Everything had been working in the morning, but in the afternoon one computer stopped working after I upgraded Ansible with Homebrew. I was having a really tough time figuring out why Ansible and Packer ran fine on one computer but not on the other. I was getting the following error:

googlecompute: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to remote host \"127.0.0.1\". Make sure this host can be reached over ssh", "unreachable": true}

Investigating the error, I found it could indicate any number of problems, as Ansible is masking the underlying SSH error.

After checking the versions of Python, Ansible, and Packer on the two computers, I found one difference. On the computer that wasn’t working, ansible --version listed a config file:

$ ansible --version
ansible 2.3.2.0
  config file = /Users/<user>/.ansible.cfg
  configured module search path = Default w/o overrides
  python version = 2.7.13 (default, Dec 18 2016, 07:03:39) [GCC 4.2.1 Compatible Apple LLVM 8.0.0 (clang-800.0.42.1)]

Opening that file:

$ cat ~/.ansible.cfg
[ssh_connection]
pipelining = True

I moved that file to another location so it wouldn’t be picked up by Ansible. After rerunning Packer, I got this error:

googlecompute: fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '[127.0.0.1]:62684' (RSA) to the list of known hosts.\r\nReceived disconnect from 127.0.0.1 port 62684:2: too many authentication failures\r\nAuthentication failed.\r\n", "unreachable": true}

I then used the extra_arguments option for the Ansible provisioner to pass [ "-vvvv" ] to Ansible. I ran this on both computers and diffed the output. On the working computer, the dynamically generated key was offered successfully (after one other local key). On the failing computer, many other SSH keys were tried before the dynamic key, and I was getting locked out.
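
For reference, the relevant provisioner section of a Packer template looks something like this (the playbook path is made up):

"provisioners": [
  {
    "type": "ansible",
    "playbook_file": "./playbook.yml",
    "extra_arguments": ["-vvvv"]
  }
]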

SSH servers only allow a certain number of authentication attempts (six by default in OpenSSH). All of your loaded keys will be tried before the dynamically generated key provided to Ansible. If you have too many SSH keys loaded in your ssh-agent, the Ansible provisioner may fail authentication before it ever offers the right key.

Running ssh-add -D unloaded all of the keys from my ssh-agent, and meant that the dynamic key Packer was generating was provided first.

I hope this is helpful to someone else, and saves you from hours of debugging!

Postscript

I was very confused by seeing that my computer was trying to connect to 127.0.0.1, instead of the Google Cloud Platform VM. My best guess is that Packer/Google Cloud SDK proxies the connection from my computer to the VM.

Detecting the user’s time zone using pure JavaScript

While working on Deps I wanted to detect the user’s time zone at signup, to localise the times that I presented to them. I hunted around and found a variety of libraries that offer time zone detection, like moment.js and jstz. However, I wasn’t keen on pulling in a dependency on one of these libraries if I could help it, so I kept looking. I also considered trying to detect the user’s time zone from their IP address using something like the MaxMind GeoIP database, and that probably would have been the method I settled on.

However, in my research, just before I started on IP-based detection, I found that there is a newish Internationalization API which will give you the user’s system time zone.

Intl.DateTimeFormat().resolvedOptions().timeZone
=> "Pacific/Auckland"

All current browsers support the Internationalization API, including IE11. However, there is a slight wrinkle to keep in mind when using it: on some older browser versions that do support Internationalization (IE <=11 and Firefox <=51), the timeZone property is set to undefined rather than the user’s actual time zone. As best as I can tell, at the time of writing (July 2017) all current browsers except for IE11 will return the user’s actual time zone. If you do need to support older browser environments, you could look at moment.js. It uses the newer Internationalization API when it is available, and falls back to other heuristics when it isn’t. For my purposes, my browser statistics show almost no IE usage, so I chose to just use the newer method, while also allowing users to manually update their time zone if they need to.
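
If you happen to be calling this from ClojureScript, the equivalent interop is a one-liner. A sketch, with detect-time-zone being my own name for it:

(defn detect-time-zone
  "Returns the IANA time zone name, e.g. \"Pacific/Auckland\".
  On the older browsers mentioned above this returns js/undefined."
  []
  (.. (js/Intl.DateTimeFormat) resolvedOptions -timeZone))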

Bryan Cantrill on Integrity

Bryan Cantrill on integrity. Edited slightly for clarity and profanity:

We are on a collision course with the Amazon Principles […] What is going on? I’m losing my mind. The wheels are off, everyone has to get out of the car, and this is why. All of these [expletive] leadership principles, from all these organisations, where is integrity? Damn it, where’s integrity? Amazon has 14 leadership principles and integrity is not on the list. [That’s] inexcusable. I’m sorry, if you’ve got one principle in your organisation, it’s integrity? Right? […] No, we’re living in a world that has lost its [expletive] mind, I don’t understand it. Why the appetite for territory? Do you not know where your next meal is coming from? Do you not have a roof over your head? I mean I’m sorry, I just don’t get it. We have got an incredible luxury. There’s never been a labour market like this one. Where have we so screwed up with a generation, if not a society, where we’ve got people who are so extrinsically motivated? What the [expletive] is wrong with us? I mean it’s like, how is integrity not the only thing you have?

[Andrew Clay Shafer]: Are we doing the full critique of capitalism today?

No it’s not. Bullshit, it’s not capitalism. The finest capitalist is Scott McNealy, a capitalist so pure that when he was being devoured by Oracle he believed that it was his duty to capitalism to die, and I admire that. There is no purer a capitalist than Scott McNealy. And read McNealy’s final email to Sun employees, and it’s almost prescient, McNealy says: “You know what, in 30 years I never had to hide the newspaper from my children.” An achievement that Uber violated on like month two? Where are Travis’ parents? Are you not humiliated? How did you raise him to be so divorced from what really, actually, truly matters? What the living [expletive] is wrong with us? Maybe it’s me?

I continue to be impressed by Joyent and Bryan Cantrill’s engineering principles. It sounds like a great place to do your best work.

Recorded at GOTO 2017 for an Arrested Devops episode: Old Geeks Yell At Cloud