Last week there was a post on the clojure-dev mailing list noting that several of the new predicates in the Clojure 1.9 alphas return true or nil. The three new predicates are qualified-keyword?, qualified-ident?, and qualified-symbol?. Previously, all predicate functions in Clojure core (i.e. functions whose names end in ?) returned true or false. In this post I want to examine the pros and cons of this new behaviour.
If I’ve missed anything here, please get in contact and I’ll update the post.
Pros to some predicates returning nil
The main argument for some predicates returning nil is that the predicate still returns a falsy value. Idiomatic Clojure code usually doesn’t need to distinguish between nil and false.
The docstring for qualified-keyword?, for example, says: “Return true if x is a keyword with a namespace”. It doesn’t say anything about what happens if x doesn’t have a namespace, so returning nil or false are both technically valid interpretations.
Not coercing the output from qualified-keyword? into a boolean is faster. On my computer qualified-keyword? takes roughly 3.5 ns to run, and a version that returns a boolean value takes around 6.5 ns. In absolute terms they are both pretty small though.
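For comparison, the boolean-returning variant I benchmarked looks roughly like this (the name and exact form are mine, not core’s):

```clojure
;; A sketch of a boolean-returning variant: wrap the truthy
;; result in `boolean` so callers always get true/false.
(defn qualified-keyword-bool? [x]
  (boolean (and (keyword? x) (namespace x))))

(qualified-keyword-bool? :foo/bar) ;=> true
(qualified-keyword-bool? :foo)     ;=> false
```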
The first two points aren’t strong arguments for returning nil, rather they argue that it doesn’t matter whether the functions return nil or false.
Cons to some predicates returning nil
The biggest downside to this change is that it breaks a core convention that Clojure and the community have held around predicates, namely that any function ending in ? returns true or false. This is my biggest concern about the changes. You can find this expectation in:
Programming Clojure: “A predicate is a function that returns either true or false. In Clojure, it is idiomatic to name predicates with a trailing question mark, for example true?, false?, nil?, and zero?.” - Programming Clojure, 2nd Ed., Page 27.
Clojure’s own library coding standards: “Use ‘?’ suffix for predicates. N.B. - predicates return booleans”. That page also has the disclaimer: “Rules are made to be broken. Know the standards, but do not treat them as absolutes.”
The community Clojure Style Guide: “The names of predicate methods (methods that return a boolean value) should end in a question mark (e.g., even?).”
All of the Clojure standard library functions that end in a ? return a boolean value. Anyone learning Clojure would be justified in assuming that all functions that end in ? return a boolean value if that’s the convention they have always seen.
A mailing list thread about this exact question, whether all Clojure functions that end in ? should return a boolean value.
Tutorials Point: “Predicates are functions that evaluate a condition and provide a value of either true or false.”
StackOverflow on naming rules: “The main function naming conventions seem to be … Use ? to indicate a predicate that returns true or false: sequential?”
The second biggest problem is that qualified-keyword? can return three different values. “How is that?” you might be asking, “Didn’t you just say that the functions return true or nil?” I thought so too. However, as I was looking at the implementation of these functions, I realised that qualified-keyword? can also return false if it isn’t given a keyword. For example:
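(These results are from a 1.9 alpha REPL; later releases may coerce the result to a boolean.)

```clojure
;; 1.9 alpha behaviour, effectively (and (keyword? x) (namespace x) true):
(qualified-keyword? :foo/bar) ;=> true   (qualified keyword)
(qualified-keyword? :foo)     ;=> nil    (keyword without a namespace)
(qualified-keyword? "foo")    ;=> false  (not a keyword at all)
```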
It is fairly common to use group-by or juxt with a predicate function to partition a collection into two groups. If you use any of the new qualified* functions, you will end up with three groups. Based on Alex Miller’s tweet, it seems like this was considered and accepted.
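A sketch of the three-bucket outcome under the alpha behaviour (the exact keys and their order may differ in later releases):

```clojure
;; group-by keys on the raw return values: true, nil, and false
(group-by qualified-keyword? [:a/b :c/d :e "f"])
;=> {true [:a/b :c/d], nil [:e], false ["f"]}
```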
If these predicates are used at the boundaries of systems or in interop with Java, then it would be easy to forget that the qualified* predicates may return nil, and end up passing null into a Java method as a parameter when you meant to pass false. You could also return false under certain input when you expected the function to only return true or nil, or return nil when you were expecting the result to be a boolean. These kinds of bugs can be very subtle, especially with Clojure’s falsy handling.
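To make the interop risk concrete, here is a hypothetical sketch (SomeWidget and .setEnabled are invented for illustration): a Java method taking a primitive boolean will throw a NullPointerException when nil is unboxed.

```clojure
;; Hypothetical interop example. SomeWidget and .setEnabled
;; are invented for illustration.
(defn enable-if-qualified! [^SomeWidget widget x]
  ;; If x is an unqualified keyword, qualified-keyword? (in the
  ;; 1.9 alphas) returns nil, and unboxing nil to a primitive
  ;; boolean throws a NullPointerException at the call site.
  (.setEnabled widget (qualified-keyword? x)))
```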
Clojure gets similar benefits from the JVM when it can mark the function as returning a Boolean.
Historically, Clojure has made a distinction between functions that return truthy and those that return true. There are predicate functions in core that return a truthy value like some and every-pred, but they don’t end in a ?. This change starts to dissolve that distinction.
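For example, some returns whatever truthy value it finds, while the ?-suffixed predicates have conventionally returned booleans:

```clojure
;; some returns the first truthy result, not true:
(some #{2 4} [1 2 3]) ;=> 2
;; whereas ?-suffixed predicates return booleans:
(even? 2) ;=> true
```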
Personally, I don’t see a strong reason to keep the current behaviour of the new qualified predicate functions, but you might see things differently. Whatever the outcome, I’m glad that in Clojure we don’t have to name all of our boolean functions with an “Eh” suffix.
… no funding appears to have been taken from Uber, or any other company (If there turn out to be payments from Uber to Otto prior to the acquisition then that would be Very Bad for Uber).
Not long after, Johana Bhuiyan at Recode posted a story which suggests that Uber CEO Travis Kalanick hired Otto CEO Anthony Levandowski as a consultant for Uber’s self-driving car effort around February-March 2016. Even after reading the article several times and sleeping on it, I still can’t quite believe it.
Now, sources are adding more to that claim. Kalanick hired Levandowski as a consultant to Uber’s fledgling self-driving effort as early as February 2016, six months before Levandowski publicly got his company off the ground, according to three sources.
If this was a legitimate business arrangement, the timing makes no sense at all. Levandowski had just left Waymo weeks earlier. While it might have been difficult for Alphabet to bring a lawsuit on these grounds, the optics for Levandowski advising Uber are terrible. Additionally, Uber and Otto are competitors in the self-driving vehicle space, and Uber is famously aggressive. That a CEO of one company would be advising one of their direct competitors within months of forming their own self-funded startup is truly strange.
If this wasn’t a legitimate business arrangement, then it also doesn’t make much sense purely because it looks so suspicious, especially in light of Alphabet’s trade secrets lawsuit. It brings the start of Uber’s direct involvement with Otto to within weeks of Otto being formed. If the plan all along was for Otto to join Uber, then it does make some kind of twisted sense for him to be advising the Uber self-driving car team.
A little birdie tells me that we haven’t seen the last on this particular thread of the story.
Naked Capitalism on Uber
Hubert Horan has just published another article in his series on Uber and whether it can ever increase overall economic welfare.
The answer is that Uber is attempting to implement an innovative and disruptive strategy for achieving market control — a strategy radically different from every prior tech oriented startup — but those disruptive innovations have nothing to do with technology or the efficient production of urban car services.
Otto purchased another Lidar company
A reader sent me a link to this article by Reuters discussing an acquisition of Tyto LIDAR. The title is “Uber’s self-driving unit quietly bought firm with tech at heart of Alphabet lawsuit”. The title isn’t accurate, as it was Otto that acquired Tyto (for an undisclosed sum) in May 2016. Uber then acquired Otto/Tyto in August 2016.
Levandowski: “We did not steal any Google IP”
Forbes published an article on 28 February, 2017 which contains part of an interview they did with Levandowski in October 2016.
Q: The question is, Google has been working on this for however many years. You guys (Otto) are a year old or something like that.
A: We started in January. How did we catch up?
We did not steal any Google IP. Just want to make sure, super clear on that. We built everything from scratch and we have all of the logs to make that—just to be super clear.
But this is my fourth autonomy stack. I did the DARPA Grand Challenge.
I wonder if “we have all of the logs” is also going to be Alphabet counsel’s opening argument?
If you’ve been following tech news over the last few weeks, you have probably seen several stories about Uber, all negative (bar this one about flying cars). I suspect that what is coming next will prove to be a far bigger story than all of the other incidents so far.
N.B. all of this article is sourced from filings and allegations that Alphabet has made, as well as reading between the lines. Uber will probably contest these claims in court.
Update: I’ve published some updates on the lawsuit. You can keep track of all the posts related to the Alphabet lawsuit here.
In the last few weeks Alphabet filed a lawsuit against Uber. Alphabet and Waymo (Alphabet’s self-driving car company) allege that Anthony Levandowski, an ex-Waymo manager, stole confidential and proprietary information from Waymo, then used it in his own self-driving truck startup, Otto. Uber acquired Otto in August 2016, so the suit was filed against Uber, not Otto.
This alone is a fairly explosive claim, but the subtext of Alphabet’s filing is an even bigger bombshell. Reading between the lines, (in my opinion) Alphabet is implying that Mr Levandowski arranged with Uber to:
Steal LiDAR and other self-driving component designs from Waymo
Start Otto as a plausible corporate vehicle for developing the self-driving technology
Acquire Otto for $680 million
Below, I’ll present the timeline of events, my interpretation, and some speculation on a possible (bad) outcome for Uber. The timeline references section numbers from Waymo’s amended filing, so you can read the full context yourself. You can also read the original filing.
Timeline of events
The main timeline of important events is as follows:
Summer 2015 - Anthony Levandowski told Pierre-Yves Droz, a colleague at Waymo, that he had talked with an Uber executive about forming a self-driving car startup and that Uber would be interested in buying that startup. (Droz 28)
November 17, 2015 - Levandowski registers a domain for 280 Systems, the company that would later become Otto. 280systems.com can be linked to a public email sent February 4, 2016 from someone with a 280systems.com email address looking to do testing of a semi truck with “specialized equipment”. (Filing 41)
December 3, 2015 - Mr Levandowski searched the Alphabet intranet for instructions on how to access Waymo’s design server on his work laptop. Based on Gary Brown’s deposition (a Google forensic security engineer) this was an SVN server. (Brown 15)
December 11, 2015 - Anthony Levandowski installed TortoiseSVN and downloaded 9.7 GB of data from the SVN repository. (Brown 17)
December 14, 2015 - A USB card reader was attached to the laptop for eight hours. Google doesn’t appear to have logged what the laptop did over that time, but the implication is that data was copied from the laptop to a memory card. (Brown 18)
December 18, 2015 - Levandowski reformatted his work laptop from Windows to Goobuntu (Google’s custom version of Ubuntu). This laptop wasn’t used again after December 21. To be fair, it was only used three times between March and November 2015. Presumably, he was still doing work during this time, just on another computer? (Brown 19, 20)
January 4, 2016 - Levandowski downloaded five confidential technical Waymo documents from Google Drive to a personal device. (Brown 22)
January 5, 2016 - Levandowski took a walk with Droz. In Pierre-Yves’ deposition, he claims that Levandowski “told him that he planned to ‘replicate’ Waymo’s technology at a new company he was forming.” (Droz 27)
January 11, 2016 - Levandowski downloads another file from Google Drive relating to Waymo’s self-driving car development schedule and timeline. (Brown 23)
January 14, 2016 - Levandowski was seen meeting at Uber’s headquarters and the news travelled back to Droz. Droz asked Levandowski about this, and he admitted he had met with Uber and was looking for investors for his new company. (Droz 29, Filing 48)
January 15, 2016 - Levandowski officially forms 280 Systems (in stealth mode). Note that this was one day after his meeting with Uber. (Filing 49)
January 27, 2016 - Levandowski resigns from Waymo without notice. (Filing 49)
February 1, 2016 - Levandowski forms Otto Trucking (this is also in stealth mode). (Filing 49)
Spring (March-May) 2016 - “Kalanick began courting Levandowski this spring, broaching the possibility of an acquisition … The two men would leave their offices separately—to avoid being seen by employees, the press, or competitors.” - Bloomberg. Update: I forgot about this article, but was reminded by hammock on Hacker News.
May 17, 2016 - Otto launches out of stealth mode. As far as I can tell, they never took on any venture funding, instead self-funding (emphasis mine):
Many of Otto’s founders have done well for themselves over the years, and it shows: the company is entirely self-funded right now without any external investment. (In the wake of the reported $1 billion Cruise Automation sale to General Motors, I ask Ron if the plan is to get acquired, but he’s insistent that they’re focused on bringing a product to market.) Even George Hotz’s scrappy upstart Comma.ai has recently taken on venture funding from Andreessen Horowitz. - The Verge
In the photo for their announcement I count 35 people. By the time Otto was acquired, they had 91 employees. This seems like a lot of salary commitment to take on via self-funding by Otto’s four co-founders (all ex-Google). On the other hand, depending on the incentive pay they received at Google, they may have had plenty to cover several years of salaries between them.
August 2016 - Levandowski received his final multi-million dollar payment from Google (presumably a deferred bonus?). (Filing 55)
August 19, 2016 - Shortly after the final payment was awarded, Uber announced a deal to acquire Otto for $680 million. (Filing 55)
Summer 2016 - Levandowski’s sudden resignation, Otto’s quick launch, and Uber’s subsequent acquisition of Otto caused Waymo to suspect that their IP had been misused. Waymo investigated this and discovered Levandowski’s actions prior to leaving. (Filing 57)
December 13, 2016 - A Waymo employee was accidentally copied on an email from one of its LiDAR-component vendors titled OTTO FILES. The email contained a drawing of what appeared to be an Otto circuit board that resembled Waymo’s LiDAR board and shared several unique characteristics with it. (Filing 59)
December 2016 to February 2017 - Waymo tried to obtain further information on whether Uber was using their LiDAR designs. This is also known as “Getting your ducks in a row”. (Filing 60)
February 9, 2017 - A Nevada public records request turned up a filing Otto/Uber made stating that they were using an “in-house custom built 64-laser” LiDAR system. This was enough to confirm to Waymo that Uber was using a LiDAR system with the same characteristics as Waymo’s. (Filing 61)
February 23, 2017 - Alphabet makes their first filing against Uber.
From Waymo’s filings, it seems that they have Levandowski dead to rights on stealing their LiDAR designs. That alone should be enough to bring Uber’s self-driving car program to a halt and cause some big problems for Levandowski. California’s Trade Secrets law is weaker than other states’, but if successful, Waymo will be able to seek an injunction, damages, and attorney’s fees. Because all law is securities law, the SEC may also be able to bring a case against Uber (similar to their case against Theranos).
I’m guessing, but I think the reason that Alphabet hasn’t directly accused Uber of conspiring with Levandowski is that they don’t have enough evidence. When they get to discovery, they will be looking for it. You would think that no-one would be dumb enough to send emails like that, but you would be wrong.
Several things suggest Otto’s intent to be acquired by Uber:
Levandowski told Droz in the summer of 2015 that he had talked to an Uber executive about forming a self-driving car company and Uber acquiring them. This is a pretty clear signal!
Levandowski formed 280 Systems the day after meeting with Uber executives and two weeks before leaving Waymo. He claimed he was talking to them about funding, but no funding appears to have been taken from Uber, or any other company (If there turn out to be payments from Uber to Otto prior to the acquisition then that would be Very Bad for Uber). The timing of Levandowski’s actions suggests certainty about something, and if it wasn’t funding, then what was it?
According to a Bloomberg article in August 2016, Travis Kalanick and Anthony Levandowski started talking about an acquisition in the Spring of 2016. Spring is usually defined as March-May, which sounds like they may have been talking about an acquisition before Otto had even been publicly unveiled. I’m a bit skeptical about Uber’s statements for this article, given Alphabet’s allegations, but it seems like a weird kind of detail to make up.
That Otto hadn’t received funding from any VCs is unusual. With 91 employees getting paid $150k/year (this might even be too low given they are working on self-driving cars, one of the hottest spaces in tech right now), they would have had a $13.6 million/year burn rate just for salaries. Otto always aimed to get to market quickly, but getting to profitability without funding seems like it would have been very hard, especially on the accelerated timescale they were working towards and the likely need to hire many more people to get to production. All of Otto’s public self-driving car and truck competitors have taken venture funding. However, as someone working on a bootstrapped SaaS application, I’m sympathetic to wanting to self-fund. Update: Counterbalancing this point, it looks like Levandowski had previously sold three startups to Google for nearly $500 million. Those facts are somewhat in dispute, and it’s not clear how much he personally made from the sales. However, it does seem plausible that he would have had enough cash to self-fund Otto with his co-founders.
Otto was acquired only four months after their public launch. While it’s not that unusual for companies to be acquired quickly, it is still very quick, and for a lot of money.
Uber has considerable internal self-driving car expertise. I’m speculating, but it seems likely that Uber would have found out (or should have found out) during due diligence, that Otto’s LiDAR system could not have been built and developed independently in the six months the company was operating. Updated: Otto purchased Tyto Lidar in 2016, so they may argue that their Lidar system came from Tyto.
There is no smoking gun email (yet), but there is a strong implication that Uber and Otto planned this from the start.
If this is true and can be proved in court, then it would be a massive blow to Uber. The worst case for Uber would be:
Uber gets dealt an injunction on their self-driving car project. They have to start again, a long way behind other companies.
Uber’s name is mud, they struggle to raise more money from investors, especially on good terms. Uber has raised at least $15 billion at a $68 billion valuation.
Uber is currently losing money at $2-3 billion/year. Uber passengers only pay 41% of the cost of trips, with investor capital making up the difference. Update: I was wrong about Uber passengers only paying 41% of the cost of trips. I can’t find a publicly available number on how the trips are subsidised. Regardless, “Lose a billion here, a billion there, pretty soon, you’re talking real money.” Update 2: See the appendix for more details on this.
With significant negative margins, no way to become profitable in sight, and a terrible media narrative after the Alphabet lawsuit, sexual harassment, aggressive/illegal behaviour, etc., they cannot IPO.
Because their self-driving car plans are still so far off, they can’t lower their costs to become revenue neutral.
I suspect that a large part of Uber’s appeal is their low pricing. If they were to raise prices to cover their driver costs (not even covering the significant costs of their own operations), demand would dry up.
Without any way to raise more money or reduce their costs, Uber runs out of money and folds.
This is admittedly the very worst case scenario for Uber, and there are lots of places along this downward trajectory that they could pull up from. If it’s proven that Uber intended to acquire Otto while Levandowski was at Google (this might be established more concretely in discovery), then it’s hard to see how Uber’s CEO Travis Kalanick could keep his job. Uber has had a bad month, and it doesn’t look like things are going to be getting better any time soon.
This post is made up of a mix of filings from Waymo, reading between the lines, and some speculation. Keep in mind these are all allegations and haven’t been proven in court. Please let me know if I’ve made any mistakes, and I’ll correct them.
I’d encourage you to go over the filings yourself, as they are very readable, and give a lot more context to the story than I was able to here.
I originally posted that Uber was only charging 41% of the cost of the trips, based on this article which I misread. The 41% number is the total cost of the trip including driver compensation as well as corporate expenses.
Alyssa Vance sent me the calculations you would make to work out the farebox ratio:
Farebox ratio = fares / total cost of operations
Total cost of operations = fares - profits
Farebox ratio = fares / (fares - profits)
Farebox ratio = 3,661 / (3,661 - -987) = 3661 / 4648 = 78.8%.
In his summary, Justin said that 62% of the responses were positive. That number sounded low to me (but was presumably calculated by SurveyMonkey). I would have estimated closer to 80% after reading through them all. A little under one quarter of the 2420 survey respondents left a comment.
I’m reusing most of the same categories from last year so interested readers can compare with last year’s post. Some comments have been lightly edited for spelling, clarity, brevity, and niceness.
Error messages were the biggest pain point for people last year. This year there were still many complaints about error messages and stack traces, but a lot of people are waiting to see what spec brings. There was a lot of discussion about spec error messages several months ago, but it seems to have gone quiet. I’m really hopeful that spec will improve error messages.
This year Figwheel has started providing very sophisticated reporting of compile errors, and configuration problems. It has made a massive difference to my ClojureScript workflow. Thanks Bruce!
Please Please Please can we get better error messages. …
ClojureScript has come a long way, but I still encounter errors that are totally perplexing. Better debugging would help enormously.
Error messages are very frustrating and it is hard to find where the error occurred and why. Errors coming from underlying Java code are only somewhat relevant to the Clojure that produced them. Programming with a dynamically typed language rocks when inferred types align with your idea, but when they don’t, things get fuzzy.
Please, please, please, please, improve error messages. This would single handedly improve: new user onboarding, community feedback (let’s admit it - it is a small community), rapid iterations, usage of the REPL, usage of spec and testing, and other areas. …
I consistently see error messages being the biggest barrier for learners of Clojure, and even more so for ClojureScript
I think a concerted focus on improving Clojure’s error reporting & handling would benefit both newcomers to the language and experienced developers alike and would result in considerably less wasted time & effort.
Last year clojure.org had just been revamped and open sourced. I wrote at the time:
I have high hopes that it will become a central source for high quality, official documentation and ease the pain for newcomers.
clojure.org has had some contributions from the community, but it still doesn’t have the breadth of material needed to be a central source. I would love to see something like Python’s web documentation for Clojure. Renzo Borgatti is writing a mammoth book “The Clojure Standard Library” which will cover the entire Clojure standard library.
… Please improve the documentation. Recently I heard someone describe Clojure documentation as “comically terse”.
A simple way to get someone up and running that is a standard for the community would be great. There is nothing more frustrating than telling new people to “Google” for a solution and pick one you like… They never get further than reading the docs.
Clojure appeared to be a nice language when I started with it and I do not regret this decision. The flip side is that I had hoped that the poor documentation would have gone away by now — 1.4 was the first version I used and it does not seem to have improved. That’s a shame. Documentation helps new users and really, hinders nobody.
Clojure docs are too terse. They assume that the reader already fully understands the concepts being described, and merely needs a refresher. …
Clojure’s community is surprisingly small given its output. This leaves documentation and tutorials sparse because they require time and effort that is in short supply. …
I think both clojure.org and clojurescript.org could use a better “Get Started” experience. It’s our best opportunity at a first impression but we rely too heavily on third party docs. For example, on cljs.org the reference section starts with compiler options. Doesn’t really capture the imagination.
Again, Figwheel has been leading the way in improving tooling on the ClojureScript side. Tooling is still a pain point, but seems to be less so than in previous years. For Clojure editors, Cider, Cursive, and vim-fireplace are all used pretty much the same amount as last year, but Atom has entered fourth place, presumably coupled with proto-repl.
One thing we’re used to from C#/Java that would be awesome to have is a better unit testing framework with IDE integration. …
The build toolchain for cljs has been a pain. All of my learning has come from cloning public repos and seeing how they do it.
A huge thanks to the community for invaluable tools like Leiningen, Parinfer and the various IDE plugins like ProtoREPL or Cursive. And of course huge thanks to the core team for excellent stewardship of Clojure itself.
While tooling for Clojure is improving there needs to be a lot more resources devoted to it. I am currently in a Java shop and the only realistic option is Cursive.
Startup time was another persistent complaint this year. Unfortunately there wasn’t always enough information about whether startup time was a problem in development or production. There are several good options for avoiding constantly restarting Clojure REPLs in development (and in production), but perhaps for newcomers these are too complicated to set up, or they don’t know about them?
Love Clojure … hate its startup time!
Love the language and will keep at it. Primary frustration is with the start up time and tooling. In particular, I’m finding it difficult to quickly test the backend and frontend of my application with Boot.
… We love functional programming, but hate Clojure’s long startup times. We do what we can to minimize this (using test-refresh, REPLs, etc.), but it is still very painful, and we find ourselves twiddling our thumbs waiting for a restart several times a day. Similarly, our Clojurescript compiles take ~60 seconds (:simple optimizations) or ~30 seconds (:none optimizations). This slows us down substantially. Coming from Python, where compile and startup times are sub-second, this is the biggest complaint from our team.
Clojure would be perfect with faster startup time and a little less memory usage. Seriously I don’t even mind the java stack traces.
REPL startup time is all I care about. I use emacs+cider, and it takes an age. Otherwise, I am completely happy with Clojure :)
Justin specifically mentioned marketing and adoption as a common theme amongst the feedback:
One relatively stronger theme this year was the need for better marketing for the purposes of expanding or introducing Clojure within organizations, which is a great area for contribution from the entire community.
In my opinion, one of the areas where Clojure could improve on marketing is on the main Clojure and ClojureScript websites. I think Scala in particular has done a really good job of this. Elm and Rust also do really well in presenting the main reasons why you would want to use their languages. The community isn’t able to help with this, as contributions to style and templates are not currently wanted.
Last year I mentioned Cognitect’s case studies on Clojure adoption, this year Juxt has done some great work with their case studies of companies using Clojure in Europe.
Anecdotally, I’ve seen fewer people coming along to the Auckland Clojure Meetup than in previous years, although we’re working on a fairly small sample size. I’m not sure what (if anything) to make of that.
I would do 100% of my development in Clojure if the enterprise-y company I worked for allowed it. “Reason for not using Clojure as much as you would like”? Management not on board.
I’d love to see more “serious” support for ClojureScript: when I present ClojureScript to colleagues, it looks like a thing supported by several enthusiasts rather than a platform supported by a company.
Clojure is very solid. As a frontend developer, a stable build system, DCE and syntax stability are very valuable for me. I failed to convince my CTO on Clojure and instead we stuck to ES6/ES7. The main concern of management was hiring and training people. Although the learning curve for Clojure is easy, people still have a perception that LISP syntax is esoteric and difficult to pick up for someone completely new to the ecosystem. This myth has to be busted on a larger scale.
Language is great, community is great, the “marketing” is not that great. You guys have a great language and you are struggling to sell it.
[I] would love to see better marketing from Cognitect and other Clojure centric shops. Selling Clojure to a Java shop is not easy. It can and should be easier. (and simple, but I’ll take “easier” here first)
As an average developer fluent in Java, I would love to use Clojure more, but the biggest hurdle for me is the lack of peers interested in Clojure. At the local meetup people report the same situation at their organisations. Pity, because Clojure is super cool! Thank you.
Love Clojure, really. Can’t convince my peers that it’s worth investing the time to learn it though.
The Clojure community needs a “Rails killer,” and only Arachne holds serious promise for that.
Clojure/ClojureScript ecosystem could benefit from stronger stewardship from Cognitect to propel it into the mainstream with a focus on a killer app or an industry domain, similar to the way Lightbend has been focusing on enterprise and reactive systems to promote the Scala ecosystem. …
I’m in Shanghai and we barely have full-time Clojure developers in China. We formed online groups, but I guess we need help to push it to the next level.
And the age old developer question:
How can we get management to understand what they don’t understand?
Like last year, there are still concerns about the contribution process, and the opaque decision making process for Clojure’s development.
Engagement of community is still sub-optimal, there are talented individuals who could be engaged more and productively so without opening it up to the peanut gallery.
I wish that the core Clojure development process was more flexible and friendly. I (and all the Clojure developers I know) have pretty much given up on trying to make suggestions or tickets about Clojure because I usually wind up banging my head against the wall and getting frustrated. The core team are under no obligations to explain themselves or their motivations, but they would save a lot of frustration from new community members wishing to contribute if there was a prominent document outlining the philosophy, and what kinds of changes and suggestions are and aren’t welcome.
A more open approach to development and contributing to Clojure would be appreciated. I’d love to contribute to Clojure but the mess of a lot of the code in Clojure and the JIRA-based contribution model is a barrier. I know that’s what Rich likes and the rest of Cognitect are happy contributing in this way, but I’d love to see something more open and approachable.
The slow rate at which JIRA issues are addressed remains a frustration, and provides a disincentive for contributing. Features that Cognitect cares about get rushed into the language half-baked, while well-thought out patches from years ago languish.
A little concerned over how ‘arbitrary’ some design decisions seem to be and how inflexible the core team is. Running instrumented post-condition specs is such an obvious idea for example but it has been deemed not necessary “from on high”.
The issue tracker is well tended during the earlier workflow stages. There seems to be a bottleneck later on. I hope 2017 will bring the project’s founders to modes of community contribution that are more efficient of the art and tradecraft of the many.
I would say that Clojure’s community is one of its greatest assets. There were mostly positive comments about the community, though still some concerns. The Clojurians Slack has been a great resource for a lot of newcomers, and spawned many channels for libraries and topics. IRC is still around, but seems to be less popular, and doesn’t have the breadth of topics that Clojurians does.
Alex Miller is doing a great job addressing the community.
CLJS has come a long way!! to the point we can use it for production development! this community is great! Justin_Smith stands out as ridiculously helpful on IRC … if the world (this community) had more folks like him, it’ll elevate all of our games!
The thing that makes me wary of continuing to invest in Clojure professionally is how few paid opportunities there are for women and people of color. I don’t know what the exact problem is - they are left out of networks, don’t have an opportunity to build their skills, or just straight-up sexism and racism - but it is definitely a problem. The current state of things means I will likely not be able to be on teams with women or people of color, and that is a big turn-off for me.
Clojure has the best community after PostgreSQL, in my opinion. I came for the language but I stayed for the community.
Be more inviting to new-comers.. so they stop learning Ruby.
A “Rails for Clojure” was a common request so people can start delivering value quickly. Arachne was commonly raised as one possibility to provide this.
Clojure/ClojureScript it’s very cool, but not very practical nor efficient in many typical business case scenarios. I’m missing a full stack framework like Grails or Ruby on Rails or Django: there’s nothing like that for the Clojure world: it’s not about a project template but about the quick prototyping (with the customer by your side) these frameworks allow.
… There is also no good Clojure Web Development story: Pedestal looks interesting, but is not actively developed and unsupported; Arachne is years away from being complete and useful; Luminus/Duct are initial scaffold project generators. It’s hard to deliver some value quickly - users have to dig through all the underlying libs and how they work together.
… Clojure web development needs a “Rails” to overcome the chicken-and-egg problem of getting enough developers to be able to hire for it. …
I hope to write a lot more Clojure. No, I hope to do a lot more with less Clojure. And looking forward to using Arachne.
The biggest thing I’ve noticed for beginners on forums / blog comments is they are afraid of the JVM. …
I’ll never rewrite all my Scala into Clojure unless some sort of Cognitect Clojure OFFICIAL LLVM effort is made. Scala Native is [beating] Clojure.
Clojure’s hosted nature is its biggest downfall, in my opinion. As a systems programmer looking to lisp + immutability for saner concurrency, the cost of Clojure’s startup time and memory usage is very high. …
I really think an opportunity is being missed by dismissing the c++ ecosystem, and I think that something that should be given somewhat serious consideration to by somebody would be to leverage LLVM with JIT and get a clojurescript running in that setting. There are significant legacy environments to be liberated there.
I would love for there to be some sort of compile-to-bundle with Clojure such that you have a single artifact that doesn’t require an external JVM installation. Like Go programs. An embedded JVM (perhaps).
As we’ve all learned this year, people are worried about types. Spec is the biggest movement in this direction, and the number of type related comments has dropped a lot since last year.
I know that Spec should help a lot with the error messages once we move to 1.9, although people coming from a non-Java background still have a lot of learning to do.
It is just sad that people who love Clojure (Colin Fleming, Steve Yegge, Adam Bard), have to use Kotlin sometimes instead (better startup times, static typing and Java interop like fast multidimensional arrays)
I hope Spec will find its way into tooling, especially Cursive.
Looking forward to learn and use spec
I’d most like to see improvements to the Clojure compiler. Error messages, performance, better Java interop and simple static type checking seem most important.
As I mentioned at the start, the overwhelming majority of comments had positive things to say about Clojure. Alex Miller also deserves a special mention for his work with the community, and maintaining the Clojure infrastructure.
I sincerely thank Clojure(Script) community to make LISP revolution back to live stage and making us realize the importance of focusing on core things which matter
I have been programming since 1979. Clojure is a work of art.
The community has been unbelievably great and I hope it stays true to its roots! Keep up the good work and I can’t wait to see where Clojure/Script goes next!
The community is amazing, with a special shoutout to Alex Miller for being such a great part of it.
Happy Clojure user for over 4 years. Still loving every minute :-).
Finally, I found a comment that I think sums up the general feeling of the community (and my own):
No one and nothing is perfect, but I (for one) am very appreciative of the work that has been done and is on-going around Clojure/ClojureScript and the community in general. People tend to ‘harp’ on the things that aren’t just right (in their opinion) and forget about all the things that are amazing. I just want to say thanks, and keep up the superb work! :-)
Batch is like an alien device that has appeared on the earth, and at first you think it’s a gift, but then you realize it is a machine of destruction, here to raze your society to the ground, and the only viable solution is to find a way to rid yourself of it completely.
Working from home might genuinely be the ideal environment for those closest to the introvert end of the spectrum, and I think those are the people who form angelic choirs of blog posts asking if you have met their lord and savior, the Fortress of Infinite Solitude, Home Office Edition.
Shockwaves rang out through the Clojuresphere today with the news that Datomic is changing their licensing to drop the per-process limits. This is big news if you were limited in the number of processes you wanted to run.
The major changes in a nutshell:
There are now no limits on process count
Datomic Starter is now limited to one year of updates; after that you need to pay $5,000 for Datomic Pro, or stay on the version that was current when your license expired.
Datomic Starter can now run the Memcached cache.
All Datomic Pro licenses are now $5,000, and annual maintenance is also $5,000/year.
If I’ve got any of the following wrong, please get in touch via email and I’ll update my post.
The old pricing model
Datomic launched in March 2012 with a paid option, and in July 2012 added Datomic Free. In November 2013, Cognitect launched Datomic Pro Starter Edition. The old pricing page is archived on archive.org. This model was easy to understand, as it mapped well to existing database licensing patterns of ‘perpetually free with limitations’ or ‘paid without limitations on a per-node basis’.
Free (as the name indicates)
Limited to storing data in memory on the transactor, or on the transactor’s disk.
Freely redistributable (i.e. in on-premise or open source software)
Only able to run two peers (clients) and a single transactor. If the single transactor fails, then you won’t be able to write to the database until another one starts.
Datomic Pro Starter Edition
Support for all storage backends (SQL, DynamoDB, Cassandra, Riak, Couchbase, Infinispan)
Limited to 2 peers and a single transactor
Limited to 1 year of maintenance and software updates, though you were able to renew your Datomic Pro Starter Edition license each year for free (more on this later).
When your updates expire, you can continue using that version of Datomic, but you won’t be able to use any future versions.
Support for all storage backends
Able to run a second transactor in HA standby to take over if the first one fails or for rolling updates.
Ability to use Memcache to cache segments rather than all peers needing to talk directly to the storage backend.
Support is included while maintenance is current.
Pricing scaled linearly from $3,000 for five processes (this includes peers and transactors) up to 30 processes for $16,000. You could upgrade later to higher tiers for the price difference between the old license and the new license.
Annual maintenance for support and updates was half of the upfront license price
If an organisation had special needs (more processes, a custom EULA, 24x7 support, redistributing Datomic, etc.) then you could negotiate terms with Cognitect.
The new model
All versions of Datomic apart from Free now support unlimited peers, high availability, Memcached support, and all storages (more on storage later). This is a significant change, as you can now use Datomic in an almost unlimited fashion for free. There is also a new Client API which is suitable for smaller, short-lived processes, like microservices and serverless computing. The changes are smart, as they free users from having to architect their systems around their Datomic license limitations. The new pricing model rearranges the tiers: there is now Datomic Free (unchanged), Datomic Starter, Datomic Pro, and Datomic Enterprise.
Similar intent to the previous Datomic Pro Starter
Maintenance and updates limited to 1 year. However, based on discussion on Hacker News, it seems that you can no longer renew your Datomic Starter license. This means that you will need to pay for Datomic Pro to get updates after one year. You can still use whichever version was available when your updates expire. Discussion in the #datomic Slack channel on clojurians matches up with this too.
Update: Alex Miller said that you can sign up for multiple Datomic Starter licenses for different systems you’re running.
Similar intent to the previous Datomic Pro
$5,000/year per system, including maintenance and updates. For organisations using 16 processes or fewer, maintenance will be more expensive than before (previously $1,500 - $4,500 depending on process count).
2 Day Business-Hours-Only Support. The former Datomic Pro didn’t have a published SLA for support, but I suspect that this is just formalising what was previously there.
Enterprise integration support (professional services)
Negotiated license terms
While this is a new tier in the pricing grid, these options were available as a “Contact Us” note on the former Datomic Pro.
Datomic has a snazzy new documentation site. It also looks like, as part of the licensing changes, Riak, Couchbase, and Infinispan are now considered legacy storage and are only available under an enterprise license. Standard editions of Datomic only support SQL, DynamoDB, and Cassandra. This change hasn’t been mentioned on the Datomic mailing list or release notes, but probably will be soon.
There is a new customer feedback portal where you can suggest features.
If you are a Datomic Pro user then your maintenance is probably going to be higher, although in absolute terms it’s still not a lot compared to developer salaries. If you were on Datomic Pro Starter and want to stay current, you are now looking at moving to Datomic Free, or paying $5,000/year for Datomic Pro. If you were using Riak, Couchbase, or Infinispan then it seems like you’ll need to get Datomic Enterprise.
Datomic from the beginning has always felt like a database that understood the Cloud, and the zeitgeist of computing. It supported AWS DynamoDB and CloudFormation from early on, and their architecture has always felt well suited to cloud systems. The license changes to accommodate the trend towards microservices and serverless computing are a continuation of that.
I agree with these statements, and I disagree with those.
However, a great thinker who has spent decades on an unusual line of thought cannot induce their context into your head in a few pages. It’s almost certainly the case that you don’t fully understand their statements.
Instead, you can say:
I have now learned that there exists a worldview in which all of these statements are consistent.
And if it feels worthwhile, you can make a genuine effort to understand that entire worldview. You don’t have to adopt it. Just make it available to yourself, so you can make connections to it when it’s needed.
Before we get started, I want to be clear. I don’t support tobacco companies, cluster bomb manufacturers, nuclear weapons manufacturers, or the other socially harmful businesses mentioned in the recent brouhaha about Kiwisaver investments into those companies. However, I think the reporting that NZ Herald did on this was misleading, in search of a good story at the expense of accuracy.
To recap: the Herald has recently been reporting on Kiwisaver, the NZ government retirement savings scheme (like an IRA). Their big headline was that $150mm has been invested by the major Kiwisaver providers into companies that produce cluster bombs, landmines, and other socially harmful products, and that they may be breaking a law banning investments into cluster bomb manufacturers. When you look into the details, their bold claims don’t look so strong.
Here are a few points that should have been included in the reporting:
The biggest problem is that the Herald doesn’t distinguish between active management (where fund managers or algorithms choose particular businesses to invest in) and passive management that tracks an index (say agriculture, energy, or the US stock market). If some of these funds were directly invested in these companies, and banks really were breaking the law, that would be a real story. It’s not clear to me from the reporting whether there was any direct investment in the companies, investment through sector index funds that include them, or investment in broad market indexes. You can make an argument that both are bad, but active investment in these companies is very different from passively investing in an index fund of the total stock market. The Herald implies that these providers deliberately or knowingly invested in the companies, but I couldn’t see this from the data they provided.
The total amount invested in Kiwisaver is $32.5 billion, and the amount invested in socially harmful businesses is $150 million. This works out to around 0.46% of the total funds invested. This is not nothing, but it is a tiny fraction of the total assets invested.
Kiwisaver funds that aren’t investing in these companies are likely avoiding them through active management of stocks. In general, this will result in higher fees, and depending on who you ask, lower performance (especially once fees are taken into account).
The charts that NZ Herald produced all give absolute numbers for how much is invested in socially harmful companies, without giving the percentage invested. Westpac, ANZ, and ASB all feature high on the list of investing in these harmful companies, but there is no context given for what percentage of each provider’s investments these represent. Westpac has 0.76% in socially harmful businesses, ANZ has 1.27%, and ASB has 0.13%.¹ Without more details on the total amounts invested in stocks (active and passive) by each fund, it’s hard to tell where the differences here come from and why.
The amounts invested in the socially harmful parts of Northrop Grumman² and General Dynamics³ are very small compared to their overall businesses.
Data journalism can be used to illuminate complex topics and explain them for a wide audience. It seems in this reporting that the story came first, and the numbers were presented in a misleading way to back it up. There is a nuanced discussion that could have been had about the ethics of index funds, and socially responsible investing, but that wasn’t what we got here.
N.B.: I may have read the article wrong, and all of the figures provided were active investments (it’s not clear at all to me which is being included). If that’s the case my conclusion would be quite different.
Northrop Grumman is blacklisted by the NZ Superannuation Fund for selling mines. They had sales last year of $23.5 billion. $15 billion in products and $10.5 billion in services. This is split amongst Aerospace systems, Electronic systems, Information systems, and Technical services. Northrop Grumman doesn’t break out a landmine line item (and only mention mines once in their annual report, to say they have nothing to disclose about mine safety), but it looks like it is part of the Electronic systems segment, which did $5.5 billion in product sales and $1.3 billion in services (23% of total sales). Electronic systems also includes radar, space intelligence, navigation, land & self protection systems (probably where mines go, but also includes missiles, air defence, e.t.c.). ↩
General Dynamics is blacklisted by the NZ Superannuation fund for selling cluster bombs. They had $31.4 billion in sales in 2015. They also don’t break out a clusterbomb line item (I’m sensing a pattern here), but it probably fits into the Combat Systems group which had $5.6 billion in sales (18%), and which also includes wheeled combat and tactical vehicles, tanks, weapons systems, and maintenance. ↩
Every week in The REPL I have a section called: “People are worried about Types”. This is based on a section that Matt Levine (my favourite columnist) includes in his daily column “People are worried about bond market liquidity”, and the related “People are worried about unicorns”, and “People are worried about stock buybacks”.
The title is a joke (as is his); I don’t really think there are people worried about types, but it is interesting to see a small but steady flow of type-related (schema, spec, etc.) articles relating to Clojure each week.
I’ve started a weekly newsletter about Clojure and ClojureScript. Each week will have a curated selection of links (both recent, and older) and a sentence or two about why I think they’re worth reading. You can sign up at therepl.net.
One of the datatypes Clojure gives you is the Record. It is used in both Clojure and ClojureScript, but due to implementation differences, the syntax to use records from one namespace in another is different.
Clojure generates a Java class for every defrecord that you create. You need to :import them, just like you would a standard Java class. Let’s say we have a small music store app that sells vinyl:
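A minimal sketch of such a record (the `music-store.album` namespace and `Vinyl` record names here are my own invention):

```clojure
(ns music-store.album)

;; defrecord generates a Java class named music_store.album.Vinyl
(defrecord Vinyl [artist title price])
```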
Because Clojure is generating Java classes, they follow the naming conventions of Java classes, where dashes get converted to underscores.
You should probably prefer the first example where the record is aliased, over the second one where the record is fully qualified.
Clojure dynamically generates the class for the record when that namespace is required (ignoring AOT compiling). If your code never requires the namespace that the record lives in, then it won’t be created. This will cause ClassNotFoundException errors when you try to import the class. In our trivial example, that would mean changing the ns import to:
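A sketch of what that corrected ns form could look like (using a hypothetical `music-store.album` namespace and `Vinyl` record, not names from a real app):

```clojure
(ns music-store.checkout
  ;; requiring the namespace first ensures the Vinyl class is generated
  (:require [music-store.album])
  ;; the dash in music-store becomes an underscore in the class name
  (:import [music_store.album Vinyl]))

;; with the class imported, the record can be constructed via Java interop
(Vinyl. "Some Artist" "Some Album" 35)
```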
Most of the time I don’t have any issues with git’s .gitignore system but occasionally it doesn’t behave as I expect. Usually, I add and remove lines until I get the result I’m after and leave feeling vaguely unsatisfied with myself. I always had a nagging feeling that there must be a smarter way to do this, but I was never able to find it. Until today!
Enter git check-ignore. You pass it files on the command line, and it tells you whether they’re ignored or not, and why. Sounds pretty perfect, right? I won’t give you a rundown of all the options, as the man page is surprisingly readable (especially for a git man page!). However, the 30-second version is git check-ignore --verbose --non-matching <filepath to check>:
$ echo "b" > .gitignore
$ # a is a file that would not be matched
$ git check-ignore --verbose --non-matching a
::	a
$ # b is a file that would be ignored
$ git check-ignore --verbose --non-matching b
.gitignore:1:b	b
--verbose prints the explanation of why a file was ignored. --non-matching makes sure the file is printed out, even if it won’t be ignored.
If a file is going to be ignored, check-ignore prints out the path to the gitignore file and line that matched. Interestingly, the files don’t even need to exist, git just checks what it would do if they did exist.
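If you want to experiment without touching a real repository, the whole session can be reproduced in a throwaway directory (a sketch assuming a Unix shell with git installed):

```shell
#!/bin/sh
set -e

# work in a fresh throwaway repository
dir=$(mktemp -d)
cd "$dir"
git init -q

# ignore anything named "b"
echo "b" > .gitignore

# b matches line 1 of .gitignore, so the matching rule is printed
git check-ignore --verbose --non-matching b

# a matches nothing; --non-matching still prints it with empty fields,
# and check-ignore exits non-zero when no path matched
git check-ignore --verbose --non-matching a || true
```

Note that check-ignore exits 0 when at least one path is ignored and 1 when none are, which is why the last line needs `|| true` under `set -e`.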
Using check-ignore I was able to solve my problem faster, and learn a bit more about git’s arcane exclusion rules. Another handy tool to add to my Unix utility belt.
I’ve recently been working on a new ClojureScript application as part of a contract, and I was digging around for things to polish before launch. The app was mostly fast, but I noticed that when the main list of content got to around 40 items, it was a little slow to render. I also noticed that it got almost twice as slow when I added another 10 items. At this point, alarm bells might already be going off in your head suggesting what the problem was likely to be. They didn’t for me, so I dived into the code to look at the part of the app rendering the main list.
I looked over the code path that rendered the list, and wrapped time around a few suspect pieces of code. After a few checks, I found that a sort-by function in the view was the slow part, though it wasn’t immediately clear why sorting a list of 40 items would take a second. We were using sort-by to order items by state, then reverse date order (newest items first).
sort-by takes a custom sort key function. Our key function parsed a string into a date, then subtracted the current Unix time from the parsed Unix time to give an integer value, so the lowest numbers were the most recent dates. I suspected that the date parsing could be the problem, but I wasn’t really sure. As an experiment, I disabled all of the date parsing and returned the string directly. My sorting was the wrong way around, but it went from taking 1000 ms to 10 ms, a 100x speedup!
A standard sort of the dates (which were in ISO 8601 format, e.g. 2016-04-02T08:24:31+00:00) put the oldest dates first in the list. After a few minutes of thinking, I remembered I had recently read the clojure.org guide on comparators. In it, a reverse comparator is discussed:
(fn [a b] (compare b a))
This comparator is exactly the same as a normal comparator, but it will return the opposite result to what the normal one would. Passing this comparator to sort-by kept the 100x speedup, but sorted in the correct order. The list rendered in 10-15 ms, and was now plenty fast enough.
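To make the fix concrete, here is a sketch of the final approach (the `:created-at` field and sample data are illustrative, not from the real app):

```clojure
;; each item carries an ISO 8601 timestamp string
(def items
  [{:id 1 :created-at "2016-04-02T08:24:31+00:00"}
   {:id 2 :created-at "2016-05-10T11:00:00+00:00"}])

;; ISO 8601 strings sort lexicographically in chronological order, so no
;; date parsing is needed; the reversed comparator gives newest-first
(sort-by :created-at
         (fn [a b] (compare b a))
         items)
;; => ({:id 2 ...} {:id 1 ...})
```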
One question remained though, why was the list getting so much slower to render as I added a few more items to it? Of course reading this now, you Dear Reader are probably thinking to yourself, “aha! ClojureScript’s sorting algorithm will be O(n log(n)), and the slow key function will therefore be called O(n log(n)) times.” It took me a bit more thinking than you (I didn’t have a blog post explaining why my code was slow to work from), but I got to this conclusion in the end too.
I really enjoyed this debugging session, as it is not very often that I can both speed up real world code by 100x, and get exposed directly to algorithmic complexity issues. A++, would debug again.
I’m excited to announce I’m starting a new podcast Decompress, hosted by myself and my long time friend Nathan Tiddy. Decompress is a technology podcast, with a New Zealand perspective. We also talk about the wider world of things that touch technology, and how it relates to our lives.
If you like technology, and podcasts, then you may like this too. You can subscribe through iTunes, or in your podcast client of choice, by searching for “Decompress”.
Specifying standard Clojure types is fairly straightforward, but I recently needed to specify that a value was a goog.date.UtcDateTime. I did a bit of searching on Google and didn’t find anything. I then looked in the company codebase and found the answer. Here’s a minimal example:
(def Date js/goog.date.UtcDateTime)
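This reads like a Plumatic (Prismatic) Schema definition, where a class itself acts as a schema matching instances of that class (an assumption on my part; the surrounding code isn’t shown). Usage might look something like:

```clojure
(ns app.schemas
  (:require [schema.core :as s])
  (:import [goog.date UtcDateTime]))

;; a class used as a Schema matches instances of that class
(def Date js/goog.date.UtcDateTime)

;; validate returns the value when it conforms, and throws otherwise
(s/validate Date (UtcDateTime.))
```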
Mike Hadlow’s blog post about a project that moved to an agile process with PMs exactly mirrored a project I worked on a while ago. If I didn’t know everyone on the team, I would have assumed it was someone taking notes about the project I was working on.
Every two weeks we would have a day long planning meeting where these tasks were defined. We then spent the next 8 days working on the tasks and updating Jira with how long each one took. Our project manager would be displeased when tasks took longer than the estimate and would immediately assign one of the other team members to work with the original developer to hurry it along. We soon learned to add plenty of contingency to our estimates. We were delivery focused. Any request to refactor the software was met with disapproval, and our time was too finely managed to allow us to refactor ‘under the radar’.
We became somewhat de-motivated. After losing an argument about how things should be done more than a few times, you start to have a pretty clear choice: knuckle down, don’t argue and get paid, or leave.
The project was not a happy one. Features took longer and longer to be delivered. There always seemed to be a mounting number of bugs, few of which seemed to get fixed, even as the team grew. The business spent more and more money for fewer and fewer benefits.
In this video Stuart Halloway talks about adopting a scientific mindset when debugging. Thinking in this way has absolutely changed the way I now approach debugging. Debugging is such a core skill in software development, but how many developers would rate themselves to be truly great at debugging? Not me yet, but after watching that video I’ve picked up two books: Debugging and Why Programs Fail to learn more and hone this skill. You can see the slides and related resources at Stuart’s wiki page.
Cognitect has released the State Of Clojure 2015 results. Justin Gehtland did a good analysis of the quantitative parts of the survey, and summarized the comments section at the end. I read (almost) all of the comments and got a feel for the zeitgeist of where the Clojure community is looking for improvements. After compliments, error messages and documentation far and away received the most comments. Below are some quotes grouped by subject, to help you get a feel for the responses. You can read all of the comments on the SurveyMonkey page.
I’d love to see elm-like approach to hard tasks like improving on error reporting and documentation. Clojure team is so skilled that it won’t bother even with low hanging fruit. But it could be hurtful in the long run.
Regarding potential future enhancements, improved error reporting would be very helpful. This has been a major stumbling block for our junior developers in particular. Too many error messages require a high level of understanding of Clojure internals to understand. Overall, I’m very appreciative of the well-thought-out design decisions in both Clojure and its ecosystem.
Please fix/finish transitive AOT, and please make error messages better. I know error messages continue to receive this sort of “well, if you wanna make ‘em better, patches welcome…” treatment, but I think it’s a mistake to keep beating up on them, insinuating they don’t really matter. As someone who’s spent a fair amount of time debugging Clojure programs, some of them do matter, and I think the wrong message is being sent to would-be contributors about the value in improving them.
Better error messages from macros! It can be done with PEGs quite easily.
Please improve error messages - Elm developers are a good example to follow.
2+ years with Clojure. First one was strong, but the community has been solving non-problems (transducers, reducers) instead of improving error messages and general debugability.
Highest on my list of annoyances are NullPointerExceptions missing stack traces. They take disproportionately more time to debug than other failures. We rarely see them in production but during development they are a constant scourge.
Clojure has the problem that many languages have —- the people capable of providing a better UX aren’t interested in solving that problem. (Myself included.) We could learn from the Go core team’s willingness to solve common UX problems (fast compiler, auto-formatter) and other projects like Elm’s compiler work to support good error messages.
clojure.org has undergone a massive visual refresh between the survey being taken and the results being released. I have high hopes that it will become a central source for high quality, official documentation and ease the pain for newcomers.
Community & resource wise, everything feels extremely federated, in a bad way. Documentation: this is where I had the worst experience with Clojure. Documentation is extremely federated. To understand clojure.zip & clojure.core.logic took me so much work […]
Many things that disappoint you about Clojure at first (“why are errors so awful?”, “why is the contribution process so hard?”, “why don’t you implement this one feature I like?”) fade away as you learn Clojure and the Clojure Way more. Still, it would be nice to have a sort of explain-like-I’m-five FAQ somewhere on clojure.org that covers these non-obvious points so that people skip the frustration part and understand the reasoning behind those decisions immediately.
CLJS could use better official documentation.
I still don’t know how to get externs working with either lein or boot!! Evaluating code in vim doesn’t always work. Documentation on google are just not reliable because they are outdated or assumes too much pre-knowledge. Applying what’s read in the docs most often do not work, or I just don’t understand. The documentation must really be much more comprehensive and not assume the readers are clojure pros.
Documentation! The clojure.org documentation is so terse that it is unusable for beginners. The clojuredocs.org documentation is confusing for someone who wants the learn the basics. Some examples are too cryptic and appears to be written by very smart people showing off edge uses of the feature they’re describing.
Clojure needs more outreach. Clojure.org or clojuredocs.org needs more tutorials, application-development documentation pointers (For example: “here are your choices if you want to make webapps with Clojure”).
Very happy with progress on cljs.js (tho remains sparsely documented). Much of my pain is related to the compilation step. Lots of errors still happen only after compilation, & not easy to know how to split large code base and load only necessary modules, etc.
Re. documentation: Features added to clojure.core usually have far inferior documentation on initial release than many libraries. Sometimes this is eventually remedied by the community, but sometimes not. This has been the case for reducers, core.async, the new threading macros, and transducers. Even the core library functions didn’t have much example-based documentation until the community took it upon themselves to make clojuredocs. […] Actual documentation, with practical examples, would lead to faster and more widespread adoption of new features.
We just had a Clojure user group meeting here in PDX with several new faces, new to Clojure. […] One person pointed out how the Clojure home page makes no mention of Leiningen, the de-facto standard build tool. The references to running Clojure apps with “java -jar …” are that much more intimidating for anyone coming from a non-Java background. Some of these issues were well described in the “ClojureScript for Skeptics” talk at the Conj. Consensus: people love, or want to love, the language (I do!) but the rough edges have not been smoothed out, and it is still intimidating to get started in Clojure. — Howard
Setup and tooling has historically been another pain point in Clojure. In the ClojureScript world, Figwheel seems to be solving a lot of hard problems (REPL, reloading code) in a way that is relatively easy to set up, and I’m sure it will get even better this year. As an alternative to emacs, Cursive (which I use) is a great tool for getting a full Clojure development environment set up with minimal hassle.
By far, the biggest complaints I hear from friends and at meetups is the startup time and the work required to get a ClojureScript workflow set up.
[…] EMACS SETUP: IMPOSSIBLE I used and liked emacs 15 years ago and prefer to use emacs for clojure development now, but after a full day of trying I still can’t get all the emacs pieces together. It’s too complex, there are too many half-baked and out-of-date how-to examples. Will the real viable emacs setup tutorial please stand up?
Cursive and its debugger are a godsend - I could never convince myself to use Emacs, and all the alternatives were inferior. Cursive made writing and debugging Clojure a really pleasant experience. Same with Boot in the ClojureScript world - getting lein + cljsbuild + figwheel working well was a pain. Boot+boot-cljs+boot-reload+boot-cljs-repl is a pleasure. I think those two things will do a lot of good towards Clojure and ClojureScript adoption.
Convincing coworkers to adopt Clojure also was a pain point for some of the respondents, and additional marketing materials could help here. Cognitect’s case studies are good, and could be ported to the official Clojure website.
The clj / cljs / om / figwheel websites make Clojure appear like a niche ecosystem. Seems to be time for marketing!
Needs better marketing. Clojure has bad marketing (great product, bad marketing). Typesafe has good marketing (complex product, good marketing). 10gen has great marketing (bad product, great marketing). It is hurting wide adoption at companies: “Clojure? Um… what is it you said?” “Scala, yeah, I’ve heard good things about it, it’s solid.” “MongoDB? Oh yeah, my teams tell me it’s really good.”
Love Clojure and ClojureScript. Where I need help is on the “evangelical” side. If we had more people singing the praises of Clojure, that would help my business secure more Clojure work, and everyone else’s work as well. This brings in more money and talent, which can then be reinvested into tooling, systems, turn-around development time, etc. Just look at what’s happened over in the JS world once their reputation improved.
Overall, very happy with the current direction of Clojure. What feels missing is market awareness of its merits as a technology; the biggest impediment to using Clojure more has been finding customers willing to embrace the risk. Alternatives like Scala have managed to build a better name within the industry.
Clojure deserves a better website and documentation.
Still trying to persuade co-workers to use Clojure more.
In my opinion, the Clojure community is one of Clojure’s greatest assets. I’ve found it to be overwhelmingly positive, and there is a massive number of innovative libraries coming from a very young community. There were also some concerns about the contribution process, and how many of the decisions about Clojure happen inside Cognitect.
Many thanks to Rich, Stu, David, Alex, and many other people from the community and Cognitect. You are doing an amazing job. I love the culture of community you created around Clojure: an open community with an emphasis on learning, design, thinking hard, and solving the right problem. It is an inspiring place to be.
What an amazing language. Relatively frequent, consistently stable releases. A pleasure to use. A friendly, smart community. I feel very lucky to be a Clojure user! Thank you for all of your hard work.
The Clojure contrib process frustrates me more than any technical or community aspect of the language.
Clojure gets a lot right, but as has been repeatedly discussed, the pace of evolution and the maintainership’s dim view of 3rd party non-bugfix work flatly leads to worthy but minor work such as type predicates going largely ignored and certainly unmerged. In most open source projects, contributors can impact priorities by giving of their time to support work which isn’t high priority. The work which Nola Stowe did on clojure.string was awesome, and I think we can see more of that if Alex et al. allocate more capacity to working with contributors.
I found some of the recent back-and-forth about tuples to be very disheartening; if I were Zach Tellman, I would decide not to bother attempting to make improvements to the core of Clojure. […]
I have become a little worried about the future of the language. The core team is way ahead in terms of design and concepts, but seems to lack any kind of empathy with beginners.
Things I think should be a priority for the long term future of Clojure: […] 3. More open core development process. I worry about the long term future of Clojure when it appears to be driven by internal decisions within a single company (i.e. Cognitect). I am uncomfortable with this approach, and am hesitant to commit fully to Clojure as a language until this becomes more open. Maybe a “Clojure Foundation” would be a good idea?
Thanks to everyone again for another great year in CLJ(S)land. Best community for any language.
I participate in a local meetup group which has helped greatly and I find the community very welcoming (especially being a woman, I’m not finding as many issues as I do with other communities). Thanks!
Keep up the great work! It’s been a joy to participate in this community :)
I would like to thank the entire Cognitect staff for creating such a great environment and community.
Finding good libraries has not been difficult, but it would be helpful to have some mechanism in the community for evaluating which libraries are viable in the long term, as well as which are “best in class”, so to speak. In many cases, I want a library that will provide a useful component for a multi-year project. The library does what I need and is available, but the last modifications on GitHub were 3+ years ago. Should I invest in using this library in my core code? Are there better, more current, maintained options? And so forth. Some community aggregation that makes it easy to evaluate libraries in this sense would be very helpful. […]
Libraries: Not sure where the main place is to find Clojure libraries. Some are in Maven (with no ability to evaluate the library). Clojars also feels incomplete and hard to evaluate. The landing page http://clojure.org/libraries even suggests checking Maven, Clojars, and GitHub; it feels like a dead end. Google is a better resource than these, because you can find blogs evaluating the libraries.
I was surprised to see so many comments about static typing, but it was a recurring theme. They can roughly be divided into three groups:
People who were getting push back from their organisation for using a dynamic language.
People who wanted a type system like Haskell’s.
People interested in core.typed who are waiting on the sidelines until it reaches more maturity.
I was surprised to see that the survey doesn’t include reference to usage of typing or contract tools like core.typed or Schema. I’m interested in understanding how widespread their usage is. Schema’s been an important tool in helping to encourage Clojure’s more widespread use at our enterprise and I’ve been largely happy with it.
Clojure and ClojureScript are amazing projects and the community is pushing the edges of programming today (Om Next, Datomic, cljc). I’ve been close to using both many times, but at this point I don’t think I could leave the expressive syntax or type systems in Haskell and PureScript. I wish they had the same community and dev experience, and am interested in using the Clojure community’s ideas there.
People in my organization feel strongly that dynamic typing won’t scale to projects with more than a few developers. I am concentrating on small projects. Also, the learning curve for Java devs is a bit steep. Like anything else, one has to really want it.
The two things holding back Clojure/ClojureScript use at work are corporate policy and a strong fear of weakly typed production languages by the team.
I love Clojure but I also like static type systems. I don’t know if it’s possible to have the two together in a way that works well. Haskell and Scala are languages I like for their type systems.
The discussion we have been having lately, though, is around static typing. I have a Haskell background (and a PhD in type systems, even), so I’m aware of the benefits they can provide, but I don’t think they’re a silver bullet either. Nonetheless, improved static support seems incredibly useful. A gradual-typing approach does seem ideal, allowing you either to specify types up front or to add annotations as the design solidifies, perhaps. I’m eagerly watching Ambrose’s work, and perhaps prismatic-schema is enough. It would be great to get some of the core community’s thoughts on the matter, though.
I really, really, really want to see some form of official support for static typing. This could be core.typed, but it needs to be blessed by the core Clojure dev team and development to allow core.typed (and other future tools) to better integrate with the compiler is necessary. Type annotations provide helpful documentation and (optional) static type checking can help speed up development for those already annotating for documentation purposes by detecting type errors before executing programs. I see the lack of support for (optional) static typing as Clojure’s biggest weakness and the main reason I have considered moving to a different language.
Like Justin said, over half of the responses were happy or positive. Here are a few to finish on an upbeat note. I’m looking forward to seeing what 2016 brings for Clojure, and especially ClojureScript. The tooling story seems to be coming together quite well. If enhanced error messages make it into Clojure 1.9, and the documentation story improves, a lot of the pain in using Clojure could be minimised.
(= “Clojure” “Awesome!”) => true
Thanks for everything that you do. :)
Love it! Thank you for everything that you do.
I just love programming in clojure.
Very nice thing. I like to program in this Lisp a lot.
I would never have written PartsBox.io if it wasn’t for Clojure and ClojureScript. Only these tools allow for such a productivity increase that a single person can write a complete medium-size application.
The Everything Store has a story in it about how Jeff Bezos came up with the idea for Amazon, an ‘everything store’.
Bezos concluded that a true everything store would be impractical—at least at the beginning. He made a list of twenty possible product categories, including computer software, office supplies, apparel, and music. The category that eventually jumped out at him as the best option was books. They were pure commodities; a copy of a book in one store was identical to the same book carried in another, so buyers always knew what they were getting. There were two primary distributors of books at that time, Ingram and Baker and Taylor, so a new retailer wouldn’t have to approach each of the thousands of book publishers individually. And, most important, there were three million books in print worldwide, far more than a Barnes & Noble or a Borders superstore could ever stock.
We all know how Amazon became a success in the dotcom boom. What isn’t quite so well remembered are the imitators: pets.com, eToys.com, and other websites that sprang up trying to be the Amazon for pets or toys. These businesses tried to copy Amazon’s visible activities of selling a category of product online, without understanding the context, business model, and reason for choosing books to sell. The x for y phenomenon is still going strong today. A few years ago it used to be the Facebook or Pinterest of Y. Today it’s the Uber for Y. These companies copy the visible parts of a business, but they can’t copy the context and underlying reasons for a business’s success.
The same thing happens when companies copy design. It’s one thing to copy the trade dress of a product, and this takes some effort and thought. But you can’t copy the thinking that went into the design of a product, and you’re likely to miss the small touches when you do so.
A few years ago I read how Stripe handles group email with Google Group lists, and I wanted to replicate it at my job. I imagined how efficient this would let us be in our communications and how it would help us deal with projects much more efficiently. But in the back of my head I had a nagging feeling that it wouldn’t work for us. At the time I didn’t know exactly why, but I parked the idea. Looking back, I understand it now. I was trying to take a practice that had grown out of Stripe’s specific company culture and values and transplant it into my own workplace, with completely different values and world views. I could copy the system but I couldn’t copy the context.
When looking to learn from another business, instead of looking at the superficial exterior that is easy to see and understand, we should look at the core of why a product or process exists, and why people use it. Only once we have that understanding can we see how to apply it to our own contexts, or look for new opportunities based on that insight. Clayton Christensen’s ‘Jobs to be done’ theory is a powerful tool for doing this.
A context-free grammar may be useful for describing regular expressions and automata, but it’s not so useful for analysing businesses, products, and processes.