Working from home might genuinely be the ideal environment for those closest to the introvert end of the spectrum, and I think those are the people who form angelic choirs of blog posts asking if you have met their lord and savior, the Fortress of Infinite Solitude, Home Office Edition.
Shockwaves rang out through the Clojuresphere today with the news that Datomic is changing their licensing to drop the per-process limits. This is big news if you were limited in the number of processes you wanted to run.
The major changes in a nutshell:
There are now no limits on process count
Datomic Starter is now limited to one year of updates; after that you either pay $5,000 for Datomic Pro, or stay on whichever version was current when your updates expired.
Datomic Starter can now run the Memcached cache.
All Datomic Pro licenses are now $5,000, and annual maintenance is also $5,000/year.
If I’ve got any of the following wrong, please get in touch via email and I’ll update my post.
The old pricing model
Datomic launched in March 2012 with a paid option, and in July 2012 added Datomic Free. In November 2013, Cognitect launched Datomic Pro Starter Edition. The old pricing page is archived on archive.org. This model was easy to understand, as it mapped well to existing database licensing patterns: ‘perpetually free with limitations’ or ‘paid, without limitations, on a per-node basis’.
Free (as the name indicates)
Limited to storing data in memory on the transactor, or on the transactor’s disk.
Freely redistributable (i.e. in on-premise or open source software)
Only able to run two peers (clients) and a single transactor. If the single transactor fails, then you won’t be able to write to the database until another one starts. Reads will still continue.
Datomic Pro Starter Edition
Support for all storage backends (SQL, DynamoDB, Cassandra, Riak, Couchbase, Infinispan)
Limited to 2 peers and a single transactor
Limited to 1 year of maintenance and software updates, though you were able to renew your Datomic Pro Starter Edition license each year for free (more on this later).
When your updates expire, you can continue using that version of Datomic, but you won’t be able to use any future versions.
Support for all storage backends
Able to run a second transactor in HA standby to take over if the first one fails or for rolling updates.
Ability to use Memcache to cache segments rather than all peers needing to talk directly to the storage backend.
Support is included while maintenance is current.
Pricing scaled linearly from $3,000 for five processes (this includes peers and transactors) up to 30 processes for $16,000. You could upgrade later to higher tiers for the price difference between the old license and the new license.
Annual maintenance for support and updates was half of the upfront license price
If an organisation had special needs (more processes, custom EULA, 24x7 support, redistributing Datomic, etc.), you could talk to Cognitect to negotiate terms.
The new model
All versions of Datomic apart from Free now support unlimited peers, high availability, Memcached support, and all storages (more on storage later). This is a significant change, as you can now use Datomic in an almost unlimited fashion for free. There is also a new Client API which is suitable for smaller, short-lived processes, like microservices and serverless computing. The changes are smart, as they free users from having to architect their systems around their Datomic license limitations. The new pricing model rearranges the tiers. There is now Datomic Free (unchanged), Datomic Starter, Datomic Pro, and Datomic Enterprise.
Similar intent to the previous Datomic Pro Starter
Maintenance and updates limited to 1 year. However, based on discussion on Hacker News, it seems that you can no longer renew your Datomic Starter license. This means that you will need to pay for Datomic Pro to get updates after one year. You can still use whichever version was available when your updates expire. Discussion in the #datomic Slack channel on clojurians matches up with this too.
Update: Alex Miller said that you can sign up for multiple Datomic Starter licenses for different systems you’re running.
Similar intent to the previous Datomic Pro
$5,000/year per system, including maintenance and updates. For organisations using 16 or fewer processes, maintenance will now be more expensive (previously $1,500 - $4,500 depending on process count).
2 Day Business-Hours-Only Support. The former Datomic Pro didn’t have a published SLA for support, but I suspect that this is just formalising what was previously there.
Enterprise integration support (professional services)
Negotiated license terms
While this is a new tier in the pricing grid, these options were available as a “Contact Us” note on the former Datomic Pro.
Datomic has a snazzy new documentation site. It also looks like, as part of the licensing changes, Riak, Couchbase, and Infinispan are now considered legacy storages and are only available under an enterprise license. Standard editions of Datomic only support SQL, DynamoDB, and Cassandra. This change hasn’t been mentioned on the Datomic mailing list or release notes, but probably will be soon.
There is a new customer feedback portal where you can suggest features.
If you are a Datomic Pro user then your maintenance is probably going to be higher, although in absolute terms it’s still not a lot compared to developer salaries. If you were on Datomic Pro Starter and want to stay current, then you are now looking at moving to Datomic Free, or paying $5,000/year for Datomic Pro. If you were using Riak, Couchbase, or Infinispan then it seems like you’ll need to get Datomic Enterprise.
Datomic from the beginning has always felt like a database that understood the Cloud, and the zeitgeist of computing. It supported AWS DynamoDB and CloudFormation from early on, and their architecture has always felt well suited to cloud systems. The license changes to accommodate the trend towards microservices and serverless computing are a continuation of that.
I agree with these statements, and I disagree with those.
However, a great thinker who has spent decades on an unusual line of thought cannot induce their context into your head in a few pages. It’s almost certainly the case that you don’t fully understand their statements.
Instead, you can say:
I have now learned that there exists a worldview in which all of these statements are consistent.
And if it feels worthwhile, you can make a genuine effort to understand that entire worldview. You don’t have to adopt it. Just make it available to yourself, so you can make connections to it when it’s needed.
Before we get started, I want to be clear: I don’t support tobacco companies, cluster bomb manufacturers, nuclear weapons manufacturers, or the other socially harmful businesses mentioned in the recent brouhaha about Kiwisaver investments in these companies. However, I think the NZ Herald’s reporting on this was misleading: it went in search of a good story at the expense of accuracy.
To recap: the Herald has recently been reporting on Kiwisaver, the NZ government retirement savings scheme (like an IRA). Their big headline was that the major Kiwisaver providers have invested $150 million into companies that produce cluster bombs, landmines, and other socially harmful products, and that they may be breaking a law banning investments in cluster bomb manufacturers. When you look into the details, their bold claims don’t look so strong.
Here are a few points that should have been included in the reporting:
The biggest problem is that the Herald doesn’t distinguish between active management (where fund managers or algorithms choose particular businesses to invest in) and passive management that tracks an index (say agriculture, energy, or the US stock market). If some of these funds were directly invested in these companies, and banks really were breaking the law, that would be a real story. It’s not clear to me from the reporting whether there was any direct investment in these companies, investment in index funds containing them, or investment in broad market indexes. You can argue that both are bad, but actively investing in these companies is very different from passively investing in an index fund that tracks the total stock market. The Herald implies that these providers deliberately or knowingly invested in the companies, but I couldn’t see this from the data they provided.
The total amount invested in Kiwisaver is $32.5 billion, and the amount invested in socially harmful businesses is $150 million. This works out to around 0.46% of the total funds invested. This is not nothing, but it is a tiny fraction of the total assets invested.
Kiwisaver funds that aren’t investing in these companies are likely avoiding them through active management of stocks. In general, this results in higher fees and, depending on who you ask, lower performance (especially once fees are taken into account).
The charts that NZ Herald produced all give absolute numbers for how much is invested in socially harmful companies, without giving the percentage invested. Westpac, ANZ, and ASB all feature high on the list of investors in these harmful companies, but no context is given for what percentage of each provider’s investments this represents. Westpac has 0.76% in socially harmful businesses, ANZ has 1.27%, and ASB has 0.13%.1 Without more details on the total amount invested in stocks (active and passive) by each fund, it’s hard to tell where and why the differences here come from.
The amount invested in the socially harmful parts of Northrop Grumman2 and General Dynamics 3 are very small, compared to their overall businesses.
Data journalism can be used to illuminate complex topics and explain them for a wide audience. It seems in this reporting that the story came first, and the numbers were presented in a misleading way to back it up. There is a nuanced discussion that could have been had about the ethics of index funds, and socially responsible investing, but that wasn’t what we got here.
N.B.: I may have read the article wrong, and all of the figures provided were active investments (it’s not clear at all to me which is being included). If that’s the case my conclusion would be quite different.
Northrop Grumman is blacklisted by the NZ Superannuation Fund for selling mines. They had sales last year of $23.5 billion: $15 billion in products and $10.5 billion in services, split amongst Aerospace systems, Electronic systems, Information systems, and Technical services. Northrop Grumman doesn’t break out a landmine line item (and only mentions mines once in their annual report, to say they have nothing to disclose about mine safety), but it looks like it is part of the Electronic systems segment, which did $5.5 billion in product sales and $1.3 billion in services (23% of total sales). Electronic systems also includes radar, space intelligence, navigation, and land & self protection systems (probably where mines go, but this also includes missiles, air defence, etc.). ↩
General Dynamics is blacklisted by the NZ Superannuation Fund for selling cluster bombs. They had $31.4 billion in sales in 2015. They also don’t break out a cluster bomb line item (I’m sensing a pattern here), but it probably fits into the Combat Systems group, which had $5.6 billion in sales (18%), and which also includes wheeled combat and tactical vehicles, tanks, weapons systems, and maintenance. ↩
Every week in The REPL I have a section called: “People are worried about Types”. This is based on a section that Matt Levine (my favourite columnist) includes in his daily column “People are worried about bond market liquidity”, and the related “People are worried about unicorns”, and “People are worried about stock buybacks”.
The title is a joke (as is his); I don’t really think there are people worried about types, but it is interesting to see a small but steady flow of type-related (schema, spec, etc.) articles relating to Clojure each week.
I’ve started a weekly newsletter about Clojure and ClojureScript. Each week will have a curated selection of links (both recent, and older) and a sentence or two about why I think they’re worth reading. You can sign up at therepl.net.
One of the datatypes Clojure gives you is the Record. It is used in both Clojure and ClojureScript, but due to implementation differences, the syntax to use records from one namespace in another is different.
Clojure generates a Java class for every defrecord that you create. You need to :import them, just like you would a standard Java class. Let’s say we have a small music store app that sells vinyl:
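A minimal sketch of what such a record might look like (all names here are hypothetical):

```clojure
(ns music-store.stock)

;; Clojure generates a Java class for this record. Its fully qualified
;; class name is music_store.stock.Vinyl (dashes become underscores).
(defrecord Vinyl [artist album price])
```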
Because Clojure is generating Java classes, record class names follow Java naming conventions: dashes in the namespace are converted to underscores.
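For instance, assuming a hypothetical (defrecord Vinyl [artist album price]) in a music-store.stock namespace, the two ways to refer to the generated class look like this:

```clojure
;; First: :import the class and use the short name.
(ns music-store.checkout
  (:require [music-store.stock])         ; generates the class
  (:import [music_store.stock Vinyl]))   ; note the underscore

(defn restock [artist album price]
  (Vinyl. artist album price))           ; Java-interop constructor

;; Second: skip the :import and write the fully qualified class name,
;; e.g. music_store.stock.Vinyl, everywhere the class is mentioned.
```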
You should probably prefer the first example where the record is aliased, over the second one where the record is fully qualified.
Clojure dynamically generates the class for the record when that namespace is required (ignoring AOT compilation). If your code never requires the namespace that the record lives in, then the class won’t be created. This will cause ClassNotFoundException errors when you try to import it. In our trivial example, that would mean changing the ns import to:
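A sketch of the corrected ns form, again with hypothetical names: the :require guarantees the record’s namespace is loaded (and its class generated) before the :import runs:

```clojure
(ns music-store.checkout
  (:require [music-store.stock])        ; load the namespace first...
  (:import [music_store.stock Vinyl])) ; ...so this class exists
```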
Most of the time I don’t have any issues with git’s .gitignore system but occasionally it doesn’t behave as I expect. Usually, I add and remove lines until I get the result I’m after and leave feeling vaguely unsatisfied with myself. I always had a nagging feeling that there must be a smarter way to do this, but I was never able to find it. Until today!
Enter git check-ignore. You pass it files on the command line, and it tells you whether they’re ignored, and why. Sounds pretty perfect, right? I won’t give you a rundown of all the options, as the man page is surprisingly readable (especially for a git man page!). However, the 30-second version is git check-ignore --verbose --non-matching <filepath to check>:
```shell
$ echo "b" > .gitignore
$ # a is a file that would not be ignored
$ git check-ignore --verbose --non-matching a
::	a
$ # b is a file that would be ignored
$ git check-ignore --verbose --non-matching b
.gitignore:1:b	b
```
--verbose prints the explanation of why a file was ignored. --non-matching makes sure the file is printed out, even if it won’t be ignored.
If a file is going to be ignored, check-ignore prints out the path to the gitignore file and line that matched. Interestingly, the files don’t even need to exist, git just checks what it would do if they did exist.
Using check-ignore I was able to solve my problem faster, and learn a bit more about git’s arcane exclusion rules. Another handy tool to add to my unix utility belt.
I’ve recently been working on a new ClojureScript application as part of a contract, and I was digging around for things to polish before launch. The app was mostly fast, but I noticed that when the main list of content got to around 40 items, it was a little bit slow to render. I also noticed that it seemed like it got almost twice as slow when I added another 10 items. At this point, you might already be having alarm bells go off in your head suggesting what the problem was likely to be. I didn’t, so I dived into the code to look at the part of the app rendering the main list.
I looked over the code path that rendered the list, and wrapped time around a few suspect pieces of code. After a few checks, I found that a sort-by function in the view was the slow part, though it wasn’t immediately clear why sorting a list of 40 items would take a second. We were using sort-by to order items by state, then reverse date order (newest items first).
sort-by takes a custom sort key function. Our key function was doing some date parsing to parse a string into a date, then subtracting the parsed unix time from the current unix time to give an integer value, so the lowest numbers are the most recent dates. I suspected that the date parsing could be the problem, but I wasn’t sure. As an experiment, I disabled all of the date parsing and returned the string directly. My sorting was the wrong way around, but it went from taking 1000 ms to 10 ms: a 100x speedup!
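The original key function isn’t shown here, so this is a reconstruction with invented names, but it captures the shape of the problem: an expensive parse on every call.

```clojure
;; Reconstruction (hypothetical field name): parse an ISO 8601 string,
;; then subtract from the current time so the newest items get the
;; lowest keys. The js/Date parse runs on every invocation.
(defn sort-key [item]
  (- (js/Date.now)
     (.getTime (js/Date. (:created-at item)))))

(sort-by sort-key items)
```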
A standard sort of the dates (which were ISO 8601 strings, e.g. 2016-04-02T08:24:31+00:00) put the oldest dates first in the list. After a few minutes of thinking, I remembered I had recently read the clojure.org guide on comparators. It discusses a reverse comparator:
(fn [a b] (compare b a))
This comparator is exactly the same as a normal comparator, but it will return the opposite result to what the normal one would. Passing this comparator to sort-by kept the 100x speedup, but sorted in the correct order. The list rendered in 10-15 ms, and was now plenty fast enough.
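Putting it together, a minimal sketch with invented sample data: because ISO 8601 strings (with a common offset) sort chronologically as plain strings, comparing b to a gives newest-first without any parsing:

```clojure
(def items [{:created-at "2016-01-15T10:00:00+00:00"}
            {:created-at "2016-04-02T08:24:31+00:00"}
            {:created-at "2016-03-01T12:30:00+00:00"}])

;; The reverse comparator flips the argument order, so the lexically
;; largest (newest) date string comes first.
(sort-by :created-at (fn [a b] (compare b a)) items)
;; => the April item, then March, then January
```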
One question remained though, why was the list getting so much slower to render as I added a few more items to it? Of course reading this now, you Dear Reader are probably thinking to yourself, “aha! ClojureScript’s sorting algorithm will be O(n log(n)), and the slow key function will therefore be called O(n log(n)) times.” It took me a bit more thinking than you (I didn’t have a blog post explaining why my code was slow to work from), but I got to this conclusion in the end too.
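You can watch this happen by counting key-function invocations at the REPL (a small experiment, not code from the app): sort-by folds the key function into the comparator, so it runs on every comparison rather than once per element.

```clojure
(def calls (atom 0))

(defn counted-key [x]
  (swap! calls inc) ; tally each invocation
  x)

(sort-by counted-key (shuffle (range 1000)))

;; @calls ends up well above 1000: a pair of key-fn calls per
;; comparison, and a comparison sort makes O(n log n) comparisons.
;; When the key fn is expensive, precomputing keys once per element
;; (decorate-sort-undecorate) avoids the repeated work.
```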
I really enjoyed this debugging session, as it is not very often that I can both speed up real world code by 100x, and get exposed directly to algorithmic complexity issues. A++, would debug again.
I’m excited to announce I’m starting a new podcast, Decompress, hosted by myself and my long-time friend Nathan Tiddy. Decompress is a technology podcast with a New Zealand perspective. We also talk about the wider world of things that touch technology, and how it relates to our lives.
If you like technology, and podcasts, then you may like this too. You can subscribe through iTunes, or in your podcast client of choice, by searching for “Decompress”.
Specifying standard Clojure types is fairly straightforward, but I recently needed to specify that a value was a goog.date.UtcDateTime. I did a bit of searching on Google and didn’t find anything, but then looked in the company codebase and found it. Here’s a minimal example:
(def Date js/goog.date.UtcDateTime)
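This looks like a Prismatic Schema definition (an assumption on my part; the surrounding context isn’t explicit). Schema treats a class as an instance check, so the def can be used directly inside a schema map. A sketch with an invented Event schema:

```clojure
(ns my-app.schemas
  (:require [schema.core :as s]
            [goog.date.UtcDateTime]))

(def Date js/goog.date.UtcDateTime)

;; Hypothetical schema: Schema checks that :starts-at is an
;; instance of goog.date.UtcDateTime.
(def Event
  {:name s/Str
   :starts-at Date})

(s/validate Event {:name "Launch party"
                   :starts-at (goog.date.UtcDateTime.)})
```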
Mike Hadlow’s blog post about a project that moved to an agile process with PMs exactly mirrored a project I worked on a while ago. If I didn’t know everyone on the team, I would have assumed someone was taking notes about the project I was working on.
Every two weeks we would have a day-long planning meeting where these tasks were defined. We then spent the next 8 days working on the tasks and updating Jira with how long each one took. Our project manager would be displeased when tasks took longer than the estimate, and would immediately assign one of the other team members to work with the original developer to hurry it along. We soon learned to add plenty of contingency to our estimates. We were delivery focused. Any request to refactor the software was met with disapproval, and our time was too finely managed to allow us to refactor ‘under the radar’.
We became somewhat demotivated. After losing an argument about how things should be done more than a few times, you face a pretty clear choice: knuckle down, don’t argue, and get paid; or leave.
The project was not a happy one. Features took longer and longer to be delivered. There always seemed to be a mounting number of bugs, few of which seemed to get fixed, even as the team grew. The business spent more and more money for fewer and fewer benefits.
In this video Stuart Halloway talks about adopting a scientific mindset when debugging. Thinking in this way has absolutely changed the way I now approach debugging. Debugging is such a core skill in software development, but how many developers would rate themselves to be truly great at debugging? Not me yet, but after watching that video I’ve picked up two books: Debugging and Why Programs Fail to learn more and hone this skill. You can see the slides and related resources at Stuart’s wiki page.
Cognitect has released the State of Clojure 2015 results. Justin Gehtland did a good analysis of the quantitative parts of the survey and summarized the comments section at the end. I read most of the comments and got a feel for the zeitgeist of where the Clojure community is looking for improvements. After compliments, error messages and documentation received far and away the most comments. Below are some quotes grouped by subject, to help you get a feel for the responses. You can read all of the comments on the SurveyMonkey page.
I’d love to see elm-like approach to hard tasks like improving on error reporting and documentation. Clojure team is so skilled that it won’t bother even with low hanging fruit. But it could be hurtful in the long run.
Regarding potential future enhancements, improved error reporting would be very helpful. This has been a major stumbling block for our junior developers in particular. Too many error messages require a high level of understanding of Clojure internals to understand. Overall, I’m very appreciative of the well-thought-out design decisions in both Clojure and its ecosystem.
Please fix/finish transitive AOT, and please make error messages better. I know error messages continue to receive this sort of “well, if you wanna make ‘em better, patches welcome…” treatment, but I think it’s a mistake to keep beating up on them, insinuating they don’t really matter. As someone who’s spent a fair amount of time debugging Clojure programs, some of them do matter, and I think the wrong message is being sent to would-be contributors about the value in improving them.
Better error messages from macros! It can be done with PEGs quite easily.
Please improve error messages - Elm developers are a good example to follow.
2+ years with Clojure. First one was strong, but the community has been solving non-problems (transducers, reducers) instead of improving error messages and general debugability.
Highest on my list of annoyances are NullPointerExceptions with missing stack traces. They take disproportionately more time to debug than other failures. We rarely see them in production, but during development they are a constant scourge.
Clojure has the problem that many languages have: the people capable of providing a better UX aren’t interested in solving that problem. (Myself included.) We could learn from the Go core team’s willingness to solve common UX problems (fast compiler, auto-formatter) and other projects like Elm’s compiler work to support good error messages.
clojure.org has undergone a massive visual refresh between the survey being taken and the results being released. I have high hopes that it will become a central source of high-quality, official documentation and ease the pain for newcomers.
Community & resource wise, everything feels extremely federated, in a bad way. Documentation: This is where I had the worst experience with Clojure. Documentation is extremely federated. To understand clojure.zip & clojure.core.logic took me so much work […]
Many things that disappoint you about Clojure at first (“why are errors so awful?”, “why is the contribution process so hard?”, “why don’t you implement this one feature I like?”) fade away as you learn Clojure and the Clojure Way more. Still, it would be nice to have a sort of explain-like-I’m-five FAQ somewhere on clojure.org that covers these non-obvious points, so that people skip the frustration part and understand the reasoning behind those decisions immediately.
CLJS could use better official documentation.
I still don’t know how to get externs working with either lein or boot!! Evaluating code in vim doesn’t always work. Documentation on google are just not reliable because they are outdated or assumes too much pre-knowledge. Applying what’s read in the docs most often do not work, or I just don’t understand. The documentation must really be much more comprehensive and not assume the readers are clojure pros.
Documentation! The clojure.org documentation is so terse that it is unusable for beginners. The clojuredocs.org documentation is confusing for someone who wants the learn the basics. Some examples are too cryptic and appears to be written by very smart people showing off edge uses of the feature they’re describing.
Clojure needs more outreach. Clojure.org or clojuredocs.org needs more tutorials, application-development documentation pointers (For example: “here are your choices if you want to make webapps with Clojure”).
Very happy with progress on cljs.js (tho remains sparsely documented). Much of my pain is related to the compilation step. Lots of errors still happen only after compilation, & not easy to know how to split large code base and load only necessary modules, etc.
Re. documentation: Features added to clojure.core usually have far inferior documentation on initial release than many libraries. Sometimes this is eventually remedied by the community, but sometimes not. This has been the case for reducers, core.async, the new threading macros, and transducers. Even the core library functions didn’t have much example-based documentation until the community took it upon themselves to make clojuredocs. […] Actual documentation, with practical examples, would lead to faster and more widespread adoption of new features.
We just had a Clojure user group meeting here in PDX with several new faces, new to Clojure. […] One person pointed out how the Clojure home page makes no mention of Leiningen, the de-facto standard build tool. The references to running Clojure apps with “java -jar …” are that much more intimidating for anyone coming from a non-Java background. Some of these issues were well described in the “ClojureScript for Skeptics” talk at the Conj. Consensus: people love, or want to love, the language (I do!) but the rough edges have not been smoothed out, and it is still intimidating to get started in Clojure. — Howard
Setup and tooling has historically been another pain point in Clojure. In the ClojureScript world, Figwheel seems to be solving a lot of hard problems (REPL, reloading code) in a way that’s relatively easy to set up, and I’m sure it will get even better this year. As an alternative to emacs, Cursive (which I use) is a great tool for getting a full Clojure development environment set up with minimal hassle.
By far, the biggest complaints I hear from friends and at meetups is the startup time and the work required to get a ClojureScript workflow set up.
[…] EMACS SETUP: IMPOSSIBLE I used and liked emacs 15 years ago and prefer to use emacs for clojure development now, but after a full day of trying I still can’t get all the emacs pieces together. It’s too complex, there are too many half-baked and out-of-date how-to examples. Will the real viable emacs setup tutorial please stand up?
Cursive and its debugger are a Godsend - I couldn’t ever convince myself to use Emacs and all the alternatives were inferior. Cursive made writing and debugging Clojure a really pleasant experience. Same with boot in the ClojureScript world - getting lein + cljsbuild + figwheel working well was a pain. Boot+boot-cljs+boot-reload+boot-cljs-repl is a pleasure. I think those two things will do a lot of good for Clojure and ClojureScript adoption.
Convincing coworkers to adopt Clojure also was a pain point for some of the respondents, and additional marketing materials could help here. Cognitect’s case studies are good, and could be ported to the official Clojure website.
The clj / cljs / om / figwheel websites make Clojure appear like a niche ecosystem. Seems to be time for marketing!
needs better marketing. clojure has bad marketing (great product, bad marketing) typesafe has good marketing (complex product, good marketing) 10gen has great marketing (bad product, great marketing) it is hurting companies wide adoption: “Clojure? Um.. What is it you said?” “Scala, yea, I’ve heard good things about it, it’s solid.” “MongoDB? Oh, yea my teams tell me it’s really good.”
Love clojure and clojureScript. Where I need help is on the “evangelical” side. If we had more people singing the praises of clojure that would help my business secure more clojure business and everyone else’s work as well. This brings in more money and talent which can then be reinvested into tooling, systems, turn-around development time,etc. just look at what’s happened over in the JS world once their reputation improved.
Overall, very happy with the current direction of Clojure. What feels missing is awareness in the market about the good of it as a technology, as the biggest impediment to using Clojure more has been finding customers willing to embrace the risk. Alternatives like Scala managed to have a better name within the industry.
Clojure deserves a better website and documentation.
Still trying to persuade co-workers to use Clojure more.
In my opinion, the Clojure community is one of Clojure’s greatest assets. I’ve found it to be overwhelmingly positive, and there is a massive number of innovative libraries coming from a very young community. There were also some concerns about the contribution process, and how many of the decisions about Clojure happen inside Cognitect.
Many thanks to Rich, Stu, David, Alex and many other people from community and Cognitect. You are doing amazing job. I love the culture of community you created around Clojure: open community with emphasis on learning, design, thinking hard and solving the right problem. It is inspiring place to be in.
What an amazing language. Relatively frequent, consistently stable releases. A pleasure to use. A friendly, smart community. I feel very lucky to be a Clojure user! Thank you for all of your hard work.
The Clojure contrib process frustrates me more than any technical or community aspect of the language.
Clojure gets a lot right, but as has been repeatedly discussed, the pace of evolution and the maintainership’s dim view of 3rd party non-bugfix work flatly leads to worthy but minor work such as type predicates going largely ignored and certainly unmerged. In most open source projects, contributors can impact priorities by giving of their time to support work which isn’t high priority. The work which Nola Stowe did on clojure.string was awesome, and I think we can see more of that if Alex et al. allocate more capacity to working with contributors.
I found some of the recent back-and-forth about tuples to be very disheartening; if I were Zach Tellman I would decide not to bother attempting to make improvements to the core of Clojure. […]
i have become a little worried about the future of the language. The core team are way ahead in terms of design and concepts, but seem to lack any kind of empathy with beginners.
Things I think should be a priority for the long term future of Clojure: […] 3. More open core development process. I worry about the long term future of Clojure when it appears to be driven by internal decisions within a single company (i.e. Cognitect). I am uncomfortable with this approach, and am hesitant to commit fully to Clojure as a language until this becomes more open. Maybe a “Clojure Foundation” would be a good idea?
Thanks to everyone again for another great year in CLJ(S)land. Best community for any language.
I participate in a local meetup group which has helped greatly and I find the community very welcoming (especially being a woman, I’m not finding as many issues as I do with other communities). Thanks!
Keep up the great work! Its been a joy to participate in this community :)
I would like to thank entire Cognitect staff for creating such a great environment and community.
Finding good libraries has not been difficult, but it would be helpful to have some mechanism in the community for evaluating which libraries are viable in the long term as well as which are “best in class”, so to speak. In many cases, I want a library that will provide a useful component for a multi-year project. The library does what I need and is available, but the modifications on github are 3+ years ago. Should I invest in using this library in my core code? Are there better, more current and maintained options? And so forth. Some community aggregation that makes it easy to evaluate libraries in this sense would be very helpful. […]
Libraries: Not sure where the main place is to find clojure libraries. Some are in maven (with no ability to evaluate the library). Clojars also feels incomplete and hard to evaluate. The landing page http://clojure.org/libraries even suggests checking maven, clojars, and github, but it feels like a dead end. Google is a better resource than these, because you can find blogs evaluating the libraries.
I was surprised to see so many comments about static typing, but it was a recurring theme. They can roughly be divided into three groups:
People who were getting pushback from their organisation for using a dynamic language.
People who wanted a type system like Haskell’s.
People interested in core.typed who are waiting on the sidelines until it reaches more maturity.
I was surprised to see that the survey doesn’t include reference to usage of typing or contract tools like core.typed or Schema. I’m interested in understanding how widespread their usage is. Schema’s been an important tool in helping to encourage Clojure’s more widespread use at our enterprise and I’ve been largely happy with it.
Clojure and ClojureScript are amazing projects and the community is pushing the edges of programming today (Om Next, Datomic, cljc). I’ve been close to using both many times, but at this point I don’t think I could leave the expressive syntax or type systems in Haskell and PureScript. I wish they had the same community and dev experience, and am interested in using the Clojure community’s ideas there.
People in my organization feel strongly that dynamic typing won’t scale to projects with more than a few developers. I am concentrating on small projects. Also, the learning curve for Java devs is a bit steep. Like anything else, one has to really want it.
The two things holding back Clojure/ClojureScript use at work are corporate policy and a strong fear of weakly typed production languages by the team.
I love clojure but I also like static type systems. Don’t know if it’s possible to have the two in a way that works well. Haskell and Scala are languages I like for their type systems
The discussion we have been having lately though is around static typing. I have a Haskell background (and a PhD in type systems, even), so I’m aware of the benefits they can provide, but I don’t think they’re a silver bullet either. Nonetheless, improved static support seems incredibly useful. A gradual-typing approach does seem ideal, allowing you to specify either types up front, or to add annotations as the design solidifies, perhaps. I’m eagerly watching Ambrose’s work, and perhaps prismatic-schema is enough — it would be great to get some of the core community’s thoughts on the matter though.
I really, really, really want to see some form of official support for static typing. This could be core.typed, but it needs to be blessed by the core Clojure dev team and development to allow core.typed (and other future tools) to better integrate with the compiler is necessary. Type annotations provide helpful documentation and (optional) static type checking can help speed up development for those already annotating for documentation purposes by detecting type errors before executing programs. I see the lack of support for (optional) static typing as Clojure’s biggest weakness and the main reason I have considered moving to a different language.
Like Justin said, over half of the responses were happy or positive. Here are a few to finish on an upbeat note. I’m looking forward to seeing what 2016 brings for Clojure, and especially ClojureScript. The tooling story seems to be coming together quite well. If enhanced error messages make it into Clojure 1.9, and the documentation story improves, a lot of the pain in using Clojure could be minimised.
(= “Clojure” “Awesome!”) => true
Thanks for everything that you do. :)
Love it! Thank you for everything that you do.
I just love programming in clojure.
Very nice thing. I like to program in this Lisp a lot
I would never have written PartsBox.io if it wasn’t for Clojure and ClojureScript. Only these tools allow for such a productivity increase that a single person can write a complete medium-size application.
The Everything Store has a story in it about how Jeff Bezos came up with the idea for Amazon, an ‘everything store’.
Bezos concluded that a true everything store would be impractical—at least at the beginning. He made a list of twenty possible product categories, including computer software, office supplies, apparel, and music. The category that eventually jumped out at him as the best option was books. They were pure commodities; a copy of a book in one store was identical to the same book carried in another, so buyers always knew what they were getting. There were two primary distributors of books at that time, Ingram and Baker and Taylor, so a new retailer wouldn’t have to approach each of the thousands of book publishers individually. And, most important, there were three million books in print worldwide, far more than a Barnes & Noble or a Borders superstore could ever stock.
We all know how Amazon became a success in the dotcom boom. What isn’t quite so well remembered are the imitators: pets.com, eToys.com, and other websites that sprang up trying to be the Amazon for pets or toys. These businesses tried to copy Amazon’s visible activities of selling a category of product online, without understanding the context, business model, and reason for choosing books to sell. The x for y phenomenon is still going strong today. A few years ago it used to be the Facebook or Pinterest of Y. Today it’s Uber for Y. These companies are copying the visible parts of a business, but they can’t copy the context and underlying reasons for a business’s success.
The same thing happens when companies copy design. It’s one thing to copy the trade dress of a product, and this takes some effort and thought. But you can’t copy the thinking that went into the design of a product, and you’re likely to miss the small touches when you do so.
A few years ago I read how Stripe handles group email with Google Group lists and I wanted to replicate it at my job. I imagined how efficient this would let us be in our communications and how it would help us deal with projects much more efficiently. But in the back of my head I had a nagging feeling that it wouldn’t work for us. At the time I didn’t know exactly why, but I parked the idea. Looking back I understand it now. I was trying to copy a practice that had grown out of Stripe’s specific company culture and values and transplant it into my own workplace with completely different values and world views. I could copy the system but I couldn’t copy the context.
When looking to learn from another business, instead of looking at the superficial exterior that is easy to see and understand, we should look at the core of why a product or process exists, and why people use it. Only once we have that understanding, can we see how to apply it to our own contexts or look for new opportunities based on that insight. Clayton Christensen’s ‘Jobs to be done’ theory is a powerful tool for doing this.
A context-free grammar may be useful for describing regular expressions and automata, but it’s not so useful for analysing businesses, products and processes.
In December, I tried to set up ClojureScript testing with doo, and it went surprisingly well. Everything was running well locally, so I set up a circle.yml file and pushed my changes up to CircleCI. They ran there the first time as well: more surprise! CircleCI have clearly put a lot of energy into getting a good setup for browser testing. I was very pleased until, after a few builds, I started running out of memory while running the tests. CircleCI only allows you 4 GB, which should be plenty for most applications, and certainly for compiling ClojureScript. I tried rerunning the build; sometimes it would complete successfully, and other times it would hit the RAM limit. I tested it on my machine and the tests ran fine, using only ~2 GB of RAM.
I wasn’t quite sure what was going on, so I gave it a break for a few hours. When I came back I ssh’d onto the build machine and ran top while the test was running. I could see my process running and taking more and more memory, getting closer and closer to 4 GB. Then the tests finished and the memory was reclaimed. While I was running top, I saw Mem: 251902904k total and thought that was curious. It looked like it was saying I had 240 GB of memory, but I thought I was only allowed 4?
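top reports memory in kibibytes, so that figure can be sanity-checked with a quick conversion (a throwaway sketch, not from the original post):

```python
# top's "Mem: 251902904k total" is in kibibytes (KiB).
mem_kib = 251_902_904

# Convert KiB -> GiB: divide by 1024 twice.
mem_gib = mem_kib / 1024 / 1024
print(f"{mem_gib:.1f} GiB")  # roughly 240 GiB
```

That is the physical memory of the whole build host, not the 4 GB the container is actually allowed to use.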
This triggered a thought: if the JVM saw that it had 240 GB of memory, then it might easily blow past 4 GB of memory while running a ClojureScript build. I checked the project.clj file and saw that it didn’t have any :jvm-opts set in it. I checked the default JVM memory options on the build server:
2147483648 bytes for the initial heap size (-Xms) converts to 2.15 GB, and 32126271488 bytes for the max heap size (-Xmx) converts to 32.13 GB. Once I saw that, the problem became very clear. The JVM was starting with 2 GB of RAM and had a max heap size of 32 GB. It had no need to GC anywhere near 4 GB, so its memory usage would just keep climbing. Depending on how fast the build ran, my tests were sometimes able to finish before the CI hit the 4 GB limit. I set :jvm-opts ^:replace ["-Xms256m" "-Xmx2g"] and reran the build a few times. It was able to successfully run every time. Success!
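In project.clj the fix looks something like this (the project name and version are illustrative; only the :jvm-opts line comes from the post):

```clojure
(defproject my-app "0.1.0-SNAPSHOT"
  ;; ^:replace tells Leiningen to use exactly these JVM options instead of
  ;; merging them with its defaults, so the build JVM's heap is capped well
  ;; below the CI container's 4 GB limit.
  :jvm-opts ^:replace ["-Xms256m" "-Xmx2g"])
```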
We cannot solve our problems with the same thinking we used when we created them.
— Albert Einstein
In my (relatively short) career as a software engineer, I’ve seen a paradoxical behaviour among coworkers and myself: at the point when people most need better debugging tools, they are the least likely to use them.
This behaviour doesn’t seem to be exclusive to any one language, domain, or operating system. The pattern I see is:
Get stuck with a bug
Try to debug the problem using the methods you’re comfortable with
Keep going, getting more frustrated/nervous/impatient
Realise that you’re running out of time, and double down even harder on your current methods.
It’s steps 3 and 4 that are most interesting to me, and where the paradoxical behaviour occurs. I can think of a few reasons for why people do this:
When you are debugging something, you will usually spend some time debugging it with your normal methods. By the time you realise that you are stuck, you may be behind schedule, so you double down on your existing method.
When you don’t feel like you have enough time, you are not as likely to explore alternative tools and debugging approaches.
Often when debugging you can feel like the answer is just around the corner (even if it’s still a few hours away).
If you’re already neck deep in one confusing situation, adding a new unfamiliar tool doesn’t seem like it will help.
I’m not sure if there is a good way to combat this problem in the heat of the moment. Two solutions I can see:
Try debugging with someone more experienced than you who is familiar with these tools. As a bonus, explaining the problem to them may help you solve the issue.
Invest in learning new debugging tools ahead of time. Debugging a problem is a really stressful situation, particularly if you are under time pressure, so having your tools and techniques practiced and ready to go beforehand can be helpful here. Athletes and musicians put in hours of practice before they actually perform; I think software developers could take a lesson from them.
I’m going to describe why the security community has failed in the past, I’m going to describe why the security community is failing right now. I’m going to describe why the security community will fail in the future. There will be a symphony of failure, every song will be a sad one, children will be crying, old people will wish that they were dead, middle aged people will be killing the old people, nobody will survive, the only good news is that tonight is Thursday which means that it’s almost Friday, but Friday is going to suck too, it never gets better, as shown by this graph, life only gets worse, time is on the x-axis, worseness is on the y-axis, that line is going up and to the right ladies and gentleman. So, lets talk about computer security.
So Alibaba would go to Yahoo and say, “Hey, we’d like to buy you for negative eight billion dollars” and Yahoo would say, “How about positive seven billion dollars?” and Alibaba would say no but maybe someone else would think that Yahoo is actually worth positive seven billion dollars, and so you’d have a pretty wide bid-ask spread.