Pierre Chambart, Fabrice Le Fessant, Vincent Bernardoff
This is a cool idea for ensuring quality in the compiler in the face of new and rapid development, at least with respect to performance. Users can submit microbenchmarks along with assumptions about the results of those benchmarks. These can be used during development of compiler patches to watch for regressions. Not only that, but if you're a user who submitted a microbenchmark, you can be notified at a future date if a merged compiler patch invalidates it. Not the best thing in the world when stuff slows down, but at least you'll know about it. You can find the collection of microbenchmarks here if you'd like to contribute.
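As a rough illustration only (this is not the actual harness API, which lives in the microbenchmark repository; the function, bound, and timing method here are all made up), a microbenchmark bundled with an assumption about its result might look something like this:

```ocaml
(* Hypothetical sketch of a microbenchmark plus a submitted assumption.
   [Sys.time] gives processor time in seconds from the OCaml stdlib. *)
let time f =
  let t0 = Sys.time () in
  let r = f () in
  (r, Sys.time () -. t0)

(* The workload being benchmarked (illustrative only). *)
let rec fib n = if n < 2 then n else fib (n - 1) + fib (n - 2)

let () =
  let (_, elapsed) = time (fun () -> fib 28) in
  Printf.printf "fib 28: %.4fs\n" elapsed;
  (* The "assumption" submitted with the benchmark: the run stays under
     some bound on the reference machine. A compiler patch that breaks
     this is the kind of regression that would trigger a notification. *)
  assert (elapsed < 5.0)
```

The real system presumably runs many iterations and does statistical smoothing; the point is just that a benchmark pairs a workload with an expectation that can later be checked against new compiler versions.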
But to take it a step further, the OCaml community can also collect a set of "macro" benchmarks: bigger, longer-running programs. OCamlPro has set up a repository to collect these and runs the benchmarks with almost every combination of compiler flags imaginable. Once a benchmark goes through this system, you can look at a table of results that tells you how much of a performance speedup (or slowdown) enabling or disabling a single flag produces. This has a very similar feel to the opam publishing process! Here is the repository that collects those benchmarks.
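To make the table entries concrete: a per-flag speedup is just the ratio of the baseline timing to the timing with that flag toggled. A toy sketch, with made-up timings (not output from the actual system):

```ocaml
(* Toy sketch: the speedup reported for a single flag is the ratio of
   the baseline run time to the run time with the flag enabled.
   Timings here are invented for illustration. *)
let speedup ~baseline ~with_flag = baseline /. with_flag

let () =
  (* e.g. 2.0s without the flag, 1.6s with it: a 1.25x speedup *)
  Printf.printf "speedup: %.2fx\n" (speedup ~baseline:2.0 ~with_flag:1.6)
```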