| elm-version  | 0.18.0 <= v < 0.19.0    |
| Committed At | 2018-01-03 20:04:44 UTC |
Run microbenchmarks in Elm.
Here's a sample, benchmarking `Array.Hamt`:
```elm
import Array
import Array.Hamt as Hamt
import Benchmark exposing (..)


suite : Benchmark
suite =
    let
        sampleArray =
            Hamt.initialize 100 identity
    in
        describe "Array.Hamt"
            [ -- nest as many descriptions as you like
              describe "slice"
                [ benchmark "from the beginning" <|
                    \_ -> Hamt.slice 50 100 sampleArray
                , benchmark "from the end" <|
                    \_ -> Hamt.slice 0 50 sampleArray
                ]

            -- compare the results of two benchmarks
            , Benchmark.compare "initialize"
                "HAMT"
                (\_ -> Hamt.initialize 100 identity)
                "core"
                (\_ -> Array.initialize 100 identity)
            ]
```
This code uses a few common functions:
- `describe` to organize benchmarks
- `benchmark` to run benchmarks
- `compare` to compare the results of two benchmarks
For a more thorough overview, I've written an introduction to elm-benchmark.
You should keep your benchmarks separate from your code since you don't want the elm-benchmark code in your production artifacts.
This is necessary because of how `elm-package` works; it may change in the future.
Here are the commands (with explanation) that you should run to get started:
```sh
mkdir benchmarks                              # create a benchmarks directory
cd benchmarks                                 # go into that directory
elm package install BrianHicks/elm-benchmark  # get this project, including the browser runner
```
You'll also need to add your main source directory (probably `../src`) to the `source-directories` list in `benchmarks/elm-package.json`. If you don't do this, you won't be able to import the code you're benchmarking!
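For example, the relevant part of `benchmarks/elm-package.json` might look like this (a sketch; the exact paths depend on your project layout):

```json
{
    "source-directories": [
        ".",
        "../src"
    ]
}
```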
`Benchmark.Runner` provides `program`, which takes a `Benchmark` and runs it in the browser.
To run the sample above, you would do:
```elm
import Benchmark.Runner exposing (BenchmarkProgram, program)


main : BenchmarkProgram
main =
    program suite
```
Compile this file and open the result in your browser to start the benchmarking run.
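Assuming the runner above lives in a file named `Main.elm` (the file name is just an example), the compile step looks something like:

```sh
elm-make Main.elm --output benchmarks.html
```

Then open `benchmarks.html` in your browser to see the runner.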
Some general principles:
- When you're speeding up a function, keep the old implementation around and use `compare` to measure your progress (see the sketch below).
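A minimal sketch of that workflow, assuming two hypothetical versions (`oldSum` and `newSum`) of the same function:

```elm
import Benchmark exposing (Benchmark)


-- hypothetical old and new implementations of the same function
oldSum : List Int -> Int
oldSum =
    List.foldr (+) 0


newSum : List Int -> Int
newSum =
    List.foldl (+) 0


progress : Benchmark
progress =
    Benchmark.compare "sum"
        "old (foldr)"
        (\_ -> oldSum (List.range 1 1000))
        "new (foldl)"
        (\_ -> newSum (List.range 1 1000))
```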
Goodness of fit is a measurement of how well our prediction fits the measurements we have collected. You want this to be as close to 100% as possible.
elm-benchmark will eventually incorporate this advice into the reporting interface. See Issue #13.
For more, see Wikipedia: Goodness of Fit.
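For intuition, this kind of fit is commonly expressed as the coefficient of determination (a sketch; elm-benchmark's exact computation may differ):

$$
R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}
$$

where \(y_i\) are the measured times, \(\hat{y}_i\) the predicted times, and \(\bar{y}\) their mean; values near 1 (100%) mean the prediction explains the measurements well.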
The benchmarks aren't run at the same time, but they look like it because we interleave runs and only update the UI after collecting one sample of each. Keep reading for more on why we do this!
When we measure the speed of your code, we take the following steps:
If the run contains multiple benchmarks, we interleave sampling between them. This means that given three benchmarks we would take one sample of each and continue in that pattern until they were complete.
We do this because the system might be busy with other work when running the first, but give its full attention to the second and third. This would make one artificially slower than the others, so we would get misleading data!
By interleaving samples, we spread this offset among all the benchmarks, which levels the playing field and gives us better data.
elm-benchmark is licensed under a 3-Clause BSD License.