Background

An open-source benchmark of optimization solvers on representative problems from the energy planning domain.

Built by Open Energy Transition, with funding from Breakthrough Energy and contributions from the community.
Benchmarks
WHAT DO WE HAVE?
Our platform consists of open, community-contributed benchmark problems from various energy modelling frameworks. Our open-source benchmarking infrastructure runs them on multiple versions of leading solvers across several hardware configurations, to gather insights into how performance varies with benchmark size, computational resources, and solver evolution.
[Live counters: model frameworks · benchmarks · solvers]
Mission
WHO IS IT FOR?
This website provides data and insights for all participants in the green energy transition.
Solver Developers
Improve your solver algorithms and performance using realistic benchmarks that are representative of energy planning problems
Benchmark Set
Energy Modellers
Use our performance data to pick the best solver for your application domain, hardware constraints, and budget
Compare Solvers
Donors & Stakeholders
Track the evolution of solver performance over time, and maximize the potential return on your investment
Solver Performance History
Methodology
HOW DO WE BENCHMARK?
We run the benchmarks on cloud virtual machines (VMs) for efficiency and cost reasons, and we have validated that the measured runtimes have acceptable error margins. We use a custom-built benchmarking infrastructure based on Python and OpenTofu that is open, transparent, and fully reproducible, meaning you can also use it to run your own benchmarks!
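As an illustration of the core measurement, here is a minimal sketch in Python. It assumes a hypothetical `highs` binary on the PATH and a local `model.mps` file; the actual infrastructure additionally provisions VMs with OpenTofu and sweeps over solvers, versions, and hardware configurations.

# Minimal sketch of a single benchmark measurement, assuming a `highs`
# binary on PATH and a local `model.mps` file (both hypothetical here).
import subprocess
import time

def time_solver(command: list[str]) -> float:
    """Run one solver invocation and return wall-clock runtime in seconds."""
    start = time.perf_counter()
    subprocess.run(command, check=True, capture_output=True)
    return time.perf_counter() - start

if __name__ == "__main__":
    runtime = time_solver(["highs", "model.mps"])
    print(f"Runtime: {runtime:.2f} s")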
Read more about our methodology, caveats, and known issues here:
Contributions
CHECK OUT OUR CODE, JOIN THE EFFORT!
We accept community contributions of new benchmarks, new or updated solver versions, and feedback on the benchmarking methodology and metrics via our
GitHub repository.

Contribute now
Questions
FAQ
Why do we need a new benchmark set?
While well-known benchmark sets such as the Mittelmann benchmarks (https://plato.asu.edu/bench.html) or MIPLIB (https://miplib.zib.de/) already exist, there is not yet a benchmark set that focuses on up-to-date, representative problems from the energy planning domain. This is a crucial missing piece: it enables solver developers to design new algorithms and improve performance on energy models, thereby accelerating key technologies used to plan and implement the energy transition. By building an open-source, transparent, and reproducible platform, we maximize our impact: modellers can submit new benchmark instances, and solver developers can reproduce and use our benchmarks for development. Our website offers numerous interactive dashboards that allow users to perform fine-grained analysis depending on their application domain and features of interest.
Which solvers are on the platform?
The aim of this project is to compare and spur development in open-source optimization solvers, and to track the gap between open-source and proprietary solvers on problems of interest to the energy planning community. We currently have 5 solvers on the platform: 4 popular open-source solvers and a single commercial proprietary solver. We welcome the addition of any open-source solver to the platform, and can support community contributions via pull requests. As we do not wish this platform to become a competition between commercial solvers, we restrict it to a single proprietary solver, currently Gurobi, included by direct agreement. Other proprietary solvers exist, and users are welcome to use our benchmarking tools to benchmark their own problems on all available solvers.
Why do we run benchmarks on the cloud?
We chose to run benchmarks on the cloud for several reasons: it is more cost-efficient than physical machines or bare-metal cloud servers; it allows us to run benchmarks in parallel, which speeds up our runs and lets us scale to many more benchmark instances; it is automatable using infrastructure-as-code; it is transparent and reproducible by anyone with a cloud account; and it reflects the experience of most energy modellers, who use cloud compute or shared high-performance computing clusters. We are aware that runtimes vary depending on the other workloads running in the same cloud zones, and we have run experiments to estimate the resulting error. We estimate that 99% of our benchmark instances yield the same solver ranking as they would on a bare-metal server.
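To make the noise estimate concrete, here is a minimal sketch of how runtime variability on shared cloud VMs can be quantified: repeat the same solver run several times and report the spread. The five runtimes below are made-up illustrative values, not measured data.

# Sketch: estimate cloud runtime noise from repeated runs of one solver
# on one benchmark. The runtimes here are hypothetical examples.
import statistics

def runtime_spread(runtimes: list[float]) -> tuple[float, float]:
    """Return mean runtime and relative standard deviation (noise estimate)."""
    mean = statistics.mean(runtimes)
    rel_std = statistics.stdev(runtimes) / mean
    return mean, rel_std

mean, rel_std = runtime_spread([112.4, 115.1, 110.9, 113.7, 114.2])
print(f"mean = {mean:.1f} s, relative std dev = {rel_std:.1%}")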
What nomenclature do we use?
We use the following nomenclature on this platform. An energy modelling framework, e.g. PyPSA or TIMES, is a software system that lets one input country- or region-specific data and model the energy system of interest. An energy model, e.g. eTIMES-EU or TIMES-NZ, is an instantiation of a modelling framework for a particular application or study. A benchmark (problem) is a single model scenario of an energy model, captured as an LP or MPS file that is given as input to an optimization solver. To study the scaling behaviour of solvers, and to provide solver developers with smaller versions of problems of interest, we group problems that differ only in size parameters, such as spatial or temporal resolution, as size instances of the same benchmark. The full collection of benchmarks on our platform is the benchmark set.
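For illustration, this hierarchy could be captured in code roughly as follows; the class and field names are hypothetical and are not the platform's actual schema.

# Sketch of the nomenclature as a data model; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class SizeInstance:
    spatial_resolution: str   # e.g. number of network nodes
    temporal_resolution: str  # e.g. "1h" or "3h" time steps
    path: str                 # LP or MPS file given to the solver

@dataclass
class Benchmark:
    framework: str            # e.g. "PyPSA", "TIMES"
    model: str                # e.g. "eTIMES-EU", "TIMES-NZ"
    scenario: str             # the model scenario this problem captures
    instances: list[SizeInstance] = field(default_factory=list)

# The benchmark set is simply the full collection of benchmarks:
benchmark_set: list[Benchmark] = []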
Contact
GET IN TOUCH
If you are a developer or are familiar with GitHub, please open an issue for any feedback and suggestions!
Otherwise, you can write to us using this form.