Update README

commit 96a926835d (parent 71f131c2b7)
Date: 2025-10-13 10:46:27 -07:00
Committed by: GitHub

README | 12 ------------

@@ -1,9 +1,3 @@
# LLM Algorithmic Benchmark
This repository contains a suite of difficult algorithmic tests to benchmark the code generation capabilities of various Large Language Models.
The tests are run automatically via GitHub Actions, and the results are updated in this README.
## Configuration
Set the percentage of tests to run during the benchmark. 100% runs all tests.
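
As context for the configuration text above, here is a minimal sketch of how percentage-based test selection might work; the function name, fixed seed, and sampling strategy are assumptions, not taken from this repository:

```python
import math
import random

def select_tests(all_tests: list[str], percentage: int, seed: int = 0) -> list[str]:
    """Pick a reproducible subset of tests; 100 selects everything."""
    if not 0 < percentage <= 100:
        raise ValueError("percentage must be in (0, 100]")
    count = math.ceil(len(all_tests) * percentage / 100)
    rng = random.Random(seed)  # fixed seed keeps partial runs comparable across commits
    return sorted(rng.sample(all_tests, count))
```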
@@ -21,9 +15,3 @@ google/gemini-2.5-pro
anthropic/claude-sonnet-4.5
openai/gpt-5-codex
<!-- MODELS_END -->
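
The `<!-- MODELS_END -->` comment above suggests the model list is machine-read from the README by the benchmark automation. A minimal sketch of such a parser, assuming a matching `<!-- MODELS_START -->` marker that is not visible in this hunk:

```python
import re
from pathlib import Path

# The MODELS_START marker is an assumption; only MODELS_END appears in this diff.
MODELS_BLOCK = re.compile(r"<!-- MODELS_START -->\n(.*?)<!-- MODELS_END -->", re.DOTALL)

def read_models(readme: Path = Path("README")) -> list[str]:
    """Return the model slugs listed between the marker comments."""
    match = MODELS_BLOCK.search(readme.read_text())
    if match is None:
        raise ValueError("model marker block not found in README")
    return [line.strip() for line in match.group(1).splitlines() if line.strip()]
```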
## Benchmark Results
Live benchmark results, including pass/fail status and code generation time, are available on our [results page](https://multipleof4.github.io/benchmark/).
The results are updated automatically via GitHub Actions.
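
A minimal sketch of how an Actions job might publish one run's results for that page; the docs/results.json path and the record schema are assumptions, not confirmed by the repository:

```python
import json
import time
from pathlib import Path

def write_results(results: list[dict], out: Path = Path("docs/results.json")) -> None:
    """Persist one benchmark run as JSON for the static results page."""
    payload = {
        "generated_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        # each record might look like {"model": ..., "test": ..., "passed": ..., "seconds": ...}
        "results": results,
    }
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(json.dumps(payload, indent=2))
```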