Update README
# LLM Algorithmic Benchmark

This repository contains a suite of difficult algorithmic tests to benchmark the code generation capabilities of various Large Language Models.

The tests are run automatically via GitHub Actions, and the results are updated in this README.
## Configuration

Set the percentage of tests to run during the benchmark. 100% runs all tests.
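As an illustrative sketch only (not this repository's actual code), a runner could map that percentage to a deterministic subset of the discovered tests; `TEST_PERCENTAGE`, the fixed seed, and the placeholder test names below are assumptions:

```python
import math
import random

# Hypothetical: the percentage setting described above.
TEST_PERCENTAGE = 100

def select_tests(all_tests: list[str], percentage: int, seed: int = 0) -> list[str]:
    """Return a reproducible subset covering `percentage` percent of the tests."""
    count = math.ceil(len(all_tests) * percentage / 100)
    rng = random.Random(seed)  # fixed seed keeps successive runs comparable
    return sorted(rng.sample(all_tests, count))

if __name__ == "__main__":
    tests = [f"test_{i:03d}" for i in range(20)]  # stand-in for discovered tests
    print(select_tests(tests, TEST_PERCENTAGE))   # 100% selects all twenty
```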
google/gemini-2.5-pro
anthropic/claude-sonnet-4.5
openai/gpt-5-codex
<!-- MODELS_END -->
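Only the `<!-- MODELS_END -->` marker is visible above; the matching `<!-- MODELS_START -->` marker is an assumption in this minimal sketch of how a workflow step might read the model list back out of the README:

```python
import re
from pathlib import Path

# Hypothetical sketch: extract model IDs from the README marker block.
# <!-- MODELS_START --> is assumed; only <!-- MODELS_END --> appears above.
def load_models(readme: str = "README.md") -> list[str]:
    text = Path(readme).read_text(encoding="utf-8")
    match = re.search(r"<!-- MODELS_START -->(.*?)<!-- MODELS_END -->", text, re.S)
    if match is None:
        raise ValueError("model marker block not found in README")
    return [line.strip() for line in match.group(1).splitlines() if line.strip()]

# e.g. ["google/gemini-2.5-pro", "anthropic/claude-sonnet-4.5", "openai/gpt-5-codex"]
```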
## Benchmark Results

Live benchmark results, including pass/fail status and code generation time, are available on our [results page](https://multipleof4.github.io/benchmark/).

The results are updated automatically via GitHub Actions.
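For illustration, one result row could carry exactly the fields the results page reports; the record shape and field names below are assumptions, not the benchmark's actual schema:

```python
from dataclasses import dataclass

# Hypothetical result record: pass/fail status plus code generation time.
@dataclass
class BenchmarkResult:
    model: str          # e.g. "openai/gpt-5-codex"
    test: str           # test identifier
    passed: bool        # pass/fail status
    gen_seconds: float  # code generation time

row = BenchmarkResult("anthropic/claude-sonnet-4.5", "test_001", True, 12.4)
```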