evaluating-llms-harness
Evaluates large language models against 60+ benchmarks using EleutherAI's lm-evaluation-harness, a model-quality assessment framework widely adopted in both academic and industry settings.
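As a sketch of what this skill automates, a typical lm-evaluation-harness run from the command line looks roughly like the following. The model name, task list, and batch size here are illustrative assumptions, not part of this skill's definition:

```shell
# Install the harness (assumed package name on PyPI: lm-eval)
pip install lm-eval

# Evaluate a Hugging Face model on two common benchmarks.
# --model hf       : use the Hugging Face transformers backend
# --tasks          : comma-separated benchmark task names
# --batch_size     : adjust to available GPU/CPU memory
# --output_path    : directory for the JSON results
lm_eval --model hf \
    --model_args pretrained=EleutherAI/gpt-neo-125m \
    --tasks hellaswag,arc_easy \
    --batch_size 8 \
    --output_path results/
```

The harness prints a per-task table of metrics (e.g. accuracy and normalized accuracy) and writes the full results to the output path.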
Files: 5
Install this skill with one command:

/learn @davila7/evaluation-lm-evaluation-harness

GitHub Stars: 22.3K
Category: development
Updated: March 29, 2026
Tags: openclaw, api, ml-ai-engineer, data-scientist, data-analyst, researcher, product-manager, huggingface, development, data analytics, education research, product
Repository: davila7/claude-code-templates