evaluating-code-models
Evaluates code generation models using benchmarks like HumanEval and MBPP, providing insights into coding abilities and multi-language support.
Security score: 93/100
The evaluating-code-models skill was audited on Feb 28, 2026; 3 security issues were found across 2 threat categories. Review the findings below before installing.
Security Issues
- **Medium** (SKILL.md, line 230): Template literal with variable interpolation in command context. Flagged source line: `` ```bash ``
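The medium-severity finding above refers to a common injection pattern: interpolating a variable into a shell command string via a template literal. A minimal Node.js sketch of why the auditor flags it (function and variable names here are hypothetical, not code from the skill itself):

```javascript
// UNSAFE pattern: the interpolated value becomes part of the shell string,
// so shell metacharacters inside it are interpreted as commands.
function buildEvalCommand(modelPath) {
  return `evaluate --model ${modelPath}`;
}

const malicious = "gpt2; rm -rf ~";
console.log(buildEvalCommand(malicious));
// → "evaluate --model gpt2; rm -rf ~"  (an injected second command)

// SAFER pattern: keep the value as a discrete argument so it is never
// parsed by a shell (e.g. pass this argv to child_process.execFile).
function buildEvalArgv(modelPath) {
  return ["evaluate", "--model", modelPath];
}
```

The argv form leaves `modelPath` as opaque data; only string concatenation into a shell-evaluated command makes the interpolation dangerous.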
- **Low** (SKILL.md, line 403): External URL reference to the **BigCode Leaderboard**: https://huggingface.co/spaces/bigcode/bigcode-models-leaderboard
- **Low** (SKILL.md, line 404): External URL reference to the **HumanEval Dataset**: https://huggingface.co/datasets/openai/openai_humaneval
Install this skill with one command:

`/learn @fabioeducacross/evaluating-code-models`
Category: data analytics
Updated: March 29, 2026
Tags: openclaw, api, data-scientist, ml-ai-engineer, data-analyst, product-manager, technical-pm, data analytics, development, product