llamaguard
LlamaGuard provides advanced content moderation for LLMs, ensuring safe interactions by filtering harmful inputs and outputs with high accuracy.
Security score: 89/100
The llamaguard skill was audited on Feb 28, 2026; the audit found 7 security issues across 2 threat categories. Review the findings below before installing.
Security Issues
| Severity | Source | Line | Finding | Snippet |
|---|---|---|---|---|
| Medium | SKILL.md | 183 | Curl to non-GitHub URL | `curl -X POST http://localhost:8000/moderate \` |
| Low | SKILL.md | 183 | External URL reference | `curl -X POST http://localhost:8000/moderate \` |
| Low | SKILL.md | 257 | External URL reference | `https://huggingface.co/meta-llama/LlamaGuard-7b` |
| Low | SKILL.md | 329 | External URL reference | `- V1: https://huggingface.co/meta-llama/LlamaGuard-7b` |
| Low | SKILL.md | 330 | External URL reference | `- V2: https://huggingface.co/meta-llama/Meta-Llama-Guard-2-8B` |
| Low | SKILL.md | 331 | External URL reference | `- V3: https://huggingface.co/meta-llama/Meta-Llama-Guard-3-8B` |
| Low | SKILL.md | 332 | External URL reference | `- Paper: https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/` |
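The medium finding above flags a curl call to a local `/moderate` endpoint in SKILL.md. A minimal sketch of constructing such a request, assuming a JSON body with a chat-style `messages` array: the endpoint URL comes from the audited snippet, but the request schema and the `build_moderation_request` helper are illustrative guesses, not something documented by the skill.

```python
import json

# Endpoint from the audited curl snippet; the skill runs a local
# moderation server, so nothing here leaves the machine.
MODERATE_URL = "http://localhost:8000/moderate"

def build_moderation_request(role: str, content: str) -> dict:
    """Wrap a single chat turn in the (assumed) request schema:
    a JSON object with a `messages` list of {role, content} dicts."""
    return {"messages": [{"role": role, "content": content}]}

payload = build_moderation_request("user", "How do I bake bread?")
body = json.dumps(payload)

# The audited curl line would then correspond roughly to:
#   curl -X POST http://localhost:8000/moderate \
#        -H "Content-Type: application/json" -d "$body"
print(body)
```

A client would POST `body` with a `Content-Type: application/json` header and read the safe/unsafe verdict from the response; since the schema is unverified, check the skill's SKILL.md before relying on these field names.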
GitHub stars: 22.3K
Category: development
Updated: April 4, 2026
Repository: davila7/claude-code-templates