
llama-cpp

Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable.
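For a sense of what the skill wraps, here is a minimal sketch of CPU-only inference with llama.cpp's command-line tool (the model path and prompt are illustrative placeholders, not taken from the skill):

    # CPU-only generation; no CUDA or NVIDIA hardware required
    llama-cli -m ./models/model-q4_k_m.gguf -p "Hello" -n 64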

Security score: 91/100

The llama-cpp skill was audited on Feb 28, 2026 and we found 5 security issues across 2 threat categories. Review the findings below before installing.

Security Issues

Medium (SKILL.md, line 96): Curl to non-GitHub URL

    curl http://localhost:8080/v1/chat/completions \

Low (SKILL.md, line 96): External URL reference

    curl http://localhost:8080/v1/chat/completions \
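Note that the flagged URL is a loopback address: SKILL.md calls llama.cpp's bundled server (llama-server), which exposes an OpenAI-compatible API on localhost rather than a remote endpoint. A complete request along these lines might look like the sketch below; the request body is an illustrative assumption, not the skill's actual payload:

    # Assumes llama-server is already running locally, e.g.:
    #   llama-server -m ./models/model.gguf --port 8080
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "Hello"}]}'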
Low (SKILL.md, line 244): External URL reference

    **Find models**: https://huggingface.co/models?library=gguf

Low (SKILL.md, line 255): External URL reference

    - **Models**: https://huggingface.co/models?library=gguf

Low (SKILL.md, line 256): External URL reference

    - **Discord**: https://discord.gg/llama-cpp
Scanned on Feb 28, 2026