llama-cpp

Runs LLM inference on CPU, Apple Silicon, and consumer GPUs without NVIDIA hardware. Use it for edge deployment, M1/M2/M3 Macs, AMD/Intel GPUs, or when CUDA is unavailable. Supports GGUF quantization (1.5- to 8-bit) for reduced memory use and a 4-10× speedup over PyTorch on CPU.
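As a sketch of what this skill helps with, the snippet below loads a GGUF-quantized model through the llama-cpp-python bindings and runs a completion; the model path, context size, and generation parameters are illustrative assumptions, not part of the skill itself.

```python
# Minimal sketch of local inference with llama-cpp-python.
# The model path and parameters below are hypothetical examples.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",  # hypothetical GGUF file
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to Metal/GPU when available, else CPU
)

out = llm(
    "Q: What is GGUF quantization? A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model starts a new question
)
print(out["choices"][0]["text"])
```

On Apple Silicon, `n_gpu_layers=-1` offloads the whole model to Metal; on a machine with no supported GPU, the same code falls back to CPU inference.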

Files: 4

Install this skill with one command

/learn @davila7/inference-serving-llama-cpp
GitHub stars: 22.3K
Category: development
Updated: March 16, 2026
Repository: davila7/claude-code-templates