# tensorrt-llm

Optimizes LLM inference with NVIDIA TensorRT-LLM for maximum throughput and lowest latency. Use for production deployment on NVIDIA GPUs (A100/H100).

**Security score: 92/100**

The tensorrt-llm skill was audited on Feb 28, 2026 and we found 4 security issues across 2 threat categories. Review the findings below before installing.

## Security Issues

| Severity | Location | Finding | Flagged line |
|---|---|---|---|
| medium | SKILL.md:81 | Curl to non-GitHub URL | `curl -X POST http://localhost:8000/v1/chat/completions \` |
| low | SKILL.md:81 | External URL reference | `curl -X POST http://localhost:8000/v1/chat/completions \` |
| low | SKILL.md:183 | External URL reference | `- **Docs**: https://nvidia.github.io/TensorRT-LLM/` |
| low | SKILL.md:185 | External URL reference | `- **Models**: https://huggingface.co/models?library=tensorrt_llm` |
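Both findings on line 81 flag the same curl call to a local OpenAI-compatible endpoint; the scanner flags it only because the URL is not a GitHub host, not because the request itself is unusual. For context, here is a minimal sketch of the request that line performs, assuming a TensorRT-LLM server exposing the OpenAI-style `/v1/chat/completions` route on `localhost:8000` (the model name and prompt below are illustrative, not taken from the skill):

```python
import json
import urllib.request

# The non-GitHub URL the scanner flagged on SKILL.md line 81.
URL = "http://localhost:8000/v1/chat/completions"

# Hypothetical OpenAI-style chat payload; model name and message
# content are assumptions for illustration only.
payload = {
    "model": "tensorrt_llm",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 64,
}

# Build (but do not send) the same POST the flagged curl command makes.
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
```

Because the target is `localhost`, this request never leaves the machine; the finding is worth reviewing mainly to confirm the endpoint in your deployment is the one you expect.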
Scanned on Feb 28, 2026