long-context
Extend the context windows of transformer models using RoPE, YaRN, ALiBi, and position-interpolation techniques. Use when processing long documents (32k-128k+ tokens).
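The techniques this skill covers all adjust how positions are encoded so a model trained on short sequences can attend over longer ones. As a minimal illustration (not the skill's own code), the sketch below implements standard RoPE rotation plus linear position interpolation: dividing positions by a scale factor so, for example, position 8192 is embedded like position 2048 for a model trained on a 2k window. The function names and the `scale` parameter are illustrative choices, not part of any library API.

```python
import math

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    """Rotary angles theta_i = pos / base^(2i/dim). Position interpolation
    divides positions by `scale`, compressing a longer sequence back into
    the position range seen during training."""
    inv_freq = [base ** (-2 * i / dim) for i in range(dim // 2)]
    return [[(p / scale) * f for f in inv_freq] for p in positions]

def apply_rope(x, angles):
    """Rotate consecutive (even, odd) channel pairs of one vector x by the
    given angles -- the standard RoPE formulation."""
    out = list(x)
    for i, a in enumerate(angles):
        c, s = math.cos(a), math.sin(a)
        x0, x1 = x[2 * i], x[2 * i + 1]
        out[2 * i] = x0 * c - x1 * s
        out[2 * i + 1] = x0 * s + x1 * c
    return out

# With scale=4.0, a token at position 8192 gets the same rotation a token
# at position 2048 would get without interpolation.
far = rope_angles([8192], dim=4, scale=4.0)
near = rope_angles([2048], dim=4, scale=1.0)
```

YaRN and ALiBi differ in the details (frequency-dependent scaling and additive linear biases on attention scores, respectively), but the same idea applies: keep effective positions or biases within the regime the model saw at training time.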
Security score: 95/100
The long-context skill was audited on Feb 28, 2026; the audit found 5 security issues in 1 threat category. Review the findings below before installing.
Security Issues
All five findings are low-severity external URL references in SKILL.md:

| Severity | Source | Line | Finding | Flagged content |
|---|---|---|---|---|
| Low | SKILL.md | 522 | External URL reference | `- **RoPE Paper**: https://arxiv.org/abs/2104.09864 (RoFormer)` |
| Low | SKILL.md | 523 | External URL reference | `- **YaRN Paper**: https://arxiv.org/abs/2309.00071` |
| Low | SKILL.md | 524 | External URL reference | `- **ALiBi Paper**: https://arxiv.org/abs/2108.12409 (Train Short, Test Long)` |
| Low | SKILL.md | 525 | External URL reference | `- **Position Interpolation**: https://arxiv.org/abs/2306.15595` |
| Low | SKILL.md | 528 | External URL reference | `- **Together AI Blog**: https://www.together.ai/blog/llama-2-7b-32k` |