langcache
This skill should be used when the user asks to "enable semantic caching", "cache LLM responses", "reduce API costs", "speed up AI responses", "configure LangCache", "search the semantic cache", or "store responses in cache", or when they mention Redis LangCache, semantic similarity caching, or LLM response caching.
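The core pattern behind the skill is search-then-store: look up a semantically similar prompt in the cache first, and only call the LLM (then store the result) on a miss. The sketch below illustrates that flow in Python. The endpoint paths, request fields, and environment variable names are illustrative assumptions for a LangCache-style REST service, not a confirmed API, and `call_llm` is a placeholder for whatever model call the skill wraps.

```python
# Minimal sketch of search-then-store semantic caching.
# Endpoint paths, field names, and env vars below are assumptions.
import os
import requests

BASE_URL = os.environ["LANGCACHE_URL"]       # cache service host (assumed name)
CACHE_ID = os.environ["LANGCACHE_CACHE_ID"]  # cache identifier (assumed name)
API_KEY = os.environ["LANGCACHE_API_KEY"]    # bearer token (assumed name)
HEADERS = {"Authorization": f"Bearer {API_KEY}"}


def search_cache(prompt: str, threshold: float = 0.9):
    """Look up a semantically similar prompt; return its cached response or None."""
    resp = requests.post(
        f"{BASE_URL}/v1/caches/{CACHE_ID}/entries/search",               # assumed path
        json={"prompt": prompt, "similarityThreshold": threshold},       # assumed fields
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    hits = resp.json().get("data", [])
    return hits[0]["response"] if hits else None


def store_response(prompt: str, response: str) -> None:
    """Store a prompt/response pair so future similar prompts hit the cache."""
    requests.post(
        f"{BASE_URL}/v1/caches/{CACHE_ID}/entries",                      # assumed path
        json={"prompt": prompt, "response": response},
        headers=HEADERS,
        timeout=10,
    ).raise_for_status()


def cached_llm_call(prompt: str, call_llm) -> str:
    """Search the cache first; only call the LLM (and store) on a miss."""
    cached = search_cache(prompt)
    if cached is not None:
        return cached              # cache hit: no API cost, faster reply
    answer = call_llm(prompt)      # cache miss: one real model call
    store_response(prompt, answer)
    return answer
```

On a hit, the similarity threshold (0.9 here, purely illustrative) controls how close a stored prompt must be before its response is reused; lowering it increases hit rate at the risk of returning answers to meaningfully different questions.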
Install this skill with one command:
/learn @openclaw/openclaw-langcache