# How to Create an AI Agent Skill — Step-by-Step Tutorial
## What You Will Build
By the end of this tutorial, you will have a working agent skill that you can use locally, share with your team, and optionally publish to agentskill.sh for the community.
A skill is a folder with a SKILL.md file. Here is what the final structure looks like:
```
my-skill/
├── SKILL.md          # Required: instructions + metadata
├── scripts/          # Optional: executable code
│   └── validate.py
└── references/       # Optional: detailed docs
    └── EXAMPLES.md
```
## Step 1: Create the Folder

```bash
mkdir my-skill
cd my-skill
```
The folder name should match the skill name you will use in the frontmatter. Use lowercase letters, numbers, and hyphens only.
Good folder names:

- `code-review`
- `seo-audit`
- `deploy-production`

Bad folder names:

- `Code_Review` (uppercase and underscores not allowed)
- `my skill` (spaces not allowed)
- `-deploy` (cannot start with a hyphen)
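These naming rules can be sketched as a regular expression. This is illustrative only; the exact pattern a given platform enforces may differ (for instance, disallowing a trailing hyphen is an extra assumption here, not stated in the rules above):

```python
import re

# Assumed pattern from the rules above: lowercase letters, digits, and
# hyphens, max 64 characters, no leading hyphen. Disallowing a trailing
# hyphen is an additional assumption.
NAME_RE = re.compile(r"^[a-z0-9](?:[a-z0-9-]{0,62}[a-z0-9])?$")

def is_valid_skill_name(name: str) -> bool:
    """Check a candidate skill/folder name against the naming rules."""
    return bool(NAME_RE.fullmatch(name))

for candidate in ["code-review", "Code_Review", "my skill", "-deploy"]:
    verdict = "ok" if is_valid_skill_name(candidate) else "invalid"
    print(f"{candidate}: {verdict}")
```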
## Step 2: Write the SKILL.md File
Create a SKILL.md file with two parts: YAML frontmatter and markdown instructions.
```markdown
---
name: code-review
description: Reviews code for bugs, security issues, and best practices. Use when the user asks for a code review or submits a pull request.
---

# Code Review

## When to use

Activate when the user asks for a code review, submits code for feedback, or opens a pull request.

## Process

1. Read all changed files carefully
2. Check for bugs, edge cases, and logic errors
3. Look for security vulnerabilities (SQL injection, XSS, etc.)
4. Verify error handling covers failure modes
5. Check code style consistency with the project
6. Provide actionable feedback with specific line references

## Output format

Structure your review as:

### Summary

One paragraph overview of the code quality.

### Issues found

List each issue with severity (critical, warning, suggestion), the file and line, and a recommended fix.

### Positive notes

Call out things done well. Good code deserves recognition.
```
### The Two Required Fields

- **`name`**: Must match the folder name. Lowercase letters, numbers, and hyphens only. Max 64 characters. This becomes the slash command (e.g., `/code-review`).
- **`description`**: Tells the agent what the skill does and when to activate it. Max 1,024 characters. This is the most important field because agents use it to decide whether to load your skill.
Write descriptions that include both what and when:
```yaml
# Good: includes what + when
description: Reviews code for bugs, security issues, and best practices. Use when the user asks for a code review or submits a pull request.

# Bad: only what, no when
description: A code review tool.

# Bad: too vague
description: Helps with code.
```
### Optional Frontmatter Fields

```yaml
---
name: code-review
description: Reviews code for bugs, security issues, and best practices.
license: MIT
compatibility: Works with any codebase. No external dependencies.
disable-model-invocation: false
metadata:
  author: your-github-username
  version: "1.0"
---
```
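Loaders read these fields by splitting the frontmatter block out of SKILL.md. A minimal sketch of that parsing, handling flat `key: value` pairs only (a real loader would use a full YAML parser, which also handles nested blocks like `metadata:`):

```python
def parse_frontmatter(text: str) -> dict:
    """Extract top-level key: value pairs from a SKILL.md string.
    Sketch only: nested YAML (indented lines) is skipped."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        raise ValueError("SKILL.md must begin with a '---' frontmatter block")
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            return fields
        if ":" in line and not line.startswith((" ", "\t")):
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    raise ValueError("frontmatter block is never closed with '---'")

skill_md = """---
name: code-review
description: Reviews code for bugs, security issues, and best practices.
---

# Code Review
"""
fields = parse_frontmatter(skill_md)
print(fields["name"])   # code-review
```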
See the SKILL.md Frontmatter Guide for the complete field reference, including the `disable-model-invocation` and `user-invocable` controls.
## Step 3: Write Effective Instructions
The markdown body is where you teach the agent how to do the task. Here are the principles that make skills work well.
### Be concise
The agent already knows how to code, write, and reason. Only tell it things it does not already know: your specific process, your conventions, your domain knowledge.
```markdown
## Bad (wastes tokens)

Python is a programming language that uses indentation for code blocks.
To review Python code, you need to understand Python syntax...

## Good (gets to the point)

## Python conventions for this project

- Use type hints on all public functions
- Prefer dataclasses over plain dicts for structured data
- Tests go in tests/ mirroring the src/ structure
```
### Use numbered steps for workflows
Ordered lists give agents a clear sequence to follow. Use them for any multi-step process.
```markdown
## Database migration workflow

1. Create migration file: `alembic revision --autogenerate -m "description"`
2. Review the generated migration in `alembic/versions/`
3. Test locally: `alembic upgrade head`
4. Verify: `python scripts/validate_schema.py`
5. If validation fails, fix and repeat from step 2
6. Commit the migration file
```
### Include examples
Show the agent what good input and output look like.
```markdown
## Example

### User request

"Review the authentication changes in this PR"

### Expected output

A structured review covering:

- Auth flow correctness
- Token expiration handling
- Password hashing verification
- Session management
- CSRF protection
```
### Set boundaries
Tell the agent what NOT to do. This prevents common mistakes.
```markdown
## Important

- Never modify files directly during a review. Only suggest changes.
- Do not auto-fix issues unless explicitly asked.
- If unsure about a pattern, ask rather than assuming it is wrong.
```
## Step 4: Add Scripts (Optional)
Scripts handle deterministic operations where you want exact, repeatable behavior. The agent executes scripts without loading their source code into context. Only the output enters the context window.
Create a `scripts/` directory:

```bash
mkdir scripts
```
Example validation script (`scripts/validate.py`):

```python
#!/usr/bin/env python3
"""Validate that all API endpoints have tests."""
import os
import sys

api_dir = "src/api"
test_dir = "tests/api"

missing = []
for f in os.listdir(api_dir):
    if f.endswith(".py") and f != "__init__.py":
        test_file = f"test_{f}"
        if not os.path.exists(os.path.join(test_dir, test_file)):
            missing.append(f)

if missing:
    print("Missing test files:")
    for f in missing:
        print(f" - {f} -> tests/api/test_{f}")
    sys.exit(1)
else:
    print("All API endpoints have corresponding tests.")
```
Reference the script in your SKILL.md:

````markdown
## Validation

Before completing the review, run the test coverage check:

```
python scripts/validate.py
```

If any endpoints lack tests, flag them in your review.
````
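This is why scripts are cheap in context terms: the host runs them as a subprocess, and only their output comes back. Roughly what that looks like (an illustrative sketch; how a given platform actually invokes scripts is its own implementation detail):

```python
import subprocess
import sys

# Run the validation script and capture only its output; the script's
# source never needs to enter the model's context window.
result = subprocess.run(
    [sys.executable, "scripts/validate.py"],
    capture_output=True,
    text=True,
)
if result.returncode == 0:
    print(result.stdout)
else:
    print("Validation failed:")
    print(result.stdout or result.stderr)
```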
## Step 5: Add Reference Documents (Optional)
For skills that need extensive documentation, keep the main SKILL.md focused and move details to separate files.
```bash
mkdir references
```
Create `references/EXAMPLES.md`:

````markdown
# Code Review Examples

## Example 1: SQL injection vulnerability

```python
# Bad
query = f"SELECT * FROM users WHERE id = {user_id}"

# Good
query = "SELECT * FROM users WHERE id = %s"
cursor.execute(query, (user_id,))
```

## Example 2: Missing error handling

...
````
Reference it from SKILL.md:

```markdown
## Detailed examples

For common vulnerability patterns with fixes, see [EXAMPLES.md](references/EXAMPLES.md).
```
The agent only loads this file when it needs the examples, keeping the initial context cost low.
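To get a feel for that cost, you can rough-count tokens per file. The 4-characters-per-token figure below is a common heuristic for English prose, not an exact tokenizer count:

```python
from pathlib import Path

def estimate_tokens(path: Path) -> int:
    """Rough context-cost estimate: ~4 characters per token (heuristic)."""
    return len(path.read_text(encoding="utf-8")) // 4

# Report the approximate cost of each markdown file in the skill.
for md in sorted(Path("my-skill").rglob("*.md")):
    print(f"{md}: ~{estimate_tokens(md)} tokens")
```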
## Step 6: Test Your Skill

### Install locally
Copy your skill to your platform's skills directory:
```bash
# Claude Code (project-level)
cp -r my-skill .claude/skills/

# Claude Code (global)
cp -r my-skill ~/.claude/skills/

# Cursor
cp -r my-skill ~/.cursor/skills/
```
### Test it
- Start a new conversation with your agent
- Ask it to do something that matches your skill's description
- The agent should automatically load and follow the skill instructions
- You can also invoke it directly: `/code-review`
### Iterate
Watch how the agent uses your skill. Common issues to fix:
- **Agent does not activate the skill**: make the description more specific about when to use it
- **Agent loads the skill at the wrong times**: narrow the trigger conditions in the description
- **Agent misses steps**: add more explicit numbered instructions
- **Agent over-explains**: remove information the agent already knows
- **Output format is wrong**: add a concrete example of expected output
## Step 7: Publish to agentskill.sh (Optional)
Share your skill with the community by publishing it to agentskill.sh.
### 1. Push to GitHub
Create a public GitHub repository for your skill:
```bash
cd my-skill
git init
git add .
git commit -m "Initial skill"
git remote add origin https://github.com/your-username/my-skill.git
git push -u origin main
```
### 2. Submit to the directory
Go to agentskill.sh/submit and enter your repository URL. The directory indexes your skill automatically and makes it discoverable to the community.
Once published, anyone can install your skill with:

```
/learn @your-username/my-skill
```
### 3. Keep it updated

When you push changes to your GitHub repository, the directory picks up the updates. Users who installed your skill can update with `/learn update`.
## Quick Reference: Skill Checklist
Before publishing, verify your skill meets these criteria:
- Folder name matches the `name` field in frontmatter
- `name` is lowercase with hyphens only (no underscores, no uppercase)
- `description` explains both what the skill does and when to use it
- SKILL.md is under 500 lines
- Instructions are concise and skip things the agent already knows
- Multi-step workflows use numbered lists
- At least one example of expected input/output
- Scripts (if any) are executable and handle errors
- Reference files (if any) are linked from SKILL.md
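The structural items on this checklist can be automated. A sketch of such a pre-publish check, using the field limits stated earlier in this tutorial (style items like conciseness still need a human read):

```python
import re
from pathlib import Path

def check_skill(skill_dir: str) -> list:
    """Return a list of checklist violations (empty list means it passed).
    Sketch only: fields are found with regexes over the whole file for
    simplicity; a real checker would parse the frontmatter properly."""
    problems = []
    root = Path(skill_dir)
    text = (root / "SKILL.md").read_text(encoding="utf-8")

    if len(text.splitlines()) > 500:
        problems.append(f"SKILL.md is {len(text.splitlines())} lines (limit: 500)")

    name_match = re.search(r"^name:\s*(.+)$", text, re.MULTILINE)
    desc_match = re.search(r"^description:\s*(.+)$", text, re.MULTILINE)

    if not name_match:
        problems.append("missing required field: name")
    else:
        name = name_match.group(1).strip()
        if name != root.name:
            problems.append(f"name '{name}' does not match folder '{root.name}'")
        elif not re.fullmatch(r"[a-z0-9-]{1,64}", name):
            problems.append("name must be lowercase letters, digits, and hyphens")

    if not desc_match:
        problems.append("missing required field: description")
    elif len(desc_match.group(1).strip()) > 1024:
        problems.append("description exceeds 1,024 characters")

    return problems
```

Run it as, for example, `check_skill("my-skill")` and fix anything it reports before publishing.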
## Next Steps

- **Browse examples**: see how other skills are built at agentskill.sh
- **Frontmatter reference**: learn all available fields in the SKILL.md Frontmatter Guide
- **Understand the format**: read What Is a Skill? for architecture details
- **Skills vs MCP**: know when to use each at Skill vs MCP
## FAQ

### Do I need to know how to code to create a skill?
No. A skill can be pure markdown instructions with no code at all. Scripts are optional and only needed for deterministic operations like running validation checks or processing files.
### How long should a SKILL.md file be?

Keep the main SKILL.md under 500 lines (roughly 5,000 tokens). If you need more detail, move reference material to separate files in a `references/` directory. The agent loads these on demand, so they do not consume context until needed.
### Can I publish my skill to agentskill.sh?
Yes. Push your skill to a public GitHub repository, then submit it at agentskill.sh/submit. The directory indexes it automatically and makes it discoverable to the community.
### How do I test if my skill works?

Copy the skill folder to your platform's skills directory (e.g., `~/.claude/skills/`), start a new conversation, and ask the agent to do something that matches the skill's description. You can also invoke it directly with `/skill-name`.
### Can my skill reference MCP tools?

Yes. Your SKILL.md instructions can reference MCP tools by their fully qualified name (e.g., `GitHub:create_issue`). The skill provides the workflow and the MCP tool provides the capability. See Skill vs MCP for more detail.