Why Skills Don't Trigger
Three failure modes. A fix for each. And the debug trick that shows what Claude thinks your skill is for.
When Your Skill Should Have Loaded, But Didn't
You built the skill in Module 4. You test it.
Half the time it works. Half the time Claude just answers like the skill is not there.
Frustrating.
Almost every "my skill is not working" story comes down to one thing: the description field.
Get that right and 90% of trigger problems disappear.
This module is short. It is just a clean explanation of:
- Three failure modes
- The fix for each
- One debug trick that tells you exactly what Claude thinks your skill is for
How Claude Decides Whether to Load Your Skill
Every skill has a short description in its frontmatter (the metadata header at the top).
That description is the only thing Claude reads when it scans your skill library to decide which skill is relevant.
It does not read the whole skill. It does not read your instructions.
It reads the description, decides "yes this fits" or "no it doesn't", and moves on.
So if the description does not match how you naturally phrase requests, the skill never gets a chance.
That is it. That is the whole mental model.
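For concreteness, here is roughly what that frontmatter looks like. A minimal sketch with an illustrative name and description; the exact fields your setup requires may differ:

```yaml
---
name: meeting-notes-summary
description: >
  Summarises meeting notes into decisions, owners, dates, and open
  questions. Use when the user pastes raw notes or transcripts and asks
  for a "summary", "recap", or "action items".
---
```

Everything below that closing `---` is the skill body, which Claude only reads after the description has already won the match.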
Failure Mode 1: The Skill Never Triggers
Symptom: You ask Claude to do exactly the thing your skill is for. Nothing happens.
The skill is enabled; it just sits there.
Cause: The description is too generic.
Example of a description that fails:

```
Helps with documents.
```
This says nothing.
Claude has no signal to match against. It could mean almost anything.
Fix: Add specific trigger phrases. Tell Claude what users will actually say when they want this skill.
```
Summarises meeting notes into decisions, owners, dates, and open questions. Use when user pastes raw notes, transcripts, or call recordings and asks for a "summary", "recap", or "action items".
```
That second version reads almost like a stage direction.
Claude can match "summary" or "recap" to a user prompt and load the skill confidently.
How to apply: open your skill folder, find the line that starts with `description:` in the frontmatter, and rewrite it with two things:
- What the skill does
- The exact words a user would say to need it
Failure Mode 2: The Skill Triggers on Everything
Symptom: You ask Claude something completely unrelated. The skill loads anyway.
Sometimes it tries to do its job on the wrong input.
Cause: The description is too broad.
Example of a description that over-triggers:

```
Processes documents.
```
Almost every Claude conversation involves a document of some kind.
This description fires constantly.
Fix: Narrow the description and tell Claude when not to use it.
```
Processes PDF legal contracts for review and red-flag identification. Use only for contract documents. Do NOT use for general document questions or non-legal PDFs.
```
The phrase "Do NOT use for…" is a real instruction Claude respects. It is not a hack.
Use it to carve out the false-positive cases you have already seen happen.
How to apply: every time the skill triggers wrongly, jot down the prompt that triggered it.
After three or four examples, you will see the pattern.
Add a "Do NOT use for…" line to the description that fences them out.
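In frontmatter form, the narrowed description from above might look like this sketch (the skill name is illustrative):

```yaml
---
name: contract-reviewer
description: >
  Processes PDF legal contracts for review and red-flag identification.
  Use only for contract documents. Do NOT use for general document
  questions or non-legal PDFs.
---
```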
Failure Mode 3: The Skill Triggers but Claude Ignores the Instructions
Symptom: The skill loads (you can sometimes see it in the chat header), but the output looks nothing like what you told the skill to do.
Format is wrong. Voice is wrong. Steps are skipped.
Cause: The instructions are buried.
This is the one case that is not about the description.
Once the skill loads, Claude reads the whole SKILL.md file.
If your important instructions are halfway down a long file, surrounded by examples and notes, Claude can lose them.
Fix: Put critical instructions at the top of the skill. Use a clear heading.
```markdown
## CRITICAL: Do This First

Before writing anything, check:

- Output starts with "Decisions" (not "Summary").
- Sentences are under 15 words.
- Never use the word "stakeholders".
```
That block, at the top of the file, gets read first and weighted heavily.
How to apply: open the skill and read it like Claude does, top to bottom.
If a key instruction is more than a screen down, move it to the top under a "Critical" heading.
If you have an "Examples" section that takes up the first half of the file, push it to the bottom.
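Put together, a SKILL.md layout that keeps the rules visible might look like this sketch (the section names and rules here are illustrative, not a required format):

```markdown
---
name: meeting-notes-summary
description: Summarises meeting notes into decisions, owners, and open questions.
---

## CRITICAL: Do This First

- Output starts with "Decisions" (not "Summary").
- Sentences are under 15 words.

## Workflow

1. Extract decisions, owners, dates, and open questions.
2. Flag anything that has no owner.

## Examples

(Long examples live down here, after the rules.)
```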
The Debug Trick That Solves Half the Cases
When in doubt, ask Claude directly.
```
When would you use the [your-skill-name] skill?
```
Claude will quote your description back at you and explain what triggers it would respond to.
If what it says does not match what you actually want, the description is the problem.
Fix the description, ask again.
A healthy answer reads something like:

```
The meeting-notes-summary skill activates when you paste raw meeting notes, transcripts, or call summaries and ask for a recap or action items. It produces output with sections for decisions, owners, and open questions in a direct voice.
```
If that read sounds wrong, you know exactly what to edit.
A Short Field Guide
Pin this table somewhere. It is genuinely 90% of skill maintenance.

| Symptom | Cause | Fix |
|---|---|---|
| Never triggers | Description too generic | Add the exact trigger phrases users actually say |
| Triggers on everything | Description too broad | Narrow it and add a "Do NOT use for…" line |
| Triggers but ignores instructions | Key instructions buried mid-file | Move them to the top under a "Critical" heading |
The same fix loop works in skill-creator. You do not have to hand-edit the description. Just say "use skill-creator to make the meeting-notes-summary skill trigger on shorter prompts like 'recap this'". skill-creator updates the description for you.
🎓 Go further: Measure your skill's impact (the baseline test)
How do you know your skill is actually helping versus you just believing it does?
Anthropic publishes a clean baseline test you can run in 10 minutes. Pick a real task and run it twice: once with the skill off, once with it on. Compare three numbers:
- Back-and-forth count. How many messages did you exchange before getting what you wanted?
- Failed tool calls. How many times did Claude try something that did not work and need to retry?
- Total tokens consumed. Most chat interfaces hide this, but if you are on the API you can see it.
Anthropic's own before/after data gives the benchmark: if your skill is working, you should see a 5x or better reduction on the first metric, and at least a 2x reduction on the others.
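The comparison itself is just three divisions. A minimal sketch; all metric names and counts below are hypothetical, so plug in the numbers from your own two runs:

```python
def reduction(before: float, after: float) -> float:
    """How many times smaller the with-skill number is (5.0 means 5x fewer)."""
    return before / after

# Hypothetical counts from a skill-off run vs a skill-on run.
baseline = {"messages": 15, "failed_tool_calls": 4, "tokens": 60_000}
with_skill = {"messages": 3, "failed_tool_calls": 2, "tokens": 25_000}

for metric in baseline:
    print(f"{metric}: {reduction(baseline[metric], with_skill[metric]):.1f}x reduction")
```

If the `messages` ratio is not hitting roughly 5x, that is the first place to look.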
If you do not see the gap, your skill is not pulling its weight. Either it is not triggering (back to Failure Mode 1), or the instructions are too vague (you are still steering manually), or the workflow itself was not actually repeatable. Better to find that out in a 10-minute test than after three months of "yeah, I have a skill for that".
A second mistake people make: one giant skill instead of three small ones.
The "writing skill" that tries to handle blog posts, internal memos, and your wife's birthday card at the same time will be worse at all three than three specialised skills (blog-writer, internal-memo, personal-comms) each scoped tightly. Operator Karol Zieminski tested this directly. Three focused skills beat one monolithic one every time.
When a skill starts feeling stretched, split it. The cognitive load on Claude's routing increases with scope creep, and so does the rate of wrong triggers.
What's Next
Module 6 is the final module: how to share what you have built.
If you have a skill working well, it is probably worth giving it to your team.
Even if you do not share it, you should know how, because the same pattern is how you receive skills built by other people.