The skill editor includes a built-in preview pane that lets you test your skill with real inputs before saving or sharing it. This is the fastest way to iterate on a skill’s system prompt and catch issues early.

The Preview Pane

When you’re in the skill editor (Settings → Skills → Create New or edit an existing skill), the right side of the screen shows a preview pane — a live test environment for the skill. Enter a sample input in the preview pane and click Run to see exactly what the skill produces. Adjust the system prompt and test again without leaving the editor.

Testing Process

1. Write your skill in the editor

Fill in the skill name, system prompt, tools, and model in the editor panel.

2. Enter a test input in the preview pane

Type or paste a representative example of what a user would provide when invoking this skill. For example:
  • For a @summarize skill: paste a paragraph or article you want summarized.
  • For an @explain-code skill: paste a function or code snippet.
  • For a @translate skill: paste a sentence or paragraph in the source language.

3. Click Run or Test

The skill processes your test input using the current system prompt and shows the output in the preview pane.

4. Evaluate the output

Ask yourself:
  • Is the format what I wanted?
  • Is the length appropriate?
  • Did it follow all the instructions?
  • Would this output be useful in a real workflow?

5. Adjust and re-test

If the output isn’t right, edit the system prompt and run the test again. Repeat until the output consistently matches your expectations.

6. Save

When you’re satisfied with the skill’s behavior across your test inputs, click Save.

What to Test For

Format consistency — run the same skill 2–3 times on the same input. Does it produce the same format each time? If not, add more explicit format instructions.

Edge cases:
  • Empty or very short input — does the skill handle it gracefully?
  • Very long input — does it still stay within your expected output length?
  • Off-topic input — does the skill stay focused, or does it wander?
  • Input in an unexpected language or format — what happens?

Tool usage — if you’ve enabled tools like web search, test with a prompt that should trigger the tool, and verify the skill actually uses it.

Instruction adherence — if you added constraints (“only return the translation, no explanation”), test that they’re followed.
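The edge-case checks above can be sketched as a tiny test harness. This is an illustrative sketch, not part of the skill editor: `run_skill` is a stub standing in for however you actually invoke the skill (an API call, a CLI, etc.), and the validation rules are examples you would adapt to your own skill’s format.

```python
# Hypothetical edge-case test harness for a @summarize-style skill.
# run_skill is a stub; a real version would send `text` to the skill.

def run_skill(text: str) -> str:
    """Stub: stands in for the real skill invocation."""
    if not text.strip():
        return "No content to summarize."
    return "• Key point one\n• Key point two\n• Key point three"

def check_output(output: str, max_bullets: int = 5) -> list[str]:
    """Return a list of rule violations for one output (empty list = pass)."""
    problems = []
    lines = output.splitlines()
    bullets = [line for line in lines if line.startswith("•")]
    if len(bullets) > max_bullets:
        problems.append(f"too many bullets ({len(bullets)} > {max_bullets})")
    if bullets and lines and not lines[0].startswith("•"):
        problems.append("output has a preamble before the first bullet")
    return problems

test_inputs = {
    "typical":   "A paragraph-length article excerpt goes here...",
    "empty":     "",
    "very long": "word " * 5000,
    "off-topic": "What is 2 + 2?",
}

for name, text in test_inputs.items():
    problems = check_output(run_skill(text))
    print(name, "->", "OK" if not problems else problems)
```

Running each edge case through the same validator makes regressions obvious: after every prompt tweak, one run tells you which input categories still pass.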

Iteration Expectations

Most skills need 3–5 iterations before they reach consistently good quality. This is normal. Common iteration patterns:
Problem → Fix
  • Output is too long → Add a word limit or max bullet count to the system prompt
  • Format is inconsistent → Specify the exact structure with an example in the prompt
  • Skill ignores constraints → Make the constraint more explicit (“You MUST return only…”)
  • Output includes unwanted preamble → Add “Return only the result — no intro, no commentary”
  • Model doesn’t follow all rules → Number the rules and add emphasis to the most critical ones

Adding Examples to Your Prompt

If the preview pane consistently shows formatting or behavioral issues, try adding an example directly into the system prompt:

  Summarize the provided text in 3–5 bullet points.

  Example output format:
  • Key insight one in under 20 words
  • Key insight two in under 20 words
  • Key insight three in under 20 words

  Return only the bullet points — no preamble, no conclusion.

This technique (called few-shot prompting) dramatically improves format consistency for skills with specific output requirements.
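A format this explicit is also easy to check mechanically. The sketch below is illustrative (not a feature of the skill editor); its thresholds simply mirror the example prompt above: 3–5 bullets, each under 20 words, nothing but bullets.

```python
# Illustrative validator for the example format above: 3-5 bullet points,
# each under 20 words, and nothing except bullet lines in the output.

def valid_summary(output: str) -> bool:
    lines = [line.strip() for line in output.strip().splitlines() if line.strip()]
    if not (3 <= len(lines) <= 5):
        return False                  # wrong number of bullets
    for line in lines:
        if not line.startswith("•"):
            return False              # preamble, conclusion, or non-bullet line
        if len(line.lstrip("• ").split()) >= 20:
            return False              # bullet is 20 words or more
    return True

good = "• Short insight one\n• Short insight two\n• Short insight three"
bad = "Here is your summary:\n• Only one bullet"
print(valid_summary(good), valid_summary(bad))  # True False
```

Pairing a few-shot example in the prompt with a small checker like this over several preview runs gives you a concrete pass/fail signal for format consistency instead of eyeballing each output.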
Test your skill on at least 3 different real-world inputs before saving — one typical input, one edge case, and one input that might be off-topic or ambiguous. If it handles all three well, it’s ready.