Structured Output Designer
Designs JSON schemas for LLM structured output — field types, enum vs. free text, nesting limits, required vs. optional, and native output method selection. Use when building schemas for Claude tool_use, OpenAI strict JSON, or prompt-based structured responses. Schema design, JSON output, data extraction.
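A minimal sketch of the kind of schema this skill produces, in the shape of a Claude tool definition. The task, field names, and enum values here are invented for illustration; the points shown are enum vs. free text, required vs. optional, and shallow nesting:

```python
# Hypothetical "extract_ticket" extraction task; all names are illustrative.
ticket_schema = {
    "name": "extract_ticket",
    "description": "Extract a support ticket from free text.",
    "input_schema": {
        "type": "object",
        "properties": {
            # Enum for a closed set: the model cannot invent new categories.
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
            # Free text where the value space is open-ended.
            "summary": {"type": "string", "description": "One-sentence summary"},
            # Optional field: present in properties, absent from "required".
            "assignee": {"type": "string"},
        },
        "required": ["priority", "summary"],
    },
}
```

Keeping nesting to one level and moving closed vocabularies into enums is what makes strict decoding modes reliable.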
Prompt Optimizer
Rewrites underperforming LLM prompts for clarity, consistency, and output quality. Use when a prompt produces vague, inconsistent, or off-target results. Analyzes failure modes (missing constraints, ambiguous intent, instruction overload) and applies targeted fixes. Prompt engineering, prompt improvement, refine prompt.
Context Window Optimizer
Designs context window allocation strategies — priority tiers, dynamic trimming, attention-aware placement, and token budgeting across prompt components. Use when LLM responses degrade in long conversations, system prompt instructions get ignored, or context limits are being hit. Context management, token allocation, lost-in-the-middle.
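One form a priority-tier budgeting strategy can take, sketched as a greedy allocator. Component names and token counts are invented; real systems also trim within components rather than dropping them whole:

```python
def fit_to_budget(components, budget):
    """Keep components in priority order (lower number = higher priority),
    dropping lower tiers when the token budget would be exceeded."""
    kept, used = [], 0
    for name, _priority, tokens in sorted(components, key=lambda c: c[1]):
        if used + tokens <= budget:
            kept.append(name)
            used += tokens
    return kept

# Hypothetical prompt components as (name, priority_tier, token_count).
parts = [
    ("system_prompt", 0, 800),
    ("recent_turns", 1, 2000),
    ("retrieved_docs", 2, 3000),
    ("old_history", 3, 5000),
]
fit_to_budget(parts, 6000)  # → ['system_prompt', 'recent_turns', 'retrieved_docs']
```

The point of the tiers: under pressure, old history is sacrificed before the system prompt ever is.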
Few Shot Example Designer
Designs few-shot example sets for LLM prompts with deliberate coverage, edge cases, negative examples, and format anchoring. Use when outputs need consistent formatting, more accurate classification, or style calibration. Few-shot examples, in-context learning, prompt examples.
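A small sketch of deliberate coverage and format anchoring, assuming a hypothetical sentiment-classification task. The examples and labels are invented; note the edge case covering mixed sentiment:

```python
# Invented example set: one clear positive, one clear negative, and a
# mixed-sentiment edge case anchored to the dominant signal.
EXAMPLES = [
    {"input": "Love it, works perfectly.", "output": "positive"},
    {"input": "Arrived broken, total waste.", "output": "negative"},
    {"input": "Great screen, but the battery died in a day.", "output": "negative"},
]

def build_prompt(examples, query):
    """Format every shot identically so the final answer inherits the format."""
    shots = "\n\n".join(
        f"Input: {e['input']}\nLabel: {e['output']}" for e in examples
    )
    return f"{shots}\n\nInput: {query}\nLabel:"
```

Ending the prompt mid-pattern (`Label:`) is the format anchor: the model's most likely continuation is a bare label in the established style.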
Multimodal Prompt Designer
Designs prompts that combine text instructions with images, screenshots, diagrams, and visual inputs for accurate extraction, comparison, and analysis. Use when building vision+text LLM features — OCR, UI comparison, chart interpretation, visual QA, document extraction. Multimodal, vision, image analysis.
Output Parser Designer
Designs robust LLM output parsers with extraction, validation, repair, and fallback layers. Use when building pipelines that turn free-form LLM responses into structured data — JSON extraction, schema validation, graceful degradation on malformed output. Output parsing, JSON repair, LLM reliability.
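The extraction → validation → repair → fallback layering can be sketched in a few lines. This is a minimal illustration, not a production parser; the only repair shown is stripping trailing commas, and real pipelines would add schema validation after a successful parse:

```python
import json
import re

def parse_llm_json(text, fallback=None):
    """Layered parser: extract a JSON block from free-form text, try a
    strict parse, attempt a trivial repair, then degrade gracefully."""
    # 1. Extraction: prefer a fenced ```json block, else the first {...} span.
    m = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    candidate = m.group(1) if m else None
    if candidate is None:
        m = re.search(r"\{.*\}", text, re.DOTALL)
        candidate = m.group(0) if m else None
    if candidate is None:
        return fallback
    # 2. Strict parse.
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        pass
    # 3. Repair: remove trailing commas before } or ].
    repaired = re.sub(r",\s*([}\]])", r"\1", candidate)
    try:
        return json.loads(repaired)
    except json.JSONDecodeError:
        return fallback  # 4. Graceful degradation instead of raising.

parse_llm_json('Here you go:\n```json\n{"a": 1,}\n```')  # → {'a': 1}
```

The design choice worth noting is that every layer narrows the input rather than raising, so callers handle exactly one failure shape: the fallback value.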
Prompt Debugger
Systematically diagnoses why LLM prompts produce broken, inconsistent, or unexpected output using a 10-point fault-tree analysis. Use when a prompt is actively failing — wrong format, contradictory behavior, hallucinations, or ignored instructions. Prompt debugging, fix prompt, broken prompt.
Prompt Library Curator
Designs prompt library organization systems — taxonomy, file structure, versioning, A/B testing frameworks, quality gates, and metadata schemas for managing prompts at scale. Use when organizing prompt collections, setting up prompt versioning, or building prompt management infrastructure for a team. Prompt management, prompt versioning, prompt library.
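One shape a per-prompt metadata record might take. Every field name here is an assumption for illustration, not a standard; the point is that version, ownership, target model, and evaluation history travel with the prompt:

```python
# Hypothetical metadata record for one entry in a prompt library.
PROMPT_RECORD = {
    "id": "summarize-ticket",
    "version": "2.1.0",        # semver: bump major on breaking format changes
    "owner": "support-team",
    "model": "claude-sonnet",  # model family the prompt was tuned against
    "tags": ["summarization", "support"],
    "changelog": "Tightened length constraint; added a negative example.",
}
```

Semantic versioning gives downstream callers a contract: a major bump warns them the output format may have changed.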
Chain of Thought Architect
Designs structured reasoning chains for LLM prompts — decomposition strategies, verification checkpoints, self-correction loops, and chain pattern selection. Use when building prompts for multi-step reasoning, analysis, or decision-making tasks. Chain-of-thought, reasoning, step-by-step, CoT.
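A template sketch showing decomposition plus a verification checkpoint with a self-correction loop. The step wording is illustrative, not a recommended phrasing:

```python
# Hypothetical reasoning-chain template with an explicit checkpoint.
COT_TEMPLATE = """\
Solve the problem in stages:
1. Restate what is being asked in one sentence.
2. Break the problem into sub-questions and answer each in order.
3. Checkpoint: re-check each sub-answer against the original constraints;
   if any fails, redo that step before continuing (self-correction loop).
4. Give the final answer on its own line, prefixed "Answer:".

Problem: {problem}
"""

prompt = COT_TEMPLATE.format(problem="What is 17 * 24?")
```

Placing the checkpoint before the answer line, rather than after, means corrections happen while the chain is still malleable.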
System Prompt Designer
Designs production system prompts that define LLM behavior for applications — role identity, behavioral rules, knowledge boundaries, guardrails, and response format. Use when building a Claude-based chatbot, API assistant, Claude Project, or CLAUDE.md configuration. System prompt, assistant design, persona.
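A skeleton showing how the components named above (role identity, behavioral rules, knowledge boundaries, guardrails, response format) map onto sections of a system prompt. The domain and wording are placeholder assumptions:

```python
# Hypothetical system prompt for an invented billing-support assistant.
SYSTEM_PROMPT = """\
You are a billing-support assistant for Acme Corp.

Rules:
- Answer only billing and invoicing questions; redirect everything else
  to human support (knowledge boundary).
- Never reveal internal account notes or other customers' data (guardrail).
- If unsure about an amount, say so rather than estimating.

Respond in at most three sentences of plain text (response format).
"""
```

Ordering matters: identity first, then constraints, then format, so each later section is interpreted inside the frame the earlier ones establish.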