Premium

LLM Development

Building with LLMs is harder than it looks — prompts break in unexpected ways, costs spiral, and it's unclear what 'good enough' even means. These 35 skills give you the frameworks to build AI features that actually work in production.

Built for developers shipping AI-powered products who want fewer surprises and more reliable results.

€79.00 · one-time purchase
  • 35 skills
  • 5 prompts
  • 1 drop-in preset
  • 1 mini-lesson included

One-time purchase · Instant access · Install in 60 seconds

What You'll Be Able To Do

Debug failing prompts instead of guessing why they misbehave

Design RAG pipelines and agent workflows with proven patterns

Estimate and control API costs before they become a problem

Build guardrails and evals so you can ship with confidence

Featured Skills

Prompt Debugger

Systematically diagnoses why LLM prompts produce broken, inconsistent, or unexpected output using a 10-point fault-tree analysis. Use when a prompt is actively failing — wrong format, contradictory behavior, hallucinations, or ignored instructions. Keywords: prompt debugging, fix prompt, broken prompt.

RAG Advisor

Designs retrieval-augmented generation pipelines — document ingestion, chunking, embedding, vector storage, retrieval, reranking, and generation prompt structure. Use when building a RAG system from scratch or evaluating whether RAG is the right approach. Keywords: RAG architecture, vector search, semantic search, knowledge base.
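To make the chunking stage above concrete, here is a toy fixed-size word chunker with overlap — one of the ingestion steps the skill covers. The function name and the size/overlap values are illustrative assumptions, not the skill's actual output:

```python
# Toy sketch: fixed-size chunking with overlap, one step of RAG ingestion.
# chunk_words, size=200, and overlap=20 are illustrative choices, not
# recommendations from the skill itself.
def chunk_words(text: str, size: int = 200, overlap: int = 20) -> list[str]:
    """Split text into windows of `size` words, each overlapping the
    previous window by `overlap` words."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

doc = " ".join(f"w{i}" for i in range(500))   # stand-in for a real document
chunks = chunk_words(doc)
print(len(chunks), len(chunks[0].split()))    # number of chunks, words in first
```

Overlap keeps sentences that straddle a chunk boundary retrievable from both sides; real pipelines usually chunk on token or semantic boundaries rather than raw words.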

System Prompt Designer

Designs production system prompts that define LLM behavior for applications — role identity, behavioral rules, knowledge boundaries, guardrails, and response format. Use when building a Claude-based chatbot, API assistant, Claude Project, or CLAUDE.md configuration. Keywords: system prompt, assistant design, persona.

AI Cost Calculator

Estimates and optimizes LLM API costs with real-world multipliers — system prompt overhead, retry rates, caching ROI, model tier allocation, and scaling projections. Use when budgeting AI features, justifying costs to leadership, or diagnosing unexpected API bills. Keywords: cost estimation, token budget, AI spend, pricing.
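The multipliers listed above compose into a simple estimate. This sketch shows the shape of that arithmetic; every price and rate in it is an assumed placeholder, not a real vendor rate or a figure from the skill:

```python
# Hypothetical cost model illustrating the multipliers named above.
# All prices ($/1M tokens) and rates are made-up illustration values.
def monthly_cost(requests, in_tokens, out_tokens,
                 price_in=3.0, price_out=15.0,   # assumed $ per 1M tokens
                 system_overhead=400,            # system prompt tokens per request
                 retry_rate=0.05,                # 5% of requests retried once
                 cache_hit_rate=0.0,             # fraction of input tokens cached
                 cached_discount=0.9):           # assumed discount on cached input
    eff_requests = requests * (1 + retry_rate)
    total_in = eff_requests * (in_tokens + system_overhead)
    billed_in = total_in * (1 - cache_hit_rate * cached_discount)
    total_out = eff_requests * out_tokens
    return (billed_in * price_in + total_out * price_out) / 1_000_000

base = monthly_cost(100_000, 1_000, 300)
cached = monthly_cost(100_000, 1_000, 300, cache_hit_rate=0.8)
print(f"baseline ${base:,.2f}/mo vs cached ${cached:,.2f}/mo")
```

Even with toy numbers, the structure shows why retries and system prompt overhead inflate bills multiplicatively, while caching only discounts the input side.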

Eval Writer

Creates evaluation suites for AI features — scoring rubrics, test cases with coverage strategy, pass/fail criteria, and LLM-as-judge configurations. Use when measuring AI output quality, building eval pipelines, or writing test cases for prompt changes. Keywords: evals, evaluation, scoring rubric, AI quality, benchmarking.
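A minimal illustration of the pass/fail test cases mentioned above — a deterministic format check plus a small case table. The structure is an assumption for illustration, not the skill's actual output format:

```python
# Sketch of a deterministic pass/fail eval: does the model output parse as
# JSON with the required keys? The key names are illustrative assumptions.
import json

def eval_json_format(output: str) -> bool:
    """Pass if the output is a JSON object containing 'answer' and 'confidence'."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and {"answer", "confidence"} <= data.keys()

cases = [  # (model output, expected verdict)
    ('{"answer": "42", "confidence": 0.9}', True),
    ('answer: 42', False),        # not JSON at all
    ('{"answer": "42"}', False),  # valid JSON, missing a required key
]
print(all(eval_json_format(out) == expected for out, expected in cases))
```

Deterministic checks like this catch format regressions cheaply; subjective qualities (tone, correctness of content) are where rubrics and LLM-as-judge configurations come in.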

These are 5 of the 35 skills in this bundle.

What's Inside

Also Included

5 Ready-to-Use Prompts

  • System Prompt Template
  • AI Feature Spec
  • Claude Code Setup Checklist
  • LLM Evaluation Criteria
  • Prompt Engineering Worksheet

1 Drop-in Configuration

Drop-in CLAUDE.md for AI/LLM projects — prompt versioning, eval-first development, model abstraction, and integrated skill coordination.

Included Bonus

LLM Development Bundle

Install, configure, and master the 35 skills in your LLM Development bundle

  1. What's in This Bundle
  2. Installation
  3. The Preset
  4. What You Can Do
  5. Prompts & How to Use Them
  6. Tips & Common Patterns
~20 minutes

Common Questions


Not what you're looking for? Browse all bundles →