Ad Alchemy — AI-Assisted Creative Reverse Engineering

TL;DR: A Claude skill that analyzes a competitor’s winning ad, extracts its structural “formula” (lighting, composition, copy skeleton), and outputs ready-to-run image generation prompts + native-language ad copy for your own product. Turns expensive creative reverse-engineering into a 15-minute workflow.

The Problem

Creative teams face a classic dilemma: you see a competitor running successful ads, and you want to learn from what’s working. But:

  • Manual analysis is shallow — “it’s blue and has good vibes” doesn’t translate to executable creative briefs
  • Wholesale copying is risky — legal issues + your brand looks like a knockoff
  • Hiring consultants is expensive — art direction expertise costs $200+/hour
  • Starting from scratch is slow — why reinvent when competitors already validated a formula?

The hypothesis: a multimodal AI can do art direction-level analysis if given a structured workflow and the right mental checklist.

The Solution: Formula vs. Skin

The core insight that makes this work:

| Component | What It Is | Transferable? |
| --- | --- | --- |
| Formula | Structural choices: lighting direction, composition grid, focal hierarchy, palette weights, copy skeleton | ✅ Yes — these are reusable |
| Skin | Brand-specific elements: the product, colors, exact wording, model, setting | ❌ No — swap with your own |

A winning ad wins for structural reasons that are invisible to the naked eye. The skill’s job is to articulate those structural choices precisely enough that an image model can re-execute them with a different product.

Two Failure Modes to Avoid

  1. Surface mimicry: “Photo of a bottle on a beach, like the reference” — this copies the skin, not the formula, so the output looks nothing like the source.

  2. Wholesale cloning: Copying exact product silhouettes, headlines, or trademarked styling — legal risk and creative dead end.

The sweet spot: the formula transfers, the product is unmistakably yours.

How It Works

Input

The user provides:

  • Reference ad — screenshot or image file of the competitor ad
  • Their product — name, one-sentence description, what makes it distinct
  • Brand colors — hex codes if possible
  • Target language — for localized copy (defaults to English)
  • Optional: target image model, number of variations, platform
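These inputs can be sketched as a small data contract. A minimal sketch in Python; the class and field names are illustrative, not the skill's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Hypothetical input contract for the skill; names are illustrative.
@dataclass
class AdAlchemyInput:
    reference_ad_path: str              # screenshot of the competitor ad
    product_name: str
    product_description: str            # one sentence
    product_distinction: str            # what makes it distinct
    brand_colors: List[str] = field(default_factory=list)  # hex codes
    target_language: str = "en"         # defaults to English
    image_model: Optional[str] = None   # optional: target engine
    variations: int = 5
    platform: Optional[str] = None

# Example values; the hex codes are invented for illustration.
job = AdAlchemyInput(
    reference_ad_path="ads/tastier.png",
    product_name="fitme.lt",
    product_description="Meal-planning and product-scanning app",
    product_distinction="Scans products and builds weekly meal plans",
    brand_colors=["#2E7D32", "#FF8F00"],
    target_language="lt",
)
```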

The 6-Step Workflow

Step 1: Visual Deconstruction (10 layers)

The skill walks through a systematic checklist:

| Layer | What to Capture | Example |
| --- | --- | --- |
| Composition grid | Where focal point sits, aspect ratio, eye travel path | “Lower-third rule of thirds, 9:16 vertical, vertical scroll” |
| Focal hierarchy | Primary → secondary → tertiary visual weight | “Headline > food circles > labels” |
| Lighting recipe | Key/fill/rim directions, temperature, contrast | “Soft overhead key, ~5000K, low contrast, no drama” |
| Palette weights | Color % distribution and semantic roles | “60% warm beige / 20% dark brown / 10% food colors” |
| Typography pattern | Type classes, sizes, placement zones | “Bold condensed sans + handwritten script in headline” |
| Product framing | Archetype: hero, lifestyle, macro, flatlay, etc. | “Checklist infographic with circular macro crops” |
| Environment/surface | What product sits in/on/against | “Flat beige gradient canvas, no scene” |
| Supporting props | Category-level prop description | “Curved hand-drawn arrows connecting circles” |
| Emotional promise | One-line feeling before reading copy | “Organized warmth — a plan that feels homely, not clinical” |
| Copy pattern | Hook type, body structure, CTA verb class | “Authority/list hook, declarative modules, soft-discovery CTA” |

Key discipline: Be concrete. “Golden-hour backlight from camera-right at ~30° elevation” is useful. “Warm lighting” is not.
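The checklist lends itself to a mechanical completeness and concreteness pass. A minimal sketch, assuming illustrative layer keys and a toy vague-term list (the skill itself relies on the model's judgment, not a keyword filter):

```python
# Illustrative layer keys mirroring the 10-layer table above.
LAYERS = [
    "composition_grid", "focal_hierarchy", "lighting_recipe",
    "palette_weights", "typography_pattern", "product_framing",
    "environment_surface", "supporting_props", "emotional_promise",
    "copy_pattern",
]

# Toy examples of answers that fail the "be concrete" discipline.
VAGUE_TERMS = {"nice", "good", "warm lighting", "vibes", "clean look"}

def check_deconstruction(d: dict) -> list:
    """Return problems: missing layers or answers that are too vague."""
    problems = ["missing: " + k for k in LAYERS if k not in d]
    for k, v in d.items():
        if v.strip().lower() in VAGUE_TERMS:
            problems.append(f"too vague: {k} = {v!r}")
    return problems

draft = {k: "TODO" for k in LAYERS}
draft["lighting_recipe"] = "warm lighting"  # fails the discipline check
print(check_deconstruction(draft))          # flags the vague lighting entry
```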

Step 2: Extract Template

Compress the deconstruction into a reusable spec with competitor-specific bits abstracted out. This template is the most valuable artifact — everything flows from it.

Step 3: Cast Template onto User’s Product

Swap the skin while preserving the formula:

  • Replace product with user’s product (same archetype)
  • Replace environment/props with brand-appropriate equivalents
  • Recompute palette around user’s brand colors (same weight distribution)
  • Keep lighting recipe exact — highest-leverage transferable element
  • Preserve composition grid and focal hierarchy exactly
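The formula/skin split above can be expressed as a field-level copy rule. A hypothetical sketch; the field names and the exact split are illustrative:

```python
# Formula fields are preserved exactly; skin fields come from the brand.
FORMULA_FIELDS = {"composition_grid", "focal_hierarchy",
                  "lighting_recipe", "palette_weights"}
SKIN_FIELDS = {"product", "environment", "props", "palette_colors"}

def cast_template(template: dict, brand: dict) -> dict:
    """Keep the formula, swap the skin."""
    out = {k: template[k] for k in FORMULA_FIELDS if k in template}
    for k in SKIN_FIELDS:
        if k in brand:
            out[k] = brand[k]  # brand-appropriate replacement
    return out

template = {
    "composition_grid": "lower-third rule of thirds, 9:16",
    "lighting_recipe": "soft overhead key, ~5000K, low contrast",
    "product": "Tastier meal plan",
    "environment": "flat beige gradient canvas",
}
brand = {"product": "fitme.lt app", "environment": "flat warm-green canvas"}
result = cast_template(template, brand)
assert result["lighting_recipe"] == template["lighting_recipe"]  # formula kept
```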

Step 4: Construct Image-Generation Prompts

Different engines reward different prompt styles:

| Engine | Style | Best For |
| --- | --- | --- |
| Midjourney | Comma-separated descriptors, --style raw | Photographic ads |
| Flux | Natural-language prose, camera settings | Technical control |
| DALL-E / GPT-Image | 2-4 sentences, mood-focused | Quick iterations |
| Nano Banana / Gemini | Product reference + composition description | Product injection |
| Ideogram | Quoted text strings | Embedded typography |
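The per-engine styles can be sketched as a small prompt dispatcher. The templates below are illustrative paraphrases of the table, not the skill's real prompt text (--style raw and --ar are real Midjourney parameters):

```python
def build_prompt(engine: str, spec: dict) -> str:
    subject, light, grid = spec["subject"], spec["lighting"], spec["grid"]
    if engine == "midjourney":
        # Comma-separated descriptors plus --style raw.
        return f"{subject}, {light}, {grid} --style raw --ar 9:16"
    if engine == "flux":
        # Natural-language prose with camera settings.
        return (f"A photograph of {subject}. {light}. "
                f"Composed on a {grid}. Shot at 50mm, f/4.")
    if engine == "ideogram":
        # Quoted text strings for embedded typography.
        return f'{subject}, headline text "{spec["headline"]}", {grid}'
    raise ValueError("unknown engine: " + engine)

spec = {"subject": "meal-planning app on a phone",
        "lighting": "soft overhead key, low contrast",
        "grid": "lower-third rule of thirds",
        "headline": "Tavo planas"}
print(build_prompt("midjourney", spec))
```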

Step 5: Write Native-Language Copy

Write copy directly in the target language — don’t translate from English. Respect:

  • Character limits (Meta headline: 27 chars visible, body: 125 chars above fold)
  • Formal/informal address (tu/vous, du/Sie)
  • Cultural context (holidays, idioms, measurement units)
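The character limits above are easy to enforce mechanically. A minimal sketch using the limits quoted in the text:

```python
# Visibility limits quoted in the text; treat them as approximate.
LIMITS = {"headline": 27, "body": 125}  # chars visible / above fold

def check_copy(copy: dict) -> dict:
    """Return overflow in characters per field (0 = fits)."""
    return {name: max(0, len(copy.get(name, "")) - limit)
            for name, limit in LIMITS.items()}

copy = {"headline": "Nežinai, ką pavalgyt?", "body": "Planas per 2 min."}
print(check_copy(copy))  # both fields fit, so both overflows are 0
```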

Step 6: Generate Variation Set

Five structured variations with testable hypotheses:

| Variation | What It Tests |
| --- | --- |
| Closest-to-reference | Safe A/B anchor — tightest formula execution |
| Hook swap | Same visual, different copy hook (curiosity → pain) |
| Framing swap | Same lighting/palette, different archetype (checklist → phone-hero) |
| Palette inversion | Accent becomes dominant, tests color psychology |
| Wild card | Deliberate departure — highest variance bet |
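The variation set can be sketched as a fixed playbook of (name, hypothesis) pairs; labels and hypotheses below are paraphrased from the table:

```python
# Each variation carries a testable hypothesis, not just visual noise.
VARIATION_PLAYBOOK = [
    ("closest_to_reference", "tightest formula execution is the safe A/B anchor"),
    ("hook_swap", "a pain hook outperforms a curiosity hook here"),
    ("framing_swap", "a different archetype wins at equal lighting/palette"),
    ("palette_inversion", "accent-as-dominant shifts color psychology"),
    ("wild_card", "a deliberate departure finds out-of-formula upside"),
]

def variation_set(base_spec: dict) -> list:
    """Expand one cast template into the five-variation test set."""
    return [{"name": name, "hypothesis": hyp, "spec": dict(base_spec)}
            for name, hyp in VARIATION_PLAYBOOK]

vs = variation_set({"grid": "9:16 checklist"})
print([v["name"] for v in vs])
```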

Output

A structured Markdown file containing:

  • Full deconstruction (show your work)
  • Extracted template (sanity-check the formula)
  • Brand context used
  • 5 variations, each with: image prompt + copy + testing hypothesis
  • Review flags (language confidence, trademark risk, factual claims)

Real Example: fitme.lt × Tastier

Reference: A Tastier meal-plan infographic ad (9:16, checklist with circular food photos)

User product: fitme.lt — meal-planning and product-scanning app

Template extracted:

  • Two-column editorial layout (type left, imagery right)
  • Two-class headline typography (bold sans + handwritten script for brand name)
  • Five circular food photos in dark rings, connected by curved arrows
  • Warm beige canvas, 60/20/10/5 palette distribution
  • “Organized warmth — a plan you can follow” emotional promise

Variations generated:

  1. Closest: Same checklist format, fitme green + orange palette
  2. Hook swap: “Nežinai, ką pavalgyt?” (“Don’t know what to eat?”; pain hook instead of authority)
  3. Framing swap: Phone-hero with orbiting food circles
  4. Palette inversion: Orange-dominant canvas, green accents
  5. Wild card: 4:5 before/after split (“chaos → order”)

Review flags raised:

  • Brand colors were inferred (should verify against actual brand)
  • V1 structurally close to Tastier — consider differentiation if same market
  • Language confidence high on headlines, moderate on body copy

Why This Works

The Advisor Strategy Pattern

This implements the automation/advisor-strategy — expensive expertise (art direction analysis) paired with cheap execution (image generation).

| Role | Who/What | Cost |
| --- | --- | --- |
| Advisor | Claude analyzing the ad | ~$0.10 per analysis |
| Executor | Image model generating the creative | ~$0.05 per image |
| Verifier | Human reviewing outputs | 10 minutes |

Total: ~$0.50 and 15 minutes vs. $500+ and days for traditional creative reverse-engineering.
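The arithmetic behind that total, using the per-unit figures from the table (the quoted ~$0.50 presumably leaves headroom for retries):

```python
# Back-of-envelope: one analysis plus one image per variation.
analysis = 0.10    # advisor: Claude analyzing the ad
per_image = 0.05   # executor: image model
images = 5         # one per variation
total = analysis + per_image * images
print(f"~${total:.2f} per run")  # under the ~$0.50 quoted above
```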

The Skill Pattern

This is a glossary/skill — a reusable instruction package that bundles:

  • Domain expertise (art direction, copywriting, localization)
  • Structured workflow (6 steps with quality checks)
  • Reference materials (visual deconstruction checklist, copy frameworks, model-specific prompt guides)
  • Output format (consistent, actionable deliverable)

See tools/claude-skills for more on building skills.

Limitations

  • Static images only — video ads need keyframe extraction and motion analysis
  • Prompts only — skill outputs prompts, doesn’t generate images
  • Language fluency bounded — rarer languages flagged for human review
  • No verification loop — can’t check if generated images match the formula
  • Single-ad input — no batch mode for analyzing entire ad libraries

Key Takeaways

  1. Formula vs. Skin is the core insight — winning ads win for structural reasons that transfer
  2. Systematic analysis beats intuition — the 10-layer deconstruction forces concrete observations
  3. Variations should be testable — each has a hypothesis, not just visual noise
  4. Native copy, not translation — write directly in target language or flag for review
  5. The skill pattern scales — bundle expertise into reusable workflows

Possible Extensions

  • Video variant — keyframe extraction + pacing/motion analysis
  • Batch mode — analyze entire competitor ad libraries, cluster by structural similarity
  • Image generation integration — actually generate the images, not just prompts
  • Performance feedback loop — inform new variations from campaign performance data

Sources

  • Ad Alchemy skill experiment (Primores internal, April 2026)
  • Reddit inspiration post — Web platform that reverse-engineers ad composition (no longer available)