# Experiments: Testing What Actually Works
TL;DR: We test things before recommending them. This section documents our experiments — what we tried, what worked, what didn’t, and what we learned. No theory without practice.
## Why We Experiment
AI moves fast. What worked six months ago might be obsolete. What sounds good in theory might fail in practice. We run experiments to:
- Validate approaches before recommending them to clients
- Generate real metrics — not hypothetical projections
- Document failures — knowing what doesn’t work is valuable
- Build case studies — experiments that work become client offerings
## Our Method
Every experiment follows this structure (sketched in code after the table):
| Phase | What We Do |
|---|---|
| Hypothesis | Clear statement of what we’re testing |
| Setup | Tools, data, constraints documented |
| Execution | Actually run the test |
| Results | Metrics, outputs, observations |
| Analysis | What worked, what didn’t, why |
| Verdict | Recommendation: adopt, adapt, or abandon |
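To make the structure concrete, here is a minimal sketch of how such a record could be captured in code. This is an illustration, not our actual tooling: every name below (`Experiment`, `Verdict`, the field names) is hypothetical and simply mirrors the phases in the table.

```python
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    """The three outcomes named in the table above."""
    ADOPT = "adopt"
    ADAPT = "adapt"
    ABANDON = "abandon"


@dataclass
class Experiment:
    """One experiment record; one field per phase of the method."""
    hypothesis: str                  # clear statement of what we're testing
    setup: str                       # tools, data, constraints
    results: dict[str, float] = field(default_factory=dict)  # real metrics only
    analysis: str = ""               # what worked, what didn't, why
    verdict: Verdict | None = None   # filled in last: adopt, adapt, or abandon


# Hypothetical record, loosely modeled on the SEO/GEO experiment listed below
exp = Experiment(
    hypothesis="AI product articles are publish-ready after a short human review",
    setup="LLM drafts from catalog data; editor review; time tracked per article",
)
exp.results["review_minutes"] = 22.0
exp.verdict = Verdict.ADOPT
```

Keeping results as named metrics enforces the rule above: real numbers, not hypothetical projections.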
## Cross-Cutting Patterns
Patterns that emerged across multiple experiments:
### Context > Tools
Every experiment confirms the same thing: AI tools without your business context produce generic results. The differentiator is always the data you feed the tool, not the tool itself.
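As an illustration of the pattern, here is a minimal sketch: the same task prompted with and without business context. The `build_prompt` helper and the product data are hypothetical placeholders for whatever model client and catalog you actually use.

```python
def build_prompt(task: str, business_context: str | None = None) -> str:
    """Assemble a prompt; the context block is what differentiates the output."""
    if business_context is None:
        return task  # generic prompt -> generic result
    return (
        f"Business context:\n{business_context}\n\n"
        f"Task: {task}\n"
        "Ground every claim in the context above; do not invent specifics."
    )


# Hypothetical example: same tool, same task; only the data differs.
generic = build_prompt("Write a product description for running shoes.")
grounded = build_prompt(
    "Write a product description for running shoes.",
    business_context="SKU 4821: trail shoe, 240 g, Vibram sole, bestseller in LT",
)
```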
### Honesty Signals
Counter-intuitive finding from SEO/GEO experiments: content that admits weaknesses gets cited more by AI engines. The glossary/honest-assessment pattern emerged from testing, not theory.
### Human-in-the-Loop Required
No experiment produced “set and forget” results. AI generates drafts; humans verify and polish. The speedup is real (5-6x typical), but zero human time is not realistic.
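The arithmetic behind that claim, as a back-of-envelope sketch: the manual baseline below is hypothetical, while the review time matches the 15-30 minute figure reported in the experiment table further down.

```python
def effective_speedup(manual_hours: float, review_minutes: float) -> float:
    """Speedup when the AI draft is effectively free but human review is not."""
    return manual_hours * 60 / review_minutes


# Hypothetical baseline: a 2-hour manual article with ~20-24 min of review
# lands in the reported 5-6x range. review_minutes -> 0 would be the
# "set and forget" case, which no experiment produced.
print(effective_speedup(manual_hours=2.0, review_minutes=22.0))  # ~5.5x
```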
### Native > Translated
For non-English markets, generating directly in the target language beats translating from an English draft. AI can match local idioms when prompted correctly.
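A minimal sketch of the prompting pattern, assuming the OpenAI Python SDK purely as an example client; the model name and wording are placeholders. The point is the instruction to write in the target language from the first token, never to translate an English draft.

```python
from openai import OpenAI  # assumption: any chat-style client works the same way

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical choice; any capable model applies
    messages=[
        {
            "role": "system",
            "content": (
                "You write for the Lithuanian market. Write in Lithuanian "
                "from the first draft, using natural local idioms. "
                "Never write in English first and then translate."
            ),
        },
        {"role": "user", "content": "Draft a product article for a trail-running shoe."},
    ],
)
print(response.choices[0].message.content)
```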
## Experiment Status
| Experiment | Status | Key Finding |
|---|---|---|
| experiments/seo-geo-content-ecommerce | 🌿 Complete | AI articles are publish-ready with 15-30 min review |
| experiments/ad-alchemy-competitor-piggyback | 🌿 Complete | Competitor creative analysis accelerates ad development |
| experiments/ai-visibility-ecommerce | 🌱 In progress | Lithuanian e-commerce AI citation rates vary widely |
## Experiments → Case Studies
When an experiment proves successful and we implement it for a client, it becomes a case study:
- experiments/seo-geo-content-ecommerce → cases/product-article-generator-pigu
- experiments/ad-alchemy-competitor-piggyback → cases/ad-alchemy-creative-reverse-engineering
## Propose an Experiment
Have something you want us to test? We’re always looking for:
- New AI tools to evaluate
- Workflow automation hypotheses
- Content generation approaches
- Competitor intelligence methods
Contact us: primores.org/contact
## Related
- cases/product-article-generator-pigu — Experiment that became a client offering
- glossary/honest-assessment — Pattern discovered through experimentation
- glossary/geo-anchor — Another experiment-derived pattern
- index — Full wiki catalog