Activity Log


This log tracks what happens in the wiki — sources ingested, pages created, experiments run, questions explored.


[2026-04-24] create | Fresh Niche Hunter Run — Three Niches Evaluated

Live validation of three wiki expansion candidates using five-axis methodology.

Case study created: cases/niche-hunter-fresh-2026-04

Niches evaluated:

Niche | Verdict | Key Finding
AI e-commerce content | MAYBE | Tool-dominated SERPs (same failure pattern as TikTok scheduling)
AI visibility auditing | GO | Emerging category, no methodology authority, maps to Experiment 01
Reddit-to-article workflow | GO | Completely uncontested, unique IP (substance ranking)

SERP patterns documented:

  • Tool-intent SERPs can’t be won with articles (Copy.ai, Hypotenuse dominate AI product descriptions)
  • Emerging categories like “GEO audit” are goldmines before big players notice
  • Unique terminology (“substance ranking”) creates defensible moats

Recommended build order: Reddit workflow → AI visibility → E-commerce (narrowed)

Wiki stats: Now at 91 pages, 8 case studies.


[2026-04-24] create | Niche Hunter Case Study + Template

Created comprehensive case study from Primores test run data, plus reusable template.

Wiki pages created:

Case study contents:

  • Phase 1: Three candidate niches evaluated
  • Phase 2: Five-axis validation with full evidence tables
  • Phase 3: 118-article map breakdown (8 pillars, 62 clusters, 32 FAQ, 16 glossary)
  • “What the Skill Caught” section documenting three framing errors avoided

Key insights documented:

  1. Brand-name collision — the “ai ad creative” pool was dominated by AdCreative.ai product queries
  2. Wrong-shape SERP — Product-intent queries can’t be won with articles
  3. Audience/SERP divergence — Winnable SERP ≠ valuable traffic if audience doesn’t match buyers

Wiki stats: Now at 90 pages, 7 case studies.


[2026-04-24] ingest | Niche Hunter Skill + Super-Niche & Topical Authority Concepts

Documented Primores internal skill for finding winnable content niches and building article maps.

Source: experiment/06-niche-hunter/ skill + test run on primores.org

Wiki pages created:

Key frameworks extracted:

  • Five-axis validation rubric (Size, Competition, Commercial Density, Expertise Fit, AEO Gap)
  • Article role taxonomy (Pillar → Cluster → FAQ → Glossary)
  • Three-phase build strategy (Quick wins → Authority core → Completion)
  • Super-niche formula: Audience × Problem × Context
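The five-axis rubric above can be sketched as a scoring function. A minimal illustration in Python — the axis names come from the log, but the 0-3 scale, the cutoffs, and the verdict logic are assumptions, not the skill's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AxisScores:
    size: int                # search demand, assumed 0-3
    competition: int         # inverted: 3 = weak incumbents
    commercial_density: int  # buyer intent in the query pool
    expertise_fit: int       # can we write with real authority?
    aeo_gap: int             # answer-engine optimization gap

def verdict(s: AxisScores) -> str:
    """Collapse five axis scores into a go/maybe/skip verdict (illustrative thresholds)."""
    total = s.size + s.competition + s.commercial_density + s.expertise_fit + s.aeo_gap
    if min(s.size, s.competition, s.expertise_fit) == 0:
        return "SKIP"   # any fatal axis kills the niche outright
    if total >= 11:
        return "GO"
    return "MAYBE"
```

The point of the structure is the early exit: one zeroed axis (e.g. a SERP you cannot win) overrides a strong total, matching the SKIP verdicts in the test runs.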

Skill capabilities documented:

  • Phase 1: Hypothesis generation (5-10 candidates)
  • Phase 2: Validation against five axes with go/maybe/skip verdicts
  • Phase 3: Article map generation (50-200 interlinked articles)
  • Phase 4: (Optional) Article drafting with frontmatter + schema markup

Real example included: Primores.org test run showing one GO, one MAYBE, one SKIP verdict with rationale for each.

Cross-links to existing wiki:

Wiki stats: Now at 88 pages, 25 glossary entries, 9 tool reviews.


[2026-04-24] create | Wiki Methodology Page + LLM Usage Guide

Created public methodology page and added prominent “Use with your LLM” guide to the index.

Problem solved: CLAUDE.md was a private file but was being referenced from public wiki pages. Visitors clicking those links would get 404 errors on the published site.

Solution:

  1. Created methodology — public page explaining how the wiki is built
  2. Added “🤖 Use This Wiki With Your LLM” section at top of index
  3. Updated all CLAUDE.md references across wiki to point to methodology

Wiki page created:

  • methodology — How this wiki is built (three-layer structure, three operations, status system)

Files updated:

Key additions to index:

  • Example prompts for LLM users (“Based on the Primores wiki, how should I…”)
  • Claude Code usage note for folder-level context
  • Why the wiki structure works for AI (TL;DRs, structured headings, cross-links)

Wiki stats: Now at 85 pages.


[2026-04-24] ingest | New Site SEO Strategy (Reddit Thread Analyzer Output)

Ingested SEO article produced by the Reddit Thread Analyzer skill from r/DigitalMarketing thread.

Source: Reddit thread “Is it actually possible to rank a new site in 2026?” (27 comments)
Article: articles/2026-04-23-digitalmarketing-rank-new-site.md

Wiki page created:

Key frameworks extracted:

  • “Targeting Problem, Not Content Problem” — If 15 guides = 10 visits/week, topics are wrong
  • “Trust to Win, Not Pay to Win” — New sites build trust, not buy in
  • “Narrow the Battlefield” — Pick terrain incumbents don’t defend
  • “Weird, Specific, Long” — Long-tail keyword pattern

Practical tactics documented:

  • GSC audit technique (find queries with impressions but low rankings)
  • Pinterest as SEO channel (pins rank in Google)
  • Hyper-specific keyword transformations (with real examples)
  • AI search favors small sites with specific answers
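The GSC audit tactic above is easy to script. A hedged sketch assuming a standard Search Console queries export; the impression and position thresholds are illustrative, not part of the documented technique:

```python
# Surface queries that already earn impressions but rank too low to get
# clicks ("striking distance" queries, roughly page 2 of results).
def striking_distance(rows, min_impressions=100, min_position=8.0, max_position=20.0):
    hits = [
        r for r in rows
        if r["impressions"] >= min_impressions
        and min_position <= r["position"] <= max_position
    ]
    # Highest-demand opportunities first
    return sorted(hits, key=lambda r: r["impressions"], reverse=True)

# Hypothetical rows shaped like a GSC "Queries" export
rows = [
    {"query": "ai audit checklist", "impressions": 640, "position": 12.4},
    {"query": "what is geo", "impressions": 90, "position": 15.0},
    {"query": "primores wiki", "impressions": 300, "position": 2.1},
]
```

Queries that pass the filter have proven demand and need only an on-page push, which is why the thread flagged this as the highest-leverage audit for a new site.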

Cross-links to existing wiki:

Meta-observation: This is the first full article produced by the Reddit Thread Analyzer skill to be ingested into the wiki — demonstrating the 05 → wiki pipeline working as designed.


[2026-04-23] create | Reddit Thread Analyzer Skill + Substance Ranking Glossary

Documented Primores internal skill for transforming Reddit threads into SEO-optimized articles.

Pages created:

Core insight: Reddit upvotes measure popularity, not truth. The skill’s 6-axis substance rubric corrects for this:

  • Substance (0-3): sentiment → opinion → reasoning → evidence+numbers
  • Source type: first-hand > professional > second-hand > inferred
  • Contrarian bonus: downvoted-but-reasoned often contains signal
  • Actionability: can reader do/decide/change?

Workflow (6 stages):

  1. Capture thread (JSON endpoint or saved file)
  2. Parse comment tree with metadata
  3. Score every comment on substance rubric
  4. Extract building blocks (numbers, frameworks, case studies)
  5. Honest go/no-go gate (red-light unrankable threads)
  6. Produce highlights file and/or SEO article
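Stage 3 and the rubric above can be illustrated with a small re-ranker. The axis names follow the log; the weights and the contrarian-bonus condition are assumptions about how such a rubric might combine, not the skill's real implementation:

```python
# Rank a source type higher the closer it is to first-hand experience.
SOURCE_RANK = {"first-hand": 3, "professional": 2, "second-hand": 1, "inferred": 0}

def substance_score(comment):
    score = comment["substance"]             # 0-3: sentiment -> evidence+numbers
    score += SOURCE_RANK[comment["source"]]  # first-hand beats inferred
    score += 1 if comment["actionable"] else 0
    if comment["score"] < 0 and comment["substance"] >= 2:
        score += 2                           # contrarian bonus: downvoted-but-reasoned
    return score

def rerank(comments):
    """Sort by substance, not by upvotes."""
    return sorted(comments, key=substance_score, reverse=True)
```

Under this kind of scoring, a downvoted comment with evidence and numbers outranks a highly upvoted one-liner — the mechanism behind the ~30% divergence from the popularity sort noted below.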

Business applications:

  • Content marketing from community insights
  • Research swipe files with “worth stealing for” hooks
  • Keyword monitoring + audience engagement
  • Client briefings with noise filtered out

Key metric: ~30% divergence from the popularity sort is typical.


[2026-04-23] ingest | Reddit Shill Detection Article → Wiki Synthesis

Ingested original investigative article about Reddit astroturfing patterns.

Source: articles/2026-04-23-reddit-shill-detection.md (kept for blog publishing)

Wiki synthesis:

Private playbook update:

  • private/content-playbook/reddit-style-guide.md — Added “Anti-Shill Patterns” section

Key extractions:

  • Three-Post Pattern: Case study → Outcome → Concern troll (named framework)
  • Detection signals: Multi-sub test, tool drop in step 2, templated comments
  • Authentic alternatives: Inverse of each shill pattern
  • Community immune response: Reddit catches shills within hours

Connection to existing wiki: Links to glossary/honest-assessment (inverse pattern), marketing/ai-marketing-case-studies (what real case studies look like).


[2026-04-23] create | AI Implementation Patterns (Meta-Analysis of 1,048 Cases)

Created comprehensive patterns page synthesizing insights from the entire Google Cloud dataset.

Key findings from analysis:

  • 17.7x more augmentation than replacement language in real deployments
  • Document processing is #1 use case (46% of all cases)
  • Four universal patterns appear in every industry: customer communication, workflow automation, data analysis, personalization
  • 90%+ improvements share one trait: eliminate time on repetitive tasks (54% mention time reduction)
  • 43% use Gemini — off-the-shelf models with domain context, not custom AI

The data contradicts common narratives:

  • “AI replaces workers” → Reality: 443 augmentation cases vs 25 replacement
  • “You need massive data” → Reality: Most work on existing docs and conversations
  • “Results take years” → Reality: Median improvement is 50%, many in weeks

New page: automation/ai-implementation-patterns — marked as 🌳 evergreen (data-backed, comprehensive)


[2026-04-23] update | Added All 1,048 Google Cloud Cases to Wiki

Expanded wiki pages to include ALL cases from the dataset, not just metric-rich ones.

Added non-metric cases:

  • Customer Service: +88 cases (128 total)
  • Marketing: +168 cases (212 total)
  • Cross-Industry: +201 cases + Automotive/Manufacturing/Media (+58)
  • HR & Workforce: +65 cases (84 total)
  • Security: +67 cases (79 total)
  • Retail & E-commerce: +53 cases (71 total)
  • Healthcare: +50 cases (62 total)
  • Developer Tools: +27 cases (33 total)
  • Finance & Banking: +20 cases (32 total)
  • Supply Chain: +15 cases (22 total)
  • Legal: +10 cases (15 total)

SEO value: Company names now searchable across all industries (BMW, Mercedes, Uber, etc.)


[2026-04-23] ingest | Google Cloud AI Use Cases Dataset (1,048 Cases → 232 with Metrics)

Processed Google’s full compilation of real-world gen AI deployments. Created 10 new wiki pages covering all 232 metric-rich cases.

New pages created (10):

Updated:

Extraction process:

  • Source: 795KB HTML file, parsed with Python regex
  • 1,048 total cases → 232 with quantified metrics
  • Categorized by industry, formatted as wiki pages
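The metric/non-metric split described above can be sketched with a single regex pass. The pattern below is illustrative — the actual parser and its patterns are not shown in the log:

```python
import re

# Match quantified claims like "45%", "2.5x", or "30 percent" in a case blurb.
METRIC = re.compile(r"\b\d+(?:\.\d+)?\s*(?:%|percent\b|x\b)", re.IGNORECASE)

def split_by_metric(cases):
    """Separate metric-rich cases from the rest of the dataset."""
    with_metrics = [c for c in cases if METRIC.search(c["summary"])]
    without = [c for c in cases if not METRIC.search(c["summary"])]
    return with_metrics, without

# Hypothetical rows shaped like the parsed case data
cases = [
    {"company": "Acme", "summary": "cut handling time by 45%"},
    {"company": "Globex", "summary": "improved team morale"},
]
```

A filter like this is how 1,048 scraped cases collapse to the 232 with quantified outcomes; anything without a number stays in the full dataset but off the metric-focused pages.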

Raw data:

  • raw/cases/google-cloud-ai-use-cases-all.json — Full 1,048 cases
  • raw/cases/google-cloud-ai-use-cases-categorized.json — Categorized with metrics
  • raw/cases/metric-cases-by-category.json — 232 metric-rich cases only

Cross-cutting insight: the pattern that emerges most strongly is that documentation and repetitive-task automation show 2-5x ROI regardless of industry. Customer service, HR, legal, healthcare — the use cases look different but the mechanics are identical.


[2026-04-23] create | Wiki Structure Improvements + About Page

Implemented planned changes from earlier session:

New pages (2):

  • about — About Primores + Andrej bio (ex-Adform VP Eng, Monetha co-founder, Vilnius)
  • experiments/overview — Experiments methodology with cross-cutting patterns

Renamed:

  • getting-started.md → contributing.md — Clarifies this is a contributor guide

Updated:

  • index — Added about page, experiments overview, updated stats (68 pages)
  • llms.txt — Real primores.org/wiki/ URLs, refreshed stats, notable content section
  • CLAUDE.md — Added optional frontmatter fields (canonical, og_image, author) with converter defaults

Client naming policy verified: pigu.lt, fitme.lt, varle.lt are publicly cited in wiki case studies.


[2026-04-23] experiment | First Wiki-to-Content Test (Reddit Response)

First test of using wiki knowledge to create external content:

  • Drafted response to Reddit post about AI marketing tools
  • Wiki provided substance (patterns, case study numbers, advisor+executor model)
  • Manual editing added authentic Reddit voice

Key learnings:

  • AI drafts are “too polished” for Reddit — need intentional imperfection
  • Lowercase, minor typos, conversational flow = trust signals
  • Karpathy’s LLM Wiki reference adds external credibility
  • Wiki gives substance, human gives voice

Files created:

  • drafts/reddit-response-ai-marketing-tools.txt — Final response
  • private/content-playbook/reddit-style-guide.md — Internal style notes (not public wiki)

Also fixed:


[2026-04-22] create | Product Article Generator — Full Case Study + Pattern Extraction

Completed comprehensive coverage of the Product Article Generator skill:

  • Case study documenting the pigu.lt implementation
  • Extracted two reusable GEO patterns into glossary entries
  • Enriched experiment with concrete examples from Hisense freezer output
  • Added cross-links across wiki

Why this skill is critical:

  1. Revenue-generating work — active client work for pigu.lt, not just an experiment
  2. SEO/GEO convergence — solves both Google SEO AND AI citation in one workflow
  3. Counter-intuitive insight — honest assessments (admitting weaknesses) = more AI citations
  4. Scalability proof — demonstrates AI handling long-tail content at 10,000+ SKU scale

Pages created (3):

Pages updated (6):

Patterns extracted:

  • GEO Anchor: First sentence must be quotable by AI — product + capacity + audience + value in one sentence
  • Honest Assessment: Naming real weaknesses (with cost impact) increases AI trust and citations

Product Article Generator now has proper wiki coverage:

Component | Status
Tool page | tools/product-article-generator
Case study | cases/product-article-generator-pigu
Experiment | experiments/seo-geo-content-ecommerce
Glossary patterns | glossary/geo-anchor, glossary/honest-assessment

[2026-04-22] create | SEO/GEO Content Experiment — AI Article Generation for E-commerce

Created experiment page testing AI-generated product articles for e-commerce SEO and AI search visibility at pigu.lt.

Pages created (1):

Pages updated (2):

Business problem framed:

“E-commerce sites have 10,000+ products needing unique content. Human writers cost €5-15 per article. AI search engines need specific structure. Multiple languages multiply costs. How do we scale content without sacrificing quality?”

Results documented:

  • 5-6x speedup (20-30 min AI+review vs. 2-3 hours human writing)
  • ~80% cost reduction (€2-3 vs. €10-15 per article)
  • Consistent SEO/GEO optimization (schema markup, GEO anchors, honest assessments)
  • Human review remains non-negotiable (15-30 min per article)

Product Article Generator now has proper experiment coverage.


[2026-04-22] create | Ad Alchemy Experiment — Piggybacking Competitor Concepts

Created experiment page testing the “piggybacking competitor ad concepts” use case using the fitme.lt × Tastier output.

Pages created (1):

Pages updated (2):

Business problem framed:

“I see competitors running successful ads but I don’t know how to learn from them. Copying feels wrong. Hiring consultants is expensive. Starting from scratch wastes the market intelligence sitting right in front of me.”

Results documented:

  • 10-layer formula extraction: all layers concretely articulated
  • 5 variations generated with distinct testing hypotheses
  • Production-ready prompts and native Lithuanian copy
  • Review flags for trademark adjacency, language confidence, brand color verification

Ad Alchemy now has the same structure as AI Visibility:

  • Case study (the skill/approach)
  • Experiment (testing it on a real problem)

[2026-04-22] create | Ad Alchemy Case Study

Analyzed the Ad Alchemy skill experiment and documented it as a wiki case study. This skill reverse-engineers competitor ads into reusable creative formulas.

Source: Internal experiment at /Documents/_tasks/experiment/02-ad-alchemy/

Pages created (1):

Pages updated (2):

  • competitor-analysis/overview — Added “Creative Reverse Engineering” section with Formula vs. Skin framework; upgraded to 🌿 growing status
  • index — Added case study, updated stats (61 pages, 5 case studies)

Key insights:

  1. Formula vs. Skin Framework:

    • Formula (transferable): lighting, composition, focal hierarchy, palette weights, copy skeleton
    • Skin (brand-specific): product, exact colors, wording, models
    • AI articulates structural choices precisely enough for image models to re-execute
  2. 10-Layer Visual Deconstruction:

    • Composition grid, focal hierarchy, lighting recipe, palette weights
    • Typography, product framing archetype, environment, props
    • Emotional promise, copy pattern
    • Forces concrete observations (“30° backlight” not “warm lighting”)
  3. Structured Variations:

    • 5 variations with testable hypotheses (not random visual noise)
    • Closest-to-reference, hook swap, framing swap, palette inversion, wild card
  4. Advisor Strategy Pattern:

    • Claude as advisor ($0.10/analysis) + image model as executor ($0.05/image)
    • 15 minutes vs. days for traditional creative reverse-engineering

Connection to wiki: Fills competitor-analysis gap, demonstrates automation/advisor-strategy pattern, exemplifies tools/claude-skills approach.


[2026-04-21] create | AI Tools Comparison (When to Use What)

Created comprehensive comparison page covering four categories of AI tools for different user types.

Sources: MindStudio, Digital Applied, Lindy, DEV Community, IntuitionLabs, Airtable, Medium (7 sources synthesized)

Pages created (1):

Pages updated (1):

  • index — Added comparison, updated stats (60 pages, 3 comparisons)

Key frameworks:

  1. Four Tool Categories:

    • AI Platforms (ChatGPT, Claude, Gemini) — general business
    • CLI Tools (Claude Code, Codex CLI, Gemini CLI) — developers
    • Computer Use Agents — desktop/browser automation
    • No-Code Builders (Zapier, Make, Lindy) — workflow automation
  2. Task-to-Tool Routing:

    • Writing/quality → Claude
    • Research/large context → Gemini
    • Creative/images → ChatGPT
    • Browser automation → Gemini (DOM-aware)
    • File ops/Windows → Claude Computer Use
    • Quick automations → Zapier/Make
  3. The Hybrid Approach:

    • Sophisticated users don’t pick one tool
    • Build routing layer: cheap tools for exploration, accurate tools for output
    • Connects to existing automation/advisor-strategy pattern
  4. No-Code Reality:

    • Most people can build a functional agent in 15-60 minutes
    • Zapier for beginners, Make for complex logic, Lindy for sales/ops

Business value: Directly actionable for the wiki’s target audience (business owners, marketers). Answers “which AI tool should I use?” with specific recommendations by role and task type. Cross-links to existing advisor-strategy and enablement-levels pages.


[2026-04-21] ingest | Wharton AI Agent Adoption Blueprint

Enriched the AI enablement levels page with psychological adoption research from Wharton + Science Says collaboration.

Source: AI Agent Adoption Blueprint — Science Says × Wharton School (April 2026)
Contributors: Google, Zapier, ServiceNow, Wolters Kluwer, Workato, Concentrix (700,000+ employees surveyed)

Pages updated (1):

Key insights:

  1. Three Psychological Frictions Blocking Adoption:

    • Perceived Competence: “Can this agent actually do this?”
    • Trust: “Should I trust it with this specific task?”
    • Delegation of Control: “How much autonomy should I give?”
  2. Counterintuitive UX Finding:

    • Agents with friendly/warm tone are perceived as LESS competent
    • Clarity and reasoning visibility beat personality
    • “Pratfall Effect” — too personable reduces professional credibility
  3. The Goldilocks Zone:

    • Moderate autonomy optimal — propose actions, let humans approve
    • Maps to Levels of Automation theory (Sheridan & Verplank, 1978)
    • Middle levels outperform full automation OR full manual control
  4. Level Progression Blockers:

    • Level 1→2: Don’t trust standardized AI (haven’t seen reasoning)
    • Level 2→3: Won’t delegate execution (autonomy anxiety)
    • Level 3→4: Can’t trust agent to know its own limits

Business value: Directly explains WHY the existing enablement levels page says “the jump is psychological, not technical.” Now we have the specific psychology framework. Cross-links to TPB (perceived behavioral control) and Rumpelstiltskin Effect (naming limitations builds trust).


[2026-04-21] ingest | Rumpelstiltskin Effect (Problem Naming Psychology)

Ingested marketing psychology concept from Why We Buy newsletter. The principle: naming a customer’s vague problem with a specific term builds trust and positions your brand as the solution.

Source: The Rumpelstiltskin Effect — Why We Buy newsletter, Katelyn Bourgoin (April 2026)

Pages created (1):

Pages updated (1):

  • index — Added glossary entry, updated stats (59 pages, 19 glossary entries)

Key insights:

  1. Named problems feel solvable — Unnamed problems feel overwhelming and personal. A label converts unknown into known.

  2. The brand that names owns the solution — Febreze owns “noseblind,” Snickers owns “hangry,” chiropractors own “tech neck.” Whoever coins the term gets associated with the fix.

  3. Real examples with outcomes:

    • Febreze “noseblind” — created awareness of problem people didn’t know they had
    • Snickers “hangry” — entered Oxford Dictionary 2018, became cultural phenomenon
    • Deepwrk “body doubling” — app became synonymous with productivity method
  4. Finding the name: Interview customers: “Before you found us, how did you describe the problem?” That language is the name.

  5. SEO/GEO connection: Naming creates search queries you own. “Am I noseblind” leads to Febreze. AI models learn the association.

Business value: Practical positioning technique that connects psychology to sales. Links to existing wiki content on emotional triggers (S-O-R model) and AI visibility (terminology ownership).


[2026-04-21] ingest | AI Marketing Case Studies (Real Results)

Ingested practical AI marketing case studies with specific metrics from multiple sources. Focus: named companies, measurable outcomes, no marketing fluff.

Sources:

Pages created (1):

Pages updated (1):

  • index — Added case studies page to Marketing section, updated stats (58 pages, 18 domain pages)

Key findings:

  1. Brand Voice Training Matters

    • Adore Me, Vector, Virgin Holidays all invested in teaching AI their specific voice
    • Generic AI outputs underperform customized implementations
  2. Specific Metrics That Stand Out

    • A.S. Watson: 396% better conversion with AI skin analysis advisor
    • Adore Me: Product descriptions 20 hours → 20 minutes
    • Heinz DALL-E campaign: 850M+ impressions, 25x media ROI
    • HubSpot intent-based nurturing: 82% conversion increase
  3. Small Business Success Pattern

    • The Original Tamale Company: 22M views, 1.2M likes using ChatGPT for scripts
    • Vector B2B: LinkedIn following 7K→11K, demos quadrupled with 15-min human review
    • AI democratizes content creation — budget no longer the differentiator
  4. Augment, Don’t Replace

    • Verizon: AI predicts 80% of call reasons, empowers agents
    • Best ROI comes from human+AI collaboration, not automation replacement

Business value: Fills the wiki gap of practical, non-theoretical marketing content. Provides proof points for consulting conversations. Organized by use case (e-commerce, content, creative, email, support, small business).


[2026-04-20] create | Agenica.ai Competitor Ads Case Study

Created case study comparing AI agent approach to competitor ad monitoring versus manual Meta Ad Library searching.

Pages created (1):

Pages updated (2):

Key insights:

  1. Manual Monitoring Fails

    • Less than one-third of competitive intelligence programs engage daily/weekly
    • Each check is a point-in-time snapshot with no historical context
    • By discovery time, competitor campaigns have already run their course
  2. AI Agent Advantage

    • Continuous monitoring with accumulated history
    • Proactive alerts vs reactive checking
    • Role-based insights (CMO vs PPC Manager vs Social Manager)
    • Pattern detection from historical baseline
  3. What Accumulated Data + Chat Enables (key differentiator)

    • Identify winning ads (ads running for months = proven performers)
    • Detect messaging angles being A/B tested (and which failed/succeeded)
    • Map influencer partnerships via Instagram tracking
    • Spot seasonal patterns and launch playbooks
    • Build self-updating competitive creative swipe file
  4. The Core Shift

    • Manual = archaeology (digging through what competitors did)
    • AI agent = weather forecasting (detecting patterns, predicting moves)
    • Chat interface = queryable competitive intelligence

Business value: Strengthens the thin competitor-analysis domain with a concrete, actionable case study. Goes beyond “monitoring is good” to show specific strategic actions enabled by accumulated data.


[2026-04-20] ingest | TPB Framework & Multi-Model Synthesis

Ingested dissertation providing comprehensive multi-framework analysis of AI’s influence on consumer behavior.

Source: Marshall, S. (2024). “A systematic analysis of AI in digital marketing and its effects on consumer behaviour and decision making in E-commerce.” University of Bedfordshire Dissertation.
Type: Academic dissertation (38 sources, 3 theoretical frameworks, systematic literature review)

Pages updated (1):

Pages created (1):

  • glossary/tpb — Theory of Planned Behaviour glossary entry

Key concepts extracted:

  1. Theory of Planned Behaviour (TPB)

    • Attitude: Trust and faith drive initial intention to engage
    • Subjective Norms: Surprisingly WEAK correlation with AI acceptance
    • Perceived Behavioral Control: Directly affects ease of use and purchase behavior
  2. Multi-Framework Synthesis

    • S-O-R: Emotional responses (stimulus → feeling → response)
    • TAM: Rational assessment (usefulness → ease → acceptance)
    • TPB: Intentional factors (attitude + norms + control → intent)
    • Together: Complete picture of consumer AI behavior
  3. Subjective Norms Finding

    • Peer pressure has weaker effect on AI adoption than expected
    • Consumers need personal motivation to engage with AI
    • Social proof less effective for AI features than traditional products
  4. Cultural Context

    • Tech-embracing cultures (Japan, Korea): Higher acceptance
    • Privacy-conscious markets (Germany, EU): Conditional acceptance
    • Most research ignores this crucial variable
  5. Research Gaps Identified

    • Negative effects (fatigue, frustration) underexplored
    • Long-term preference evolution not studied
    • TAM may need augmentation for modern AI e-commerce

Business value: Completes the theoretical trifecta (S-O-R + TAM + TPB) for understanding AI consumer behavior. Key finding that subjective norms are weak predictors suggests marketers should focus on personal benefits rather than social proof for AI features.


[2026-04-20] ingest | Vietnamese Gen Z Algorithm & Mental Well-being Research

Ingested academic research on how TikTok’s recommendation algorithms affect Gen Z mental well-being.

Source: Nguyen, K.A.T., Duong, B.N., & Tran, N.A.V. (2025). “The Impact of TikTok’s Social Media Recommendation Algorithms on Generation Z’s Perception of Mental Well-Being in Ho Chi Minh City.” ICBESS-2025 Conference.
Type: Academic paper (n=419 Vietnamese TikTok users, ages 16-27)

Pages updated (1):

Pages created (1):

  • glossary/smra — Social Media Recommendation Algorithms glossary entry

Key concepts extracted:

  1. Mediation Model

    • Algorithms don’t directly harm mental health
    • Effects work through cognitive interpretation (arousal, information perception, empathy)
    • Model explains 67.5% of variance in mental well-being — strong predictive power
  2. Path Coefficients

    • Personalized Content → Arousal Level: β = 0.533 (strongest)
    • Personalized Content → Information Perception: β = 0.451
    • Personalized Content → Empathy: β = 0.440
    • Personalized Content → Social Interaction: β = 0.416
  3. Surprising Non-Findings

    • Emotion → Mental Well-being: NOT significant (β = 0.009, p = 0.873)
    • Social Comparison → MWB: NOT significant (β = -0.004, p = 0.947)
    • Suggests emotional desensitization and selective comparison in Vietnamese Gen Z
  4. MWB vs PMWB Distinction

    • Mental Well-being (MWB): Objective psychological functioning
    • Perceived Mental Well-being (PMWB): Subjective self-evaluation
    • Different factors affect each — algorithms may primarily affect perception
  5. Policy Implications

    • Digital literacy is the key intervention
    • Algorithm transparency builds trust
    • Emotional filtering and reset mechanisms recommended

Business value: First empirical data on SMRA effects in Southeast Asia. Cultural insight that Vietnamese Gen Z may show emotional resilience absent in Western samples. Reinforces importance of information quality over emotional charge in content strategy.


[2026-04-20] ingest | AI Personalization Evolution & Ethics

Ingested comprehensive literature review on AI-driven personalization in e-commerce.

Source: Iqbal, F. et al. (2025). “AI-driven personalization in e-commerce: evaluating the transformative effects on consumer behavior.” International Journal of Science and Research Archive.
URL: https://doi.org/10.30574/ijsra.2025.16.1.2035
Type: Literature review (10 pages, 33 references)

Pages updated (1):

Key concepts extracted:

  1. Three Eras of Personalization

    • Pre-AI: Rule-based, collaborative filtering (static)
    • Machine Learning: Real-time behavior analysis (2010s)
    • Deep Learning: Hyper-personalization at scale (2020s — current frontier)
  2. New Risk Concepts

    • “Creepy Factor” — when personalization feels invasive
    • Filter Bubbles — AI narrows choice by showing similar content
    • Autonomy Erosion — over-reliance on algorithmic suggestions
  3. Demographic Differences

    • Younger/tech-savvy: embrace personalization
    • Older consumers: more skeptical, need transparency
  4. Regulatory Landscape

    • GDPR, CCPA, EU AI Act pushing toward explainable AI
    • Black box personalization becoming legally risky
  5. Trust-Loyalty Mediation

    • Trust mediates between personalization quality and loyalty
    • Lose trust = lose the customer

Business value: Expanded ethics section with specific risks (creepy factor, filter bubbles) and regulatory considerations. Page now covers the full personalization landscape from evolution to implementation to risks.


[2026-04-20] ingest | TAM Model & Cognitive Purchase Factors

Ingested academic research on how AI-enabled ease of use affects purchase intention.

Source: Lopes, J.M., Silva, L.F., & Massano-Cardoso, I. (2024). “AI Meets the Shopper: Psychosocial Factors in Ease of Use and Their Effect on E-Commerce Purchase Intention.” Behavioral Sciences.
URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC11273900/
Type: Academic paper (n=1,438 Portuguese consumers)

Pages updated (1):

Key concepts extracted:

  1. Technology Acceptance Model (TAM)

    • Consciousness (β=0.40) — strongest predictor; users who understand AI find it easier
    • Faith/Trust (β=0.34) — confidence in AI reliability
    • Perceived Control (β=0.12) — feeling in charge of AI features
    • Ease of Use (β=0.61) — direct effect on purchase intention
  2. Cognitive Load Reduction

    • AI features (chatbots, recommendations, smart search) reduce mental effort
    • Less effort → easier decisions → more purchases
  3. Surprise Finding

    • Subjective norms (peer pressure) did NOT directly affect ease of use
    • Users adopt AI features based on understanding and trust, not social pressure
  4. Practical Implications

    • Explain what AI does (don’t hide it)
    • Show why recommendations were made
    • Give users control over AI features
    • Build trust through transparency

Status upgrade: Page now covers both emotional triggers (S-O-R) and cognitive factors (TAM) — comprehensive purchase psychology guide. Upgraded to 🌿 growing.


[2026-04-20] ingest | S-O-R Model & Social Commerce Psychology

Ingested academic research on how TikTok’s recommendation system drives impulse purchases.

Source: Li, J. (2025). “Applying the S-O-R Model to Algorithmic Commerce: How TikTok’s Recommendation System Stimulates Impulsive Consumer Behavior.” Academic Journal of Management and Social Sciences.
URL: https://drpress.org/ojs/index.php/ajmss/article/view/33210
Type: Academic paper (University of Toronto)

Pages created (1):

Pages updated (1):

  • automation/agentic-commerce — Added “Human Psychology vs. Agent Logic” section connecting human impulse triggers to AI agent behavior

Key concepts extracted:

  1. S-O-R Framework (Practical)

    • Stimulus: What you show (content, offers, social signals)
    • Organism: How they feel (emotional state activated)
    • Response: What they do (purchase, share, bounce)
  2. Three Core Triggers

    • Personalized recommendations → emotional arousal
    • Social proof signals → trust and FOMO
    • Scarcity cues → urgency and impulse
  3. Platform as Behavioral Environment

    • TikTok isn’t neutral distribution — it’s engineered to compress decision-making
    • Same insight applies to any social commerce platform
  4. Agentic Commerce Connection

    • Human triggers may not work on AI agents (an agent can verify a scarcity claim against inventory APIs)
    • Raises question: separate optimization strategies for humans vs. agents?

Business value: Practical checklist for implementing psychological triggers ethically. Connects current social commerce tactics to future agentic commerce preparation.


[2026-04-20] lint + create | Wiki Maintenance Session

Ran full wiki lint check and addressed critical issues.

Lint findings:

  • 5 days since last activity (approaching 7-day warning threshold)
  • 2 broken wikilinks in questions/what-ai-tools-actually-deliver-roi.md
  • competitor-analysis/ domain was completely empty
  • 7 content seedlings identified for potential upgrade
  • 0 orphan pages (healthy linking)

Pages created (1):

Pages updated (2):

Broken links fixed:

  • Removed questions/how-to-evaluate-ai-tools (didn’t exist)
  • Removed questions/ai-automation-that-works (didn’t exist)
  • Added links to: automation/finding-ai-use-cases, automation/ai-enablement-levels, glossary/llm-evals, questions/ai-as-personal-advisor

Competitor Analysis overview covers:

  • 5 key use cases (pricing, content, sentiment, signals, market share)
  • Tool landscape (Semrush, SimilarWeb, SpyFu, Crayon/Klue)
  • AI-specific considerations for agentic search era
  • Open questions for future exploration

Business value: The competitor-analysis domain is no longer empty — this is a core consulting area that needed representation.

Wiki health restored: Activity resumed after 5-day gap.


[2026-04-15] ingest | AI Visibility Audit Skill + E-commerce Experiment

Documented the AI Visibility Audit Claude skill and created the wiki’s first experiment entry.

  • Source: /Documents/_tasks/experiment/01-ai-visibility/
  • Type: Claude skill (internal tool)

Pages created (2):

Pages updated (4):

Key skill features documented:

  1. 5-Dimension Scoring (100 points)

    • Crawlability (25): robots.txt, llms.txt, UA-specific blocks
    • Rendering (25): SSR/CSR detection, visible text analysis
    • On-page (20): Schema, meta tags, answer-first content
    • Share-of-Voice (20): Live AI search queries
    • Authority (10): Wikipedia, press coverage
  2. Technical Innovations

    • Live UA spoofing catches WAF blocks invisible to standard tools
    • Separates automated checks (Python) from interpretation (Claude)
    • Hard blocker detection zeroes dimensions + triggers URGENT flags
  3. Experiment Findings (pigu.lt, varle.lt)

    • pigu.lt: WAF blocking AI bots on product pages (403 to GPTBot, ClaudeBot)
    • varle.lt: llms.txt misconfigured as redirect chain → 404
    • Both sites have good JSON-LD but access issues block AI agents
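The live UA-spoofing check above can be sketched as a small comparison: fetch the same URL with a browser User-Agent and with AI-crawler User-Agents, then flag any bot that gets a block status the browser does not. This is an illustrative sketch, not the skill's actual code; the function name and the crawler list are assumptions.

```python
# Flag AI crawlers that are WAF-blocked while a normal browser gets through.
AI_BOT_AGENTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]
BLOCK_STATUSES = {401, 403, 429}

def detect_waf_blocks(status_by_agent: dict[str, int]) -> list[str]:
    """Return AI bots blocked (401/403/429) while the browser UA gets 200."""
    browser_ok = status_by_agent.get("browser") == 200
    return [
        bot for bot in AI_BOT_AGENTS
        if browser_ok and status_by_agent.get(bot) in BLOCK_STATUSES
    ]

# Mirrors the pigu.lt finding: product pages return 403 to AI bots only.
statuses = {"browser": 200, "GPTBot": 403, "ClaudeBot": 403, "PerplexityBot": 200}
print(detect_waf_blocks(statuses))  # -> ['GPTBot', 'ClaudeBot']
```

Standard audit tools miss this because they crawl with a generic UA; the block only appears when you present the AI bots' own User-Agent strings.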

Business value: This fills the Experiments domain (was empty!) and bridges wiki theory to practice. The skill is immediately usable for client AI visibility assessments.

Wiki milestone: First experiment entry.


[2026-04-14] ingest | AI Agent Buying Biases (Columbia/Yale Research)

Ingested Science Says newsletter covering Columbia + Yale working paper on AI agent purchasing behavior.

  • Source: raw/articles/ai-agent-buying-biases-science-says.md
  • Type: Newsletter summarizing academic research (Working Paper, August 2025)
  • Original research: “What is your AI Agent Buying? Evaluation, Biases, Model Dependence and Emerging Applications for Agentic E-Commerce”

Pages created (1):

Pages updated (2):

Key findings extracted:

  1. Keyword Order Has Massive Impact

    • Changing “Floor Lamps for Living Room” → “Office Floor Lamp” increased selection:
      • GPT-5.1: +80.4 percentage points
      • Gemini 2.5 Flash: +52 percentage points
      • Claude Opus 4.5: +41 percentage points
  2. Factor Influence Ranking

    • Keywords in title (highest impact)
    • Number of reviews
    • Product ratings (each +0.1 increase improves selection odds)
    • Positive badges (“Bestseller”, “Recommended”)
    • “Sponsored” label (negative — reduces selection)
  3. Model-Specific Biases

    • Different AI models have different decision patterns
    • GPT-4.1 preferred top-left products; GPT-5.1 did opposite
    • Biases change with model updates
  4. Models Are Improving

    • Failure rates on objective decisions dropped dramatically between generations
    • Claude: 63.7% → 4.3%, GPT: 25.8% → 1%, Gemini: 2.8% → 0%
  5. Bonus: Cialdini’s Principles Work on AI

    • Wharton + Cialdini research: persuasion techniques increased AI compliance 33.3% → 72%

Business value: This is the first quantitative research on optimizing for AI shopping agents — critical for e-commerce clients preparing for agentic commerce. The finding that keyword order can swing selection by 80pp is immediately actionable.

Researchers: Amine Allouah, Omar Besbes (Columbia), Josue D. Figueroa (MyCustomAI), Yash Kanoria (Columbia), Akshit Kumar (Yale/Columbia)


[2026-04-14] ingest | Claude Skills — The Complete Guide

Ingested Anthropic’s official guide to building Skills for Claude.

  • Source: raw/articles/The-Complete-Guide-to-Building-Skill-for-Claude.pdf
  • Type: Official Anthropic documentation (32 pages)

Pages created (2):

Pages updated (1):

  • tools/mcp — Added “MCP + Skills” section explaining the kitchen analogy

Key concepts extracted:

  1. Skills = Reusable AI Recipes

    • Folders containing SKILL.md with YAML frontmatter
    • Teach Claude once, benefit every time
    • Portable across Claude.ai, Claude Code, and API
  2. The Kitchen Analogy

    • MCP = professional kitchen (access to tools)
    • Skills = recipes (how to use tools effectively)
    • Together = complete solution for users
  3. Three Skill Categories

    • Document & Asset Creation (consistent outputs)
    • Workflow Automation (multi-step processes)
    • MCP Enhancement (workflow guidance for tools)
  4. Progressive Disclosure Design

    • Level 1: YAML frontmatter (always loaded)
    • Level 2: SKILL.md body (when relevant)
    • Level 3: Linked files (on demand)
  5. Five Workflow Patterns

    • Sequential workflow orchestration
    • Multi-MCP coordination
    • Iterative refinement
    • Context-aware tool selection
    • Domain-specific intelligence
  6. Testing Framework

    • Triggering tests (load at right times)
    • Functional tests (correct outputs)
    • Performance comparison (vs baseline)
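Items 1 and 4 above imply a SKILL.md whose YAML frontmatter is always loaded (Level 1) while the body and linked files load on demand. A minimal frontmatter sketch; the skill name and description are invented examples, not from the guide:

```yaml
---
# Level 1: always loaded, so Claude knows when to trigger the skill
name: report-formatter
description: Formats quarterly reports to house style. Use when the user
  asks for a formatted or branded report.
---
```

The Level 2 body (instructions) and Level 3 linked files (checklists, templates) would sit below the closing `---`, loaded only once the skill is relevant.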

Business value: This is the missing piece for MCP integrations — raw tool access isn’t enough, users need workflow guidance. Skills turn MCP connections into complete solutions.


[2026-04-14] ingest | Strategic AI Infrastructure

Deep research into Claude as strategic infrastructure — Cowork, MCP, departmental implementations, and enterprise case studies.

Sources analyzed:

  • Anthropic product pages (Cowork, MCP)
  • Model Context Protocol documentation
  • Ad Age: How 4 Ad Agencies Use Claude Enterprise Tools
  • Anthropic customer stories (Intercom, Binti)
  • HubSpot/Xero partnership announcements

Pages created (5):

Key insights extracted:

  • Cowork tiered hierarchy: Connectors first, desktop control as fallback
  • Skills: Persistent instructions encoding organizational knowledge
  • MCP ecosystem: HubSpot, Salesforce, Xero, Notion all connected
  • Departmental results: Marketing 4x output, Sales 21% reply rates, Finance 80% reduction, Support 86% resolution

Real-world statistics:

  • Intercom: 86% resolution rate, 40% fewer escalations
  • Binti: 50% documentation time reduction, 47% of US foster care served
  • Brainlabs: Presentation generation from Notion via MCP
  • Synthesia: 87% self-serve support rate with Fin

Business value: This is the “Level 3-4 playbook” — how advanced organizations move Claude from chat assistant to strategic infrastructure.


[2026-04-14] ingest | More Practitioner Frameworks

Second batch of practitioner content — use case discovery, context engineering, and fine-tuning guidance.

Sources analyzed:

Pages created (2):

Pages updated (1):

Key frameworks extracted:

  • TRIPS: Systematic scoring for AI use case prioritization; “Sexy Block” prevents organizations from seeing valuable but unglamorous opportunities
  • Context Engineering: Tool responses ARE prompt engineering; 4-level framework from chunks to faceted landscape; 90% reduction in clarification questions
  • Fine-tuning prerequisite: “Impossible to fine-tune effectively without an eval system”

Business value: TRIPS framework is immediately usable in client discovery sessions. Context engineering explains why enterprise RAG systems underperform.


[2026-04-14] ingest | Academic & Practitioner Sources

Ingested high-quality practitioner content from Almost Timely (Christopher Penn), Hamel Husain, and Jason Liu.

Sources analyzed:

Pages created (2):

Pages updated (1):

  • glossary/rag — Added 6-stage systematic improvement methodology, practical insights

Key frameworks extracted:

  • Five Levels: 75% stuck at Level 1; jump to Level 3 is psychological not technical; “$6-9M project in 6-9 hours for $6-9”
  • Eval Hierarchy: Unit tests → Human/Model evaluation → A/B testing; “unsuccessful products almost always fail to build robust evaluation systems”
  • RAG Improvement: Full-text search often matches embeddings at 10x speed; baseline first, optimize second

Business value: These frameworks provide concrete maturity models for client conversations about AI adoption and implementation quality.


[2026-04-14] ingest | HBR + Fortune Deep Dive

Enriched existing agentic pages with full statistics from primary sources.

Sources analyzed:

Pages updated (2):

Key new statistics added:

  • 60% of US shoppers expect agentic AI within 12 months (Kearney)
  • 40% MoM growth in Target’s ChatGPT traffic
  • 35% of Walmart’s referrals from ChatGPT
  • 14% of US consumers prefer ChatGPT over Google
  • 90% of ChatGPT sources aren’t in Google’s top 20
  • 78.3% brand choice variation from prompt wording (Carnegie Mellon)
  • 94% agentic visibility increase case study

Concept introduced: UX → AX (User Experience to Agent Experience)


[2026-04-14] ingest | McKinsey Agentic Commerce Report

Ingested the comprehensive McKinsey report on agentic commerce from local file.

Source: McKinsey — “The agentic commerce opportunity: How AI agents are ushering in a new era”

Pages created (1):

Key insights extracted:

  • $1T US B2C by 2030, $3-5T globally
  • Three interaction models: agent-to-site, agent-to-agent, brokered
  • Six domains merchants must address (engagement, loyalty, commerce, payments, in-store, fulfillment)
  • Seven new revenue models as ad revenue declines
  • Trust as foundational infrastructure, not just sentiment
  • Three risk categories: systemic, accountability, data sovereignty

Business value: This is the definitive strategic framework for agentic commerce — essential for client conversations about e-commerce transformation.


[2026-04-14] ingest | Agentic Search Extensions

Extended the agentic search topic with additional research from HBR, Fortune, and Search Engine Land.

Sources analyzed:

  • Harvard Business Review: “Preparing Your Brand for Agentic AI”
  • Fortune: “AI agents are already driving 10% of revenue for some brands”
  • Search Engine Land: “AAO: Assistive Agent Optimization”

Pages created (2):

Pages updated (1):

Key new insights:

  • “Share of model” is the new market share metric (pioneered by Pernod Ricard)
  • Only 12% URL overlap between AI citations and Google top 10
  • Three brand interaction modes: brand agents, consumer agents, full AI intermediation
  • 72% of consumers demand transparency about AI vs human interactions
  • Strategic Text Sequences (STS) and llms.txt are emerging optimization tools

[2026-04-14] ingest | Bulk Ingestion from Priority Sources

Analyzed and ingested content from 5 priority sources identified for wiki expansion: Marketing AI Institute, Semrush AI Blog, Search Engine Land, Zapier Blog, and MarketingProfs.

Articles analyzed (9 total):

  • Semrush: AI Visibility, AI SEO Tools (18 tools), Agentic Search, Does AI Content Rank (42K study)
  • Zapier: Agentic AI vs Generative AI, Cognitive Automation
  • Search Engine Land: LLM Nudges
  • MarketingProfs: AI Video Marketing (Sora/Meta Vibes)
  • Marketing AI Institute: AI Agents for Agencies

Pages created (6):

Pages updated (1):

  • seo/ai-seo-content — Added “Does AI Content Rank” study findings (42K posts analyzed)

Key insights:

  • Agentic search is an emerging discipline — AI agents filter brands before humans see them
  • AI visibility is distinct from traditional SEO (only 44% overlap with Google rankings)
  • LLM nudges reveal AI assumptions: 45% focus on budget/deals
  • AI content can rank but human expertise determines top positions
  • Authenticity beats synthetic in AI video marketing

Priority sources saved to: private/sources-to-ingest.md


[2026-04-10] create | Personal AI Advisor Exploration + Source Tracking

Started a new exploration thread based on observation: ALL professionals struggle with information overload, task management, and decision fatigue. AI as a “personal advisor” is a different angle from enterprise tools.

Pages created:

Private files created:

  • private/sources-to-ingest.md — Tracking checklist for filling wiki gaps

Wiki gaps identified (CMO/Director perspective):

  • Competitor analysis section (empty)
  • Marketing content thin
  • No tool comparisons for marketing use cases
  • No ROI/business case content

New thread: Personal AI Advisor could become a Primores consulting angle — helping individuals (not just companies) set up AI productivity systems.


[2026-04-10] ingest | Advisor Strategy (Anthropic Blog)

Key concepts extracted:

  • Advisor Strategy = inverted multi-agent pattern
  • Cheap executor (Sonnet/Haiku) consults expensive advisor (Opus) only when stuck
  • Benchmark results: Sonnet + Opus = +2.7pp performance, -11.9% cost
  • Haiku + Opus = 2x performance, 85% cheaper than Sonnet alone
  • Built into Claude API as advisor_20260301 tool type
  • max_uses parameter for cost control
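Built only from the names this entry mentions (advisor_20260301, max_uses, the cheap-executor/expensive-advisor split), a hypothetical request payload might look like the sketch below. Every field except the two named parameters is an illustrative guess, not documented API:

```python
# Hypothetical payload: a cheap executor model with an expensive advisor
# tool it may consult at most 3 times. Field layout is assumed, not official.
payload = {
    "model": "claude-sonnet",           # cheap executor runs the task
    "tools": [{
        "type": "advisor_20260301",     # tool type named in the source
        "model": "claude-opus",         # expensive advisor, consulted when stuck
        "max_uses": 3,                  # cost-control cap from the source
    }],
    "messages": [{"role": "user", "content": "Refactor the billing module."}],
}
print(payload["tools"][0]["max_uses"])  # -> 3
```

The shape matches the benchmark logic above: most turns run on the cheap model, and only hard decisions spend advisor calls.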

Pages created:

Pages updated:

Key insight: This inverts the typical “smart orchestrator, dumb workers” pattern. Most agentic subtasks don’t need the smartest model — only the hard decisions do. This is a significant cost optimization pattern.


[2026-04-10] ingest | Multi-Agent Patterns (OpenClaw + Hermes)

  • Source: raw/articles/two-agents-openclaw-hermes.md
  • Original language: Russian (translated to English)
  • Original URL: pimenov.ai
  • Type: Architecture patterns / practical guide

Key concepts extracted:

  • Dispatcher + Deep Worker pattern (one agent for breadth, one for depth)
  • Six practical implementations: analysis, content pipeline, meeting prep, monitoring, code review, content marketing
  • Self-learning agents that improve over time
  • Personal model fine-tuning after ~1 month of usage
  • Implementation priority order (start with content pipeline)
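The Dispatcher + Deep Worker split can be sketched in a few lines: a cheap breadth pass triages every item, and only flagged items reach the expensive depth pass. All names and the keyword heuristic are illustrative assumptions:

```python
# Dispatcher (breadth): cheap triage over everything.
def dispatcher(item: str) -> bool:
    """Flag items that warrant expensive deep analysis."""
    return any(kw in item.lower() for kw in ("pricing", "contract", "outage"))

# Deep worker (depth): expensive analysis, run only on flagged items.
def deep_worker(item: str) -> str:
    return f"deep analysis: {item}"

inbox = ["newsletter draft", "outage report for EU region", "lunch menu"]
deep_queue = [item for item in inbox if dispatcher(item)]
print(deep_queue)  # -> ['outage report for EU region']
```

In practice the dispatcher would be a small/fast model and the deep worker a large one, but the routing shape is the same.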

Pages created:

Pages updated:

Key insight: “Two agents complementing each other beat one agent trying to do everything.” This validates the multi-agent approach in Managed Agents but shows it’s a broader pattern applicable to any tooling.


[2026-04-10] create | Expanded Managed Agents Knowledge

Building on the playbook ingest, created three derived pages:

Pages created:

Pages updated with cross-links:

Key insight: The comparison page and break-even analysis are valuable for client conversations. “When should we build vs. buy?” is a common question.


[2026-04-10] ingest | Claude Managed Agents Playbook

  • Source: raw/articles/claude-managed-agents-playbook.md
  • Original language: Russian (translated to English)
  • Original source: Telegram @prompt_design (translation of Anthropic docs)
  • Type: Technical playbook / API documentation

Key concepts extracted:

  • Claude Managed Agents = ready infrastructure (no custom orchestration needed)
  • Four key concepts: Agent (config) → Environment (container) → Session (instance) → Events (stream)
  • Built-in tools: bash, read, write, edit, glob, grep, web_fetch, web_search
  • Permission system: always_allow vs always_ask for production safety
  • Usage patterns: event-triggered, scheduled, fire-and-forget, long-horizon
  • Outcomes (research preview): grader-based completion criteria with iteration
  • Multi-agent coordination (research preview): one-level delegation
  • Architecture: “Brain” (Claude) + “Hands” (sandboxes) + “Session” (journal)
  • Pricing: $0.08/hour + token costs
  • Companies using it: Notion, Rakuten, Asana, Sentry, Vibecode
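The always_allow / always_ask split maps naturally onto the built-in tool list above: read-only tools run freely, mutating tools require confirmation. A minimal sketch of how such a config could be enforced (the gate function is an assumption, not the playbook's code):

```python
# Permission config over the built-in tools listed in this entry.
PERMISSIONS = {
    "read": "always_allow", "glob": "always_allow", "grep": "always_allow",
    "bash": "always_ask", "write": "always_ask", "edit": "always_ask",
}

def gate(tool: str, confirm) -> bool:
    """Allow read-only tools; route mutating tools through confirmation."""
    mode = PERMISSIONS.get(tool, "always_ask")  # unknown tools default to ask
    return mode == "always_allow" or confirm(tool)

print(gate("read", confirm=lambda t: False))   # -> True
print(gate("bash", confirm=lambda t: False))   # -> False
```

Defaulting unknown tools to always_ask is the production-safety posture the entry describes.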

Pages created:

Pages updated:

  • index — Added new tool, updated stats

Cross-references created:

Business value: This is Anthropic’s official infrastructure for production AI agents — key for any enterprise deployment discussion. The permission system and outcomes features are particularly relevant for client implementations.


[2026-04-10] ingest | Telegram Community Wiki Bot (Case Study)

  • Source: raw/cases/telegram-community-knowledge-bot.md
  • Original language: Russian (translated to English)
  • Type: Real-world case study

Key concepts extracted:

  • LLM Wiki pattern validated in production
  • Multi-source ingestion: chat messages + YouTube transcripts
  • Zettelkasten methodology for knowledge structure
  • Anti-recursion pattern (mark bot messages to skip indexing)
  • Access control with separate knowledge bases
  • Clickable profile links in answers
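The anti-recursion pattern above is simple to sketch: tag everything the bot posts, and skip tagged messages at indexing time so the bot never ingests its own answers. The tag constant and message shape are illustrative assumptions:

```python
BOT_TAG = "[bot]"

def post_as_bot(text: str) -> dict:
    """Every bot message carries a marker the indexer can recognize."""
    return {"text": text, "author": f"{BOT_TAG} wiki-bot"}

def should_index(message: dict) -> bool:
    """Skip bot messages so answers never feed back into the knowledge base."""
    return not message["author"].startswith(BOT_TAG)

human = {"text": "How do we deploy?", "author": "alice"}
bot = post_as_bot("See the deploy page.")
print([m["text"] for m in (human, bot) if should_index(m)])  # -> ['How do we deploy?']
```

Without this guard, each answer the bot posts becomes a new "source", and the wiki slowly fills with paraphrases of itself.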

Pages created:

Pages updated:

  • index — Added Case Studies section

Cross-references created:

Key insight: This proves the wiki pattern works at scale in real communities — “A wiki that writes itself.”


[2026-04-10] ingest | Product Article Generator System

  • Source: raw/articles/product-article-generator-system.md
  • Original location: Primores internal tool (/primores/article-generator/)
  • Type: Internal tool documentation + methodology

Key concepts extracted:

  • GEO/AEO (Generative Engine Optimization) — optimizing for AI search
  • AI-SEO content strategy — what gets cited by AI
  • Human writing rules — avoiding AI tell-signs
  • Schema markup for AI discoverability
  • Self-contained FAQ answers principle
  • “Honest assessment increases citation” insight
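Two of the concepts above (schema markup for AI discoverability, self-contained FAQ answers) combine naturally in FAQPage JSON-LD. A sketch with an invented product and question; note the answer restates its subject so it stands alone when an AI engine quotes it out of context:

```python
import json

# FAQPage JSON-LD per schema.org; the product and question are made up.
faq_jsonld = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Does the X100 kettle work on induction stoves?",
        "acceptedAnswer": {
            "@type": "Answer",
            # Self-contained: names the product instead of saying "yes, it does".
            "text": "The X100 kettle works on induction, gas, and electric stoves.",
        },
    }],
}
print(json.dumps(faq_jsonld, indent=2))
```

The embedded markup is what AI crawlers parse; the self-contained phrasing is what makes the extracted answer citable on its own.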

Pages created:

Pages updated:

  • index — Added new pages, updated stats

Business value: This documents a Primores service offering — can be referenced in client conversations about AI-SEO capabilities.


[2026-04-10] ingest | 12 Techniques for AI Agents

  • Source: raw/articles/12-techniques-ai-agents-practical-tools.md
  • Original language: Russian (translated to English)
  • Original URL: pimenov.ai

Key concepts extracted:

  • “AI agent isn’t a magic button — requires organization”
  • Context separation (different threads for different topics)
  • Model specialization (route tasks to appropriate models)
  • Sub-agent delegation pattern
  • Six-layer security model
  • Self-syncing documentation
  • Subscription vs API economics
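The subscription-vs-API economics point above reduces to a break-even calculation: a flat subscription wins once monthly token volume crosses the point where per-token billing would cost more. All prices below are made-up placeholders, not figures from the source:

```python
# Placeholder prices for illustration only.
SUBSCRIPTION_USD = 20.00     # flat monthly fee
API_USD_PER_MTOK = 3.00      # blended cost per million tokens

def breakeven_mtok() -> float:
    """Monthly volume (millions of tokens) where both plans cost the same."""
    return SUBSCRIPTION_USD / API_USD_PER_MTOK

def cheaper_plan(monthly_mtok: float) -> str:
    return "subscription" if monthly_mtok > breakeven_mtok() else "api"

print(round(breakeven_mtok(), 2))  # -> 6.67
print(cheaper_plan(2.0))           # -> api
print(cheaper_plan(10.0))          # -> subscription
```

Light users stay on pay-per-use; heavy agent workloads, which burn tokens continuously, cross the break-even quickly.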

Pages created:

Pages updated:

  • index — Added new pages, updated stats

Cross-references created:


[2026-04-10] ingest | LLM Wiki Pattern

  • Source: raw/articles/llm-wiki-pattern.md
  • Key concepts extracted:
    • RAG vs. Wiki pattern (retrieve-and-forget vs. compile-and-maintain)
    • Three-layer architecture (raw, wiki, schema)
    • Three operations (ingest, query, lint)
    • Compounding knowledge principle
    • Memex historical connection (Vannevar Bush, 1945)

Pages created:

Pages updated:

  • index — Added new pages, added stats section

This source is meta — it describes the very pattern this wiki implements!


[2026-04-10] create | Added maintenance protocol

  • Created maintenance — comprehensive wiki health and growth protocol
  • Updated CLAUDE.md with:
    • MAINTAIN workflow (daily/weekly/monthly/quarterly cadences)
    • GROW workflow (proactive wiki development)
    • Session start/end checklists
    • Growth mindset principles
    • Red flag warnings
  • Updated index to include maintenance page

The wiki now has built-in growth mechanisms!


[2026-04-10] create | Wiki initialized

Next steps:

  • Ingest first sources
  • Build out glossary with foundational terms
  • Start exploring key questions