
Use when optimizing conversion rates, designing A/B tests, or improving landing pages. Covers A/B testing methodology, landing page optimization, form design, statistical significance, funnel analysis, and CRO prioritization frameworks.

$ npx skills add vasilyu1983/AI-Agents-public --skill marketing-cro

CRO — CONVERSION OPTIMIZATION OS (OPERATIONAL)

Built as a no-fluff execution skill for systematic conversion rate optimization.

Structure: Core CRO fundamentals first. Advanced testing in dedicated sections. AI/ML optimization in clearly labeled "Optional: AI / Automation" sections.


Modern Best Practices (January 2026)


When to Use This Skill

  • Landing page optimization: Hero, CTA, proof, form optimization
  • A/B testing: Hypothesis design, sample size, statistical significance
  • Funnel analysis: Drop-off identification, micro-conversion mapping
  • Form optimization: Field reduction, multi-step forms, friction removal
  • Trust/credibility: Social proof, security signals, guarantees

When NOT to Use


Expert: CRO Mental Model (Quick Calibration)

Use this to avoid local wins / global losses.

  • CRO: Increase the rate of valuable commitments (purchase, qualified lead, activation) while protecting business outcomes (revenue, margin, LTV, support load).
  • UX optimization: Reduce friction/errors so users can do what they already intend; good UX does not guarantee better conversions.
  • Funnel optimization: Optimize the system across steps and handoffs (traffic quality → intent match → page → form/checkout → sales/onboarding → retention).
  • Experimentation: A causal learning method; not every decision belongs in a test.

Do not delegate these to A/B tests (even with infinite traffic): legal/compliance/ethics, dark patterns, misleading claims, and irreversible brand trust decisions.


Core: CRO Framework

The CRO Process

1. ANALYZE → Identify conversion problems (data + qualitative)
2. HYPOTHESIZE → Form testable hypotheses
3. PRIORITIZE → Score by impact/effort (ICE/PIE)
4. TEST → Run A/B tests with statistical rigor
5. LEARN → Document results, iterate
6. IMPLEMENT → Roll out winners, test next

Conversion Rate Benchmarks

| Page Type | Poor | Average | Good | Great |
|---|---|---|---|---|
| Landing page | <1% | 2-3% | 4-5% | >6% |
| Checkout | <40% | 50-60% | 65-75% | >80% |
| Form completion | <20% | 30-40% | 45-55% | >60% |
| Add to cart | <3% | 5-8% | 9-12% | >15% |

Note: Benchmarks vary significantly by industry. Use as directional only.


Core: Landing Page Optimization

Above-the-Fold Checklist

Every landing page needs these elements visible without scrolling:

| Element | Requirement | Common Issues |
|---|---|---|
| Headline | Clear value proposition | Vague, company-focused |
| Subheadline | Specific benefit or outcome | Missing or weak |
| Hero image/video | Relevant, shows outcome | Stock photos, irrelevant |
| CTA | Prominent, action-oriented | Hidden, generic text |
| Trust signal | Logo strip, rating, or stat | Missing entirely |

Headline Formula

[Outcome] + [Timeframe/Ease] + [Without Pain Point]

Examples:
"Get 10 qualified leads per week without cold calling"
"File your tax return in 15 minutes with expert review"
"Double your email conversions without hiring a copywriter"

CTA Button Best Practices

| Do | Don't |
|---|---|
| "Start Free Trial" | "Submit" |
| "Get My Quote" | "Click Here" |
| "Book My Demo" | "Learn More" (bottom of funnel) |
| "Download the Guide" | "Send" |

CTA Button Optimization:

  • Size: Large enough to tap on mobile (min 44px height)
  • Color: Contrasts with page background
  • Position: Above fold AND after key sections
  • Text: First person ("Get My...") often outperforms second person
  • Whitespace: Use spacing to isolate the primary CTA from competing elements; treat big lift claims as case-dependent and verify in your context

Trust Elements Hierarchy

STRONGEST TRUST SIGNALS (use at least 3):
├─ Customer logos (recognizable brands)
├─ Review score (4.5+ stars with count)
├─ Security badges (SSL, payment, compliance)
├─ Money-back guarantee
└─ Phone number visible

SUPPORTING TRUST SIGNALS:
├─ Customer testimonials (with photo, name, company)
├─ Case study snippets (specific metrics)
├─ "As seen in" media logos
├─ Team photos (for services)
├─ Live chat widget
└─ Physical address (for services)

User-Generated Content (UGC)

UGC often increases conversions in SaaS and e-commerce, but lift magnitude varies widely by category, placement, and traffic intent.

| UGC Type | Placement | Impact |
|---|---|---|
| Customer videos | Hero or below fold | High trust, high engagement |
| Review excerpts | Near CTA | Reduces uncertainty |
| Case study quotes | Consideration section | Builds credibility |
| Community mentions | Footer or social proof bar | Volume signal |

Implementation: Pull from G2, Capterra, or in-app feedback. Verify permissions before use.


Core: Form Optimization

Form Field Rules

| Rule | Why | Impact |
|---|---|---|
| Minimum fields | Every field adds friction | Each extra field often lowers completion (magnitude varies) |
| Email first | Captures partial submissions | +15-30% lead capture |
| Persistent labels | Placeholders disappear, cause errors | +10% completion |
| Single column | Easier flow | +5-10% completion |
| Inline validation | Catch errors early | +22% completion |
| Browser autofill | Reduces typing, fewer errors | +15-20% completion |

2026 Benchmark: Average checkout = 5.1 steps, 11.3 fields (Baymard). Target ≤5 fields for lead gen.

Field Priority (Ask Only What You Need)

| Priority | Field | When Required |
|---|---|---|
| 1 | Email | Always |
| 2 | Name | If personalization needed |
| 3 | Company | B2B only |
| 4 | Phone | Sales-ready leads only |
| 5 | Job title | Enterprise targeting |
| 6+ | Everything else | Gate behind progressive profiling |

Multi-Step Form Pattern

Step 1: Low commitment (email)
├─ "What's your email?"
├─ Progress indicator: 1 of 3
└─ CTA: "Continue"

Step 2: Qualifying info
├─ Company size / Industry
├─ Progress indicator: 2 of 3
└─ CTA: "Almost there"

Step 3: Contact info
├─ Name / Phone (optional)
├─ Progress indicator: 3 of 3
└─ CTA: "Get My [Deliverable]"

Multi-step benefits:

  • Commitment and consistency principle
  • Captures partial data (even if abandoned)
  • Feels less overwhelming
  • Can qualify leads progressively

Core: A/B Testing Methodology

Hypothesis Template

IF we [change/add/remove X]
THEN [metric] will [increase/decrease] by [estimate]
BECAUSE [reasoning based on data/research]

Example:
IF we add customer logos to the hero section
THEN form conversion will increase by 15%
BECAUSE trust signals reduce perceived risk for new visitors

Sample Size Calculator

Minimum sample size formula (simplified):

n = (16 × p × (1-p)) / MDE²

Where:
- n = sample per variant
- p = baseline conversion rate
- MDE = minimum detectable effect, as an absolute difference in conversion rate (baseline × relative lift)

Example:
Baseline CVR: 3% (0.03)
MDE: 20% relative lift (looking for 3.6% or higher) → absolute MDE = 0.03 × 0.20 = 0.006

n = (16 × 0.03 × 0.97) / (0.006)²
n ≈ 12,933 per variant

Total traffic needed: ~26,000 visitors
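The calculation above can be sketched as a small helper (using the same rule-of-thumb factor of 16 for 95% confidence / 80% power):

```python
import math

def sample_size_per_variant(baseline_cvr: float, relative_mde: float) -> int:
    """Visitors needed per variant: n = 16 * p * (1 - p) / MDE_abs^2."""
    absolute_mde = baseline_cvr * relative_mde          # e.g. 0.03 * 0.20 = 0.006
    n = 16 * baseline_cvr * (1 - baseline_cvr) / absolute_mde ** 2
    return math.ceil(n)

# sample_size_per_variant(0.03, 0.20) → 12934 (the ~12,933 worked example, rounded up)
```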

Quick reference:

| Baseline CVR | 10% MDE | 20% MDE | 30% MDE |
|---|---|---|---|
| 1% | 158,400 | 39,600 | 17,600 |
| 3% | 51,700 | 12,900 | 5,750 |
| 5% | 30,400 | 7,600 | 3,380 |
| 10% | 14,400 | 3,600 | 1,600 |

(Values computed from the formula above, rounded; MDE is relative to baseline.)

Per variant. Multiply by 2 for total traffic needed.

Statistical Significance

Requirements for valid test:

  • 95% confidence level (minimum)
  • 80% power (default) unless you have a reason to change it
  • Run for at least 1-2 full business cycles (7-14 days)
  • Don't peek and stop early (increases false positives)
  • Document before test: hypothesis, primary metric, guardrails, sample size, duration
  • Avoid post-hoc slicing; pre-register segments or adjust for multiple comparisons
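For reference, the 95% confidence check can be sketched as a two-sided two-proportion z-test using only the standard library (a simplified fixed-horizon test; real platforms add sequential-testing corrections):

```python
import math
from statistics import NormalDist

def two_proportion_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Ship-worthy at the 95% level only when p < 0.05 AND guardrails hold.
```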

Reality check (expert defaults):

  • Statistical significance does not mean the change is worth shipping (check practical impact + guardrails)
  • Ignore "significant" results when experiment integrity is in doubt (tracking issues, traffic mix shifts, SRM, broken randomization)
  • Stop early only for clear harm (guardrail breaches) or invalidity (instrumentation/assignment problems), not for "early wins"

Experiment Integrity (2026 Default Checks)

  • Assignment sanity: A/A test periodically; check SRM on day 1 and day 3
  • Tracking sanity: confirm event definitions, dedupe, cross-domain, and consent-mode behavior before interpreting results
  • Contamination: avoid showing multiple variants to the same user across devices/sessions; prefer stable IDs when possible
  • Change control: freeze other major changes to the same flow during the test window
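The SRM sanity check above can be sketched as a chi-square goodness-of-fit test against the intended 50/50 split (1 degree of freedom, stdlib only; the p < 0.001 alert threshold is a common convention, an assumption here rather than part of this skill):

```python
import math
from statistics import NormalDist

def srm_p_value(n_a: int, n_b: int) -> float:
    """Chi-square (1 df) p-value that observed counts match a 50/50 split."""
    expected = (n_a + n_b) / 2                          # expected count per arm
    chi2 = (n_a - expected) ** 2 / expected + (n_b - expected) ** 2 / expected
    # For 1 df: P(chi2 > x) = 2 * (1 - Phi(sqrt(x)))
    return 2 * (1 - NormalDist().cdf(math.sqrt(chi2)))

# Common practice: flag sample-ratio mismatch when p < 0.001 and halt the test.
```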

CUPED: Faster Tests via Variance Reduction

CUPED (Controlled-experiment Using Pre-Existing Data) can reduce variance by ~40-60%, allowing tests to reach significance faster.

| Aspect | Details |
|---|---|
| How it works | Uses pre-experiment user behavior to control for inherent variance |
| Lookback window | 1-2 weeks (optimal balance) |
| Limitation | Doesn't work for new users (no history) |
| Platforms | VWO, Optimizely, Statsig, Eppo, PostHog |

When to use: High-traffic sites where test velocity matters. See advanced-testing.md for implementation details.
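Under the hood, the standard CUPED adjustment subtracts the part of the in-experiment metric explained by a pre-experiment covariate (theta = cov(Y, X) / var(X)). A minimal sketch, assuming paired per-user lists and ignoring the missing-history case the table flags:

```python
def cuped_adjust(y: list[float], x_pre: list[float]) -> list[float]:
    """Return variance-reduced metric values: y_i - theta * (x_i - mean(x))."""
    n = len(y)
    mean_y = sum(y) / n
    mean_x = sum(x_pre) / n
    cov = sum((yi - mean_y) * (xi - mean_x) for yi, xi in zip(y, x_pre)) / n
    var = sum((xi - mean_x) ** 2 for xi in x_pre) / n
    theta = cov / var if var else 0.0
    # Centering on mean(x) leaves the metric's mean unchanged, only variance shrinks
    return [yi - theta * (xi - mean_x) for yi, xi in zip(y, x_pre)]
```

The stronger the correlation between pre- and in-experiment behavior, the larger the variance reduction, which is why the 1-2 week lookback matters.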

Test Prioritization: ICE Framework

| Factor | Score (1-10) | Description |
|---|---|---|
| Impact | | How much will this move the metric? |
| Confidence | | How sure are we this will work? |
| Ease | | How easy is this to implement? |
| ICE Score | | (Impact + Confidence + Ease) / 3 |

ICE Score interpretation:

  • 8-10: High priority, test immediately
  • 5-7: Medium priority, add to queue
  • 1-4: Low priority, revisit later or skip
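The scoring and queueing above can be sketched as follows (the test ideas and scores are illustrative, not recommendations from this skill):

```python
def ice_score(impact: int, confidence: int, ease: int) -> float:
    """ICE = (Impact + Confidence + Ease) / 3, each scored 1-10."""
    return (impact + confidence + ease) / 3

# Hypothetical backlog: (name, impact, confidence, ease)
tests = [
    ("Add customer logos to hero", 7, 8, 9),
    ("Rebuild checkout flow", 9, 5, 2),
    ("Shorten lead form to 4 fields", 8, 7, 8),
]

# Highest ICE score first = test first
queue = sorted(tests, key=lambda t: ice_score(*t[1:]), reverse=True)
```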

Core: Funnel Analysis

Funnel Diagnostic Framework

STEP 1: Map your funnel
Page Visit → Key Action → Form Start → Form Complete → Confirmation

STEP 2: Measure drop-off at each step
├─ Page Visit to Key Action: ___% (bounce rate inverse)
├─ Key Action to Form Start: ___%
├─ Form Start to Complete: ___%
└─ Complete to Confirmation: ___%

STEP 3: Identify biggest drop-off
Biggest percentage drop = highest priority to fix

STEP 4: Diagnose root cause
├─ High bounce? → Relevance, load speed, messaging
├─ Low engagement? → Content, CTA visibility
├─ Form abandonment? → Form friction, trust
└─ Checkout drop? → Pricing, shipping, trust

Expert note: The "biggest drop-off" is not always the best target. Confirm it's a defect (not intentional filtering), not a measurement artifact, and not caused upstream (traffic quality / offer mismatch).
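Steps 2-3 can be sketched as a pass-through scan (the visit counts below are illustrative):

```python
# Hypothetical funnel counts per stage
funnel = [
    ("Page Visit", 10_000),
    ("Key Action", 4_000),
    ("Form Start", 1_200),
    ("Form Complete", 600),
    ("Confirmation", 570),
]

# Step-to-step pass-through rate for each adjacent pair of stages
drops = []
for (step_a, n_a), (step_b, n_b) in zip(funnel, funnel[1:]):
    drops.append((f"{step_a} → {step_b}", n_b / n_a))

# Lowest pass-through = biggest drop-off = first candidate to investigate
worst = min(drops, key=lambda d: d[1])
```

Per the expert note above, treat the output as a starting point for diagnosis, not an automatic priority.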

Micro-Conversion Mapping

| Funnel Stage | Micro-Conversions to Track |
|---|---|
| Awareness | Scroll depth, time on page, video views |
| Interest | CTA hover, tab/section views, resource clicks |
| Consideration | Pricing page visit, comparison page, demo video |
| Decision | Form start, add to cart, checkout start |
| Conversion | Form complete, purchase, signup |

Heatmap & Recording Analysis

What to look for:

  • Click heatmaps: Are users clicking CTAs? Clicking non-clickable elements?
  • Scroll maps: Where do users stop scrolling? Key content below fold?
  • Session recordings: Where do users hesitate? Rage clicks? Form confusion?
  • Form analytics: Which fields cause abandonment? Error patterns?

Reference: Triage, Speed, SOPs

For page speed targets, CRO triage decision tree, operating cadence, and anti-patterns, see references/triage-and-ops.md.


Templates

| Template | Purpose |
|---|---|
| landing-audit.md | Full landing page audit |
| ab-test-plan.md | A/B test planning |
| form-audit.md | Form optimization checklist |
| funnel-analysis.md | Funnel diagnostic |
| ice-scoring.md | Test prioritization |

Expert: Hypothesis Quality (Silent Failure Checklist)

A good CRO hypothesis is not "change X to raise CVR." It must specify mechanism and risk.

Strong hypothesis includes:

  • Which constraint it targets: clarity, trust, motivation, friction
  • Who it's for: segment/intent/channel/device (at least one)
  • What moves: primary metric + guardrails (value, quality, downstream)
  • Why it should work: evidence + mechanism (not vibes)

How CRO fails silently (common):

  • Conversions go up but value goes down (lower-quality leads, higher refunds/chargebacks, worse retention)
  • Overall looks flat but a high-value segment is harmed (mix effects hide damage)
  • "Win" is novelty or seasonality; it doesn't repeat

Use assets/ab-test-plan.md to pre-register guardrails and invalidation criteria.


References

| Reference | Description |
|---|---|
| advanced-testing.md | CUPED, sequential testing, MAB |
| ai-automation.md | AI personalization, tool stack |
| triage-and-ops.md | Page speed, triage, SOPs, anti-patterns |

International Markets

This skill uses US/UK defaults. For international CRO:

| Need | See Skill |
|---|---|
| Regional payment methods | marketing-geo-localization |
| Cultural trust signals | marketing-geo-localization |
| Regional CTA adaptation | marketing-geo-localization |
| RTL/localized design | marketing-geo-localization |

Auto-triggers: When your query mentions regional markets or cultural adaptation, both skills load automatically.



Usage Notes (Claude)

  • Stay operational: return checklists, audit results, test plans
  • Always include statistical significance requirements for testing
  • Recommend qualitative research for low-traffic sites
  • Use benchmark ranges, not absolute numbers
  • Do not invent conversion data; state "varies by industry" when uncertain