A Player Labs × Ecommerce Equation

Auto Throughput

Autonomous growth experiments for ecommerce. The system runs the loop — hypothesise, create, launch, measure, decide, next experiment — so the operator focuses on strategy, not manual testing.

March 2026 · Collaborative build proposal
H · Hypothesise · System proposes the next best test based on prior evidence
C · Create · AI generates the asset — ad, page, offer, campaign structure
L · Launch · Variant goes live with guardrails and contamination checks
M · Measure · Ad platform + store data reconciled against a profit objective
D · Decide · Keep, discard, scale, or rerun — with confidence scores
N · Next · Decision memory feeds the next hypothesis. Loop compounds.

What Karpathy's auto-research did for papers, Auto Throughput does for ecommerce growth. The system doesn't report on experiments — it runs them.
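The six-step loop can be sketched as a simple control cycle. Everything below is a hypothetical sketch: the class names, the decision labels, and the naive "iterate on the last winner" ranking are placeholders, not the engine's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    hypothesis: str
    asset: str = ""                               # Create: generated ad / page / offer
    metrics: dict = field(default_factory=dict)   # Measure: reconciled platform + store data
    decision: str = ""                            # Decide: keep / discard / scale / rerun

@dataclass
class DecisionMemory:
    history: list = field(default_factory=list)

    def record(self, exp: Experiment) -> None:
        self.history.append(exp)

    def next_hypothesis(self) -> str:
        # Next: naive placeholder. The real engine would rank candidate
        # tests against all prior evidence, not just the last winner.
        kept = [e for e in self.history if e.decision in ("keep", "scale")]
        return f"Iterate on: {kept[-1].hypothesis}" if kept else "Seed hypothesis"

def run_cycle(memory, create, launch, measure, decide):
    """One pass through H, C, L, M, D, N. The callables stand in for the
    AI generation, launch, and measurement subsystems."""
    exp = Experiment(hypothesis=memory.next_hypothesis())  # Hypothesise
    exp.asset = create(exp.hypothesis)                     # Create
    launch(exp.asset)                                      # Launch
    exp.metrics = measure(exp.asset)                       # Measure
    exp.decision = decide(exp.metrics)                     # Decide
    memory.record(exp)                                     # Next: feed decision memory
    return exp
```

The point of the structure: `DecisionMemory` is the only state that survives between cycles, which is what makes the loop compound rather than restart.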

01
The Growth Equation

Four independent loops, one autonomous engine

Each loop optimises a different growth lever. Auto Throughput runs the experiment cycle across all four — and the shared decision layer makes every loop smarter over time.

Loop A

Message-market fit

Are we saying the right thing to the right people?

H "Does a profit-clarity angle outperform lifestyle?"
C AI generates ad copy, image, or video variant
L Run as ad variant with spend cap
M Reach, CPP, CTR, qualified traffic
D Scale the angle or kill it
N Insight feeds next creative hypothesis

Loop B

Economic fit

Does the offer make money after all costs?

H "Does a 15% bundle discount lift gross profit?"
C Configure offer / bundle / pricing in store
L Run offer variant with margin tracking
M AOV, MER, contribution margin per visitor
D Discard if margin-negative, scale if not
N Pricing insight feeds next offer test

Loop C

Page conversion fit

Does the landing experience convert traffic into buyers?

H "Does a proof-first PDP beat feature-first?"
C Build page variant with new layout and copy
L Split traffic to page variants
M LPV to ATC, LPV to purchase, AOV
D Keep winning layout or rerun with more sample
N Page pattern stored for future tests

Loop D

Distribution fit

Are we spending in the right places at the right scale?

H "Does broad targeting beat lookalike at this spend?"
C Set up campaign structure, audiences, bids via API
L Run audience split with budget allocation
M CPP, Reach, MER (ROAS for high-SKU catalogues)
D Shift budget to winner or extend test
N Audience pattern stored for scaling decisions

Auto Throughput Engine

Experiment registry
Primary profit objective
Confidence + effect size
Decision memory
02
How It Works

A fully autonomous system connected to the APIs, grinding 24/7

Not a dashboard. An engine.

Auto Throughput connects directly to Meta and the store. It doesn't wait for someone to export a CSV and upload it. It pulls live data, runs classification, makes decisions, and launches the next experiment — autonomously, continuously.

Connected to
Meta Ads API
Shopify / Store
Analytics

EE already has the logic

MOAC classifies ads into SUPERHERO / HERO / SIDEKICK / CIVILIAN / VILLAIN tiers based on CPP, ROAS, and frequency. Today that's a manual CSV upload. Auto Throughput runs that same logic — and all four loops — continuously against live data.
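As an illustration of the classification step, a MOAC-style tiering function might look like the sketch below. The thresholds (`target_cpp`, `target_roas`, the frequency cutoff) are invented for this sketch and are not EE's actual cutoffs.

```python
def classify_ad(cpp: float, roas: float, frequency: float,
                target_cpp: float = 30.0, target_roas: float = 2.0) -> str:
    """Illustrative MOAC-style tiering on CPP, ROAS, and frequency.
    All thresholds are hypothetical placeholders."""
    if roas >= 2 * target_roas and cpp <= 0.5 * target_cpp:
        return "SUPERHERO"      # far beats target on both cost and return
    if roas >= target_roas and cpp <= target_cpp:
        return "HERO"           # hits target
    if roas >= 0.75 * target_roas:
        return "SIDEKICK"       # near-miss, worth iterating
    if frequency > 4:
        return "VILLAIN"        # underperforming and burning the audience
    return "CIVILIAN"           # underperforming, low exposure
```

Run continuously against live API data, this replaces the manual CSV step: every pull re-tiers every active ad.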

Pre-mortem: what's actually hard

Loop A — Message-market fit

Classification is solved (MOAC). The hard part is generating the next creative hypothesis without falling into creative fatigue. Need diversity pressure — the system can't just recycle what worked.

Loop B — Economic fit

A bad offer test can tank margin before the system has enough sample to catch it. Need hard spend caps and loss limits per experiment. Price anchoring bleeds outside the test — customers screenshot discounts.
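A minimal sketch of that kill switch, assuming the engine checks guardrails on every data pull; the parameter names and return shape are hypothetical.

```python
def breaches_guardrails(spend: float, contribution_margin: float,
                        spend_cap: float, max_loss: float) -> tuple[bool, str]:
    """Per-experiment kill switch, evaluated on each data pull.
    Returns (should_halt, reason). Names are illustrative only."""
    if spend >= spend_cap:
        return True, "spend cap hit"
    if contribution_margin <= -max_loss:
        return True, "loss limit hit"
    return False, ""
```

The key property is that the check is absolute, not statistical: an offer test halts on raw losses even before there is enough sample to call the experiment.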

Loop C — Page conversion fit

Most stores don't have enough traffic for fast statistical significance on page tests. Checkout path changes are highest-leverage but highest-risk — a broken checkout loses revenue now, not in a report next week.
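The traffic problem can be made concrete with a textbook two-proportion z-test (the formula is standard statistics; the engine's actual decision rule is not specified here). Even a 5.0% to 7.0% purchase-rate lift does not reach p < 0.05 at 1,000 visitors per variant.

```python
from math import sqrt
from statistics import NormalDist

def conversion_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test for a page split (e.g. LPV to purchase).
    Returns (absolute lift, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# 50/1000 vs 70/1000: a 40% relative lift, still p ≈ 0.06
lift, p = conversion_test(50, 1000, 70, 1000)
```

This is why page tests either need long run times, larger traffic pools, or a decision rule that reports confidence and effect size rather than a hard significance gate.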

Loop D — Distribution fit

You're testing against Meta's own algorithm, which is already optimising. Budget shifts trigger learning phase resets. Audience overlap between tests contaminates results — the engine needs isolation protocols.
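One form such an isolation protocol could take, sketched with hypothetical names and an invented 5% threshold: refuse to schedule two distribution tests whose audience seed sets overlap too much.

```python
def can_run_concurrently(audience_a: set, audience_b: set,
                         max_overlap: float = 0.05) -> bool:
    """Illustrative isolation check: block two concurrent audience tests
    if their seed sets overlap beyond a threshold. The 5% cutoff and the
    overlap measure (share of the smaller set) are placeholders."""
    if not audience_a or not audience_b:
        return True
    overlap = len(audience_a & audience_b) / min(len(audience_a), len(audience_b))
    return overlap <= max_overlap
```

In practice the seed sets would come from the ad platform's audience definitions; the registry would run this check before launching each new distribution experiment.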

03
The Collaborative Build

APL is building this. EE is the ideal partner to build it with.

A Player Labs brings

The autonomous system

  • AI experiment engine — hypothesis generation, launch orchestration, statistical decision-making
  • Store + ad platform data reconciliation
  • Decision memory that compounds across experiments
  • Engineering, infrastructure, and product development

APL is building Auto Throughput as a core product. This happens regardless — but the right partner makes it 10x better.

Ecommerce Equation brings

The domain + distribution

  • Growth frameworks already battle-tested across hundreds of brands
  • Member base as early design partners and first users
  • Real experiment data across verticals, spend levels, and strategies
  • The benchmark moat — anonymised results become reusable intelligence

EE's frameworks are the policy layer. Auto Throughput is the engine that improves them through evidence.

Now · First wedge: Creative testing — one loop, one profit objective, decision confidence

Next: Four-loop OS — message, economics, page, distribution in one workspace

Then: AI sequencing — system proposes what to test next

Moat: Benchmarks — member data becomes reusable market intelligence

04