Message-market fit
Are we saying the right thing to the right people?
Autonomous growth experiments for ecommerce. The system runs the loop — hypothesise, create, launch, measure, decide, next experiment — so the operator focuses on strategy, not manual testing.
What Karpathy's auto-research did for papers, Auto Throughput does for ecommerce growth. The system doesn't report on experiments — it runs them.
Each loop optimises a different growth lever. Auto Throughput runs the experiment cycle across all four — and the shared decision layer makes every loop smarter over time.
Message — Are we saying the right thing to the right people?
Page — Does the landing experience convert traffic into buyers?
Economics — Does the offer make money after all costs?
Distribution — Are we spending in the right places at the right scale?
Auto Throughput connects directly to Meta and the store. It doesn't wait for someone to export a CSV and upload it. It pulls live data, runs classification, makes decisions, and launches the next experiment — autonomously, continuously.
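That pull → classify → decide → launch cycle can be sketched as a single pass that a scheduler runs continuously. Every name below (the connector callables, the decision function) is a hypothetical stand-in for illustration, not the product's actual API:

```python
from typing import Callable, Iterable

def run_cycle(pull_ads: Callable[[], list],
              pull_orders: Callable[[], list],
              decide: Callable[[list, list], Iterable],
              launch: Callable[[object], None]) -> int:
    """One pass of the autonomous loop: pull live data, decide, launch.

    A scheduler calls this on an interval (e.g. hourly); no CSV export
    or human upload sits between measurement and the next experiment.
    """
    ads = pull_ads()          # live metrics from the Meta connection
    orders = pull_orders()    # live revenue data from the store
    decisions = list(decide(ads, orders))
    for decision in decisions:
        launch(decision)      # next experiment goes live immediately
    return len(decisions)
```

The point of the structure is that "measure" and "act" are one function call apart, which is what makes the loop continuous rather than report-driven.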
MOAC classifies ads into SUPERHERO / HERO / SIDEKICK / CIVILIAN / VILLAIN tiers based on CPP, ROAS, and frequency. Today that's a manual CSV upload. Auto Throughput runs that same logic — and all four loops — continuously against live data.
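As a sketch, the tiering logic might look like the following. The thresholds and the fatigue cutoff are illustrative assumptions for this example, not MOAC's actual cutoffs:

```python
from dataclasses import dataclass

@dataclass
class AdMetrics:
    cpp: float        # cost per purchase
    roas: float       # return on ad spend
    frequency: float  # average impressions per user

def classify(ad: AdMetrics, target_cpp: float, breakeven_roas: float) -> str:
    """Map an ad's CPP / ROAS / frequency onto a MOAC-style tier.

    Illustrative rules only: unprofitable -> VILLAIN; beats target CPP
    with strong ROAS and no fatigue -> SUPERHERO; the middle tiers
    degrade as efficiency or freshness drops.
    """
    fatigued = ad.frequency > 4.0            # assumed fatigue cutoff
    profitable = ad.roas >= breakeven_roas
    efficient = ad.cpp <= target_cpp

    if not profitable:
        return "VILLAIN"
    if efficient and ad.roas >= 2 * breakeven_roas and not fatigued:
        return "SUPERHERO"
    if efficient and not fatigued:
        return "HERO"
    if efficient or not fatigued:
        return "SIDEKICK"
    return "CIVILIAN"
```

Running this against live data instead of a CSV upload is purely a plumbing change; the classification rules themselves stay the same.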
Classification is solved (MOAC). The hard part is generating the next creative hypothesis without falling into creative fatigue: the engine needs diversity pressure so it doesn't just recycle what already worked.
A bad offer test can tank margin before the system has enough sample to catch it, so every experiment needs hard spend caps and loss limits. Price anchoring also bleeds outside the test: customers screenshot discounts.
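A minimal version of those guardrails, assuming the system tracks spend and gross margin per experiment (the class name and the limits are illustrative, not part of the product):

```python
class ExperimentGuardrail:
    """Hard spend cap and loss limit for a single experiment."""

    def __init__(self, max_spend: float, max_loss: float):
        self.max_spend = max_spend   # hard cap on total spend
        self.max_loss = max_loss     # hard cap on cumulative negative margin
        self.spend = 0.0
        self.margin = 0.0

    def record(self, spend: float, margin: float) -> None:
        """Accumulate spend and gross margin as results come in."""
        self.spend += spend
        self.margin += margin

    def should_kill(self) -> bool:
        # Kill regardless of statistical significance: the cap protects
        # margin while the sample is still too small to decide.
        return self.spend > self.max_spend or self.margin < -self.max_loss
```

The key design choice is that the kill check ignores significance entirely; it is a financial circuit breaker, not a statistical decision.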
Most stores don't have enough traffic to reach statistical significance quickly on page tests. Checkout path changes are the highest-leverage but also the highest-risk: a broken checkout loses revenue now, not in a report next week.
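The traffic problem can be made concrete with a standard two-proportion power calculation (95% confidence, 80% power by default); the baseline conversion rate and lift below are hypothetical numbers chosen for illustration:

```python
import math

def sample_size_per_arm(p_base: float, lift: float,
                        z_alpha: float = 1.96, z_beta: float = 0.8416) -> int:
    """Approximate visitors needed per arm to detect an absolute lift
    in conversion rate, via the two-proportion normal approximation."""
    p_variant = p_base + lift
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    n = (z_alpha + z_beta) ** 2 * variance / lift ** 2
    return math.ceil(n)

# A store converting at 2% that wants to detect a 0.5-point lift:
n = sample_size_per_arm(0.02, 0.005)
```

At a 2% baseline, detecting a half-point lift needs on the order of 14,000 visitors per arm, which is exactly the volume many stores can't supply on a weekly cadence.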
You're testing against Meta's own algorithm, which is already optimising. Budget shifts trigger learning phase resets. Audience overlap between tests contaminates results — the engine needs isolation protocols.
APL is building Auto Throughput as a core product. This happens regardless — but the right partner makes it 10x better.
EE's frameworks are the policy layer. Auto Throughput is the engine that improves them through evidence.
Creative testing — one loop, one profit objective, decision confidence
Four-loop OS — message, economics, page, distribution in one workspace
AI sequencing — system proposes what to test next
Benchmarks — member data becomes reusable market intelligence