
Proof

Purpose

The Proof Library is the source of truth for MetaCTO’s credibility.

Its job is to turn positioning into evidence.

The company is moving toward a bigger claim: helping growing companies build production AI systems, improve engineering leverage, and change how work gets done. That claim needs proof at multiple levels:

Company credibility

Engineering credibility

AI/context credibility

Offer-specific proof

Buyer-specific proof

Before/after outcomes

Internal dogfooding proof

Case studies and testimonials

This library should support:

  • homepage copy
  • offer pages
  • sales decks
  • outbound
  • founder posts
  • case studies
  • proposals
  • ads
  • partner conversations
  • agent outputs

The Q2 internal plan already says MetaCTO needs to begin collecting time, quality, and workflow economics proof, then package and expand the proof library over the next year. It also assigns proof and case-study packaging to the Marketing Manager.

Proof Strategy

Core principle

Do not just collect “nice results.” Collect proof that supports the new company truth.

MetaCTO’s proof should show that the company can:

  • build real production systems
  • connect scattered systems and context
  • create usable outputs
  • support reliable actions
  • improve engineering velocity or reduce cost
  • make AI measurable
  • ship quickly with senior technical judgment
  • keep systems improving after launch

The ECE brief says proof should stay concrete and output-driven, not abstract or architecture-first. It specifically calls out concrete outputs like sales call summaries, CRM updates, follow-up drafts, executive summaries, proposal drafts, support ticket routing, product feedback synthesis, and internal process automation.

Proof Hierarchy

Not all proof does the same job.

Use this hierarchy.

Level 1: Trust proof

This answers: Can we trust MetaCTO to ship?

Examples:

  • 20+ years engineering leadership
  • 100+ products shipped / apps launched
  • strong review profile
  • senior operator credibility
  • production experience
  • client testimonials

Current source examples:

  • AEMI landing page cites 20+ years engineering leadership, 100+ products shipped, and 5.0 Clutch rating.
  • Existing product-development proof cites 100+ apps launched, $40M+ funding raised, 10M+ users served, and 4.8 average app rating.

Level 2: Relevance proof

This answers: Has MetaCTO solved problems like ours?

Examples:

  • similar industry
  • similar company stage
  • similar team structure
  • similar operational complexity
  • similar systems involved
  • similar buyer role

Marketing Manager task: Create a tagged proof database by:

  • buyer type
  • industry
  • company size
  • offer
  • system type
  • pain point
  • result
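A tagged database like this is only useful if proof can be pulled by combination of tags during a live deal. The sketch below shows one way it could work; the records, field names, and values are illustrative assumptions, not an existing MetaCTO schema:

```python
# Illustrative sketch of a tagged proof database. Records and field
# names are hypothetical examples, not real MetaCTO proof items.
proof_db = [
    {"id": "P-001", "buyer": "CTO", "industry": "B2B SaaS",
     "company_size": "mid-market", "offer": "AEMI",
     "system": "CI/CD", "pain": "review bottleneck",
     "result": "reduced review load"},
    {"id": "P-002", "buyer": "Revenue Leader", "industry": "B2B SaaS",
     "company_size": "mid-market", "offer": "ECE",
     "system": "CRM", "pain": "manual prep",
     "result": "faster follow-up"},
]

def find_proof(db, **tags):
    """Return proof items matching every supplied tag."""
    return [item for item in db
            if all(item.get(k) == v for k, v in tags.items())]

# Pull relevance proof for a specific buyer and pain point.
matches = find_proof(proof_db, buyer="Revenue Leader", pain="manual prep")
```

Filtering by buyer, offer, and pain is what makes relevance proof retrievable in the moment, rather than buried in case-study documents.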

Level 3: Outcome proof

This answers: What changed?

Examples:

  • faster follow-up
  • reduced manual prep time
  • improved output consistency
  • faster delivery
  • improved app rating
  • reduced review effort
  • improved margin
  • reduced cost
  • increased velocity
  • higher retention
  • faster MVP launch

Current ECE-style draft proof includes a B2B sales-team case: disconnected tools, 30+ minutes of manual prep, and inconsistent reps, with after-state metrics such as real-time discovery summaries, follow-up drafts in seconds, 85%+ classification accuracy, under 30 seconds to summary, and 3x follow-up consistency. Treat this as a draft or candidate proof asset until validated.

Level 4: System proof

This answers: Did MetaCTO build something durable, not just a demo?

Examples:

  • connectors
  • context model
  • evals
  • approval paths
  • write-backs
  • monitoring
  • cost visibility
  • launch scorecard
  • improvement loop

The ECE follow-up deck defines production maturity through reliable connectors, business object models, governed access, workflow execution, observability and cost visibility, evals, and feedback.

Level 5: Compounding proof

This answers: Did the system get better after launch?

Examples:

  • quality improved month over month
  • review effort declined
  • adoption increased
  • cost per output declined
  • failures were reduced
  • next workflow became easier to ship
  • usage expanded to more users or teams

The Phase 0 plan defines the recurring operating loop as Observe → Evaluate → Improve → Expand and says Continuous AI Operations should improve output quality, reduce review effort, increase trust and adoption, optimize workflow economics, and expand into adjacent workflows.

Proof Categories

A. Company Credibility Proof

Use this when the buyer needs confidence in MetaCTO generally.

Current proof assets

  • 20+ years engineering leadership
  • 100+ products shipped / apps launched
  • 5.0 Clutch rating in AEMI copy
  • 4.8 average app rating in legacy/product proof
  • 10M+ users served
  • $40M+ funding raised by products/clients
  • senior operator experience
  • founder/CTO-led delivery posture

Sources currently support 20+ years engineering leadership, 100+ products shipped, and 5.0 Clutch rating in the AEMI page. Another source supports 100+ apps launched, $40M+ funding raised, 10M+ users served, and 4.8 average app rating.

Marketing Manager TODO

Create a Company Proof Sheet with:

Proof Claim | Exact Number | Source | Last Verified | Approved For Website? | Notes
Products shipped | 100+ | TBD | TBD | Yes / No | Reconcile "products shipped" vs "apps launched"
Clutch rating | 5.0 or 4.8 | TBD | TBD | Yes / No | Confirm current public rating
Users served | 10M+ | TBD | TBD | Yes / No | Define source and scope
Funding raised | $40M+ | TBD | TBD | Yes / No | Define whether client fundraising, product funding, etc.

Goal

Create one verified, approved set of MetaCTO proof stats by the end of Week 1.

B. Engineering Delivery Proof

Use this when the buyer needs confidence that MetaCTO can ship real systems.

Current proof themes

  • production engineering execution
  • senior technical leadership
  • product and platform delivery
  • AI-native delivery model
  • senior-guided teams
  • outcome ownership

The GTM strategy differentiates MetaCTO from generic AI consultancies by emphasizing production engineering execution, custom context infrastructure, agent workflows, and operational automation.

Existing testimonial candidates

AEMI page:

“MetaCTO combined startup-style speed with senior-level engineering discipline. They not only built what we asked but also proactively proposed better solutions.”
Founder, This Life

Lightning Pods page:

“MetaCTO stood out for their ability to quickly grasp the intricacies of our product and translate that into clean, scalable solutions.”
Bo Abrams, CEO, ATP

Marketing Manager TODO

Build a testimonial bank with:

Quote | Client | Role | Offer | Proof Theme | Permission Status | Best Use
“MetaCTO combined startup-style speed…” | This Life | Founder | Product / AEMI | Speed + senior discipline | TBD | AEMI page, homepage
“MetaCTO stood out…” | ATP | CEO | Pods / product | Complexity + clean scalable solutions | TBD | Pods, ECE, homepage

Research task

Pull every approved testimonial from:

  • Clutch
  • website case studies
  • proposal docs
  • sales decks
  • Slack
  • client emails
  • project closeout notes

Tag each quote by pain:

  • speed
  • quality
  • senior judgment
  • rescue
  • scalability
  • AI leverage
  • communication
  • delivery reliability
  • product strategy

C. ECE Proof

Use this when the buyer needs confidence in the flagship offer.

Current proof we can use now

ECE proof is still early, so the library should be honest. Right now, the strongest support is:

  • clear documented offer structure
  • strong internal thesis
  • concrete output examples
  • production maturity model
  • internal dogfooding plan
  • candidate case-study patterns

The ECE brief positions Enterprise Context Engineering as the core AI infrastructure service for mid-market companies and says the goal is qualified pipeline, $60K–$180K deals, and follow-on work. It also defines the concrete output examples that should be used for conversion: CRM updates, follow-up drafts, executive summaries, proposal drafts, support routing, product feedback synthesis, and internal process automation.

ECE proof claims we can likely make now

Use carefully:

  • 100+ products shipped by the team
  • AI-enabled delivery system already used internally
  • built around real workflows, not demos
  • designed for growing companies / mid-market operating constraints
  • connects real business data
  • structures context for AI
  • produces usable outputs inside work

The ECE brief explicitly lists 100+ products shipped and AI-enabled delivery system already used internally as proof points.

ECE proof claims we need to earn

Do not overstate yet:

  • “X% reduction in manual prep”
  • “Y hours saved per week”
  • “Z% improvement in output acceptance”
  • “reduced review time by X%”
  • “improved close rate”
  • “reduced support backlog”
  • “lowered operational cost by X%”

These should become the first lighthouse proof targets.

Marketing Manager TODO

Create an ECE Lighthouse Proof Tracker.

Client / Case | Use Case | Baseline | After | Metric | Source System | Permission | Status
Internal revenue system | Discovery → follow-up → proposal | TBD | TBD | Time to follow-up / manual tracking | HubSpot / email | Internal | To collect
B2B sales team | Follow-up prep | 30+ min prep | <30s summary draft | Time saved | Draft source | Needs validation | Candidate
First client lighthouse | TBD | TBD | TBD | TBD | TBD | TBD | Needed

First ECE metrics to collect

Prioritize these:

Time to usable output

  ○ before: manual prep time
  ○ after: generated summary/draft/report time

Output acceptance rate

  ○ accepted as-is
  ○ accepted with edits
  ○ rejected

Review effort

  ○ time spent reviewing output
  ○ number of correction cycles

Action completion

  ○ CRM updated
  ○ email drafted
  ○ ticket routed
  ○ report generated
  ○ approval sent

Usage

  ○ weekly active users
  ○ runs per week
  ○ repeat usage

Quality

  ○ accuracy rubric
  ○ completeness score
  ○ hallucination/error count
  ○ user correction themes

Economics

  ○ time saved
  ○ cost avoided
  ○ throughput increased
  ○ cycle time reduced

The internal plan already requires instrumentation of time, cost, quality, and usage from day one.
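As a sketch of how the acceptance and review-effort metrics above could be computed from a simple run log (the log structure and field names are assumptions for illustration, not an existing instrumentation system):

```python
# Sketch: compute acceptance rate and review effort from a run log.
# The log format below is a hypothetical example.
runs = [
    {"output": "summary",   "status": "accepted",            "review_minutes": 2},
    {"output": "summary",   "status": "accepted_with_edits", "review_minutes": 6},
    {"output": "follow_up", "status": "rejected",            "review_minutes": 4},
    {"output": "summary",   "status": "accepted",            "review_minutes": 1},
]

# Outputs accepted as-is or with edits count as usable.
accepted = sum(r["status"] in ("accepted", "accepted_with_edits") for r in runs)
acceptance_rate = accepted / len(runs)

# Average review time per output, a proxy for review effort.
avg_review_minutes = sum(r["review_minutes"] for r in runs) / len(runs)
```

Even a log this simple, kept from day one, gives the before/after numbers the proof library needs.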

D. AEMI Proof

Use this when the buyer has internal engineering resources and wants to know whether AI is improving velocity or reducing cost.

Current proof assets

AEMI already has a strong proof shape:

  • 30-day assessment
  • maturity score
  • blocker map
  • prioritized roadmap
  • executive-ready summary
  • before/after clarity
  • review of full delivery system, not just coding assistant usage

The AEMI landing page says buyers get an AI maturity score, blocker map, prioritized fixes, and executive-ready summary in 30 days. It also frames the before state as unclear AI usage, “we feel faster” with no data, unanswered board questions, and rising tool spend with unknown ROI.

Strong AEMI proof angles

  • AI usage is not the same as AI impact
  • bottlenecks move from coding to review, QA, CI/CD, and release
  • leadership needs measurement, not anecdotes
  • the assessment looks across workflow fit, review load, release infrastructure, knowledge/context, governance, and measurement

The AEMI page explicitly says the review covers workflow fit, review and QA load, release infrastructure, knowledge and context, governance, and measurement.

Marketing Manager TODO

Create an AEMI Sample Proof Pack.

Must include:

Redacted maturity scorecard

Sample blocker map

Sample prioritized roadmap

Sample executive readout

Sample ROI / velocity model

Example before/after engineering workflow map

AEMI sales collateral already references a sample engagement pack with a redacted maturity scorecard, sample blocker register, 30/60/90 roadmap, and ROI model template.

AEMI metrics to collect

Metric | Why It Matters | Example
AI tool adoption by workflow | Shows usage distribution | Coding vs review vs QA
Perceived speed vs throughput | Exposes illusion of progress | “We feel faster” vs measured speed
Review load | Shows bottleneck movement | PR review time
Rework rate | Shows quality impact | Bugs, QA failures
Cycle time | Shows delivery speed | Ticket start → release
Cost per shipped feature | Shows financial impact | Team cost / output
Tool spend | Shows ROI pressure | AI licenses + platforms
Engineering satisfaction | Shows adoption and friction | Survey/interviews

E. Lightning Pods Proof

Use this when the buyer needs ongoing AI-native engineering capacity.

Current proof assets

Lightning Pods are positioned as small senior teams that own defined outcomes, not staff augmentation. The landing page contrasts pods with traditional teams: 5–8 engineers vs 2–3 operators, months of ramp vs days, shared/unclear ownership vs pod-owned outcomes, and AI built into every step.

The Q2 board material also shows a target pod structure of 6 active clients plus 3 maintenance clients, $175K/month revenue, $2.1M/year, and 66% gross margin, plus a shift from blended US/offshore teams to senior US + agent team composition.

Proof angles

  • smaller senior team
  • lower overhead
  • faster decisions
  • outcome accountability
  • AI built into planning, coding, QA, docs, and iteration
  • flexible capacity without hiring delay

Marketing Manager TODO

Create a Pods Proof Sheet.

Proof Type | Needed Evidence | Source
Delivery speed | Time from scope to shipped outcome | Project records
Team efficiency | Team size vs output | Pod staffing + delivery data
Margin improvement | Gross margin by pod | Finance
Client satisfaction | Quote / NPS / renewal | Client success
AI-native delivery proof | Examples of agent-assisted delivery work | Team

F. Product Development Proof

Use this when the buyer needs confidence in classic engineering delivery.

Current proof assets

Existing product-development proof includes:

  • 100+ apps launched
  • $40M+ funding raised
  • 10M+ users served
  • 4.8 average app rating
  • 20+ years combined experience
  • MVPs in as little as 90 days

This comes from existing proposal/website proof.

Known case-study themes to organize

From existing MetaCTO memory and prior materials, the likely case-study categories are:

  • mobile app launch
  • app rating improvement
  • MVP to market
  • UX workflow improvement
  • retention improvement
  • modernization / rescue
  • AI feature integration
  • internal tool / dashboard

Marketing Manager TODO

Verify and structure every product case study.

Client | Work Type | Before | After | Metric | Quote | Approved? | Best Offer Use
G-Sight | App improvement | TBD | TBD | App rating change | TBD | TBD | Product / Rescue
MamaZen | Mobile app | TBD | TBD | Retention / ARR / category rank | TBD | TBD | Product
Drop Offer | UX / product | TBD | TBD | UX workflow improvement | TBD | TBD | Product
kommu | App / auth / launch | TBD | TBD | Launch / quality | TBD | TBD | Product
FounderBrand AI | MVP | TBD | TBD | <90 days? | TBD | TBD | Product / AI

Important: do not publish client-specific numbers until each source is verified and permission status is known.

G. Spreadsheet to App Proof

This is a newer wedge, so the proof library should start with pain proof and then build delivery proof.

Pain proof to collect

  • number of spreadsheet versions
  • frequency of wrong-file updates
  • hours spent reconciling reports
  • number of people using the spreadsheet
  • business process dependent on the file
  • number of manual copy/paste steps
  • errors caused by outdated versions
  • time spent creating weekly/monthly reports
  • approval steps hidden in email or comments

Proof claims we want to earn

  • reduced report creation time
  • fewer version issues
  • faster field updates
  • cleaner permissions
  • fewer duplicate entries
  • improved visibility
  • better handoff between office and field
  • structured data foundation for future AI

Marketing Manager TODO

Create a Spreadsheet to App Campaign Proof Kit.

Sections:

Common pain examples

Before/after mockups

“Which file is the real one?” copy examples

Construction/field ops use cases

Data model extraction example

App conversion example

AI-readiness education angle

Template:

Spreadsheet Process | Who Uses It | Current Pain | Business Risk | App Output | Future AI Opportunity
Bid tracking | Estimators + PMs | Multiple versions | Wrong bid status | Bid dashboard | Bid summary agent
Job schedule | Ops + field | Outdated copies | Missed updates | Scheduling tool | Daily risk summary
Asset tracking | Admin + field | Manual updates | Lost equipment | Asset app | Anomaly detection

Proof by Buyer

CEO / Founder proof

They need proof of:

  • leverage
  • speed
  • focus
  • growth enablement
  • reduced founder/expert bottleneck
  • systems that scale

Best proof types:

  • before/after operating model
  • time saved
  • faster decision cycles
  • faster delivery
  • clear executive narrative
  • system adoption

COO proof

They need proof of:

  • execution consistency
  • fewer handoffs
  • better visibility
  • more repeatable work
  • fewer manual steps

Best proof types:

  • handoff time reduced
  • manual steps removed
  • report generation time reduced
  • queue visibility
  • process consistency

CFO proof

They need proof of:

  • cost reduction
  • ROI
  • labor leverage
  • tool spend clarity
  • reduced waste
  • productivity gains

Best proof types:

  • cost per output
  • hours saved
  • tool spend rationalization
  • reduced review time
  • margin impact
  • ROI model

The Phase 0 doc says buyers are asking harder questions about throughput, bottlenecks, what is safe to automate, ROI on AI spend, and scaling wins without scaling chaos.

CTO / Engineering Leader proof

They need proof of:

  • system quality
  • maintainability
  • permissions
  • evals
  • traces
  • delivery velocity
  • reduced bottlenecks

Best proof types:

  • maturity score
  • blocker map
  • workflow review
  • eval suite
  • review load impact
  • CI/CD or release impact
  • architecture examples

The AEMI page says AEMI reviews workflow fit, review and QA load, release infrastructure, knowledge/context, governance, and measurement.

Revenue Leader proof

They need proof of:

  • faster follow-up
  • better account prep
  • consistent messaging
  • cleaner CRM
  • better proposal speed
  • more complete customer context

Best proof types:

  • time to summary
  • time to follow-up
  • CRM update completion
  • proposal prep time
  • output acceptance rate
  • consistency score

Proof Asset Types

The Marketing Manager should create each of these over time.

Proof stat

A single metric used in website, ads, or decks.

Template:

Field | Example
Claim | 100+ products shipped
Source | Website / proposal / internal
Proof type | Company credibility
Last verified | TBD
Approved uses | Homepage, offer pages
Caveats | Define products vs apps

Before/after card

Use on website and sales decks.

Template:

Before

  • fragmented systems
  • manual prep
  • inconsistent outputs

After

  • connected context
  • usable output
  • measurable execution

Metric

  • time saved / quality improved / adoption / cost reduced

Mini case study

Use in landing pages.

Template:

Client type:
Problem:
Systems involved:
What we built:
Timeline:
Result:
Quote:
Best CTA:

Deep case study

Use for sales enablement and SEO.

Template:

Client context

Business problem

Why existing approach failed

Systems and constraints

MetaCTO approach

What we built

Launch path

Results

What improved after launch

What this proves

Proof-backed founder post

Use for LinkedIn.

Template:

Observation: Growing companies are outgrowing the way work gets done.

Proof: Client/team example with a concrete before/after.

Point of view: The issue was not the AI tool. It was missing context, outputs, actions, or measurement.

CTA: Talk to a CTO / read case study / download guide.

Sales proof snippet

Use in proposals and follow-ups.

Template:

We have seen this pattern before: [pain]. In a similar environment, the path was [approach], and the result was [metric/outcome]. The key was not adding another tool, but building the context and execution layer behind the work.

Proof Claims Matrix

Use this table to decide what can be said publicly.

Claim | Status | Source Needed | Public Use?
100+ products shipped | Supported, verify phrasing | AEMI / website proof | Yes, after standardization
100+ apps launched | Supported in product proof | Proposal/site source | Yes for Product Dev
5.0 Clutch rating | Supported in AEMI copy, verify current | Clutch page | Yes after verification
4.8 average app rating | Supported in product proof | App portfolio data | Yes after verification
10M+ users served | Supported in product proof | Portfolio data | Yes after verification
$40M+ funding raised | Supported in product proof | Portfolio data | Yes after verification
AI-enabled delivery system used internally | Supported in ECE brief | Internal proof required | Yes, but add examples
Internal revenue workflow system live | Planned | Dogfooding data | Not yet
ECE reduces manual prep by X% | Need data | Client/internal measurement | Not yet
Continuous AI Ops improves quality | Need reports | Monthly scorecards over time | Not yet
AEMI increases engineering velocity | Need client data | AEMI engagements | Not yet
Pods reduce team size from 5–8 to 2–3 | Current positioning / model | Delivery proof | Use carefully

Internal Dogfooding Proof Plan

This is the most important near-term proof motion.

Goal

Use MetaCTO’s own revenue system to prove the ECE thesis.

The Phase 0 plan already says Q2 should dogfood an internal revenue workflow system across discovery, brief, proposal, and follow-up, and begin collecting time, quality, and workflow economics proof.

Internal proof workflow

Start with:

Discovery → Brief → Follow-up → Proposal

Baseline to collect

Before automation/agent support:

Metric Baseline

Time from call to summary TBD

Time from call to follow-up TBD

Time to first proposal draft TBD

Time to final proposal TBD

Number of manual handoffs TBD

CRM completeness TBD

Proposal quality score TBD

Founder time per opportunity TBD

SDR/BDR time per opportunity TBD

After-state to collect

Metric After

Time to generated summary TBD

Time to reviewed summary TBD

Time to draft follow-up TBD

Time to CRM update TBD

Time to proposal input pack TBD

Output acceptance rate TBD

Review time TBD

Founder time saved TBD

Marketing Manager TODO

Create a recurring Internal Proof Review every Friday.

Agenda:

What outputs were generated this week?

Which were accepted, edited, or rejected?

What time was saved?

What broke?

What proof can be captured?

What should become a screenshot, quote, metric, or case-study note?

Proof Library Database Schema

Use this as the structure in your docs/database.

Proof Item

Field Description

Proof ID Unique ID

Title Short name

Proof Type stat, quote, case, before/after, metric, screenshot

Offer ECE, AEMI, Pods, Product Dev, S2A, Agent Dev, Continuous AI Ops

Buyer CEO, COO, CFO, CTO, Rev Leader, Engineering Leader

ICP Archetype Scaling Operator, AI-Pressured Executive, Revenue Team Under Strain, etc.

Pain Fragmented systems, manual coordination, AI not measurable, etc.

Claim What we want to say

Evidence Source material

Metric Numeric outcome, if any

Client Client/account

Permission Approved, needs approval, internal only

Confidence High, medium, low

Last Verified Date

Best Use Homepage, deck, proposal, outbound, founder post

Notes Caveats and instructions
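The schema above could be represented directly in the team's tooling. Below is a minimal sketch as a Python dataclass, with a guard that requires a source, a verification date, and approved permission before an item counts as publishable; all names and defaults are illustrative, not a finished data model:

```python
# Illustrative sketch of the Proof Item schema; field names mirror the
# table above, defaults and the publishable() rule are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProofItem:
    proof_id: str
    title: str
    proof_type: str          # stat, quote, case, before/after, metric, screenshot
    offer: str               # ECE, AEMI, Pods, Product Dev, S2A, ...
    buyer: str               # CEO, COO, CFO, CTO, Rev Leader, ...
    claim: str               # what we want to say
    evidence: str = ""       # source material
    metric: Optional[str] = None
    client: Optional[str] = None
    permission: str = "needs approval"   # approved / needs approval / internal only
    confidence: str = "low"              # high / medium / low
    last_verified: Optional[str] = None  # ISO date
    best_use: list = field(default_factory=list)
    notes: str = ""

    def publishable(self) -> bool:
        """Public use requires a source, a verification date, and approval."""
        return (bool(self.evidence)
                and self.last_verified is not None
                and self.permission == "approved")
```

Encoding the gate in the data structure makes it harder for unverified claims to leak into public copy.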

Marketing Manager 30-Day Plan

Week 1: Inventory and verification

Goals:

  • collect all proof assets
  • verify current claims
  • create the proof database
  • flag public vs internal-only proof

Tasks:

  • pull current case studies
  • pull Clutch reviews
  • pull homepage and proposal stats
  • collect all client quotes
  • reconcile 100+ products vs 100+ apps
  • verify Clutch rating
  • verify 10M+ users, $40M+ funding, 4.8 app rating
  • identify top 10 proof assets

Output: Proof Library Inventory v1

Week 2: Reframe proof around new positioning

Goals:

  • map existing proof to Company Truth and Language System

  • rewrite old product/dev proof into new relevance where appropriate

Tasks:

  • tag proof by buyer pain

  • tag proof by offer

  • rewrite 5 mini case studies

  • create before/after cards

  • build homepage proof section options

  • build ECE proof placeholders

Output: Proof Messaging Pack v1

Week 3: Build missing proof motions

Goals:

  • start collecting proof where it does not exist yet

Tasks:

  • launch internal dogfooding proof tracker
  • define ECE lighthouse metrics
  • define AEMI outcome metrics
  • define Spreadsheet to App campaign proof examples
  • define Continuous AI Ops monthly scorecard template

Output: Proof Collection System v1

Week 4: Package and publish

Goals:

  • make proof usable in GTM

Tasks:

  • update homepage proof section
  • create sales proof snippets
  • create 3 LinkedIn proof posts
  • create 1 ECE case-study skeleton
  • create 1 AEMI sample proof pack outline
  • create proposal proof block

Output: Proof Activation Pack v1

Marketing Manager Research Backlog

High priority

  1. Verify public MetaCTO stats:
     ○ products/apps shipped
     ○ Clutch rating
     ○ users served
     ○ funding raised
     ○ average app rating
  2. Pull top 10 client quotes:
     ○ from Clutch
     ○ proposals
     ○ website
     ○ client emails
     ○ Slack
  3. Identify 5 best case studies to reframe:
     ○ one product launch
     ○ one rescue/modernization
     ○ one AI-enabled delivery example
     ○ one revenue/ops workflow example
     ○ one engineering leverage/AEMI example
  4. Build ECE proof baseline:
     ○ internal revenue workflow
     ○ first lighthouse client
     ○ candidate B2B sales team example
  5. Build AEMI proof baseline:
     ○ sample scorecard
     ○ sample blocker map
     ○ sample roadmap
     ○ sample ROI model

Medium priority

  6. Build Spreadsheet to App proof examples:
     ○ construction
     ○ field services
     ○ ops-heavy services
     ○ finance/admin workflows
  7. Create proof by buyer:
     ○ CEO
     ○ COO
     ○ CFO
     ○ CTO
     ○ Engineering leader
     ○ Revenue leader
  8. Create proof by claim:
     ○ speed
     ○ cost
     ○ quality
     ○ consistency
     ○ trust
     ○ adoption
     ○ improvement over time

Lower priority

  9. Build external market proof:
     ○ AI adoption stats
     ○ mid-market AI ROI studies
     ○ engineering productivity research
     ○ spreadsheet risk research
     ○ AI pilot failure / production gap research
  10. Build competitor contrast proof:

  • why tools are not enough
  • why search is not enough
  • why generic consulting is not enough
  • why staff aug is not enough

Proof Goals

30-day goal

Have enough proof to support:

  • new homepage
  • ECE page
  • AEMI page
  • Spreadsheet to App campaign
  • sales deck
  • founder posts

Minimum viable proof assets:

  • 10 verified stats
  • 10 approved quotes
  • 5 mini case studies
  • 3 before/after cards
  • 1 ECE lighthouse proof tracker
  • 1 AEMI sample proof pack
  • 1 internal dogfooding proof tracker

60-day goal

Have early ECE-specific proof:

  • internal revenue workflow baseline and after-state
  • first lighthouse client metric baseline
  • first launch scorecard
  • first value report
  • first case-study draft

90-day goal

Have proof that supports the flagship claim:

  • ECE can turn scattered context into usable outputs and reliable actions
  • internal dogfooding has measurable time/quality gains
  • at least one client or internal lighthouse has before/after metrics
  • Continuous AI Operations has a scorecard template
  • proof is embedded in website, sales, proposals, and outbound

Proof Library Rules

Rule 1: Every claim needs a source

No proof goes public without:

  • source
  • date
  • owner
  • permission status
  • confidence rating

Rule 2: Separate public proof from internal proof

Use labels:

  • Public
  • Sales-only
  • Internal-only
  • Needs approval
  • Draft / candidate

Rule 3: Do not overstate ECE proof yet

ECE is the flagship, but the proof base is still developing. Use company credibility and production engineering proof while intentionally building ECE-specific proof.

Rule 4: Reframe old proof carefully

Product-development proof can support engineering credibility, but do not pretend every app case study proves ECE.

Use old proof for:

  • trust
  • delivery quality
  • speed
  • production experience
  • senior judgment

Use new proof for:

  • context systems

  • AI outputs

  • reliable actions

  • improvement loops

  • operating leverage

Rule 5: Turn every delivery into proof

Every project should produce at least one proof artifact:

  • metric

  • quote

  • screenshot

  • before/after

  • case-study note

  • launch scorecard

  • lesson learned

Final Standard

The Proof Library exists to make MetaCTO’s positioning believable.

The standard is:

Proof should show how growing companies move from scattered knowledge, disconnected systems, and unmeasured AI usage into production capability.

The Marketing Manager owns turning this proof into assets. The team owns creating and capturing the proof.
