Applied AI, measured by outcomes.

The Institute for AI Outcomes works with enterprises and large institutions to move AI work past pilots and proofs of concept into systems that produce measurable results: on the balance sheet, in the warehouse, and in opportunity-focused education. Those systems draw on evaluation harnesses, carefully engineered context, and recent AI research.

Begin an engagement → See our work
Selected work

Three engagements illustrating how we connect model performance to business outcomes.

01
P&C Insurance
Risk modeling

Lifting the Gini coefficient on a property risk model using unstructured submission data

A specialty carrier had a credible structured-data risk model and a quiet suspicion that most of the signal on each submission was sitting in broker emails, inspection narratives, and scanned loss runs the model never saw. We built a document-understanding layer that parses and grounds unstructured submission data, then distills it into engineered features the existing GLM consumes, with full lineage back to the source paragraph.

+0.11
Improvement in Gini coefficient
Submissions cleared per underwriter
100%
Features traceable to source documents
02
Food Distribution
Predictive logistics & inventory

Forecasting demand and replenishment for a major US foodservice distributor

A national distributor was stocking thousands of perishable SKUs across regional distribution centers against a forecast that hadn't kept pace with shifting restaurant traffic, weather, and menu cycles. We replaced a planner-driven weekly process with a hierarchical demand forecast that feeds directly into replenishment and lane-level routing, refreshed nightly and built to surface its own uncertainty so it isn't trusted when it shouldn't be.

−21%
Spoilage on perishable categories
+8.4 pt
Fill rate on A-items
−$14M
Working capital tied up in inventory
03
Workforce & Education
Conversational agent

An agent driving discovery and engagement with alternative teaching credentials

A national organization working on the US teacher pipeline needed to meet career-changing candidates where they actually are: confused about which alternative-certification programs apply to them, in which states, with which prior degrees. We built a guided-discovery agent that maps each candidate's background to eligible programs across 10 priority states, surfaces fit signals (cost, time, subject area), and hands warm leads, with the candidate's consent, to program partners.

4.7×
Lift in program-application rate
10
Priority states with mapped credentialing pathways
+8%
Gain in response quality, as scored by a custom evaluation harness
Contributors & advisors

A small bench of operators and advisors, assembled around each engagement.

Contributor
Luis Silva
Product & research
Advisor
Harlyn Pacheco
Outcomes & strategy

Contributors have built and shipped AI at Google, Microsoft, and VC-funded startups. We assemble the right people for each engagement and are explicit about who is doing the work.

How we engage

A working method built around the outcome, not the model.

01 — Define

The outcome contract

Before we write a line of code, we agree with your team on what success looks like on the P&L or the balanced scorecard, and on what we would rather leave unshipped than compromise.

02 — Build

Small teams, short cycles

Engagements run 8–16 weeks with a contributor on every call. We prototype on your data, not in a sandbox, and resist the urge to rebuild systems that already work.

03 — Hand off

Your team owns it

Every engagement ends with a runbook, an evaluation harness, and a named owner on your side. We'd rather make ourselves unnecessary than build a dependency.

Begin an engagement

Tell us what outcome you are trying to move.

We take on a small number of engagements each quarter. A 30-minute call with a contributor is the fastest way to find out whether we are a fit.

Or write directly · hi@outcomesinstitute.ai