AI Demo
Room 275

Use with AI

Copy this session's complete context to paste into ChatGPT, Claude, or any AI assistant.

## Session: Close the GenAI “Learning Gap”: Self‑Improving AI Without Fine‑Tuning
**Track:** AI Demo | **Time:** 2:15 PM–3:00 PM | **Room:** 275 | **Type:** AI Demo
**Conference:** CIRAS AI Summit for Iowa — May 6, 2026, Scheman Building, Iowa State University, Ames IA

### Speaker(s)

**Ben McHone** — Staff Engineering Consultant, Source Allies (Urbandale, IA)
Ben McHone is a Staff Engineering Consultant at Source Allies, specializing in deploying agentic AI systems to production. He focuses on metric‑driven development and real‑world reliability, addressing the question: How do we know we can trust this technology? Ben is a DSPy contributor, LangChain Expert Program member, and Arize / Phoenix Ambassador.

**Matt Vincent** — Founder, Source Allies (Urbandale, IA)
Matt Vincent founded Source Allies, an Iowa‑headquartered consultancy specializing in Data & AI with multiple GenAI systems in production delivering measurable ROI. He works with organizations to move generative AI from pilot to product.

### Session Description

The MIT State of AI report surfaced a brutal truth: most GenAI systems do not retain feedback, adapt to context, or improve over time. While frontier models get better with every release, enterprises rarely gain a durable advantage, because their systems don’t actually learn.

The default answer is fine‑tuning. In practice, it’s often expensive, brittle, slow to iterate, and tightly coupled to a specific model version. Worse, it can lock teams out of rapidly improving frontier models.

This session presents an alternative: learning‑loop architectures that allow enterprise GenAI systems to improve continuously, without fine‑tuning, while remaining flexible enough to adopt new models as they emerge.

You’ll see how feedback from real usage can be captured, measured, and reintegrated safely into production systems. We’ll demonstrate how observability, evaluation, and automated optimization work together to turn GenAI from a static capability into a learning system.
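To make that cycle concrete, here is a minimal, hypothetical skeleton of a capture-evaluate-reintegrate loop. The types and function names (`Trace`, `evaluate`, `learning_loop`) are illustrative placeholders, not any specific product's API or the session's actual implementation; it only sketches the idea that prompt changes are promoted based on measured feedback rather than fine-tuned weights.

```python
# Hypothetical learning-loop skeleton (illustrative names only):
# capture real usage, evaluate it offline, and promote a candidate
# prompt behind a measurable gate -- no fine-tuning involved.
from dataclasses import dataclass

@dataclass
class Trace:
    prompt_version: str
    user_input: str
    model_output: str
    user_feedback: float | None  # e.g., thumbs up/down mapped to 1.0/0.0

def evaluate(traces: list[Trace]) -> float:
    """Score a prompt version against captured traces (placeholder eval)."""
    rated = [t.user_feedback for t in traces if t.user_feedback is not None]
    return sum(rated) / len(rated) if rated else 0.0

def learning_loop(current: str, candidate: str,
                  traces_by_version: dict[str, list[Trace]]) -> str:
    """Promote the candidate prompt only if it measurably beats production."""
    baseline = evaluate(traces_by_version[current])
    challenger = evaluate(traces_by_version[candidate])
    # Audit trail: every promotion decision is logged and reversible.
    print(f"baseline={baseline:.2f} challenger={challenger:.2f}")
    return candidate if challenger > baseline else current
```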

We’ll explore:

* Automated Prompt Optimization: enabling systems to evolve their own instructions using Genetic‑Pareto (GEPA) techniques based on measurable feedback (see the sketch after this list)
* Observability‑Driven Learning: detecting failure patterns and routing targeted corrections back into the system
* Trust & Auditability: fitting learning loops into existing governance, compliance, and risk frameworks rather than fighting them

If your GenAI initiative is stuck in pilot, or producing inconsistent or stagnant results, this session shows the missing half: the learning loop that makes improvement routine instead of exceptional.

### Other sessions in the AI Demo track

- M365 Copilot Rollout: Driving Adoption and Impact at Pella (3:10 PM–3:55 PM)
- From Chatbot to Builder: Practical AI in Everyday Work (10:20 AM–11:05 AM)
- Stop Automating Broken Processes: How to Redesign Your Business Operations for the Age of AI Agents (11:15 AM–12:00 PM)
- Building Enterprise-Scale RAG Chatbots Using Azure AI Foundry (1:20 PM–2:05 PM)

### Suggested prompts for this session

- "What questions should I prepare to ask the speaker(s) at this session?"
- "Create a structured note-taking template for this session focused on actionable takeaways"
- "Based on this session description, what background reading should I do to get the most value?"
- "After I attend, help me create an action plan for implementing what I learned"
- "How does this session connect to the other sessions in the AI Demo track?"
