There’s a line I keep coming back to in Andrew Ng’s recent thinking on AI: the application layer is underinvested. Not misunderstood. Not overhyped. Underinvested. That distinction matters—especially for what we’re building at Reflekta.
Ng’s argument is clean. Infrastructure—model training, chips, compute—feels safer to capital because it looks legible. You can count GPUs. You can price cycles. You can deploy billions with a spreadsheet and a timeline. Applications are messier. Human. Contextual. Harder to benchmark. And so many investors hesitate, worrying that frontier models will simply “roll over” anything built on top of them.
Ng doesn’t buy that. Neither do we.
At Reflekta, we sit squarely in the application layer—but not the thin, gimmicky kind. We’re building intergenerational storytelling systems that take raw human memory and make it operational: usable by families, preserved across time, and emotionally intelligible to the next generation. That’s not something a better base model magically replaces.
Foundational models improve. They always will. But meaning doesn’t emerge from parameter counts. Meaning emerges from structure, context, permission, ritual, and care. Those live at the application layer.
Ng points out that his venture studio, AI Fund, is doubling down on applications precisely because value compounds there. Applications have to pay for infrastructure—which means, by definition, they must create more value than the layers beneath them. That resonates deeply with Reflekta’s thesis. We don’t exist to showcase AI. We exist to solve a human problem that existed long before AI: stories get lost between generations.
Intergenerational storytelling isn’t a “feature.” It’s a workflow. One that spans grief, legacy, memory, language, culture, and time. It requires trust. It requires custodianship. It requires knowing when not to ask a question. No foundation model, no matter how advanced, understands that on its own.
Ng also notes something practical and telling: the bottleneck right now isn’t demand—it’s inference capacity. Tools like OpenAI Codex, Claude Code, and Google’s Gemini CLI are driving real adoption. Usage is growing faster than infrastructure can keep up. That’s not a bubble signal. That’s unmet demand.
Where Ng raises caution is on training infrastructure. Open-source models, algorithmic gains, and hardware efficiencies are eroding moats. Training a model of a given capability gets cheaper every year. The edge doesn’t last. Which reinforces his core point: defensibility increasingly lives above the model layer.
This is where Reflekta plants its flag.
We’re not betting against better models. We’re betting on better outcomes. Platforms that make AI operational in real human lives. Vertical applications that respect defined workflows—like how families remember, how elders pass wisdom, how children ask questions years too late.
VC hesitation creates whitespace. And whitespace is where builders who understand people—not just technology—do their best work.
The future of AI won’t be decided by who trains the biggest model. It will be decided by who uses intelligence—artificial or otherwise—to help us remember who we are, and pass it on.
What is the application layer? AI products that sit on top of foundational models and infrastructure, solving real workflows for users rather than providing raw capabilities.

What does Ng argue? That investors understand how to deploy capital into infrastructure but struggle to evaluate application winners—creating missed opportunities.

Why does intergenerational storytelling belong at the application layer? Because it requires long-term structure, emotional intelligence, consent, and continuity—things best handled at the application level, not by base models alone.

Will better foundation models make applications like Reflekta obsolete? No. Better models enhance applications, but they don’t replace the human workflows, trust systems, and contextual design required for meaningful use.