When we say “AI” at Pollen Sense, most people assume we mean LLMs. We don’t. When people hear that we’re building AI data infrastructure, the default assumption is large language models: chatbots, text generation, conversational AI. That’s understandable given the moment we’re in. But it’s not what we mean when we say AI.
At Pollen Sense, AI data infrastructure means Physical AI: machine learning systems that directly observe the real world, classify physical signals in real time, and convert them into structured, trustworthy data that other systems can reason over.
Our AI starts with perception, not language.
Physical AI Starts With Perception
Before AI can reason, predict, or explain, it has to accurately sense what’s happening in the environment. Our work focuses on the hardest and most foundational layer of AI: translating raw physical phenomena in the air into reliable digital signals, continuously and at scale. That’s where the difference between CNNs and LLMs matters.
CNNs Power Perception in Physical AI Systems
Convolutional Neural Networks are exceptionally good at seeing. They’re designed to process spatial data such as images, optical signals, and structured sensor outputs, and to identify patterns that are invisible to the human eye, at scale. In Physical AI systems, CNNs do the heavy lifting at the edge. They distinguish meaningful signals from noise, classify what’s physically present, and do so continuously in real-world conditions. This is the layer where accuracy, repeatability, and scientific validity are earned.
CNNs answer a very specific and critical question:
What is physically present, right now?
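To make the perception layer concrete, here is a minimal sketch of the kind of convolutional pipeline described above, written in plain NumPy rather than a deep learning framework. The filter values, class names, and synthetic “microscope frame” are illustrative assumptions for this post, not Pollen Sense’s actual model.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def classify(image, kernels, weights):
    """Tiny CNN: conv -> ReLU -> global average pool -> linear -> softmax."""
    features = np.array([np.maximum(conv2d(image, k), 0).mean() for k in kernels])
    logits = weights @ features
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Illustrative input: an 8x8 frame with a bright blob standing in for a particle.
image = np.zeros((8, 8))
image[3:5, 3:5] = 1.0

kernels = [np.ones((3, 3)) / 9.0,              # local-brightness (blob) detector
           np.array([[1, 0, -1]] * 3) / 3.0]   # vertical-edge detector
weights = np.array([[2.0, 0.0],                # hypothetical class: "particle"
                    [0.0, 2.0]])               # hypothetical class: "debris"

probs = classify(image, kernels, weights)      # class probabilities, summing to 1
```

A production system would learn the kernels and weights from labeled data and run many layers, but the shape of the computation is the same: spatial filters turn raw pixels into evidence, and a classifier head turns evidence into an answer to “what is physically present?”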
LLMs Add Value After the Physical Signal Is Trustworthy
Large Language Models operate at a different layer. They don’t observe the physical world directly. Instead, they work on structured representations of data, such as text, sequences, summaries, and metadata, to understand context, relationships, and implications over time. In a Physical AI stack, LLMs add value after perception. They help explain trends, correlate environmental data with outcomes, support decision-making, and communicate insights across systems and stakeholders.
LLMs answer a different question:
What does this data mean in context, and what should we do about it?
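As an illustration of how the reasoning layer consumes the perception layer’s output, here is a hedged sketch that assembles structured pollen readings into a prompt a language model could interpret. The record fields, taxon, and alert threshold are hypothetical, and the actual model call is deliberately left out.

```python
from datetime import date

# Hypothetical structured output from the perception layer.
readings = [
    {"date": date(2024, 5, 1), "taxon": "oak", "grains_per_m3": 120},
    {"date": date(2024, 5, 2), "taxon": "oak", "grains_per_m3": 480},
    {"date": date(2024, 5, 3), "taxon": "oak", "grains_per_m3": 950},
]

def build_prompt(readings, threshold=500):
    """Turn structured sensor records into text an LLM can reason over."""
    lines = [f"{r['date'].isoformat()}: {r['taxon']} at "
             f"{r['grains_per_m3']} grains/m^3" for r in readings]
    exceed = [r for r in readings if r["grains_per_m3"] > threshold]
    header = (f"{len(exceed)} of {len(readings)} readings exceed the "
              f"{threshold} grains/m^3 alert threshold.")
    return header + "\n" + "\n".join(lines) + \
        "\nSummarize the trend and suggest actions."

prompt = build_prompt(readings)
```

The point of the sketch is the direction of data flow: the LLM never touches the air. It receives already-classified, already-quantified observations and contributes context, explanation, and recommended action.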
Physical AI Is About Building the Stack, Not Choosing Sides
There’s a tendency right now to frame AI as a choice between one model and another. In the physical world, that framing doesn’t hold up. Perception without reasoning is limited. Reasoning without reliable perception is dangerous. Physical AI systems require both, in the correct order. At Pollen Sense, we focus on building the foundational layer first: high-integrity, real-time environmental data that can be trusted by scientists, healthcare systems, regulators, and industries. From there, higher-level models, including LLMs, can responsibly interpret, communicate, and act on that data.
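The ordering argued for here, perception first and reasoning second, can be sketched as a two-stage pipeline in which the reasoning layer refuses to interpret observations it can’t trust. The stage names, thresholds, and gating rule below are illustrative assumptions, not a real deployment.

```python
def perceive(raw_frames):
    """Perception layer (CNN stand-in): turn raw signals into labeled counts."""
    # Illustrative rule: count frames whose peak intensity clears a threshold.
    detections = [f for f in raw_frames if max(f) > 0.8]
    return {"particle_count": len(detections), "frames_seen": len(raw_frames)}

def reason(observation, min_frames=10):
    """Reasoning layer (LLM stand-in): only interpret trustworthy observations."""
    if observation["frames_seen"] < min_frames:
        return "insufficient data: withholding interpretation"
    rate = observation["particle_count"] / observation["frames_seen"]
    return "elevated particle levels" if rate > 0.3 else "normal particle levels"

frames = [[0.1, 0.9], [0.2, 0.3]] * 6   # 12 synthetic frames, half above threshold
summary = reason(perceive(frames))
```

The gate in `reason` is the whole argument in miniature: interpretation is only as good as the measurements beneath it, so the stack must earn trust at the perception layer before any higher layer speaks.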
When Physical AI Becomes Infrastructure
Public health and environmental infrastructure depend on one thing above all else: trustworthy, continuous data. You can’t protect communities, plan cities, or respond to health risks using delayed, incomplete, or manually sampled signals. By applying Physical AI to the air we all share, Pollen Sense turns an invisible and constantly changing part of the environment into reliable public infrastructure. Real-time, high-resolution environmental data enables earlier warnings, better policy decisions, more resilient communities, and healthier outcomes, especially for populations most affected by air quality and exposure risks. That’s where AI moves beyond innovation and becomes impact.
Kris Klein
CEO & Co-Founder, Pollen Sense