Geographic, temporal, or sensor? Choose the wrong axis and your pilot will fail

What Should "Generalization" Mean?


Most computer vision pilot failures aren't due to weak models. They're due to an undefined or mismatched definition of "generalization."


After two decades building vision systems for healthcare, earth observation, and other applications, I've seen the same pattern repeatedly: teams cite distribution shift, site variability, and unexpected performance drops as reasons their pilots stalled. These aren't random failures—they're symptoms of blurred generalization goals set before training even began.


Here's the problem: we treat "generalization" as a single property to achieve, when it's actually a strategic choice to make.


The Trap of Treating Generalization as One Thing


In research papers, "generalization" typically means performance on a held-out test set drawn from the same distribution. Clean datasets, controlled conditions, similar capture protocols.


Real-world vision deployments operate under shifting, heterogeneous, and messy conditions.

Earth observation teams need robustness to weather patterns, seasonal variations, and sensor drift. Pathology teams need stability across scanners, staining protocols, laboratories, and patient demographics. Agricultural systems must handle cultivar variants, soil types, lighting conditions, and growth stages.


When teams don't explicitly define their target generalization domain, they validate models against the wrong dimension—then discover too late that the system collapses in production.


Earth Observation: Multiple Axes, Multiple Choices


In satellite imagery analysis, "generalization" could mean entirely different things:

  • Geographic generalization. Can a model trained on regions in North America perform on landscapes in Southeast Asia or Sub-Saharan Africa with different terrain, infrastructure, and land use patterns?

  • Temporal generalization. Does performance hold across years with different weather patterns, seasonal variations, and environmental changes?

  • Sensor generalization. Can the model handle Landsat's spectral bands and spatial resolution today, then switch to Sentinel-2's different sensor characteristics tomorrow?

A model that generalizes perfectly across geographies might fail completely when applied to imagery from a different satellite sensor. A land cover classification system that works one year may collapse during unusual weather conditions because validation never tested temporal shift.


You can't optimize for all dimensions simultaneously. You must choose which matters most for your deployment context.
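
If your tile metadata records region, acquisition date, and sensor, testing each axis separately takes only a few lines. Here is a minimal sketch in Python; the column names and the accuracy metric are assumptions used to illustrate the pattern, not part of any specific pipeline.

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Sketch: score one trained model separately along each generalization axis.
# Column names ("region", "year", "sensor") and accuracy are placeholders;
# substitute your own metadata fields and task metric (e.g., IoU).

def per_axis_report(df: pd.DataFrame, axes=("region", "year", "sensor")) -> pd.DataFrame:
    """df has one row per evaluated tile: y_true, y_pred, plus metadata columns."""
    rows = []
    for axis in axes:
        for value, group in df.groupby(axis):
            rows.append({
                "axis": axis,
                "value": value,
                "n_tiles": len(group),
                "accuracy": accuracy_score(group["y_true"], group["y_pred"]),
            })
    return pd.DataFrame(rows)

# Usage (hypothetical): report = per_axis_report(predictions_df)
```

A wide spread within one axis tells you which kind of shift the model has not yet learned to handle, and therefore where the next round of data collection should go.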


Pathology: Even More Complexity


Digital pathology reveals how subtle these distinctions become. "Generalization" hides at least four distinct challenges:

  • Scanner generalization. Variations in optics, color profiles, and compression artifacts across Aperio, Leica, Hamamatsu, and Philips systems.

  • Staining generalization. H&E staining variations from different reagent batches, staining protocols, and laboratory procedures create substantial color and intensity shifts.

  • Site generalization. Different hospitals mean different workflows, technician training, and tissue handling protocols.

  • Population generalization. Patient demographics, disease subtypes, treatment histories, and ancestry backgrounds.

A model that generalizes across scanners may still fail on tissue from a laboratory with slightly different H&E staining procedures. A model stable across staining variations may drift when encountering underrepresented patient populations or rare disease subtypes.
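
For the staining axis in particular, one common mitigation is stain-color augmentation during training: perturbing images in stain space so the model sees plausible lab-to-lab variation. Below is a minimal sketch using scikit-image's HED conversion; the jitter range is illustrative, not a tuned value.

```python
import numpy as np
from skimage.color import rgb2hed, hed2rgb

# Sketch of H&E stain jitter: scale and shift the haematoxylin/eosin/DAB
# channels, then convert back to RGB. Sigma below is an illustrative default.

def stain_jitter(rgb_image: np.ndarray, sigma: float = 0.05, rng=None) -> np.ndarray:
    """rgb_image: float array in [0, 1], shape (H, W, 3)."""
    if rng is None:
        rng = np.random.default_rng()
    hed = rgb2hed(rgb_image)                      # RGB -> stain (HED) space
    alpha = rng.uniform(1 - sigma, 1 + sigma, 3)  # per-channel scale
    beta = rng.uniform(-sigma, sigma, 3)          # per-channel shift
    return np.clip(hed2rgb(hed * alpha + beta), 0.0, 1.0)
```

Augmentation like this widens the staining distribution seen during training, but it is no substitute for validating on slides from laboratories the model has never seen.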


I've watched pathology pilots stall because teams validated only on held-out slides from the same site—which tells you nothing about deployment-grade robustness.


This Is a Leadership Decision, Not Just a Technical One


Strategic leaders often assume generalization is the model team's responsibility. In practice, generalization is a product decision that determines everything downstream:

  • Data acquisition strategy and budget allocation

  • Labeling priorities and annotation guidelines

  • Pilot design and success criteria

  • Whether foundation models add value or introduce new risks

When generalization remains undefined, teams validate the wrong scenarios, ship models prematurely, confuse prototype success with deployment readiness, and burn trust with clinicians or operations teams who discover the system fails under real conditions.


The Question That Changes Everything


Replace "Does it generalize?" with a single question: Generalization to what?


This forces immediate clarity. What environments matter most for your deployment? What failure modes are acceptable? What shifts are likely—seasonal, geographic, instrumental, demographic? Where do you need robustness now versus in version two?


This transforms generalization from an academic abstraction into a practical, testable specification.


Define your primary generalization target explicitly. Identify secondary axes that matter but aren't phase-one blockers. Build validation sets that test each dimension separately. Only then decide whether you need domain-specific foundation models, targeted augmentation, hardware calibration, or simply more representative data.
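
One lightweight way to make that specification concrete is to write it down as a small, testable object the whole team can see. The sketch below is illustrative only; the field names and the tolerated metric drop are assumptions for your team to replace.

```python
from dataclasses import dataclass, field

# Sketch of a "generalization spec" agreed before training. Field names and
# the tolerated metric drop are illustrative, not a standard.

@dataclass
class GeneralizationSpec:
    primary_axis: str                          # e.g. "scanner" or "sensor"
    secondary_axes: list[str] = field(default_factory=list)
    max_allowed_drop: float = 0.05             # tolerated drop vs. in-distribution score

    def check(self, in_dist_score: float, shifted_scores: dict[str, float]) -> dict[str, bool]:
        """Pass/fail per axis, given one held-out score per shifted validation set."""
        return {
            axis: (in_dist_score - score) <= self.max_allowed_drop
            for axis, score in shifted_scores.items()
        }

spec = GeneralizationSpec(primary_axis="scanner", secondary_axes=["staining", "site"])
print(spec.check(0.91, {"scanner": 0.88, "staining": 0.84}))
# {'scanner': True, 'staining': False}  -> staining shift exceeds the agreed budget
```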


The leaders who succeed don't ask if their vision model generalizes. They ask what definition of generalization their deployment requires—and they validate against that definition from day one.


- Heather

Vision AI that bridges research and reality — delivering where it matters


Insights: Multi-Sensor EO


Seeing More Clearly – Why Multi-Sensor Models Matter in EO


More and more EO foundation models are moving beyond optical. They're being designed—from the start—to ingest and integrate data from multiple sensors. Why? Because Earth’s complexity demands more than a single lens.

The Earth looks very different depending on the sensor—and models need to see more than one perspective:
🛰️ SAR sees structure, moisture, and movement—even through clouds and darkness
🔥 Thermal reveals surface temperature—key for fires, drought, and heatwaves
🌈 Optical captures visible and near-infrared reflectance—essential for vegetation and land cover

Each one reveals something different. No single sensor can capture it all.

Multi-sensor fusion strengthens generalization and closes blind spots:
- SAR + optical: disaster monitoring in cloudy wildfire zones
- Thermal + optical: identifying crop stress invisible to the eye
- SAR + elevation: flood mapping with topographic flow context

These combinations aren’t just technical upgrades—they drive better decisions: faster disaster response, more accurate food security forecasting, and improved climate resilience.

🧠 But multi-sensor models need to be designed differently. It’s not just about stacking inputs.

Rather than standard masked autoencoding, these models often use cross-modal reconstruction—learning to recover one modality from another. It’s a similar principle, but applied across sensor types. That shift impacts model architecture, training objectives, and evaluation strategy.
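
In code, the objective is simple to state. Here is a minimal PyTorch sketch of cross-modal reconstruction, predicting an optical patch from a co-registered SAR patch; the tiny convolutional networks and channel counts are placeholders, not the architecture of any particular EO foundation model.

```python
import torch
import torch.nn as nn

# Sketch of cross-modal reconstruction: encode SAR, decode the co-registered
# optical patch. Channel counts and the small conv nets are placeholders.

class CrossModalAE(nn.Module):
    def __init__(self, sar_channels: int = 2, optical_channels: int = 4, dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(sar_channels, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Conv2d(dim, optical_channels, 3, padding=1)

    def forward(self, sar: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(sar))

model = CrossModalAE()
sar = torch.randn(8, 2, 64, 64)      # batch of SAR patches
optical = torch.randn(8, 4, 64, 64)  # co-registered optical targets
loss = nn.functional.mse_loss(model(sar), optical)  # cross-modal objective
loss.backward()
```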

📌 Multi-sensor isn't a feature—it's a foundation. If we want models that generalize across conditions, regions, and disasters, they need to be built with multiple sensors in mind from the beginning.

👇 What sensors do you think should be included in the next generation of EO foundation models?
What combinations have made your models more resilient—or more brittle?



Talk: Domain-Specific AI


We’re doing AI all wrong. Here’s how to get it right


Sasha Luccioni’s recent TED talk makes a sharp point: we don’t need stadium lights to find our keys. In AI terms — we’re burning enormous energy for marginal gains, mistaking scale for progress.

I see the same pattern in digital pathology and Earth observation. Models and datasets keep getting larger — gigapixel slides, multi-sensor constellations, multi-modal fusion pipelines — but not always better aligned with the problems they’re meant to solve. Bigger models often demand centralized compute, lock teams into vendor clouds, and obscure real-world robustness issues like site, scanner, or sensor shift.

The sweet spot for many vision applications lies in small, domain-specific foundation models:
- Right-sized for the task. Compact models trained on domain-relevant data can outperform giants on robustness, latency, and deployment feasibility.
- Efficient by design. Quality of pretraining and curation > raw data volume.
- Deployable where it matters. Running on-prem or near-edge (scanners, microscopes, phones, drones) reduces cost, carbon, and dependency.
- Transparent and tunable. Teams can adapt, audit, and maintain them without supercomputer budgets.

Imagine if every model came with an energy or carbon label beside accuracy — would you still reach for the largest one?

Research: Fine-Tuning Foundation Models


Adapting Vision Foundation Models for Plant Phenotyping


Foundation models pretrained on internet-scale data have shown remarkable transfer capabilities across many domains. But agricultural images present unique challenges—and full fine-tuning is prohibitively expensive. What's the most efficient path forward?

Feng Chen et al. from the University of Edinburgh investigated this question, examining how general-purpose vision foundation models adapt to specialized plant phenotyping tasks.

𝐓𝐡𝐞 𝐪𝐮𝐞𝐬𝐭𝐢𝐨𝐧:
When foundation models are pretrained on ImageNet or web-scale data, can they efficiently adapt to the specialized domain of plant phenotyping without full fine-tuning? And how do different adaptation strategies compare?

𝐓𝐡𝐞 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡:
The authors tested three foundation models (MAE, DINO, DINOv2) on three essential plant phenotyping tasks: leaf counting, instance segmentation, and disease classification. Critical constraint: they kept the pretrained backbones frozen. They evaluated two parameter-efficient fine-tuning methods: adapter tuning (using LoRA) and decoder tuning.
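
As a rough illustration of the LoRA idea (not the authors' exact setup), here is a single frozen linear layer with two small trainable low-rank matrices; the rank and scaling are illustrative defaults.

```python
import torch
import torch.nn as nn

# Sketch of low-rank adaptation (LoRA) around one frozen linear layer: the
# pretrained weight stays fixed; only the small A and B matrices are trained.

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Wrapping, say, the attention projections of a frozen ViT backbone this way
# leaves only a small fraction of parameters trainable.
```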

𝐊𝐞𝐲 𝐟𝐢𝐧𝐝𝐢𝐧𝐠𝐬:
- Foundation models can be efficiently adapted to multiple plant phenotyping tasks with minimal fine-tuning
- Performance approaches that of state-of-the-art models designed and trained specifically for each task
- However, efficiently fine-tuned foundation models perform slightly worse than specialized SoTA models in some scenarios
- The gap highlights an important research direction: understanding when general-domain pretraining suffices versus when domain-specific approaches are needed

𝐖𝐡𝐲 𝐭𝐡𝐢𝐬 𝐦𝐚𝐭𝐭𝐞𝐫𝐞𝐝:
This work represents a systematic investigation into parameter-efficient adaptation of vision foundation models for agriculture. It demonstrates feasibility while honestly acknowledging performance gaps—setting the stage for subsequent work on crop-specific foundation models that have since shown the value of domain-specific pretraining.
The tension between general-purpose and domain-specific foundation models remains a central question in agricultural AI.