Signal Capture
Get real RF data flowing. SDR spectrum monitoring, WiFi metrics, windowed sampling, and a normalization pipeline for future sensors.
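A minimal sketch of what the windowed sampling and normalization step could look like. Every name here (SensorReading, normalize_window, ingest, the 256-sample window) is a hypothetical assumption for illustration, not a committed API:

```python
# Hypothetical sketch: sliding-window capture plus z-score normalization,
# so heterogeneous sensors (SDR power, WiFi RSSI, future hardware) land
# on one common, unitless scale.
from collections import deque
from dataclasses import dataclass
from statistics import mean, pstdev
import time

@dataclass
class SensorReading:
    """Normalized reading any sensor can emit into the shared pipeline."""
    source: str        # e.g. "sdr" or "wifi"
    timestamp: float   # unix seconds
    value: float       # unitless after normalization

def normalize_window(raw: list[float]) -> list[float]:
    """Z-score one window of raw power samples."""
    mu = mean(raw)
    sigma = pstdev(raw) or 1.0   # guard against flat (zero-variance) windows
    return [(x - mu) / sigma for x in raw]

window: deque[float] = deque(maxlen=256)   # sliding capture window

def ingest(sample: float, source: str = "wifi") -> list[SensorReading]:
    """Append one raw sample; once the window fills, emit normalized readings."""
    window.append(sample)
    if len(window) < window.maxlen:
        return []
    now = time.time()
    return [SensorReading(source, now, v) for v in normalize_window(list(window))]
```

Because everything downstream sees only SensorReading, a future sensor just needs an adapter that feeds this same window.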

Robots.fm is building passive RF sensing infrastructure that reveals structure inside electromagnetic fields already surrounding every space. Signal ingestion, disturbance modeling, and spatial visualization — from desktop prototype to full environmental awareness system.
Interactive preview of the monitoring interface. Watch sensing modes cycle through baseline scanning, active detection, anomaly response, and counter-sensing.
If ambient RF can be used to infer bodies, movement, posture, or presence through walls, then counter-sensing and obfuscation become a massive defensive research track — not a side quest. The sensing race and the counter-sensing race will grow together.
WiFi disturbance sensing first, RF spectrum awareness second. Start with what's already everywhere before building exotic sensor rigs.
Raw signal ingestion, field modeling, and visualization stay separated from day one. The renderer never touches raw RF data directly.
Not merely detecting devices — revealing structure inside an invisible field that is already everywhere. Bigger, more flexible, and more interesting.
The architecture assumes from the start that 2D visualization is a temporary projection of a spatial model. The pipeline is built around field data, not UI-specific metrics. Capture, normalize, model, render — each layer fails separately and none depend on the others' internals.
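One way that layer separation could look in code. All types and function bodies below are illustrative assumptions, not the actual implementation; the point is only that each stage consumes the previous stage's output type, so the renderer can never reach back into raw RF data:

```python
# Sketch of the capture -> normalize -> model -> render boundaries.
# Class names and logic are hypothetical.
from dataclasses import dataclass

@dataclass
class RawSamples:        # capture layer output: raw power values
    values: list[float]

@dataclass
class NormalizedFrame:   # normalization layer output: unitless, sensor-agnostic
    values: list[float]

@dataclass
class FieldModel:        # modeling layer output: per-cell disturbance estimates
    cells: dict[tuple[int, int, int], float]

def normalize(raw: RawSamples) -> NormalizedFrame:
    peak = max(map(abs, raw.values)) or 1.0   # avoid dividing by zero
    return NormalizedFrame([v / peak for v in raw.values])

def model(frame: NormalizedFrame) -> FieldModel:
    # toy model: collapse the frame into one cell's mean disturbance
    return FieldModel({(0, 0, 0): sum(frame.values) / len(frame.values)})

def render(field_model: FieldModel) -> str:
    # the renderer only ever sees FieldModel, never RawSamples
    return " ".join(f"{cell}:{v:.2f}" for cell, v in field_model.cells.items())
```

Each stage can be tested, replaced, or broken in isolation, which is the "each layer fails separately" property stated above.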

Even if the first visual is a 2D projection, the underlying structure thinks in spatial cells, temporal change, and projected disturbance volume. The future version gets smarter; the foundation never needs a transplant.
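A hedged sketch of what a voxel-ready schema along these lines might be. Cell, Field3D, and the smoothing constant are invented for illustration; the idea is that today's 2D view is literally a projection over this structure:

```python
# Hypothetical voxel-ready schema: spatial cells, temporal smoothing,
# and a 2D view derived by projection rather than stored separately.
from dataclasses import dataclass, field

@dataclass
class Cell:
    disturbance: float = 0.0   # smoothed disturbance level for this voxel
    def update(self, observation: float, alpha: float = 0.2) -> None:
        # exponential smoothing: the "temporal change" part of the model
        self.disturbance += alpha * (observation - self.disturbance)

@dataclass
class Field3D:
    cells: dict[tuple[int, int, int], Cell] = field(default_factory=dict)

    def observe(self, xyz: tuple[int, int, int], value: float) -> None:
        self.cells.setdefault(xyz, Cell()).update(value)

    def project_2d(self) -> dict[tuple[int, int], float]:
        """Collapse the z axis: the 2D render is a projection, not the model."""
        flat: dict[tuple[int, int], float] = {}
        for (x, y, z), cell in self.cells.items():
            flat[(x, y)] = max(flat.get((x, y), 0.0), cell.disturbance)
        return flat
```

Because project_2d is the only thing a flat renderer consumes, swapping it for a volumetric view later touches nothing upstream. That is the transplant-free foundation.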


Each phase produces something real, testable, and motivating — while keeping the architecture aligned with the eventual spatial sensing vision.
Get real RF data flowing. SDR spectrum monitoring, WiFi metrics, windowed sampling, and a normalization pipeline for future sensors.
Learn what normal chaos looks like. Noise floor estimation, variance tracking, anomaly thresholds, and persistence scoring.
Translate changing conditions into a spatial field: voxel-ready schema, movement trails, confidence regions, temporal smoothing.
Make the invisible feel alive. 2D interference fields now, volumetric 3D disturbance volumes next. Replay and debug views built in.
Test whether the system can distinguish movement classes. Human vs pet vs aerial vs unknown. Confidence scoring, false positive logging.
The big one: learn whether visibility through RF can be weakened, distorted, hidden, or confused. Defensive invisibility research.
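The baseline phase above can be sketched as one possible approach: a running noise-floor estimate, variance tracking, a z-score anomaly threshold, and a persistence counter that separates sustained disturbance from one-window blips. The class name and all constants are assumptions, not the project's actual code:

```python
# Hypothetical baseline tracker: learns "normal chaos" from quiet periods
# only, flags windows that deviate beyond a z-score threshold, and scores
# how long an anomaly persists.
import math

class BaselineTracker:
    def __init__(self, alpha: float = 0.05, z_threshold: float = 3.0):
        self.mean = 0.0        # running noise-floor estimate
        self.var = 1.0         # running variance estimate
        self.alpha = alpha     # learning rate for the running statistics
        self.z_threshold = z_threshold
        self.persistence = 0   # consecutive anomalous windows

    def update(self, power: float) -> bool:
        """Feed one windowed power value; return True if it looks anomalous."""
        z = (power - self.mean) / math.sqrt(self.var)
        anomalous = abs(z) > self.z_threshold
        if anomalous:
            self.persistence += 1   # persistence scoring: sustained beats blip
        else:
            self.persistence = 0
            # only learn the baseline from quiet periods, so an intruder
            # standing still does not get absorbed into "normal"
            delta = power - self.mean
            self.mean += self.alpha * delta
            self.var += self.alpha * (delta * delta - self.var)
        return anomalous
```

Feeding a long quiet stretch tightens the variance estimate, after which even a modest disturbance clears the threshold; the persistence counter is what downstream phases would consume as evidence.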
Some research topics are immediate reading tracks; others are rabbit holes. The point is to separate what powers the prototype now from what needs deeper study.
Not the polished product. Not the grand theory. Build the system that can ingest signals, model disturbance, and make invisible structure legible. That is enough to create momentum, intuition, and a real base for the much crazier versions later.