It uses the invisible WiFi signals already bouncing around your home to detect people and their movements through walls, much the way a camera would, just without any lens.

A GitHub project called RuView went viral last month, racking up around 45,000 stars and flooding YouTube Shorts and, most especially, LinkedIn posts, where it was framed as "see through walls with WiFi and AI." That sounds kind of vague, right? So what is it actually about? To understand in full, let's dive in!
RuView Overview
The Science Behind It

Normally, your WiFi router is constantly sending and receiving radio signals, and those signals pass through walls. When a person is in a room, or even behind a wall, their body reflects, absorbs, and slightly disturbs those signals in measurable ways. The idea behind this AI is precisely that: if we capture those disturbances accurately enough, we can run them through an AI model and reconstruct where a person is, how they're moving, and even their breathing or heart rate, without any camera ever being involved. That's what I mean by "it sees through walls with WiFi and AI." The AI reads the ripples a human body makes in the radio environment and translates them back into a picture of what's happening, solid walls included.
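The simplest version of "reading the ripples" is just watching how much the channel measurements fluctuate over time: a still room yields a stable channel, a moving body makes it jitter. Here's a minimal sketch of that idea in Python, using synthetic data; the frame layout, the 64-subcarrier count, and the `presence_score` function are all my own hypothetical choices, not anything from the RuView codebase.

```python
import numpy as np

def presence_score(csi_frames: np.ndarray, window: int = 50) -> np.ndarray:
    """Rolling variance of subcarrier amplitudes: a crude motion indicator.

    csi_frames: complex array of shape (n_packets, n_subcarriers),
    one row per received WiFi packet (hypothetical data layout).
    """
    amp = np.abs(csi_frames)  # amplitude per subcarrier
    # Variance over a sliding window of packets, averaged across subcarriers.
    return np.array([
        amp[i - window:i].var(axis=0).mean()
        for i in range(window, amp.shape[0] + 1)
    ])

# Synthetic demo: a still room vs. a person perturbing the channel.
rng = np.random.default_rng(0)
still = rng.normal(1.0, 0.01, (200, 64)) * np.exp(1j * rng.normal(0, 0.01, (200, 64)))
moving = rng.normal(1.0, 0.2, (200, 64)) * np.exp(1j * rng.normal(0, 0.2, (200, 64)))
print(presence_score(still).mean() < presence_score(moving).mean())  # True
```

That gets you a binary "something is moving" signal. The leap from there to reconstructing *pose* is enormous, which matters for the discussion below.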
How do you set this up?
The claim: plug in a few ESP32-S3 nodes (low-cost microcontroller boards whose WiFi radios can report per-packet channel measurements), point them at a server, and your ambient WiFi signals become a real-time human pose estimation system. No cameras, no wearables, and no cloud of any sort whatsoever. Radio waves alone do the work.
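The "point them at a server" part is architecturally simple: each node streams its channel measurements over the network, and the server decodes them into arrays. Here's a loopback sketch of such a pipeline in Python. The wire format (a uint32 node id plus 64 little-endian float32 amplitudes), the port handling, and both function names are assumptions for illustration, not RuView's actual protocol.

```python
import socket
import struct
import numpy as np

N_SUBCARRIERS = 64  # hypothetical: depends on the CSI configuration

def pack_csi_packet(node_id: int, amps: np.ndarray) -> bytes:
    """Encode a hypothetical wire format: uint32 node id + float32 amplitudes."""
    return struct.pack("<I", node_id) + amps.astype("<f4").tobytes()

def parse_csi_packet(payload: bytes) -> tuple[int, np.ndarray]:
    """Decode the same hypothetical format back into (node_id, amplitudes)."""
    node_id, = struct.unpack_from("<I", payload, 0)
    amps = np.frombuffer(payload, dtype="<f4", count=N_SUBCARRIERS, offset=4)
    return node_id, amps

# Loopback demo standing in for an ESP32 node reporting to the server.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))  # let the OS pick a free port
node = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
node.sendto(pack_csi_packet(7, np.linspace(0.0, 1.0, N_SUBCARRIERS)),
            server.getsockname())
payload, _ = server.recvfrom(4096)
node_id, amps = parse_csi_packet(payload)
print(node_id, amps.shape)  # 7 (64,)
```

Getting packets to a server really is this easy; it's everything downstream of the parser where the hard questions live.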
Does this work, though?
The results are mixed, and many people are outright suspicious. A technical audit published on GitHub recently tore into the early codebase and found CSI parsers returning random or hardcoded data. The auditors couldn't find any trained model weights or datasets, accuracy metrics like "94.2%" appear to have no real backing, and let's not even get started on the Docker images that didn't exist. The demo observatory, the holographic Three.js visualization showing keypoints and vital-sign readouts, appears to be simulated rather than fed by live hardware. When people called this out in the repository's issues, those issues were soon closed and deleted.
RuView: Conclusion
It's not that basic presence detection cannot work. Walk into any engineering college, especially an electronics faculty, and you might well find projects doing rough motion sensing over a proper ESP32-S3 mesh, and working. But that isn't the issue; it's a bit more complex. Real-time 17-keypoint DensePose-style reconstruction through walls, on commodity hardware, reliably, for multiple people simultaneously, is the part I cannot get behind. The gap between the README and reality is large enough to make the whole thing dubious. So it's either a research prototype that got marketed several steps ahead of where it actually is, or a wrapper around known techniques dressed up with AI-generated boilerplate. There's a chance it's both; I wouldn't put it past them.
Article Last updated: April 1, 2026