TeleWindow

Experimenting with 3D reconstruction and relighting for point cloud telepresence.

Tags

telepresence, graphics, point cloud

Roles

Graphics Developer.

Collaborators

Michael Naimark, Cameron Ballard, Barak Chamo, David Santiano, Ivy Huang, Bruce Luo, Grace Huang, Mateo Juvera Molina, Ada Zhao, Nathalia Lin.

2020

telepresence research

TeleWindow
How might we bridge the gap between technical possibility and human connection in real-time telepresence?

Project Overview

TeleWindow is an ongoing research initiative led by Michael Naimark that reimagines video calls through the lens of spatial computing. The system creates a "window" between spaces, enabling natural eye contact and spatial presence through real-time 3D reconstruction.

Research Focus
My six-month research focused on two critical challenges: improving visual fidelity through surface reconstruction and developing more naturalistic lighting systems. This work sits at the intersection of technical innovation and human-centered design, asking how we can make telepresence feel more like presence.

Current System

Multi-camera coverage reducing occlusion issues
The system employs four Intel RealSense cameras for volumetric capture, creating real-time point cloud representations that enable natural spatial presence and eye contact.
📷

Capture

Four-camera array for complete volumetric coverage

🔄

Processing

Real-time point cloud fusion and perspective correction

๐Ÿ‘๏ธ

Display

Eye-tracked stereoscopic output via SeeFront

Real environment capture (monoscopic view)
Real-time movement in virtual space
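
As a concrete illustration of the Capture and Processing stages above, the sketch below shows how a single camera's depth frame becomes world-space points ready for fusion. It assumes Intel's pyrealsense2 Python bindings; the stream settings and the T_world_cam extrinsic are illustrative placeholders, not the project's actual configuration.

    # Minimal capture-to-world sketch for one RealSense camera, assuming
    # the pyrealsense2 SDK. Stream settings and the extrinsic matrix are
    # illustrative placeholders, not the project's actual values.
    import numpy as np
    import pyrealsense2 as rs

    pipe = rs.pipeline()
    cfg = rs.config()
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    pipe.start(cfg)

    # Hypothetical extrinsic mapping this camera into the shared world
    # frame (obtained offline from the ICP registration described below).
    T_world_cam = np.eye(4)

    try:
        frames = pipe.wait_for_frames()
        depth = frames.get_depth_frame()

        # Deproject the depth image into camera-space points.
        points = rs.pointcloud().calculate(depth)
        xyz = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)

        # Transform into the shared frame so all four cameras can fuse.
        xyz_h = np.c_[xyz, np.ones(len(xyz), dtype=np.float32)]
        fused = (T_world_cam @ xyz_h.T).T[:, :3]
    finally:
        pipe.stop()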

Research Challenges

Visual Coherence

Even with static transformation matrices calculated through ICP registration, perfect point cloud alignment remains elusive. The challenge intensifies with closer objects, where perspective distortion and sensor noise become more pronounced.

Point cloud alignment artifacts in close-range capture
Where adjacent cameras' fields of view overlap too heavily, the point clouds they generate show significant misalignment, producing z-fighting artifacts.
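
The static matrices referenced above come from a one-time registration pass. As a hedged sketch of that step, here is pairwise alignment with Open3D's ICP; the library choice, voxel size, distance threshold, and the cam0.ply/cam1.ply capture files are all assumptions for illustration.

    # One-time pairwise alignment sketch using Open3D's ICP. The library,
    # voxel size, distance threshold, and capture files are assumptions.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("cam1.ply")   # hypothetical captures
    target = o3d.io.read_point_cloud("cam0.ply")

    # Downsample to stabilize correspondences before registering.
    source_down = source.voxel_down_sample(voxel_size=0.01)
    target_down = target.voxel_down_sample(voxel_size=0.01)

    result = o3d.pipelines.registration.registration_icp(
        source_down, target_down,
        max_correspondence_distance=0.02,          # 2 cm (assumed)
        init=np.identity(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    print(result.transformation)   # the static 4x4 matrix reused at runtime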

Volumetric Representation

Point clouds, while computationally efficient, lack true volumetric properties. Without associated geometry, they cannot fully participate in virtual environments through lighting interactions or physical behaviors.

Research Approaches

1. Surface Reconstruction

While state-of-the-art systems like Microsoft's Fusion4D achieve high-quality reconstruction, they rely on proprietary algorithms and different camera configurations. Our setup, with its closely positioned cameras, required alternative solutions.

Fusion4D Pipeline (Microsoft Research)

We explored Marching Cubes as a GPU-accelerated alternative, which previous research suggested could achieve real-time performance with our data scale.

Our GPU-accelerated Marching Cubes pipeline
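
Our kernels ran on the GPU, but the extraction step itself is easiest to show on the CPU. The sketch below uses skimage's marching_cubes as a stand-in, assuming a signed-distance volume fused from the depth cameras; the volume file and its resolution are placeholders.

    # CPU stand-in for the mesh-extraction stage: Marching Cubes over a
    # signed-distance volume. Our pipeline ran this step on the GPU;
    # skimage here only illustrates the algorithm's inputs and outputs.
    import numpy as np
    from skimage import measure

    # Hypothetical 256^3 TSDF fused from the four depth cameras, with the
    # captured surface at the zero level set.
    tsdf = np.load("tsdf_256.npy")                 # placeholder file name

    verts, faces, normals, _ = measure.marching_cubes(tsdf, level=0.0)
    # verts: (N, 3) vertex positions in voxel coordinates
    # faces: (M, 3) indices into verts forming the triangle mesh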

2. Normal-Based Relighting

💡

Challenge

Adding realistic lighting to point clouds without full reconstruction

🔍

Approach

Generating normal vectors from depth maps for lighting calculations

⚡

Implementation

Custom shaders for real-time normal calculation and lighting

Normal vectors enabling lighting calculations
Normal map generation from depth data
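
The Approach card above reduces to a few lines of math: back-project the depth map to camera-space points, take image-space tangents, and cross them to get a per-pixel normal. This numpy version mirrors what a per-fragment shader computes; the camera intrinsics are illustrative placeholders.

    # Per-pixel normals from a depth map: back-project to camera space,
    # take image-space tangents, and cross them. Mirrors what a
    # per-fragment shader computes; intrinsics are placeholders.
    import numpy as np

    def normals_from_depth(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        # Back-project each pixel to a 3D point in camera space.
        pts = np.dstack([(u - cx) * depth / fx,
                         (v - cy) * depth / fy,
                         depth])
        # Tangents along the image axes, crossed to get the normal.
        du = np.gradient(pts, axis=1)
        dv = np.gradient(pts, axis=0)
        n = np.cross(du, dv)
        n /= np.linalg.norm(n, axis=2, keepdims=True) + 1e-8
        return n   # (h, w, 3) unit normals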

Implementation Results

Surface Reconstruction

Our implementation, while limited by memory constraints to 256³ voxel resolution, revealed important insights. The reconstructed meshes offered improved rendering performance and better integration with virtual environments, despite lower visual fidelity.

Comparison of reconstruction results
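
To see why resolution was the bottleneck, note that a dense voxel grid grows cubically. Assuming one 32-bit distance plus one 32-bit weight per voxel, a typical TSDF layout (not necessarily our exact one):

    # Why voxel resolution hits memory so fast: dense grids grow
    # cubically. Assumes one float32 distance + one float32 weight per
    # voxel, a typical TSDF layout (not necessarily our exact one).
    bytes_per_voxel = 4 + 4
    for n in (256, 512):
        print(n, n**3 * bytes_per_voxel / 2**20, "MiB")
    # 256 -> 128 MiB, 512 -> 1024 MiB: doubling resolution costs 8x.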

Real-time Relighting

Normal map generation from depth data
The normal-based lighting system successfully added volumetric characteristics to point clouds, though discrepancies between the cameras introduced visible noise into the normal calculations.
Directional lighting on captured objects
Human subject under dynamic lighting
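
With per-pixel normals available, the relighting itself is standard Lambertian shading. The sketch below applies a single directional light on the CPU for clarity; the system performs the equivalent math in a custom shader, and the light direction and ambient term here are illustrative choices.

    # Standard Lambertian N-dot-L relighting once normals exist, done on
    # the CPU for clarity; the system runs the equivalent math in a
    # custom shader. Light direction and ambient term are illustrative.
    import numpy as np

    def relight(colors, normals, light_dir=(0.3, -0.5, -0.8), ambient=0.2):
        l = -np.asarray(light_dir, dtype=np.float32)
        l /= np.linalg.norm(l)
        # Diffuse term, clamped so back-facing points fall to ambient.
        ndotl = np.clip(normals @ l, 0.0, 1.0)
        return colors * (ambient + (1.0 - ambient) * ndotl[..., None])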

Research Insights

🔍

Technical Boundaries

Simple solutions for visual artifacts remain elusive

💡

Alternative Approaches

Point cloud post-processing offers promising feature possibilities

🎯

Future Direction

Focus on novel user experiences over perfect visual fidelity

Multi-camera capture demonstrating system capabilities
This research revealed that while perfect visual reconstruction remains challenging, the combination of real-time 3D capture and stereoscopic display creates compelling telepresence experiences. The future of this work lies in exploring novel interactions rather than pursuing perfect visual fidelity.

Future Directions

I'm currently exploring these insights through a new project using Looking Glass Factory's holographic display technology, continuing to investigate the intersection of technical capability and human experience in telepresence systems.

Acknowledgments

Special thanks to Michael Naimark and David Santiano for their guidance and mentorship throughout this research journey.
