Emma Alexander


Google Scholar


I'm interested in low-level, physics-based, bio-inspired artificial vision.


ICCP 2021 paper posted here

Recent talk: UCLA/Caltech Grundfest Memorial Lecture on connecting our previous work on differential defocus to microscopy – recording below.

I'm co-organizing the virtual CVPR Computational Cameras and Displays Workshop on June 20, 2021, which will shortly put out a call for spotlight presentations. If you've done cool work this year on computational sensing or displays, please consider highlighting it with a short video at this venue! And be sure to catch our exciting speaker lineup on the livestream.


Depth from Differential Defocus

Defocus reveals object location in cameras and microscopes. Inspired by the unique anatomy and behavior of the jumping spider, we explore differential defocus changes that enable efficient computation of depth, velocity, and phase. We develop a family of depth sensors for a variety of imaging settings, using standard cameras, deformable lenses, metalenses, and microscopes.
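For background, the standard thin-lens relations below show why defocus encodes depth; the symbols (focal length f, object depth Z, image distance z', sensor distance s, aperture diameter A) are my own notation for the textbook model, not the exact formulation used in these papers:

```latex
% Thin-lens focus: an object at depth Z comes into focus at image distance z'
\frac{1}{f} = \frac{1}{Z} + \frac{1}{z'}
% Blur-circle radius on a sensor at distance s: defocus vanishes at the
% in-focus depth and grows away from it, so changes in blur carry depth
\sigma(Z) = \frac{A s}{2} \left| \frac{1}{f} - \frac{1}{Z} - \frac{1}{s} \right|
```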

Best Student Paper ECCV 2016, Best Demo ICCP 2018

US Patent Application No. 62/928,929

ICCV Workshop 2015, ECCV 2016, ICCV 2017, IJCV 2017, ICCP Demos 2018, Dissertation 2019, PNAS 2019, ICCP 2021

[focal flow project page] [focal track project page] [metalens project page] [TIE project page] [dissertation] [media coverage 1 2 3 4 5 6 7]

Motion Estimation

Optic flow provides key motion cues, but the underlying brightness constancy constraint is often violated in real-world settings. We explore two kinds of mitigation strategies for brightness constancy violations: first, we show that explicit modeling of defocus-based violations can reveal depth and motion simultaneously; next, we explore spatial sampling techniques for robust self-motion estimation in generic and natural scenes, explaining biases found in larval zebrafish brains and behavior.
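As context for what "brightness constancy" buys you when it holds, here is a minimal least-squares flow estimate in the Lucas-Kanade style. This is my own illustrative sketch of the classical constraint Ix·u + Iy·v + It = 0, not the modified models from these papers; the synthetic gradients are chosen so the constraint holds exactly.

```python
import numpy as np

def flow_from_gradients(Ix, Iy, It):
    """Least-squares flow (u, v) for a patch of spatial/temporal gradients.

    Each pixel contributes one brightness-constancy constraint
    Ix*u + Iy*v + It = 0, and the patch is solved jointly.
    """
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # one row per pixel
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v

# Synthetic check: image I(x, y) = x^2 + y^2 translating by (true_u, true_v).
# Varying gradients avoid the aperture problem (a rank-1 system).
x, y = np.meshgrid(np.arange(5.0), np.arange(5.0))
Ix, Iy = 2 * x, 2 * y
true_u, true_v = 0.5, -0.25
It = -(Ix * true_u + Iy * true_v)  # exact brightness-constancy residual
u, v = flow_from_gradients(Ix, Iy, It)
```

When the constancy assumption is violated (defocus changes, lighting changes, natural-scene statistics), the residual It picks up extra terms and this estimate biases, which is the failure mode the work above addresses.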

ECCV 2016, ICCV 2017, VSS 2021

[focal flow project page] [zebrafish project page]

Shape & Color

The human visual system explains pixel-to-pixel changes with a combination of material, geometric, and lighting-based explanations. We show that aligning color changes with shading cues disrupts shape perception, particularly around critical contours.

VSS 2013, Interface Focus 2018

[project page] [paper]

Teaching & Outreach

I am passionate about sharing the beauty of computational imaging and the math and science that make it work. As a recipient of the Harvard University Certificate of Distinction in Teaching, I have served as head TA for Harvard's undergraduate-level Introduction to the Theory of Computation (CS121) and Mathematical Methods in the Sciences (AM21b, a combined introduction to differential equations and linear algebra), and as a course TA for the graduate-level Computer Vision course (CS283). As an undergraduate, I spent several years as a course tutor for Yale's two-semester Fundamentals of Physics series, covering mechanics and electromagnetism (PHYS200, PHYS201).

In the wider community, I have taught introductory computer skills classes through Tech Goes Home and developed and taught content for elementary schoolers through Bay Area Scientists Inspiring Students.

I have mentored undergraduate students through their personal experiences of diversity and access through Harvard's Women in STEM and Women in Computer Science organizations, as well as through the Berkeley Artificial Intelligence Research undergraduate mentoring program. I have spoken at the Women Engineers Code conference and taught for ProjectCSGirls.

As part of the Waller lab, I led undergraduate and rotation student projects, resulting in an undergraduate-coauthored ICCP paper.

Differential Defocus in Cameras & Microscopes - UCLA/Caltech Grundfest Memorial Lecture Series 2021:

Erratum: the prisms are drawn upside down, so the beams are shown deflected in the wrong direction. Sorry!