Spring 2018 Vision in Animals, Humans and Machines — Final Projects

Vision in Animals, Humans and Machines is a seminar / hands-on course in which we engage in a sort of comparative neuroscience of how organic and inorganic systems ‘see’.

Some things are hard for animals, some things are easy. The same can be said for machines. The exhaustively deployed aphorism “Easy things are hard for computers, hard things are easy for computers” reminds us that the way ‘computer vision’ works probably doesn’t have all that much in common with how living organisms do1.

One of the best ways to observe this is to probe situations where each type of system fails to work. In this class, we learned about biological mechanisms of vision as well as computational analogs. We tried to ‘break’ computer vision systems in systematic ways and analyzed the results.

Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester, and we refined and implemented them throughout the rest of the term. They then presented the final work and demonstrated what they had accomplished (and where they had fallen short).

The projects had to use computational methods to implement some function or malfunction of the visual system. There was some overlap between this class and the Computational Methods class, so there was a lot of Mathematica used, along with some Lego MindStorms2.

Here are this year’s projects. Please enjoy them —

Synthetic Beings, Evolutionary Vision

A genetic-algorithm-driven method of generating and evolving synthetic beings with different perceptual abilities in an ever-changing environment.

Modeling Color Vision in Animals

A cross-species look at animals with as few as 1 and as many as 11 color receptors. Using multispectral images and banks of cone response functions and illuminants, can we predict an organism’s ability to ‘see’ certain features?
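The computation at the heart of this project is a standard one: the response of each receptor class is the reflected light integrated against that receptor's sensitivity curve. A minimal Python sketch, using made-up four-sample spectra in place of real multispectral data:

```python
def cone_catches(reflectance, illuminant, cone_sensitivities):
    """Integrate reflected light against each receptor's sensitivity curve.

    reflectance, illuminant: spectra sampled at the same wavelengths.
    cone_sensitivities: one curve per receptor class -- 1 for a monochromat,
    up to 11 or more for some invertebrates.
    """
    # Light reaching the eye: surface reflectance times the illuminant.
    radiance = [r * i for r, i in zip(reflectance, illuminant)]
    # One weighted sum per receptor class.
    return [sum(s * l for s, l in zip(sens, radiance))
            for sens in cone_sensitivities]

# Toy example (all values hypothetical): a reddish surface under flat
# illumination, viewed by a dichromat with two receptor classes.
reflectance = [0.1, 0.2, 0.4, 0.8]    # sampled at, say, 450-750 nm
illuminant  = [1.0, 1.0, 1.0, 1.0]
cones = [
    [0.9, 0.3, 0.0, 0.0],             # short-wavelength receptor
    [0.0, 0.2, 0.8, 0.5],             # long-wavelength receptor
]
print(cone_catches(reflectance, illuminant, cones))
```

Whether two features are distinguishable to the animal then comes down to whether their receptor-catch vectors differ enough, which is where the cross-species comparison gets interesting.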

Robotic Model of Simple Vision

A Lego-robotics model of simple, Euglena-style vision.

Modeling Prosopagnosia3

Can we make a machine-learning-based face recognizer ‘face-blind’?

Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and use machine learning to identify its behavior? (In cooperation with the Computational Methods class.)
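The project's own pipeline isn't described here, but one plausible first step in any such tracker is locating the animal in each frame. A toy sketch, assuming a bright animal on a dark background (the frame, blob, and threshold below are all made up):

```python
def centroid(frame, threshold=0.5):
    """Return the (row, col) centroid of pixels brighter than `threshold`,
    or None if no pixel clears it."""
    pts = [(r, c) for r, row in enumerate(frame)
                  for c, v in enumerate(row) if v > threshold]
    if not pts:
        return None
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

# A 5x5 toy 'frame' with a bright 2x2 blob (the rat) at rows 1-2, cols 2-3.
frame = [[0.0] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 0.9

print(centroid(frame))   # -> (1.5, 2.5)
```

Real footage would of course need background subtraction and some temporal smoothing before a per-frame centroid is trustworthy; the sequence of centroids is then the raw material for the behavior classifier.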

Cast of Characters

Zachariah Arnold, Iman Bays, Sierra Carlen, George Chakalos, Jessica Cheng, Daniela Cossio, Allison Dalton, Seeley Fancher, Sara Fontana, Rachel Greene, Julia Howe, Donna Nguyen, Jeffrey Okoro, Reece Robinson, Anthony Song, Henry Stadler, Megan Volkert, Xueying Wu.

  1. This is, of course, fine. ↩︎
  2. Remind me to tell you the story of visiting Mitch Resnick’s lab at MIT, back while I was working at Pixar, and playing with the OG LegoLogo blocks and wires and things. ↩︎
  3. People’s choice award winner. ↩︎

Flip Phillips