Spring 2018 Vision in Animals, Humans and Machines — Final Projects

Vision in Animals, Humans and Machines is a seminar / hands-on course where we engage in a sort of comparative neuroscience with respect to how organic and inorganic systems ‘see’.

Some things are hard for animals, some things are easy. The same can be said for machines. The exhaustively deployed aphorism, “Easy things are hard for computers, hard things are easy for computers,” reminds us that the way ‘computer vision’ works probably doesn’t have all that much in common with how living organisms see1.

One of the best ways to observe this is to probe situations where each type of system fails to work. In this class, we learned about biological mechanisms of vision as well as computational analogs. We tried to ‘break’ computer vision systems in systematic ways and analyzed the results.

Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then pitched the final work and demonstrated what they had accomplished (and failed to).

The projects had to use computational methods to implement some function or malfunction of the visual system. There was some overlap between this class and the Computational Methods class, so there was a lot of Mathematica used, along with some Lego MindStorms2.

Here are this year’s projects. Please enjoy them —

Synthetic Beings, Evolutionary Vision

A genetic-algorithm-driven method for generating and evolving synthetic beings with different perceptual abilities in an ever-changing environment.

Modeling Color Vision in Animals

A cross-species look at animals with as few as 1 and as many as 11 color receptors. Using multispectral images and banks of cone response functions and illuminants, can we predict an organism’s ability to ‘see’ certain features?
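The core computation such a project needs can be sketched briefly. Below is a minimal Python/NumPy sketch; the image, the illuminant, and the Gaussian cone sensitivity curves are all hypothetical placeholders standing in for real multispectral data and measured response functions:

```python
import numpy as np

# Hypothetical multispectral image: height x width x n_bands,
# sampled at wavelengths 400-700 nm in 10 nm steps.
wavelengths = np.arange(400, 701, 10)          # 31 spectral bands
image = np.random.rand(64, 64, wavelengths.size)

# Hypothetical illuminant spectrum (flat 'white' light here).
illuminant = np.ones_like(wavelengths, dtype=float)

# Hypothetical bank of cone sensitivity curves, one Gaussian per
# receptor class -- a dichromat here, but the same code handles
# 1 to 11 receptor classes by adding rows.
def gaussian(peak, width=40.0):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

cones = np.stack([gaussian(440), gaussian(560)])  # (n_cones, n_bands)

# Cone 'catch' at each pixel: sum radiance x illuminant x sensitivity
# over wavelength, giving a height x width x n_cones response map.
radiance = image * illuminant                   # broadcast over bands
catches = np.tensordot(radiance, cones, axes=([2], [1]))

print(catches.shape)  # one response map per receptor class
```

Swapping in measured sensitivity curves and adding rows to `cones` extends the same computation from a dichromat up to the 11-receptor extreme mentioned above.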

Robotic Model of Simple Vision

Lego robotics, Euglena.

Modeling Prosopagnosia3

Can we make a machine learning based face recognizer ‘face-blind’?

Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and use machine learning to identify its behavior? (In cooperation with the Computational Methods class.)

Cast of Characters

Zachariah Arnold, Iman Bays, Sierra Carlen, George Chakalos, Jessica Cheng, Daniela Cossio, Allison Dalton, Seeley Fancher, Sara Fontana, Rachel Greene, Julia Howe, Donna Nguyen, Jeffrey Okoro, Reece Robinson, Anthony Song, Henry Stadler, Megan Volkert, Xueying Wu.

  1. This is, of course, fine. ↩︎
  2. Remind me to tell you the story of visiting Mitch Resnick’s lab at MIT, back while I was working at Pixar, and playing with the OG LegoLogo blocks and wires and things. ↩︎
  3. People’s choice award winner. ↩︎

Spring 2018 Computational Methods — Final Projects

The goal of Computational Methods in Psychology and Neuroscience is to acquaint students with scientific computing, broadly speaking, but especially as it applies to psychology and neuroscience.

Even so, it attracts students from a pretty wide swath of majors. This year, in addition to psychology and neuroscience, we had majors from business and biology, as well as the political and computer sciences.

Over the years we have used a variety of software in the course including Python, Matlab and Mathematica, as well as purpose-built environments like PsychoPy, FreeSurfer, ImageJ and others.

This year, we focused on Mathematica as it provides a rich set of tools and access to data and datasets without the sometimes painful management of packages and such1.

Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then pitched the final work and demonstrated what they had accomplished (and failed to).

Some of these projects are super ambitious for an introductory class, but the goal was learning and understanding the problem solving involved, not so much the minute implementation and theoretical details. Even if the problem wasn’t ‘solved’ in every case, I feel like each individual / group now has a much better sense of what is possible and what is difficult2. In some cases, I implemented ‘helper’ code that is now part of the FPTools repository, but the ideas and final implementations are their own.

Here are this year’s projects. Please enjoy them —

Giant Asteroids Might Destroy Earth3

A computational simulation of asteroid impact with the planet earth, featuring animations and mortality rates.

Kids and Words

Linguistic analysis of conversations between kids and their parents.

Morality in Political Candidates

In the wake of the Facebook/Cambridge Analytica fiasco, a look at some crowdsourced (MTurk) questionnaire data about the personalities of political candidates. Machine-learning modeled candidate preferences based on interactive input.

Cartoon Face Recognition

The predominant implementation of ‘face finding’ algorithms doesn’t do a very good job with cartoon faces. This machine learning project sets out to rectify this oversight.

Name That Tune

Linguistic analysis from audio clips of songs? A huge project. Phonemes and classifiers and lyrics oh my!

Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and identify its behavior? (In cooperation with the Vision in Animals, Humans and Machines class.)

Primordial Soup

Delicious! Can we simulate the conditions of the creation of life’s building blocks (amino acids) à la the Miller–Urey experiment?

Get Your Axon

Can we teach a classifier to tell the difference between normal and malformed axons?

Cast of characters

Andres Beltre, George Chakalos, Jacob Chen, Jessica Cheng, Daniela Cossio, Allie Dinaburg, Izzy Fischer, Emil Ghitman Gilkes, Helen Gray-Bauer, Aimee Hall, Ryan Hill, Natasha Martinez, Zoe Michas, Annika Morrell, Laura Noejovich, Sarah Wilensky, Ray Yampolsky

  1. This is especially true with young scientists just dipping their toes into scientific computing. Even with some of the great package and environment management software out there, some scientific computing environments can be too much. ↩︎
  2. On the first day of class, I type Sphere[]//Graphics3D into Mathematica and explain that, in 1983, I took 3 G/UG courses at OSU (CS 781–3) to get that to happen on a 320×240 pixel screen in roughly geological time. Then I shake my cane at them and tell them to get off my lawn. ↩︎
  3. People’s choice award winner. ↩︎

Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation

Haptic and visual ‘contours’

It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions; e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.


J. F. Norman, F. Phillips, J. R. Cheeseman, K. E. Thomason, C. Ronning, K. Behari, K. Kleinman, A. B. Calloway, and D. Lamirande, “Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation,” PLoS ONE, vol. 11, no. 2, p. e0149058, Feb. 2016.


Norman, Phillips et al. 2016

Haptic shape discrimination and interhemispheric communication


In three experiments participants haptically discriminated object shape using unimanual (single hand explored two objects) and bimanual exploration (both hands were used, but each hand, left or right, explored a separate object). Such haptic exploration (one versus two hands) requires somatosensory processing in either only one or both cerebral hemispheres; previous studies related to the perception of shape/curvature found superior performance for unimanual exploration, indicating that shape comparison is more effective when only one hemisphere is utilized. The current results, obtained for naturally shaped solid objects (bell peppers, Capsicum annuum) and simple cylindrical surfaces, demonstrate otherwise: bimanual haptic exploration can be as effective as unimanual exploration, showing that there is no necessary reduction in ability when haptic shape comparison requires interhemispheric communication. We found that while successive bimanual exploration produced high shape discriminability, the participants’ bimanual performance deteriorated for simultaneous shape comparisons. This outcome suggests that either interhemispheric interference or the need to attend to multiple objects simultaneously reduces shape discrimination ability. The current results also reveal a significant effect of age: older adults’ shape discrimination abilities are moderately reduced relative to younger adults, regardless of how objects are manipulated (left hand only, right hand only, or bimanual exploration).

C. J. Dowell, J. F. Norman, J. R. Moment, L. M. Shain, H. F. Norman, F. Phillips, and A. M. L. Kappers, “Haptic shape discrimination and interhemispheric communication,” Sci. Rep., vol. 8, no. 1, pp. 1–10, Dec. 2017.


Dowell et al. 2017

Creating noisy stimuli

So much noise.

A method for creating a variety of pseudo-random ‘noisy’ stimuli that possess several useful statistical and phenomenal features for psychophysical experimentation is outlined. These stimuli are derived from a pseudo-periodic function known as multidimensional noise. This class of function has the desirable property that it is periodic, defined on a fixed domain, is roughly symmetric, and is stochastic, yet consistent and repeatable. The stimuli that can be created from these functions have a controllable amount of complexity and self-similarity properties that are further useful when generating naturalistic looking objects and surfaces for investigation. The paper addresses the creation and manipulation of stimuli with the use of noise, including an overview of this particular implementation. Stimuli derived from these procedures have been used successfully in several shape and surface perception experiments and are presented here for use by others and further discussion as to their utility.
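As a rough illustration of the kind of stimulus the paper describes (not its actual implementation, which is based on Perlin-style multidimensional noise), here is a Python sketch that builds a periodic, seed-repeatable noise image whose complexity is controlled by a single spectral-slope parameter:

```python
import numpy as np

def noise_stimulus(size=128, beta=2.0, seed=0):
    """Periodic pseudo-random noise image with a power-law amplitude
    spectrum. A sketch in the spirit of the paper: periodic by FFT
    construction, stochastic yet repeatable via a fixed seed, with
    complexity / self-similarity controlled by the exponent beta."""
    rng = np.random.default_rng(seed)
    # Power-law amplitudes over spatial frequency, random phases.
    fx = np.fft.fftfreq(size)
    fy = np.fft.fftfreq(size)
    freq = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)
    freq[0, 0] = np.inf                      # suppress the DC term
    amplitude = freq ** (-beta / 2.0)
    phase = rng.uniform(0, 2 * np.pi, (size, size))
    spectrum = amplitude * np.exp(1j * phase)
    img = np.real(np.fft.ifft2(spectrum))
    # Normalize to [0, 1] for display as a stimulus.
    return (img - img.min()) / (img.max() - img.min())

stim = noise_stimulus()
print(stim.shape)
```

Larger `beta` concentrates energy at low spatial frequencies (smoother, blobbier images); smaller `beta` yields busier, finer-grained noise.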

F. Phillips, “Creating noisy stimuli,” Perception, vol. 33, no. 7, pp. 837–854, 2004.



Phillips 2004

Perceptual representation of visible surfaces

What is a surface, anyway?

Two experiments are reported in which we examined the ability of observers to identify landmarks on surfaces from different vantage points. In Experiment 1, observers were asked to mark the local maxima and minima of surface depth, whereas in Experiment 2, they were asked to mark the ridges and valleys on a surface. In both experiments, the marked locations were consistent across different observers and remained reliably stable over different viewing directions. These findings indicate that randomly generated smooth surface patches contain perceptually salient landmarks that have a high degree of viewpoint invariance. Implications of these findings are considered for the recognition of smooth surface patches and for the depiction of such surfaces in line drawings.

Includes a handy differential geometry tutorial appendix.
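The maxima/minima-marking task in Experiment 1 has a simple computational analogue. The following Python sketch marks strict 3×3-neighborhood extrema of depth on a hypothetical smooth surface; both the surface and the neighborhood criterion are illustrative assumptions, not the paper’s stimuli or analysis:

```python
import numpy as np

# A smooth hypothetical surface patch: height z(x, y) with one bump
# and one dent (stand-ins for randomly generated smooth patches).
x, y = np.meshgrid(np.linspace(-2, 2, 101), np.linspace(-2, 2, 101))
z = (np.exp(-((x - 1) ** 2 + y ** 2))
     - 0.8 * np.exp(-((x + 1) ** 2 + (y + 0.5) ** 2)))

def local_extrema(z):
    """Mark interior points that are strict maxima or minima of
    surface depth within their 3x3 neighborhood -- a crude analogue
    of the landmark-marking task."""
    c = z[1:-1, 1:-1]
    neighbors = [z[i:i + c.shape[0], j:j + c.shape[1]]
                 for i in range(3) for j in range(3) if (i, j) != (1, 1)]
    is_max = np.all([c > n for n in neighbors], axis=0)
    is_min = np.all([c < n for n in neighbors], axis=0)
    return is_max, is_min

is_max, is_min = local_extrema(z)
print(is_max.sum(), is_min.sum())  # counts of candidate landmarks
```

Ridge and valley landmarks (Experiment 2) would instead require curvature information from the surface’s differential geometry, which is where the paper’s appendix comes in.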

F. Phillips, J. T. Todd, J. J. Koenderink, and A. M. L. Kappers, “Perceptual representation of visible surfaces,” Percept Psychophys, vol. 65, no. 5, pp. 747–762, Jul. 2003.


Phillips et al. 2003

Complicated sports and movie watching

While I was on sabbatical in Gießen I was thrilled to have super (über?) fast internet and a nice television.

Unfortunately for me, the good folks at Apple / Netflix / HBO / etc. don’t want people in other countries to easily access American™ feeds (and vice-versa, of course; I dig the whole ‘licensing’ thing). Still, I paid for it, and it would be nice to be able to access things I pay for when I am in places other than my usual places.

So, what to do? Basically, set up a server back in America™, VPN into that thing, then try my best to convince the AppleTV in my apartment to access that feed. Turns out, this was more complicated than you’d think, since the AppleTV does bits of voodoo so that, even though it was connected to a VPN back in Saratoga, it still ‘knew’ it was in Germany. So basically my approaches involved various iOS devices, screen sharing, and voodoo strategies.

My first iteration looked like this:

(Note that, in attempting to draw the flag of Germany, I unfortunately drew the flag of Belgium instead. I’m pretty sure they get that all the time… I erased it here.)

I wanted to watch Arsenal and, sure enough, whatever sports network my apartment building had was not really that ‘diverse’ in the sporting sense, so that’s how I did it. It was ridiculous.

So, to make it even more ridiculous, I went with version 2:

This required internet sharing on the laptop, hard wiring the AppleTV and using screen sharing only.

I’m sure neither of these works anymore, but I found these Paper drawings I made to remind myself, and thought I should revisit my insanity.

Gromitcam – in stereo

Back in 1997 I made a stereo webcam for keeping track of my dog, Gromit.

I was rolling through some archives and found an image from January 1998. It used two Logitech golf-ball cams and a bunch of ad hoc software, and pushed the frames to my machine at the OSU Vision Laboratory. I’ll see if I can find some photos of the rig, but I dare say it was one of the first stereo webcams ever. (Update: found some)

You can take the boy out of architecture school

rat palace

This is a rodent enclosure for behavioral experiments.

We printed the main cage, post retainers and lid in ABS on our uPrint, and the ‘stay off the roof’ roof on our Formlabs printer in a nice, slick, durable resin. There are 4 of these babies in Hassan’s lab, in our awesome 80/20 observation cages. All parametrically designed with OpenSCAD.

Testre le embed das iframe

This is a test of an embedded iframe of fun and joy.

Please excuse anything annoying.

Most sincerely.
