I’ve got strong last-minute game.

Editing my VSS stuff on the Uncanny Valley. I’ll post something here when I get back.

As usual, Illustrator is doing something strange that I cannot understand.

[Screen Shot 2018-05-16 at 3.08.47 PM]

Nope, not supposed to look like that, in case you were wondering.

Fechner’s Aesthetics Revisited

Isn’t it beautiful?

Gustav Fechner is widely respected as a founding father of experimental psychology and psychophysics, but fewer know of his interests and work in empirical aesthetics. In the later 1800s, toward the end of his career, Fechner performed experiments to empirically evaluate the beauty of rectangles, hypothesizing that the preferred shape would closely match that of the so-called ‘golden rectangle’. His findings confirmed his suspicions, but in the intervening decades there has been significant evidence pointing away from that finding. Regardless of the results of this one study, Fechner ushered in the notion of using a metric to evaluate beauty in a psychophysical way. In this paper, we recreate the experiment using more naturalistic stimuli. We evaluate subjects’ preferences against models that use various types of object complexity as metrics. Our findings that subjects prefer either very simple or very complex objects run contrary to the hypothesized results, but are systematic nonetheless. We conclude that there are likely to be useful measures of aesthetic preference but they are likely to be complicated by the difficulty in defining some of their constituent parts.

F. Phillips, J. F. Norman, and A. Beers, “Fechner’s Aesthetics Revisited,” Seeing and Perceiving, vol. 23, no. 3, pp. 263–271, Jul. 2010.


Combinational Imaging: Magnetic Resonance Imaging and EEG Displayed Simultaneously

Before fMRI (Functional Magnetic Resonance Imaging) existed, I got to do this.

We were one of the first labs to do this pre ‘functional’ functional imaging. Instead of volume rendering (which I would move on to Pixar to work on with Bob Drebin, Pat Hanrahan, and Loren Carpenter) we made surfaces for everything.

Abstract: We report on a new technique to combine two technologies [magnetic resonance imaging (MRI) and topographic imaging of EEG] to produce an overlapping image of both scalp-recorded EEG and the underlying brain anatomy within a given subject. High-resolution-graphics postprocessing of these data was used to create this integrated image.

M. W. Torello, F. Phillips, W. W. Hunter Jr., and C. A. Csuri, “Combinational Imaging: Magnetic Resonance Imaging and EEG Displayed Simultaneously,” Journal of Clinical Neurophysiology, vol. 4, no. 3, pp. 274–293, Jul. 1987.

DOI: 10.1097/00004691-198707000-00007

PDF: Torello et al., 1987


The perception of surface orientation from multiple sources of optical information

The first piece of work I did in the “Todd Lab” at OSU.

I had just come off five years at Pixar and a year back in grad school in the Architecture and Planning department. I wrote most of the code for this: making and displaying objects, the interactive ‘gauge figure’ and the like. Farley and Jim came up with the distortion method (these are notoriously Farley’s “potatoes” as compared to my “glavens”. Potato, potato), and Farley and I implemented it. I wrote the gauge figure stuff during a visit with Jan Koenderink, whose book Alvy Ray Smith recommended I look at while back at Pixar. Crazy.

Abstract: An orientation matching task was used to evaluate observers’ sensitivity to local surface orientation at designated probe points on randomly shaped 3-D objects that were optically defined by texture, Lambertian shading, or specular highlights. These surfaces could be stationary or in motion, and they could be viewed either monocularly or stereoscopically, in all possible combinations. It was found that the deformations of shading and/or highlights (either over time or between the two eyes’ views) produced levels of performance similar to those obtained for the optical deformations of textured surfaces. These findings suggest that the human visual system utilizes a much richer array of optical information to support its perception of shape than is typically appreciated.

J. F. Norman, J. T. Todd, and F. Phillips, “The perception of surface orientation from multiple sources of optical information,” Percept Psychophys, vol. 57, no. 5, pp. 629–636, Jul. 1995.

PDF: Norman, Todd & Phillips, 1995

Spring 2018 Vision in Animals, Humans and Machines — Final Projects

Vision in Animals, Humans and Machines is a seminar / hands-on course where we engage in a sort of comparative neuroscience with respect to how organic and inorganic systems ‘see’.

Some things are hard for animals, some things are easy. The same can be said for machines. The exhaustively deployed aphorism — “Easy things are hard for computers, hard things are easy for computers” — reminds us that the way ‘computer vision’ works probably doesn’t have all that much in common with how living organisms do1.

One of the best ways to observe this is to probe situations where each type of system fails to work. In this class, we learned about biological mechanisms of vision as well as computational analogs. We tried to ‘break’ computer vision systems in systematic ways and analyzed the results.

Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then pitched the final work and demonstrated what they had accomplished (and failed to).

The projects had to use computational methods to implement some function or malfunction of the visual system. There was some overlap between this class and the Computational Methods class, so there was a lot of Mathematica used, along with some Lego MindStorms2.

Here are this year’s projects. Please enjoy them —

Synthetic Beings, Evolutionary Vision

A genetic-algorithm driven method of generating and evolving synthetic beings with different perceptual abilities in an ever changing environment.
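The project itself was the students’ own; as a minimal sketch of the underlying idea (all names and parameters here are mine, not the project’s), a toy genetic algorithm can evolve a single-number ‘perceptual trait’ toward an environmental optimum:

```python
import random

def evolve(env_target, pop_size=50, generations=100, seed=1):
    """Toy genetic algorithm: each 'being' is a single number (say, a
    peak spectral sensitivity); fitness is closeness to the current
    environmental optimum.  Truncation selection + Gaussian mutation."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: abs(g - env_target))   # fittest first
        parents = pop[: pop_size // 5]                # top 20% reproduce
        pop = [rng.choice(parents) + rng.gauss(0.0, 0.02)
               for _ in range(pop_size)]
    return sum(pop) / len(pop)                        # mean evolved trait

# The population should converge toward the environment's optimum:
print(abs(evolve(0.7) - 0.7) < 0.1)  # → True
```

The students’ version tracked multiple perceptual abilities in a changing environment; this sketch just shows the selection/mutation loop at the core of any such model.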

Modeling Color Vision in Animals

A cross-species look at animals with as few as 1 and as many as 11 color receptors. Using multispectral images and banks of cone response functions and illuminants, can we predict an organism’s ability to ‘see’ certain features?
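The core computation can be sketched in a few lines of Python (a hypothetical toy example with made-up Gaussian receptors, not the project’s actual code or data): predicted receptor responses are just the reflected light summed against each receptor’s spectral sensitivity.

```python
import numpy as np

def cone_catches(reflectance, illuminant, sensitivities):
    """Predicted receptor responses: the reflected light summed against
    each receptor's spectral sensitivity (everything sampled on the
    same wavelength axis)."""
    radiance = reflectance * illuminant          # light reaching the eye
    return np.array([np.sum(radiance * s) for s in sensitivities])

# Toy data: three hypothetical Gaussian receptors, a flat illuminant,
# and a 'greenish' surface reflectance.
wl = np.linspace(400, 700, 301)                  # wavelengths, nm
sens = [np.exp(-((wl - peak) / 40.0) ** 2) for peak in (440, 540, 600)]
flat_white = np.ones_like(wl)
greenish = np.exp(-((wl - 550) / 60.0) ** 2)

catches = cone_catches(greenish, flat_white, sens)
print(catches.argmax())  # → 1 (the 540 nm receptor responds most)
```

Swap in measured cone fundamentals, real illuminant spectra, and per-pixel multispectral data and the same dot product predicts what each species can and cannot distinguish.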

Robotic Model of Simple Vision

Lego robotics, Euglena.

Modeling Prosopagnosia3

Can we make a machine learning based face recognizer ‘face-blind’?

Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and use machine learning to identify its behavior? (In cooperation with the Computational Methods class.)
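The simplest version of the tracking half looks something like this (a toy sketch assuming the rat is darker than the arena floor; not the project’s actual pipeline):

```python
import numpy as np

def track_dark_blob(frame, threshold=0.5):
    """Centroid of the dark pixels in a grayscale frame, assuming the
    animal is darker than the arena floor."""
    ys, xs = np.nonzero(frame < threshold)
    return xs.mean(), ys.mean()

frame = np.ones((100, 100))      # bright arena floor
frame[40:50, 60:70] = 0.0        # dark 10x10 'rat'
x, y = track_dark_blob(frame)
print(x, y)  # → 64.5 44.5
```

Real footage needs background subtraction and some morphological cleanup before the centroid step, and the resulting position traces are what feed the behavior classifier.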

Cast of Characters

Zachariah Arnold, Iman Bays, Sierra Carlen, George Chakalos, Jessica Cheng, Daniela Cossio, Allison Dalton, Seeley Fancher, Sara Fontana, Rachel Greene, Julia Howe, Donna Nguyen, Jeffrey Okoro, Reece Robinson, Anthony Song, Henry Stadler, Megan Volkert, Xueying Wu.

  1. This is, of course, fine. ↩︎
  2. Remind me to tell you the story of visiting Mitch Resnick’s lab at MIT, back while I was working at Pixar, and playing with the OG LegoLogo blocks and wires and things. ↩︎
  3. People’s choice award winner. ↩︎

Spring 2018 Computational Methods — Final Projects

The goal of Computational Methods in Psychology and Neuroscience is to acquaint students with scientific computing, broadly speaking, but especially as it applies to psychology and neuroscience.

Even so, it attracts students from a pretty wide swath of majors. This year, in addition to psychology and neuroscience, we had majors from business and biology, as well as political and computer sciences.

Over the years we have used a variety of software in the course including Python, Matlab and Mathematica, as well as purpose-built environments like PsychoPy, freesurfer, ImageJ and others.

This year, we focused on Mathematica as it provides a rich set of tools and access to data and datasets without the sometimes painful management of packages and such1.

Final Projects

This year, the final projects were self-determined. Individuals and teams pitched their proposals early in the semester and we refined and implemented them throughout the rest of the term. They then pitched the final work and demonstrated what they had accomplished (and failed to).

Some of these projects are super ambitious for an introductory class, but the goal was learning and understanding the problem solving needed, not so much the minute implementation and theoretical details. Even if the problem wasn’t ‘solved’ in every case, I feel like each individual / group now has a much better sense of what is possible and what is difficult2. In some cases, I implemented ‘helper’ code that is now part of the FPTools repository, but the ideas and final implementations are their own.

Here are this year’s projects. Please enjoy them —

Giant Asteroids Might Destroy Earth3

A computational simulation of asteroid impact with the planet earth, featuring animations and mortality rates.
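A back-of-the-envelope version of the energy side of that calculation (my own toy numbers and function names, not the project’s model):

```python
import math

MEGATON_TNT_J = 4.184e15  # joules per megaton of TNT

def impact_energy_megatons(radius_m, velocity_ms, density=3000.0):
    """Kinetic energy of a spherical impactor, in megatons of TNT.
    (density ~3000 kg/m^3 is a common figure for stony asteroids.)"""
    mass = density * (4.0 / 3.0) * math.pi * radius_m ** 3
    return 0.5 * mass * velocity_ms ** 2 / MEGATON_TNT_J

# A 500 m radius rock arriving at 20 km/s:
e = impact_energy_megatons(500, 20_000)
print(round(e))  # tens of thousands of megatons
```

Everything interesting in the students’ project (atmospheric entry, cratering, mortality estimates) sits on top of this single ½mv² number.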

Kids and Words

Linguistic analysis of conversations between kids and their parents.

Morality in Political Candidates

In the wake of the Facebook/Cambridge Analytica fiasco, a look at some crowdsourced (MTurk) questionnaire data about the personalities of political candidates. Machine-learning models of candidate preference, based on interactive input.

Cartoon Face Recognition

The predominant implementation of ‘face finding’ algorithms doesn’t do a very good job with cartoon faces. This machine learning project sets out to rectify this oversight.

Name That Tune

Linguistic analysis from audio clips of songs? A huge project. Phonemes and classifiers and lyrics oh my!

Tracking Rats

Can we make a machine-vision system that can track a rat in a socialization apparatus and identify its behavior? (In cooperation with the Vision in Animals, Humans and Machines class.)

Primordial Soup

Delicious! Can we simulate the conditions of the creation of life’s building blocks (amino acids) à la the Miller-Urey experiment?

Get Your Axon

Can we teach a classifier to tell the difference between normal and malformed axons?

Cast of characters

Andres Beltre, George Chakalos, Jacob Chen, Jessica Cheng, Daniela Cossio, Allie Dinaburg, Izzy Fischer, Emil Ghitman Gilkes, Helen Gray-Bauer, Aimee Hall, Ryan Hill, Natasha Martinez, Zoe Michas, Annika Morrell, Laura Noejovich, Sarah Wilensky, Ray Yampolsky

  1. This is especially true with young scientists just dipping their toes into scientific computing. Even with some of the great package and environment management software out there some scientific computing environments can be too much. ↩︎
  2. On the first day of class, I type Sphere[]//Graphics3D into Mathematica and explain that, in 1983, I took 3 G/UG courses at OSU (CS 781–3) to get that to happen on a 320×240 pixel screen in roughly geological time. Then I shake my cane at them and tell them to get off my lawn. ↩︎
  3. People’s choice award ↩︎

Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation

Haptic and visual ‘contours’

It is well known that motion facilitates the visual perception of solid object shape, particularly when surface texture or other identifiable features (e.g., corners) are present. Conventional models of structure-from-motion require the presence of texture or identifiable object features in order to recover 3-D structure. Is the facilitation in 3-D shape perception similar in magnitude when surface texture is absent? On any given trial in the current experiments, participants were presented with a single randomly-selected solid object (bell pepper or randomly-shaped “glaven”) for 12 seconds and were required to indicate which of 12 (for bell peppers) or 8 (for glavens) simultaneously visible objects possessed the same shape. The initial single object’s shape was defined either by boundary contours alone (i.e., presented as a silhouette), specular highlights alone, specular highlights combined with boundary contours, or texture. In addition, there was a haptic condition: in this condition, the participants haptically explored with both hands (but could not see) the initial single object for 12 seconds; they then performed the same shape-matching task used in the visual conditions. For both the visual and haptic conditions, motion (rotation in depth or active object manipulation) was present in half of the trials and was not present for the remaining trials. The effect of motion was quantitatively similar for all of the visual and haptic conditions; e.g., the participants’ performance in Experiment 1 was 93.5 percent higher in the motion or active haptic manipulation conditions (when compared to the static conditions). The current results demonstrate that deforming specular highlights or boundary contours facilitate 3-D shape perception as much as the motion of objects that possess texture. The current results also indicate that the improvement with motion that occurs for haptics is similar in magnitude to that which occurs for vision.


J. F. Norman, F. Phillips, J. R. Cheeseman, K. E. Thomason, C. Ronning, K. Behari, K. Kleinman, A. B. Calloway, and D. Lamirande, “Perceiving Object Shape from Specular Highlight Deformation, Boundary Contour Deformation, and Active Haptic Manipulation,” PLoS ONE, vol. 11, no. 2, p. e0149058, Feb. 2016.


Norman, Phillips et al. 2016

Haptic shape discrimination and interhemispheric communication


In three experiments, participants haptically discriminated object shape using unimanual (single hand explored two objects) and bimanual exploration (both hands were used, but each hand, left or right, explored a separate object). Such haptic exploration (one versus two hands) requires somatosensory processing in either only one or both cerebral hemispheres; previous studies related to the perception of shape/curvature found superior performance for unimanual exploration, indicating that shape comparison is more effective when only one hemisphere is utilized. The current results, obtained for naturally shaped solid objects (bell peppers, Capsicum annuum) and simple cylindrical surfaces, demonstrate otherwise: bimanual haptic exploration can be as effective as unimanual exploration, showing that there is no necessary reduction in ability when haptic shape comparison requires interhemispheric communication. We found that while successive bimanual exploration produced high shape discriminability, the participants’ bimanual performance deteriorated for simultaneous shape comparisons. This outcome suggests that either interhemispheric interference or the need to attend to multiple objects simultaneously reduces shape discrimination ability. The current results also reveal a significant effect of age: older adults’ shape discrimination abilities are moderately reduced relative to younger adults, regardless of how objects are manipulated (left hand only, right hand only, or bimanual exploration).

C. J. Dowell, J. F. Norman, J. R. Moment, L. M. Shain, H. F. Norman, F. Phillips, and A. M. L. Kappers, “Haptic shape discrimination and interhemispheric communication,” Sci. Rep., vol. 8, no. 1, pp. 1–10, Dec. 2017.


Norman et al. 2017

Creating noisy stimuli

So much noise.

A method for creating a variety of pseudo-random ‘noisy’ stimuli that possess several useful statistical and phenomenal features for psychophysical experimentation is outlined. These stimuli are derived from a pseudo-periodic function known as multidimensional noise. This class of function has the desirable property that it is periodic, defined on a fixed domain, is roughly symmetric, and is stochastic, yet consistent and repeatable. The stimuli that can be created from these functions have a controllable amount of complexity and self-similarity properties that are further useful when generating naturalistic looking objects and surfaces for investigation. The paper addresses the creation and manipulation of stimuli with the use of noise, including an overview of this particular implementation. Stimuli derived from these procedures have been used successfully in several shape and surface perception experiments and are presented here for use by others and further discussion as to their utility.
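The paper describes its own implementation; as a much-simplified sketch in the same spirit (my assumptions: a 2-D contour, harmonic amplitudes falling off as 1/k^β), here is a seeded ‘noisy blob’ generator showing the stochastic-yet-repeatable property:

```python
import numpy as np

def noisy_contour(n_points=256, n_harmonics=8, beta=1.5, seed=0):
    """A closed 'noisy blob' outline: a circle perturbed by random-phase
    harmonics whose amplitudes fall off as 1/k**beta.  Smaller beta ->
    rougher, more complex contour; the seed makes it repeatable."""
    rng = np.random.default_rng(seed)
    theta = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    r = np.ones_like(theta)
    for k in range(2, n_harmonics + 2):   # k=1 omitted (it only translates)
        amp = rng.normal() / k ** beta
        phase = rng.uniform(0.0, 2.0 * np.pi)
        r += 0.1 * amp * np.cos(k * theta + phase)
    return r * np.cos(theta), r * np.sin(theta)

# Same seed, same stimulus -- stochastic yet repeatable:
x1, y1 = noisy_contour(seed=42)
x2, y2 = noisy_contour(seed=42)
print(np.allclose(x1, x2))  # → True
```

The β parameter plays the role of the complexity knob: it controls how much energy the higher harmonics contribute and hence how ‘rough’ the resulting shape looks.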

F. Phillips, “Creating noisy stimuli,” Perception, vol. 33, no. 7, pp. 837–854, 2004.



Phillips 2004.

Perceptual representation of visible surfaces

What is a surface, anyway?

Two experiments are reported in which we examined the ability of observers to identify landmarks on surfaces from different vantage points. In Experiment 1, observers were asked to mark the local maxima and minima of surface depth, whereas in Experiment 2, they were asked to mark the ridges and valleys on a surface. In both experiments, the marked locations were consistent across different observers and remained reliably stable over different viewing directions. These findings indicate that randomly generated smooth surface patches contain perceptually salient landmarks that have a high degree of viewpoint invariance. Implications of these findings are considered for the recognition of smooth surface patches and for the depiction of such surfaces in line drawings.

Includes a handy differential geometry tutorial appendix.

F. Phillips, J. T. Todd, J. J. Koenderink, and A. M. L. Kappers, “Perceptual representation of visible surfaces,” Percept Psychophys, vol. 65, no. 5, pp. 747–762, Jul. 2003.


Phillips et al. 2003
