When considering how vision works, it is tempting to treat the retina as a digital camera: it converts light into electricity, performs some simple computations, and reliably (but passively) conveys the result to the user, who is left to make sense of the picture. In this view, the user is the brain, which does all the tough neural computation required to use visual information to (ultimately) guide our behaviour. Gregory Schwartz’s terrific book shows just how wrong this view is, revealing the exquisite computations the retina must perform to reliably signal a wide range of visual information (luminance, contrast, spatial features, motion, colour).
Starting with the familiar role of the retina in signalling luminance: how can rods reliably respond to a single photon when phototransduction is the result of a series of noisy biochemical reactions? Schwartz describes the neural circuitry that does the filtering and thresholding that allow such astounding sensitivity. He next considers the synaptic, neuromodulatory and feedback mechanisms supporting luminance adaptation, and how they allow the limited (two orders of magnitude) response range of retinal ganglion cells (RGCs) to signal changes in luminance in the face of the huge variation in light levels (nine orders of magnitude) we encounter every day.
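The logic of that adaptation can be sketched in a few lines. The toy model below (my illustration, not from the book; the book’s own exercises use Matlab) uses a saturating response function whose half-saturation point tracks the ambient light level, so a cell with a fixed, limited firing range reports the same relative change at any background intensity:

```python
def ganglion_response(intensity, adaptation_level, r_max=100.0):
    """Toy saturating (Naka-Rushton-style) response: firing rate is capped
    at r_max, but the half-saturation point tracks the current adaptation
    level, re-centring the cell's limited range on the ambient light."""
    return r_max * intensity / (intensity + adaptation_level)

# Nine orders of magnitude of light, from starlight to bright sunlight;
# the stimulus is always 3x brighter than the adapted background.
for ambient in [1e-2, 1e1, 1e4, 1e7]:
    rate = ganglion_response(3 * ambient, adaptation_level=ambient)
    print(f"ambient {ambient:8.0e}: rate = {rate:.0f} spikes/s")
```

Because the gain rescales with the background, the printed rate is identical at every ambient level: the cell signals relative luminance change, not absolute intensity.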
But what about the absolute luminance needed to control the pupil? Intrinsically photosensitive RGCs contain melanopsin, allowing them to respond directly to light – and their large receptive fields and long integration times (up to eight seconds!) make them suitable light meters. Schwartz describes how RGCs achieve contrast sensitivity through high background firing rates, by weighting excitation and inhibition to maximise signal response, and through adaptation at multiple timescales. Weirdly, some RGCs are turned off by high-contrast stimuli but spike happily to uniform fields of light. It has been suggested that this could allow these cells to signal poor focus (as a drive to accommodation). Moreover, this could complement other RGCs that – by non-linearly combining input across space – can signal fine-scale information consistent with good focus. On the subject of interactions across space, retinal cells are subject to surround suppression (SS, aka lateral inhibition), perhaps the best-known retinal computation of all. Almost 70 years after Hartline et al.’s discovery, much research still focuses on the role of horizontal cells in SS; these cells combine local input from one photoreceptor (via fine dendritic contact with individual cone synapses) with global input from thousands of photoreceptors (via gap-junction coupling to a network of neighbouring horizontal cells).
A frog eyes a fly
While the information considered so far would help establish what an object is, the retina also plays a role in revealing where objects are. Animals exhibit remarkable visual abilities when localising prey, and Lettvin et al.’s seminal (1959) paper ‘What the frog’s eye tells the frog’s brain’ revealed that some RGCs respond only to small dark moving objects (‘bug detectors’). These cells are generally found in species that use vision for prey detection but lack a fovea, and they achieve their bug sensitivity through strong SS. As well as capturing shifts in position, the retina plays a role in motion processing.
The first indication that RGCs cared not only about changes in position but also about the direction in which features move came in the ’60s, from work in frogs, rabbits and pigeons. Professor Horace Barlow argued at the time that such motion detection was driven by an inhibitory mechanism, the source of which was later determined to be starburst amacrine cells. Direction-selective RGCs are likely involved in driving the optokinetic response to whole-field motion arising from self-motion, for example. Remarkably, some RGCs use surround suppression to respond only to motion likely to have arisen from a moving object rather than to motion consistent with eye movements, and some even seem able to perform motion prediction (minimising the impact of neural processing delays on the signalling of motion). Other RGCs have evolved to be sensitive to ‘looming’ – expanding dark objects in the upper visual field that might indicate a threat, such as an approaching overhead predator, or any object on a collision course with our head.
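The inhibitory scheme Barlow proposed (with Levick, in rabbit) can be caricatured in a few lines. In this toy sketch (mine, not the book’s), a unit is excited by photoreceptor A but vetoed by a *delayed* copy of its neighbour B’s activity: motion in the null direction (B then A) delivers the veto just in time, while motion in the preferred direction (A then B) gets through before the veto arrives:

```python
def detector(samples):
    """Barlow-Levick-style toy detector.
    samples: list of (A, B) photoreceptor activations over time.
    Returns the total number of spikes produced."""
    spikes = 0
    delayed_b = 0  # one-step delayed copy of B (the inhibitory branch)
    for a, b in samples:
        if a and not delayed_b:   # excitation survives only without the veto
            spikes += 1
        delayed_b = b             # update the delay line
    return spikes

preferred = [(1, 0), (0, 1)]   # an edge moving A -> B
null      = [(0, 1), (1, 0)]   # an edge moving B -> A
print(detector(preferred), detector(null))   # preferred spikes, null is vetoed
```

The asymmetry comes entirely from the delayed inhibition – the mechanism whose anatomical source, as the review notes, was later traced to starburst amacrine cells.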
The structure of the book is great. Each chapter digs into how retinal computations work, but also (where the literature exists) how the retina evolved to do this computation and how the result is transmitted to the brain to guide behaviour. Examples are drawn from a range of species as the principles of retinal computation generalise well across vertebrates and invertebrates. Some computations are complex and the presentation can be dense. The author (who is an active researcher in this area at Northwestern University, US) assumes a basic understanding of retinal physiology and of neuroscience. However, the writing is clear and the concepts well illustrated (with each chapter commencing with a rather charming watercolour illustration). Chapters also have Matlab programming exercises (with supporting code available online) and discussion points, making this a great teaching tool (albeit likely at a graduate level). That said, this book is also a unique overview of our current understanding of the why and the how of retinal computation and there is something here for anyone with a grounding in vision science who recognises that there is more to what the retina does than… meets the eye.
Professor Steven Dakin is the former head of the School of Optometry and Vision Science (SOVS) at the University of Auckland.