Image feature detectors are a concept shared by mammalian vision and computer vision. With them, a raster image is not processed directly to identify complex objects (e.g. a flower, or the digit 2). Instead, feature detectors map the distribution of simple figures (such as straight edges) within the image, and higher layers of the neural network then use these maps to distinguish objects.
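To make the computer vision half of this concrete, here is a minimal sketch in Python/NumPy (the kernel, image, and sizes are all invented for illustration; a real network would learn many such kernels): an edge-detecting kernel is slid over a tiny image, and its response at each position becomes the feature map.

```python
import numpy as np

def feature_map(image, kernel):
    """Slide a kernel over a grayscale image and return a map of
    how strongly the kernel's feature appears at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            patch = image[y:y + kh, x:x + kw]
            out[y, x] = np.sum(patch * kernel)
    return out

# A detector for vertical edges: dark on the left, light on the right.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0]])

# Toy image: dark left half, light right half, i.e. one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

fmap = feature_map(img, vertical_edge)
print(fmap)  # large values mark where the vertical edge sits
```

The resulting map of edge locations, not the raw pixels, is what the higher layers get to work with.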
In the mammalian brain’s visual cortex (which sits at the back of the head, at the furthest possible point from the eyes) the excitation pattern on the retina is recreated as a spatially faithful projection. Overlapping sets of feature detectors use this projection as their input.
A feature detector neuron becomes active when its favourite pattern shows up in the projected visual field, or more precisely in the part of the visual field that the detector watches (its receptive field). A typical class of detectors responds to edges at a particular angle, with one side dark and the other side light. Other neurons recognise more complex patterns, and some also require motion for activation. Together these detectors cover the entire visual field, and their excitation pattern is the input to higher layers of processing. We first learned about these neurons by sticking microelectrodes into the visual cortex and measuring electrical activity. With luck, the electrode picked up the activity of a single neuron; the neuron’s activation pattern could then be mapped by showing different visual stimuli.
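The classic laboratory model for such an edge-sensitive cell is a Gabor filter. The sketch below (my own toy numbers, not measured physiology) imitates the microelectrode experiment: a model neuron with an oriented receptive field is shown stripe patterns at various angles, and its response peaks at its preferred orientation.

```python
import numpy as np

def gabor(theta, size=21, wavelength=8.0, sigma=4.0):
    """A Gabor patch: a common model of a simple cell's receptive
    field, with light and dark bands at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def grating(theta, size=21, wavelength=8.0):
    """A full-field stripe stimulus at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.cos(2 * np.pi * xr / wavelength)

detector = gabor(theta=0.0)   # this model neuron prefers one orientation

# Show stripes at different angles and record the rectified response,
# like mapping a neuron's tuning curve with a microelectrode.
for deg in range(0, 180, 30):
    response = max(0.0, np.sum(detector * grating(np.radians(deg))))
    print(f"{deg:3d} deg -> response {response:7.1f}")
```

The printout is a crude tuning curve: strong firing at the preferred angle, falling off to near silence for the orthogonal stripes.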
In mammals the combined output of many detectors is analysed by higher layers of the visual cortex for complex object recognition. Sometimes, however, the raw feature detector output is already good enough for choosing appropriate behaviour. In toads (where feature detectors sit directly in the retina) hunting is driven by a detector for a small moving dark point (a bug detector) and a detector for an elongated stripe moving lengthwise (a worm detector). Fleeing is initiated by the big-scary-moving-object detector (a large moving rectangle), which overrides the worm and bug detectors (videos of great experiments).
How do feature detectors know what to look for? Are they 1) set up completely before birth, without input from experience, 2) refined using input from experience, or 3) entirely the product of experience? (Disregarding that the evolution of development is itself a very slow form of gaining experience.) Different species probably differ in the details, but at least for cats and primates option 1) is ruled out. If these animals are completely deprived of visual experience during a critical period of 3-6 months right after birth, they never develop the ability to distinguish even simple shapes, such as rectangles from circles. Once the necessary structures are in place, however, they are stable: visual deprivation later in life does not destroy these abilities with similar finality. (See here and here for more detailed descriptions of these experiments.)
Each particular feature needs to be seen frequently during the critical period to be well recognised later in life. For example, cats exposed only to vertical stripes early on will later be virtually blind to horizontal edges (Blakemore & Cooper, Nature (1970) 228, 477-478), and the opposite holds for cats exposed only to horizontal stripes. Electrophysiological measurements show a corresponding absence of feature detector neurons for the unexercised orientation. Similarly, cats raised in a world illuminated only by very short flashes of light, with long dark periods in between to suppress any sensation of continuous motion, will have difficulty analysing motion. Half of the feature detector neurons in these cats do, however, respond to diffuse strobing light, the prevalent stimulus during their critical period (Cynader, Berman & Hein, PNAS (1973) 70, 1353-1354). In another experiment, if all perceived motion during the critical period runs in a single direction, then feature detectors for other directions are less well developed (Cynader, Berman & Hein, Exp. Brain Res. (1975) 22, 267-280).
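One toy way to see how exposure alone could produce such selectivity is a simple Hebbian learning rule. The sketch below uses Oja's rule (a textbook stand-in, not the actual cortical mechanism; all sizes and rates are invented) to "rear" a model neuron on vertical stripes only. Afterwards it responds strongly to vertical stripes and is nearly blind to horizontal ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def grating(theta, phase, size=9, wavelength=4.0):
    """A unit-norm stripe pattern at angle theta, as a flat vector."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.cos(2 * np.pi * xr / wavelength + phase).ravel()
    return g / np.linalg.norm(g)

# One model neuron with small random initial synaptic weights.
w = rng.normal(scale=0.1, size=9 * 9)

# "Critical period": the neuron only ever sees vertical stripes,
# at random phases (stripes drifting across the receptive field).
for _ in range(5000):
    x = grating(theta=0.0, phase=rng.uniform(0, 2 * np.pi))
    y = w @ x
    w += 0.05 * y * (x - y * w)   # Oja's rule: Hebbian growth + decay

# Test after rearing: strong response to the exercised orientation,
# almost none to the orthogonal one.
for name, theta in [("vertical", 0.0), ("horizontal", np.pi / 2)]:
    resp = max(abs(w @ grating(theta, p)) for p in np.linspace(0, np.pi, 8))
    print(f"{name:10s} stripes -> response {resp:.2f}")
```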
As a question for a future post: how do neurons choose what they are sensitive to? They might start out recognising random figures and gradually reshape this tuning towards shapes that actually occur. Or there might be a variety of built-in biases towards shapes, and those cells that find something regularly survive and refine their target. For comparison, in a slightly different system, binocular vision (in species where both eyes look at the same scene, unlike e.g. many birds), intertwined anatomical structures that process overlapping visual input from both eyes side by side (ocular dominance columns) are already present at birth. If one eye is deprived of vision during the critical period, the ocular dominance columns of the other eye enlarge at the cost of the deprived eye’s columns, and the cortex for the deprived eye loses processing abilities. (This becomes a problem for those who are born with strabismus and begin to favour one eye.) Similar competition for input or for survival might also take place among feature recognisers: either feature types compete for malleable cells, or relatively fixed cells compete for survival. Single-orientation-reared cats, however, do not seem to have large areas without neuronal activity, which is what the elimination of never-activated neurons would produce. It therefore seems more likely to me that the orientation specificity of cells is malleable.
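To make the two hypotheses concrete, here is a deliberately crude sketch with invented numbers and rules. Under the "malleable cells" hypothesis every cell ends up tuned to the reared orientation and none fall silent; under the "survival of fixed cells" hypothesis a large fraction of cells are eliminated, which would show up as silent cortex, the outcome the experiments do not find.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
init_prefs = rng.uniform(0, np.pi, n)   # random initial preferred orientations
VERTICAL = 0.0                          # the only orientation ever seen

def towards(pref, stim, lr):
    """Nudge an orientation preference towards the stimulus,
    respecting the wrap-around at pi (orientations, not directions)."""
    d = (pref - stim + np.pi / 2) % np.pi - np.pi / 2
    return (pref - lr * d) % np.pi

# Hypothesis A: malleable cells. Every exposure drags each cell's
# tuning a little towards what it just saw; nobody falls silent.
prefs = init_prefs.copy()
for _ in range(500):
    prefs = towards(prefs, VERTICAL, lr=0.02)
prefs = (prefs + np.pi / 2) % np.pi - np.pi / 2   # fold: 0 means vertical

# Hypothesis B: fixed cells competing for survival. Cells that never
# respond strongly to the rearing environment are eliminated.
tuning = np.cos(init_prefs - VERTICAL) ** 2   # fixed response strengths
alive = tuning > 0.5                          # arbitrary survival threshold

print("malleable: preferences after rearing:", np.round(prefs, 2))
print("survival : fraction of cortex silent:", round(1 - alive.mean(), 2))
```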