An Introduction to Applied Cognitive Psychology - David Groome, Anthony Esgate, Michael W. Eysenck (2016)

Chapter 2. Perception and attention

Errors and accidents

Graham Edgar and Helen Edgar

2.1 INTRODUCTION: SENSATION, PERCEPTION AND ATTENTION

Perception of the world around us is something that we tend to take for granted - it just happens. We recognise objects; we pick them up; we walk around them; we drive past them. Generally, our perceptual systems work so well that we are completely unaware of the complex sensory and cognitive processes that underpin them - unless something goes wrong.

Figure 2.1 shows a simplified model of perception. The first stage in the process of perception is that ‘sensations’ are collected by our senses. Even defining a ‘sense’ is not a trivial problem. If we classify a sense by the nature of the stimulus that it detects, then we have only three - chemical, mechanical and light (Durie, 2005). If we go for the traditional classification, we have five - vision, hearing, touch, taste and smell. But what about the sense of where our limbs are? Or our sense of pain? It is not difficult to identify at least twenty-one different senses, and as many as thirty-three with a little more ingenuity (Durie, 2005). This chapter, however, will simplify things by focusing on vision.

Returning to Figure 2.1, we see that the visual system gathers information from the world around us using the light that is collected via the eyes. Note that the eyes are not the only means humans have of detecting light. We can, for example, feel the warm glow of (some) infrared light on our skin. This chapter will, however, concentrate on the eyes. Our visual world is incredibly rich and dynamic and, as a result, the amount of visual information we collect moment to moment is staggering. Look around you. There is colour, shape, motion, depth … In fact there is too much information for everything to be processed, and this is where attention comes in. This will be considered in more detail later but, for now, it is sufficient to consider attention as acting as a ‘filter’, reducing the amount of sensory input to a manageable level. By the way, if you have looked closely at Figure 2.1, you may be wondering what the little soldier is doing there. Well, he is standing to attention …

Figure 2.1 A simplified representation of the pathway from sensation to perception.

Although memory is not the subject of this chapter, it is necessary to be aware that it may influence perception. We carry with us, in terms of memories and knowledge, information about things we have perceived in the past, things we have learnt, things we know. Using this stored knowledge can make perception far more efficient. If you know that your dog will always trip you up as soon as you enter the house, you can be primed for it. You can identify the fast-moving shape more efficiently as you know what it is likely to be.

So, to summarise the processes shown in Figure 2.1. Vast amounts of sensory information are filtered and reduced to a manageable level by our attentional processes. What is left is then combined with what we know and what pops out of the ‘top’ is our perception. It should be noted that the effects are not all one-way (note the double-headed arrows). Attention, for example, influences the amount of sensory information that may get through to be combined with what we know - but the interaction goes the other way as well. If you know where something (such as your dog) is likely to be, you can direct your attention to that spot (more on this later). It follows that, given all this filtering and processing, what we perceive may not be the same as what we sense. Most of the time, this is not a problem - but it can be.

This chapter will consider the processes of visual perception and attention and will explore, particularly, how they operate when we are doing what for many people is the most dangerous thing they will ever do - driving a car.

2.2 DRIVING - A RISKY BUSINESS

Worldwide, road traffic accidents (RTAs) are the leading cause of death in those aged 15-29 (World Health Organization, 2011). If current trends continue, RTAs could become the fifth most common cause of death worldwide by 2030 (currently ninth). While some accidents may be due to things such as mechanical failure, overwhelmingly the most common factor contributing to RTAs is the ‘human factor’. Rumar (1985) suggested that 57 per cent of (British and American) crashes were due solely to driver factors.

Driving a car will, at times, stretch the normal human capabilities to the limit and sometimes beyond. When human capabilities reach their limit, accidents happen. The first part of the chapter will consider collisions with pedestrians, as pedestrians account for a disproportionately high number of RTA casualties. The second part will then consider collisions with other vehicles. All of these issues will be considered with regard to theories of perception and attention. Lastly, this chapter will demonstrate that issues with perception and attention extend beyond driving by considering such issues within another domain - aviation.

Figure 2.2 Pedestrians may not ‘show up’ that well on the road.

Source: copyright © Oleg Krugliak/Shutterstock.com.

Although casualties on the roads in the UK are declining, in 2013 there were 21,657 people seriously injured and 1,713 fatalities as a result of RTAs; 398 (23 per cent) of the fatalities were pedestrians (Department for Transport (DfT), 2014). Olsen (2005) reports that, in the United States, pedestrians account for about 11 per cent of RTA fatalities and that, in a collision, while about 1 per cent of drivers die, about 6 per cent of pedestrians do. Pedestrians are particularly vulnerable on the roads as, not only are they less protected than car drivers, they are also generally more difficult to see than vehicles - particularly at night. Sullivan and Flannagan (2002) suggest that pedestrians may be 3 to 6.75 (approximately!) times more vulnerable to being involved in a fatal crash at night, as compared with during the day. Perhaps not surprisingly, once other factors (such as fatigue and alcohol) have been partialled out, probably the most important factor in the increased incidence of crashes involving cars and pedestrians at night is that it is darker (Owens and Sivak, 1996; Sullivan and Flannagan, 2002).

Pedestrians often do not show up well at night - for example, have a look at Figure 2.2. The pedestrian in this case, while visible, is not particularly conspicuous. If you were driving and had other things to think about, such as checking your in-car displays, adjusting the heater controls or scanning further down the road for oncoming vehicles, it would be easy to miss (cognitively if not physically) such an inconspicuous part of the scene (more on this later).

So, can what psychologists (and cognitive neuroscientists) know about human perception and attention be used to find a solution to the difficulty of spotting pedestrians at night? In particular, can the data, theory and practice of psychology provide any insights to help reduce the likelihood of a driver running over a pedestrian at night? To see whether this is possible, it is necessary to examine how the visual system works.

2.3 FROM THE EYE TO THE BRAIN

It is fair to say that the human eye is a simple optical system with some impressively powerful image-processing machinery and software sitting behind it. The ‘front-end’ is illustrated in Figure 2.3. Incoming light falls first on the cornea (the transparent front-surface of the eye), and that is where most of the focusing of the light is done; the lens is just doing the fine-tuning. The cornea and lens, in tandem, focus the light on the retina at the back of the eye (if everything is working to specification), which is where the light-sensitive detectors are located. Indeed, the eye is such a simple optical system (rather like a pin-hole camera) that the image formed on the retina is upside down. This might seem like a problem, as it provides the brain with extra work to do in turning the image the right way up. However, this is not the way to think of the problem. The best thing to do is not to regard it as a problem at all. The brain simply works with the image as it is, and there is no ‘right way up’.

The receptors in the retina are of two main types, rods and cones (so called because of their shapes in cross-section). The cones are responsible for daylight (photopic) vision and are of three types that are maximally sensitive to red, green or blue light (although there is a lot of overlap between the sensitivities of the different cone types). As a group, the cones are maximally sensitive to yellow light. The rods are sensitive to much lower levels of light and are responsible for night (scotopic) vision. During normal daylight levels of illumination, rods are not active as there is just too much light. Rods are maximally sensitive to green/blue light - which is why grass (for example) may appear relatively brighter at night than it does during the day. This change in the peak colour-sensitivity of the visual system as it alters from photopic to scotopic vision is referred to as the ‘Purkinje shift’ - named after the Czech physiologist who first identified it. There is an intermediate range of light levels (mesopic) where both the rods and cones are active to some extent.

Figure 2.3 The human eye in cross-section.

Copyright: Alila Medical Media/Shutterstock.com.

Each receptor has a ‘receptive field’ - that area of the field of view where, if there is light of the right wavelength present, the receptor will respond to it. If it is dark in that area of the visual field, or the wavelength of the light is outside the receptor’s range of sensitivity, the receptor will not respond. The responses of all the receptors are then carried from the eye by retinal ganglion cells, the axons of which make up the optic nerve. The optic nerve passes back through the retina, and as there are no receptors at this point, each eye has a ‘blind spot’ - although this is not usually perceived, as the lack of vision in that spot is either covered by the other eye or ‘filled in’ by the brain.

So, given that receptors in the retina respond to light, would more light help with seeing pedestrians? The short answer to this is, ‘Not necessarily’, due to the way the visual system works. Dipped-beam headlights have been found to provide illumination in the high end of the mesopic range, and full beam into the photopic range (Olson et al., 1990). Hence, object recognition is largely mediated by the cones at the light levels found in driving at night. An appropriate next step would be to focus on how the responses of cones are processed by the visual system - and whether more light would help.

A simple comparison of the number of receptors with the number of retinal ganglion cells provides a clue to the complexity of the retina. There are many more receptors (over one hundred times more) than there are ganglion cells, and this suggests that each ganglion cell is carrying information from more than one receptor. Many ganglion cells have a more complex receptive field than that of the receptors serving them, and the most common form of receptive field is illustrated in Figure 2.4. The receptive field shows a simple centre-surround configuration, and a number of receptors will feed their responses into both the centre and the surround. Considering the ‘on-centre’ receptive field shown on the left of the figure, if light falls in the centre of the receptive field, the ganglion cell will respond more vigorously. If, however, light falls within the surround of the receptive field, the ganglion cell will respond less vigorously. If the whole of the receptive field (centre and surround) is illuminated, the two responses balance out and the cell will not respond at all. The cell on the right is the other way around - light in the centre will inhibit its response, whereas light in the surround will excite it (hence, ‘off-centre’). It will still not respond to evenly spread illumination, indicating that absolute light level is not the most important factor governing activation.
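To make the centre-surround idea concrete, here is a minimal sketch (in Python, with made-up response units, not a physiological simulation) of an on-centre ganglion cell: excitation from the centre is offset by inhibition from the surround, so uniform illumination, however bright, produces no response. An off-centre cell is simply the same computation with the sign flipped.

```python
# Toy model of an on-centre retinal ganglion cell (a sketch, not a
# physiological simulation): the response is the light falling on the
# centre minus the light falling on the surround.

def on_centre_response(centre_light, surround_light):
    """Excitation from the centre, inhibition from the surround."""
    return centre_light - surround_light

print(on_centre_response(1.0, 0.0))  # light only in the centre -> strong response (1.0)
print(on_centre_response(0.0, 1.0))  # light only in the surround -> suppressed (-1.0)
print(on_centre_response(1.0, 1.0))  # uniform illumination -> responses cancel (0.0)
print(on_centre_response(2.0, 2.0))  # brighter uniform light -> still no response (0.0)
```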

Figure 2.4 A representation of the receptive fields of on-centre and off-centre retinal ganglion cells.

Returning to our original problem of detecting pedestrians at night, the responses of ganglion cells found so early in the visual system indicate that simply providing more light by, for example, fitting brighter headlights to our cars may not make pedestrians easier to see. Most ganglion cells do not respond to light per se, they respond to contrast, and this is a fundamental property of the human visual system (although the visual system can register overall light levels, which is helpful in maintaining the diurnal rhythm). When you think about it, this has obvious benefits. One of the things the visual system has to do is to separate objects out from the background so that we can recognise them. Edges between objects and the background are usually defined by contrast. If the contrast between an object and its background is low, it is difficult to ‘pick out’ that object and recognise it; this is the way that (some) camouflage works.

Now look back to Figure 2.2. The pedestrian is likely to be difficult for a car driver to see, not because there is not enough light, but because the contrast between the pedestrian and the background is low. Anything that increases the contrast of the pedestrian will, in all likelihood, make them easier to see, but just increasing the amount of light (such as having brighter headlights) may not help as much as one might think. More light on the pedestrian may also mean more light on the background, with little effect on the overall contrast.
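A toy calculation makes the point. Using one standard definition of contrast (Weber contrast, the luminance difference between object and background divided by the background luminance), scaling both luminances by the same amount, as brighter headlights might if their light falls on pedestrian and background alike, leaves the contrast unchanged. The numbers below are illustrative, not measured values.

```python
# A sketch using Weber contrast, C = (L_object - L_background) / L_background,
# to show why brighter headlights may not help: if the extra light falls on
# the pedestrian AND the background, both luminances scale together and the
# contrast is unchanged. (Illustrative numbers, not measured values.)

def weber_contrast(l_object, l_background):
    return (l_object - l_background) / l_background

pedestrian, road = 1.2, 1.0                       # dimly lit, low-contrast scene
print(weber_contrast(pedestrian, road))           # ~0.2
print(weber_contrast(pedestrian * 5, road * 5))   # still ~0.2 with 5x more light
```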

Another factor to consider is that pedestrians do not usually fill the entire visual field of a driver. If they do, something has probably gone seriously wrong and they are on the windscreen. Usually, the pedestrian is only a small part of the visual scene. For example, in Figure 2.2, other features include a streetlight and light coming from the moon (and reflecting off the damp road surface). Both provide localised, high-contrast features in the field of view. If you blanked out these high-contrast areas, and left the contrast of the pedestrian the same, would it make the pedestrian any easier to see? The intuitive answer is that it would make no difference as it is the contrast of the pedestrian that is important, but it is not that simple. The context (i.e. the rest of the visual field) makes a difference.

The human visual system has to cope with an enormous range of contrasts (looking at a black car key you’ve dropped in a dim footwell, compared with looking at sunlight reflecting off a damp road, for example), and it does this by adjusting the overall ‘contrast sensitivity’ of the system (rather like adjusting the exposure setting for a camera). Van Bommel and Tekelenburg (1986) looked at the detection of low-contrast pedestrians by drivers and suggested that bright areas in the field of view lower drivers’ overall contrast sensitivity and mean that lower-contrast items, such as pedestrians, are more difficult for the driver to detect.
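A minimal sketch of this idea follows, assuming a simple gain rule in which sensitivity is set by the brightest patch in view; the gain rule and threshold value are invented for illustration, and this is not Van Bommel and Tekelenburg’s model.

```python
# A toy gain-control sketch of the idea in Van Bommel and Tekelenburg (1986):
# the visual system sets its overall contrast sensitivity (its 'gain') to suit
# the scene, rather like a camera's exposure. A bright source in view drives
# the gain down, and a low-contrast pedestrian can fall below the detection
# threshold. The gain rule and threshold here are invented for illustration.

def detectable(target_contrast, brightest_luminance, threshold=0.1):
    gain = 1.0 / brightest_luminance      # assumed: gain set by the brightest patch
    return target_contrast * gain >= threshold

print(detectable(0.2, brightest_luminance=1.0))   # dark scene: True, pedestrian seen
print(detectable(0.2, brightest_luminance=10.0))  # glare from a streetlight: False
```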

All else being equal, however, the higher the contrast of a pedestrian, the better chance they have of being seen - as contrast is so important to the human visual system. So, rather than increasing the illumination, an alternative (or additional) solution to making a pedestrian more visible is to change the characteristics of the pedestrian so that they are of a higher contrast. Those of a certain age in the UK may remember the public information campaign that advised, ‘If you must go out at night, you really should wear something white or carry in your hand a light.’ Given that the background is usually fairly dark at night (although not always: the pedestrian could be silhouetted against a light, for example), making the pedestrian lighter will tend to increase the contrast.

Even better than wearing something white would be to use ‘conspicuity enhancers’, such as retroreflecting bands or patches on the pedestrian’s clothing. Retroreflectors are designed to return as much light as possible back in the direction from which it came, and so they tend to be particularly effective in enhancing the contrast of people wearing them when illuminated by, for example, headlights (Luoma et al., 1996).

Retroreflectors of the same spatial extent, and generating the same contrast, are more effective if placed in a bio-motion configuration. This difference gives an indication that human perception is about more than just contrast (more on this later). So, an even better solution would be to design clothing that positions the retroreflectors on the joints (elbows, wrists, knees, ankles) to create what has been termed ‘biological motion’. The human gait has particular characteristics (speed, stride length and so on) that differentiate it from, say, a swaying tree or flapping bin bag. These biological-motion characteristics are familiar to a driver (remember that in Section 2.1 we talked about the importance of knowledge in perception) and appear to make pedestrian detection easier for drivers (Luoma et al., 1996).

While contrast is crucially important to visibility, it is not something that humans demonstrate a great awareness of. Pedestrians appear to show little appreciation of the effect of what they are wearing on their visibility. Tyrrell et al. (2004) found that, on average, pedestrians believe they can be seen 1.8 times further away than they really can. A pedestrian wearing black has a tendency to overestimate the distance at which they can be seen by a factor of seven. When using bio-motion reflectors, however, Tyrrell et al. found that pedestrians actually underestimated their visibility (by a factor of 0.9); that is, they believed they were less visible than they actually were. Such an inappropriate judgement of their own visibility could explain why more pedestrians do not just get out of the way of an approaching car. There is perhaps an implicit assumption on the part of a pedestrian that if they can see the car (with its multi-watt headlights), the driver can also see them. This is unfortunately one mistake that it may be difficult for the pedestrian to learn from.

This chapter will now consider two distinct theoretical approaches to perception and how they can be applied to explain perceptual aspects of driving. The first approach is the ecological theory of James Gibson (1950, 1966, 1979), which emphasises what perception is for (interacting with the world) and places little or no emphasis on stored knowledge. The second approach is the constructivist theory of Richard Gregory (1980) and others, which considers knowledge as of central importance to perception. At first, it will appear as though the two approaches are wholly irreconcilable, but, as will become apparent, this is not the case.

2.4 GIBSON’S ECOLOGICAL APPROACH TO PERCEPTION

The finding that biological motion enhances visibility emphasises an important aspect of our perceptual world, which we have not really considered so far - it is highly dynamic. While driving, the car and driver are moving, as are many other things in the scene. The importance of dynamic perception has been emphasised in the theories of James Gibson (1950, 1966, 1979), who put forward what was at the time a radical (and largely ignored) theory of perception.

What Gibson proposed was an ecological theory of perception. A crucial aspect of Gibson’s theory is the importance of what perception is for. In this conceptualisation, perception is less about working out what something is, and more about working out what to do with it - perception for action. Rather than being a passive observer of the environment, Gibson’s approach emphasises that any individual is moving and interacting with that environment and that a key role of our perceptual systems is to support that interaction by registering the ambient optic array (essentially the visual field already discussed). Gibson’s theories emphasise the central importance for perception of information readily available in the visual scene, and place little or no importance on the role of stored knowledge or attention. A visual system working in the way that Gibson suggested could be represented by a much simpler version of Figure 2.1, with a direct link from sensation to perception - in other words, direct perception. This is referred to as a bottom-up approach as it emphasises the processing of information coming from the bottom end of the system - the senses. Other theories (considered later) that emphasise the importance to perception of processes internal to the individual, such as knowledge and expectations, are referred to as top-down approaches.

Let us consider an example of how direct perception might work. Even if things in the world are not moving, if the observer moves, there will still be relative motion (with respect to the observer). If an individual moves forward (whether walking, running, skiing, driving etc.), the world, relative to them, moves past them. This movement will be registered as what Gibson referred to as optic flow. Optic flow refers to the differential motion of the optic array with respect to the viewer. If an individual is moving in a straight line towards something, then the point towards which they are moving appears motionless (but only that single point). Everything around that single point will appear to move outwards in the optic array as the individual moves closer. Figure 2.5, for example, gives an indication of the optic-flow field generated by a driver approaching a stationary car in their line of travel.

Drivers can, in theory, use this optic flow to derive important information about time-to-contact (TTC) with an obstacle in their line of travel (or of an object approaching them). The TTC can be obtained by dividing the visual angle subtended by the obstacle (essentially a measure of the size of the object at the eye) by the rate of change of that visual angle - a measure referred to as τ (tau). Put more simply, people can use the rate at which an object increases in size to gauge their (or its) speed of approach. Gibson proposed that people can use such information derived from optic flow to guide their interaction with the world. It has been suggested, for example, that drivers can use changes in τ to control their braking (Lee, 1976), although sensitivity to τ is known to decline at longer TTCs (Schiff and Detwiler, 1979). The driver does not need any extra information or knowledge to use optic flow to control their actions. Everything that is needed to calculate heading and TTC is there in the optic array. More generally, everything we need to interact with the world is there in the visual stimulus.
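As a rough illustration of how τ works, the sketch below estimates TTC purely from two samples of the visual angle an obstacle subtends, with no knowledge of its distance, size or speed. The scenario (a 1.8-metre-wide car approached at 2 m/s) is invented for illustration.

```python
# A sketch of tau (Lee, 1976): time-to-contact estimated from the optic
# array alone, as the visual angle subtended by an obstacle divided by the
# rate at which that angle is growing. No distance or speed is needed.

import math

def tau(angle_now, angle_before, dt):
    """TTC estimate (seconds) from two samples of visual angle (radians)."""
    rate_of_change = (angle_now - angle_before) / dt
    return angle_now / rate_of_change

# A 1.8 m-wide car seen from 50 m, then from 48 m one second later (i.e.
# approaching at 2 m/s, so the true TTC is 24 s).
theta_before = 2 * math.atan(0.9 / 50)
theta_now = 2 * math.atan(0.9 / 48)
print(tau(theta_now, theta_before, dt=1.0))  # ~25 s, close to the true 24 s
# (the small gap comes from the coarse one-second sampling)
```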

Figure 2.5 An indication of the optic-flow field as a driver approaches a stationary vehicle in the roadway.

Source: photograph courtesy of Karen Jackson.

While it seems reasonable that drivers can use τ to control their braking, it seems unlikely that this is all they use (another possible method will be considered later). Kiefer et al. (2006) found that drivers are able to make a rapid judgement of TTC from a brief glimpse of the road ahead - consistent with a ‘fast’ perceptual judgement based on optic flow. Kiefer et al. also found, however, that judgements of TTC varied with vehicle speed, which should not be the case if only optic-flow information is being used, and, rather worryingly, that TTC was consistently overestimated (drivers thought they had longer before contact than they actually had). Another issue, of course, is that any calculation of TTC does rather presuppose that a driver is aware of the need to brake in the first place. As Rock and Harris (2006) point out, changes in τ can be useful in controlling the rate of braking, but are less useful in determining when braking should be initiated. Direct perception can explain how TTC can be calculated from the optic array, but struggles to explain why sometimes drivers do not brake appropriately, or at all. This situation will be considered in the next section.

2.5 BRAKE OR BREAK - A FAILURE OF DIRECT PERCEPTION

Have a look at the vehicles in Figure 2.6. The vehicles range from a bicycle to a hovercraft but have one thing in common: they have been designed to be conspicuous. They are liberally covered in retroreflective material that should provide a high-contrast stimulus to any approaching car drivers, particularly if viewed against a dark background. These are the types of vehicle that are designed to operate in traffic (perhaps less so in the case of the hovercraft) and, particularly for vehicles such as the police car, may have to stop in traffic (if, for example, there is a problem further up the road). With their high-contrast livery augmented by flashing lights, these vehicles should be highly visible. Time-to-contact should be easy to calculate. So why, then, do drivers crash into the back of such vehicles and claim subsequently (if they are lucky enough to survive the collision) that they did not see them?

Figure 2.6 Now you see me, now you don’t.

This class of RTA is usually referred to as ‘looked but failed to see’ (LBFS). The term was first coined by Sabey and Staughton (1975) and first published by Hills (1980). It refers to occasions when drivers have driven into something that was clearly there to be seen, and claimed subsequently that they simply did not see it. A study looking at accident data collected over the course of a year (beginning in 1999) in the UK, and reported in Brown (2005), recorded the contributory factors that were judged to have precipitated driving accidents. LBFS errors were reported as a contributory factor in nearly 8 per cent of all accidents in the sample.

Often, the vehicle that is hit in an LBFS collision does have relatively low ‘sensory conspicuity’. It has, for example, low contrast with its surroundings - and these are easier cases to explain. Some vehicles, however, such as those shown in Figure 2.6, appear to have extremely high conspicuity, and yet still drivers may not ‘see’ them. It seems unlikely that drivers did not look at the obstruction for the whole of their approach. For example, Olson et al. (1989) found that if drivers were following a lead car in daylight on a straight road, their fixations on the lead car accounted for about 37 per cent of the total fixations, and 54 per cent of the total time.

Langham et al. (2002) investigated LBFS collisions in which stationary police cars, fitted with a full range of sensory conspicuity enhancers (including reflective and retroreflective materials, flashing lights, cones etc.), such as the police car in Figure 2.6, were hit by drivers who subsequently claimed that they did not see them. They obtained details of twenty-nine collisions involving police vehicles that fitted the criteria for an LBFS accident, from twelve UK police forces. Langham et al. found that 39 per cent of the reports contained evidence that the driver did not brake at all before the collision, and 70 per cent of the offending drivers’ statements included the phrase ‘I did not see it’.

From this survey, Langham et al. identified a number of features of LBFS accidents:

✵ There were more accidents when the police vehicle was parked ‘in line’ (stopped in a lane and facing in the same direction as the prevailing traffic) than when it was parked ‘echelon’ (parked across a lane ‘side-on’ to the direction of traffic).

✵ Deployment of warning signs and cones did not guarantee detection.

✵ Although the accidents usually occur on motorways and dual carriageways, 62 per cent of the accidents examined appeared to be within 15 km of the perpetrator’s home.

✵ The offending drivers were nearly all over the age of 25. This is an unusual facet of these data. Novice drivers appear to be under-represented in the sample - in many classes of accident they are over-represented.

While Gibson’s bottom-up theories are highly relevant to a dynamic task such as driving, LBFS accidents tend to involve more experienced drivers on roads that those drivers know well. These data indicate that previous experience (top-down processing) also has a crucial part to play in these accidents.

Langham et al. investigated further the role of experience in accidents of this kind. A series of video clips were shown to two groups of drivers - experienced and inexperienced. The drivers were asked to identify potential hazards. In just one of the video clips shown there was a stationary police car: parked either in line or echelon (slanted). Experienced drivers recognised the echelon-parked police car as a hazard faster than the in-line one. Inexperienced drivers took about the same amount of time to detect the hazard whatever the parking orientation of the police car. Consideration of drivers’ knowledge of ‘normal’ driving situations suggests a possible explanation for this finding. When parked ‘in line’ the police car is in the same orientation as any other car driving along the road and, particularly if a driver is approaching from directly behind the stationary car, there are very few cues to indicate that it is not moving. A car parked echelon, however, is clearly not in the ‘usual’ orientation for a moving car on the road.

These findings suggest that experienced drivers take longer to perceive the in-line police car as stationary, because their driving experience (top-down information) will tend to suggest that a car in an in-line orientation on a dual carriageway is moving - novice drivers simply have less experience of perceiving cars in this way and are less likely to make the same assumption.

2.6 A CONSTRUCTIVIST APPROACH TO PERCEPTION

But why should experience affect our perception of the world? The police car is still there and blocking the line of travel of the driver whether or not the observer is an experienced driver. It is a feature of the world. Bottom-up processing of the ambient array will reveal that an obstacle is ‘there to be seen’. Why should a driver’s experience or knowledge of the world affect that? The clue comes from a phrase often attributed to the philosopher Immanuel Kant: ‘We see things not as they are, but as we are.’ This phrase rather beautifully encapsulates the interplay of bottom-up information (seeing things as they are) with top-down information (seeing things as we are) and suggests that top-down processing may sometimes override bottom-up.

An approach that emphasises the importance of top-down processing in perception is the constructivist theory initially proposed by Irvin Rock (1977, 1983) and Richard Gregory (1980) - although Gregory freely acknowledged the importance of earlier work by Helmholtz and Wundt in developing his theories. The theory is referred to as a constructivist theory because it is based on the notion that it is necessary for us to ‘construct’ our perception of what we see from incomplete sensory (bottom-up) information. Unlike Gibson’s theories, the constructivist approach does not assume that everything we need for perception is there in the visual stimulus. As mentioned, the assumption is that the visual input is not complete, and that we use what we already know (top-down) to fill in the gaps and interpret the sensory (bottom-up) information. In order to do this, Gregory suggested, we act as ‘scientists’, generating perceptual hypotheses (predictions) about what we may be seeing and testing those hypotheses against the sensory information coming in.

Figure 2.7 The Ponzo illusion (Ponzo, 1910).

Gregory suggested that the importance of knowledge in our perception is evident in the way that we perceive visual illusions. For example, look at the illusion in Figure 2.7. This is the well-known ‘Ponzo illusion’ (Ponzo, 1910). The two horizontal lines are the same length, but the top one invariably appears longer. The constructivist theory would explain this illusion by suggesting that we attempt to interpret this graphically impoverished image using our implicit knowledge of the 3-D world in which we live. The two slanting lines then become not just two slanting lines on a flat page, but the edges of (for example) a road receding into the distance. Once this interpretation is made, the two lines appear to be at different distances on that road, with the upper horizontal line being further away. To explain the illusion, we have to accept that we also ‘know’ that things that are further away give rise to a smaller image on our retina and we scale them up to make allowances for this (we don’t perceive people as shrinking in size as they walk away from us). This is an example of size constancy. In the Ponzo illusion the two lines are actually the same length, but one appears to be further away and so is scaled up by our visual system, giving the impression that it is longer.
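A minimal sketch of the size-constancy account: if perceived size is (roughly) retinal size scaled by assumed distance, then two lines with identical retinal images but different assumed distances will be perceived as having different lengths. The numbers are arbitrary illustrative units, not measurements.

```python
# A sketch of the size-constancy account of the Ponzo illusion: perceived
# size is (roughly) retinal size scaled by assumed distance. The two lines
# cast identical retinal images, but the converging 'road' edges make the
# upper line seem further away, so it is scaled up. Numbers are illustrative.

def perceived_size(retinal_angle, assumed_distance):
    return retinal_angle * assumed_distance   # size-distance scaling, simplified

retinal_angle = 0.02                     # both lines subtend the same angle
print(perceived_size(retinal_angle, assumed_distance=10))  # lower line: 0.2
print(perceived_size(retinal_angle, assumed_distance=15))  # upper line: ~0.3 -> looks longer
```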

LBFS collisions can be considered within a constructivist model of perception as just another visual illusion. Drivers (particularly experienced ones) ‘know’ that most cars positioned in line on a road are moving - particularly on a multi-lane road that has parking and stopping restrictions. It is possible that even a very experienced driver will never have encountered a stationary car in the middle of a multi-lane road. When they do encounter a stationary car presenting in the same orientation as a moving car, they rely on what they already know, and are familiar with, about driving on that type of road to generate the ‘most likely’ hypothesis - that what they are seeing is a moving car. They may not realise that that hypothesis cannot be supported until the point of collision.

The ecological approach therefore explains how a driver can bring their car to a halt before hitting an obstacle; the constructivist approach can explain why they sometimes do not.

2.7 TWO APPROACHES, TWO STREAMS

So far, we have considered two distinct approaches. The approach taken by Gibson emphasises the importance of bottom-up information, and sees little necessity for top-down processing. The constructivist approach is almost the opposite. While acknowledging that there must be bottom-up processing (to get information into the visual system in the first place), the importance of top-down processing is central to the theory. It looks as though the two approaches cannot be reconciled into a single theory, but fortunately they do not have to be. It is possible for both approaches to be valid, as there appear to be (at least) two processing streams in the human visual system - as encapsulated in the ‘two streams’ hypothesis (Goodale and Milner, 1992, 2006; Ungerleider and Mishkin, 1982; Westwood and Goodale, 2011).

The two processing streams are apparent even in the optic nerve running back to the visual cortex (Shapley, 1995), which is positioned at the back of the head. The two streams at this point are referred to as the parvocellular and magnocellular pathways, the names deriving from the relative sizes of the cells in the two pathways. After the visual cortex, the visual information is still maintained in (again at least) two distinct streams. One stream is termed the ventral stream and the other is the dorsal stream.

The characteristics of the dorsal and ventral streams rather nicely match those that would be required to underpin the constructivist and Gibsonian approaches. The ventral stream (constructivist) appears to be responsible for the recognition and identification of what is in the visual field. The dorsal stream (Gibsonian), on the other hand, appears to have a different role, with subsystems responsible for working out where things are in the visual field and also guiding the control of actions to interact with those things - that is, perception for action. Considering in more detail the characteristics of the two streams (Goodale and Milner, 1992; Ungerleider and Mishkin, 1982) provides support for the notion that they operate in distinctly different ways that are congruent with the two approaches to perception already discussed:

✵ The ventral system is better at processing fine detail (Baizer et al., 1991) whereas the dorsal system is better at processing motion (Logothetis, 1994), although the differences are only relative and there is some crossover of function.

✵ The ventral system appears to be knowledge based, using stored representations to recognise objects, while the dorsal system appears to have only very short-term storage available (Milner and Goodale, 1995; Bridgeman et al., 1997; Creem and Proffitt, 2001).

✵ The dorsal system is faster (Bullier and Nowak, 1995).

✵ We appear to be more conscious of ventral stream functioning than dorsal (Ho, 1998; Króliczak et al., 2006).

✵ The ventral system aims to recognise and identify objects and is thus object centred. The dorsal system drives action in relation to an object and thus uses a viewer-centred frame of reference (Goodale and Milner, 1992; Milner and Goodale, 1995).

Although Gibson considered visual illusions to be artefactual (making the argument that if you present static impoverished images, the visual system will have to construct its own interpretation), some illusions can reveal what appears to be the operation of the two processing streams.

Figure 2.8a shows a hollow mask of Shakespeare. Under certain viewing conditions (and this illusion is quite robust), when viewing the face from the ‘hollow’ side it looks like a normal, ‘solid’ face, as shown in Figure 2.8b. Gregory (1970) suggests that this is because we are very familiar with faces as visual stimuli and we are used to seeing ‘normal’ faces with the nose sticking out towards us. A hollow face is a very unusual visual stimulus and we appear very resistant to accepting the hypothesis that what we are viewing is a face that is essentially a spatial ‘negative’ when compared with faces we normally see (the bits that normally stick out now go in). Although we can, at times, perceive the face as hollow, we are heavily biased towards seeing it as a ‘normal’ face. Some evidence for this perception being based on acquired knowledge is provided by studies (Tsuruhara et al., 2011) that suggest that infants (5-8 months) appear less likely than adults to see a hollow face as ‘solid’. So far, this illusion appears to be entirely open to explanation within a constructivist framework.

A rather elegant study conducted by Króliczak et al. (2006), however, demonstrated that people’s perception of the hollow face differed if they were asked to interact with it, as compared with just looking at it. The study used a hollow face like the one in Figure 2.8, and participants were asked to estimate the position of targets placed on the hollow (but phenomenologically normal) face and then to use their finger to make a rapid motion to ‘flick’ the target off - as in Figure 2.8c. Participants estimated the position of the target as though the face were solid, indicating that they were perceiving the illusion, consistent with a constructivist approach. When, however, participants were asked to flick the target off, the flicking movements were directed to the ‘real’ position of the face; that is, ‘inside’ the hollow face - an action presumably supported by the dorsal, perception for action, stream, which was not ‘fooled’ by the illusion.

Figure 2.8 The hollow-face illusion (Gregory, 1970).

THE ACTION OF TWO PERCEPTUAL STREAMS IN DRIVING?

Section 2.4 suggested that optic-flow information can be used by a driver to control braking to avoid a collision. Such a process could be handled by the Gibsonian dorsal stream. McLeod and Ross (1983) suggest, however, that although optic-flow information may be of great importance in calculating TTC, cognitive factors (that could be associated with the operation of the ventral stream) may also play a part. For example, if the change in visual size of an approaching vehicle is the only criterion used for judging the TTC, it should make no difference what kind of vehicle it is. Keskinen et al. (1998) found, however, that drivers will pull out in front of motorcycles with a much lower TTC than with cars.

Horswill et al. (2005) found that drivers tend to judge a motorcycle to be further away than a car when they are actually at the same distance (note that the use of τ to judge TTC does not require an appreciation of the distance to the object - only the rate of change of size), and they suggest that this is because the motorcycle is smaller. In the Ponzo illusion, perceived differences in distance generate perceived differences in size. With cars and motorcycles it is the other way around. Perceived differences in the size of motorcycles and cars can, apparently, lead to differences in perceived distance. The motorcycle is not seen as a smaller object at the same distance as a car, but as an object of the same size further away. The perception is illusory. Thus judging the TTC of an approaching motorcycle may also be influenced, to some extent, by constructivist processes such as those described in Section 2.6 and mediated by the ventral stream.

Drivers’ estimations of how far away something is, and how soon they are likely to hit it (or how soon it is likely to hit them), thus appear to be based on the action of both the dorsal and ventral streams.

2.8 PAYING ATTENTION

The discussion above gives us an insight into how drivers are able to bring their car to a stop before they hit an obstacle - and also why sometimes they do not. There is still a puzzle, however, in that some drivers do not appear to be aware of something they are looking straight at. Drivers generally look where they are going, and this is confirmed by studies of drivers’ eye movements. So why do they not see what is there? It is not giving too much away to say that it looks as though they are not ‘paying attention’.

A striking demonstration of people failing to see things where they are looking is provided by the now classic study of Simons and Chabris (1999), although there have been many studies showing similar effects (e.g. Neisser and Becklen, 1975). Simons and Chabris asked participants to undertake a simple task. Participants were asked to watch a video of a basketball game between a white-shirted team and a black-shirted team, and to count the number of passes (bounced or direct) that one or other of the teams made. What the participants were not informed of was that, after 44-48 seconds, a woman dressed in a gorilla costume would walk through the middle of the game. The gorilla was on the screen for 5 seconds and in the region where the participants were looking to count the passes. In a condition where the participants were counting the passes made by the team dressed in white, only 50 per cent of the participants noticed the gorilla. If the contrast of the players and the gorilla was reduced, the noticing rate dropped to 8 per cent.

The gorilla was not ‘invisible’, even in the low-contrast condition. Indeed, once people know the gorilla is there on the video they always see it. The key process operating here is attention driven by expectancies. Participants in this study were not expecting (a top-down process) to see a gorilla, so when one appeared they did not pay any attention to it. Not seeing the gorilla is not a sensory issue, but an attentional one.

Following such a powerful demonstration of the effect of attention, the obvious questions are, ‘Why do we need attention? Why don’t we just process everything?’ The human brain is widely regarded as the most complex system in existence. The cerebral cortex has about a trillion synapses (nerve connections) per cubic centimetre of cortex (Drachman, 2005) and the white matter of the brain of a 20-year-old contains between 150,000 and 180,000 km of nerve fibre. But this is still apparently not enough. Research is clear regarding human information-processing abilities; individuals are unable to register, and process, all of the information potentially available from the senses (e.g. Kahneman, 1973). Thus, drivers cannot simultaneously process all of the information available to them while driving; some of the input will inevitably not reach conscious awareness and/or be acted upon.

Plainly, attention filters out some aspects of the world (this chapter focuses on vision, but the same general principles apply to, for example, audition), so what criteria are used in this filtering?

SPACE-BASED ATTENTION

Attention is allocated to an area where either there is a lot of information to be processed, or the individual expects that objects requiring attention are likely to appear. For example, if a driver is proceeding along a dark road like the one in Figure 2.2, there may be few objects visible to attend to. Attention may be allocated to the area that best supports the driving task (e.g. to the nearside kerb, or lane/centreline markings, to assist in maintaining road position) and/or where experience suggests hazards may appear.

FEATURE-BASED ATTENTION

This may often precede object-based attention (see below) and involves the allocation of attention to some feature of the environment such as colour, movement, sound pitch etc. Objects that have that particular feature are likely to be attended to and ‘picked out’ easily and rapidly. Those objects that do not have that feature may not be attended to. Most and Astur (2007) tested whether feature-based attention may affect drivers’ performance. Drivers in a simulator were required to search at every junction for either a blue or yellow arrow indicating which way to turn. At one critical junction a yellow or a blue motorcycle suddenly veered into the driver’s path and stopped. If the colour of the motorcycle did not match the colour they were searching for (e.g. they were searching for a blue arrow and the motorcycle was yellow), the drivers were far more likely to collide with it, as compared with when it did match (e.g. blue arrow, blue motorcycle).

OBJECT-BASED ATTENTION

Attention is allocated to objects. For example, on a busy road, drivers may be primed to attend to those objects they are most likely to encounter on such a road - usually cars. As a result, they are less likely to attend to, and become aware of, less common road-users such as motorcyclists and pedestrians. Perceptual differences with motorcycles as compared with cars have already been discussed in Section 2.7, but there may also be attentional issues. Magazzù et al. (2006) found that car drivers who were also motorcyclists were less likely to be involved in collisions with motorcyclists than drivers whose only driving experience was in cars. The difference (as Magazzù et al. suggest) could be that motorcyclists are more aware of the possible presence of motorcyclists on the road - and so are more likely to be primed to allocate attention to them as an object on the road.

WHAT ATTRACTS ATTENTION?

The next question is, ‘How, or why, is attention allocated to some aspects of the environment and not others?’ Some stimuli, such as loud noises or flashing lights, will attract attention to them (although LBFS accidents involving police cars suggest that this is not guaranteed), and this process is referred to as exogenous control of attention. Cole and Hughes (1984) suggest that sensory conspicuity is a necessary, but not sufficient, condition for drivers to become aware of the presence of another vehicle. In addition to sensory conspicuity, Cole and Hughes suggest that attention conspicuity is an important factor; that is, how likely an object is to draw attention to itself.

Individuals can also, to an extent, choose where, or to what, they allocate their attention. This process is referred to as the endogenous control of attention and will be influenced by, among other things, an individual’s expectations. Endogenous control of attention may lead to drivers looking for and/or attending to what they expect to see, where they expect to see it. For example, using a driving simulator, Shinoda et al. (2001) found that a ‘Stop’ sign was more likely to be detected by drivers if it was located where they might expect it to be. A ‘Stop’ sign out of position (on the roadside but not near a junction) was less likely to be detected.

Cairney and Catchpole (1996) conducted an analysis of police records for more than 500 collisions at intersections (in Australia). The most frequent factor in the collisions appeared to be a failure on the part of a driver to ‘see’ another road-user in time. In many cases, it was apparent that the driver had ‘looked’ in the right direction but did not ‘see’ the other road-user (a LBFS phenomenon), probably because of a failure to attend to the right object and/or region of space.

It is clear from LBFS accidents that looking is not the same as attending, and a study by Luoma (1988) provides further support for this idea. Luoma recorded the eye fixations of drivers driving on real roads (as opposed to simulators or test-tracks) and questioned them afterwards about what they had seen. Drivers were not always aware of things that they had looked at, and were sometimes aware of things that they had not looked at.

2.9 DRIVEN TO DISTRACTION - ALLOCATING ATTENTION AWAY FROM THE MAIN TASK

Endogenous control and exogenous control of attention are not without issues in practice. Considering driving once more, if a driver allocates attention (or attention is drawn) to something other than those things that are of immediate relevance to the driving task (other cars, pedestrians etc.), then that driver is ‘distracted’ to a greater or lesser extent. Even when drivers acknowledge that driving when distracted is dangerous, driver behaviour does not necessarily reflect this. In a 2007 RAC study (Gambles et al., 2007), 51 per cent of British drivers said they regarded ‘doing other things while driving’ as a very serious transgression; 47 per cent, however, still admitted to occasions where they had done exactly that.

There was a total of 29,757 fatal crashes in the United States in 2011 (National Highway Traffic Safety Administration, 2013). Although many factors may contribute to any particular accident, distraction of drivers was identified as a factor in 3,085 (about 10 per cent) of those crashes. A study conducted in Australia (McEvoy et al., 2007) interviewed 1,367 drivers attending hospital following an RTA. Over 30 per cent of the drivers reported at least one distracting activity at the time they crashed. In some crashes, more than one of the drivers involved reported being distracted. Some drivers reported more than one distracting activity. The major distracting activities reported are shown in Figure 2.9 and Table 2.1.

One of the most widely reported, and researched, distractors while driving is the use of a mobile phone. The negative effect that this has on driving performance is now well supported by a body of published research going back as far as 1969 (Brown et al., 1969). It has been suggested that the use of a mobile phone while driving can increase the accident risk fourfold (McEvoy et al., 2005), and that using a mobile phone while driving can have as big an effect on performance as driving while drunk (Strayer et al., 2006).

Figure 2.9 There can be a lot of distracting things going on while driving. The numbers are the percentage of drivers reporting that activity as a distractor just before (or while!) they crashed.

Source: copyright © Daxiao Productions/Shutterstock.com.

Table 2.1 Self-reported driver distractions prior to crashing (percentage of drivers reporting that distraction at the time of crashing)

Passenger in vehicle                               11.3
Lack of concentration                              10.8
Outside person, object or event                     8.9
Adjusting in-vehicle equipment                      2.3
Mobile phone or similar                             2.0
Other object, animal or insect in vehicle           1.9
Smoking                                             1.2
Eating or drinking                                  1.1
Other (e.g. sneezing, coughing, rubbing eyes …)     0.8

Data from McEvoy et al. (2007). Note that these figures are for people attending hospital following a crash and so do not include crashes where a distraction may have resulted in a minor incident.

The percentages given in Table 2.1 suggest that mobile phone use is some way behind the effect of having a passenger in the car in terms of causing a distraction. Charlton (2009), however, found that conversing with a passenger had less of an effect on driving performance than using a mobile phone. Charlton presents evidence to suggest that this may be due to passengers moderating their conversation if they perceive hazards. The relatively greater proportion of accidents in which having a passenger was a contributory factor (as compared with using a mobile phone) may perhaps be due to the fact that the passenger is likely (one would hope) to be in the car for the whole journey, whereas few drivers spend all of their time while driving on the phone.

It should be noted that using a mobile phone apparently does not only affect attention while driving. Hyman et al. (2010) questioned pedestrians who had just walked across a square, in the centre of which was a clown on a unicycle. They first asked participants if they had seen anything unusual, and if they replied ‘No’ asked them directly if they had seen the unicycling clown. The results are shown in Table 2.2. What is of interest in the context of the effect of mobile phones on attention is that pedestrians using a phone were far less likely to have noticed the unicycling clown - only 25 per cent reported noticing the clown if they were asked directly, and only about 8 per cent spontaneously reported the clown. This is compared with about 51 and 32 per cent respectively for people who were not on the phone. Of more general interest is that the figures suggest that about 17 per cent of the participants that were on the phone, and about 20 per cent that were not, thought that seeing a unicycling clown was not unusual.

The effect of distractors, such as a mobile phone, provides evidence for the theory that our cognitive resources are limited, hence the need for attentional processes. If we allocate our attention to one thing (such as talking on a mobile phone), we have fewer resources available to attend to other things, such as unicycling clowns or driving.

SELF-KNOWLEDGE

This chapter has considered the importance of knowledge in influencing many aspects of our perception of the world, but it is interesting to consider how closely drivers’ knowledge of their own abilities matches the reality. A phrase used by Charles Darwin (1871) is quite prescient in this regard: ‘ignorance more frequently begets confidence than does knowledge’. An RAC survey of 2,029 British drivers (Gambles et al., 2007) showed that most drivers thought they were better than average, and 80 per cent judged themselves to be very safe drivers. The inability of drivers to judge their own driving ability effectively may be linked to issues with ‘metacognition’ - the ability to appraise one’s own cognitive processes. Returning to the distracting effects of mobile phones on driving, Horrey et al. (2008) found that drivers were poor at estimating the distracting effect of a mobile phone task on their own driving performance. A failure of metacognition may also underpin the failure on the part of some drivers to realise how poorly they are performing. Kruger and Dunning (1999) suggest that incompetence on a task robs individuals of the ability to appreciate how bad they are, leading to an inflated belief in their own performance. This may explain why so many drivers feel justifiably (in their view) vexed by the poor performance of other drivers.

Table 2.2 The percentage of people walking alone, or using a mobile phone, that reported noticing a unicycling clown either without direct prompting (they were only asked if they had seen something unusual) or in response to a direct question

                           Walking alone (%)    Using mobile phone (%)
See anything unusual?            32.1                    8.3
See unicycling clown?            51.3                   25.0

2.10 TROUBLE ON MY MIND - THE INTERACTION OF EMOTION AND COGNITION

It would be nice to believe that drivers are, at all times, rational and reasonable in their approach to driving. This, however, appears not to be the case. Although the term ‘road rage’ only warranted inclusion in the Oxford English Dictionary’s list of new words in 1997, an epidemiological study in Ontario (Smart et al., 2003) found that while driving in the past year, 46.2 per cent of respondents were shouted at, cursed or had rude gestures directed at them, and 7.2 per cent were threatened with damage to their vehicle or personal injury. There is no reason to suppose that Ontario is in any way unusual in the incidence of road rage.

While road rage may be upsetting, does it have any effect on driving? Probably. For example, Hu et al. (2013), in a simulator study, found that drivers in a bad mood took more risks and drove more dangerously. They also found that a bad mood has more of an effect on driving behaviour than a good one.

Emotion may also affect the cognitive processes that are crucial for driving. Research suggests that emotion influences attention (e.g. Moriya and Nittono, 2010; Vuilleumier and Huang, 2009) and indeed that attention influences emotion (Gable and Harmon-Jones, 2012). Changes in arousal and affective state may influence the scope of attention (Fernandes et al., 2011), with negative affect generally leading to a narrowing of attentional focus (e.g. Derryberry and Reed, 1998; Easterbrook, 1959; Gable and Harmon-Jones, 2010), and positive affect leading to a broadening of focus (e.g. Derryberry and Tucker, 1994; Easterbrook, 1959; Rowe et al., 2007) - although there is evidence that the link between affect and attention may have some flexibility (Huntsinger, 2012).


Figure 2.10 A simplified representation of the pathway from sensation to perception, now including the influence of emotion.

Such emotional effects on attention may, in turn, affect how people drive. Trick et al. (2012) used positive and negative images to induce emotional states in drivers in a simulator. Steering performance was most affected by highly arousing negative images; Trick et al. attribute this to a narrowing of drivers’ attentional focus, noting that a wide attentional focus is best for some aspects of driving.

But emotion may not just affect attentional processes. This chapter began with a detailed consideration of the importance of contrast in perception in general, and driving in particular. It is now possible to bring the chapter full circle, as there is evidence that something as apparently low level as contrast perception can also be influenced by emotion. Lee et al. (2014) found that negative arousal moves the peak of the contrast sensitivity function to lower ‘spatial frequencies’ - broadly, sensitivity to fine detail decreases as sensitivity to coarse detail increases. It thus seems that emotion can modify our perceptual processes at a number of levels. Figure 2.1 should therefore be modified to incorporate the effects of emotion - see Figure 2.10.
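To see what such a peak shift implies, the sketch below uses a log-parabola model of the contrast sensitivity function. The model choice, the parameter values and the size of the shift are all assumptions made for illustration - this is not the function fitted by Lee et al. (2014):

```python
import numpy as np

def csf(freq, peak_freq, peak_gain=100.0, bandwidth=1.4):
    """Illustrative log-parabola contrast sensitivity function.

    freq: spatial frequency in cycles/degree; sensitivity is the
    reciprocal of the contrast threshold, so higher = more sensitive.
    """
    return peak_gain * np.exp(
        -((np.log2(freq) - np.log2(peak_freq)) ** 2) / (2 * bandwidth ** 2)
    )

freqs = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # coarse -> fine detail

neutral = csf(freqs, peak_freq=4.0)  # hypothetical baseline observer
aroused = csf(freqs, peak_freq=2.0)  # peak shifted to lower frequencies

for f, n, a in zip(freqs, neutral, aroused):
    print(f"{f:5.1f} c/deg: {n:6.1f} -> {a:6.1f}")

# Sensitivity rises at low spatial frequencies (coarse structure)
# and falls at high ones (fine detail) as the peak moves down.
```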

2.11 PERCEPTION AND ATTENTION IN AVIATION

This chapter has so far concentrated on how the processes of perception and attention influence a particular task - driving. Human perception and attention will, however, operate within the same constraints, and fail in the same way, no matter what the task. To illustrate this point, this section will consider a tragic aviation accident that occurred in 1979.

THE MOUNT EREBUS DISASTER

A meticulous examination of this incident is given in the Report of the Royal Commission of Inquiry chaired by Justice Mahon (1981).

Flight TE-901 was a scheduled Antarctic sightseeing flight that was due to leave Auckland in the morning, fly over the Antarctic for a few hours (including a fuelling stop) and return to Auckland in the evening. Previous flights had included a flight down McMurdo Sound (an approximately 40-mile-wide expanse of water and ice connecting the Ross Sea to the Ross Ice Shelf), giving the 3794-metre-high Mt Erebus to the east a wide berth. This route was included in a flight plan, printouts of which were given to the flight crew of TE-901 at a briefing 19 days before the flight.

Between the briefing and the flight, however, the longitude of the final waypoint (destination point for the aircraft) was changed from 164 degrees 48 minutes east to 166 degrees 58 minutes east. The flight crew were not informed of this change and no one noticed the change among the mass of digits representing the flight path of the aircraft. This small change, however, moved the destination waypoint of the aircraft 27 miles to the east. Instead of flying down the centre of McMurdo Sound, the flightpath would now take the aircraft directly over Mt Erebus (if the plane was flying high enough).
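The scale of that small-looking edit can be checked with the standard rule of thumb that a degree of longitude spans about 60 nautical miles at the equator, shrinking with the cosine of latitude towards the poles. A rough worked example, assuming a waypoint latitude of roughly 77.9 degrees south (the latitude is an assumption for illustration; the 27-mile figure above is the source’s):

```python
import math

# Final-waypoint longitude: 164 deg 48 min E -> 166 deg 58 min E.
old_lon_deg = 164 + 48 / 60
new_lon_deg = 166 + 58 / 60
delta_deg = new_lon_deg - old_lon_deg        # ~2.17 degrees of longitude

waypoint_lat_deg = 77.9                      # assumed latitude, deep in the south

# One degree of longitude spans ~60 nautical miles * cos(latitude).
shift_nm = delta_deg * 60 * math.cos(math.radians(waypoint_lat_deg))
print(f"waypoint moved ~{shift_nm:.0f} nautical miles east")  # ~27
```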

The aircraft was flying in clear air under the cloud base and visibility was very good. The cockpit voice recorder established that neither the First Officer nor the Captain expressed any doubt as to where they were. They believed they were flying down the centre of McMurdo Sound, on the flight path that they had been briefed on 19 days before. The aircraft was not, unfortunately, where the crew believed it to be, but was heading straight towards Mt Erebus - and low enough to hit it. Visible landmarks, however, appeared to confirm to the crew that they were where they believed they were. For example, if they had been flying down McMurdo Sound, Cape Bernacchi should have been visible on their right. The crew were indeed able to see a cape on their right, but it was the wrong one - it was in fact Cape Bird. Cape Bird is actually a lot lower than Cape Bernacchi, but it was also closer to the aircraft than Cape Bernacchi would have been had they been flying down McMurdo Sound. With poor cues to distance, the smaller, closer Cape Bird could easily be mistaken for the bigger Cape Bernacchi further away (see Section 2.6).
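The underlying geometry is that of visual angle: a smaller object nearby can subtend exactly the same angle at the eye as a larger object further away, so without reliable distance cues the two are indistinguishable. A minimal sketch with invented heights and distances (the numbers are hypothetical and are not those of the two capes):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Angle subtended at the eye by an object of a given size and distance."""
    return math.degrees(2 * math.atan(size_m / (2 * distance_m)))

# Hypothetical figures: a low cape seen from relatively close by...
near_low = visual_angle_deg(size_m=300, distance_m=20_000)
# ...versus a cape twice as high seen from twice as far away.
far_high = visual_angle_deg(size_m=600, distance_m=40_000)

print(f"{near_low:.3f} deg vs {far_high:.3f} deg")  # the angles are identical

# The retinal image alone cannot tell these apart; expectation
# ('we are over McMurdo Sound') settles the interpretation.
```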

As TE-901 flew on the revised flight path towards Mt Erebus, what was actually in front of the aircraft was a stretch of flat ground, and then the low ice cliff that marked the beginning of the slopes of Mt Erebus. Unfortunately, the sun was directly behind the aircraft, contributing to what is known as a ‘sector whiteout’. Contrast between scene features (such as Mt Erebus and its surroundings) would have been very low (see, for example, Figure 2.11) and there would have been no shadows to provide evidence of terrain features. As a result of this whiteout, Mt Erebus, and the flat ground leading to it, would have appeared as a single expanse of continuous flat terrain, merging with the overhead cloud at some point in the far distance - a view consistent with flying over the flat expanse of McMurdo Sound. There would probably have been no horizon visible at all.

At 12:50 pm on 28 November 1979, Flight TE-901 collided with Mt Erebus. There were no survivors. Initially, the incident was ascribed to ‘pilot error’, but the pilots were later cleared.


Figure 2.11 Mount Erebus, Antarctica. Note how easy it is to see the slope of the mountain where it contrasts against the sky - and how difficult it is to see where it does not, against the cloud.

Source: copyright © Sergey Tarasenko/Shutterstock.com.

Many of the ‘human factors’ that contributed to this incident have been discussed in this chapter in terms of the operation of perception and attention. For example, the importance of contrast in perception was discussed at the start of the chapter, and a lack of contrast between scene features was an important contributor to this disaster. Such factors will affect most, or all, people in the same way - they arise from the basic operation of perception and attention. The generality of these factors is illustrated by a statement from Mahon’s report, bearing in mind that there were at least five people in the cockpit (Captain, First Officer, two flight engineers and a commentator):

It was clear, therefore, that the aircraft had flown on a straight and level flight at 260 knots into the mountain side in clear air, and that not one of the persons on the flight deck had seen the mountain at any juncture.

Of course, one thing that may be different between individuals is their knowledge. Unfortunately, individuals will often try to fit what they see to what they think should be there (a phenomenon known as ‘confirmation bias’). If everybody on the flight deck believed they were flying down McMurdo Sound, then McMurdo Sound is what they would see. Or, as expressed in Mahon’s report: ‘Discrepancies between what appears to be seen and what is known to be visible are automatically cancelled out by the mind in favour of a picture of what is known to be there.’

Or, to put it another way: We see things not as they are, but as we are.

2.12 CAN PSYCHOLOGY HELP?

This chapter has concentrated on how the operation of human perception and attention can lead to problems in real-world tasks, such as driving and flying (and many others). Although some of these incidents happened a while ago (for example, the Mount Erebus disaster occurred in 1979), human perception and attention will still be operating in the same way, and any ‘failures’ of those processes that happened then could happen again now. Given that the processes discussed in this chapter are so general, is there any way that psychology can help to reduce the incidence of accidents that arise from the fundamental properties of human perception and attention?

Broadly speaking, we can do little to change the basics of perception and attention, but the more we know about them, the easier it is to design systems that work with them, an application of psychology that forms a part of the discipline of ‘human factors’. The basic idea is to design systems and procedures that take account of normal and natural human capabilities, rather than trying to change the human to fit the system.

For example, contrast is important for visibility. So, if we want something to be visible, we should increase its contrast with its surroundings, and this general principle is widely applied. ‘Hi-vis’ jackets that have high-contrast retroreflective stripes abound. Some road signs in the UK have a bright yellow border. This provides a high contrast between the yellow border and the sign (ensuring the sign is highly visible) that does not rely on what could be a highly variable background (sky, trees, buildings) to generate the contrast. There are many other similar examples, all illustrating how an understanding of basic psychology leads to improved products and systems that make the world easier for us - and hopefully safer.
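As a reminder of why the yellow border works, here is a minimal sketch using one standard definition of contrast (Michelson contrast); the luminance values are invented purely for illustration:

```python
def michelson_contrast(l_max, l_min):
    """Michelson contrast: (Lmax - Lmin) / (Lmax + Lmin), from 0 to 1."""
    return (l_max - l_min) / (l_max + l_min)

# Hypothetical luminances in cd/m^2: a grey sign against a grey sky...
sign_vs_sky = michelson_contrast(l_max=110, l_min=90)      # ~0.10 - poor
# ...versus a bright yellow border against the same sign face.
border_vs_sign = michelson_contrast(l_max=180, l_min=90)   # ~0.33 - much better

print(f"sign vs sky: {sign_vs_sky:.2f}, border vs sign: {border_vs_sign:.2f}")

# The border supplies its own high contrast with the sign, so visibility
# no longer depends on whatever background the sign happens to be against.
```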

The importance of knowledge has also been emphasised in this chapter, and so manipulating what people know, or expect, can also be useful and effective. The ‘Think Bike’ campaign is an example of such an approach. If a motorist has been primed to expect bikes on the road, they are more likely to attend to, and respond to, them.

The link between psychological research and application can sometimes be quite direct. For example, the research of Langham et al. (2002) on looked-but-failed-to-see accidents generated specific and useful advice for drivers. Langham et al. found that drivers appear more likely to perceive a car parked ‘echelon’ (rather than ‘in line’) as what it is - an obstruction in the line of travel. So, the advice from Langham et al. is: if you break down in the middle of a road, park, if possible, at an angle to the flow of traffic so that your car cannot easily be confused with a moving vehicle. Do not park neatly in line or, to use the advice given by one driving instructor, ‘Park like you stole it.’

SUMMARY

✵ The visibility of stimuli (for example, the visibility of pedestrians to drivers) is determined, primarily, not by brightness but by contrast.

✵ The human visual system is able to cope with enormous contrast ranges by adjusting its overall contrast sensitivity. This has the effect that high-contrast stimuli in the field of view may reduce the visibility of low-contrast stimuli.

✵ Although contrast is a key determinant of visibility, pedestrians show limited awareness of the effect of the clothing they are wearing on visibility.

✵ The ecological perception approach of Gibson can provide explanations for some aspects of driver behaviour (such as judging time-to-contact) but not others (such as a failure to brake at all).

✵ The constructivist approach can explain the failure of drivers to brake to avoid a collision with a highly visible obstruction.

✵ The two theories are congruent with the characteristics of two processing streams in human vision - the dorsal (ecological) and the ventral (constructivist) streams.

✵ A failure of individuals to ‘see’ things where they are looking can be explained by theories of attention.

✵ Attention can be allocated to regions of space, features and/or objects.

✵ If a driver allocates attention to anything other than the main task of driving, they are likely to be distracted and their driving performance will suffer.

✵ Emotion can affect contrast perception, attention, and driving performance.

FURTHER READING

✵ Eysenck, M. and Keane, M.T. (2015). Cognitive Psychology: A student’s handbook (7th edn). Hove: Psychology Press.

✵ Harris, J. (2014). Sensation and perception. London: Sage.

✵ Olson, P.L., Dewar, R. and Farber, E. (2010). Forensic aspects of driver perception and response. Tucson, AZ: Lawyers & Judges Publishing.

✵ The original Simons and Chabris (1999) gorilla video can be viewed at www.theinvisiblegorilla.com/videos.html