This is the first part of a planned series on attention and its relationship to other cognitive processes.
Many years ago I sat in a room at Durham University, headphones piping into my brain words I can no longer recall. As the words were uttered I was repeating them out loud, as I had been requested to do. After I had completed the task I was asked if I could detect any difference between the words being spoken into my left ear and those in my right. I remember being a little confused but replying that, no, I didn’t detect any difference. I was then told that the words being spoken into my right ear were English, but in the left ear, they were French.
I found myself in Durham due to a mandatory requirement of my Open University studies, back in the days when they held summer schools at educational establishments around the country. In my early twenties, I had ended up working as a civil servant in a mind-numbingly boring admin role. To prevent my brain from dissolving into goo, I thought it might be useful to study a course or two in computer science, so I commenced my studies in the less-than-delightful aspects of the Pascal and Visual Basic programming languages. As part of my studies, I was introduced to the vastly more interesting topic of problem-solving, a brief but welcome foray into cognitive science. The following year I switched to psychology and, as they say, the rest is history.
Back to Durham.
The activity with the headphones was all to do with attention, more specifically, selective attention. My cognitive architecture had chosen (or selected) the most meaningful input to concentrate on and, as my knowledge of French is embarrassingly poor, filtered out what it deemed unnecessary.
Curiously, attention is rarely discussed in educational circles beyond its obvious relationship to memory or as a passing comment related to distraction or not paying attention. Serendipitously, it was the death of cognitive psychologist Anne Treisman in February that nudged my recollection of that summer afternoon in Durham. Treisman’s Attenuation Model attempted to describe the process by which some information is attended to more than other information, selecting which details are then passed on to short-term (or more specifically, working) memory.
The study of attention began with a British cognitive scientist called Colin Cherry, who was interested in a curious phenomenon dubbed the cocktail party problem. How is it that, when we find ourselves at a social event where many conversations are taking place at once, we are able to filter out all those we find uninteresting or irrelevant and focus on what remains? With multiple environmental stimuli, how on earth do we manage to catch the name of our favourite television programme or hear our own name amongst the din and confusion?
Studies have found, however, that this isn’t always the case and that some people don’t even recognise their own name when it’s slipped into a conversation. Others are constantly distracted by the conversations around them, recalling very little about the conversation they were supposedly attending to. Individual differences aside for now, what can the cocktail party effect tell us about how we attend to auditory inputs?
Cherry set about studying this phenomenon in a rather clever way. He would equip volunteers with headphones and present two separate auditory messages, one through the right ear and one through the left. He would also ask participants to repeat out loud only one of the messages (a technique known as shadowing). These shadowing experiments found that very little information could be extracted from the non-attended message. Indeed (just like my own experience) there was seldom any recognition of the message when it was spoken in a foreign language or the speech was presented backwards – participants seemed blissfully unaware of startlingly obvious characteristics.
However, certain physical properties were detected, such as a change in tone or loudness. This would indicate that unattended auditory information receives very little processing beyond its basic physical characteristics, a hypothesis supported by other studies (for example, Moray, 1959).
British cognitive psychologist Donald Broadbent was the first to propose a systematic explanation for this phenomenon. While we choose certain information to attend to based on physical properties, such as pitch or loudness, the rest (the unattended information) is filtered out because our cognitive architecture simply can’t cope with multiple inputs. Only the attended-to information, therefore, is selected for higher-level processing.
This is what we tend to describe as selective attention, that is, we are able to concentrate on one thing and ignore (or filter out) all the other information, so much so that (according to the Broadbent model) we have little or no recollection of the unattended stimuli.
However, this might not be entirely accurate. Some processing of the unattended information does seem to take place and not all experimental findings can be explained using the filter model.
What if the inputs are different?
Broadbent used auditorily presented messages and words, but what if the shadowing task combined an auditory presentation with pictures? In this case, both pictures and words were recalled more thoroughly, indicating that if inputs are dissimilar, they are both processed.
Does this mean that similar information is selectively processed and that unattended information is lost? This is the implication of Broadbent’s model, but there are some problems with this hypothesis.
When Broadbent carried out a study whereby volunteers listened to three digits presented one after the other in one ear and (at the same time) another three different digits in the other (a technique known as dichotic listening), participants tended to recall the digits ear by ear, rather than pair by pair.
So, if the left ear heard 247 and the right ear heard 318, participants would recall 247318 and not 234178.
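The two recall orders can be made concrete with a short sketch. This is my own illustration of Broadbent’s dichotic listening finding, not code from any study: the digits arrive in simultaneous left/right pairs, yet participants typically report them channel by channel.

```python
# A toy illustration of Broadbent's dichotic listening result.
# Digits are presented in simultaneous left/right pairs, but recall
# tends to follow the ear (channel), not the order of arrival.

left = ["2", "4", "7"]   # digits presented to the left ear
right = ["3", "1", "8"]  # digits presented simultaneously to the right ear

# Pair-by-pair order: how the stimuli actually arrived in time
pair_by_pair = [digit for pair in zip(left, right) for digit in pair]

# Ear-by-ear order: how participants typically reported them
ear_by_ear = left + right

print("".join(pair_by_pair))  # 234178
print("".join(ear_by_ear))    # 247318
```

The ear-by-ear grouping is what Broadbent’s filter model predicts: attention selects one channel at a time, so output is organised by channel rather than by moment of presentation.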
However, in their classic 1960 study, Gray and Wedderburn took a slightly different approach by using a mixture of words and digits. For example, participants would be presented with “Who, 6, there” in one ear and “4, goes, 1” in the other. Recall was not ear by ear, but by meaning, with the majority of participants recalling “Who goes there” and “4, 6, 1”.
Perhaps we can process some information without being aware of doing so?
In one study (Von Wright, Anderson and Stenman, 1975), volunteers were given two auditory lists of words and told to shadow one while ignoring the other. Some of the words on the unattended list had earlier been paired with a mild electric shock (because psychologists love the combination of willing participants and electricity). Researchers measured the participants’ galvanic skin response as the conditioned words were presented on the unattended list and (sometimes) discovered a physical reaction. That is, participants had associated the word with the electric shock, even though they had been told to ignore that particular list. The same response was also found with similar-sounding words and words with a similar meaning.
The implication here is that selection doesn’t appear to occur as early as the Broadbent model implies.
Treisman suggested that the unattended information is attenuated, or reduced. According to this model, the bottleneck is more flexible than Broadbent’s model allows. Processing is systematic and hierarchical, with analysis based on different features. The first part of the analysis is based on physical cues, syllabic pattern and specific words, while later analysis is related to individual words, grammatical structure and meaning. If insufficient processing capacity is available to allow full analysis, tests towards the top of the hierarchy are omitted. This then helps to explain why information from the unattended channel can still be processed.
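The difference between a strict filter and attenuation can be sketched in a few lines. This is my own toy illustration, not Treisman’s formal model: the signal values, thresholds and gains are invented for the example. In a strict Broadbent-style filter the unattended channel is effectively zeroed out; under attenuation it is merely turned down, so highly salient items (such as your own name) can still cross their low recognition threshold.

```python
# A toy contrast between a strict filter and Treisman-style attenuation.
# All numbers here are invented for illustration.

def recognised(signal, threshold, gain):
    """An item is recognised if its attenuated signal exceeds its threshold."""
    return signal * gain > threshold

OWN_NAME_THRESHOLD = 0.2     # salient items have a low recognition threshold
RANDOM_WORD_THRESHOLD = 0.8  # ordinary words need a much stronger signal

ATTENUATED_GAIN = 0.5  # unattended channel turned down, not off
FILTERED_GAIN = 0.0    # strict filter: unattended channel blocked entirely

# Under attenuation, the salient item still breaks through...
print(recognised(1.0, OWN_NAME_THRESHOLD, ATTENUATED_GAIN))     # True
# ...but an ordinary word does not.
print(recognised(1.0, RANDOM_WORD_THRESHOLD, ATTENUATED_GAIN))  # False
# Under a strict filter, nothing from the unattended channel gets through.
print(recognised(1.0, OWN_NAME_THRESHOLD, FILTERED_GAIN))       # False
```

This is why attenuation accounts for findings the strict filter cannot: hearing your own name at a noisy party, or the galvanic skin response to shock-conditioned words on the ignored channel.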
Those familiar with the theory of working memory might notice some similarities between these models and those developed by memory researchers. Indeed, in the Baddeley and Hitch model of working memory, the component known as the central executive is, itself, an attentional system, in that its main responsibility is to control and regulate cognitive processes.
Another similarity is the view that cognitive processes can become overloaded when presented with competing stimuli. Again, working memory models attempt to explain this in similar terms to those of attention models, including the notion that different types of information are more readily processed simultaneously (pictures and words, for example). See activity 4: Overload your phonological loop.
Unfortunately, the relationship between attention and working memory isn’t particularly well understood, but attention does appear to be more important during memory encoding than memory manipulation (in working memory).
I’ll discuss this relationship in more detail in part 2.