An Introduction to Applied Cognitive Psychology - David Groome, Anthony Esgate, Michael W. Eysenck (2016)

Chapter 11. Biological cycles and cognition

Robin Law and Moira Maguire


Cyclicity characterises nature. For most organisms, physiological processes and behavioural activity are organised into cyclic patterns. These patterns provide timetables for biological and behavioural events, allowing them to be organised effectively. Cycles ensure that important activities such as searching for food, sleeping, and mating take place at optimal times. Given this temporal organisation of activities, it is essential that applied cognitive psychologists take account of the relationship between these cycles and cognitive performance. Is memory affected by time of day? How does working at night affect performance? Is a woman’s cognitive performance affected by her menstrual cycle phase? These are the kinds of question that we will address in this chapter.

The range of biological cycles is vast, from the pulsatile secretions of hormones to breeding cycles and life cycles. Cycles govern the timing of biological events, effectively providing timetables for both internal physiological processes, such as hormone secretion, and active behaviours, such as hunting and migration. The cycles themselves are controlled by oscillations, and the frequency of these oscillations determines the time course or period of the cycle. The period is the time taken to complete a single cycle. For example, the human menstrual cycle is controlled by a low-frequency oscillator, as the time course for ovum (egg) maturation and release is relatively long. However, this low-frequency rhythm is underpinned by the high-frequency rhythms of the individual hormones (Dyrenfurth et al., 1974).

Biological cycles are not simply fluctuations in biological processes to maintain homeostasis, though this is clearly an important role. More precisely ‘they represent knowledge of the environment and have been proposed as a paradigmatic representation of and deployment of information regarding the environment in biological systems: a prototypical learning’ (Healy, 1987, p. 271). Indeed, Oatley (1974) considered that the ability to organise biological oscillations into rhythms allowed the effective timetabling of biological functions, providing ‘subscripts’ for internal processes. Biological cycles can therefore be regarded as a primitive but very effective form of learning. Information about the external environment is represented internally, and this information is used to organise behaviour in an adaptive way.

Two major human cycles will be considered in this chapter. The first is the 24-hour sleep/wake cycle, also known as the circadian rhythm. It takes its name from the Latin circa (about) and diem (a day). The second is the menstrual cycle, which regulates ovum maturation and release in humans. This is an infradian (longer than 24 hours) rhythm, in fact of approximately 30 days.


Circadian rhythms are the best-studied of the biological cycles. In the following sections we will begin by reviewing and discussing the role of circadian rhythms in regulating cognitive performance throughout the day. We will then go on to consider the effects of disruption to these rhythms (e.g. jet lag and shift-work) and the adverse implications for performance, and indeed for health.


Figure 11.1 The circadian cycle. At night, when we sleep, melatonin is released, and cortisol levels are relatively low. Body temperature also drops at night, reaching a trough at about 4.30 am, before gradually rising again. Light inhibits melatonin secretion in the morning, helping us to wake. Cortisol levels peak in the morning, and alertness increases throughout the morning. Motor coordination and reaction time are best during the afternoon. By about 5 pm muscle and cardiovascular efficiency are at their best and body temperature peaks soon after. Melatonin release begins again in the late evening, promoting sleepiness, and body temperature begins to drop.

All life on earth depends on the presence of the sun, and the evolution of circadian rhythms may be traced to an early dependence on the sun as an energy source. Organisms adapted to the 24-hour cyclic fluctuations of this energy source, and so their cells developed a temporal organisation. These rhythms are seen in nearly all organisms, from simple bacteria to human beings. Circadian rhythms serve to ensure that the activities important for survival are temporally organised to match the optimum times within the 24-hour day (Buijs et al., 2003). Circadian rhythms have been observed across a wide range of phenomena, from processes at the level of individual cells to information processing and mood. In humans, the circadian rhythm is closely linked to arousal (indeed, sleep is usually taken as the lower point on the arousal continuum) and temperature. As humans are a diurnal species, these rhythms are arranged so that alertness and performance will peak during the daylight hours and sleep pressure will peak during the dark hours of the night.

A standard measure of the circadian cycle in humans is the daily temperature rhythm. This rises to a peak in the afternoon and begins to fall again, reaching its lowest point, or trough, between 4 and 5 am. The hormone melatonin, secreted by the pineal gland, also plays an important role in regulating the sleep/wake cycle. Melatonin is released mainly at night and promotes sleep pressure (see Figures 11.1 and 11.2). Nowadays it is often sold over the counter as a treatment for insomnia and other sleep difficulties. It has been shown that melatonin can advance or delay the circadian clock depending on the precise time when it is taken (Arendt, 2010), and thus it has been suggested as a countermeasure for circadian disruption in jet lag and shift-work (see below).

The hormone cortisol follows the opposite pattern of secretion to melatonin. Cortisol is secreted by the adrenal cortex, under the influence of a system called the hypothalamic-pituitary-adrenal (HPA) axis. The circadian rhythm of cortisol is characterised by highest levels in the morning and lowest levels in the late evening and early part of sleep (see Figures 11.1 and 11.2). Cortisol has a stimulatory effect on arousal and, in addition to its circadian rhythm, it is secreted in response to stressful situations. Often, in the past, melatonin was seen as the most important hormone in circadian rhythms, while research on cortisol focused on its role in the stress response. Indeed, cortisol is often informally referred to as the ‘stress hormone’. However, in more recent years it has become clear that cortisol is much more than simply a stress hormone, as there is a growing body of evidence suggesting that the circadian pattern of cortisol secretion plays a very important role in regulating both physical and psychological function.


Most biological cycles are believed to be ‘endogenous’, which means they are believed to originate from within the organism. However, these endogenous rhythms are entrained by external or ‘exogenous’ variables. Entrainment refers to the synchronisation of endogenous biological clocks with these exogenous variables. An example of this is the light–dark cycle. Light entrains the circadian rhythm to the 24-hour day, ensuring that our sleep and activity patterns are synchronised with the external environment. In the absence of the normal variations in light across the day, the ‘free-running’ human circadian rhythm is slightly over 24 hours. Exogenous variables like this are referred to in the literature as ‘zeitgebers’, which roughly translates from German as ‘time-givers’. Light is considered to be the primary zeitgeber, though some other important examples include food ingestion, exercise and social activity.


Figure 11.2 Alertness, body temperature, melatonin and cortisol levels through the day. Melatonin is released predominately at night. Levels begin to rise from early evening and fall again as dawn approaches. Cortisol levels are highest in the morning and decline through the day, reaching the lowest point in the late night/early morning. Body temperature decreases through the night, reaching a trough at about 4.30 am; it then rises again through the day, peaking in the early evening. Alertness increases from early morning onwards, reaching a peak in the morning and another in the afternoon. It decreases from evening onwards and is lowest in the early hours of morning, which can be a problem for those on night shift.

A wealth of research into circadian rhythms has been conducted using animal models, in particular the fruit fly (Drosophila). For instance, in flies it has been shown that exposure to normal daily light acts to entrain the rhythm to 24 hours. Kept in darkness, the fruit fly shows a ‘free-running’ activity rhythm of about 23.5 hours. In humans too, when kept in darkness and in isolation from external cues to time of day, the ‘free-running’ circadian rhythm is not quite equal to a day, but in fact runs to just over 24 hours. This was first demonstrated in a classic study by scientist and explorer Michel Siffre in 1962, during which he spent 61 days in isolation in a dark underground cave with neither natural light nor any other time cues, such as a watch or a radio. Siffre’s only contact with the outside world was through a telephone with which he updated his collaborators on his daily activity, such as wake, sleep and meal times. Through this constant monitoring of his patterns of sleep and activity it was found that his daily sleep–wake routine lengthened from 24 hours to about 24.5 hours while underground, and consequently he progressively lost synchronisation with the outside world. Indeed, when Siffre emerged from the cave he thought that the date was 20 August, whereas it was in fact 14 September, so he had subjectively ‘lost’ almost a month. Since Siffre’s study of ‘free-running’ circadian rhythms, work in controlled chronobiology laboratories (chrono refers to time) has suggested that even when time cues are manipulated to create a shorter or longer ‘day’, the circadian clock in humans maintains a period of about 24.2 hours on average (Czeisler et al., 1999). So we know that external zeitgebers entrain the human circadian rhythm to the environment, but that a near 24-hour endogenous circadian cycle is maintained even when environmental cues are manipulated.
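The drift described above can be sketched with simple arithmetic. This is a minimal illustration, assuming a constant intrinsic period; real circadian periods vary between individuals and from day to day.

```python
# Cumulative drift of a free-running circadian clock relative to clock time.
# Illustrative sketch only: assumes a constant intrinsic period, which real
# rhythms only approximate.

def cumulative_drift_hours(period_h: float, days: int) -> float:
    """Hours of phase drift accumulated after `days` without zeitgebers."""
    return (period_h - 24.0) * days

# With the ~24.2-hour mean intrinsic period reported by Czeisler et al. (1999),
# the clock falls roughly 2 hours behind over 10 days in isolation.
drift = cumulative_drift_hours(24.2, 10)
print(round(drift, 1))  # ~2.0 hours
```

Even a small daily mismatch therefore accumulates: without zeitgebers, subjective time and clock time steadily diverge, which is exactly what Siffre experienced underground.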

Siffre’s study was largely driven by the political climate of the 1960s. Plans for a mission to the moon, and anxieties about the need for nuclear fallout shelters during the Cold War, meant that scientists were interested in finding out how the human body would cope in an environment without natural light. While the threat of nuclear war may have largely subsided, and few of us are likely to be considering a space mission in the near future, understanding the entrainment of circadian rhythms is proving to be enormously important for a quite different reason. Over the past 100 years or so, the rapid advancement of technology has changed the way we live. In particular, the introduction of electric lighting in the late 1800s has had a huge impact on modern humans, with many societies around the world now living a ‘round-the-clock’ lifestyle. These technological and lifestyle changes preceded any thorough scientific understanding of circadian rhythms in humans. It is only in the past 50 years or so that we have begun to understand how this disrupts the circadian system, and in turn can disrupt a range of cognitive functions.


Research has identified the circadian pacemaker in humans as the suprachiasmatic nucleus (SCN) of the hypothalamus. The location of the hypothalamus is shown in Figure 11.3. The SCN is located just above the optic chiasm, hence its name ‘suprachiasmatic’ (i.e. ‘above the chiasm’). This means that it is ideally located for its role of receiving light information from the retina and using this to entrain endogenous circadian rhythms. Indeed, there are extensive connections from the retina to the SCN, supporting the notion that light is the primary zeitgeber for the human circadian rhythm. Animal studies have shown that if the SCN is removed, the durations of sleep and wake remain the same, but these behaviours no longer show a regular cycle. As such, it is clear that the SCN does not directly control these behaviours, but rather synchronises them to the external light/dark cycle.


Figure 11.3 The location of the hypothalamus, the pineal gland and the pituitary gland in the human brain. The SCN is a small nucleus of cells in the hypothalamus and contains the circadian ‘clock’. The pineal gland secretes the sleep-promoting hormone melatonin. The pituitary gland makes up part of the HPA axis, responsible for secreting the arousal-promoting hormone cortisol.

Apart from the SCN there are other ‘peripheral clocks’ in various regions throughout the body, including the liver, pancreas, heart and brain. These peripheral clocks show ‘free-running’ circadian rhythms in isolation, but are synchronised by signalling from the SCN. The SCN entrains the rhythms of peripheral ‘clocks’ (including those in the brain) via a range of signalling methods, which include direct influence by neuronal input to various organs of the body and indirect influence via regulation of hormone secretion (Menet and Rosbash, 2011). For example, the SCN regulates the circadian rhythms of cortisol and melatonin secretion (Buijs et al., 2003). Light information via the SCN inhibits melatonin secretion during the day, and also has an immediate inhibitory effect on melatonin secretion at night (Buijs et al., 2003; Perreau-Lenz et al., 2003). The effects of light on cortisol are quite the opposite, as bright light has a stimulatory effect on cortisol secretion. Cortisol secretion is most sensitive to light during the morning (Scheer and Buijs, 1999), which is highly significant as cortisol levels peak in the immediate post-awakening period. This peak is known as the ‘cortisol awakening response’ (CAR), and is thought to prepare the individual for the challenges of the day ahead (Adam et al., 2006). The CAR changes in size from day to day (Law et al., 2013) and has been shown to increase in size in response to bright-light exposure upon awakening (Scheer and Buijs, 1999; Thorn et al., 2004). The effect of the timing of light exposure on these hormones therefore has implications for individuals who may be exposed to light at unusual times, for example in jet lag and shift-work, discussed later in this chapter.

In recent years, researchers have started to debate the possibility that circadian rhythms might play a part in the development of various psychiatric conditions. It has long been known that disruption to the sleep–wake cycle is a frequently reported problem in patients with various psychiatric conditions, including depression, bipolar disorder, obsessive-compulsive disorder and schizophrenia (Jagannath et al., 2013; Karatsoreos, 2014). It has traditionally been assumed that these disorders cause the sleep problems, but it is now argued that abnormal circadian rhythms could be a contributing causal factor in the development of the disorders (e.g. Menet and Rosbash, 2011; Karatsoreos, 2014). This is based on the substantial evidence base for the role of circadian rhythms in cognitive performance, and several demonstrations that impairment of the SCN in animal models has downstream consequences on various brain regions involved in cognition and mood. For example, it is known that signalling from the SCN is vital to the function of the hippocampus within the temporal lobes of the brain (the primary structure responsible for declarative memory). Studies using animals have shown that removing or interfering with the function of the SCN causes impairments of hippocampus-dependent memory (Ruby et al., 2008; Stephan and Kovacevic, 1978). Crucially, however, such impairments of hippocampus-dependent memory can also be induced by simply changing the pattern of the light/dark cycle (Devan et al., 2001). This is particularly relevant, as such changes in light exposure are very similar to what we see in cases of jet lag or in night-shift work in humans (both discussed below). It remains to be seen whether abnormal circadian rhythms do indeed contribute to the development of psychiatric conditions, as research on this topic is ongoing.
However, what is clear is that circadian rhythms play a very important role in day-to-day cognitive performance, and this is what we will discuss in the next section.



Cognitive performance is typically impaired in the immediate post-awakening period. Re-establishment of consciousness is of course rapid upon awakening, but the attainment of full alertness can take some time. This delay in the recovery of cognitive performance post-awakening is known as ‘sleep inertia’ (SI). Typically, studies suggest that SI lasts for anywhere between 1 and 30 minutes post-awakening (Ferrara et al., 2000; Ikeda and Hayashi, 2008). Although some studies have reported detectable performance impairment for up to 4 hours after waking, such an extended SI period only seems to occur in cases of major sleep deprivation (Tassi and Muzet, 2000). Evidence for SI comes from a range of studies, mostly using cognitive tasks such as attention switching and reaction times, but also often using arithmetic tasks, memory tests, visual-perceptual tasks and other measures. It has been suggested that SI mainly affects accuracy of performance in these tasks, while speed is less impaired (Marzano et al., 2011).

It has been quite clearly established that SI is influenced by circadian phase and sleep stage upon awakening (Tassi and Muzet, 2000). A possible cause of SI is the delay in blood flow reaching the anterior cortical regions of the brain after awakening (Balkin et al., 2002). Other researchers have suggested that it may be caused by increased levels of adenosine in the brain during non-REM sleep, which may temporarily continue after abrupt awakening, causing reduced vigilance and increased sleep pressure (Van Dongen et al., 2001). SI, of course, has important implications for a range of professions in which individuals may need to be ready for cognitively demanding activity immediately upon awakening, such as on-call emergency workers or military pilots. Within these professions an individual can be required to wake up at any time of the night and make critical decisions or maintain concentration on complex tasks.

It is possible that in such professions the high level of motivation might improve task performance. Indeed, Hockey (1997) proposed a theory of ‘compensatory control’, suggesting that when motivation is high (as one would expect in an emergency response or flight situation) this may override any impairment brought about by fatigue or circadian phase. However, it should be noted that in this model the additional effort required in order to maintain primary task performance imposes an alternative problem, as it may result in increased strain on the individual and give rise to fatigue.

In order to establish whether the effects of SI vary with the phase of the individual’s circadian cycle, Scheer and colleagues (2008) conducted a study using what is called a ‘forced desynchrony’ protocol. This requires participants to adjust to sleeping and waking at all stages of the circadian cycle. Using body temperature measurements to establish the circadian phase at the time of waking, Scheer et al. found that the worst SI impairment of cognition occurred when participants were woken during their ‘biological night’ (approximately between 2300 and 0300 hours of the circadian cycle). This has important implications for people who need to perform cognitively demanding tasks upon awakening, as it shows that there is a circadian rhythm for SI, such that its effects are most debilitating when the body is most robustly primed for sleep.

With regard to countermeasures, it has been suggested that light exercise and exposure to bright light during this immediate post-waking period may reduce the severity of SI (Ferrara et al., 2000). Caffeine also appears to be highly effective in reducing psychomotor deficits such as impaired attention and reaction times during SI (Van Dongen et al., 2001). For most individuals, the morning routine of showering, making coffee or eating breakfast will see them through the first half-hour of the day. For these people the effects of SI are unlikely to produce anything more than a feeling of ‘grogginess’ and perhaps the occasional ‘absent minded’ error, such as putting something back in the wrong place. However, there may be more serious risks to consider if one engages in dangerous or cognitively demanding activities, such as driving or operating heavy machinery during a period of SI. It is certainly a good idea to avoid driving while drowsy, and it may therefore be important to consider the possible effects of SI before driving in the morning.


Figure 11.4 On-call emergency workers often need to perform complex cognitive tasks immediately upon awakening, including driving and decision making.

Source: copyright bikeriderlondon/


We will now move on to cognitive performance throughout the rest of the waking day. Broadly speaking, performance of cognitive tasks is worst in the early morning and late evening, and tends to be best somewhere in the middle of the day (Valdez et al., 2008). Performance often tends to be related to both the endogenous temperature rhythm and the arousal rhythm (Monk et al., 1983; Wright et al., 2002). However, variations within the general time patterns are seen, depending on the type of task used, and these differences will be discussed in the following section. There are also inter-individual factors, which may influence these associations. An example of this would be the individual differences in performance and preference for time of day between the morning and evening. This preference for ‘morningness’ or ‘eveningness’ is known as ‘chronotype’, and is normally assessed using the morningness–eveningness questionnaire (Horne and Östberg, 1976). There is substantial evidence to suggest that differences in chronotype are associated with differences in biological rhythms, including the circadian rhythms of core body temperature, cortisol secretion, melatonin secretion and the sleep–wake cycle (Adan et al., 2012; Baehr et al., 2000; Duffy et al., 1999; Gibertini et al., 1999). As such, it is unsurprising that chronotype influences several of the circadian effects on cognition discussed below. Another important inter-individual factor here is age. It has been suggested that during adolescence, time of day preferences tend to shift towards the evening, while in older adulthood from around 50 years onwards there is a shift towards morningness (Horne and Östberg, 1976; Schmidt et al., 2007).

Speed of motor task performance has been observed to increase over the day, and seems to match the core body temperature rhythm quite closely (Folkard and Tucker, 2003). Accuracy of performance on a simple motor task has also been shown to be related to the body temperature rhythm, and to wake duration (Edwards et al., 2007). Working memory has also been shown to vary according to time of day. It appears that this too is closely associated with the core body temperature rhythm (Wright et al., 2002). However, it has recently been suggested that these variations in working memory performance may in fact be driven by the circadian fluctuations in attention (Schmidt et al., 2007).

Evidence regarding long-term memory is somewhat more complex, with different effects seen depending on the type of memory observed. Declarative memory recall has been reported to increase across the day for evening types but decrease for morning types (Petros et al., 1990), though it should be noted that such research has focused almost entirely on episodic rather than semantic memory (i.e. memory for events, rather than knowledge). In recent years it has become clear that procedural memory performance is worse at night than during the day, and this is seen even after controlling for the amount of time spent awake (Schmidt et al., 2007). Time of day has also been shown to affect the propensity for neuroplastic change in the human brain (Sale et al., 2007, 2008), and this relationship appears to be modulated by cortisol secretion (Sale et al., 2008; Clow et al., 2014). It has been proposed, therefore, that aspects of the cortisol circadian rhythm, such as the cortisol awakening response, may influence the circadian rhythms seen in memory and other cognitive functions (Clow et al., 2014). However, caution should be taken before drawing any conclusions here, as there have been very few studies in this field and much remains to be understood.

Several studies have reported time-of-day effects for components of executive function (Valdez et al., 2008). There is evidence to suggest that inhibitory control is related to the circadian rhythm, which may be important for control of appropriate responses in a range of situations involving changes to routine, for example driving on the opposite side of the road when in a foreign country. When measured using a Stroop-type task, the worst inhibitory performance was observed approximately 1–2 hours after habitual wake time, and the best performance at about 9 pm (Burke et al., 2015). A study by Allen et al. (2008) explored a range of cognitive performance measures in a sample of fifty-six US college students, in the morning, afternoon and evening. These students showed improved performance in two executive function measures (fluency and digit symbol task performance) in the afternoon and evening compared with their morning performance. However, there is still some way to go in understanding circadian rhythms in executive functions, in particular the issue of the ecological validity of these tests in predicting performance in the real world (Valdez et al., 2008).

There are well-established time-of-day effects for tasks involving attention and arousal (Schmidt et al., 2007; Valdez et al., 2008). Attention is a multidimensional construct, being made up of several components including tonic, phasic, selective and sustained attention. All of these separate components seem to reach their lowest levels around 4 am–7 am (Valdez et al., 2005). Sustained attention (or ‘vigilance’) tends to remain quite stable throughout the day, but begins to decline after the individual has been awake for over 16 hours, probably reflecting the effects of fatigue (Schmidt et al., 2007). Indeed, it is often a challenge in this area of research to tease out the relative contribution of the circadian rhythm from that of progressive fatigue during the day, not least because the two factors are often correlated. It has been suggested that the time-of-day effects for attention, executive functions and working memory may all be caused by fatigue, but that this process may involve a cascade of effects (Valdez et al., 2008). This may begin with impairment of tonic alertness (the most basic component of attention, comprising arousal and general alertness), and this in turn causes the increase in errors observed in these other cognitive tasks (Valdez et al., 2008). This is a plausible theory, but is at this stage still only speculative as research in this area is ongoing. Certainly fatigue is an important factor in cognitive performance, and we will return to it in the next sections of this chapter.

Although the research above describes a general peak in cognitive performance throughout the middle of the day, there is also a well-documented dip in performance at around 12 noon in the 24-hour cycle. This is often referred to as the ‘post-lunch dip’. The effects of the dip have been shown in many cases but not in all, suggesting some individual differences in this effect (Van Dongen and Dinges, 2005). Nevertheless, this effect has been observed in studies using a range of measures of attention and vigilance (Monk, 2005). A dip around this time of day can also clearly be seen in patterns of performance efficiency in the workplace (Folkard and Tucker, 2003). The name given to this effect is misleading, however, as eating food may not be the only causative factor; there is some evidence for a naturally occurring trough in performance and an increase in sleepiness at this time, regardless of food intake. This comes from studies in which participants are unaware of the time of day and have not eaten a meal, yet still show the same dip in performance in this period (Monk, 2005). Indeed, taking a single afternoon nap or ‘siesta’ at this time is common practice in many different cultures throughout the world (Dinges, 1992). While the ‘post-lunch dip’ may therefore be influenced by some form of circadian control, it is certainly made worse by a heavy lunch (especially one with high carbohydrate content). Alcohol at lunchtime should also be avoided, of course, if working in the afternoon. It is also important to bear in mind that time-of-day effects may be due to fatigue and changes in motivation. The effects of fatigue on performance are well documented and are addressed later in this chapter.

Time of day is an important variable to control in laboratory studies. Whether you are conducting a repeated-measures or a between-subjects study, it is essential to test participants at approximately the same time of day (e.g. early morning or late afternoon) in order to minimise possible biases related to circadian phase. Time-of-day effects on cognitive performance also have clear applications in the workplace. Research in this area may potentially offer insight into appropriate timing of work activities so as to enhance productivity, and also importantly to reduce the risk of accidents (Folkard and Tucker, 2003; Wagstaff and Lie, 2011). Indeed, there are well-documented effects of long working hours on safety. Work periods longer than 8 hours are known to carry an increased risk of accidents, and this increase in risk is cumulative, so that the increase in risk at around 12 hours is twice what is observed at 8 hours (Wagstaff and Lie, 2011). This of course has implications for a range of professions, especially those involving shift systems, as will be discussed in the next section. Given that many aspects of cognitive performance deteriorate with time awake, theoretically it may be best to focus one’s workload in the morning and reduce the degree of cognitive demand from the late afternoon onwards (Valdez et al., 2008). However, one should also remain mindful of the post-lunch dip, and that very early in the circadian rhythm performance may be reduced, especially in the presence of sleep inertia.


The circadian rhythm can be disrupted. Two of the most important sources of disruption in everyday life are shift-work and jet lag. Both of these have important implications for cognition and performance, particularly in applied settings such as healthcare, industry and aviation. Jet lag and shift-work are dealt with in the next two sections of this chapter.

11.4  JET LAG

Jet lag is caused by acute de-synchronisation of the circadian system. When flying through a number of time zones (east–west or west–east), the traveller will emerge at a destination with a different light/dark cycle, and their circadian clocks must adjust. The peripheral clocks in the body must also adjust and they do this at different rates, so there may be a great deal of internal de-synchronisation. Flying north–south or south–north does not cause jet lag as there is no change in the light/dark cycle or time of day.

While deeply unpleasant and disruptive for all travellers, jet lag is a particular problem for aircrew. Symptoms include fatigue, insomnia, falling asleep at inappropriate times, headaches, concentration deficits, digestive problems, mood disturbance, and impaired cognitive performance. Symptoms typically occur only after crossing three or more time zones, and then tend to increase in severity depending on the number of time zones crossed (Waterhouse et al., 2007). After crossing time zones, the biological clock naturally shifts by around 1 hour per day on average (Rajaratnam and Arendt, 2001). Therefore the symptoms of jet lag usually disappear after a few days, but can take up to 5 days or more in the case of travelling across nine or more time zones (Waterhouse et al., 1997). It is also well established that symptoms tend to be more severe following eastward rather than westward travel (see Figure 11.6). This is because it is easier to delay than to advance the circadian system (a similar effect is seen for the direction of shift rotation in night shifts, discussed below). Several studies have also shown that symptom severity increases with the age of the traveller, though the reason for this particular association is not yet known (Waterhouse et al., 2007).
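The resynchronisation arithmetic implied above can be sketched as a rough rule of thumb. The `shift_per_day` value below is the average cited in the text (about 1 hour per day), but this is only illustrative: eastward travel typically takes longer than westward, and individuals vary.

```python
import math

def recovery_days(zones_crossed: int, shift_per_day: float = 1.0) -> int:
    """Crude estimate of days to resynchronise after crossing time zones,
    assuming the clock shifts ~1 hour per day on average
    (Rajaratnam and Arendt, 2001). Illustrative only."""
    return math.ceil(zones_crossed / shift_per_day)

print(recovery_days(5))  # crossing 5 zones: roughly 5 days to readjust
```

On this crude estimate, a nine-zone crossing would take over a week to resolve fully, consistent with the observation that symptoms of long eastward journeys can persist for 5 days or more.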

The circadian rhythms of both melatonin and cortisol are desynchronised during jet lag. Following travel across time zones, melatonin secretion is disrupted, and this has been shown to be associated with increased feelings of anxiety and depression (Montange et al., 1981). Jet lag also disrupts cortisol secretion and reduces the size of the CAR (Doane et al., 2010).


Figure 11.5 The time and date in cities across the world. Someone leaving London at 6 pm on 10 May to fly to Auckland would arrive about 24 hours later (the approximate length of a direct flight). Their body clock would ‘think’ it was 6 pm on 11 May, whereas in fact it would be 5 am on 12 May – rather than early evening, it would be early morning. Their clock has to adjust to this new time. Note: British Summer Time is one hour ahead of Greenwich Mean Time (GMT).
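The arithmetic in the caption can be checked with Python's zoneinfo module. This is an illustrative sketch: the year 2024 is an assumption (the caption gives none), and the 24-hour flight time follows the caption.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+; uses the system time-zone database

# Depart London at 6 pm on 10 May (British Summer Time, UTC+1).
# The year 2024 is assumed purely for illustration.
depart = datetime(2024, 5, 10, 18, 0, tzinfo=ZoneInfo("Europe/London"))

# A direct flight takes roughly 24 hours.
arrive = depart + timedelta(hours=24)

# Local time on arrival in Auckland (NZ Standard Time, UTC+12 in May).
arrival_local = arrive.astimezone(ZoneInfo("Pacific/Auckland"))
# The traveller's body clock is still running on London time.
body_clock = arrive.astimezone(ZoneInfo("Europe/London"))

print(arrival_local.strftime("%d May, %H:%M"))  # 12 May, 05:00
print(body_clock.strftime("%d May, %H:%M"))     # 11 May, 18:00

# The offset the circadian clock must absorb: 11 hours.
shift_hours = (arrival_local.utcoffset() - body_clock.utcoffset()).total_seconds() / 3600
```

At roughly 1 hour of adjustment per day (Rajaratnam and Arendt, 2001), an 11-hour shift of this kind explains why full re-synchronisation can take many days.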


Figure 11.6 Jet lag. Flying east necessitates a ‘phase advance’, so the timing of activities such as eating, sleeping, etc., is brought forward. This tends to produce more jet lag than flying west, which involves a ‘phase delay’ or pushing back the onset of activities.

The cognitive impact of jet lag includes lapses in alertness and concentration, and an increased risk of errors and accidents (Waterhouse et al., 1997; Waterhouse et al., 2007). These effects are partially attributable to sleep deprivation, but are also modulated by the timing of the melatonin peak and the low point of the core body temperature rhythm (Arendt, 2009).

While jet lag is an acute disruption, it can also be chronic in individuals who regularly travel across several time zones (e.g. transmeridian flight attendants). Recently there has been some suggestion that chronic jet lag can bring about long-term memory impairment, due to the associated long-term elevation of cortisol concentrations (Cho et al., 2000). For example, learning and memory impairment, and reduced volume of the temporal lobe, have been reported in a sample of female flight attendants relative to control participants (Cho, 2001; Cho et al., 2000). Further supporting evidence has also been provided by animal studies (e.g. Kott et al., 2012). However, it should be emphasised that such effects appear only to be a concern for long-term ‘chronic’ jet-lag cases, and should not be a concern for the occasional traveller.

There has been a lot of research into possible methods of alleviating jet lag. As yet no cure has been discovered, though various countermeasures can minimise symptom severity. During the flight it is important to avoid dehydration, and to avoid alcohol or keep consumption to a minimum. Sleep, wake and meal times during the flight should also be matched, as far as possible, to the appropriate times at the destination. The most effective way to avoid jet lag upon arrival is to adapt the sleep–wake cycle to the destination time zone in advance of travel (Arendt, 2009), and indeed this method is often used by aircrew. However, it requires both time and commitment and will not always be practical. Whether it is actually necessary to adapt the circadian rhythm to the destination will depend on the nature and duration of the stay (Arendt, 2009). Circadian readjustment may be necessary for trips of 4–5 days or more, but for shorter stays (e.g. a 2–3-day business trip) there is little point in adjusting the circadian rhythm, especially since it will require a second period of readjustment on return. Instead, the best solution on a short stopover is to time important meetings within the periods of maximum alertness in the departure time zone (Arendt, 2009). Napping and caffeine consumption can also be effective ways to maintain alertness during this time (Kolla and Auger, 2011). There is also a new class of drugs (chronobiotics) that can be used to shift the timing of the circadian rhythm, including some containing exogenous melatonin. The availability of these drugs is still restricted in many countries, but they are gradually becoming available as new formulations are developed. For example, melatonin is now available on prescription in Europe (Arendt, 2009).


11.5  SHIFT-WORK

Shift-work is in many ways similar to jet lag, but differs in that the de-synchronisation of the circadian system is chronic. Shift-work is prevalent in the modern day, and it is estimated that in industrialised societies over 20 per cent of the workforce are shift-workers (Åkerstedt and Wright, 2009). When individuals begin working night shifts (and therefore begin to sleep during the daylight hours), their sleep–wake activity is in direct opposition to their circadian rhythm. The majority of night-shift workers will be working during the lowest point in their circadian rhythm and trying to sleep at a time of maximum alertness (Arendt, 2010).

During days off, workers will often revert to normal daytime activity, which adds to this de-synchrony. Even in workers who maintain a constant nocturnal pattern of activity, complete adaptation of circadian rhythms to shift-work is rarely seen. This is thought to be because of the combined effects of daylight exposure and social factors, such as family commitments, which prevent adaptation to the shift. Indeed, the few examples of adaptation to night-shift work tend to be seen in unusual locations such as Antarctic bases and North Sea oil rigs, where workers are less affected by social activity and are not required to return home during morning light (Arendt, 2010).

Typically, night-shift workers show altered melatonin and cortisol rhythms (Touitou et al., 1990; Burch et al., 2005). The onset of melatonin secretion in night-shift workers has been shown to be around 7.2 hours earlier than in daytime control participants (Touitou et al., 1990) and tends to be misaligned with the onset of sleep (Sack et al., 1992). The cortisol rhythm is also disrupted following shift rotation, resulting in elevated cortisol levels which can have detrimental effects on both the quality and duration of sleep (Niu et al., 2011). At the beginning of a new night-shift cycle, the CAR is also reduced in size, as in jet lag. In cases of adaptation, the CAR gradually re-synchronises following consecutive night shifts until it returns to baseline. This typically takes about 3 days in men, and 4 days in women (Griefahn and Robens, 2010). It has been demonstrated that re-synchronisation of the cortisol rhythm happens more readily if workers are exposed to bright light during the night shift, and are protected from bright light exposure during the day time sleep period (James et al., 2004). This is a good example of how external zeitgebers are crucial to the entrainment of circadian rhythms, as described in Section 11.2.


Figure 11.7 Is it safe to drive home at the end of a night shift? Studies suggest that driving performance may be seriously impaired at this time.

Source: copyright Peshkova/

Shift-work is associated with increased fatigue, disturbed sleep, reduced alertness, and reduced cognitive performance (Åkerstedt, 1995; Machi et al., 2012). These problems are primarily associated with night shifts, as there is little evidence to suggest that evening working in a shift pattern is disruptive (Gold et al., 1992). As might be expected, there is also an elevated risk of accident and injury when working night shifts (Folkard and Tucker, 2003; Spencer et al., 2006). An additional concern is that often night-shift workers will drive home at the end of the shift, and this may be particularly risky given the impairment of alertness and vigilance. Indeed, driving simulator studies have shown that post-night-shift driving performance is seriously impaired and results in an increased risk of road accidents (e.g. Åkerstedt et al., 2005).

Although the performance deficits during shift-work have been well established, there has been less research into the longer-term effects on cognitive function. In a recent study, Marquié et al. (2014) explored this in a longitudinal assessment of a group of over 3000 shift-workers, examining both cognitive speed and memory. The results of their study indicated an association between night-shift work and chronic impairment of these cognitive functions. This association was strongest in those exposed to shift-work for 10 years or more. Equally alarming was that after leaving shift-work, it took over 5 years for cognitive function to recover.

A clue to understanding the cognitive effects of shift-work in humans may be provided by animal studies. For example, a recent study of chronic circadian disruption in mice has shown that it alters the structure and complexity of neurons in brain regions such as the pre-limbic prefrontal cortex, which plays an important role in executive function and emotional control. These physical changes manifest themselves in adverse behavioural outcomes, including reduced cognitive flexibility and changes to the emotional state of the mice (Karatsoreos et al., 2011).

Further to the various effects on cognition, shift-work is associated with an increased risk of obesity, diabetes, and hypertension. It is thought that these adverse health consequences may also be a product of the misalignment of circadian rhythms with behaviour cycles, such as sleep–wake and meal times (Scheer et al., 2009). Shift-workers have also been found to suffer elevated levels of acute infections such as colds, and also more serious long-term health outcomes. The most common health problem encountered by shift-workers is disturbed sleep (Åkerstedt and Wright, 2009), and the greater sensitivity to infection might well be a result of the decreased immune response caused by this sleep deprivation. Perhaps the most concerning of the health risks is the recent evidence suggesting that shift-work may lead to increased vulnerability to heart disease and various forms of cancer, including much evidence for increased risk of breast cancer (Hansen, 2001; Blask, 2009; Arendt, 2010; Wang et al., 2011; Golombek et al., 2013).

Given the evidence presented, the most sensible recommendation regarding shift-work is simply that it should be avoided. However, for some occupations, for example nursing, this may not be an option. Certainly, if one must engage in shift-work it is advisable to avoid several night shifts in succession, as this can cause accumulation of sleepiness and the risk of accidents (Åkerstedt and Wright, 2009). Taking regular rest breaks during the shift is also important to reduce fatigue and increase performance (Spencer et al., 2006).

Despite a great deal of research on the topic, there is as yet no consensus on the ideal shift schedule. However, one thing that is generally agreed upon is that shift cycles should rotate. This is because permanent night shifts do not normally result in sufficient circadian adjustment to be of benefit to health and safety (Folkard, 2008), and rotation allows the shift-worker to more easily engage in social activities such as family commitments. The general view on shift rotation is that it should involve forward rotation only (e.g. day–afternoon–night), as this allows for phase delay. Backward-rotating schedules require a greater amount of recovery time, as this typically involves a phase advance, and also involves reduced time available to sleep between shifts (Van Amelsvoort et al., 2004). It has also often been suggested that slower rotating shift patterns have a less disruptive effect on sleep (Pilcher et al., 2000). However, this is a contentious issue, and some more recent studies have presented evidence that a fast forward rotating schedule may promote better sleep (Neil-Sztramko et al., 2014), especially for older workers (Viitasalo et al., 2015).

Various countermeasures have been proposed to reduce performance deficits in night shifts, including napping, bright-light exposure and drugs such as caffeine and modafinil. Research suggests that the most effective of these countermeasures is napping (Ficca et al., 2010), and it has been clearly shown that napping can improve night-shift alertness and performance. Indeed, naps and caffeine consumption are the most commonly used countermeasures, and a combination of both may be the best method for improving performance and alertness during a night shift (Schweitzer et al., 2006). Typically, the best time for napping among unadapted night-shift workers will be during the circadian trough, between about 3 and 6 a.m., though naps should be implemented throughout the shift if possible. A nap of 1 to 4 hours will be most effective in reducing sleepiness and improving performance, with longer naps having the greatest benefits (Ficca et al., 2010). It should be noted, though, that there can be a period of sleep inertia in the post-nap period, which must also be taken into account to reduce the risk of accidents and impaired work performance (Signal et al., 2012).

Exposure to bright light during the working hours and avoidance of it during the sleep period is also a highly effective way of adjusting the circadian rhythm to a night shift. Indeed, exposure to morning light on the journey home occurs at a very unfortunate time, as it opposes circadian adaptation to the shift (Arendt, 2010). Boivin and James (2002) conducted a study on night-shift-working nurses, in which one group were exposed to 6-hour intermittent periods of bright light during the shift and then wore sunglasses to shield them from light in the post-shift morning period. A control group meanwhile continued with their habitual night-shift routine. It was found that of the two groups, the workers in the light-regulating condition showed faster circadian adaptation to the shift pattern. This effect of timed light exposure promoting adaptation to night-shift work has since been supported by numerous studies (Neil-Sztramko et al., 2014).

As discussed above with regard to jet lag, the recent introduction of chronobiotic drugs presents a new means of encouraging circadian adjustment, and thus may be useful for reducing some of the symptoms of shift-work. However, the results so far are mixed, as studies of exogenous melatonin have shown varying efficacy for shift-work (Kolla and Auger, 2011). It appears that exogenous melatonin treatment can offer clear benefits for sleep, alertness and performance in shift-work, but the timing of the treatment is vital: taken at the wrong time, melatonin can have the opposite effect (Arendt, 2010).

With regard to recovery from shift-work, the time taken varies, as it depends on the extent to which the individual has adapted their circadian rhythm to the shift pattern. For example, after 12 days of 12-hour shifts it typically takes around 3–4 days to recover, but it has been suggested that even 5 days may not be sufficient if the worker has adapted their rhythm to the night shift (Spencer et al., 2006).


Many of the problems discussed above are either caused, or complicated, by sleep loss and fatigue. It is well established that insufficient sleep is associated with reduced productivity, performance, and safety in the workplace (Rosekind et al., 2010). Both temporary (‘acute’) and longer-term (‘chronic’) sleep deprivation are associated with reduced arousal, psychomotor and cognitive speed, attention, memory and mood stability (Banks and Dinges, 2007; Goel et al., 2009). It takes far longer to recover from chronic sleep restriction than from acute sleep restriction; this is thought to be because chronic sleep deprivation induces changes in brain metabolic function and long-term changes in brain physiology and neural networks (Basner et al., 2013).

Regardless of whether it is acute or chronic, sleep restriction and fatigue can result in considerable impairments of cognitive performance. There is also a heightened risk of accidents when engaging in activities such as driving or operating machinery. It is estimated that around 20 per cent of road accidents are caused by fatigued drivers. There is extensive evidence to suggest that when sleep is restricted to between 4 and 6 hours per night, driving performance is significantly impaired and crashes are more likely to occur (Banks and Dinges, 2007). The impairment of driving performance while fatigued is so severe that it is often compared to driving while drunk. Indeed, a study by Williamson and Feyer (2000) showed that sustained wakefulness of 17–19 hours results in slowed reaction times and other impairments of cognitive performance similar to those seen at a blood alcohol concentration of 0.05 per cent (the legal limit for driving in many countries around the world).

While the safety of workers and the public should be the primary concern relating to impaired performance during sleep deprivation, a secondary concern is the economic cost of poor sleep. In terms of lost productivity, the annual cost of insufficient sleep per employee (in 2007 US dollars) has been estimated at $2,796, more than double the figure for normal, healthy sleepers (Rosekind et al., 2010).

As mentioned with regard to shift-work, one of the most effective countermeasures to sleepiness is napping. Wherever possible, napping should be implemented when severe sleepiness is likely to occur (Ficca et al., 2010). Another effective countermeasure is caffeine. The effects of caffeine in reducing fatigue and encouraging wakefulness are widely recognised. For example, in a classic study by Lieberman et al. (2002), sixty-eight US Navy SEAL trainees were exposed to a 72-hour period of sleep deprivation and randomly assigned to receive 100 mg, 200 mg or 300 mg of caffeine or a placebo. A battery of cognitive tests was administered, and mood and marksmanship were also assessed. Caffeine was found to improve vigilance, reaction time and alertness in a dose-dependent fashion, but had no effect on marksmanship. It was therefore suggested that caffeine is highly effective in reducing several symptoms of fatigue, but does not improve fine motor coordination. However, since that report, further studies have highlighted a range of effects of caffeine on cognition that were not previously appreciated. For example, there is now evidence to suggest that caffeine can improve performance in short-term memory, decision making, reaction speed and accuracy, and even the ability to solve problems by reasoning (Glade, 2010). Nevertheless, the risk of dependence, tolerance and withdrawal effects should perhaps discourage over-consumption of caffeine.

In recent years, there has also been increased public interest in the anti-narcolepsy drug modafinil, which has proved to be effective in promoting wakefulness, vigilance and mood (Minzenberg and Carter, 2008; Repantis et al., 2010). However, despite the very promising results reported by some studies of single doses, repeated doses do not appear to be effective over longer periods of sleep deprivation. Moreover, there is some evidence to suggest that modafinil can induce over-confidence in one’s cognitive abilities (Repantis et al., 2010), so it is possible that it could actually increase risk in some cases, though this is yet to be fully explored. Finally, it should be noted that while drugs such as caffeine and modafinil may help reduce some of the effects of fatigue, they should not be relied upon as a ‘cure’ for extreme tiredness. It is simply not safe to drive or operate machinery when very tired.


There is an increasing amount of research exploring circadian rhythms, and it is now very well documented that disruption to these rhythms has detrimental effects on cognitive performance. Some countermeasures such as napping, light treatment, or drugs such as caffeine, modafinil, and chronobiotics may prove effective in symptom reduction. However, such treatments are often temporary and do little or nothing to prevent the serious long-term health outcomes associated with chronic exposure to circadian disruption. Moreover, reduction of symptoms should not be the highest priority. Instead, the greatest concern should be to understand the factors which are causing the increasingly widespread disregard for biological rhythms in modern society, and the implications this may have for both present and future generations.


We will now turn from daily cycles to monthly cycles. In the following sections we will consider the physiological basis of the menstrual cycle and explore the history and wider context of menstrual cycle research.

It must be emphasised that this research has always had wider social implications. The research is, and has been, conducted in sociocultural contexts whereby women face disadvantage, to varying degrees, relative to men. As with other forms of discrimination, research, including menstrual cycle research, has been used to justify existing inequalities.

It is important to appreciate the methodological difficulties that complicate this research, and we devote a section to discussing these before reviewing the evidence regarding the effects of gonadal (sex) hormones on cognition and performance. Gonadal hormones are those hormones released from the gonads (the ovaries and testes). They are the sex hormones, including estrogen and testosterone.


The menstrual cycle is experienced by most healthy women and girls between the ages of about 12 and 50. The typical cycle lasts between 28 and 32 days, though there is considerable variability both within and between women. Cole et al. (2009) report a mean cycle length of 27.7 days with a standard deviation of 2.4 days. Two oscillators control the menstrual cycle: the ovaries, which release ova (eggs) in a cyclic pattern, and the hypothalamic-pituitary system, which provides feedback via hormones.

A typical cycle can be divided into phases on the basis of hormonal and physiological events driven by a feedback relationship between hormones released from the pituitary gland and the hormones released by the ovary (estrogens); see Figure 11.8.

There are inconsistencies, as researchers define phases in different ways and identify different numbers of phases. Anne Walker (1997) found the number of phases used by researchers to vary from two to fourteen! Traditionally four to five phases were identified, including menstrual and premenstrual. However, much recent work uses two phases, defining the follicular phase as the part of the cycle before ovulation, and the luteal or mid-luteal phase as the part after ovulation. Table 11.1 shows a 28-day cycle divided into both two and five distinct phases. While the 28-day cycle is taken as the standard, many women typically experience longer or shorter cycles. The 95 per cent confidence interval for cycle length is 23–32 days (Cole et al., 2009), meaning that typically ninety-five out of every hundred cycles will fall within this range. Typical cycle length differs between women, but individual women also experience variability in their own cycles. Cole et al. (2009) found the average within-individual variability to be 3.8 days. Anovulatory cycles (cycles in which ovulation does not occur) are also fairly common, particularly in girls and younger women.
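As a quick check, the 23–32-day range can be recovered from the summary statistics reported by Cole et al. (2009), assuming cycle length is roughly normally distributed (mean ± 1.96 SD covers about 95 per cent of a normal distribution):

```python
# Summary statistics from Cole et al. (2009).
mean_length = 27.7   # days
sd_length = 2.4      # days

# ~95% of a normal distribution lies within 1.96 SDs of the mean.
lower = mean_length - 1.96 * sd_length   # ≈ 23.0 days
upper = mean_length + 1.96 * sd_length   # ≈ 32.4 days

print(f"{lower:.0f}-{upper:.0f} days")   # 23-32 days
```

Rounded to whole days, this reproduces the 23–32-day interval quoted in the text.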


Figure 11.8 The menstrual cycle. The menstrual cycle is regulated via the hypothalamic-pituitary-ovarian axis. The hypothalamus releases gonadotrophin releasing hormone (GnRH). This reaches the pituitary and triggers the release of the follicle stimulating hormone (FSH). FSH stimulates the ovary to secrete estrogen. Levels of estrogen and FSH are regulated through a negative feedback loop. Increasing levels of estrogen inhibit further release of GnRH from the hypothalamus. So, as estrogen levels rise, FSH levels fall.


The menstrual cycle must be considered within the wider social and cultural contexts in which it occurs. Menstruation has historically been defined in very negative terms across many cultures (Walker, 1997), and this remains true (Marván et al., 2014; Moloney, 2010; Wister et al., 2013). It is often seen as something shameful and potentially pathological, and is construed as a ‘problem’ (Kissling, 2006, p. 1). This is not only true of menstruation. There is a long history of efforts to control and manage women’s bodies and to explain any distress they suffer in terms of their reproductive bodies rather than their lives (see Ussher, 2006 for discussion). While the nature of explanations has varied over time, currently hormones are often invoked as an explanation for the problems women face; this approach is often referred to as the ‘raging hormones hypothesis’ and has been heavily critiqued (see Ussher, 2006 for overview).

In view of the issues discussed above, it should be clear that menstrual cycle research is not ‘neutral’. Much of it is concerned with fluctuations in cognition, mood and/or behaviour across the cycle. Of course, fluctuation is normal and people accept that mood and performance fluctuate across the day. However, given the stereotypes and negative representations around menstruation and the wider history of discrimination against women, finding a fluctuation in, say, verbal memory premenstrually is charged with implications that a circadian fluctuation is not. This can be seen in the language used, even in some very recent papers, where researchers talk of ‘cognitive abnormalities’. Yet where menstrual-cycle-related changes are reported, they are typically mild fluctuations. Given the potential for research in this area to be used to support bias and discrimination, we have a particular duty to take a critical stance with respect to the research questions asked, the methods used and the interpretation of findings in this area.

Table 11.1 The hormonal and physiological events of the menstrual cycle illustrated using a standard 28-day cycle. Day 1 is the first day of menstrual bleeding. The table shows the cycle divided into two and five phases. The two-phase approach is commonly used now. In earlier research, three phases (menstrual, mid-cycle and premenstrual) were often used


There is an extensive literature examining the menstrual cycle from a psychological point of view. This body of research has two main strands: cognitive and skilled performance, and mood. There has long been a belief, largely unsupported, that women’s abilities are somehow impaired by or before menstruation. This is usually explained in terms of hormonal actions, and much of the earlier work focused on the premenstrual and menstrual phases of the cycle. However, in recent years the emphasis has shifted as researchers have become more concerned with the specific effects of gonadal hormones on certain aspects of cognition. Thus contemporary studies tend to focus on comparing cognitive performance at times of low and high estrogen, typically comparing follicular and luteal phases. A good deal of the work on mood is concerned either directly or indirectly with premenstrual syndrome (PMS), which is a very controversial concept. A discussion of this research is outside the scope of this chapter and interested readers are referred to Kissling (2006), Walker (1997) and Ussher (1989, 2006).


Walker (1997) identified three key traditions in psychological menstrual cycle research: mainstream, liberal feminist and postmodern. This description still holds.

The mainstream approach applies traditional positivistic research methods (experiments, quasi-experiments, correlational studies) to the study of the effect of the menstrual cycle on particular variables, such as memory or work rate. The liberal feminist approach is concerned with challenging negative assumptions around the menstrual cycle, such as the assumption that women are cognitively impaired premenstrually. Much of this research uses positivistic approaches to challenge traditional methods, assumptions and findings. Research from this tradition has been important in challenging biased methods and conclusions, and in facilitating greater methodological rigour (e.g. in questionnaire design).

The postmodern approach is concerned with understanding women’s experiences and exploring the discourses around menstruation. Much of this research is conducted from a feminist perspective, and qualitative methods of inquiry are used. Most of the research that will be considered in this chapter comes from mainstream and liberal feminist traditions.


Studying the psychology of the menstrual cycle presents many methodological challenges. Researchers cannot manipulate menstrual cycle phase, so they cannot randomly allocate women to an experimental condition. This means that studies examining some aspect of performance across different cycle phases are fundamentally correlational in nature. So while a researcher might want to look at the effect of the menstrual cycle on, say, mental rotation, he or she can only examine the correlation between mental rotation performance and menstrual cycle phase or hormonal profile. This means that if a difference is observed (or not), we cannot say definitively that it has been caused by the hormonal profile. Despite this, many researchers interpret their findings in terms of hormonal changes ‘causing’ or mediating an observed change in performance. Yet changes, if observed, may be the result of culturally mediated emotional changes, or other factors such as expectations. However, these possibilities are rarely considered.

The accurate designation of menstrual cycle phase is challenging. The most common, least invasive and least expensive method is simply to count the number of days from the last menstrual period. This relies on self-report, and evidence suggests considerable room for error (Cole et al., 2009). As discussed earlier, there is also considerable variability both between and within individuals in terms of both cycle length and cycle events. Given this, it is increasingly common to see hormone levels being measured directly, often from urine or saliva samples. These can be used to verify phase and also to correlate directly with measures of performance. Of course, the action of hormones in the central nervous system (CNS) is also determined by other factors, including the numbers and actions of receptors (Aloisi and Bonifazi, 2006). This is an important consideration where research assumes that gonadal hormones influence performance via central rather than local action.

While there is a lot of menstrual cycle research, it can be difficult to compare studies because of methodological differences. As in many areas, there is a very wide range of measures used, so even where studies are concerned with the same aspect of performance, they may measure it in different ways. The number and definition of menstrual phases vary. For example, one study might compare delayed recall performance pre- and post-ovulation, while another might examine performance at menses, mid-cycle and premenstrually, and yet another might track performance across five phases. Designs may be between-subjects (e.g. a group of women tested in the follicular phase compared with another group of different women tested in the luteal phase) or within-subjects (e.g. the same women tested twice, once in their follicular phase and once in their luteal). Within-subjects designs are preferable as they minimise individual differences; however, they have their own problems, such as attrition leading to bias in the sample. Unless all women are tested for the first time during the menses (and this could produce order effects), data collection will be spread over more than one menstrual cycle. This is problematic given that, as we’ve discussed earlier, cycles differ both between and within individual women.

Table 11.2 Methodological difficulties in menstrual cycle research


•  Difficult to establish causation – studies tend to be correlational

•  Problems accurately designating menstrual cycle phase

•  Definition of phase varies

•  Aggregating data across menstrual cycles and across women can be problematic

•  Sampling – only some cycles are studied

•  Problems with some measures used, particularly in the research on mood

Box 11.1  Size matters

Traditionally, psychologists were concerned with statistical significance. We would say that there was a difference or an effect only if the scores of different groups, or the same people at different times, were significantly different at the 0.05 level. Significance testing has many limitations. Even if a difference is significant in the statistical sense, it might not be significant in the everyday sense of the term, and some significant differences are trivial or unnoticeable in real life. Equally, effects that are not statistically significant may be important. Effect size refers to the size of an effect or difference. There are many measures of effect size, but one of the best known is Cohen’s d. This is used to summarise the difference between two experimental conditions. It is calculated by subtracting one mean from the other and dividing by the (pooled) standard deviation (SD). So, it expresses the difference between conditions in standard deviation units.

Cohen suggested the following conventions for interpreting d:

•  0.2 – Small effect size: the difference is around a fifth of the pooled SD.

•  0.5 – Medium effect size: the difference is around half of the pooled SD.

•  0.8 and above – Large effect size: the difference is almost as large as the pooled SD.
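The calculation described above can be sketched in a few lines of Python (the function name and sample data are our illustration, not taken from any study in the chapter):

```python
import math
from statistics import mean, variance

def cohens_d(group_a, group_b):
    """Cohen's d: the difference between two group means in pooled-SD units."""
    n_a, n_b = len(group_a), len(group_b)
    # The pooled variance weights each group's sample variance by its
    # degrees of freedom (n - 1), then divides by the total df.
    pooled_var = ((n_a - 1) * variance(group_a) +
                  (n_b - 1) * variance(group_b)) / (n_a + n_b - 2)
    return (mean(group_a) - mean(group_b)) / math.sqrt(pooled_var)

# Two small illustrative samples whose means differ by 2 score points:
d = cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
print(round(d, 2))  # well above 0.8, so a 'large' effect by Cohen's conventions
```

Note that the sign of d simply reflects which group's mean is subtracted from which; it is the magnitude that is interpreted against the conventions above.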

It is important to know or be able to estimate the size of effect we would expect in a research study, as this tells us how many participants we need. The power of a study refers to its ability to detect an effect, and sample size is one important element of this. Very large samples are needed to detect small effects. Many of the effects in menstrual cycle research are small to medium, so it is important that researchers consider whether the studies are powerful enough to detect the effects. Yet surprisingly, even in recent work, a discussion of power is often missing.
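The link between expected effect size and required sample size can be illustrated with the standard normal-approximation formula for a two-sided, two-group comparison, n per group ≈ 2(z_{α/2} + z_{β})²/d². This is only a sketch of the arithmetic, not a substitute for a proper power analysis; it slightly underestimates the n given by exact t-based calculations:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate n per group to detect effect size d in a two-sided,
    two-sample comparison, via 2 * (z_alpha/2 + z_beta)^2 / d^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = .80
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

for d in (0.2, 0.5, 0.8):  # Cohen's small, medium and large conventions
    print(d, n_per_group(d))
# Small effects need hundreds of participants per group;
# large effects need only a few dozen.
```

This makes concrete why underpowered menstrual cycle studies with small samples would routinely miss small-to-medium effects.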

When we talk of a sex difference in performance on sex-sensitive tasks, we are talking about an average difference. There is still a good deal of overlap between the sexes. The sex difference in height is one of the most obvious, with women being on average shorter than men. This difference is large, with an effect size of approximately d = 2 (Hines, 2007), yet you only need to look around you to see the overlap, even with a difference this big. When we are dealing with cognitive performance, the effects are generally small to medium, and the overlap is correspondingly larger. Even the largest effect sizes reported are much smaller than the difference in height (Hines, 2007), so it is important to take care in interpretation. For example, although women as a group have better average verbal memory performance than men, there is more variability between women than there is between women and men. Both sexes show a normal distribution in verbal memory performance, with some people performing well, some poorly and most about average, although this average is slightly higher for women. This means that we cannot make inferences such as ‘men don’t do well at verbal memory’ or ‘all women are good at this’.
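The degree of overlap implied by a given d can be quantified with the overlapping coefficient for two equal-variance normal distributions, 2Φ(−|d|/2); this particular measure is our illustration rather than one used in the chapter:

```python
from statistics import NormalDist

def overlap(d):
    """Overlapping coefficient for two normal distributions with equal SDs
    whose means differ by d standard deviations: 2 * Phi(-|d| / 2)."""
    return 2 * NormalDist().cdf(-abs(d) / 2)

# A height-sized difference (d ~ 2) versus a typical cognitive effect (d ~ 0.3):
print(round(overlap(2.0), 2))  # even this large difference leaves substantial overlap
print(round(overlap(0.3), 2))  # small effects leave the distributions almost coincident
```

Roughly a third of the two distributions still overlap at d = 2, and nearly nine-tenths at d = 0.3, which is why group-level sex differences license no inference about any individual.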

There are also individual differences in menstrual cycle experiences: for some women performance may be better or worse at particular cycle phases, and individual women might have negative experiences in one cycle but not another (see Walker, 1997, pp. 119–123). Other research also suggests considerable individual variability in the effects of the menstrual cycle (Eisner et al., 2004).

There are further issues around sample size and sampling. Historically, much of this work used fairly small sample sizes. As most of the effects would be small to medium in size, this means that many studies would not have been powerful enough to detect them (see Box 11.1 for more on effect size). The samples themselves may be unrepresentative, as women tend to be excluded from this research if they have irregular or very long/short menstrual cycles, so not all cycles are studied. Furthermore, much menstrual cycle research uses university students or clinical samples of women who report, or have been diagnosed with, menstrual problems or PMS. This clearly has implications for the generalisability of the findings.



Sex differences exist in various aspects of sensory functioning, suggesting that hormones do influence sensation (Baker, 1987), and much of this research assumes that any cyclic variations in sensory performance are due to either direct or indirect hormonal action.

In an early review of the research, Parlee (1983) found that the evidence indicated a peak in sensory sensitivity around ovulation. Both visual acuity and general visual sensitivity have been reported to be highest mid-cycle (Parlee, 1983). Menstrual cycle rhythms have also been reported in the duration of various visual phenomena such as the McCollough effect (Maguire and Byth, 1998), the spiral aftereffect (Das and Chattopadhyay, 1982), and the figural aftereffect (Satinder and Mastronardi, 1974). There is some evidence of effects on aspects of retinal function, but for some women only (Eisner et al., 2004).

A number of studies have found that women show better odour performance than men, and this seems to be linked to gonadal hormones. Derntl et al. (2012) investigated effects of sex, menstrual cycle phase and oral contraceptive use on the ability to identify and discriminate odours (odour performance). The odours used were everyday ones and included cinnamon, orange, garlic and coffee. The menstrual cycle was found to have subtle and complex effects on olfaction. Women performed better than men, in line with previous findings. Women were tested either first in the follicular phase and then in the luteal, or first in the luteal and then in the follicular. A menstrual cycle effect was found only for those tested first in the follicular phase, and they showed reduced sensitivity in the luteal phase. The authors noted a significant correlation between duration of oral contraceptive use and overall odour performance, and concluded that odour performance is influenced by gonadal hormones. However, it is important to note that the effect of cycle phase was found only for one group of women and thus may represent an order effect.

Menstrual cycle variations in taste and taste detection thresholds have also been reported. For example, Wright and Crow (1973) found menstrual cycle variations in sweet preferences. Following a glucose meal, sugar solutions are normally judged as less pleasant, but Wright and Crow found that this shift was slowest at ovulation. Frye and Demolar (1994) found that preference for salt was greater pre-ovulation.


Figure 11.9 A summary of the findings of research on the relationship between menstrual cycle phase and sensation, perception and cognitive performance.

Source: (a) FreeImages.com/Elini Kappa

Pain perception has long been studied across the menstrual cycle, but the evidence is conflicting. Some studies focus on acute experimentally induced pain, others on clinical pain and still others on chronic pain. In her 1983 review, Parlee found a trend towards decreased sensitivity to pain in the premenstrual phase relative to other phases. A meta-analysis by Riley et al. (1999) found pain thresholds to be lower post-ovulation, except in the case of electrical pain. However, other work (e.g. Klatzkin et al., 2010) has found no effect of cycle phase. Ahmed et al. (2012) compared pain post-operatively in a sample of sixty women undergoing elective hysterectomies. They found no evidence of an effect of menstrual cycle phase on pain perception. Overall the evidence regarding pain perception is inconsistent.

In conclusion, while the evidence is conflicting, it does seem to suggest that gonadal hormones may affect sensory function and that, where there are changes, they are generally in the direction of enhanced sensitivity mid-cycle.


A good deal of the early research assumed that women’s cognitive performance would be impaired premenstrually, and much of it was motivated by attempts to find evidence of this debilitation, that is, poorer performance around the time of a woman’s menstrual period (Richardson, 1992; Sommer, 1992). Richardson (1992) and others have argued that any cognitive variations could be the result of culturally mediated emotional changes rather than hormonal changes. A publication bias was recognised in this field (and in many other fields), as many of the studies showing no differences were simply not published (see Nicolson, 1992). Despite a huge amount of research, the picture is very inconsistent. This is usually attributed to the methodological differences and challenges, and publication bias means that studies demonstrating effects of menstrual cycle phase are probably over-represented in the literature.

Early reviews of the literature (e.g. Asso, 1983; Sommer, 1992; Richardson, 1992; Walker, 1997) concluded that there was no evidence of a premenstrual or menstrual decrement in cognitive performance. Asso (1987) reviewed studies that suggested that where there was variability it was in strategy, rather than overall performance, with a trend towards speed pre-ovulation and accuracy post-ovulation. For example, Ho et al. (1986) found that the strategy used for spatial information processing varied across the cycle, but actual performance remained constant.

More recently, Souza et al. (2012) reviewed twenty-seven studies that had included some form of psychometric assessment of neuropsychological function across the menstrual cycle. The assessments used in these studies included the Stroop test, the Wechsler Adult Intelligence Scale-Revised (WAIS-R), the Wechsler Memory Scale-Revised (WMS-R) and Verbal Fluency (FAS). Souza and colleagues reported a very inconsistent picture – even where studies reported menstrual cycle effects, these were often not replicated. Overall they concluded that the evidence showed a trend towards lower performance in the luteal phase, particularly in women who have PMS. However, it is clear that where fluctuations are observed they are small and mild; we are talking about small effects even for the women with PMS (p. 11).


As mentioned previously, much of the earlier work on the menstrual cycle and cognition was explicitly concerned with the effects of paramenstrum on performance. Much of it was based on assumptions of paramenstrual debilitation or was concerned with refuting these assumptions. While many of the researchers explained any observed changes in terms of the action of particular hormones, the focus of the research was not hormonal per se. However, since the late 1970s and early 1980s there has been another strand of research directly concerned with exploring the effects of the gonadal hormones on cognitive function. This work has examined the effects of these hormones in both men and women and is concerned with understanding more about the neurochemistry of cognition. Most recent research on cognition across the menstrual cycle is conducted from this perspective.

Reliable sex differences exist in some aspects of cognitive performance. For example, on average, women show a slight advantage in verbal ability and men a slight advantage in spatial ability. Of course, even where sex differences do occur, there is a great deal of overlap – the differences between any two women or any two men are greater than the average difference between the two sexes (see Box 11.1, p. 276).

Drawing on evidence from animal work, Elizabeth Hampson and Doreen Kimura suggested that it is these sexually differentiated tasks that may be influenced by levels of gonadal hormones, rather than the many aspects of cognitive performance that are ‘gender neutral’. They extensively investigated changes in cognitive performance at different stages of the menstrual cycle in order to investigate the effects of variations in levels of estrogen and progesterone (e.g. Hampson and Kimura, 1988; Kimura and Hampson, 1994). This research is very much within the ‘mainstream’ tradition: the menstrual cycle is not the focus of interest, hormone levels are; menstrual cycle phases are selected on the basis of their hormonal profiles. They used only tests that show reliable (though small) average sex differences, arguing that we would not expect sex-neutral cognitive abilities to be influenced by sex hormones.

Box 11.2  Questions and answers

Questionnaires used to measure mood and perceptions of performance across the menstrual cycle tend to be concerned with negative states and effects. For example, of the forty-eight ‘symptoms’ in the Moos Menstrual Distress Questionnaire (MDQ; Moos, 1968), only four are positive. The Menstrual Joy Questionnaire (Delaney et al., 1987) was developed as a feminist critique of these measures and showed that when given the option, women will endorse positive options too.

The measures themselves can prompt particular responses. Chrisler et al. (1994) found that the title of the Menstrual Joy Questionnaire primed positive reporting of menstrual symptoms. Aubeeluck and Maguire (2002) replicated the experiment, removing the questionnaire titles, and found that the questionnaire items alone also produced positive priming.

Now try the question below: what did you do, and how did you feel, two weeks ago today?


It’s probably difficult for you to answer this unless there is something very memorable or personally significant about the events of two weeks ago.

Retrospective measures ask participants to rate states or behaviours experienced in the past, but of course this is very difficult as most people don’t remember. A lot of earlier work on subjective experiences of the menstrual cycle was retrospective. For example, the MDQ asked women to rate their memory of symptoms experienced during their last menstrual period, the week before it and the rest of their last cycle. In a 1974 paper, Mary Brown Parlee asked men and women to complete the MDQ based on what they thought women experienced at different stages of the cycle. The close correlations between men’s and women’s responses led Parlee to conclude that what was being measured was not direct experience, but rather stereotypes about menstruation. Other evidence shows that women tend to report more distress and premenstrual symptoms in retrospective rather than prospective questionnaires (Ussher, 1992; Asso, 1983), suggesting that methods that highlight menstruation tend to exaggerate cyclic changes in mood and behaviour (Englander-Golden et al., 1978). This work prompted a shift from retrospective to prospective approaches. So rather than asking women to complete a questionnaire with respect to their last menstrual period, women might be asked to complete the questionnaire daily or weekly and then the responses are matched to menstrual cycle phase when the data collection is complete.

Hampson and Kimura tested women at two cycle phases: midluteal, when estrogen and progesterone levels are high, and the late menstrual phase, when levels of both are low. They found that manual dexterity (female-advantage task) was better midluteally, while performance on the rod and frame task (male advantage) was worse (Hampson and Kimura, 1988). Other studies supported these findings. Hampson (1990) reported that verbal articulation and fine motor performance (female advantage) were best in the luteal phase, while performance on spatial tasks (male advantage) was best during the menstrual phase. In order to separate the effects of estrogen and progesterone, they conducted further studies (see Kimura and Hampson, 1994) comparing performance shortly before ovulation (high estrogen, no progesterone) and during the menstrual phase (very low estrogen and progesterone). They again found that performance on female-advantage tasks was better pre-ovulation and performance on male-advantage tasks was worse. Thus high levels of estrogen improved performance on female-advantage tasks, but impaired performance on male-advantage tasks. Other work examined cognitive ability in post-menopausal women receiving estrogen therapy (see Kimura and Hampson, 1994). They found that motor and articulatory abilities were better when the women were receiving the therapy, though there were no differences on some perceptual tasks.

Table 11.3 A list of the cognitive tasks that show small, but reliable differences between the sexes

Female-advantage cognitive tasks:

•  Ideational fluency

•  Verbal fluency

•  Verbal memory

•  Perceptual speed

•  Mathematical calculation

•  Fine motor coordination

Male-advantage cognitive tasks:

•  Mental rotation

•  Perception of the vertical and horizontal

•  Perceptual restructuring

•  Mathematical reasoning

•  Target-directed motor performance

Source: Kimura (1996)

The research was also extended to men. Seasonal variations in testosterone have been reported in men, with levels tending to be higher in autumn than in spring (in the northern hemisphere). Men’s spatial performance was better in spring than in autumn. While this may seem counterintuitive, it seems that there are optimum levels of testosterone for spatial ability, and that these are higher than those present in a typical woman but lower than those present in a typical man (see Kimura and Hampson, 1994). There is empirical support for these findings (e.g. Hausmann et al., 2000), although Epting and Overman (1998) failed to find a menstrual rhythm in sex-sensitive tasks.


Figure 11.10 When estrogen levels are high, women perform better on female-advantage tasks and worse on male-advantage tasks. The position is reversed when estrogen levels are low.

Source: Hampson and Kimura (1988); Kimura and Hampson (1994).


Mental rotation tasks show a reliable male advantage. For example, Lippa et al. (2010) reported that the average performance of men on these tasks consistently exceeded the average performance of women in a sample of 200,000 from fifty-three different nations.

The rod and frame test is a measure of the ability to position a rod vertically in the absence of a vertical reference point. Usually the rod is positioned within a tilted frame and the observer is asked to move it to a vertical position. Abdul Razzak et al. (2015) tested men, women in the follicular phase (low gonadal hormones) and women in the luteal phase (high gonadal hormones) on the rod and frame test. They found that men performed better than midluteal women but not better than women in the follicular phase, suggesting that the male advantage may depend on levels of female gonadal hormones. While this kind of research is concerned with activational effects of these hormones (i.e. the direct effects of hormones on the nervous system), gonadal hormones also have organisational effects, as they organise or shape aspects of the nervous system, usually early in life. Puts et al. (2010) examined the relationship between salivary testosterone and mental rotation, a male-advantage task. They found no relationship between testosterone levels and performance in either men or women. They concluded that the effects of testosterone on mental rotation are probably organisational (i.e. rooted in the effects of the androgens on the organisation of the nervous system early in life).


Earlier work suggested there was no effect of menstrual cycle phase on memory (e.g. Richardson, 1992). Hartley et al. (1987) found no differences in immediate and delayed recall during the premenstrual, menstrual and mid-cycle phases, although speed of verbal reasoning on more complex sentences was slower mid-cycle. Hatta and Nagaya (2009) tested thirty women on the Wechsler Memory Scale during menses and the luteal phase and found no differences in performance. However, they found that performance on the Stroop test was significantly better in the menstrual phase, when hormone levels are low.

As mentioned in our earlier discussion, we might expect only to see effects on those aspects of memory that show reliable sex differences.

There is evidence indicating that estrogen enhances verbal memory, a female-advantage task. Mordecai et al. (2008) found no evidence of an effect of menstrual cycle phase on verbal memory, but they did find evidence that verbal memory was better in oral contraceptive users during the active phase (i.e. when they were taking the pill compared with the break). In naturally cycling women the estrogen is endogenous (i.e. produced by their ovaries). In contrast, the estrogen is exogenous (i.e. externally produced) when consumed in pill form. The authors speculated that there may be differences in the effects of endogenous and exogenous hormones, and this is an area that merits further investigation.

Cahill and colleagues have conducted research on sex differences in, and effects of sex hormones on, emotional memory. In one study (Nielsen et al., 2013), men and women heard brief narrated stories, and for those in the experimental condition these stories included emotionally arousing components. The researchers were interested in the recall of the central information (the gist or storyline) versus peripheral information (specific details). Recall of both gist and detail was better for the emotional story (experimental) condition. For men, recall of gist was significantly higher. For women, recall was related to menstrual cycle phase, with women in the luteal phase showing greater recall of detail relative to the control. While memory for gist was not greater overall, it was for ‘phase 2’ of the story, which was the most emotional part. There were no differences for women in the follicular phase in either condition. These findings suggest that sex hormones may have an effect on memory; however, the authors caution that much more research is needed.

Overall there is no consistent pattern in cognitive performance across the menstrual cycle, and there is certainly no evidence that cognitive performance is impaired premenstrually or menstrually. Evidence does suggest that gonadal hormones affect performance on some sex-sensitive tasks in both men and women, but more research is needed to clarify these effects.


Given the stereotype that women’s work and academic performance is negatively affected by menstruation, it is not surprising that many researchers have attempted to investigate this. Yet as far back as 1928, a piece in the medical journal The Lancet applauded the demise of ‘the Victorian attitude’ in the face of overwhelming evidence against menstrual impairment!

Thanks largely to the enthusiasm and patient researches of Dr. Alice Clow, Prof. Winifred Cullis, and other women investigators, the Victorian attitude to menstruation has gone for good. Everybody knows now that women are not necessarily “unwell” once a month, and the invasion of every kind of industry by women workers has in itself been a massive experiment proving that the period does not mean any noticeable degree of invalidism for the great majority.

(p. 712)

It concluded, ‘The point for employers and doctors to remember is that an appreciable lowering of efficiency is to be regarded not as normal and inevitable, but as pathological and calling for special consideration’ (p. 712).

The obituary of the ‘Victorian attitude’ was premature, as the assumption that menstruation was a problem in the workplace continued. Even in 2008, Konishi and colleagues were motivated by concern around premenstrual errors to examine working memory across the cycle, and we will return to that study. Much of the research on industrial work was conducted before 1940. A good deal of recent work is concerned with the relationship between particular occupations or work patterns, shift-work in particular, and menstrual symptoms, while other work is explicitly concerned with PMS in the workplace, but that is outside the scope of this chapter. Nonetheless, there is no evidence that the work performance of women suffers premenstrually or during menstruation. Farris (1956) analysed the output of pieceworkers (paid per unit of work completed) and found that output was greatest mid-cycle and premenstrually. Redgrove (1971) similarly found that work performance was best premenstrually and menstrually in a sample of laundry workers, punchcard operators and secretaries. Black and Koulis-Chitwood (1990) examined typing performance across the menstrual cycle and found no changes in either rate or number of errors made. Konishi et al. (2008) examined working memory in a sample of twelve student nurses ‘for the purpose of managing women’s occupational health and safety’ (p. 254). Participants completed a dual task whereby they were asked to memorise a visual display of medication (primary task) while simultaneously matching medication names to prescriptions (secondary task). After 10 seconds they were asked to recall the originally memorised medication. Performance on the primary task was significantly better premenstrually, while there were no significant variations in performance on the secondary task. Overall, the evidence is inconsistent and fluctuations, where noted, are typically small, but there is no evidence that work performance is worse premenstrually.

Earlier work suggested that students of both sexes believe women’s academic performance can be disrupted premenstrually and menstrually (Richardson, 1989; Walker, 1992). However, there is little evidence these beliefs are justified. While Dalton (1960, 1968) reported that schoolgirls’ academic performance was poorer premenstrually and menstrually, these findings were not statistically analysed and are generally discounted. Work with university students has failed to demonstrate an effect of menstrual cycle phase on exam performance (e.g. Richardson, 1989).


While there is little evidence that women’s ability to think and work is impaired during the paramenstrum, this belief remains surprisingly prevalent. Expectations are likely to be important mediators of performance and, as discussed earlier, expectations of poor performance may lead women to make efforts to compensate. Ruble (1977) conducted a classic experiment to examine the effect of menstrual expectations on reporting of symptoms.

Student volunteers participated in the experiment about a week before their period was due. They were told that a new method of predicting menstruation onset had been developed, involving the use of an electroencephalogram (EEG). Participants were hooked up to the EEG, but it was not actually run. One group of women was told that their periods were due in a couple of days, another group was told their periods were due in a week to 10 days, and a third group was given no information. Those who were told that their periods were due in a couple of days reported significantly more premenstrual symptoms than those in the other groups. This study clearly demonstrated the importance of menstrual cycle beliefs in mediating reports and behaviour.

However, beliefs can also directly impact on cognitive performance. Stereotype threat refers to the phenomenon whereby if an individual is made aware of a negative stereotype about a group to which he or she belongs, their cognitive performance is impaired. Studies (e.g. Steele and Aronson, 1995) have examined the effects of highlighting gender and racial stereotypes about cognitive performance and found that this impaired performance for those in the stigmatised group. Given the negative stereotypes about menstruation and cognition, it is surprising that there is not more research on this. Wister et al. (2013) explored this with a sample of ninety-two women students. Using a between-groups design, the women were allocated to one of four conditions: two of these included a menstrual prime (stereotype threat), either with or without a positive prime; the remaining conditions were a positive prime only and a control with neither. Participants were administered the Stroop test, and those in the menstrual threat conditions performed more poorly. For those who had the stereotype threat only, performance was poorer the closer they were to their own menstrual period, while the opposite was true for those who had the positive prime only. Interestingly, there was no evidence that the positive prime counteracted the negative effect of the stereotype threat; rather it tended to increase it. The authors conclude:


Figure 11.11 Ruble (1977) manipulated women’s beliefs about their menstrual phase. All the women were due to menstruate 6–7 days after the study (based on menstrual history taken prior to study). They were allocated to one of three groups. One was told that they were premenstrual, another that they were inter-menstrual and the third was given no information. The ‘premenstrual’ group reported more premenstrual symptoms, particularly water retention, change in eating habits and pain. The ‘inter-menstrual’ group reported the fewest symptoms and the ‘no information’ group reported intermediate symptoms.

research is needed to document the extent to which widely held negative views of menstruation stigmatize girls and women when menstruation is made salient, and most importantly, the specific assumption that menstruation negatively influences girls’ and women’s thinking. This assumption has important implications not only for girls’ and women’s psychological but also economic and political well-being. Girls and women who believe that they are less cognitively able because of menstruation may be all too willing to accept diminished status in many arenas.

(p. 28)


Menstrual rhythms have been observed in aspects of sensation and perception, with a trend towards increased sensitivity mid-cycle. There is also evidence that some aspects of cognitive performance may be affected by gonadal hormones in both sexes, but there is no evidence that women are cognitively impaired premenstrually or menstrually. This research is concerned with the impact of hormones on cognition, but expectation and other social factors are important and merit more consideration. Further, while these findings are important in terms of understanding the neurobiology of cognition, it is not clear how significant these changes are in everyday life.


Circadian rhythms

•  The circadian rhythm organises physiological and behavioural activity.

•  ‘Zeitgebers’ such as light and social activity entrain the endogenous circadian rhythm to a 24-hour cycle.

•  Performance is typically impaired upon awakening, and also varies throughout the day, with different tasks associated with different aspects of the circadian system.

•  When circadian rhythms are disrupted (e.g. in jet lag and shift-work), cognitive performance is typically impaired.

•  Advancing the body clock (e.g. in eastward travel or backward shift rotation) tends to be more disruptive than delaying it.

•  Various countermeasures, such as napping, and drugs such as caffeine and modafinil, may reduce the symptoms of circadian disruption, though there is as yet no known cure.

Menstrual cycle

•  There are negative stereotypes around menstruation and the menstrual cycle and assumptions of paramenstrual impairment remain prevalent. Given this context, we must be mindful of the potential impact of research in this area on women in society.

•  While the evidence is not entirely consistent, there seems to be a trend towards greater sensory sensitivity mid-cycle.

•  There is no evidence of impaired cognitive or skilled performance premenstrually.

•  There is evidence that performance on sex-sensitive tasks (those that show an average male or female advantage) is influenced by gonadal hormones in both sexes.

•  Beliefs, expectations and negative stereotypes can affect performance, and more research is needed on this.

•  It is important to note that fluctuations in performance are to be expected in both men and women, and observed changes tend to be small.


•  Schmidt, C., Collette, F., Cajochen, C. and Peigneux, P. (2007). A time to think: circadian rhythms in human cognition. Cognitive Neuropsychology, 24(7), 755–789.

•  Waterhouse, J. (2010). Circadian rhythms and cognition. In G. A. Kerkhof and H. Van Dongen (eds), Human sleep and cognition: Basic research (vol. 185). Amsterdam: Elsevier.