We are smart enough to realize we are stupid, and stupid enough to make the problem of becoming smarter hard.
Most transhumanists realize that in order to become truly trans- and posthuman we will need to amplify our cognition and remove the limits of our brains. This talk will look at the methods of amplifying cognition we have today and in the near future, not the more dramatic possibilities of a posthuman future. In any case, as you will see we can do a surprising amount with just present technology.
Memory problems are very common in daily life (Eldridge, Sellen & Bekerian 1992). While most memory limitations are inconveniences that make us irritated and slightly less efficient, our limited working memory puts an upper limit on the complexity of our thinking, our limited learning rate slows our adaptation, and our unreliable long-term memory not only forgets important facts and details but also biases our thinking by being overly selective (Gilovich 1991).
As the above paragraph suggests, limitations of memory also act as limitations of intelligence; our abilities to recall and recognize are major bottlenecks in cognition. Other important limitations are our mental speed, our ability to judge the consequences of our actions correctly, and our ability to attend for long periods of time, to name just a few.
These limitations all have biological explanations, and were not relevant to our hunter-gatherer ancestors. In fact, many of them were advantageous: there was no need for a faster, more energy-consuming brain when food was scarce, or for a perfect objective memory when there was little need to remember details of the past. However, modern society makes other demands, and as we prepare to autoevolve towards posthumanity these limits become more and more troublesome. Let's look at what we can do about them.
My personal position is that speaking of intelligence as a thing is a semantic mistake. Intelligence is more like an adverb or an adjective than a noun: we can do things intelligently, we can come up with intelligent solutions. It describes our behavior, not some mysterious internal force which makes it intelligent. Intelligent actions have a purpose which they achieve well. What we should look for are the mechanisms that make our actions and thoughts efficient.
The most well-known division is between Short-Term Memory (STM) and Long Term Memory (LTM); information enters STM where it is stored for a comparatively short period of time, and then is either forgotten or moved into LTM, a process called memory consolidation.
Short term memory acts as our working memory, our current context. It has a fairly small capacity, the famous "7 plus or minus 2" chunks of information (Miller 1956), and if we are distracted we easily forget what we were doing.
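Chunking is the standard way around this limit: by grouping items into larger units we store fewer chunks, not fewer facts. A minimal sketch in Python (the digit string and chunk size are purely illustrative):

```python
def chunk(digits, size=3):
    """Group a digit string into chunks, reducing the number of items
    that must be held in working memory (12 digits -> 4 chunks)."""
    return [digits[i:i + size] for i in range(0, len(digits), size)]

# Twelve digits exceed 7 +/- 2 items; four chunks fit comfortably.
print(chunk("314159265358"))  # ['314', '159', '265', '358']
```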
The border between STM and LTM is a bit unclear; while some memories (like a name at a cocktail party) can be forgotten in seconds, others remain for minutes, hours or even days before being forgotten. The importance and emotional content of a memory will determine how well it will be consolidated; shocking, unusual or personal memories will be stored quite clearly, while the tedious details of daily life just fade away. Studies in amnesia patients whose memory consolidation process has been damaged hint that it may take days to weeks until a memory is permanently stored.
The Medial Temporal Lobe (MTL) memory system has been a favorite research subject ever since it was discovered that it is deeply involved in memory consolidation (Squire & Zola-Morgan 1991). The MTL consists of several brain structures, most importantly the hippocampus, a tubular fold of cortex on the inner surface of the temporal lobe. People with damage to the hippocampus have normal short term memory, intact memories of their lives before the accident or illness, but lack the ability to learn new episodic or semantic information.
Exactly how the MTL works is at present controversial, but one theory which seems to be gaining support is the complementary encoding theory of McClelland, McNaughton and O'Reilly (1995).
The basic idea is this: in order to learn, the brain needs to be plastic, but if it is too plastic new memories will also overwrite old memories and we will forget what we have learned; this is the stability-plasticity dilemma which crops up in a lot of machine learning. But we can remember single experiences which have not been repeated, and at the same time retain memories for decades. This could be explained if there was an intermediary memory system with high plasticity (fast learning/forgetting) which stored recent experiences, and then played them back to the less plastic cortex several times (possibly during sleep or other mental activity), gradually storing them in permanent memory. This could be the function of the MTL, which we know is more plastic than the cortex, and it would explain why damage to the MTL prevents patients from learning new information. The MTL acts as the sketchbook of the brain. Emotional content and relevancy could affect how strongly a memory is stored in the MTL, explaining why some memories are clearer than others and how certain memory enhancing drugs work.
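The two-store idea above can be caricatured in a few lines of code: a plastic buffer records each experience once, then replays it many times into a slow learner. Everything here (class names, learning rate, replay count) is an invented toy, not a neural model:

```python
import random

class SlowCortex:
    """Slow learner: a small learning rate means a single exposure
    barely registers, but old memories are hard to overwrite."""
    def __init__(self, lr=0.05):
        self.lr = lr
        self.memory = {}  # key -> learned strength

    def learn(self, key, value):
        old = self.memory.get(key, 0.0)
        self.memory[key] = old + self.lr * (value - old)

class FastMTL:
    """Fast intermediary store: records each experience once, then
    replays it repeatedly (e.g. 'during sleep') into the cortex."""
    def __init__(self, cortex, replays=60):
        self.cortex = cortex
        self.buffer = []
        self.replays = replays

    def experience(self, key, value):
        self.buffer.append((key, value))

    def sleep(self):
        # Interleaved replay: shuffling mixes new memories together
        # so none of them dominates and overwrites the others.
        episodes = self.buffer * self.replays
        random.shuffle(episodes)
        for key, value in episodes:
            self.cortex.learn(key, value)
        self.buffer.clear()  # the fast store forgets after consolidation

cortex = SlowCortex()
mtl = FastMTL(cortex)
mtl.experience("cafe", 1.0)   # a single, unrepeated experience
mtl.sleep()
print(round(cortex.memory["cafe"], 2))  # close to 1.0 after replay
```

A single call to `cortex.learn` would have left the trace at 0.05; sixty replays drive it near its full strength, which is the whole point of the intermediary store.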
Long term memory is permanent; while STM seems to rely on neural activity and chemical changes at the synapses, LTM appears to be based on actual changes in cell morphology in the cortex. This means that it is not easily wiped, and since it is distributed, even fairly large lesions will not remove specific memories.
LTM can be divided into several subsystems. Three of the most well-defined and useful systems are episodic memory, semantic memory and procedural memory.
An important and at present poorly understood subsystem of episodic memory is prospective memory: the ability to remember things in the future, like what to do tomorrow or which people will come to see you on Tuesday (Sellen et al 1996). While it appears that we usually forget more about the past than about the future, forgetting to do things is usually the bigger problem (Eldridge et al 1992).
The borders are blurred; inside the brain the difference between hardware and software is very uncertain ("wetware"), some cognitive tools blur the line between internal and external, and many methods combine elements from several quadrants in the diagram. There is no real need for fine distinctions; this diagram is more intended to provide a convenient structure for this talk than to represent real divisions.
The first tools were based on the fact that we can reduce our cognitive load by constraining the material we are dealing with; if there is less freedom, there is less risk of change, and the structure helps us remember by narrowing down the possibilities if we are just recalling part of the material. A typical example is rhyme and meter in poetry, which create a structure which both is resistant to accidental change (the error will be easily heard), and the rhymes provide support for recalling the next lines.
The next form of memory support was to divide materials into more manageable chunks and organizing them. Lists appeared, followed by the more and more elaborate taxonomies developed by philosophers and scientists. Since then, more and more complex and streamlined forms of organizing information have appeared. Formal systems of thought have been developed in order to support efficient thinking or knowledge discovery (logic, mathematics, the scientific method). Information organization systems have often been implemented using external hardware, especially paper: books, binders, standardized layouts.
Another development was the deliberate use of mnemonics and other devices to aid memory (see Patten 1990 for a readable overview). One of the first well-documented and well-known methods was the Ars Memoriam, also known as the Walk, which was widely used by students and scholars in the Middle Ages and Renaissance. The basic method was to explore a building or location carefully, learning its layout, the contents of the rooms, the smells etc. until it was familiar. Once this was done, a list of material (for example a book) could be "stored" in the different rooms by associating each piece of material with objects and people in each room. The user mentally traversed the building, storing the entire text along his way. To recall it, it was only necessary to remember the path again; recalling each room improved recall of the material.
The method was based on the simple observation that we are good at remembering things inside a context, so given a stable, easy-to-remember context, information could be linked to it and then easily retrieved. This is the basis for many of the memory techniques used today. In addition, our memory for spatial location is very good; the hippocampus contains "place cells" which increase their activity when we stand in certain places or contexts. This may also be one reason graphical user interfaces are so popular: they contain a spatial context which supports recall of where we put our files (it is usually much easier to find a certain file by opening directory icons than by looking in a file selector).
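The Walk reduces, in essence, to binding items to a fixed route. A sketch of that skeleton in Python (the building, route and speech material are invented examples; the mental imagery that makes the method work is of course not captured here):

```python
# Loci of a familiar building, in walking order (hypothetical example).
loci = ["front door", "hallway mirror", "kitchen table", "staircase", "study"]

def memorize(items, loci):
    """Associate each item with the next location along the walk."""
    if len(items) > len(loci):
        raise ValueError("not enough loci for the material")
    return dict(zip(loci, items))

def recall(palace, loci):
    """Walk the route again; each location cues its item, in order."""
    return [palace[place] for place in loci if place in palace]

speech_points = ["greeting", "the problem", "our proposal", "costs"]
palace = memorize(speech_points, loci)
print(recall(palace, loci))  # ['greeting', 'the problem', 'our proposal', 'costs']
```

Note that order comes for free: the route, not the material, carries the sequence, which is why partial recall of the building still cues the remaining items in the right order.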
Mnemonic methods are the most noticeable form of internal software for memory enhancement; using fairly simple tricks ordinary people can easily memorize 20-digit numbers, foreign words, chemical formulas etc. Practically all methods work by deliberately creating associations that are interesting, surprising, emotional or involve the user, aspects which are known to improve memory encoding. The efficiency for certain forms of data is impressive, often one or two orders of magnitude above normal memory. The main drawback is that the methods are directed towards memorizing data rather than learning information, they do not provide understanding in themselves.
Study techniques are directed towards improving understanding. Instead of concentrating on learning individual facts, they concentrate on learning how the facts fit together, creating contexts for knowledge. Most methods are based on making the student analyze and think about the material, which is known to improve memory consolidation.
Beside the obvious uses to store information and perform calculations, computers can also supplement our skills. One example is symbolic math programs, able to amplify our mathematical skills by providing perfect storage and recall of formulas, the ability to do calculations with no risk of making slips, and graphical visualization of the results. Similarly, decision support tools attempt to help us make rational decisions in uncertain situations (Walter 1997).
Software agents are programs that act more or less independently of us, performing services ranging from checking and sorting mail to seeking out websites we might want to visit. While the field is currently hyped beyond belief it contains several promising technologies, and is often suggested as a solution to the interoperability problem. Some applications are information discovery and filtering, support for collaborative work, and extended recall and recognition.
An example I actually use is the Remembrance Agent (Rhodes & Starner 1996). It consists of a process which runs continuously and shows suggestions for documents related to what I am currently writing in a small field in my editor. It looks at the words before the cursor, and compares it to a database of my mail, web pages and bibliographic files; by a keystroke I can bring up a suggested file if it seems interesting. When I log out, it updates the database with today's changes.
The Remembrance Agent acts as an extended associative memory. We are constantly reminded of things, but most of these reminders are weak and uncertain, and even if we do remember having read something useful about a certain subject, it is often hard to remember where we read it. The RA combines the human, intuitive associative memory model with the efficient database memory of a computer, making it possible to quickly get exact references.
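The core of such a system is a ranking of stored documents against the words around the cursor. A crude sketch of that idea (this is my own toy weighting, not the actual RA algorithm, and the document names and contents are invented):

```python
import re
from collections import Counter

def words(text):
    return re.findall(r"[a-z']+", text.lower())

def rank_documents(current_text, database):
    """Score each stored document by word overlap with the text around
    the cursor, weighting words that occur in few documents more
    heavily (a crude TF-IDF-style weighting)."""
    doc_words = {name: Counter(words(body)) for name, body in database.items()}
    df = Counter()  # document frequency: how many documents contain each word
    for counts in doc_words.values():
        df.update(set(counts))
    query = Counter(words(current_text))
    scores = {
        name: sum(query[w] * counts[w] / df[w] for w in query if w in counts)
        for name, counts in doc_words.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

database = {
    "mail/travel": "flight booking hotel conference amsterdam",
    "notes/memory": "hippocampus consolidation long term memory cortex",
    "web/agents": "software agents filtering mail information discovery",
}
suggestions = rank_documents("memory consolidation in the cortex", database)
print(suggestions[0])  # notes/memory
```

The real RA adds the machinery that makes this usable: continuous re-ranking as you type, nightly re-indexing of mail and files, and a keystroke to open the suggested document.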
Beside these obvious uses, the RA also suggests other interesting possibilities. The RA, combined with a list of abstracts, can suggest references to a text; combined with a large bibliographic database like Medline it could assist the writing of scientific articles and suggest similar works. Document databases can be shared, so that the gathered knowledge of one user can be used by others (although some sociological problems of privacy and authorship remain).
The RA is not just a memory aid, it can actually act as a (limited) cognition amplifier. It is not hard to imagine further improvements and applications. The same goes for most of the other cognitive software I have mentioned, and once we can get it all to work together with ourselves as a whole, we will have an extremely powerful cognition amplifying tool.
However, paper is rather passive and mainly suited for storing information, not processing it. That is why computers are needed.
I have already discussed some cognitive software. However, the hardware it is implemented in limits it. This is not just limitations of speed and memory (those can be circumvented), but limitations in design. Today's computers are fairly static objects, even the PDAs - discrete things that you have to turn to and concentrate on in order to use. In the near future that will change.
The remembrance agent was developed with an eye on wearable computing, computers that are an integral part of the clothing of the user. Some proposed applications beside remembrance agents are augmented reality (Starner et al.), desktop applications such as note-taking, email, web browsing and symbolic math, personal safety, collective intelligence through collaborative computing and affective computing (Picard 1995), as well as tool functions similar to a cellular phone, personal music system, dictating machine, pager and camcorder (Mann 1997, Starner 1995).
This form of intimate computing would enable us to use cognition amplifying software in almost any situation. It would be a step away from the "computer as a tool" metaphor to the "computer as a part of ourselves" metaphor. Since it would be present almost always, both we and the system would adapt to each other through a process of customization and learning, becoming more and more a composite being.
In fact, wearable computers are not far from the ideas of cyborgs based on brain-computer interfaces, although they only use currently available interface technologies such as head-up displays, chord keyboards and datagloves (Thomas 1995). They might provide a testing ground for user interfaces which will later be incorporated into bionic technology, or even a serious competitor: after all, why undergo complex and dangerous neurosurgery followed by lengthy training when you can just buy a good wearable?
Of course, not everyone thinks the ideal computer should be something you wear. Another approach is ubiquitous computing, where instead of using one computer tightly integrated with the user, the environment is filled with small computers, ideally as invisible and seamless as possible (Weiser 1991, 1994). The computers are intended to be cheap, low-power, mobile, connected through wireless networks and just about anywhere. Instead of putting the cognition amplification into (or around) the user, ubiquitous computing puts it in the environment. For example, a small device accompanying the users could record the data from their lives (by communicating with other smart devices), indexing it by context in a natural way and amplifying the episodic memory of the users (Lamming & Flynn 1994).
Personally I think both wearable and ubiquitous systems complement each other nicely. I have little doubt that many would prefer one over the other for various reasons (privacy, image, technophilia, corporate decisions etc), but taken together they could form an extremely powerful cognitive infrastructure. While the ubiquitous computers make the world smart, the smart clothing allows the wearer to make the most of it, creating a next to seamless transition between self and world.
As an example, consider the cut and paste function found in practically all modern programs. What if it could be extended into the physical world? When I see some interesting text in a book, I might want to put it into my digital clipboard for future use (relying on my remembrance agent to find it again once it is needed), or dial a phone number written on a piece of paper. This could perhaps be implemented using a small camera in smart clothing, scanning the text, running OCR on it and then sending it to the right program. Ubiquitous computing moves information from one smart object to another; with a wearable computer I might move information from "dumb objects" into the world of smart objects (or vice versa, if I have a printer handy) just by looking and giving a simple command.
Cut and paste is actually a quite powerful memory amplification tool. The computer acts as an extended working memory; if we can create the right hardware infrastructure we can put more and more of the less interesting details of memory and cognition into our surroundings (and clothing) rather than ourselves.
Smart drugs, or nootropic drugs as I prefer to call them, are drugs intended to improve memory or other aspects of cognition.
I have some good news and some bad news about nootropics:
Still, there are experimental results which suggest that there are drugs which improve memory. Lashley, as early as 1917, noted that caffeine and strychnine accelerated learning in rats. There are many studies showing that different drugs can improve the learning or recall process in rats, monkeys and humans.
The nootropics that have been studied can be divided into several different categories:
At present most studies have been animal studies rather than human studies, and most of the human studies have focused on treating dementia or aging. Still, there exist positive results for healthy young humans (for example Hindmarch 1986, Dimond & Brouwers 1976), and given that mammalian memory systems appear to be quite similar, there are good reasons to believe memory-enhancing substances work in normal humans. There is also reason to believe that some aspects of brain aging can be ameliorated.
There are fewer substances known to improve other aspects of cognitive function. Intelligence and creativity are hard to measure, and so far there are no data showing that any nootropics affect them; there might be whole classes of chemicals we have not yet found that influence higher-level cognition, although it is best to remain cautious. There are, however, drugs that improve reaction time and attention, the most well-known being nicotine (Levin 1992b, Warburton 1992) and caffeine (Zwyghuizen-Doorenbos et al 1990, Mitchell & Redman 1992), the stimulants (of course) and acetyl-L-carnitine (Lino et al. 1992).
That was the good news. Now time for the bad news.
First, many of the above drugs have plenty of drawbacks. I think I do not need to explain why strychnine, picrotoxin and amphetamine are less desirable cognitive enhancers.
Studies of the piracetam family have produced contradictory results, where some preclinical studies showed an improvement in the condition of Alzheimer patients and others showed no improvement; the most likely explanation is that the effect is highly individual, possibly linked to the amount of steroids in the bloodstream (Mondadori 1994, 1996). In the same way caffeine appears to improve recognition and recall differently depending on internal arousal (Gupta 1993) and the time of day (Mitchell & Redman 1992). Nootropics need to be tuned to the neurochemistries of their users.
Next, it is well known among researchers that the timing of the drug is important. In many cases the drug has no effect if it is given before or during the learning experience, but only after a certain time. Other drugs only work when given before the experience (Sansone et al. 1993, Garofalo 1996).
In fact, some drugs may even be given before birth! It has been found that if the diet of pregnant rats is supplemented with choline, their offspring will exhibit enhanced spatial memory (Meck, Smith & Williams 1987a, 1987b). This appears to be due to changes in receptor density in the brain; whether it would be a good idea to try this in humans is an open question (it might be interesting to compare the children of mothers eating choline-rich and choline-poor diets).
Then there is the problem of dose response: high doses of these drugs do not improve performance, they decrease it. The memory improvement typically follows an "inverted U-curve", with an optimal dose somewhere in the middle.
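The inverted U can be caricatured with a simple quadratic. The function, the optimum and the numbers below are purely illustrative, not pharmacological data:

```python
def memory_improvement(dose, optimum=10.0, peak=0.25):
    """Toy inverted-U dose-response curve: improvement peaks at the
    optimal dose and falls off on either side, eventually turning
    into impairment. All units and constants are illustrative."""
    return peak * (1 - ((dose - optimum) / optimum) ** 2)

for dose in (0, 5, 10, 15, 20, 30):
    print(f"dose {dose:>2}: {memory_improvement(dose):+.3f}")
```

Running this shows the practical problem: the effect at half or double the optimal dose is markedly weaker, and at three times the optimum the sign flips from enhancement to impairment.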
Finally, many nootropics likely have a price. Caffeine, for example, improves attention but impairs the ability to deal with contradictory or uncertain stimuli (Foreman et al 1988). If we learn too well, we might remember details at the expense of general knowledge or abstraction ability.
All of these problems suggest that while nootropics can enhance memory, using them optimally is much harder than just swallowing a smart pill at breakfast. The ideal solution would be to visit a cognitive enhancement expert who did a detailed check of one's neurochemistry and cognitive strategies, and then prescribed which substances to take to optimize various tasks. This may not be as far-fetched as it sounds: it is certainly possible to program an agent to suggest the right nootropics to take given the person, time of day, sensed mood and data about the task to be undertaken, and it could be run on a wearable computer. More advanced (and risky) systems could even include an autoinjector as is used for treatment of diabetes today.
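The agent logic in the last paragraph could start out as little more than a rule table keyed on task, time of day and a personal profile. A purely illustrative sketch (the rules, profile fields and substances are invented to show the program structure, and this is in no way medical advice):

```python
def suggest(task, hour, profile):
    """Toy rule-based adviser: pick a substance from a personal profile
    given the task and the time of day. Hypothetical rules only."""
    if task == "sustained attention":
        # Caffeine's effect varies with time of day, so a real agent
        # would gate it on the clock as well as on the user's profile.
        if hour < 14 and profile.get("caffeine_ok"):
            return "caffeine"
        return "none"
    if task == "memorization":
        return profile.get("memory_aid", "none")
    return "none"

# A hypothetical profile produced by the "cognitive enhancement expert".
profile = {"caffeine_ok": True, "memory_aid": "substance X (individually tuned)"}
print(suggest("sustained attention", 9, profile))   # caffeine
print(suggest("sustained attention", 20, profile))  # none
```

The point is only that the tuning problem is mechanizable: once the individual calibration exists as data, applying it at the right moment is a trivial program, well within what a wearable could run.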
It is known that nootropic drugs and cognitive training are complementary and likely act on different systems (Deberdt 1994). In general it seems that improving the hardware (be it internal or external) can give rise to a quantitative improvement, while improved software can give a qualitative improvement. A well implemented combination can become very powerful.
I believe the future of cognitive amplification will be found in the gradual combination of different cognitive tools into unified systems, where both internal and external software support each other.
Wearables and ubiquitous computing make an ideal platform for running cognition amplification software, supporting memory and study skills. As I have already mentioned, wearables can be seen as precursors to (or competitors of) bionic implants, and could act as systems for nootropic amplification.
We might develop zenware: software intended to be used intimately in the thought process, just as wearable computers are physically intimate. Such software might be intended to create efficient mental states that make the most of the skills of the user and the system.
It may be possible to develop new internal software, intended from the start to interface with external systems and based on our knowledge of how memory and cognition work.
In order to achieve this, we need a deeper understanding of the problems of cognition, neuroscience and human-computer interaction; three fields still young but very vital. Over the last few years, more and more research in these fields has become interdisciplinary. HCI and cognitive science are routinely taught together in some places, and many neuroscientists are interested in cognitive science (and vice versa). The link between neuroscience and HCI is at present still weak, but I have no doubt that this will change in time.
This talk was made possible by the generous donation of Sean Hastings and the invitation to speak from Max More of the Extropy Institute for which I am very grateful and impressed.
Buccafusco, J.J., Jackson, W.J., Terry Jr A.V., Marsh, K.C., Decker, M.W. and Arneric, S.P. (1995) Improvement in performance of a delayed matching-to-sample task by monkeys following ABT-418: a novel cholinergic channel activator for memory enhancement, Psychopharmacology 120, 256-266.
Walter Deberdt (1994) Interaction Between Psychological and Pharmacological Treatment in Cognitive Impairment, Life Sciences, 55: 25/26, 2057-2066
Margery Eldridge, Abigail Sellen, Debra Bekerian, (1992) Memory Problems at Work: Their Range, Frequency and Severity Rank Xerox Research Centre Technical Report EPC-1992-129, http://www.rxrc.xerox.com/publis/cam-trs/html/epc-1992-129.htm
Nick Carriero, Scott Fertig, Eric Freeman, David Gelernter, (1997) Lifestreams: Bigger than Elvis. http://www.mirrorworlds.com/whitepapers/elvis.pdf
David Concar, (1997) Brain Boosters, New Scientist February 8, (32-36)
Stuart J. Dimond & E. Y. M. Brouwers, (1976) Increase in the Power of Human Memory in Normal Man through the Use of Drugs Psychopharmacology, 49, 307-309
K. Eric Drexler (1987) Hypertext Publishing and the Evolution of Knowledge Social Intelligence, Vol. 1, No. 2, pp.87-120 http://www.asiapac.com/Hypertext/HypertextPublishingKED.html
Nigel Foreman, Sue Barraclough, Catherine Moore, Anita Mehta & Momento Madon (1988) High Doses of Caffeine Impair Performance of a Numerical Version of the Stroop Task in Men, Pharmacology, Biochemistry & Behavior 32, 399-403
Paolo Garofalo, Silvia Colombo, Marco Lanza, Laura Revel, Francesco Makovec, (1996), CR 2249: a New Putative Memory Enhancer. Behavioural Studies on Learning and Memory in Rats and Mice, J. Pharm. Pharmacol., 48, 1290-1297
Thomas Gilovich, How We Know It Isn't So: The Fallibility of Human Reason in Everyday Life, New York, Maxwell Macmillan 1991
Uma Gupta (1993), Effects of Caffeine on Recognition Pharmacology, Biochemistry and Behavior, 44, 393-396
J. Hindmarch (1986), Activité de l'extrait de Ginkgo biloba sur la mémoire à court terme [Activity of Ginkgo biloba extract on short-term memory], Presse Médicale, 15: 31, September, 1592-1594
Martin Ingvar, Jose Ambros-Ingerson, Mike Davis, Richard Granger, Markus Kessler, Gary Rogers, Robert S. Schehr and Gary Lynch (1997), Enhancement by an Ampakine of Memory Encoding in Humans, Experimental Neurology, 146, 553-559
Diana Jerusalinsky, Edgar Kornisiuk & Ivan Izquierdo (1997) Cholinergic Neurotransmission and Synaptic Plasticity Concerning Memory Processing, Neurochemical Research, 22: 4 507--515
Mik Lamming & Mike Flynn (1994) "Forget-Me-Not" Intimate Computing in Support of Human Memory, Proceedings of FRIEND21, '94 Symposium on Next Generation Human Interface, http://www.rxrc.xerox.com/research/cbis/cbis_5.htm#HEADING4
E.H.Y. Lee & Y.L. Ma (1995) Amphetamine Enhances Memory Retention and Facilitates Norepinephrine Release From the Hippocampus in Rats, Brain Research Bulletin, 4, 411-416.
Edward D. Levin (1992) Nicotinic Systems and Cognitive Function Psychopharmacology, 108, 417-431
A Lino, MM Boccia, AC Rusconi, L Bellomonte, B Cocuroccia (1992) Psycho-functional changes in attention and learning under the action of L-acetylcarnitine in 17 young subjects. A pilot study of its use in mental deterioration, Clin Ter, 1992 Jun, 140:6, 569-73
G. Lynch, R. Granger, J. Ambros-Ingerson, C.M. Davis, M. Kessler, R. Schehr (1997) Evidence that a positive modulator of AMPA-type glutamate receptors improves delayed recall in aged humans. Experimental Neurology, 145: 1 May, 89-92.
Yoelle S. Maarek, Israel Z. Ben Shaul (1996), Automatically Organizing Bookmarks per Contents, Fifth International World Wide Web Conference 1996 May 6-10, 1996, Paris, France. http://www5conf.inria.fr/fich_html/papers/P37/Overview.html
Steve Mann, (1997), Wearable Computing: A First Step Toward Personal Imaging, IEEE Computer, Vol. 30, No. 2, February 1997, 25-32 http://www.computer.org/pubs/computer/1997/r2025.htm
Martinez Jr., Joe L. and Kesner, Raymond P., Eds. (1989, 1991) Learning and memory: a biological view, Academic Press (Second edition)
J.L. McClelland, B.L. McNaughton, R.C. O'Reilly (1995) Why there are complementary learning systems in the hippocampus and neocortex: Insights from the success and failures of connectionist models of learning and memory, Psychological Review, 102, 419-457.
Warren H. Meck, Rebecca A. Smith, Christina L. Williams (1989) Organizational Changes in Cholinergic Activity and Enhanced Visuospatial Memory as a Function of Choline Administered Prenatally or Postnatally or Both, Behavioral Neuroscience, 103: 6, 1234-1241
Warren H. Meck, Rebecca A. Smith, Christina L. Williams (1987) Pre- and Postnatal Choline Supplementation Produces Long-term Facilitation of Spatial Memory, Developmental Psychobiology, 21: 4, 339-353.
G.A. Miller (1956) The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97
Paula J. Mitchell & Jennifer R. Redman (1992) Effects of caffeine, time of day and user history on study-related performance, Psychopharmacology, 109, 121-126
Cesare Mondadori (1996), Nootropics: Preclinical Results in the Light of Clinical Effects; Comparison with Tacrine, Critical Reviews in Neurobiology, 10: 3&4, 357-370
Cesare Mondadori (1994) In Search of the Mechanism of Action of the Nootropics: New Insights and Potential Clinical Implications, Life Sciences 55:25/26, 2171-2178
Bernard M Patten (1990) The History of Memory Arts, Neurology 40 346-352 1990.
R.W. Picard (1995) Affective Computing, MIT Media Laboratory Perceptual Computing Section Technical Report No. 321 ftp://whitechapel.media.mit.edu/pub/tech-reports/TR-321.ps.Z
Bradley J. Rhodes, Thad Starner (1996), Remembrance Agent: A continuously running automated information retrieval system The Proceedings of The First International Conference on The Practical Application Of Intelligent Agents and Multi Agent Technology (PAAM '96), pp. 487-495. http://rhodes.www.media.mit.edu/people/rhodes/Papers/remembrance.html
Mario Sansone, Claudio Castellano, Sergio Palazzesi, Mario Battaglia & Martine Ammassari-Teule (1993) Effects of Oxiracetam, Physostigmine, and Their Combination on Active and Passive Avoidance Learning in Mice, Pharmacology, Biochemistry and Behavior, 44: 2, 451-455
A.J. Sellen, G. Louie, J.E. Harris, A.J. Wilkins (1996), What Brings Intentions to Mind? An In Situ Study of Prospective Memory, Rank Xerox Research Centre Technical Report EPC-1996-104, http://www.rxrc.xerox.com/publis/cam-trs/html/epc-1996-104.htm
Thad Starner (1996) Intellectual Collectives Through Use of the Remembrance Agent (or "Serendipity is too important to be left to chance") http://lcs.www.media.mit.edu/people/lieber/Teaching/Collaboration/Final-Projects/Starner-Project.html
Thad Starner (1995) The Cyborgs are Coming. Unpublished. ftp://www-white.media.mit.edu/pub/tech-reports/TR-318.ps.Z
Thad Starner, Steve Mann, Bradley Rhodes, Jeffrey Levine, Jennifer Healey, Dana Kirsch, Rosalind W. Picard, and Alex Pentland. Augmented Reality Through Wearable Computing. To appear in Presence, Special Issue on Augmented Reality. ftp://www-white.media.mit.edu/pub/tech-reports/TR-397.ps.Z
Larry R. Squire & Stuart Zola-Morgan (1991) The medial temporal lobe memory system, Science, 253, 1380-1386.
Peter Thomas (1995), Interview of Thad Starner, New Scientist http://lcs.www.media.mit.edu/projects/wearables/newsci.html
Johan Walter (1997) A Decision Tool for Uncertain Decision Making, masters thesis at Stockholm University, Dept. of Numerical Analysis and Computing Science, TRITA-NA-E9728
D.M. Warburton, J.M. Rusted, J. Fowler (1992) A comparison of the attentional and consolidation hypotheses for the facilitation of memory by nicotine, Psychopharmacology, 108, 443-447
Mark Weiser, "The Computer for the Twenty-First Century," Scientific American, pp. 94-104, September 1991 http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html
Mark Weiser, "Building Invisible Interfaces". Keynote speech at the User Interface and Systems Technology (UIST) Conference, November 1994, Marina del Rey, California. http://www.ubiq.com/hypertext/weiser/UIST94_4up.ps
Ardith Zwyghuizen-Doorenbos, Timothy A. Roehrs, Lauren Lipschutz, Victoria Timms & Thomas Roth (1990) Effects of caffeine on alertness Psychopharmacology, 100, 36-39