Chapter One
Over time, one invariable lesson of biological research has been the difficulty, virtual impossibility, of reliably predicting the properties of intact organisms from the properties of their constituent tissues, cells and molecules. Philosophers have argued the reasons, but empirically we know that accurate prediction is not possible now, or in the foreseeable future. For the genetics revolution to provide the insights we hope to gain into the human condition, we must have adequate experimental animals[1].
Kenneth Paigen
The human code was just the top layer of Celera’s information lode. Most valuable of all – even more important, perhaps, than the human code – would be the mouse genome… on the genomic level people and mice are amazingly similar… This is what makes mice such superb lab models for cancer and other human diseases: genetically, they are essentially little hairy human beings that can be manipulated in the lab in ways that people obviously cannot[2].
James Shreeve
Animals are born, are sentient and are mortal. In these things they resemble man. In their superficial anatomy – less in their deep anatomy – in their habits, in their time, in their physical capacities, they differ from man. They are both like and unlike[3].
John Berger
Introduction
Meet them in the lab
They are mice like any other. As a rule, there is no way you can tell from the outside whether a mouse is genetically modified or not. If you want to know, you have to look at its genes. This is not to say that transgenic mice are ordinary mice. On the contrary, whoever visits the transgenic unit of a research laboratory for the first time will be surprised by the large variety of specific mouse strains that are involved in mouse biotechnology. Every mouse strain has its unique properties. They differ in coat colour, the typical behaviour they display, and the distinct physical properties they have. The OLA, for example, also known as the 129, is very cute; it has a beautiful silver grey coat. It is a mouse of great value because it has good embryonic stem (ES) cells. At this moment there is no other mouse with ES cells that can so easily be cultured in vitro. Apart from that, however, it is a remarkably stupid animal, according to some researchers at least. Moreover, the 129 mice have a bizarre pathology. Over time they all develop an infection of the eye. The FVB, a traditional white mouse, is a neurotic creature. You can easily recognize this animal when it is running in circles. But according to the molecular biologists who work with such mice, it is a ‘supermouse’ because of its embryonic properties. The FVB embryo is very easy to manipulate, but only by means of micro-injection. You cannot place cells of another mouse into an FVB blastocyst. In contrast to those of the FVB, the blastocysts of the ‘Black Six’, also known as B6, are easy to manipulate. This is one of the reasons why it has become such a popular mouse. The B6 is also an excellent foster mother. Because of its character and good looks, it is the favourite animal of many who work with mice. The B6 is a beautiful, dark-coated, small mouse, originally bred by Chinese breeders of fancy mice. Quite striking to see are the patched chimeras, which are a mixture of two mouse strains. The cells of these mice, including the cells that make up the coat, originate from the embryo cells of two different mice and therefore have a different genetic make-up.
The transgenic mice live in animal quarters referred to as mouse facilities. A tag is attached to their cage that provides information about the specific mouse strain: the genetic modification it carries; whether it is a homozygote or a heterozygote for this modification; its date of birth; and other specific information relevant to the animal caretakers and researchers. Before entering the mouse facilities one has to change clothing and shoes and wash hands thoroughly for a minute. This is merely one of the measures that have been taken to avoid the contamination of these precious animals. Bred only to live in laboratories, these mice are very susceptible to infections. For example, mice with severe immune deficiencies need to be kept pathogen free[4]. Outside the safe walls of the laboratory, many of these inbred mice, transgenic or not, would probably not survive very long.
The secret of the mouse’s scientific career
The ‘natural’ habitat of these laboratory mice is the laboratory. Nobody who is at home in contemporary bio-medical science will be surprised to see transgenic mice in a research facility. But how did they get there, and why were these mice introduced into the laboratory? Since the first transgenic mice were born in the early 1980s, transgenic mice have had an impressive career within the life sciences. The vast majority of transgenic animals that can be found in a contemporary laboratory are mice. But why was it the mouse, and not another animal, that became the most popular animal in transgenic technology? And, last but not least, how has the mouse evolved during the past 100 years as a laboratory animal? Not only in a literal sense: how did scientists influence the evolution of the lab mouse as a species, how did they influence its genetic make-up, its genome? But also in a ‘conceptual’ sense: how did the mouse change from the perspective of the researchers?
This chapter is meant to be an introduction to the main protagonist of this book: the genetically modified mouse. It is not a creature ‘ex nihilo’, but the outcome of a long and unique history, an important chapter of the history of the life sciences. To understand the transgenic mouse of today, we have to look at its genealogy. For the reconstruction of the ‘birth and rise’ of the genetically engineered laboratory mouse, I use four types of written sources: 1) publications in the major scientific journals such as Science, Nature and Proceedings of the National Academy of Sciences. It is here that scientific breakthroughs are first presented to the scientific community and the public, and where scientists discuss the new mouse technologies in scientific reviews; 2) responses to these publications in the popular media such as national newspapers. These are a useful source of information because they reflect the spirit of the times and give a good impression of how developments in biotechnology are perceived by the public, and in the interviews with the responsible scientists important personal statements can be found that are illustrative of the scientific expectations of biotechnology at a particular time (e.g. Schmeck 1983; Saltus 1990; Rensberger 1992b; Schrage 1993); 3) retrospectives by scientists who, for different reasons, look back on their own field of research and the animal with which they are so familiar, by highlighting either the pioneering role of great scientists (e.g. Paigen 2003a,b; Crow 2002; Klein 2001; Arechaga 1998; Papaioannou 1998; Russell 1978) or their own role in the development of the new revolutionary technology (e.g. Snell 1978; Strong 1978; Smithies 2001; Evans 2001; Capecchi 2001; Tarkowski 1998; Palmiter 1998; Bradley et al. 1998), or in order to praise the mouse, after the cracking of its genetic code, for its contribution to biomedicine (Clarke 2002; Travis 2003); and, 4) last but not least, the work of social scientists who study the emergence of typical research models, laboratory animals, and research materials and answer the question of how and why mice became ‘the right organism for the job’ (Rader 1999, 2001). From these sources it can be concluded that the mouse was introduced into the biomedical laboratory a century ago. Since then, it has experienced a number of dramatic transformations. These transformations reflect important shifts and changes in biomedical power. For many years, before the emergence of transgenic technology, mice had been present in biomedical laboratories all over the world. Their specific evolution has been highly influenced by science, but at the same time the laboratory mouse (its strengths and weaknesses as a model) has influenced the evolution of science as well. Ever since mice were introduced into laboratories, their genome has been influenced by the research agendas of science, but the opposite is also true: research agendas have been adapted to the mouse genome. The laboratory mice that have been used to create transgenic mice were by no means ‘normal’ mice. They were the result of a long process of inbreeding. But, probably even more important, as a result of this inbreeding they developed some unique properties that make them very suitable for transgenic technology.
Three crucial steps can be discerned in the ‘scientific career’ or ‘genealogy’ of the lab mouse. The first step was the transformation of the mouse from an animal that was an object of study in its own right into a homogeneous laboratory tool that could be used for studying the laws of genetics. A second important step in the career of the mouse came about when it became the pioneering species of transgenic technology. The third and final step can be described as the transformation of the mouse into a model. The mouse has become a mouse model, a transgenic stand-in for human beings in biomedical research[5].
This chapter takes a more or less chronological approach, and is structured around these three steps. In Part One, I will describe the birth of the laboratory mouse. Its history begins in the early years of the 20th century when researchers in biology began to inbreed mice for scientific purposes. Once this first step was taken, the inbred mouse became an important instrument in genetics. With this practice of inbreeding a new ‘phenomenon’ was born: the mouse became a laboratory mouse. In Part Two, I describe the transgenic revolution. When two critical threshold conditions were met in the 1970s – the possibility of culturing embryos in vitro and the availability of DNA recombination technologies – a second step was taken: the creation of the transgenic mouse. I will refer to the development of the first transgenic mice and the technological improvements that led to increasing control over gene expression as the ‘transgenic revolution’. In Part Three, I describe the development of the mouse as a model. During the transgenic revolution, research was primarily focused on gene regulation. When the possibility arose to knock out genes in a more direct and controlled manner, the mimicking of human gene defects in mice became the next challenge. With the ‘discovery’ of knock-out technology the ‘career’ of the transgenic mouse received a real boost. The mouse then evolved into the animal model for human diseases. This evolution, from laboratory tool into stand-in, is one that is still taking place today. Every week, it seems, new mouse models are reported that mimic human genetic diseases.
Part One: The ‘birth’ of the laboratory mouse
Clarence Cook Little’s inbred mice
All the sources I consulted[6] concerning the history of the laboratory mouse point to Clarence Cook Little as the first and most important scientist responsible for the ‘creation’ of the inbred mouse, the predecessor[7] of the transgenic mouse. In retrospect, his work can be seen as the beginning, the first milestone in controlling the genetic nature of mice for research purposes (Mobraaten and Sharp 1999). Little studied biology at the beginning of the 20th century at Harvard University. There he followed genetics classes with W.E. Castle, Professor of Zoology, who invited him to work at his lab at the Bussey Institute. In November 1907, Little took over the work of maintaining Castle’s mouse stock. Castle believed that practical experience with real organisms was the best way to learn about genetics (Rader 1999). At the Bussey Institute, Little studied the inheritance of coat colour in mice. Most mice came from the Granby Mouse Farm, managed by Abbie Lathrop. In those days, fancy and exotic mice were a much sought-after curiosity. The mice that came from Lathrop’s farm were selected on the basis of features such as a friendly character, coat colour, or curious kinds of behaviour (for example, the Japanese waltzing mice). These animals – by no means wild mice – were Castle’s and Little’s experimental raw materials. Before entering Little’s scientific breeding programme, these mice were already the result of many years of selective breeding.
In the early days of the Bussey Institute, shortly after the rediscovery of Mendel’s work[8], there was much discussion about whether the laws of genetics could be studied through inbreeding. Little was convinced that, for research on the Mendelian inheritance of specific characteristics in mice, pure strains (that is, strains with a homogeneous genetic background) were needed. This was the reason why in 1909 he began to inbreed mice for his research in genetics. He bred mice that were recessive for specific coat colour genes by mating brother and sister in each generation in order to maintain the recessive genes. In those days, inbreeding of mammals was a controversial practice. For example, Castle, his own professor, who saw himself as an ‘experimental evolutionist’, did not believe that inbreeding alone could ever produce artificial yet genetically stable strains (Rader 1999). An inbred variety tends to be delicate and sickly, and therefore rather susceptible to disease. Animals that do survive often become infertile. Few scientists believed that viable strains of inbreds could be maintained in the long run (Strong 1978). Notwithstanding the overall scepticism that his work provoked, in 1911 Little was able to report his first successful inbred strain, the DBA strain, named after its coat-colour genes: dilute, brown, non-agouti.
It was not only his breeding skills that made Little the ‘father’ of the laboratory mouse. According to ‘laboratory mouse historian’ Karen Rader, Little was ‘not the first person to think of inbreeding mice or mammals, he was not the only researcher working with mice and not the only scientist to see the methodological potential of homogeneous mammalian animals for freeing genetic research from the local limits of time and space [ …] however he stabilized inbred mouse material at the time that he effectively connected this material to well understood sets of research questions and approaches in the rapidly expanding discipline of Mendelian genetics’ (Rader 1999: 328).
Of mice, Mendel and cancer research
Researchers soon discovered that an important property of inbred mice was the relatively high incidence of spontaneous tumours. Some strains possessed unique susceptibility characteristics to various types of cancer (Mobraaten and Sharp 1999). Little’s DBA mouse, for example, displayed a hereditary susceptibility to mammary tumours (Rader 2001). It was not surprising, therefore, that the focus of Little’s research changed from coat colour to cancer research (Russell 1978). He was convinced that cancer was genetically determined and followed a Mendelian pattern of inheritance[9]. In order to study his hypothesis, he moved to Tyzzer’s lab in 1913. Tyzzer, a researcher on tumour resistance, had recently discovered that tumours derived from the Japanese waltzing mouse (also an inbred strain) could be transplanted to mice of the same strain, but not to mice of other strains. Wild mice did not ‘accept’ the tumour transplants. In addition, he saw that when he crossed the Japanese mice with wild-type mice, the resulting F1 offspring accepted the tumour but mice of the F2 generation did not. From these data he concluded that tumour susceptibility was not an inherited Mendelian trait. These confusing research data sparked a strong debate within the scientific community about the validity of the Mendelian postulates. Arguing in favour of Mendel, Little postulated that multiple Mendelian factors could explain the different tumour susceptibility observed in the F1 and F2 crosses. In other words, Little suspected that the effect observed could be explained by the involvement of more than one gene in tumour resistance. He suggested that a large number of genes were involved in determining whether a mouse would reject or accept a transplanted tumour, and that for each of these genes there were two alleles, one dominant and one recessive (Paigen 2003a). In order to prove his theory, Little compared two different homogeneous strains: the Japanese waltzing mouse and his own DBA. His hypothesis was confirmed (Paigen 2003a; Rader 1999). It is important to note that, because of the mathematical precision of the predictions of Little’s Mendelian-based theory, the multi-factorial hypothesis of cancer transmission could, by definition, only be tested with inbred mice (Rader 1999). In heterogeneous mice one has no control over the interference of other ‘background’ genes in the resulting phenotype. Research data obtained from heterogeneous mice would simply be too complex to interpret.
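To illustrate the arithmetic behind Little’s argument (the number of factors used here is illustrative only): if acceptance of a transplanted tumour requires a dominant allele at each of $n$ independent loci, every F1 hybrid is heterozygous at all of those loci and therefore accepts the tumour, whereas an F2 animal accepts it only with probability

$$P_{\mathrm{F2}} = \left(\tfrac{3}{4}\right)^{n},$$

which for, say, $n = 12$ is roughly 3 per cent. Almost all F2 mice would thus reject the transplant even though every factor is inherited in a strictly Mendelian way – a prediction that can only be checked quantitatively in genetically homogeneous strains.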
With the linking of mouse genetics to cancer research the inbred mouse became the model animal for this type of research, simply because it was the best available candidate. Amongst mammals the mouse is second only to man in the frequency and variety of spontaneous cancers. ‘Regrettably the frequency of occurrence was still all too rare’, noted Strong in a reflection on Little’s work. ‘A single mouse with a spontaneous tumour was selling for $300 in laboratories on the eastern seaboard. The use of mice in the numbers needed for quantitative research necessitated a ready supply at minimal cost’ (Strong 1978)[10]. During the first half of the 20th century, cancer was the driving force behind mouse genetics. It greatly influenced the development of the mouse as a genetic system (Paigen 2003a). However, the career of the inbred mouse as model animal for cancer research did not proceed without dispute. Little became involved in a controversy with another prominent researcher in cancer genetics, Maud Slye, that lasted for years (Russell 1978; Rader 1999). Slye was of the opinion that research on inbred animals could never lead to reliable research data. Little had to convince other scientists of the usefulness of his inbred mice. Not only did the mice themselves have to be changed – into homogeneous lab animals – but the dominant mind-set in the scientific community had to change as well. Little invested 40 years of work in his own laboratory, now known as the Jackson Laboratory, to get his inbred mice accepted.
The rise of mouse genetics
Of course, Little was not the only researcher involved in the early history of the inbred mouse. In 1918 he moved to the Cold Spring Harbor laboratory on Long Island (New York) where, at the Station for Experimental Evolution, a small but robust research group was formed, focussing on mouse genetics. They called themselves the ‘Mouse Club of America’. It was here that the research on tumour genetics using inbred mice started to get serious (Russell 1978; Pennisi 2000; Rader 1997). Well-known researchers from the Mouse Club, besides Little, were Halsey Bagg, Leonard C. Strong, George Snell and Leslie C. Dunn. A number of inbred mouse strains in common use today were developed during that period by these researchers. These mouse strains were created either as strains exhibiting a very high incidence of spontaneous neoplasia or as strains that proved to be useful as the necessary low-incidence controls (Paigen 2003a). The BALB/c, for example, the first inbred albino mouse, now one of the best known inbred mice, was bred by Halsey Bagg (Pennisi 2000). Strong, one of Little’s co-workers, crossed this BALB/c with an albino produced by Little to create what they called the ‘A-strain’ (Strong 1978). This mouse had a very high predisposition for lung and mammary tumours and was therefore very suitable for cancer research. Subsequently, Strong kept inbreeding this A-strain, resulting in the C3H high-tumour-incidence sub-strain. With the aid of these mice he was able to show that cancer is indeed inherited (usually in a dominant way) (Strong 1978). In 1921 Little bred the C57BL line. This is the inbred mouse line from which the sub-variety C57BL/6 originates. C57BL/6, also referred to as B6 or ‘Black Six’, is one of the most popular mice today. It is also the mouse whose genome was ‘cracked’ in 2002 (Waterston et al. 2002). The name ‘B6’ originates from the number of the ‘mother’ or founding female of this strain: she was female number 6 of the C57BL strain (Russell 1978). In 1928 Dunn bred the 129, a mouse with a high incidence of testicular carcinoma. This mouse was the predecessor of the 129/Sv, a mouse variety of great importance for transgenic technology because of its ES cells.
In 1929, Little founded the Roscoe B. Jackson Memorial Laboratory, now known as the Jackson Laboratory, in Bar Harbor. Other prominent members of the Mouse Club soon followed him. It is from here that the inbred mice started to conquer the world, notably when in 1933 the laboratory began to sell inbred mice to other laboratories. The selling of mice soon became an important activity of the Jackson Lab, as it still is today. In 1941 the laboratory shipped 2500 mice a week, and in 2002 the number of mice shipped per week was 44,000 (Crow 2002). Thus was established the international fame that the Jackson Lab and the ‘Jax™’ mice have today. Although cancer research was a dominant stream, not all members of the Mouse Club were engaged in it. Dunn, for example, also a pupil of Castle, used inbred mice to study genetic mapping, the localisation of genes on the chromosomes (Lyon 1990). In 1920, he published the first paper on the systematic search for linkage amongst coat colour varieties. This pioneering work on the genetic mapping of the mouse genome would later become of great importance to genomics and the human genome project. Another big name in mouse genetics was George Snell, who joined the Jackson Lab in 1935. He too was a pupil of Castle, and was mainly interested in the genetics of tumour rejection, rather than in the genetic mechanisms underlying cancer as such. Snell was interested in the genetics behind the immune system. In 1936, the major histocompatibility complex, at that time referred to as H2, was discovered by immunologist Peter Gorer. In order to eliminate the complexity introduced by the presence of different interacting H loci, Snell set up an ingenious strategy of cross-intercrossing and cross-backcrossing and started the breeding of congenic strains[11] (Klein 2001). With these mice, Snell discovered in 1951 that one of these H loci was more important than the others, and he also found a visible genetic marker with which he could follow the segregation of the H locus. Later, Snell discovered that the H locus was in fact a complex. Today we know this H2 locus as the major histocompatibility complex, one of the most important elements of the immune system. In 1980 he received the Nobel Prize for this boundary-breaking work (Paigen 2003a).
By 1939, the inbreeding of mice had reached such proportions that a reliable and extensive overview of the inbred strains was needed. For that purpose the International Committee on Standardized Nomenclature for Mice was founded. All mouse strains were given names and codes on the basis of a standardised system. Later, the word ‘genetic’ was added to the committee’s name. The committee was charged with the task of establishing and updating rules and guidelines for genetic nomenclature (Silver 1995). Mouse genetics is by its very nature a collaborative field of scientific investigation. It is therefore of key importance that researchers speak the same language and use the same coding system. Now, with the explosive growth of transgenic mouse strains, this nomenclature has become indispensable. Another sign that mouse research was becoming more strongly organised was the Mouse News Letter, which started to circulate in 1949. This newsletter was renamed Mouse Genome in 1990 and Mammalian Genome in 1997.
The mouse and the Wistar rat
An interesting comparison can be made with the rat, another commonly used laboratory animal. The tale of the Wistar rat also starts at the beginning of the 20th century, when Helen Dean King at the Wistar Institute worked on an inbred rat strain. It was her explicit goal to develop a standard lab animal, an animal that would generate the same research data at different laboratories (Tocher Clause 1993), an animal one could make atlases of. This albino rat, also known as the Wistar rat, can still be found in large numbers in laboratories all over the world. It has become the standard lab rat. In modern laboratories, the name ‘Wistar’ stands for reliability and quality. As Little and King had already shown, inbreeding does not necessarily lead to inferiority and infertility. On the contrary, with her Wistar rat, King proved that through inbreeding it is possible to produce useful qualities such as a mild character and good fertility. Notwithstanding these similarities, the inbred mouse and the inbred rat have had totally different careers in science. Whereas the rat is a popular animal in physiology and behaviour studies, the mouse is the animal associated with genetics and cancer research. After the discovery of the MHC, mice also became associated with immunology.
‘The right tool for the job’
One of the most important transformations that the mouse underwent at the beginning of the 20th century was that from animal research object into ‘laboratory tool’. Little was not interested in mice as animals, but in what he could learn from them about genetics. He used his inbred mice to study Mendelian genetics in a living species. As a result of his scientific approach, a combination of inbreeding and mathematics, the mouse became an instrument rather than an animal. Mice became part of the standard equipment of the modern genetics laboratory. For Little, the mouse probably had the same meaning as the pea had for Mendel[12]. Little and his contemporaries were trying, so to speak, to look through the animals towards the laws of genetics. They were not interested in the mouse per se, but in its mysterious genes. They developed a genetic gaze on the animal. The laboratory mouse as a phenomenon was born.
The scientific rationale behind the mouse strains is well illustrated by the words of Mobraaten and Sharp: ‘High quality research depends on the purity and consistency of reagents, including experimental animals, for efficient reproducibility of results. It is readily recognized that the purity and consistency in experimental animals depends on both genetic homogeneity and controlled environments that avoid variation caused by nutritional, pathogenic or other environmental effects’ (Mobraaten and Sharp 1999: 129). To serve science best, the mouse not only had to be transformed into a tool or instrument, it had to become the ideal tool for studying genetics, that is, it had to be as reliable and predictable as possible. In order to achieve this goal, Little and his colleagues had to eliminate as much variation as possible within their mouse strains. The interchangeability of individual mice within a strain guaranteed scientific objectivity and efficient reproducibility of results. The quality of the mice depended on their purity and consistency. In the hands of the geneticists, the population of laboratory mice evolved into a collection of homogeneous strains. Mice of an inbred strain are more or less genetically identical and therefore exchangeable. The variety between mouse strains, on the other hand, is quite significant. A specific inbred strain stands for specific behavioural and physical properties. This genetic variation between, but not within, different mouse strains has made it possible to study genetics at the smallest level: the single gene. In the process of becoming the right tool for genetics research[13], the individual mouse lost its former identity as an animal. The individual mouse became the equivalent of the mouse strain it represented. This is also reflected in the language of researchers, who refer to their mice as ‘BALB/c’, ‘129’ or ‘Black Six’. In a unique process of selection, an artificial subspecies – that of the inbred strains – was created.
The transformation into a ‘standard’ or model animal was not unique to the mouse. But its role in genetics was. For research in genetics, mice were, at that time (as they are today), simply the right organism for the job. The mouse was cheap, easy to keep, and eager to multiply and reproduce. Newborn pups take about ten weeks to mature, so scientists could breed several generations in a short period. Moreover, mice are small, relatively tame, and require less space, food and attention than, for instance, dogs, rabbits, or other animals. But most importantly, fancy mice, available from breeders and collectors of pet mice, were a physically diverse lot (Stroh 2002). They were a pool of interesting mutations in terms of coat colour and behaviour. All these characteristics made the mouse of key importance to genetics research. And when it turned out that those mice, partly as an effect of their inbreeding, displayed a high incidence of tumours, the career of the mouse was secured. Genetics determined the (genetic) fate, the evolution, of the research mouse. As the dominant mammal in genetic cancer research, the mouse, in turn, influenced the course of mammalian genetics. The fact that researchers knew so much about its genes, their familiarity with the animal in the lab, and some typical properties of certain inbred strains, all made the inbred mouse the ideal candidate for a pioneering role in the transgenic revolution that started in the late 1970s.
Part Two: The transgenic revolution
‘Those were heady days’ wrote Virginia Papaioannou in 1998, in a reflection on the ‘coming of age of the transgenic era’. ‘For those of us entering the field of genetics and mammalian embryology in the 60s and 70s, the excitement was palpable. As graduate students and post-docs, we saw that long-standing barriers were tumbling down before an onslaught of technological advances. And not only that: the scientists breaking those barriers were all around us as mentors and colleagues. The rapid pace of progress and the seemingly boundless possibilities hooked us into the field, and we gradually became aware that we, too, were part of a revolution that was well on the way to opening the mammalian genome to experimental alteration’ (Papaioannou 1998: 841).
Manipulating the genome: recombinant DNA technology
In the early 1970s, two necessary conditions for the making of transgenic mice were met: recombinant DNA technology, and the in vitro culture of mouse embryos. In 1972, Paul Berg, a pioneer in the field of biotechnology, created the first recombinant DNA molecule, using a plasmid, a bacteriophage, and E. coli DNA for his recombination. With this pioneering work, Berg showed that DNA can be manipulated and, even more, that DNA from one organism can be transferred to another organism. In 1980 he received the Nobel Prize for this boundary-breaking work (Anonymous 1980). The big question was, of course, whether DNA recombination would also be possible in mammalian cells, or even whole organisms. The genetic modification of a mammal is for several reasons more complicated than that of a single mammalian cell or a single-celled organism. In order to create a genetically modified mammal, one needs to manipulate its embryonic cells. This is only possible in vitro. The next step is to place the manipulated embryo into a pseudo-pregnant foster mother in order to let it develop into a normal whole organism. This called for completely different skills, knowledge and technologies than those with which geneticists had been acquainted so far.
Manipulating the embryo: the search for the ES cell line
In the late 1960s and early 1970s, besides genetics, another field within biology – developmental biology – started to play an important role in the history of the lab mouse. Scientists such as Andrzej Tarkowski, Beatrice Mintz, Ralph Brinster and Richard Gardner started to experiment with the in vitro culture and manipulation of embryonic cells. They did so because they were fascinated by the processes that lead to the development of a complex organism out of one single embryo cell. In the early 1960s, Tarkowski and Mintz independently demonstrated that fusing two 8-cell (3-day-old) mouse embryos would produce chimeric adults containing cells from each original embryo (Arechaga 1998). The mouse embryos they used originated from mice with different coat colours. The resulting chimeras had a patterned, coloured coat. ‘The composite animals that developed from such combinations of genetically different cells were dramatic to look at, but were even more impressive considering the potential they held for tracing cell lines, testing cell potential, and eventually (as we shall see later) as vehicles for gene manipulation’ (Papaioannou 1998: 843). That these were remarkable experiments can also be concluded from the words of Tarkowski, who reflected upon his own work in 1998: ‘At that time the idea of making one mammalian individual by aggregating two cleaving embryos must have looked rather preposterous and later I wondered why Professor Rogers F.W. Brambell, under whose supervision I worked […], had accepted this […] crazy project which I proposed to carry out in his laboratory’. But the fact that somebody else, namely Beatrice Mintz, was involved in exactly the same experiments surprised him even more (Tarkowski 1998: 903).
Soon, others started to experiment with embryos. In 1968, Richard Gardner also successfully ‘created’ mouse chimeras. Unlike Mintz and Tarkowski, he did not fuse whole embryos. He injected embryo cells from one mouse into 4-day-old blastocysts of another mouse (Arechaga 1998; Tarkowski 1998). Ralph Brinster, who was inspired by Gardner’s work, saw great potential in this blastocyst injection technique: ‘I believed that there were multipotent cells in older postimplantation embryos (e.g. 6-8 days old) as well as cells from teratocarcinomas that would colonize a blastocyst, thereby influencing differentiation of an embryo in a predictable way, and perhaps enter the germ line’ (Arechaga 1998: 866). In 1972 Brinster and his co-worker Moustafa were able to report another success when they succeeded in creating chimeras out of embryo cells of different ages, even up to 7-8 days (Moustafa and Brinster 1972). These studies supported the idea that mouse blastocysts could be colonised by nonsynchronous cells (cells of a different age).
Encouraged by these results, Brinster searched for a pluripotent cell line that could be manipulated in vitro and subsequently placed into a mouse blastocyst. Today we know this pluripotent cell line as the ES cell line. The history of the ES cell line dates back to 1967. In that year, Leroy Stevens bred a mouse strain with a high incidence of spontaneous testicular teratomas. This mouse, still widely used today, is the 129/Sv. The teratomas of these mice are composed of different types of differentiated cells and also of multipotent (non-differentiated) stem cells known as embryonal carcinoma (EC) or teratocarcinoma cells. From these 129/Sv teratocarcinomas, Stevens cultured the OTT6050 cell line. These stem cells, or EC cells, resemble early embryos in their morphology, biochemistry and cell surface (Papaioannou et al. 1975). After his success with the chimeras in 1972, Brinster was able to lay his hands on this OTT6050 teratocarcinoma cell line. The cells he obtained had to be cultured as an ascites[14] tumour (in the abdomen of a host mouse). Brinster injected these embryonal carcinoma cells, originally obtained from an agouti-pigmented mouse, into blastocysts from a random-bred albino mouse. According to his own reports these experiments were very successful[15]. In 1974 the chimeric mouse with agouti stripes on an albino background was born (Arechaga 1998). In 1975 Mintz, working at the Institute for Cancer Research in Fox Chase, Philadelphia, reported on her genetically mosaic mice. Like Brinster, she used Stevens’s 129/Sv teratocarcinoma. Mintz was surprised by the potential of malignant cells to develop, after 200 transplant generations (8 years in culture as an ascites tumour), into normal functional cells in the chimeric mice. ‘The tumor itself generally kills its host by 3-4 weeks after transplantation. Yet our oldest mosaic animal […] is now 11 weeks old and appears to be healthy and vigorous’, she wrote in a Proceedings of the National Academy of Sciences article in 1975 (Mintz and Illmensee 1975: 3588). On the basis of these experiments, Mintz drew important conclusions about the development of malignancies. ‘The origin of this tumour from a disorganized embryo suggests that malignancies of some other, more specialized, stem cells might arise from comparably thorough tissue disorganization, leading to developmental aberrations of gene expression rather than changes in gene structure’ (Mintz and Illmensee 1975: 3585). Gardner’s team from Oxford and Evans from London together confirmed in 1975 that these teratocarcinoma cells could contribute in vivo to normal morphogenesis and differentiation (Papaioannou et al. 1975).
The results were impressive and hopeful but, nevertheless, the researchers did not succeed in transmitting the teratocarcinoma cell line into the germ line of the mouse (Papaioannou 1998; Bradley et al. 1998). An additional complication was the method of culture. The teratocarcinoma had to be cultured in vivo as an ascites tumour, which made it hard to manipulate the cells prior to injection into the blastocyst. This problem was solved when Evans and Kaufman in 1981 reported a pluripotent ES cell line that could be kept in in vitro culture. These ES cells had, in contrast to cells derived from the embryonal carcinoma, a normal karyotype (Evans and Kaufman 1981). Another great advantage of the ES cells was that they could be cultured directly from the embryo. The technique of culturing ES cells from embryonic cells was of great importance to the later gene transfer technology. Evans and Kaufman initially named their cell line the EK cell line, an acronym of their own names. But Gail Martin, who conducted similar experiments a year later, introduced the term ‘embryonic stem cell’, or ES cell, a term that is still in use (Evans 2001). In 1984, Alan Bradley and his co-workers Evans, Robertson and Kaufman succeeded in the transmission of these ES cells into the germ line of chimeric mice (Bradley et al. 1984). ‘The appearance of a pup with dark eyes in a litter caused great excitement in the Evans laboratory. This pup was fathered by a male chimera generated from cultured embryonic stem (ES) cells. […] Unbeknownst to us at that time, this germ line transmission event signalled the emergence of a new age in mouse genetics’ (Bradley et al. 1998: 943)[16]. The possibility of transmitting ES cells to the germ line of mouse chimeras indicated that the technological pathway to create a transgenic mouse was available. If one were to use ES cells carrying foreign DNA and transmit them to the germ line, the offspring of the chimera would be transgenic.
1980-1981: the birth of the first transgenic mice
However, it was not via the ES cell-chimeric mouse route that the first transgenic mice were created. The first transgenic mice were created in the early 1980s by the micro-injection of foreign DNA fragments into the pro-nucleus of a fertilized mouse egg cell[17]. In the period between December 1980 and November 1981, six groups independently reported the birth of transgenic mice (Costantini and Lacy 1981; Brinster et al. 1981; Wagner (E.F.) 1981; Wagner (T.E.) 1981; Harbers 1981; Gordon et al. 1980). The first group that was successful in the creation of transgenic mice was Frank Ruddle’s team from Yale University (Gordon et al. 1980). They injected a recombinant bacterial plasmid into the pro-nucleus of a fertilized egg cell of a mouse. The plasmid they used contained DNA segments (thymidine kinase) of the human herpes simplex virus (HSVtk) and the monkey SV40 virus. The foreign DNA fragment they injected seemed to have integrated into the genome of their mice, but since the plasmid only contained a cDNA, the DNA fragment could not be expressed (Palmiter 1998). The news was covered by two New York Times reporters who envisioned the ‘creation of animals with new traits and, ultimately, of cures for hereditary diseases amongst humans’ (Ferrel and Slade 1980: 7). Six months later, Mintz’s group reported the successful introduction of the human beta-globin and HSVtk genes into the genome of mouse foetuses (Wagner (E.F.) et al. 1981). They also used a plasmid as vector. Unlike Ruddle, they did observe the expression of the HSVtk gene in one of their (late foetal) animals, although it was barely detectable. The researchers were clear about the implications of these results for medical research: ‘These experiments provide a practical basis for novel investigations of the developmental control of normal gene expression in vivo, of the cause and possible cures of genetic diseases’ they wrote in their article (Wagner (E.F.) et al. 1981: 5016). Jaenisch (at that time located in Hamburg) also successfully created transgenic mice (Harbers et al. 1981). The cloned viral DNA (M-MuLV) that he injected was integrated and expressed at different levels in different mouse tissues. For Jaenisch and colleagues, the greatest challenge was to predict the expression of genes in different tissues (Harbers et al. 1981). Thomas Wagner (from Ohio) and co-workers (of the Jackson Laboratory) and (independently) Franklin Costantini and Elizabeth Lacy (working at Oxford) showed that the rabbit beta-globin gene could not only be expressed[18] (at low levels) but also transmitted to the offspring (Wagner (T.E.) et al. 1981; Costantini and Lacy 1981).
Although these researchers were very successful in the integration of foreign DNA into the mouse genome, little or no gene expression was observed (Palmiter 1998). The first experiment that delivered convincing evidence of gene expression was conducted by Richard Palmiter and Ralph Brinster (Brinster et al. 1981). In the fall of 1979, Brinster, who was originally trained as a veterinarian, contacted geneticist Palmiter and asked him for chicken ovalbumin messenger RNA. Palmiter, in turn, supplied him (by mail) with the requested RNA constructs for Brinster’s micro-injection experiments[19]. After a series of experiments with ovalbumin mRNA, Palmiter decided to create a gene construct of the thymidine kinase (TK) gene and the metallothionein (MT) promoter. Because of the specific properties of the MT promoter, the expression of the MT-TK fusion gene could be induced or increased by exposure to heavy metals. In the hope of potentially increasing the expression of the MT-TK gene, Brinster injected the mice with cadmium (Arechaga 1998). This turned out to be a good set-up. Some of the mice thus born showed ‘phenomenally’ high thymidine kinase activity in the liver. In 1981, these mice appeared on the cover of the journal Cell (Brinster et al. 1981; Arechaga 1998). In their second publication on their genetically engineered mice, Palmiter and Brinster introduced the term ‘transgenic’ for mice that carried foreign DNA in their genome (Palmiter 1998)[20].
Giant mice
In November 1981 Palmiter and Brinster heard about dwarf mice, mice with a genetic growth deficiency, also known as ‘little’. They decided to try to correct this growth defect by providing these mice with an exogenous growth hormone (GH) gene. Their plan was to create a fusion gene similar to the MT-TK gene construct used in their previous experiments. They would fuse the MT promoter with the gene that codes for rat growth hormone. They contacted Ron Evans, who had just given a lecture on the cloning and characterisation of the rat growth hormone gene. Together with Evans, they designed a suitable gene construct for their transgenic experiments (Palmiter 1998). In 1982 the metallothionein-rat growth hormone (MT-rGH) fusion gene was ready for injection. The birth of giant mice in that same year meant a real breakthrough. In December 1982, the results were published in Nature (Palmiter et al. 1982). The dramatic image on the cover of Nature of a dwarf and a giant mouse (see Figure 1) received considerable attention both from the scientific community and the media. The news about the giant mice was widely covered. ‘When these experiments were published, scientists, cartoonists, comedians and animal rights activists were aroused to the potential of transgenic technology. The ability to change the phenotype of the animal was so dramatic that everyone took notice, even though the experiments we published a year earlier clearly demonstrated the potential of the technique’, recalls Palmiter 16 years later (Arechaga 1998: 871). The image of the giant mouse soon took on a life of its own. The technology behind the size of the mice was not always well understood. In retrospect, Palmiter said he wished they ‘had used a GH from an animal smaller than the mouse, because many people mistakenly thought that the transgenic mice grew larger than normal because we used a GH gene from rat. Thus, some people missed the salient point that directing the expression of a gene to a more abundant cell type (such as hepatocytes) enhances the accumulation of protein in the blood and prevented normal feedback regulation’ (Palmiter 1998: 849).
The event was covered by Harold Schmeck from The New York Times, who wrote about ‘a new era in genetic engineering, from which important practical as well as scientific effects could be expected’ (Schmeck 1982: 1). Two and a half weeks after Harold Schmeck covered the scientific breakthrough in The New York Times, an anonymous reporter published a much more critical piece on the giant mice in the same newspaper. This reporter was surprised that so little attention was paid to the ethical implications of this new technology (Anonymous 1982). ‘Though it is just a matter of time before such interventions become technically feasible in humans, the issue has received remarkably little public discussion from the biologists who are fast developing the tools for reshaping the handiwork of evolution … This asks for a critical review of the conclusions of the report of the President’s Bioethics Committee about the subject’, he wrote. ‘There are no ethical or religious reasons to stop the research […]. The only restriction the committee proposes is against human-animal hybrids.’ According to this New York Times reporter, these restrictions were ‘both too late and too soon’, too late because the first steps had already been taken with the introduction of the human insulin gene into bacteria, and too soon because nobody was thinking of creating mermaids or centaurs (Anonymous 1982: 18). A year later, on 18 November 1983, his words were already outdated. That day, Palmiter and Brinster again reported about their giant mice (Palmiter et al. 1983). This time the giant mice carried the human growth hormone gene. Some of them grew twice as large as their normal littermates. ‘Scientists are setting out to grow breeds of giant mice that are genetically a little bit human’, wrote Harold Schmeck in a response to this news in The New York Times (Schmeck 1983: 1). The species barrier between mouse and man was crumbling.
Transgenic farm animals
In the early 1980s, the public and in particular scientists were impressed by the mouse experiments, but probably nobody would have guessed that 25 years later the mouse would still be the leading character in the world of animal biotechnology. Palmiter and Brinster clearly saw a great future for this new technology, both medical and non-medical, but they had other, bigger, animals in mind. They talked about the ability to mimic or correct genetic disorders with this technology (Palmiter et al. 1982). They were very much interested in the processes of gene regulation. But, from their statements in their research papers and to the press, one can conclude that they were primarily interested in applying this technology to farm animals. ‘Practically nobody’s interested in big mice, but there are obvious applications to agriculture’, Palmiter acknowledged to a reporter of United Press International (Khalsa 1983). They were particularly interested in the effect of increased growth hormone expression on animal size: ‘The implicit possibility is to use this technology to stimulate rapid growth of commercially valuable animals’ (Palmiter et al. 1982: 614). Another interesting application of gene technology to farm animals that they mentioned was ‘gene farming’[21]: ‘The exceptionally high levels of GH found in the sera of some of these mice, raises the possibility of extending this technology to the production of other important polypeptides in farm animals’ (Palmiter et al. 1982: 614). But, before applying the technology to larger animals, the technique needed to be improved. This was at that time the biggest challenge. Optimising the conditions for integration and expression of foreign genes in mice should facilitate the eventual application of these techniques to other animals.
As a result of this future perspective, the transgenic experiments that followed often involved animals other than mice. In the early 1980s, Palmiter announced in several interviews that they would proceed with gene transfer experiments in sheep, rabbits and goats ‘to document the principle’ (Anonymous 1983). In response to the initial successes, Thomas Wagner from Ohio likewise changed the focus of his research to farm animals (Schmeck 1983). He announced that he had extended his research to sheep. He expected to create animals that would grow faster with the same amount of food, a commercially attractive, efficient form of meat production. He saw no ethical objections to this kind of research since ‘people have been manipulating the evolution of farm animals for thousands of years’ (Schmeck 1983: 1). A year later, he rejected ethical objections in a similar vein. He saw no threat from genetic manipulation of farm animals because ‘animals cannot infect the environment and they cannot escape from human control, in contrast to the image people have by watching horror movies’ (Anonymous 1984). In 1985, the birth of the first transgenic pigs and rabbits was reported by Palmiter and Brinster in Nature (Hammer et al. 1985). In spite of the optimistic tone of both Hammer’s article and the News and Views commentary, the results failed to be as impressive as those achieved earlier in mice. The dramatic effect on growth seen in the mice could not be repeated in pigs and rabbits (Hammer et al. 1985; Lovell-Badge 1985).
In the years that followed, the experiments with farm animals continued to disappoint. The introduction of the human growth hormone gene into pigs had disastrous effects. The Beltsville pigs (named after the laboratory where the pigs were ‘created’) suffered from arthritis, impotence, and weak muscles. Sheep carrying an added human growth hormone gene, bred in Australia, developed diabetes, abnormal kidneys, and malformed bones, and survived less than a year (Kohn 1994). Because of the technical difficulties and the lack of public acceptance of transgenic meat, there was never a market for this form of animal biotechnology. In 1993, the creation of transgenic farm animals for meat production was simply taken off the agenda. ‘The science wasn’t ready yet to make it economically feasible’, explained James Sherblum, president of a biotech company, to a reporter from The New York Times (Andrews 1993). But though the experiments with the farm animals were disappointing, those with the transgenic mouse continued to be successful.
Gene targeting and controlled gene expression
The results of the early 1980s micro-injection experiments were impressive, but the approach had some major disadvantages. If a DNA fragment was injected into the pro-nucleus of a pre-implantation embryo, there was no control over the site where the foreign DNA was inserted. Furthermore, there was no control over the activity of the DNA and the number of copies that were integrated. These problems could be circumvented by the more complicated ES cell technology discussed earlier. The great advantage of the use of ES cells for the creation of transgenic mice was that a large number of techniques could be applied to manipulate the genome. For example, mutations could be introduced into the ES cells by mutagenesis, or foreign DNA could be introduced with the aid of retroviral vectors. But even more important was that it gave the researchers the opportunity to select or screen for a clone with a rare genetic change among millions of cells in culture before constructing a mouse chimera (Robertson et al. 1986; Gossler et al. 1986; Bradley et al. 1998). In 1987, the first successes were reported of experiments in which ES cells that had been manipulated in vitro were transmitted through the germ line of chimeras. The offspring of these chimeras were the first mice carrying a modification of a specific endogenous gene introduced through the modification of a cell line in vitro (Hooper et al. 1987; and see also Kuehn et al. 1987 in the same edition of Nature). Both the group from Cambridge, UK (Evans, Bradley, Robertson and Kuehn) and the group led by Hooper made an animal model for the Lesch-Nyhan syndrome with this technology. This rare disease, which affects only males, is the result of a heritable mutation in the HPRT gene. The male mutant mice made by the researchers had biochemical defects similar to those of Lesch-Nyhan patients (Hooper et al. 1987; Kuehn et al. 1987).
‘They are knock-outs’
In the mid-1980s, a number of important discoveries and inventions were made that would be of major importance for the further development of transgenic technology. In 1983, Kary Mullis invented the polymerase chain reaction (PCR), for which he received the Nobel Prize in 1993. The PCR technique made it possible to make many copies of a DNA sequence in a short time. A second development that was of great importance to mouse gene technology was the discovery by Oliver Smithies and Mario Capecchi (who worked independently of each other) of homologous recombination technology. Homologous recombination made it possible to turn specific genes off in a directed way (Smithies et al. 1985, 2001; Capecchi 1989, 2001). In 1985, Smithies and his team published an article in which they discussed how they could modify a specific human gene (in bone marrow cells) by means of homologous recombination (Smithies et al. 1985), a breakthrough because at that time the prevailing view was that the mammalian genome was much too complex for incoming vector DNA to search, find, and recombine with a homologous target before the efficient non-homologous recombination pathway effectively inserted the vector into a random location in the genome. ‘How wrong this view was!’, remarks Bradley 13 years later (Bradley et al. 1998: 946). In 1987, both Capecchi’s group and Smithies’s group applied this technology (also called gene targeting) successfully to embryonic stem cells (Thomas and Capecchi 1987; Doetschman et al. 1987). Like Evans and Hooper, Capecchi and Smithies chose the HPRT gene for their gene targeting experiments. With these ES cell experiments, they laid the foundation for the possibility of correcting gene defects or eliminating, ‘knocking out’, genes in mice (Koller et al. 1989; Thompson et al. 1989). In 1989, Capecchi wrote a review about the ‘new mouse genetics’, in which he claimed that through gene targeting the potential existed to generate mice of any desired genotype (Capecchi 1989). Soon afterwards, the first knock-out mice, mice whose β2-microglobulin (β2-m) gene was disrupted by targeted mutation, were born in Jaenisch’s lab at MIT (Zijlstra et al. 1990).
The new technologies used by these pioneers in mouse genetics did not only receive attention from the media. They also attracted researchers from other laboratories. ‘Now that scientists can create desired mutations in mouse genes almost at will, instead of working with mice that turn up occasionally with accidental mutations, they are excitedly planning systematic experiments to resolve longstanding questions in biology’, wrote Richard Saltus in 1990 in The Boston Globe (Saltus 1990: 29). But the method was at that time extremely complicated. In 1990 there were only a few laboratories that had mastered the techniques for making knock-out mice. These laboratories received a considerable number of requests from scientists who wanted to come and visit for a week and learn the technique (Saltus 1990). Within a few years the ‘knock-out mouse’ became a familiar phenomenon, both within the scientific community and among the public.
Gene Control Switches
With the introduction of knock-out technology, the foundation was laid for an explosive increase in transgenic mouse models. In November 1992, The Washington Post reported that researchers estimated that about 100 genes had so far been examined with knock-out mice (Rensberger 1992b). In that same article, Capecchi explained to Rensberger that ‘the next challenge was likely to be in making mice whose genes are not knocked out from the start but with engineered “switches” – regulatory sequences spliced onto the end of a replacement gene – that can be thrown to knock them out at later stages of life, or even toggle them on and off’ (Rensberger 1992b: A3). The most important technological breakthroughs that followed were indeed refinements of the existing method along these lines. In 1992, two groups published on the application of the Cre-Lox system in transgenic mice (Lakso et al. 1992; Orban et al. 1992). Using the Cre-Lox system, which originally stems from bacteriophage P1, it is possible to modulate genes in vivo in a controlled and site-specific manner. With the Cre-Lox system, genes can be knocked out in a specific cell type. The Cre-Lox system was already known, but it only attracted particular attention when it was patented by DuPont in 1992. This patent was the beginning of a dispute that lasted for many years between DuPont and researchers, led by NIH Director Harold Varmus, concerning the right to use the Cre-Lox system for research in non-profit institutions (Marshall 1998). When, in 1995, Rajewsky’s team placed an interferon-dependent promoter in front of the Cre-recombinase gene, the ‘genetic on and off switch’ was literally ‘found’ (Kuhn et al. 1995). By using this promoter, the activity of a specific gene could be induced by administering interferon[22]. ‘This is real genetic engineering,’ said Ronald Evans from the Salk Institute when talking to a reporter from Science about these kinds of knock-out technologies. ‘As soon as you get to a certain state of technology, you can think of nice tricks and questions you wouldn’t normally think about, and that is fun’ (Barinaga 1994: 28).
‘They glow in the dark’
Another important breakthrough in mouse gene technology has been the development of bioluminescent and fluorescent genetic markers such as green fluorescent protein (GFP) and luciferase. GFP, originally found in jellyfish, was discovered in 1962 by Osamu Shimomura and rediscovered in the 1990s when scientists decided to use it as a marker for gene expression. In 1997, a Japanese research team led by Masaru Okabe used an enhanced version of this fluorescent protein to produce transgenic mouse lines (Okabe et al. 1997). The spectacular photographs made by Okabe and his team showed green fluorescent mice. These mice were used as a source of green cells in the context of cell transplantation experiments. These and other experiments indicated that GFP could be a powerful in vivo tool for non-invasive real-time visualisation of gene expression in living animals (Yang et al. 2000). Another marker is luciferase, the protein responsible for bioluminescence in the firefly. The gene coding for this protein was also first transferred into mice in the late 1990s (Contag et al. 1998). The difference between luciferase and GFP lies basically in the type of signal (bioluminescence versus fluorescence) and in the need for a substrate: only luciferase requires a substrate. Before GFP and other such markers became available, the measurement of gene expression in response to physiological signals was extremely difficult. Every data point required killing and dissecting experimental animals and measuring the distribution of a reporter gene (Yang et al. 2000). The different types of visible-light imaging have been developed, patented and marketed by biotech companies. For example, Xenogen, one of the industry leaders in the use of bioluminescent markers, develops light-producing animal models. These LPTA® animal models are transgenic mice with a luciferase reporter driven by a specific promoter[23]. AntiCancer Incorporated, based in San Diego, has pioneered the use of fluorescent markers such as green fluorescent protein. AntiCancer offers products such as oncobrite® and genebrite®, gene constructs that can produce fluorescent tumour cell lines (see Figure 2)[24]. Using imaging systems like these, researchers can observe tumour cells emitting light, keep track of their growth, and calculate their growth rate. Subsequently, they can administer drugs and determine whether the light goes away. Since the technique is non-destructive, the same animal can be used for an extended period of time. If the tumour cells develop resistance to the drug, this is indicated by the light coming back. The technique essentially records a glow from the inside of the animal (Stokstad 1999). The next step is to use different markers with different colours at the same time, so that gene interactions, protein-protein interactions, or even nerve cell activity can be observed in vivo (Ray et al. 2002). A spectacular example of this is the ‘Brainbow’ mouse, developed and patented by Jeff W. Lichtman, Jean Livet and Joshua Sanes at Harvard University. The Brainbow mouse is genetically engineered so that its neurons produce fluorescent proteins in random combinations; as a result, the colours mix and each nerve cell in its brain glows with a different colour (Cook 2006).
The year of the mouse
If 2000 was the year of the human genome, 2002 was the year of the mouse. In August 2002, a physical map of the mouse genome was published, followed four months later by the initial sequence and comparative analysis of the mouse genome, both in Nature (Gregory et al. 2002; Waterston et al. 2002). Using the C57BL inbred mouse strain, an international consortium of researchers had deciphered nearly the entire DNA sequence of the mouse (Travis 2003). According to the 86 authors who did the job: ‘The sequence of the mouse genome is a key informational tool for understanding the contents of the human genome and a key experimental tool for biomedical research’ (Waterston et al. 2002: 520). It is clear that this breakthrough has to be valued in relation to the sequencing of the human genome two years before, as a prelude to studying the genomics of human disease. Nicholas Wade of The New York Times offered an interesting perspective on the relatedness of man and mouse: ‘Now that the mouse’s genome has been decoded, revealing just as many genes as its host, the 25 million mice that work in the laboratories throughout the world may be demanding a lot more respect. It is the close cousinship that makes this vast labour force of furry little human surrogates so useful for exploring the human genome’ [italics mine]. The mouse genome-sequencing consortium wrote in a similar vein: ‘The sequence of the mouse genome will have a huge impact on biological research and human health. It will provide critical information and reagents for use in mouse experimental models. It will become possible to unravel the mechanisms of complex mammalian biological processes and human disease’ (Gregory et al. 2002: 743). The mouse genome offers additional information and tools when compared with the human genome. The most important difference between the human genome and the mouse genome is that ‘the mouse genome encodes an experimentally tractable organism’ (Bradley 2002: 512). By this, Bradley means that because the mouse, unlike man, is a laboratory animal, it is now ‘truly possible to determine the function of each and every component gene by experimental manipulation and evaluation, in the context of a whole organism’ (Bradley 2002: 512). The mouse has become the Rosetta stone for understanding human biology (Travis 2003). One of the outcomes of a comparative genomic analysis was the enumeration of the total number of genes shared by man and mouse. The consortium estimated that the mouse has 27,000-30,000 protein-coding genes, of which 99 percent have a sequence match in the human genome (Boguski 2002). It is this ‘conservation of synteny’[25] between mouse and man that constitutes the value of the mouse. It is also the source of imaginings of the future. Mark Boguski predicts that ‘the comprehensiveness and precision afforded by the genome sequences will allow effective cross-reference of locations of any genetically mapped traits in the mouse with genes in the orthologous regions of the human genome (and vice versa). This will greatly accelerate the isolation of disease genes. It will also be important for precise deletion (knock-out) of mouse genes to study their functions and for targeting human sequences to their syntenic locations in the mouse genome, allowing the mice to be “humanized” for various traits’ (Boguski 2002: 515).
‘A fuzzy furry test tube’
During the transgenic revolution, the traditional research methods in the study of genetics changed radically. The traditional approach, in which the study of genes was based on whole organisms and involved the crossing of endless numbers of animals and extensive mathematical calculation, was replaced by a molecular approach. Researchers were no longer limited to studying patterns of inheritance based on phenotypes. Molecular technology allowed detailed studies of gene regulation in both in vitro and in vivo models. A specific change in the genotype made in vitro could now be observed in vivo. The gaze on the mouse turned from the outside of the animal to its inside, and even entered the nucleus of its cells, in search of the animal’s genetic core, the heart of the matter. Researchers no longer had to ‘look through’ the animal towards its invisible genes: now, they could ‘really’ gaze at its genes, resulting in a truly molecular gaze on the mouse.
At the beginning of the previous century, the mouse was transformed from a research object into a tool. During the transgenic revolution, the mouse developed further into a high-biotech tool, a sophisticated, fuzzy test tube, a living laboratory. The animal, as a furry envelope of genes, became a litmus paper that allows us to see whether genes are successfully expressed. Mice became tailor-made animals; canvases upon which researchers perform genetic transplantations. Or, as one reporter put it, ‘molecular biologists now struggle to genetically manipulate their mammals into research masterpieces’ (Schrage 1993). But, as control over its genes increased, the ‘animal’ behind the genes gradually disappeared.
As the disappointing experiments with farm animals have shown, from a biotechnological perspective the mouse is a very special animal. No other mammal can be genetically manipulated as easily as the mouse. In the 25 years since its genome was first altered for scientific purposes, the mouse has thus proved to be the perfect candidate for the development of animal biotechnology. One of the reasons for this is the inbreeding of mice, which resulted in a unique population of animals with unique genetic and embryonic characteristics, some of them highly convenient for animal biotechnologists (Beck et al. 2000). Earlier, the unique characteristics of the 129/Ev strain were discussed. Even today, the ES cells of these inbred mice are used for the production of transgenic mice. The B6 and the FVB also deserve particular attention. The FVB is well known for its large embryos; the large pronuclei of these embryos make them very suitable for micro-injection. The B6 has the unique property that its blastocysts are very easy to manipulate. These mice are therefore very important for the production of transgenic mice via the ES cell-mouse chimera route. The blastocysts of most other mice do not develop into healthy mice if cells of other mice are placed in them. Moreover, the female B6 mice have proved to be very good foster mothers. As Rader remarks: ‘The suitability of these animals for research was not determined, but engineered. These rodents’ physical bodies, as well as their representations, were not static. They were adapted and constructed for a scientific culture that valued genetically controlled answers to biological and medical questions’ (Rader 2001). Selection of the fittest in the lab means selection of those mice best adapted to living in a lab and most suitable for transgenic technology. It means selecting the most bizarre, easiest to manipulate, most extraordinary mice. In retrospect, it is probably legitimate to say that the mouse could never have been so successful in transgenic technology had it not been for the extended process of inbreeding it had already undergone over the years.
Part Three: Transgenic mouse models
Animal models and human diseases
In the history of medicine, animals have always played an important role, but animals suffering from genetic diseases that parallel human conditions have been of special value. Some well-known examples of animal models are: dogs with haemophilia caused by a defect in factor IX; rabbits with hypercholesterolemia as a consequence of a defect in the low-density lipoprotein receptor; and pigs with arteriosclerosis as a result of genetic variations in apolipoproteins (Smithies 1993). A number of inbred mouse strains are also well known as animal models for human diseases: for example, the oncomice of the C3H strain described earlier. But other mice that have emerged in the hundred years that the mouse has been at home in the laboratory have also played important roles in the development of medicine. For example, the obese mice that were discovered in 1962 have been used to study the role of lipoproteins in obesity. The nude mouse and the SCID mouse have been of great value to both cancer research and studies in immunology. The nude mouse has no thymus and therefore cannot develop T cells. The SCID mouse has neither T nor B cells. Because the nude mouse and the SCID mouse do not develop an adequate immune response to human cells, they are well suited for studying human tumours in vivo. This explains their unique value to research. However, useful animal models do not occur spontaneously that often. And when they do, their specific gene defects may be as difficult to identify and characterise as their human counterparts. Another problem with naturally occurring animal models is that the affected animals often differ from unaffected control animals in other genetic factors besides the gene in question. These problems do not arise in the highly controlled transgenic mouse models. In addition, mice are easier and less expensive to raise than many other species (Smithies 1993).
The promise of transgenic mouse models
Soon after the birth of the first knock-out mice, the value of transgenic mice became clear, both within and outside the scientific community. With knock-out technology, a technique became available that made it possible to selectively eliminate genes in order to mimic human diseases. From that moment onwards, the career of the transgenic mouse was predominantly determined by the demand for reliable mouse models of human diseases. The knock-out technology spread rapidly through laboratories all over the world. In 1993, the Chicago Sun-Times was already talking about a routine: ‘Scientists now almost routinely knock out animal genes in an embryo and plunk in human ones, including mutations that mimic human traits or maladies. In effect, the scientists are creating miniature patients to examine some of the world’s deadliest and most baffling diseases. The creatures provide living laboratories in which scientists can study diseases that ethically cannot be inflicted on human subjects’ [italics mine] (Cone 1993: 28). There has certainly been an explosion in transgenic mouse models for disease, as remarked in 1993 by Caltech’s Daniel Kevles, co-author of the book The Code of Codes (Schrage 1993). It is clear that in the early 1990s the great potential of the mouse as a model was recognised. The expectations were high. ‘From the California Institute of Technology to the Pasteur Institute of Health, these four-legged ”biomedia” will ultimately determine which human diseases get cured and when. The better engineered the mammal, the better – and possibly, more cost effective – the medical options for humans’, wrote Michael Schrage in The Washington Post, after talking to researchers from GenPharm and Caltech (Schrage 1993: F3). To Kenneth Paigen, at that time the Director of the Jackson Lab, it meant a scientific revolution: ‘We suddenly have the ability to create tailor-made mammalian models of human disease which offers the opportunity to study complex physiological phenomena, such as the nervous system, cancer and AIDS, as never before’ (Connor 1993: 4)[26].
One of the first mouse models to be created was the model for sickle cell disease. Sickle cell anaemia was one of the first diseases demonstrated to be a molecular disease (Bedell et al. 1997). The cause of the disease was found to be an alteration of the β-globin gene. Since the gene was already known, sickle cell anaemia was an obvious candidate for a mouse model. In 1990, two groups reported on a sickle cell mouse model, one in Science, the other in Nature. However, these animals mimicked sickle cell trait rather than sickle cell disease[27] (Ryan et al. 1997). Subsequently, several other groups worked on the sickle cell mouse model. Several times it was claimed that the model had been created, but none of these models reproduced the severe haemolytic anaemia observed in human sickle cell disease. It was only in 1997 that, for the first time, a mouse was created that developed a severe haemolytic anaemia and extensive organ pathology similar to that observed in human patients (Ryan et al. 1997). Nevertheless, in spite of the mouse model, to this day no cure for sickle cell disease has been found.
Another high-potential mouse model was the model for cystic fibrosis. Cystic fibrosis is the most common lethal genetic disorder in Caucasian populations. It is a recessive disease that is carried by 1 out of 22 individuals of European descent, and in the Dutch population one out of 3,600 newborn babies is affected by the disease. It is caused by defective chloride transport and excess mucus production by epithelial cells. The disease does not occur in animals, and therefore a naturally occurring animal model is not available. In 1989, the gene coding for the protein responsible for this disorder (the cystic fibrosis transmembrane conductance regulator – Cftr) was isolated. Since that time, several mouse models for CF have been constructed through gene targeting in ES cells (Bedell et al. 1997). The first mouse models were created within three years of the discovery of the CFTR gene. In August 1992, the group of Oliver Smithies and Beverly Koller was the first to report on an animal model for cystic fibrosis. In the scientific journal Science they described how they had created Cftr-/- mice by gene targeting. The animals displayed many features common to young human cystic fibrosis patients, but they usually died before 40 days of age as a result of severe intestinal obstruction (Snouwaert et al. 1992). Because of the early death of the animals, the mouse model was not very useful for studying the disease. A month later, a Scottish group led by David Porteous reported on their animal model in Nature (Dorin et al. 1992). They used an approach that allowed alternative splicing and thus a low level of residual Cftr expression. As a result, their mice suffered from a less severe form of cystic fibrosis and mimicked the pulmonary disease found in CF patients more closely (Bedell et al. 1997). In the years that followed, several approaches were used to successfully correct the intestinal and pulmonary defects in mice carrying the severe and leaky Cftr mutations described above. However, the majority of CF patients carry much more subtle CFTR mutations, and strategies that interfere with such mutant proteins may have to be different from those required to correct defects resulting from the absence of normal protein (Bedell et al. 1997). The search for a reliable CF mouse continued. As a result, a number of different mouse models of CF exist today. In 2001, researchers Davidson and Dorin discussed 12 mouse models in their extensive review of CF. In their conclusion, they stated: ‘Despite some tantalizing similarities between CF lung disease in humans and mouse models of CF, under the experimental conditions described, the suitability of these models remains controversial and significant differences are evident’ (Davidson and Dorin 2001: 15). This did not imply that the mouse models were worthless. As Davidson and Dorin wrote: ‘By recognizing the key similarities and differences, mouse models of CF might provide useful in vivo systems for the analysis of specific aspects of CF lung disease and for testing the validity of specific hypotheses’ (Davidson and Dorin 2001: 15-16). Studies with the different Cftr knock-outs have shown that the disease results from a failure to clear certain bacteria from the lungs, which leads to mucus retention and subsequent lung disease. But so far the mouse models have not led to a breakthrough in the treatment of cystic fibrosis in human patients.
That the creation of a reliable mouse model is not as easy as initially expected was also illustrated by the Alzheimer mouse model. The first animal model for Alzheimer’s disease (AD) was presented in 1991, but it later turned out that this mouse did not develop Alzheimer’s at all (Cone 1993). The mouse model presented in 1993 also did not develop AD. Since then, it seems as if a new model for AD is presented every year. In 2001, researchers concluded that because of phylogenetic differences, as well as fundamental differences in behavioural ecology, exact replication of AD in mice may not be attainable (Janus and Westaway 2001). But ‘rigorous comparative analysis of cognitive behavior observed in various mouse models of AD should provide a framework for better understanding of molecular mechanisms underlying cognitive impairment observed in AD patients’ (Janus and Westaway 2001: 882). Today the number of transgenic research models for AD at JAX is 41[28]. A cure for AD has not yet been found.
What the stories of the sickle cell, cystic fibrosis, and Alzheimer’s mouse models demonstrate is that making a reliable mouse model is not as easy as was initially expected. In spite of the high expectations, scientists already had to admit in 1993 that the use of transgenic animals had not led to substantial medical breakthroughs (Cone 1993). ‘In an embarrassing public failure, scientists who initially reported that they had created mice with Alzheimer’s disease had to retract their findings, and other researchers remain slightly off the mark in mimicking various diseases’, wrote Marla Cone in a critical article in the Chicago Sun-Times (Cone 1993: 28). The first scientific review article about animal models, by Oliver Smithies, which appeared in 1993 in Recent Developments in Genetics, was also very modest about the scientific achievements so far. As Smithies wrote: ‘One of the biggest uncertainties when modelling human genetic diseases in mice is whether the resulting phenotype will be equivalent to that observed in humans’ (Smithies 1993: 113). Mice are not humans. Still, scientists maintain that these gene-altered rodents are the best hope they have (Cone 1993). On the basis of that hope, the number of mouse models has increased exponentially.
The Knock-Out Mouse Project
A project that is likely to further boost the growth in the number of mouse models is the Knock-Out Mouse Project (KOMP) (Austin et al. 2004; Collins et al. 2007). After the sequencing of the human and the mouse genome, the focus of attention of the genetics research community turned to elucidating gene function and identifying gene products that might have therapeutic value. An effective approach to studying gene function in vivo is knock-out technology. But, despite public and private sector initiatives to produce mouse mutants on a large scale, the total number of knock-out mice described in the literature by 2004 was still modest, corresponding to only about 10% – some 2,500 – of the roughly 25,000 mouse genes (Austin et al. 2004). In October 2003, a large gathering of members of the genomics community met at the Banbury Conference Centre to discuss the advisability and feasibility of a ‘dedicated project to knock out alleles for all mouse genes and place them into the public domain’ (Austin et al. 2004: 921). At that meeting, the mouse geneticists set ambitious targets – 500 new mouse lines per year – in order to create a publicly available resource of knock-out mice and phenotypic data ‘that will knock down barriers for biologists to use mouse genetics in their research’ (Austin et al. 2004). Recently, Francis Collins reported that the first steps had been taken (Collins et al. 2007). The first step was the acquisition of 251 knock-out strains (mutant mice and frozen embryos) from two private collections of knock-out mice created and ‘owned’ by Deltagen Incorporated and Lexicon Genetics Incorporated. The second step was to support the Mutant Mouse Regional Resource Centres in repatriating and archiving their 320 mouse strains for broad distribution. The centrepiece of the KOMP effort, however, consists of two programmes that aim to create 8,500 targeted mutations in ES cells in genes that have not yet been knocked out. To achieve this goal, two groups have developed high-throughput pipelines to target genes in mouse ES cell lines (Collins et al. 2007).
From model mouse to mouse model
If one listens carefully to researchers talking about their mice, one easily gets the impression that the mouse has really become the equivalent of the disease or gene defect it stands for. The mouse is always a mouse model, so it seems. In the early 1980s, the mouse was a model animal, an animal to practice on in order to try out new techniques or ideas; an animal that could, in principle, be replaced by other species. Today the mouse is no longer merely a model animal; as a mouse model, it has become more or less the disease itself. As a mouse model, the mouse serves two distinct goals in biomedical research. First of all, it serves as a stand-in for us humans in clinical tests. For example, new therapies to treat human cancer can be tested in mice specially designed to develop spontaneous human tumours. Moreover, mouse models are also used to study the development of genetic diseases. In mouse models, researchers seek to understand the complex mechanisms that, for example, lead to cystic fibrosis or Alzheimer’s. The first type of mouse model has proven very useful: many anti-cancer drugs are tested in such mouse models before going into the clinic. The value of the second type of mouse model is more difficult to assess. So far, these mouse models have not led to a cure for Alzheimer’s disease, sickle cell anaemia or cystic fibrosis. There is considerable uncertainty in predicting the phenotypes that will be displayed by the mutant mice. As Bradley wrote in 2002: ‘A knock-out mouse phenotype often shamelessly displays our collective ignorance about gene function’ (Bradley 2002: 514). Nevertheless, transgenic mouse models are presented by researchers as holding the promise of a cure for life-threatening diseases. In their battle against human genetic diseases, the genetically altered mouse is the best hope they have.
Although its gene pool has been enriched with (defective) human genes, the mouse itself has remained a mouse. The disadvantage of the mouse model is that the human genes have to interact with mouse genes in the complex in vivo system that is the mouse. In a mouse model, human genes will never behave exactly as they would in a human. No matter how many human genes are added to the mouse, the mouse is still a mouse and not a human. In fact, scientists who use mouse models do not study human diseases; they study the behaviour of (defective) human genes in transgenic mouse models. A mouse model can never be the biological equivalent of a human being. The question is whether problems related to the mousehood of the mouse models can be overcome by making the mouse more human. How many human genes do we have to add to the mouse genome in order to make it humanlike enough?
Apparently this is a serious question for researchers. As an eye-catcher for a job advertisement of the Amsterdam Medical Centre (AMC) (NRC, November 2006), a photograph was presented of a young dark-skinned boy playing with an albino mouse (see Figure 3). The accompanying text boldly stated: ‘If we have to change a mouse into a human in order to cure AIDS, we will do so’. The story behind this advertisement was a donation of 900,000 dollars from the Bill and Melinda Gates Foundation to a research group at the AMC. The mission of the Bill and Melinda Gates Foundation Global Health Program is to encourage the development of life-saving medical advances and to help ensure that they reach the people who are disproportionately affected. Funding research devoted to finding a cure for AIDS is in line with this mission. For several biological reasons, the animal models now available are not suitable for HIV research. Although the available mice have a transplanted human immune system, part of their own immune system is still intact. The money donated by the Bill and Melinda Gates Foundation will be used to develop a transgenic mouse model that is more suitable for HIV research.
I find the advertisement highly provocative for several reasons, but what strikes me most is this: the advertisement suggests that scientists in Amsterdam are claiming that they can change the mouse into a human being. The advertisement not only illustrates the need for more humanlike mouse models; it also illustrates the strong motivation of researchers to continue on the path of changing the mouse into a human.
New frontiers: ES cells and human-mouse hybrids
The history of the laboratory mouse is far from complete. For the pioneer species of the new era of biotechnology, now entering the 21st century, the story has only just begun. After the Knock-Out Mouse Project, the next technological frontier is already awaiting us: the growing possibilities of ES cells. Scientists are rapidly discovering the potential of ES cells. Pioneers in embryonic stem cell research Andras Nagy and Janet Rossant reported in 1993 on the production of completely ES cell-derived mice (Nagy et al. 1993). In 1999, Science reporter Gretchen Vogel discussed the results of experiments performed in Hawaii by reproductive biologist Teruhiko Wakayama, who cloned mice from ES cell lines that had gone through more than 30 cell divisions (Vogel 1999). What will be the next step in ES cell technology? In his book Challenging Nature, Lee Silver discusses experiments with ES cells that can grow into egg and sperm cells. ‘The implication of such experiments is in theory that (human) ES cells could almost certainly engage in sexual reproduction with others (in a petri dish) to produce (human) embryos with unique genomes. A child born from the development of such an embryo would not have parents who had ever been born themselves!’ (Silver 2006a: 143). Another thought-provoking possibility of ES cell technology is the creation of human-mouse chimeras. Mice are not only used as models to study the expression of human genes; the behaviour of whole human cells is also studied within the mouse. The genealogy of the laboratory mouse started with the transplantation of human tumour cells into mice. I want to complete this retrospective with another form of human-mouse hybrid, the fusion of our brains and reproductive systems. The whole idea of making human-mouse hybrids stems from the recent interest in stem cell therapy. Stem cells, ‘a kind of universal clay’, hold great promise as an all-purpose material for repairing the damage caused by many degenerative diseases of ‘old age’, such as Parkinson’s, cancer, and heart disease (Wade 2002a). Stem cells, like other biomedical materials, have to be studied in laboratory animals before they can be applied to human patients in the context of therapy.
The potential for good of ES cells seems unlimited, but when brain or reproductive cells are used to create human-mouse hybrids one might feel less optimistic. What if a human being is born from an ES cell that originates from a mouse, or what if a man-mouse hybrid with human brain cells starts thinking? These questions are not ‘far out’ questions based on science fantasies. They are legitimate questions in response to scientific experiments. In 2002, researchers transplanted neural stem cells derived from human foetal brains into neonatal NOD-SCID mice, in order to see whether human nerve cells could develop into functional cells in a ‘mouse transplantation model’ (Tamaki et al. 2002). The resulting animals had man-mouse hybrid brains. The human neural cells became distributed and showed neural differentiation in the neonatal NOD-SCID recipients. This result supports the potential usability of neural ES cells in human brain transplantation therapy – a therapy that, of course, needs to be tested in other man-mouse hybrids before being applied to humans. Will these mice develop something similar to human cognitive functions? In December 2006, Fred Gage from the Salk Institute injected human embryonic cells into the brains of developing mouse foetuses still inside their mother’s uterus. The human cells became active human neurons that successfully integrated into the mouse forebrain, the place where higher brain function is localised (Silver 2006b). Another useful application of human-mouse hybrids is a mouse that can produce human oocytes (unfertilised eggs). Tissues made from ES cells are likely to be rejected by the patient’s immune system. One way to avoid this problem is to create ES cells from a patient’s own tissue, by transferring the nucleus of one of the patient’s skin cells into a human oocyte whose own nucleus has been removed. However, these nuclear transfers are highly inefficient and require some 200 oocytes for each successful cloning (Wade 2005). Where do we get so many human oocytes? Chimeric mice that make human oocytes could be the answer (Wade 2005). Imagine a mouse making human egg cells mating with a mouse making human sperm cells. Would their baby be a human being?
Concluding remarks
What can we conclude from the history of the transgenic mouse? The most important conclusion is that this mouse is as much a man-made artefact as a biological species. The transgenic mouse is a living artefact. But, when saying this, it is important to note that the transgenic mouse was not created ex nihilo. The history of the ‘man-made’ mouse did not begin with the birth of the first transgenic mice in the early 1980s. It was in the course of a long process of development that the laboratory mouse became increasingly artificial, starting with the fancy mice that were brought into the laboratory at the beginning of the 20th century, and eventually giving rise to today’s mouse models. A crucial moment in the evolution of the laboratory mouse was, of course, the start of the intensive inbreeding programme. But the selection for distinct genetic properties, such as the high incidence of spontaneous tumours and the culture of teratocarcinoma and ES cells by the early embryologists, has also been of key importance to the mouse’s fate. Together all these human interventions in the mouse as a biological species have paved the way for the introduction of foreign DNA.
Another conclusion to be drawn from the mouse story is that the mouse is a unique animal. Its susceptibility to genetic modification seems without precedent among mammalian species. As a result of the unique genetic and embryological characteristics of the inbred mice, the mouse became the pioneer species in mammalian biotechnology. Biomedical science has had an enormous impact on the mouse genome. The mouse, in turn, has been the animal that has altered our scientific and medical landscape (Clarke 2002). Transgenic mice can be found all over the world, and they have become part of the standard equipment of the modern biomedical laboratory. The genetically altered mouse models are the best tools that scientists working in these areas have. In the process of becoming a mouse model, the mouse has lost much of its identity as a mouse. To some researchers, the mouse is basically a fuzzy, furry in vivo test tube; to others it is a surrogate for exploring human biology. Maybe it is both. After the transgenic revolution, the mouse evolved more and more into a living test tube. As a result of the introduction of various human genes, the laboratory mouse has also evolved into a more humanlike species. This evolution has by no means reached its end. As long as scientists proclaim that, if they have to turn the mouse into a human in order to banish a life-threatening disease, they will do so, mice will continue to become increasingly human.
The mouse story is not only a unique story about the scientific career of a particular animal species; it is also a story that mirrors the development of the life sciences. As the pioneer species of biotechnology, the mouse’s history reflects how, at what pace, and in what directions the life sciences are evolving. Mice born out of the ‘mating’ of two ES cell lines instead of real living mice – an experiment that, according to Lee Silver, is theoretically possible – would have a devastating impact on our perception of the mysteries of life. Moreover, biotechnologically or genetically engineered mice are living proof of the fact that scientists are gaining control over life. The successful experiments with transgenic mice have illustrated how easy it is to modify mammalian DNA. In addition, transgenic mice have shown that DNA is universal: all living species share the same DNA, and DNA can be transferred from any organism into another. Whether the genes derive from a human being, a jellyfish or a firefly, the mice express them as if they were their own. This not only calls into question our perception of species barriers, in particular the one between mouse and man, but also our understanding of what it means to be ‘human’. If the mouse genome is malleable, and if DNA is universal, then the human genome is also malleable.
The history of the transgenic mouse also renders a number of bio-ethical concepts problematic, such as the concept of animal integrity in the context of genetic modification. The artificialisation of the laboratory mouse is an ongoing process, and the introduction of techniques to add or delete genes is simply one step in a broader development. At what point in this history does the mouse’s integrity become affected? That question is rather difficult to answer. What is the use of a concept such as ‘integrity’ when looking at the Brainbow mouse’s neurons lighting up in the dark? Other bio-ethical concepts that are becoming more and more problematic are related to our understanding of nature and what is natural. How should we understand the notion of a ‘natural species barrier’, now that it appears that such barriers can so easily be transgressed? What is the status of ‘unique life forms’ created out of genetically modified ES cell lines? But perhaps one of the most burning ethical questions about mouse gene technology is whether these technologies can, will, and should be applied to man. The history of the laboratory mouse may therefore actually be a prelude to, or anticipation of, posthumanism. We may use the history of the mouse to reflect, in an anticipatory manner, on our own approaching future. As the laboratory mouse is already a stand-in for future patients, it may also become a stand-in for future human (or post-human) individuals.
[1] Kenneth Paigen (1995) A miracle enough: the power of mice, Nature Medicine, Vol. 1, No.3, p. 215.
[2] James Shreeve (2004) The Genome War: How Craig Venter Tried to Capture the Code of Life and Save the World. New York: Knopf, p. 265.
[3] John Berger (1991/1977) ‘Why look at animals’, in About Looking, New York: Vintage Books, p. 4.
[4] This introduction is based on exploratory ‘fieldwork’ carried out at the NKI in the Spring of 2003.
[5] The three steps are defined on the basis of content, not chronology: the different steps run parallel in time.
[6] Rader (1997, 1999); Mobraaten and Sharp (1999); Russell (1978); Paigen (2003a); Strong (1978); Hogan et al. (1986).
[7] Three inbred strains in particular play a role in the transgenic revolution: the FVB, because of the large nucleus of its embryos; the B6, because of its easy to manipulate blastocysts; and the 129, the only mouse strain that has ES cells that can be cultured in vitro.
[8] In 1900, three botanists – Hugo de Vries, Carl Correns, and Erich von Tschermak – independently rediscovered Mendel’s work.
[9] Mouse breeder Lathrop also reported in 1908 the occurrence of spontaneous tumours in some of her mouse strains. Lathrop and the researcher Loeb started a study on the inheritance of cancer in mice. But unlike Little, they did not link their cancer data to the Mendelian laws of inheritance (Rader 1999, p. 330).
[10] The origin of inbred mice by Morse III (1978) was accessed via the digital version on the Internet. Therefore, page numbers for quotations from this book are not available.
[11] Congenic strains were created by repeated back-crossing of the F1 to one of the parents and selecting those individuals from the F2 that carried the H2 type coming from the other parent (Paigen 2003a).
[12] Mendel himself started his research on the genetic inheritance of coat-colour traits by breeding mice he kept in his two-room living quarters. But, according to the Austrian Bishop Anton Schaffgots, it was inappropriate for a monk to share his living quarters with creatures that had sex and copulated. Mendel was forced to move his research to the garden, where he continued his scientific work with peas (Paigen 2003a).
[13] A reference to ‘the right tool for the job’, the title of a session during the meeting of the International Society for the History, Philosophy and Social Studies of Biology in 1989, and of a book edited by Adele Clarke and Joan Fujimura (Lederman and Burian 1993).
[14] An ascites tumour is a tumour that is kept alive in the abdomen of the mouse. As a result, the mouse develops painful ascites.
[15] Beatrice Mintz is, however, more sceptical about his results (Mintz and Illmensee 1975).
[16] In response to this achievement, Ralph Brinster sent a letter to Bradley with the simple message: ‘Congratulations’. Bradley felt honoured that an individual of Brinster’s stature had taken the trouble to send a letter to a graduate student: ‘Clearly Dr. Brinster recognized the breakthrough’ (Bradley et al. 1998).
[17] The ‘true’ birth date of the first transgenic mouse can be debated. Already in 1976, Rudolf Jaenisch, at the Salk Institute in San Diego, had injected pre-implantation embryos with the M-MuLV virus. He observed that the viral DNA could also be transmitted to the germ line of the mouse (Jaenisch 1976). But at that time Jaenisch was not thinking about the creation of transgenic mice by means of viral transfection. He was interested in the infection of mammalian cells by tumour viruses such as the M-MuLV virus. He studied the activity and integration site of the virus in the different organs and the Mendelian inheritance of the viral DNA by the offspring of infected individuals. Whether he – unintentionally – created the first transgenic mice in 1976 is a matter of dispute. It was the first time a researcher had introduced exogenous DNA into a mouse embryo and observed the integration of foreign (viral) DNA. But the virus was not used intentionally as a vector to integrate a specific DNA fragment into the mouse genome.
[18] It is interesting to note that, according to Palmiter, in both cases there was no expression of the β-globin gene (Palmiter 1998).
[19] These early experiments conducted by Brinster with Palmiter’s DNA constructs were the beginning of a very productive collaboration. The combination of genetics and embryology proved to be a fruitful one. Together they published over 120 articles in a 10-year period (Arechaga 1998).
[20] In the following, I will use both ‘transgenic’ and ‘genetically modified’ or ‘genetically engineered’ to refer to these mice, with a preference for the last. In the strict sense, ‘transgenic’ indicates the introduction of foreign genes into the genome. In this book I also discuss knock-out technology. Knock-out mice are not transgenic in this strict sense, but they are genetically engineered or modified.
[21] Later, the term for this practice was changed to ‘pharming’.
[22] For a more extensive review on the conditional control of gene expression in the mouse, see Lewandoski (2001).
[23] <http://www.xenogen.com/wt/page/pdf_library#light_animals>.
[24] <http://www.metamouse.com/oncobrite.html>
[25] In comparative genomics, synteny (a neologism meaning ‘on the same ribbon’; Greek: σύν, syn = along with + ταινία, tainiā = band) describes the preserved order of genes between related species. During evolution, chromosomal rearrangement occurs and hence even closely-related species have different patterns of synteny (Wikipedia).
[26] Connor is quoting Kenneth Paigen from Nature.
[27] Although expressing the sickle cell mutant β-globin, they did not become ill because of the interference of endogenous mouse major β-polypeptides (Bedell et al. 1997).