Genetic Mysteries: Genomics and Cardiovascular Disease
In the vast landscape of medicine, the intersection of genetics and cardiovascular disease (CVD) has emerged as a focal point of exploration and innovation. Genomics, the study of an organism’s entire genome, has provided profound insights into the intricate mechanisms underlying cardiovascular health and disease. From unraveling genetic predispositions to identifying novel therapeutic targets, genomics has revolutionized our understanding of CVD and holds immense promise for personalized medicine in cardiology.
The Genomic Landscape of Cardiovascular Disease:
Cardiovascular disease encompasses a range of conditions affecting the heart and blood vessels, including coronary artery disease, heart failure, arrhythmias, and congenital heart defects. While environmental factors such as diet, exercise, and smoking play significant roles in CVD development, genetic predispositions also exert a substantial influence. Advances in genomic technologies have enabled researchers to identify numerous genetic variants associated with various cardiovascular disorders, shedding light on their pathogenesis and clinical implications.
Genetic Risk Factors:
One of the primary goals of cardiovascular genomics is to identify genetic risk factors that predispose individuals to CVD. Genome-wide association studies (GWAS) have uncovered thousands of genetic variants associated with traits such as blood pressure, lipid levels, and susceptibility to specific cardiovascular conditions. These findings have elucidated the complex genetic architecture of CVD and provided valuable insights into disease mechanisms.
For example, genetic variants in genes encoding proteins involved in lipid metabolism, such as PCSK9 and APOE, have been linked to dyslipidemia and coronary artery disease. Similarly, mutations in genes encoding cardiac ion channels, such as SCN5A in long QT syndrome, can predispose individuals to life-threatening arrhythmias.
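Findings like these are often aggregated into a polygenic risk score: a weighted sum of an individual’s risk-allele counts across many variants. A minimal sketch in Python, using entirely hypothetical variant IDs and effect sizes:

```python
# Minimal polygenic risk score (PRS) sketch: a weighted sum of risk-allele
# dosages across variants. All variant IDs and effect sizes below are
# hypothetical, for illustration only.

def polygenic_risk_score(dosages, weights):
    """dosages: dict of variant id -> copies of the risk allele (0, 1, or 2).
    weights: dict of variant id -> per-allele effect size (e.g. log odds ratio)."""
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

# Hypothetical variants near lipid-metabolism genes (illustrative only).
weights = {"rs_PCSK9_example": 0.30, "rs_APOE_example": 0.25, "rs_other": 0.10}
patient = {"rs_PCSK9_example": 2, "rs_APOE_example": 1, "rs_other": 0}

score = polygenic_risk_score(patient, weights)  # 2*0.30 + 1*0.25 + 0*0.10
print(score)
```

Real scores combine thousands to millions of variants with effect sizes estimated from GWAS, but the arithmetic is the same weighted sum.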
Precision Medicine in Cardiovascular Care:
The advent of genomic medicine has paved the way for precision medicine approaches in cardiovascular care. By leveraging genetic information, clinicians can tailor interventions to individual patients’ genetic profiles, optimizing treatment efficacy and minimizing adverse effects.
Pharmacogenomics, the study of how genetic variations influence drug response, holds particular promise in cardiology. Genetic testing can identify patients who are more likely to experience adverse drug reactions or poor treatment outcomes, enabling clinicians to select medications and dosages with greater precision. For instance, genetic testing for variants in the CYP2C19 gene can inform the choice of antiplatelet therapy in patients undergoing percutaneous coronary intervention.
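As a sketch of how CYP2C19-guided prescribing works in principle, the following maps a star-allele diplotype to a metabolizer phenotype, loosely following CPIC-style categories. This is simplified and illustrative, not a clinical tool, and the allele lists are abbreviated:

```python
# Simplified mapping of CYP2C19 diplotypes to metabolizer phenotypes,
# loosely following CPIC-style categories. Allele lists are abbreviated;
# not for clinical use.

LOSS_OF_FUNCTION = {"*2", "*3"}      # no-function alleles (abbreviated list)
INCREASED_FUNCTION = {"*17"}         # increased-function allele

def cyp2c19_phenotype(allele1, allele2):
    lof = sum(a in LOSS_OF_FUNCTION for a in (allele1, allele2))
    gain = sum(a in INCREASED_FUNCTION for a in (allele1, allele2))
    if lof == 2:
        return "poor metabolizer"            # alternative antiplatelet often considered
    if lof == 1:
        return "intermediate metabolizer"
    if gain >= 1:
        return "rapid/ultrarapid metabolizer"
    return "normal metabolizer"

print(cyp2c19_phenotype("*1", "*2"))
```

For clopidogrel, the clinically relevant distinction is carrying loss-of-function alleles, since the drug requires CYP2C19-mediated activation.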
Emerging Therapeutic Targets:
Genomic research has also identified novel therapeutic targets for cardiovascular disease. By elucidating the molecular pathways involved in disease pathogenesis, researchers can identify druggable targets for the development of new therapies.
For example, the discovery of genetic variants associated with familial hypercholesterolemia has led to the development of PCSK9 inhibitors, a new class of drugs that effectively lower LDL cholesterol levels and reduce the risk of cardiovascular events. Similarly, insights into the genetic basis of cardiomyopathies have facilitated the development of targeted therapies aimed at modulating aberrant signaling pathways implicated in disease progression.
Challenges and Future Directions:
While genomic research has yielded remarkable advancements in cardiovascular medicine, numerous challenges remain on the path forward. The translation of genetic discoveries into clinical practice requires robust evidence demonstrating the clinical utility and cost-effectiveness of genetic testing and targeted therapies. Furthermore, ethical considerations surrounding genetic privacy, consent, and equitable access to genomic technologies must be carefully addressed.
Looking ahead, ongoing efforts in cardiovascular genomics aim to unravel the complexities of genetic interactions, epigenetic modifications, and gene-environment interactions underlying cardiovascular disease. Collaborative initiatives such as the Million Veteran Program and the UK Biobank are poised to accelerate genomic discoveries by leveraging large-scale genetic and clinical data.
In conclusion, genomics has emerged as a powerful tool for unraveling the genetic mysteries of cardiovascular disease. By elucidating the genetic basis of CVD, identifying novel therapeutic targets, and guiding precision medicine approaches, genomic research holds tremendous promise for transforming cardiovascular care and improving patient outcomes in the years to come. As we continue to unlock the secrets encoded within the human genome, the future of cardiovascular medicine shines bright with the promise of personalized, genetics-driven approaches to prevention, diagnosis, and treatment.
More Thoughts:
Here are some thoughts from the experts in the field of genomics. As a few of you may know, I had — I lost an argument with a couple of intervertebral discs in my C spine on April 13, 2012, and was unable to deliver the lecture, although I heard that you got a great lecture. Dr. Elaine Ostrander filled in for me about dog genetics. I would have liked to have heard that myself, actually. But I was horizontal for that day.

So what I want to talk with you about a little bit is some of the work we’re doing in genomics and clinical genomics to try and understand the spectrum of heritability in human diseases and human traits, including disorders that some of you are likely dealing with. So what this is about — some of you may have heard this phrase thrown around. There are several phrases that are sort of the phrase du jour here. They all mean pretty much the same thing. I use the phrase “individualized medicine.” Some people use the phrase “personalized medicine.” Some people are calling it “genomic medicine.” But the notion is the same, regardless of what word you might choose to use, which is to think about trying to customize or individualize care based on individual risks instead of population risks.

And we are currently in a phase of medicine where, to push it a little bit, we sort of worship at the altar of the large, blinded, controlled trial. And we do that because — not because it’s bad, but we do it because it works, and it allows us to find and evaluate therapeutics in a way that works for large numbers, substantial numbers, substantial fractions of large populations of people. And that is good. Although, I will tell you, in the course of this little encounter I had with those discs in my spine, I was on the phone with a colleague of mine, who, believe it or not, was a lawyer, and he was commiserating with me because he’s had some similar problems, and he just went on this long harangue about the anti-inflammatories, or oral pain meds. 
And he said, “You know, they’ll tell you that this medicine works in 40 percent and that medicine works in 40 percent and then the third medicine works in 40 percent.” And he’s lecturing me, “Don’t you believe it for a minute, because those 40 percent aren’t the same people.” So here’s a lawyer telling me that we should be practicing individualized medicine. I thought that was pretty hilarious.

So what we want to do is acknowledge that what he’s saying is probably correct, and I think all of us know that from our practices, and we use as much information as we have at our command to try and make those decisions about how to apply which treatment to which patient, and we use our intuition, and our hunches, and our clinical acumen to try and make those decisions. But it would be better, I think, if we had some data that would allow us to direct the treatments toward individuals in a rational way. So we want to do several things, right? We want to apply the treatments to the patient where it’s most likely to be efficacious in that patient and where it’s the least likely to have toxicity or adverse effects. We also would like to begin to move toward treating and preventing diseases before the patient actually gets sick. And this is a little bit of a radical notion, and it has a lot of challenges associated with it, but it is a very important goal, and I think that’s important in cardiology as well as in many other fields, like cancer. It won’t be easy to do. And lastly, though this isn’t a very popular thing to mention, there are certain times — and we, again, use our clinical judgment to do this all the time in current practice — when we have a patient we’re taking care of where the treatment is futile, and it’s good clinical judgment to stop rendering that treatment, in most cases. If it’s not helping the patient and the disorder is going to progress, why are we doing this, especially, again, if there are adverse effects? 
And there are some great examples now in oncology where the oncologist can identify patients who are extraordinarily likely, essentially certain, to be refractory to certain treatments. And in those cases, it’s the wrong thing to do to continue those treatments. And we have to figure out who these people are. So to do this stuff, all of these things that we would like to be able to do with individual patients, we have to be able to make predictions about the medical and physiologic attributes of those patients at the level of the individual, and the large, randomized clinical trial will never get us there. We have to do it a different way.

Okay, so what are we looking for? We need the ability to assay or test some attribute of our patient that defines either the presence of occult disease or disease that is yet to manifest, future risk of disease, response to a treatment we haven’t yet tried, adverse effects, et cetera. And, you know, truthfully, we do this all the time, right? We use physical signs all the time to detect, primarily, occult disease. Splinter hemorrhages in the fingernails are a sign that, by themselves, mean essentially nothing to the patient, yet they tell us what’s going on with that patient’s cardiovascular system, right? So we’re used to this concept at the clinical level, and what genomics is going to do is expand that into heritable disease and allow us to make predictions based on that patient’s heritable susceptibility or propensity to develop disease. But prediction is a tough business, and we’re dealing with a complex system here, right? The human organism is a complex critter. And I like this quote from Albert Einstein. I’ll read it to you. “Occurrences in this domain — ” of course, he’s talking about particle physics, right — “are beyond the reach of exact prediction because of the variety of factors in operation, not because of any trace — any lack of order in nature.” So we recognize that the system is complex. 
We recognize that we won’t be able to make precise predictions. But that doesn’t mean that you can’t make predictions. So that’s a nice way of saying this, which was the gentleman in the foreground, Niels Bohr: “Prediction is very difficult, especially if it’s about the future.” Some of you may have seen that. That has been misattributed to our friend Yogi Berra, but it was actually Niels Bohr who said that. And I just like the picture of the two guys together and the two quotes side-by-side, and I think it gives you a nice feel for this problem. Okay, so — and going back to health care predictions, we can ask the question, “Can we do this for traits, for diseases that have a substantial heritable component?” So what are the tools we need to do that if that’s what we want to do? So we need some sort of an assay that broadly assesses the risk of these traits or diseases. Until recently, that was very difficult to do, because the testing, the genomic genetic testing that we had until recently, was low-throughput and focused, and the clinician had to know what disease they were looking for to order the test to determine the susceptibility to the disease. And so that was a very intrinsically limiting problem. But now that the technology is changing, we can ask the question in an open, prospective way with an individual patient, and extract that knowledge that we’re looking for. All right. So what are we actually talking about here technologically? So we have to do a little bit of genetics here, and so I have a graph here. This is a theoretical graph that shows on two axes the frequency, what we call the minor allele frequency; that is, any position in the genome or in a gene where there is a difference, a variation in the population, the less common of the two states, the minor allele. Frequency can range from zero — almost zero to 50 percent. Can’t be more than 50 percent, because then it’s the major allele, right? 
So that’s the frequency of some allele in the population. And then we think about a heritable trait, and we can ask the question, “If a person has the minor allele, how likely is it that that person has the trait?” And that, we call “penetrance.” And that can range from zero to one. So if that variation in every single person leads to the trait being manifest, the penetrance is one. And if that variation has a very, very low contribution to that trait, and maybe only 1 percent of people who have that variant have that trait, then that’s a low-penetrance trait. There’s a general relationship for variation that we know in the genome versus traits that sort of hovers in a cloud, if you will, along an axis between here and here. All right. A lot of you have heard about SNPs and GWAS, genome-wide association studies. What that kind of a study is doing is assessing common variation in the population — so alleles that have minor — variants in the genome that have minor allele frequencies, from about 1 percent to up to 50 percent — and asking the question, “Are those variants associated with some trait?” And that has been done very successfully for a number of traits, including cardiovascular diseases like atherosclerosis, lipids, blood pressure, et cetera. And you can find these variants, and you can discover the relationship between genotype and disease, and this is an incredibly fruitful and productive area of science. Then there’s this stuff up here that Jean [spelled phonetically] referred to that I have worked on in the past. And these are variations in the genome that are individually extremely rare variations going down in frequency to what is essentially one over the population size; that is, variants that are essentially unique, and we see that in the population. And these rare variants, many of them can have very significant impacts on phenotype. That is, when they’re present, 100 percent of the people, essentially, who have that variant have that trait. 
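The minor allele frequency just described can be computed directly from genotype counts; because it is, by definition, the frequency of the less common allele, it can never exceed 50 percent. A small illustrative sketch:

```python
# Computing minor allele frequency (MAF) from genotypes: the MAF is the
# frequency of the less common of the two alleles, so it is capped at 0.5.

def minor_allele_frequency(genotypes):
    """genotypes: list of allele-A counts per person (0, 1, or 2)."""
    total_alleles = 2 * len(genotypes)       # two alleles per person
    freq_a = sum(genotypes) / total_alleles  # frequency of allele A
    return min(freq_a, 1 - freq_a)           # report the rarer allele

# 10 people carrying allele A 14 times out of 20 chromosomes:
# allele A frequency 0.7, so the minor allele frequency is 0.3.
print(minor_allele_frequency([2, 2, 2, 2, 1, 1, 1, 1, 1, 1]))
```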
Now, there are — this cloud exists because this stuff down here is stupendously hard to figure out. If a variant is uncommon in the population and it has a low effect on the trait, statistically it’s really hard to find. So this, we are sort of thinking, for all practical purposes, until we can assay the entire population on the planet, is really unknowable. We just can’t figure that out. This, between here and here, so these rare diseases that we currently know about and these common variants that lead to low-penetrance traits, there is a huge cloud here, and this is currently unknown, but it is becoming knowable because of the technological advances that we’re now seeing. And the notion is, what we have to do in genomics and clinical genetics and in medicine is to connect these two clouds of variants and understand this full range of genomic variation and the relationship between genotype and phenotype in the full spectrum of frequency to penetrance. Okay, so to summarize again, you can think of, in general, our current understanding is there’s two classes of genomic variation: common variation and rare variation. So common variants are relatively easy now to assay and analyze. These are done — this is done by what are called SNP chips, so chips — DNA chips that assay two million, sometimes upwards of five million different common variations across the genome. And then statistical testing is done to relate those variations with some trait. And there’s — that’s all very straightforward to do. Now the statistics are all worked out, and we know how to do that. The problem is, again, it requires large cohort sizes, right? This makes sense, because if the effect of the variant is small, you’re going to need a large population of people to find the relationship. That’s plain, old-fashioned statistics. You can’t escape from that. So, again, it goes back to this notion of the large trial and averaging across patients. 
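The statistical testing step described above can be illustrated in its simplest form: a 2x2 chi-square test comparing allele counts between cases and controls at a single SNP. The counts below are invented for illustration, and real GWAS analyses add corrections this sketch omits:

```python
# Single-SNP case-control association sketch: a 2x2 chi-square statistic on
# allele counts, computed by hand (no libraries). Counts are hypothetical.

def chi_square_2x2(a, b, c, d):
    """a, b = risk / other allele counts in cases; c, d = same in controls."""
    n = a + b + c + d
    expected = [
        (a + b) * (a + c) / n, (a + b) * (b + d) / n,
        (c + d) * (a + c) / n, (c + d) * (b + d) / n,
    ]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Cases carry the risk allele more often than controls (120/200 vs 90/200).
stat = chi_square_2x2(120, 80, 90, 110)
print(round(stat, 2))  # larger values = stronger association (1 d.f.)
```

With millions of SNPs tested, genome-wide significance thresholds are far stricter than the usual 0.05, which is one reason the large cohorts mentioned above are unavoidable.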
It is incredibly useful and has illuminated a number of different relationships that we didn’t previously appreciate, connecting genes and the proteins they code for with traits and diseases, and has been a real boon to understanding the pathophysiology of human disease. The problem is that, for us as clinicians, it doesn’t necessarily help us with individual patients, because, again, each of these variants is relatively poorly predictive of phenotype. Rare variants, until recently, were nearly impossible to assay genome-wide, but now, because of sequencing, it is getting much easier to find and generate the actual variants, but it’s still hard to analyze them. I’ll talk a little bit about that. The associations, because they’re powerful, can require smaller numbers, so that’s a good thing for us as clinicians. And what I really like about it is we can then bring this into the clinic and start to think about assaying the genome, the individual patient, and making a prediction about an individual person that is highly likely to be correct.

Okay, so a little bit of background on genes. Genes have a number of parts. This is a strand of DNA. This dark part here, these green blocks are the parts of the DNA that encode for protein. Genes have elements within them that control their expression, when and how they’re expressed. Those are called promoters and enhancers. And then there’s lots of DNA between the genes. That is spacer DNA; it used to be called “junk DNA,” but that’s no longer accurate because we now know much of it has function. So these pieces of the gene are then spliced together. Here’s an individual gene spliced together. An open reading frame is what makes a protein. The protein is what does the job in the cell. So common variants are, for the most part, not in the protein coding parts of genes. And, in fact, a fair number of them aren’t in genes at all. They’re in this DNA outside of genes. 
The rare high-penetrance variants that we’re talking about, that stuff in the upper left-hand corner of the graph, turns out almost all of those are in these green blocks, these coding parts of these exons, and the cool thing about that is, even though there’s 20,000 genes and 300,000 exons across the entire genome, this green stuff only comprises about 1 percent or 2 percent of the DNA. So if you’re interested in the high-penetrance, individual prediction parts of genes, you only have to look at 1 to 2 percent of the DNA to extract that information. And the technology that allows us to do this is incredibly cool. It’s very complex, but you can show the concept here, which is that we take DNA from the patient, shear it up into little blocks, and then we basically take artificial DNA that’s complementary to those green parts of the genes, mix that with our patient’s DNA, and it has little beads on it that allow us to extract that DNA with, believe it or not, actually a magnet, which is kind of cool, so you magnetically separate out the DNA you’re interested in, and then that’s the DNA which represents those green parts of the genes. Then you take each of those pieces of DNA and adhere it to a slide, and hundreds of millions of these pieces of DNA are added to a slide at one time, and then that is — each one of those molecules is sequenced simultaneously in a reaction.
So that’s why this sequencing, it’s commonly called “next-generation sequencing.” More correctly, it should — it’s called “massively parallel sequencing.” It’s massively parallel because you’re running hundreds of millions of sequencing reactions all at the same time. And then each one of these sequences is read off of the slide, and then it’s fed into the computers that analyze those sequences, find which part of the DNA, of the genome, they comprise, lines them up, stacks them up, and reads the base pairs. So sequencing instruments actually look like this, these nice boxes with the cool glowing blue lights. That makes them really cool, right? And these things are really awesome, right? So you can now sequence a whole genome or six to eight exomes — and exomes is just that 1 percent to 2 percent — in about three days. Okay? Ten years ago, this took — the first genome took about 15 years to sequence and cost $1 billion to $2 billion. So now it takes about three days, costs about 10,000 bucks for a whole genome. But you can do the green parts, the exome, or the exons, for about 1,000 bucks or less, which is a stupendous drop in cost, which is starting to make this technology comparable in cost to a lot of tests we order on patients all the time. And that allows us to simultaneously, in a single reaction, in a single assay, evaluate all genes. And that’s the tool we need, from the previous slide, to make these predictions and assay these genes, again, without knowing what disease the patient has. All right. This is not all goodness and light, all right? There’s some bad news to this stuff too. This is not as easy as it looks. It generates stupendously large amounts of data. So for every genome we put into one of these instruments, the typical output is about three million variations. So any two of you I were to sequence, I look at your two genomes, you will differ in about three million nucleotides. 
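The “lines them up, stacks them up, and reads the base pairs” step can be sketched as a toy pileup: aligned reads are stacked at each reference position and the consensus base is called by majority vote. The reference and reads below are invented, and one position carries a variant supported by most reads:

```python
# Toy pileup sketch: short reads aligned to a reference are stacked at each
# position and the consensus base is called by majority vote. Real pipelines
# handle quality scores, errors, and diploid genotypes; this does not.

from collections import Counter

reference = "ACGTACGT"
# (start position, sequence) for each aligned read; two reads carry a
# T-for-A variant at reference position 4.
reads = [(0, "ACGT"), (2, "GTAC"), (2, "GTTC"), (4, "TCGT")]

def call_consensus(reference, reads):
    columns = [Counter() for _ in reference]
    for start, seq in reads:
        for offset, base in enumerate(seq):
            columns[start + offset][base] += 1
    # Fall back to the reference base where no reads cover a position.
    return "".join(col.most_common(1)[0][0] if col else ref
                   for col, ref in zip(columns, reference))

print(call_consensus(reference, reads))  # differs from reference at position 4
```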
Some of — most of that is benign variation; some of that is variation that is associated with a disease. The trick is to figure out which is which. Nontrivial. So interpreting this is a huge challenge. We’re just scratching the surface of this. A small fraction of it can be interpreted, and then the glass is half full, the glass is half empty. Half empty is, we can only interpret a small fraction of it.
Glass half full is, yeah, but the fraction that we can interpret is useful. And I’ll show you how that is useful. So, as these instruments were being developed a few years ago, a group of us got together and said, “Well, you know, the biologists are using this to understand the biology of genomes. Why don’t we docs get together and figure out how we can use this to help take care of patients?” And so we put together a project called ClinSeq, which was a translational research project to use genome sequencing in clinical care, clinical research to figure out the relationship with disease and build an approach to developing this as a clinical assay.

So we set up a study. Our initial target was to recruit 1,000 people into the study. Our initial phenotype was cardiovascular disease. We thought that was a great trait to start with because it has a lot of attributes that are amenable to this kind of an approach. We know that cardiovascular disease — atherosclerosis, myocardial infarction susceptibility, hyperlipidemia — has a high degree of heritability. We all know that. There are common variants, common variations in the genome that lead to low-penetrance susceptibility to lipid levels, as well as rare variants that lead to high-penetrance lipid syndromes, other cardiac phenotypes as well. So what we wanted to do was develop a cohort of people who had a range of phenotypes for atherosclerosis, from completely unaffected to affected, and then assay those patients by whole genome sequencing. So we did what we called “binning,” which was we recruited patients into the study based on Framingham risk scores, 250 in each of these categories, and one bin of patients who have the disease, and we’re still recruiting for patients who are affected with atherosclerosis and have had myocardial infarctions. Sequence them, and then we do the follow-up studies. 
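The binning step can be sketched as a simple assignment rule. The score cutoffs below are placeholders for illustration, not the study’s actual thresholds:

```python
# Sketch of ClinSeq-style "binning": assign participants to recruitment bins
# by Framingham risk score, plus a separate bin for affected individuals.
# The cutoffs here are placeholders, not the study's actual thresholds.

def assign_bin(framingham_score, has_atherosclerosis):
    if has_atherosclerosis:
        return "affected"            # e.g. prior myocardial infarction
    if framingham_score < 5:
        return "low risk"
    if framingham_score < 15:
        return "intermediate risk"
    return "high risk"

print(assign_bin(3, False))
print(assign_bin(20, True))
```

Recruiting a fixed number per bin (250 each, per the study design above) guarantees the cohort spans the full phenotype range rather than mirroring the population.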
We interpret the variations that we can extract from their genomes, validate them, return them to patients, and try and start managing these patients based on these variants to, again, test the model of individualized medicine. The recruitment was for folks that were between 45 and 65 years of age. It was open to any ethnic group, both sexes. We did want to exclude smokers. In Phase I, we did require that they have a primary care physician, recognizing that we were going to find things in their genomes that would need to be evaluated and followed up by their physicians, and so we wanted to make sure they had that in place. People — and we were looking for people who wanted ongoing involvement in the study. And we also have set it up so that the patients themselves don’t have direct access to the data. This has actually come to be an interesting issue, which we could talk about if people have questions. So, clinically, what do we actually do to the folks who enroll in the study? We take only a brief history, because here’s the problem with the genome. We can assay all 20,000 genes, but no clinician can do a history or a physical that evaluates a patient for all 20,000 gene traits. That is just not possible. So what we have to do is set this up in a way where we do it iteratively. That is, we start with some brief phenotyping and history-gathering, bring the patients in, consent them for ongoing involvement, and say, “We’re going to come back to you after we look at your genome. When we find variations in this, that, or the other gene, we’re going to phenotype you for traits related to those genes, and do it in a directed, iterative way.” So the information we gather is pretty minimal, by my standards, to tell you the truth. 
Brief history just related to cardiovascular disease, a family history, a few anthropometrics, electrocardiogram, echocardiogram, coronary calcium, a pretty broad panel of chemistries, which you can see here, for your reference, and then some research samples, DNA, RNA, and we make cell lines for the patients. Okay. So here we have these patients who are interested in doing this. We have the genomic or exomic data sets on the patients. We have the baseline phenotypic data. How do you actually go about, then, using these data to find conditions in patients?

So we set out to do some pilot studies. And this is one of three of our early pilots. And what we decided to do was to screen a set of patients for cardiovascular traits that were not related to the reason why they enrolled in the study. And this is a really important caveat. We enrolled patients for atherosclerotic heart disease, and what we’re doing here is assaying them for something other than atherosclerotic heart disease. So we had an exome set of 572 patients that had been exome sequenced, and we asked the question, “How many of these patients have a gene variant that predisposes them to have either a cardiomyopathy or a rhythm — cardiac rhythm disorder?” So we did a literature search, searched the textbooks, and came up with 41 genes that have been found to be related, highly related to cardiomyopathies of various types, and I listed those here. Some of the folks in the room, I’m sure, are more familiar with some of these traits than am I. And then a number of rhythm disorders, like atrial fib, long-QT syndrome, et cetera, and asked the question, “How many patients have variants in those genes?” So when you take the exomes, again, you find enormous amounts of variation. Remember, this is 572 people. And it is — I think it is 63 genes. Yeah, 41 plus 22. So you take just short of 600 folks and 63 genes. How many variations do you find in their genomes? About 1,200. Right? 
So that’s an average of two variants, two genetic variants per person, in a set of traits that we know are not common traits. These are rare disorders. So what that’s telling us is that there is much more variation in the genome than there are pathogenic causative mutations for these traits, and that means that only a small subset of them are actually causative. So which ones are those? And that’s the trick of the interpretation at genome-scale interrogation.

So what we do is what we call filtering. So you take those 1,200 variants, and you begin to look at them and ask questions about them, and basically do exclusions if those variants have attributes that make you think that they are not pathogenic. So, of course, one of the first ones is, if we can look and validate the sequence technology, the values, the quality values that are coming off the instruments. If we’re not convinced about that, we can reject the variants. Frequency’s a big one. So we use frequency. And this is a little bit of circular reasoning, but it’s practical, and it works, which is that if a variant — so let’s take any one of those genes for those traits. If a single variant for that trait is present in the cohort at a frequency that is substantially higher than the trait itself — so let’s say the trait affects one in 1,000 people, all right, and let’s say there’s 100 variations that cause that trait. You know that any one genetic variation that can cause that trait is a subset of that one in 1,000. If any variant is present at a frequency that’s as common as the trait, you know it can’t cause that trait, because otherwise the trait would be much more common than it is. You have to be a little careful with that, but that’s how we can start filtering. That pushes out an enormous number of variants. There are certain types of variation that are more likely than others to actually be pathogenic. So we can exclude some of the ones that aren’t — that don’t have those attributes. 
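The frequency filter just described — a variant observed more often than the disease itself occurs cannot be its fully penetrant cause — can be sketched as follows. The variant names and frequencies are illustrative:

```python
# Sketch of frequency-based variant filtering: a variant whose population
# frequency exceeds the trait's prevalence cannot be a fully penetrant cause
# of that trait, so it is dropped. All numbers here are illustrative.

def frequency_filter(variants, trait_prevalence):
    """variants: dict of variant id -> observed population frequency.
    Keeps only variants rarer than the trait itself."""
    return {v: f for v, f in variants.items() if f < trait_prevalence}

candidates = {"varA": 0.00005, "varB": 0.02, "varC": 0.0002}
kept = frequency_filter(candidates, trait_prevalence=0.001)
print(kept)  # varB (2% frequency) is too common to cause a 1-in-1,000 disorder
```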
And then it gets down to sort of brute force, good old-fashioned pulling up the literature and analyzing the cases of the patients who have been reported to have those variations or very similar variations, and determining, based on clinical judgment, if those reports of causation are, in fact, true. And that’s a challenge, but it works quite well.

All right. So when we look at 63 genes and 572 people, what do we find? Well, turns out you find pathogenic variants. So about 1 percent of these patients have pathogenic mutations in one of these genes. One of these was dilated cardiomyopathy. This is a gene that’s called phospholamban, and this exact variant has been found to be present in patients with dilated cardiomyopathy. Hypertrophic cardiomyopathy, two different patients, each with their own unique variation. These are — for those of you who aren’t familiar with this, these are standard mutation nomenclature. This means there’s a stop mutation in the gene, so that gene’s protein product is truncated prematurely. That’s a severe mutation. This is a mutation — remember how there were the little green blocks in the gene that were spliced together? This is a mutation of one of the sequence elements that causes that splicing not to occur, so the gene is never put together correctly. And this is a change in an amino acid, so this is what we call a “missense variant” in a gene. That has been shown to cause this trait. Then some of the rhythm disorders. We had three patients who had cardiac rhythm disorders. The variants have all been described in several families, and I’ll tell you a little bit more about one of them. Okay, so then we go back and, again, look at the patients, and do this in an iterative way. So for the cardiomyopathy patients, when we went back and pulled their echoes, they did not have current evidence of cardiomyopathy. Okay? So that can mean one of two things. 
Either we’re wrong and these variants do not actually cause these diseases, even though they’re published as being causative variants, or we have done exactly what we set out to do, which is to find disease susceptibility before the disease manifests. Right? When we look at the family histories, we find several of these individuals have a striking family history of unexplained cardiac deaths. Now, that could be attributed to a number of things, and those diagnoses are hard to assess, because these are secondhand reports of disease and death. But we were quite impressed at how many people have relatives with unexplained symptoms and deaths. And this is that last patient I mentioned, who was a very interesting patient in our study. This is a lady who enrolled at the young end of our age eligibility, late 40s, and she has had ongoing problems with unexplained syncopal episodes. She has a left bundle branch block that is not explained by a known coronary artery or other cardiac disease. She has, on our electrocardiogram, a clearly abnormal corrected QT interval. And she has a child who has also had episodes of unexplained palpitations. So here is a patient who came into our study to be enrolled for atherosclerosis, doesn’t have atherosclerosis, but instead, when we sequence her, we find a variant that I think is highly likely to be pathogenic for a serious cardiac rhythm abnormality, and we have diagnosed this disease in a patient who didn’t know that she had it. So what is going on here? This is actually pretty radical stuff, because it flies in the face of how, essentially, all of us were clinically trained. What we did is we took a cohort that was not selected in any way for the presence of cardiomyopathy or dysrhythmia, or for family history of sudden death. That selection is how we normally do genetics: we try to go out and find patients who have these rare phenotypes or have family histories of these disorders, and we sequence them.
We’re not doing that here. We’re taking unselected patients and just screening their genomes and asking the question, “What tiny subset of this population has this trait?” We sequenced all genes and then selected the genes retrospectively to look at and analyze, and there was no indication for doing this testing in these people. And what we found is that more than 1 percent of our cohort have apparently pathogenic variants in genes for diseases that we consider to be rare monogenic forms of non-atherosclerotic cardiovascular disease. So, again, this was done without a chief complaint, without a history, without an exam, without any clinical testing to suggest a disorder was there, without a family history, and we ordered tests for every gene in the genome. And I can tell you, and I think most of you would say the same thing, if I had even suggested doing such a thing when I was in my clinical training, they would have practically hit me with sticks. You don’t do that. What I was trained to do is order tests only when I knew that the patient had an indication for the test, that I understood what the test was for, and that the alternative outcomes of the test would change the management of the patient I was taking care of. That’s what I was taught to do. We are completely turning that head over heels and saying the exact opposite. So this is a radical thing to do. But, again, if we’re serious about wanting to do individualized, predictive medicine, we do have to be willing to throw these things overboard and try some different approaches. So, again, this is contrary to everything we’ve been taught and is a new way to think about how to practice medicine. The interesting thing is, when you stop and consider our old model, our old model works, but it’s kind of perverse, in a way, isn’t it? What we’re basically telling people is, “You know what?
We are not going to understand your disease until you’re either already sick or people in your family have died. Until that happens, leave us alone, because we’re not going to take care of you.” That’s our current paradigm. And you have to ask the question: Is that really what we should be doing? Now, there are good reasons why we practice medicine the way we do. And I’m not here to say that the chief complaint, the history, the exam, the differential diagnosis is useless, because you and I know that it is absolutely not. It is essential. It will always be used. It will always be useful. That skill set will always be important. But now it’s not the only way to do it. There is another way. All right, so, getting back to the original focus of the study, let’s think about dyslipidemia. This is not a surprise, right? If you recruit patients into a cohort because you want to study atherosclerosis, and you select for patients who have disease, you darn well better find people who have dyslipidemias, because we know that dyslipidemias cause atherosclerotic heart disease. So here’s another participant in our cohort, at the higher end of the age scale, a 65-year-old female. She was diagnosed with high cholesterol at the age of 25 years. She was very, very well-managed, and you can see her numbers were in good shape. She is, though, clearly suffering from this dyslipidemia and has a stupendously high coronary calcium score, although she has not yet had an MI. So here’s a lady we evaluated, and we found that she had a pathogenic mutation in the low-density lipoprotein receptor, which is a very well-known cause of hypercholesterolemia. And when we then talked to this patient, she said, “Oh, yeah, I think some of my relatives also have high cholesterol.” And we went through the family and identified four other people who have this trait as well.
And this is a really interesting phenomenon, because when I talk to practicing physicians — internists, family practitioners, pediatricians — they will tell me, and they’re very comfortable being frank with me (that’s what the diplomats call it, a frank and honest exchange), “I don’t need no freaking genetic test to tell me if my patient has hypercholesterolemia.” Right? You can do that biochemically. They don’t need it. Well, actually, I would say that we do need it, and here’s why. For every patient we’ve discovered who has a genetic cause of a dyslipidemia, we have found, by looking at their relatives, between four and eight more patients who have hypercholesterolemia that is either completely undiagnosed or significantly undertreated. And yes, it is true that you don’t need to analyze the genome of the proband to understand that they have hypercholesterolemia, because you can diagnose that, you can treat it, and you can manage it perfectly fine. But we can leverage this, because we can identify multiple other individuals in the family, and the marginal cost of identifying those other people is very, very small. And what I’m beginning to understand is that the genomic or genetic result forces us and our patients, because it occurs to the patients at the same time, to ask a question: “Oh, this is a genetic trait, and it’s a simple genetic trait. We understand exactly how it’s inherited. So, Doc, who else in my family could have this?” And then the dominoes start falling. And, in fact, I think what genomics is doing is forcing a conversation to occur that we are currently ignoring. All of us are trained to take care of the patient in the office. Our colleagues in family practice are, arguably, a little better at this than most of us — pediatricians, internists, ob-gyns, et cetera — but we all need to start thinking about the family as the patient.
There are more patients than one. Including in this family, because this grandson, if I remember correctly, was 10 years old and had wildly abnormal cholesterol. And, as some of you may know, the American Academy of Pediatrics has now recommended institution of statin therapy for children with familial hypercholesterolemia, not garden-variety hypercholesterolemia, but familial hypercholesterolemia, starting at age eight, because it is clear that this lifetime burden of cholesterol is what leads to the buildup of atherosclerosis over time. And this lady’s calcium score of 1,700 is partly attributable to the fact that she wasn’t diagnosed until her third decade of life. So we need to start treating these people much earlier. In fact, we thought this was so clever, but, of course, when we go into the literature, it turns out that several Scandinavian countries have these wonderful single-payer health care systems and public health systems where, for every person who gets a diagnosis of familial hypercholesterolemia, a public health nurse is sent to that person’s home. They get a family history. Then they go back to the unified medical record system that they have; they pull up all the relatives. They go to their houses, bring them into the clinic, diagnose them with familial hypercholesterolemia, and treat them. They’ve been using this approach for about 10 or 15 years, and it works; they now have a very low rate of undiagnosed familial hypercholesterolemia in the population. Okay. So I’ve told you about two examples of how we can use genomics clinically. And so we’ve diagnosed these six cardiomyopathies, nine dyslipidemias.
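The cascade-screening process described here — test a diagnosed proband’s relatives, then expand outward from each positive result — is essentially a graph traversal. A toy sketch, with an invented pedigree and invented test results (real programs, of course, involve consent, counseling, and biochemical confirmation at every step):

```python
from collections import deque

# Toy cascade screening for an autosomal dominant trait such as familial
# hypercholesterolemia: start from a diagnosed proband, test first-degree
# relatives, and keep expanding from each relative who tests positive.
# The pedigree and results below are invented for illustration.

relatives = {  # adjacency list: person -> first-degree relatives
    "proband": ["sibling", "child1", "child2"],
    "sibling": ["proband", "niece"],
    "child1": ["proband", "grandson"],
    "child2": ["proband"],
    "niece": ["sibling"],
    "grandson": ["child1"],
}
carriers = {"proband", "sibling", "child1", "grandson"}  # assumed results

def cascade_screen(start):
    """Return everyone tested, expanding only from positive results."""
    tested, queue = set(), deque([start])
    while queue:
        person = queue.popleft()
        if person in tested:
            continue
        tested.add(person)
        if person in carriers:  # positive test: screen their relatives next
            queue.extend(relatives[person])
    return tested

print(sorted(cascade_screen("proband")))
```

The marginal cost per additional diagnosis is low because each positive test hands you a short list of new, high-prior-risk people to screen, which is exactly the “dominoes falling” effect described above.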
We’ve also gone through this cohort and asked the question, “How many people in this study have cancer susceptibility syndromes?” As some of you may know — and I think you’ve had a lecture on cancer genetics already — there are a number of inherited cancer susceptibility syndromes where individuals in these families have an extremely high rate and early onset of cancers. And, again, in the same 572 people, eight individuals have an early-onset cancer susceptibility syndrome, and interestingly, only half of them were known at the time they enrolled in this study. The other half were individuals who had negative family histories, either because the family was small or, in the case of several of them, because there were hereditary breast and ovarian cancer gene mutations in the family but, just by chance, these families had a preponderance of male births instead of females, so there just weren’t that many people in whom the trait could manifest. And, again, that gets to that notion of, “We’re not taking care of you until you have the disease or your relatives start dying of the disease,” and the fact that we can find them prospectively. Two patients in the cohort have malignant hyperthermia susceptibility, which is a very important trait with good, easy medical interventions to reduce or eliminate the risks of that phenotype. Three patients have a peculiar form of neuropathy that’s supposed to be rare; I can’t explain why it’s so common in our cohort, but there it is. And we’ve also identified one patient with an occult metabolic disorder that was previously thought to affect only children. Now we know it can affect adults as well. And really, here we’re just scratching the surface. This is a decent number of people — again, 500 or 600 people — and 5 percent of them have an occult or unrecognized rare disorder that we thought only affected a few families here and there but, in fact, is scattered throughout the population at a significant level.
And I would actually ask all of you to consider, for those of you who are in active practice: if you have 2,000 or 3,000 people in your practice and we sequence them, these data would suggest that 5 percent of those patients, so 100 to 150 of your patients right now, have diseases like this. And, for the most part, we probably don’t know about it. That’s a concerning thought to me. And, again, we’re just scratching the surface, so really, there’s more than that in there. And while routine clinical sequencing is not indicated yet, it’s going to be soon. You’re going to start seeing patients who have had this done, and we’re going to be able to find this. And there are lots of other traits. This is only a small fraction of genetic traits; there are hundreds of other dominant diseases in humans that we are going to start going through. Pharmacogenetic data can be extracted from exome and genome sequencing that will allow us to begin to better select drugs. So if a patient has, for example, a rare variant in a gene that gives them a myopathy from atorvastatin, we can find that, we can communicate it to patients and providers, and those drugs can be avoided. As well, all of us carry mutations for which we are heterozygous carriers, mutations that pose reproductive risks for our descendants, and we should consider that too. Okay. So this all looks really easy, right? I love this picture. [laughter] There’s so much unsaid. What I really like here is that you can just barely see in the background all these guys just sort of standing around, and I can’t imagine how annoyed they are. [laughter] All the hard, hard, awful work that went into making that beach ready for this guy to sort of prance on with the photographer in front of him. He makes it look easy. All the hard work was done before he even got there. And there are a lot of criticisms of this notion of individualized and personalized medicine.
So there are some statistical analyses that say that heredity is not as good as we think it is at predicting these traits. This is based on an evaluation of identical twins that suggests that our power to do this may not be quite as high as we think it is. And one of the big reasons for that is, remember that penetrance graph I showed you? All of those data on penetrance and variation are based on ascertaining rare families. Rare families are selected for the attribute that, when they have this single variation, they manifest this disease, which means they probably have some other genetic attributes that make the disease highly likely to manifest in those families. If you go outside of those highly penetrant families, the penetrance of the same variant will drop off, so we may be overestimating it, and that will make our predictions not as strong as we think. We’ve been dealing with these kinds of problems for a long time, though. All of us are used to this, right? In almost all tests we do, there is a significant probability of false positive results. I’ve told you that, in sequencing, we found more than 1,000 variants in these cardiomyopathy and rhythm genes, and all but six of them were probably benign or unlikely to be pathogenic. In every clinical pathology test we do, the same thing is true. We use statistical normal ranges, which means that, just by statistical variation, at least one out of every 20 tests we do is a false positive, right? With the famous chem-20 that we order, the odds are pretty good that one of those 20 values is out of whack; just from statistics, not from pathology. And certainly in imaging — my goodness. We do coronary calcium scores by CT scanning, and the rate of false positives and incidental findings in CT scanning is more than 5 percent. Plenty of false positives and abnormal findings there.
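The chem-20 point can be made concrete with one line of arithmetic: if each analyte’s normal range covers 95 percent of healthy people and the analytes are treated as independent (a simplifying assumption), the chance that a fully healthy patient gets at least one flagged value on a 20-analyte panel is about 64 percent.

```python
# For a panel of independent tests, P(at least one false positive)
# = 1 - P(every test falls in its normal range).

p_normal_each = 0.95  # each normal range covers 95% of healthy people
n_tests = 20          # the "chem-20" panel

p_at_least_one_flag = 1 - p_normal_each ** n_tests
print(f"{p_at_least_one_flag:.1%}")  # roughly 64%
```

So “one of those 20 values is out of whack” is, if anything, an understatement: more often than not, at least one value is flagged by statistics alone.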
So the next question is this: there are a lot of challenges here for geneticists and for clinicians. Are patients ready for this? What is going to be involved in sitting in a room and talking to a patient about a whole genome test? Again, a genome generates an enormous number of results, millions of variants. We know, from clinical experience, that it can be a challenge to communicate one test result to a patient in the clinic. Can you imagine talking to them about three million? Not even conceivable. So we have to figure out how to cope with this information overload, because we are swimming in data here, and you cannot bring that into clinical practice. We’re going to have to develop new approaches for how we use those data, how we parse them out, how we roll them out to patients over time. And to do that, we need to know what the patients think, what they want, and how they use it. So we’re also studying our participants in ClinSeq to understand how they view the information and what use they are making of it. We did what’s called qualitative interviewing, open-ended questions to ask patients what it is that they’re looking for, what they expect, and how they imagine using it. And what we found was two basic clusters of answers to these very open-ended questions. First, and this is a wonderful thing about recruiting patients in Bethesda and at the NIH, these people are really altruistic, and it is absolutely amazing to me how willing they are to help us understand what we’re trying to study, even if it doesn’t benefit them. They are, though, equally interested in their own health. They believe that we will find, and that they will receive from us, information about their genomes that will allow them to change something about their health care that will help them or their family members.
And what’s important about this, methodologically, is that this is not asking people, “Wouldn’t it be interesting to think about having your genome sequenced?” These are people you’re actually bringing in; they have to put their arm out, and we take the blood, and we’re doing it. So this is the real thing, and this is what we can really expect people to want to do. We also assessed their preferences for what they want from exome and genome sequencing, at baseline enrollment and then following consent, and gave them four scenarios for the classes of results we might find and evaluated them on that. What was interesting: these are self-selected, very interested, eager folks, and nearly all of them said they wanted to learn their results. Six of them, interestingly enough, were uncertain about it. Even signing up for this study, they weren’t sure they wanted their genome results returned to them. They were a little anxious about what that might mean. They were interested in using the information for prevention, and they were very committed to the notion that having it would better equip them to either prevent diseases or manage them when they did manifest. There were also some comments about preventive measures that could be implemented by their docs, and about using the results to change their environmental exposures, their diet and exercise, et cetera. About a third of the population were just curious. There’s an intense curiosity about our heritability, about our families, about our genomes. And these people are of the opinion that “all knowledge is positive.” I think that’s a really interesting thing, because I would bet that every clinician in this room would say, as I would, that that’s just not true. It is not the case that everything I can find in all these genomes will be a positive thing.
There are things we can learn that will be wrong. There are things we can learn about which there is nothing good we can do. Not all information is positive, yet the patients hold stupendously optimistic views of this, and that’s an issue we’re going to have to deal with in matching the optimism of patients to the reality of this testing. About a third of them wanted the information because they wanted to understand something about a trait they thought was familial and in their family history, or because they wanted to transmit it to their children to help their children plan for their futures. And some of them actually came to the study with a specific condition in mind, and that may be the explanation for why we’re seeing some rare disorders a little more commonly than we ought to: a very subtle form of self-selection, people having a vague but potentially correct feeling that, “You know, there’s something going on in my family, and I think you guys can figure it out.” And we are then stumbling across those traits. Most of it was related to heart disease, again, which is appropriate given the focus of our study. But it may be more general than that, and I think that’s an interesting thing to consider. So, again, they’re very enthusiastic about learning the whole range of results, even results that we would say are of uncertain clinical significance. The patients are interested in that and desire that. They do recognize the distinction among the types, even though they’re generally enthusiastic, and they want what we call actionable results: genomic results that they can take to their doc and do something with. Then there’s the knowledge of the patients.
Again, if you recruit from Bethesda, you’re going to get a knowledgeable and sophisticated group of folks, and they had very high levels of knowledge pre- and post-counseling. They have a long, 45-minute genetic counseling session where we explain to them what the sequencing is and what it is not. And even though they came in highly educated, there was an increase in understanding of the power and the limitations of genomics from pre- to post-counseling, so that is an important part of it. So the big picture here is that I want to challenge you. I’m very proud, and I think I’m pretty darn good at phenotyping patients, but I also recognize that I am far from perfect at it. When I sequence a patient’s genome, I learn things about my patient that I didn’t know before I sequenced it. And I can figure out things that I’m pretty sure I would never have figured out without the genetic data. I think, because we have only had our diagnostic abilities to find disease in the past, we rely on them exclusively, but that doesn’t mean that’s the only way we can do this. I think we practice a lot more trial-and-error medicine than some of us would like to admit. Again, that’s all we can do, and so we do the best we can. We have good data to try to tailor medicines and treatments to patients based on their attributes, but we’re not that good at it. We can do better. Our current ability to predict disease onset, susceptibility, severity, and course is limited; we would like to be able to predict those things in our patients. Our ability to predict the efficacy and side effects of treatment is, again, limited and ripe for improvement. And the way to think about this: Is genomics going to perfectly solve all of these problems? No way. But the other way to think about it is that, in many respects, we’re not nearly as good at this as we would like to be.
And even if we can improve it only a little bit in all of these attributes, that would make an enormous difference to medical practice. So, for sure, there is a lot more work that needs to be done. We have to do an enormous amount of work to really tighten up this relationship. We have to be able to precisely predict phenotype from genotype and know the limitations of those predictions. We have to develop and test approaches to pre-symptomatic management. We really don’t have effective approaches right now to managing a patient who has a hypertrophic cardiomyopathy gene mutation before they have hypertrophic cardiomyopathy. We need to figure out how to do that. And we have lots of work to do to build the infrastructure and methods for managing, disseminating, and using this information with patients and in our health care records. There are lots of arguments about whether we should or shouldn’t do this, and some people are really critical of it. And the truth is, I think those arguments are now water under the bridge. This stuff is out there. Genomes can be ordered clinically, and you will start seeing patients in your practices who have had this kind of testing done, and that may be for reasons completely unrelated to your care of your patient. For example, you may be taking care of an adult who has been sequenced because they have a child with autism. Right? When we do sequencing for things like autism, we usually sequence the kid and both parents. If you sequence the parents, you’re going to find this other stuff whether you want to or not. We’re going to have to learn how to work with it. So it’s coming, not to mention the fact that there are 1,000 people in metropolitan Washington who have been sequenced in ClinSeq now. Some of them may end up in your practice as well. So we have a lot going on, and we are going to have to figure out how to use these genomes clinically, and these patients are there. There are other downsides here.
Genetic discrimination. It was really fun: NIH has a Science in the Cinema series, and they screened, on Tuesday or Wednesday night, the movie “Gattaca” over at the AFI in Silver Spring. A pretty darn good movie about predictive medicine and discrimination. We have to think about genetic discrimination. We have passed a law called the Genetic Information Nondiscrimination Act that addresses some of these concerns, but the issue has not been completely put to rest. And we really have to struggle with this notion of prediction: Is genetics going to be able to predict these things, or are we going to end up like these guys? Right? Here’s a mode of prediction that isn’t all that respectable anymore. I would actually suggest that genetics is going to do much better than this. Another quote I like about predictions is, “The groundhog is like most other prophets; it delivers its prediction and then disappears.” So, for better or worse, I think we genomics people are here to stay. I think we’re going to be a thorn in the side for a while. But I think this stuff is going to work, and it’s going to expand. We’re going to build it out, and we’ll be able to predict what’s going to happen to our patients before it happens. So I’ll stop there and take your questions. Thank you for your attendance. [applause] Any questions? Yes, sir. Male Speaker: [inaudible] look at the other side of the coin. In other words, if there’s a very strong family history, and the individual doesn’t have the disease, is there something protective about their sequence, or is it related to environmental factors? [inaudible] In other words, looking at it from a “Why don’t they?” kind of thing [inaudible]. Les Biesecker: Fabulous question. And that gets to this notion of penetrance; that’s exactly what you’re asking about. And there are two levels of that. The first is, absolutely, it’s harder to figure that out.
We call those genetic modifiers, when other genes modify the traits, right? And, you know, we sort of know that. Our intuition tells us that what you’re saying is true, and we see it when two families come together: a family that has a high incidence of some trait or characteristic marries into another family that is maybe genetically very different from them, and you can see that trait disappear in the descendants. That’s exactly the phenomenon you’re talking about. The other level is that there’s a great study going on called the centenarian study, in which they are sequencing people who have lived to the age of more than 100 years. The notion is that those people will have genetic variants that are uncommon in the rest of the population, and that those variants allow them to live into their 11th decade. So it’s a great question, and people are working hard on that right now. And environment is always important. Always, always. When I talk to people about this in social situations, I always get what I call the “Uncle Walter story.” And, you know, when I get this story, they have at least one hand on their hips when they tell me: “My uncle Walter ate bacon and eggs for breakfast every single morning and smoked his cigar after dinner, and he lived to be 600 — ” you know. People love telling those stories, and there are people who have those protective alleles. And, frankly, that would be a good thing to know about. Female Speaker: Any idea about the costs? Les Biesecker: Costs. Okay. I gave two numbers for cost, but you should be very skeptical of those numbers. Those are research costs, okay? My cost for doing a whole genome sequence in a research context is about $10,000; for an exome, I think it’s currently $850.
When that is rolled out clinically, as you’re well aware, costs go up, because there are a lot of things you have to do in clinical testing that you don’t have to do in research testing, and that makes it significantly more expensive. You can buy from at least one commercial company that is selling clinical whole genomes; I believe their retail cost is about 10,000 bucks, and you can order that today. So you can get retail genomes for that price today. And people are ordering them: patients who have severe, intractable cancer. Some of you may have caught that series in the New York Times this past week. Those patients actually get two genomes done: they have their normal genome, their peripheral blood genome, done, and then they have their cancer genome done, and they compare the cancer cells to the noncancerous cells to figure out what’s going on in the cancer. So they have two of them done — 20,000 bucks. Male Speaker: Is RNA done, too? Les Biesecker: Say it again? Male Speaker: RNA sequencing. Les Biesecker: There’s RNA sequencing, and there’s proteomic analyses, so there are lots of these -omics that we can begin to think about doing. But I think you should think about this testing as being in the $1,000 to $10,000 range, which is a lot of money. We can’t pretend that that’s cheap. But, again, it is in the realm of other medical tests that we do all the time, medical tests that can’t tell us, in fact, as much as we can learn about a patient from a genome, in some ways. So it’s now come down to where you can begin to think about this, and you’re going to start seeing it in practice. Male Speaker: Those two patients who had the malignant hyperthermia, did they give a clinical [inaudible], is that what they [inaudible] to these studies? Les Biesecker: Great question. Great question.
So those two people, if I remember correctly, did not have any hint, in either themselves or any relative, of malignant hyperthermia. One of those two mutations I’m absolutely sure is correct. It’s on a panel of the 20 most common mutations that cause malignant hyperthermia in European-derived persons on this planet. It’s a very common mutation. It’s rock solid. I will also tell you, though, that there is a third person in our cohort who does have a family history of malignant hyperthermia. Male Speaker: [inaudible] medications to stay away from? Les Biesecker: Yeah, so malignant hyperthermia is most commonly triggered by — are there anesthesiologists here? — I think it’s isoflurane and succinylcholine that are the triggering agents. And the antidote, the treatment for that, is dantrolene. So for these patients, we recommend simply that they wear a medic alert bracelet and have it in their chart that they have malignant hyperthermia susceptibility. This costs nothing. The treatment really costs essentially nothing. And if they do go into surgery, the anesthesiologist has a vial of dantrolene sitting there and watches that patient’s temperature in the OR, and if the temperature starts to go up, then you know. But the third patient is interesting. We have a guy in the cohort who absolutely has a very strong family history of malignant hyperthermia. We sequenced him, his whole genome and whole exome, and he does not have a mutation in the gene. In fact, he has a variation that we showed doesn’t cause malignant hyperthermia. So that’s a very important thing to recognize about whole exome sequencing: it is not 100 percent sensitive. I mean, which of our tests are, right? But we have to remember, we use the words “whole” genome sequencing and “whole” exome sequencing, and that’s a little bit of a fib, because it’s not whole; it’s not 100 percent.
And so we know that in that person we missed it, even though we know he has it. So the one who had the family history, we didn’t find the variation; the two where we found a variation didn’t have a family history. And that sort of conflicts with how we’re currently practicing medicine and genetics, where we require those two things to happen together, and they don’t. Yes, sir.

Male Speaker: So could you speak a little bit to your opinions — oh, [inaudible] mic.

Les Biesecker: Yeah, I don’t think you need it. [laughter] I like that.

Male Speaker: Could you speak a little bit to your opinions on who should own individualized genomic information and who should have access to it?

Les Biesecker: How I think the information needs to be managed: I think the ideal system would be what I call a two-key system. I think of the sequence information as a health care resource; it’s not a test. If we think about a whole genome sequence as a test, you get tied up into knots, because you can’t figure out what the heck to do with it. It’s too many results to analyze. It’s too many results to return to a patient. It’s just too much of everything. So what we have to do is take a step back and say, “This is a resource; it’s not a test.” The patient will have this done for some reason, or have it done because of some reason in a relative. And that resource then becomes a part of their health care record, or resources. And then, when that person has a need for some information from their genome, they and their clinician will have the ability to access it, if they both agree that that’s a proper thing to do. So I would envision a partnership between the patient and the clinician where, if they are in agreement about it, you use it, and if they’re not in agreement, you don’t.
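The two-key system just described can be sketched in code. This is a minimal illustration, not any real system: the class, method, and gene/variant entries are all hypothetical, and it assumes the genome is stored as a queryable resource from which a single result is released only when both the patient and the clinician agree.

```python
# Illustrative sketch of the "two-key" access model: the genome is a
# stored health care resource, and a specific result is released only
# when both patient and clinician consent. All names are hypothetical.

class GenomeResource:
    """A stored genome treated as a resource, not a one-shot test."""

    def __init__(self, patient_id, variants):
        self.patient_id = patient_id
        self._variants = variants  # e.g. {"RYR1": "pathogenic variant"}

    def query(self, gene, patient_agrees, clinician_agrees):
        # Both "keys" must turn before anything is released.
        if not (patient_agrees and clinician_agrees):
            raise PermissionError("patient and clinician must both agree")
        return self._variants.get(gene)

genome = GenomeResource("patient-001", {"RYR1": "pathogenic variant"})
print(genome.query("RYR1", patient_agrees=True, clinician_agrees=True))
```

If either party withholds consent, the same call raises a `PermissionError`, which captures the partnership model: agreement from both sides, or no access.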
And it stays safe and secure and is integrated into our health care system in a way that makes sense with our current practices, because I am not one of these people who likes to go around saying that genetics is going to revolutionize medicine, because what I’ve found is that most people actually don’t like revolutions. They’re really kind of messy things. Let’s try to evolutionize medicine instead of revolutionizing it: use these data in ways that make sense with what we currently know how to do, insinuate them into our practices in a way that’s useful to us as clinicians, where we know what we can do with our patient, and where the patients want it, perceive it as useful, and perceive it as necessary, and then go forward that way. Maggie [spelled phonetically], behind you.

Male Speaker: If the genome sequencing is repeated, let’s say, at five-year intervals, how accurate, that is to say, how reproducible is it?

Les Biesecker: Great question. Now, here’s your classic, again, glass-half-full, glass-half-empty argument. Good, quality genome sequencing or exome sequencing is between 99.9 percent and 99.99 percent accurate. Impressive number; glass half full. Then you say, “Okay, how big is the genome again?” The genome is about three billion nucleotides, and so you can ask how many errors that is. That’s hundreds of thousands of errors. So those errors will come up. The error rates, and the models for analyzing them, are constantly improving, so that number will actually go down over time. But the denominator is so big that even very small error rates can have significant implications. For that reason, we are currently doing things in a way that all of our sequence data are generated in a research sequencing laboratory.
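The arithmetic behind that answer can be worked out directly. This sketch assumes a genome size of roughly three billion nucleotides (as stated above) and treats the quoted accuracy as a simple per-base error probability, which is a simplification of how sequencing error actually behaves:

```python
# Worked version of the error-rate arithmetic above: per-base accuracy
# of 99.9 to 99.99 percent still leaves a large absolute error count
# across a genome of ~3 billion nucleotides.

GENOME_SIZE = 3_000_000_000  # roughly three billion nucleotides

for accuracy in (0.999, 0.9999):
    expected_errors = GENOME_SIZE * (1 - accuracy)
    print(f"{accuracy:.2%} accurate -> ~{expected_errors:,.0f} errors")
# 99.90% accurate -> ~3,000,000 errors
# 99.99% accurate -> ~300,000 errors
```

So even at the optimistic end of the stated range, the huge denominator turns a tiny error rate into hundreds of thousands of absolute errors, which is exactly why the speaker's group replicates any clinically actionable variant in a second testing setting.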
And then, if we are going to use any of those data to do anything to a patient, the variant that we’re interested in using for clinical care is completely replicated in another testing setting, to make sure that the two results are the same from the two different methods. That then dramatically lowers the error rate. But when you consider the genome as a whole, there are errors in it. Anytime you assay anything that big and you’re less than perfect, which we always are, there are going to be errors. That’s a great question. And that also gets into the whole question of mosaicism. We have this lovely fiction that every nucleated cell in our body has the same genome in it, right, because they’re all identical. Well, actually, it’s not true. There are mutations that occur within different parts of our bodies, and there is variation within us. And we now know that that can cause a number of diseases. Cancer is the most extreme example of somatic mutation, right? That’s where there are hundreds of mutations in a cell or a tissue. But we know that that actually extends down to much finer grades of variation and can cause a number of other traits and diseases. Thank you all very much.