Archive FM

ASHPOfficial

AJHP Voices: Implementation of clopidogrel pharmacogenetic clinical decision support for a preemptive return of results program

Duration:
35m
Broadcast on:
24 Jul 2024
Audio Format:
aac

In this podcast, Dr. David Kao discusses the AJHP Descriptive Report, “Implementation of clopidogrel pharmacogenetic clinical decision support for a preemptive return of results program,” with host and AJHP Editor in Chief Dr. Daniel Cobaugh.

The information presented during the podcast reflects solely the opinions of the presenter. The information and materials are not, and are not intended as, a comprehensive source of drug information on this topic. The contents of the podcast have not been reviewed by ASHP, and should neither be interpreted as the official policies of ASHP, nor an endorsement of any product(s), nor should they be considered as a substitute for the professional judgment of the pharmacist or physician.

Welcome to AJHP Voices, a series of discussions with AJHP authors and interviews focused on contemporary practice issues. AJHP is the official journal of ASHP, and its mission is to advance science, pharmacy practice, and health outcomes. Hi, this is Daniel Cobaugh, the editor-in-chief of AJHP. Thanks for joining us in this episode of AJHP Voices. Pharmacogenomic management has emerged as a powerful tool to help personalize medication use. Joining me today to discuss his article entitled "Implementation of clopidogrel pharmacogenetic clinical decision support for a preemptive return of results program" is Dr. David Kao, medical director of the Colorado Center for Personalized Medicine. David, welcome. Thanks so much for joining me. Thank you for having me. Lots of questions about your article, but first, I really wanted to get a sense of your journey as a physician. Where did you enter into the world of pharmacogenomics, and what drove your desire to begin focusing in this area? For me, it was sort of a roundabout path, but I'm a cardiologist and clinical informaticist. My research has been in data science, omics, large data set analysis, that sort of thing. So my first work at CU was in pharmacogenetics of beta blockers and beta-1 adrenergic receptors, specifically with one of our senior faculty here. So I did a lot of targeted research on that. When the Center for Personalized Medicine and the BioBank sort of came into being, one of the goals was to be able to return clinical results to research participants out of that biobank. That involves not only the pharmacogenetic knowledge and clinical recommendations piece, but a very complex and in-depth implementation through the electronic health record. So what we found at the outset was that we needed sort of pharmacogenetic expertise. That's my partner, Dr. Christina Aquilante. And then the data science and electronic health record informatics experience, which is how I kind of fit into the whole picture.
So I did have a pharmacogenetics research background, but my role in this has been really in operational implementation within our health system. We feel like you really need both to be successful. Got it. And before you established the partnership with Dr. Aquilante, had you had previous experience working with clinical pharmacists in the care of your patients with cardiac diseases? We have a couple of pharmacists who support our heart failure transplant program in particular, whom I had worked with both clinically and in the research space. That was very, you know, clinical application oriented, like domain-specific kind of stuff. But yes, I had worked with a number of PharmDs prior to that and other sort of advanced practice providers as well. So as we start off, I think there's been a lot published about clopidogrel specifically and the pharmacogenomic characteristics of this drug, but maybe at a high level you could give the listeners a summary of the pharmacogenomic profile of clopidogrel and what makes it such a good agent to study and to focus on as you're implementing a program such as you have at the University of Colorado. Definitely. So clopidogrel is an antiplatelet medication that's used in the setting of recent heart attack and stent placement in the coronary arteries, as well as stroke and other sort of thrombosis-related applications. Clopidogrel is a prodrug, so in order to be active it has to be metabolized into its active form. The pharmacogenetics come into play in that the enzyme CYP2C19 that converts it into the active form can have variations in its activity level. If it's a reduced metabolizer phenotype, then you do not have as much active drug, and therefore not as much effect on platelet function.
This is important in particular for recent heart attacks and stent placement in that, in the absence of aspirin and another antiplatelet agent like clopidogrel, there's a high chance following stent placement of what's called stent thrombosis, or blood clot forming in the recent stents. That's an extremely dangerous event, which can result in death about half the time when it occurs. So when you don't have active or sufficiently active clopidogrel, you're putting the patient at tremendous risk. In the past we've had no way of really knowing that, and so there's been a population of people who have had these events, and a lot of them pass away. So the reason it's sort of attractive as a target is that the setting is very clear: post-stent placement for coronary artery disease. The recommendation for clinical action is also very straightforward. It's using a different agent that's not affected by that variant, and it's a fairly narrow set of providers, at least at the outset, that are involved in making these clinical decisions. So as a first application, you have a very high impact clinically, you have a very clear alternative suggestion, and you have a relatively small group of people to educate and to sort of help with the design of the decision support around it. If I'm recalling correctly, beyond its use in patients who have stents, there are also people who have vascular disease or folks who have had strokes as well. And is the use of pharmacogenomics by other groups becoming as widespread as it has been in the cardiology setting in terms of the management of clopidogrel? For us, it's now available to all providers across the system. When we were building all this IT infrastructure, we started with a fairly small cohort of providers, and this was five years ago-ish, so that we could kind of get the hang of it, so to speak.
Subsequently, we increased the availability to neurologists for stroke treatment and gradually to peripheral arterial disease, and now it's just wide open for anyone to access, or the alerts are presented to anyone, I should say, and it sort of depends then on the clinician as to whether they're comfortable or feel it's appropriate to make that decision or not. So we have expanded to all 5,000-plus providers at this point for all indications. And you've made some references as we've talked to the BioBank at the Colorado Center for Personalized Medicine, but maybe we could talk about the BioBank, and then the process for obtaining individuals' genetic data, and ultimately how that information is shared with the clinician who's caring for the patient. But to start off, David, talk about the BioBank. What is it exactly? The BioBank is a repository of specimens collected from patients at UCHealth that are primarily used for genomic analysis but could be used for other applications as well. We are still really only using it for genomics. It was created in about 2014, and it had a dual vision, really. There was kind of a classic research biobank looking for genetic association kind of research and discovery. But at the same time, we also from the outset wanted to be able to return some clinical value to the folks who participated in it, and very early on, really from its design, had that in mind. So it's always been a CLIA-certified, CAP-accredited lab, so that the results can be used directly for clinical decision making and don't have to be validated by another assay, and it started very early on the pharmacogenetic path in terms of returning results into the electronic health record. Right now, the BioBank has about 250,000 people enrolled; we're almost to 150,000 samples collected from those participants, and a little over 100,000 of those analyzed by some means, whether it's a microarray, whole-exome sequencing, and so on.
And the way that it works from a participant perspective is that they enroll through our patient portal, which is part of our electronic health record system. We use a 100% electronic consent process: when they log in, they're offered the opportunity to participate. They go to the research area of that application where the consent lives. It's a self-consent model, so the patient reads it and decides whether they want to participate or not. When they participate, an order is generated so that the next time they have a blood draw for any other reason, an extra tube is collected and sent to the BioBank for all these analyses. So the goals there were not to increase cost to the patient, the center, or payers for the lab draw, and convenience, in that the patient doesn't have to get stuck on two different occasions for different things. And we find we capture more people that way, because most of our participants are getting care here and have blood draws for other reasons. So we made that transition to a fully electronic model several years ago, and it's worked quite well since then. Most of our samples are blood samples, but recently we started to explore at-home collection of saliva samples, so we mail out kits to patients who want to do it that way. They spit in the tube at home and return it to the BioBank, and we can do all the same analyses for it. You described real success in terms of enrolling people into the BioBank, yet I would imagine that on some level there might be some hesitancy from individuals regarding sharing of their genetic data. How have you overcome that and experienced such success with the BioBank? Yeah, it's a great question, and I'm sure there are populations that have more and less concern about that particular point.
The consent is pretty explicit about how the information is used and that it is not shared outside of the health system in any way in terms of the research kind of results that are generated. With respect to the clinically actionable stuff, the things that we're talking about here with pharmacogenetics, we do let people know that a certain very small subset of results will be returned to the electronic health record with the intent of benefiting their care, and those can be accessed by anyone who accesses the electronic health record. So far, that has not been a barrier that we have gotten wind of. Because it's a self-consent process, they're sort of self-selected. We won't know who has felt that way and who hasn't. The people who have enrolled have already accepted the situation and the framework there. I think because that e-consent is pretty explicit and they have plenty of time to review it. That's the other nice thing about doing it through the portal: it's not like someone throws a piece of paper at you and you have to sign it right there; they can take however much time they need to consider it. And that's a good segue. You made reference to the sharing of data that are actionable as part of a patient's care. What is the process, and you talked also about your informatics background, what is the process for moving data from the BioBank over to the EHR and making it available to clinicians for action as they're caring for patients? Frankly, it's a pretty cool process that I think represents what the future of this is going to be. But basically, we run all the arrays in-house, microarrays that have pharmacogenetic content on them. Those results go through an internal analytic pipeline just to generate variant calls, basically. Those results then go to an external vendor who sort of translates them into relevant variants, for example in CYP2C19, and then packages those into message formats to return into our Epic electronic health record.
So those genotypes and the associated phenotypes are then returned into our lab framework in the Epic electronic health record. So it's like a creatinine or a complete blood count or something like that. It arrives in that same kind of environment and into a discrete field. For example, this is your CYP2C19 genotype, and then you can sort of build reports and, more importantly, decision support around that in order to surface information to providers. We never release pharmacogenes without decision support to tell the providers what the treatment recommendations are. We believe that it's unreasonable to expect that we can keep 5,000 providers up to date on all the things we're doing, especially because that knowledge is changing frequently. So we have sort of a large toolbox of ways of providing that information to providers when they try to prescribe a medication that has an interaction with a patient's genotype. So that's kind of the conceptual version of it. The informatics behind it are pretty complex in terms of translating raw research data into a clinical message and then creating the structures in the electronic health record that can be used in the way I described. We did a lot of custom building, especially for CYP2C19, because Epic didn't have anything that really addressed this particular need. So a lot of our stuff was custom. They have since released a genomics module that has some of these functionalities built in, which we converted over to a couple of years ago. They're adding more functionality all the time, which is quite nice and convenient. But understanding the intricacies of what is required to create, for example, decision support applications was probably the first two or three years of at least Dr. Aquilante's and my work, creating the structures that do that.
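As a concrete illustration of the kind of logic that sits behind that decision support, here is a minimal, hypothetical sketch, not UCHealth's actual code, of translating a discrete CYP2C19 diplotype result into a metabolizer phenotype and a clopidogrel recommendation, following the published CPIC allele-function logic (the function names are invented for illustration):

```python
# Illustrative sketch only: map a CYP2C19 diplotype (e.g. "*1/*2") to a
# metabolizer phenotype and then to a clopidogrel recommendation, per the
# general CPIC allele-function scheme. Not an actual CDS implementation.

NO_FUNCTION = {"*2", "*3"}          # common no-function alleles
INCREASED_FUNCTION = {"*17"}        # increased-function allele

def cyp2c19_phenotype(diplotype: str) -> str:
    """Assign a metabolizer phenotype from a two-allele diplotype string."""
    a1, a2 = diplotype.split("/")
    no_fn = sum(1 for a in (a1, a2) if a in NO_FUNCTION)
    inc_fn = sum(1 for a in (a1, a2) if a in INCREASED_FUNCTION)
    if no_fn == 2:
        return "Poor metabolizer"
    if no_fn == 1:
        return "Intermediate metabolizer"   # includes *2/*17 per CPIC
    if inc_fn == 2:
        return "Ultrarapid metabolizer"
    if inc_fn == 1:
        return "Rapid metabolizer"
    return "Normal metabolizer"

def clopidogrel_recommendation(phenotype: str) -> str:
    """Reduced-function phenotypes get an alternative antiplatelet agent."""
    if phenotype in ("Poor metabolizer", "Intermediate metabolizer"):
        return "Avoid clopidogrel; consider prasugrel or ticagrelor"
    return "Use clopidogrel at standard dosing"
```

In the real system this lookup sits behind the EHR's decision support layer, so the prescriber sees the recommendation rather than the raw genotype.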
We think this is very nice because it can scale sort of almost infinitely in terms of what you can put in there, limited only by the time people have to sit down and type things, because it builds on itself. As we add more informatics functionality, it becomes easier to add more genes and more drugs, and as knowledge changes, we can update it or add applications, push new genes, all that sort of thing, without doing new testing and without the providers needing to be reeducated. So what that has meant is that we've had sort of an exponential growth curve since we went live in production with this, rather than kind of a plateau or stepwise sort of thing, which is often what happens. You do one gene and one drug, you know, run that for a while, and then do one more, and so on. We haven't been like that; we've been able to just put stuff in with pretty good speed, just in the pharmacogenetics content. So with all of this foundational knowledge in place, David, can you give us a sense at a high level of the work that's described in the article and the goals of the program? Then we're going to get into some additional details in terms of what your experience has been, but at a high level, talk about the program itself. So the work that's described there is a bit more detail on what I just kind of summarized, but the request, or the intent, I should say, from our health system and their investment in the Center for Personalized Medicine and the BioBank was to be able to deliver clinical value back. So the broad questions were: how do you get those results into the EHR and in front of providers and patients in order to do things? So the process was first trying to decide the best delivery vehicle for all of that, and we had made the decision to do it through the EHR. Once it was in there, how would we surface it to people?
Once we decided on that, we had to then design the interventions and the alerts that would be most effective and most sort of palatable, I guess you could say, to the providers, so that they would see it as an asset rather than a nuisance. And then, once we had sort of made all those design decisions, actually building all of that and testing it both with our technical analysts as well as the providers themselves. And then this phased rollout that I kind of talked about: our initial rollout was only in the cath lab at our central University of Colorado Hospital with about 10 providers. That was the very first one, and it scaled over the next two years really following that. So the article describes sort of the process and decisions that we made and the rationale behind them in terms of how we ended up doing this and how it allowed us to scale a lot faster than a lot of our colleagues around the nation. In the title of your article, you use the terminology preemptive return of results. Can you talk a bit more about that specifically, David? What do you mean when you describe the BioBank initiative as fully preemptive? Great question. So most of the research in, for example, CYP2C19 has been reactive testing. The clinical scenario comes up, let's say someone has a heart attack, you test at that point and, depending on the genotype, make a medication decision. The problem with that has been the turnaround time and sort of the appropriateness of when that decision is made, and the logistics of having it done overnight are tremendously difficult, and if you delay too much, you expose the patient to risk. So reactive genotyping has frequently been the way things are done, which is, again, testing at the time of an incident. Preemptive genotyping we think of, or we define, as genotyping that happens at some arbitrary time well ahead of any need for the information.
So in the case of the BioBank, you have your sample drawn, and these genetic results become available even though you may not have coronary disease at that point, or depression, or whatever condition it is that we have decision support for, and then those results are available right at the time when you begin treatment for those things. So the genetic results are just there, and then they surface when they become relevant. We sort of think that this is the way genotyping probably should go: you get genotyped once early in life, and then those results are utilized the rest of your life as appropriate, rather than in this reactive, one-off pattern, because you can do an enormous number of genes all at once rather than drawing a new sample each time and retesting for the next gene and the next gene. So in the case of the BioBank, we have people who have results from eight years ago that did nothing until last week, when they triggered an alert because the patient was found to have depression and started on escitalopram or something like that, and then it became available then as far as dosing recommendations or alternative agents and so on. That's preemptive in that it's done without any particular indication, well ahead of time, and then you use it when it becomes relevant. And you stimulated a couple of other questions. I guess the first is, continuing with clopidogrel as an example, any sense at this point of the number of patients who are started anew on clopidogrel at UCHealth, and what percentage of those individuals have preemptive results available to guide their care? 65,000 patients have CYP2C19 results in their record right now, and that grows by a few thousand every month.
As far as how many of those develop coronary disease and have an intervention, or a stroke, or something like that, I don't have the most recent numbers, but we have about 2 million patients total at UCHealth, and again, of those, we have about 100,000 samples that have been run and 65,000 or so with results that have been returned. So I don't know if I can do the math on the fly on that one, and I should say that we return results for seven genes from the BioBank now. So that translates right now into about 400,000 individual pharmacogenetic results for those 65,000 patients. And when you add all that up, it ends up being that most, I think it's like 80% or something like that, of all of those participants have at least one actionable variant in their health record now, and that's only seven genes. So it's becoming more and more obvious, providers are becoming more and more used to it, it's becoming important in more and more acute and high-risk settings, and in some cases it has been life saving. There's another example you mentioned, the patient with depression who's treated with escitalopram. We know that there are a number of drugs where we have a good sense of how they're affected by CYP2C19. How extensive, then, is the program at UCHealth in terms of the medications that are actionable and for which therapies are being modified based on the pharmacogenetic data? There are a couple of different facets to that. One is that early on we decided we were not going to adjudicate treatment recommendations for this internally as an institution. So we use the established CPIC guidelines in terms of drug-gene pairs and which of those to choose, and then the recommendations that follow from that, we pretty much follow to the letter. We believe that's the strongest evidence, and it kind of keeps any biases we may have out of it.
So given that, when we do go live with a new gene, we do all of the CPIC guidelines that are relevant to that drug-gene pair, or that gene anyway, in terms of the decision support that's built. We have different types of decision support depending on the acuity or the risk associated with the situation, with things like antidepressants being more passive alerts, where they're sort of not disruptive in any way. They're educational within the order itself: say, you know, just so you know, their metabolizer status is abnormal and these are the recommendations around that, but it doesn't stop your workflow in any way. Whereas things like Plavix, or DPYD and chemotherapies, or things like that, those will put a wall in front of the providers: you know, we need to know that you saw this, and this is the alternative if there is one. So-called interruptive. Yeah, those are interruptive alerts, right, where you have to acknowledge the information in some way. So the scope is the CPIC guidelines, and then there's an added layer of the method with which we provide that information back to the clinician, which does impact kind of how forceful our voice is. That translation of the CPIC guidelines into clinical decision support: so you have the data coming in from the BioBank, and you've described that that's a pretty complex process involving informatics, and you're bringing in the CPIC guidelines to develop the clinical decision support. Talk a bit about the clinical decision support itself and updates to it. Who's involved? How long does that process take, to have actionable clinical decision support integrated into your EHR for the various genes that are affected? I think that's part of the secret sauce of our program, the way that's built.
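The tiering described here, passive advisories for lower-acuity interactions versus interruptive, must-acknowledge alerts for high-risk ones, can be sketched in a few lines. This is an illustrative toy, not UCHealth's configuration; the drug-gene pair assignments and function name are hypothetical examples:

```python
# Toy sketch of tiered pharmacogenetic alerting. The pairs and tiers below
# are illustrative examples, not an actual CDS configuration.

# High-risk interactions that warrant a hard-stop, must-acknowledge alert.
INTERRUPTIVE_PAIRS = {
    ("CYP2C19", "clopidogrel"),   # e.g. post-stent thrombosis risk
    ("DPYD", "fluorouracil"),     # e.g. severe chemotherapy toxicity risk
}

def alert_mode(gene, drug, phenotype_actionable):
    """Pick how (or whether) to surface a drug-gene interaction to the prescriber."""
    if not phenotype_actionable:
        return None  # normal metabolizer, etc.: no alert at all
    if (gene, drug) in INTERRUPTIVE_PAIRS:
        return "interruptive"  # provider must acknowledge before proceeding
    return "passive"  # informational text shown within the order itself
```

The design choice this captures is that alert fatigue is managed by reserving workflow interruption for the situations where the clinical stakes justify it.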
So we have a joint team that involves four PharmDs, myself as an MD, and then I think three or four of our Epic analysts, you know, folks who are on the clinical decision support team operationally for the whole system. The PharmDs and, you know, clinicians like myself kind of develop the recommendations and the trigger criteria in terms of when an alert surfaces, and work with the Epic team to translate that into the software options that are present. So we have a team, if you include all of the leadership and so on, of about 10 people who are focusing on that, and it's that collaboration that I think is really what allows this to be so impactful. As far as the time it takes to implement something or develop a new thing, it's been changing over time, in that we had to build a lot of new technology early on, and so it took, you know, years to make the first thing, but now we can do new drug-gene kind of applications within a few months or less, depending on how different it is from previous things. So for example, CYP2C19 is very clear, like it's very explicit in terms of all of it. Warfarin dosing is not; warfarin dosing is a lot more complicated, and that's what we're up to now, how do you do that, and so warfarin has taken a little longer, several months. But things that are far less complicated, like say CYP2D6 and ondansetron or something like that, happen relatively fast, because we have the tools built already. So that's why I say it's accelerating over time, because, A, it's not like you have that many different tools to use within Epic, but, B, you don't really need that many tools given the workflows that clinicians have, where we've kind of figured out where to place stuff. So if you've got something that's fairly discrete like CYP2C19, it would probably take a couple of months.
So when you talk about a team of 10 people and the effort that goes into doing this, what that says to me is that there's apparently a pretty major organizational commitment to this program, and I think one of the things listeners would be interested in knowing is how you successfully make that case to the C-suite to invest in this program to advance care in the organization. I'm wondering how that was done so effectively at the University of Colorado. What drove organizational investment in a program like this? Our health system has been really tremendous in allowing, or helping, us make this happen. As far as how it came to be, they were an investor in the BioBank from the very beginning in terms of financial support, and they had made it clear throughout that they wanted to eventually have clinical impact. They already had an idea that they believed this kind of technology is going to be part of the future of medicine. So they had sort of made that decision long ago, and it was a matter of the how and what we needed to do it. All that is to say, I think we enjoyed having an engaged C-suite early. As far as how to engage the C-suite early, the clopidogrel project, for example, made it easier to sell to other people who hadn't been engaged in that initial investment, in that it was a life-or-death interaction, it was a pretty clear recommendation and all that sort of thing, and it was very visible, and it was defensible to anybody who would say, why are you doing this new technology thing that's not covered by insurance? So I think in addition to an engaged leadership who already has some vision for it, the other thing was being very strategic in what applications we chose and then being able to frame it to them in a few different dimensions. One was how much is it going to cost to do? Two is, is there value that can be returned?
Three is, what are the optics of it to either the health care provider or the patient community, and maybe it can become an actual draw to our health system, which it has. And then, sort of, where does this go in the long term as value-based care comes into play? So we were pretty thoughtful about how we escalated, in that rather than just, like, jumping into the C-suite with a very messy implementation and a lot of confusion around it, we were deliberate in it. It's not just saying this is kind of neat to do, but really saying how does this fit into the future of medicine and kind of the operations or the business of things, and we were creative, I think, in the dimensions we went after, like marketing, for example. So that's a good segue into my last question for you today, David. Maybe my second to last, because there's another one I want to get in there. But what have your experiences with the program been so far? My personal experiences have been, I think if I had to use a word, it would be inspiring, and pride in the team. I've lived in this clinical informatics space for a long time, and I interact with a lot of people around the world, and the team here is one of a kind, like, truly, and that's from the analysts all the way to the chief innovation officer for UCHealth as a system, and each has played this really pivotal role in making this happen. I think what it has allowed us to do is create a, not future-proof, nothing really is future-proof, but I think the most progressive and forward-looking strategy for doing this, and I don't know of another place where it could be done, because of the mix of people who happen to be here, or where it could have been done the way that we did it when we started.
So seeing all that come together, seeing kind of the way the university and the health system have been able to combine to produce this truly novel way of doing things, which everybody assumes is kind of already there somewhere but hasn't been, has really been amazing, and watching people's growth in those different aspects and collaboration has been, I think, the most inspiring to me. And I also could imagine that you will be inspired as you see this become more and more pervasive throughout the community, so that every patient can be advantaged by having access to their pharmacogenomic information. What's your vision, or is there a vision already in place, in terms of movement further into the community and community hospitals? I think sometimes the reaction is, well, these things are easier done at a large academic medical center, but what about in the community setting? What about in the smaller and rural settings? Where are we going in that direction? We're kind of in the process of exploring that now. UCHealth is a large system that has both academic and rural, like pretty small, community clinics and hospitals and so on. So we're getting some sense of that, in that we have a centralized testing process that becomes immediately available to everyone throughout our health system. In terms of beyond UCHealth, I think that there's a lot of data sharing technology that would make this possible. I think that right now we're still in a position where, like, if you get genotyped at UCHealth, it's not going to help you that much if you move to Maryland, for example, and you might need to have that done again there if the system there doesn't have the technology.
What I would hope is that we arrive at a place where you could get genotyped once and have it available through various data sharing and health information exchange strategies, so that it would be kind of available everywhere, and it becomes less about the availability of an academic medical center and just about whether you can access your regular health data. I think that's the way that it can get out, and I imagine that as EHRs move to the cloud that will become much easier; that's sort of my impression, and looking at other places in the world, I think that is starting to bear out. To me, and of course I'm an informaticist, but a lot of it's data science to share that information widely. There are technologies to share decision support. We've done some of that, we work on some of that, but rather than each site building their own CYP2C19 clopidogrel thing, which is how it is now, being able to share those kinds of applications would just make it easier for small venues to use them as well, because they don't need the team to build it. Dr. Aquilante and I have a grant to investigate this with colleagues in Montana, as far as what the barriers are to implementing these things in small rural hospitals and even in, you know, First Nations care settings and all of that, so it's something we're very sensitive to. I think the technology and the environment are not quite there yet, but that's how I would do it if it were me. And I hope I get a chance to talk to you about the results of that work in those settings. And with that vision, that's all the time we have today. I want to thank Dr. David Kao for joining me today to discuss his article, "Implementation of clopidogrel pharmacogenetic clinical decision support for a preemptive return of results program," which was recently published on ajhp.org. Please join us here each month for discussions on contemporary practice issues and interviews with AJHP authors.
If you've enjoyed this podcast, please share it with your colleagues and via your social media of choice. Thank you for listening to AJHP Voices. For more information about AJHP, the premier source for impactful, relevant, and cutting-edge professional and scientific content that drives optimal medication use and health outcomes, please visit ajhp.org. [Music]