Quality Considerations for a Robust Data Integrity Program


Accurate and complete data gives us confidence in product safety and quality, so data integrity demands a great deal of attention in the life science industry. This is truer than ever with increased focus from the FDA, the EU, and industry standards bodies on data integrity issues and best practices. Guidance seeks to clarify the role of data integrity in current good manufacturing practice (CGMP) for drugs, in response to an increase in data integrity lapses in recent inspections. Data integrity procedures and programs alone are not sufficient: they must be part of a larger data governance framework.

Video Transcription:

Christy: Welcome to today’s USP and Pharmatech webinar, Quality Considerations for a Robust Data Integrity Program. Thank you for taking time out of your busy schedules to join us. Next. Each registration is unique and grants the right for the individual registrant only to view this webinar. Sharing this webinar with unregistered viewers, either during or after its performance, violates USP’s copyright and is prohibited. Next slide. If you have not already done so, we ask that you take some time to read the USP Education Course Disclaimer. You can download this presentation, including the disclaimer, from your USP Education account. Please follow the instructions in your confirmation email to download the presentation. Next. We have two presenters for today’s webinar: Stephanie Gaulding, Managing Director, Pharmatech Associates, and Mike Porter, Vice President, Regulatory Compliance, also with Pharmatech Associates. Welcome and thank you to you both. I will now hand the presentation over to Mr. Porter.

Mike: Thank you. And Stephanie and I would both like to thank USP Education for the opportunity to present today and to each of you for joining. We look forward to interacting more directly through the Q&A portion of the program at the end. We’ve taken today’s presentation on a robust data integrity program and broken it into three key areas. We’re gonna start with the global regulatory focus. We’ll then talk about the key elements of a robust program, and we’ll talk a little bit about audit trail review and the expectations around that before moving into the Q&A. But to get us started, I’ll turn it over to Stephanie to talk about the global regulatory focus.

Stephanie: Thank you, Mike. Very happy to be here today and talking with you guys about considerations for a robust data integrity program. As Mike and I were preparing for our discussions today, we really wanted to start with kind of laying a foundation around what we would refer to as a re-sharpening global regulatory focus around data integrity. And the reason I say re-sharpening is that data integrity has really been around in our industry for a long time. Basically, all the way back to the early days of regulations, integrity and quality of data has always been a concern of health authorities, whether here in the U.S. or in other locations around the world. But I would say that over the last probably 10 to 15 years, we’ve seen a lot of increased focus and scrutiny on data integrity issues and even more concrete discussions and guidance.

And what we wanted to highlight in this slide is that if you look at the prevalence of various health authority and industry-level guidance, you still see guidances coming out. Some of the earliest guidance documents that we got, I think one of the earliest, was the MHRA guidance specific to data integrity. But very quickly, we did see guidance coming from TGA. We have seen information coming from the WHO. We’ve seen guidance coming from the FDA. We’ve also seen in the last few years ISPE and PDA, two well-known industry associations that operate globally, put together their own interpretations of data integrity. So, ISPE, in partnership with GAMP, has built a really nice practical guide for data integrity.

And PDA has actually looked at it from two particular lenses that are similar, yet different, looking at laboratory systems and manufacturing and packaging systems. But what you see is this increased focus and increased transparency around defining those requirements. And one of the things I like to correlate, Mike, if you’ll transition to the next slide, is a lot of the reason I fundamentally believe this has happened within the industry: if you look at the nature of our operations over the last, say, 10 to 15 years, we have gotten more automation, more computer systems, more opportunities for issues to arise. And when you look at data integrity today, as we’re sitting here in December of 2022, almost at the end of the calendar year, data integrity is really becoming part of a larger structure that’s out there around data governance.

As more automation exists, as more systems exist, as more crosstalk exists, there is this ever-growing understanding that fundamentally centers around, at the core of this figure, people, processes, and technology. And we’ll circle back and touch on this as we go through today. These three core elements of data governance, people, process, and technology, are rooted in all of the data integrity requirements and recommendations that you see in those guidance documents from health authorities and industry best practices. There are some things in data governance that go beyond data integrity, like data quality, metadata management, and data security, so there is certainly that element that is there. And I think it’s important to understand and have that context as we talk about requirements for a robust data integrity program: to link that and understand that it also should be part and parcel of a larger data governance framework within organizations. But since we’re gonna sub-focus on data integrity, I’ll hand things back to Mike and he’ll kind of walk us through and start that conversation on robust program elements.

Mike: And you’re right, Stephanie, most of the remainder of the presentation will take that sub-element of data integrity under the data governance model. And we wanted to start by looking at a couple of the hallmarks of a robust program, or if you will, sort of the big-picture objectives of what we’re trying to achieve with the data integrity program we implement at our companies. And part of this is that we want this to be integrated into our existing quality management system processes. This really should not be a separate standalone program that requires additional resources or a separate focus, but really should be integrated. And we’ll talk about that a bit as we move through today’s discussion, to look at how to integrate that effectively to make it efficient and to leverage existing resources to help with this.

We also wanna make sure that we’re really focusing in on the prevention and detection of any issues related to data integrity. Really, this is to help support our inspection readiness; it’s also to help ensure a robust supply of product to patients, because we don’t want to be running into issues that prevent us from releasing product for patient use. So, how do we identify these issues through this process, and how do we work to build solutions that will prevent these issues from happening? It’s important to note that this also applies to both paper and electronic records. While we’re gonna talk about systems and audit trails and audit trail review, keep in mind that this is also relevant to data in paper-based systems that are in effect today, and that we need to consider that as we’re looking at the overall program.

Also, we want to drive reliable and accurate data as the outcome. Again, a robust patient supply and patient confidence in that supply are important, and this is the data that underlies that as we release our products into the marketplace. And finally, we want the process to be flexible and risk-based. And we’ll talk more about that as we move through the course of today. And speaking to that piece, I think Stephanie mentioned the MHRA guidance from March of 2015. And clearly, the regulators are saying the same and emphasizing the same: that the program should be commensurate with the risk to product quality. So, we need to understand that risk, and we need to build the effort for data integrity in a manner that supports the risk that we’ve identified.

And it needs to be balanced against other demands. This shouldn’t be all-consuming at the expense of other elements of our quality management system but should be integrated and balanced across that. Really, we’re looking for a system that’s designed and operated in a way that provides an acceptable state of control, right? But we have to define what acceptable is, and we need to have the justification written down and documented to support that, so that when we present it to inspectors at the end of the day, it’s well understood and we can show that we’re in compliance with the program that we’ve defined.

Stephanie: Yeah, so one of the core principles that you’ll see discussed when you start to talk about data integrity, as a concept as well as a program, really centers around these concepts of ALCOA and ALCOA Plus. And for those that are based in the U.S., you’re likely very familiar with ALCOA, but you may or may not be as familiar with ALCOA Plus. ALCOA Plus really is an add-on of a couple of core attributes beyond the original ALCOA acronym, ALCOA being attributable, legible, contemporaneous, original, and accurate, the words that actually make up the acronym. Especially in the European community and other spots around the world, regulators felt that maybe that wasn’t quite enough and added a few more hallmarks to that definition, and thus ALCOA Plus was born by adding in complete, consistent, enduring, and available.

And I wanna talk a few minutes about each of these components as it comes to data integrity, because really as you look at building programs, these are the nine things that we are trying to do with these programs. We need to make sure that data, regardless of whether data is coming to us in paper records or through electronic systems and you’ve got a full electronic batch record system, electronic QC systems, electronic release, any of those points along that spectrum from full paper to full electronic, all of these elements need to be there. The principles are the same, regardless of which modes you’re operating in. And that first piece really is attributable. I mean, many of us who…like me and Mike who’ve been in the industry for, you know, 30-plus years at this point, you know, we remember the old days where it was drilled into us. And maybe some of you still experience this, you know, every time you’re writing data or you’re recording things, you’re putting initials down, you’re putting dates and times down. The reason behind that is we’re trying to attribute that data to the person, the activity, and the date and time where it was generated.

So, that attribution of the data and connection of the data to the person is fundamentally one of the most important and central elements of what we’re doing. Legibility, I love computer systems because it really kind of takes the legibility factor out of the equation. But, again, if you’ve been around for a while and you’ve lived in, or maybe you are currently living in a quality management system where you have a large degree of paper records, legibility is a concern. You need to make sure that all of the data is permanent. It’s understood by anybody who looks at it.

It lives the length of time that the records actually need to be kept by the local regulations or marketing regulations around the world. So, legibility is certainly there, and I am one who likes to tell people nowadays, you know, my handwriting when I was just out of school and coming into industry was much neater, much easier to read than it is today. So, legibility is always a concern, right? Making sure that not just at the point where you record the data, but two or three or four years down the pike, somebody can read and understand what you wrote.

Contemporaneous, this is really making sure that data is captured real-time. I love when I teach GMP basic training at companies, I love to take folks through an exercise. I want you just to think about this: do you remember what you had for dinner last night? Well, what about three days ago? What about a week ago? What about three weeks ago? Right? If you don’t record and capture that information at the time you’re doing it, the likelihood that you’re gonna have a good memory of what that was even 2, 3, 4 days later is significantly reduced. So, you really wanna make sure that data is captured as you are generating it, especially if you’re in a paper-based world. The computers almost take care of that for us. But when you’re in a paper-based world, you just need to make sure that you’re capturing everything as it comes out.

Original. I come from a time in industry where original meant the original handwritten records. As we’ve migrated towards technology and adoption of computer systems, original has taken on a new connotation. It can be either that original paper record, if that’s where the data was generated, or it can be an original electronic record. And that needs to be maintained over the lifespan in which you need to maintain those records. And, of course, with originality, we need to make sure that the data is accurate. We need to make sure that we’ve caught the errors if they’re there and we fix them. We wanna factor in rounding, dates, times, weights, and make sure that the data is truly transparent. And with accuracy comes completeness. This is often as simple as making sure that all information’s recorded, you know, all the sections are filled out in records. This can mean you may have policies about how to handle unused sections. Maybe you’re marking them as N/A, maybe you have other practices in your computer systems as to how you handle those unused sections. But you wanna make sure that the record at the time it was completed is clear, transparent, accurate, and complete, and that there isn’t an appearance, maybe, that there’s missing information in the record.

You wanna make sure that data is consistent. And this isn’t just consistency between records. So, if you’ve got multiple computer systems, maybe you have an enterprise resource planning system and a QC LIMS system, you wanna make sure that they’re consistent between them as you’re passing data back and forth from one to another. But you also wanna make sure that as people are recording data, they’re recording it in the same way, right? That people don’t record dates and times in different looks and feels. It just makes it more challenging to interpret and read the data as time progresses. And then, of course, the last two pieces of the ALCOA Plus philosophy are really two related attributes: being enduring and being available.

And enduring and available had one connotation, I think, when I came into industry: you talked about scads of paper, lots and lots of paper being stored. So, making sure you had the right ink choices, that you had good paper quality, and that you stored the records in physical storage locations where you could actually get them back for inspection or investigation purposes. But nowadays, that’s become an interesting conversation when you start to look at computer systems: making sure, as computer systems upgrade, that the data is still retrievable. Maybe you’re working in a laboratory and you’re dealing with a spectrophotometer that’s maybe 15 years old. Well, how do you make sure that that data is still around and retrievable when you need it, should you need to go back to those files? So, again, making sure that you’ve got that forward portability in electronic systems addresses making sure that we have enduring records and that they’re readily available when we need to use them.

So, again, these are the fundamental principles which support those hallmarks that Mike talked about in a data integrity program that says, “We need to make sure that our data answers and addresses these nine elements so that we are sure and confident in the data that goes into our product applications that we use to make release decisions on.” We need to make sure that we have these elements addressed. So, Mike, if you’ll go to the next one.
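To make the nine ALCOA+ attributes a little more concrete, here is a minimal sketch, in Python, of how an electronic record might carry that metadata. The class and field names are hypothetical, invented for this illustration rather than drawn from any particular system or guidance.

```python
# Hypothetical record structure mapping the ALCOA+ attributes onto fields.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)  # frozen: the entry can't be edited in place (Original, Enduring)
class QualityRecord:
    record_id: str           # stable identifier keeps the record Available for retrieval
    performed_by: str        # Attributable: who generated the data
    recorded_at: datetime    # Contemporaneous: captured at the time of the activity
    value: str               # the datum itself, stored as entered (Original, Legible)
    units: str               # explicit units support Accuracy and Consistency
    source_system: str       # where the data originated (paper scan, LIMS, HPLC, ...)
    sections_na: tuple = ()  # unused sections explicitly marked N/A (Complete)


# Create the record at the moment of measurement, with a UTC timestamp for Consistency.
entry = QualityRecord(
    record_id="REC-0001",
    performed_by="analyst.jdoe",
    recorded_at=datetime.now(timezone.utc),
    value="7.2",
    units="pH",
    source_system="QC-LIMS",
)
print(entry)
```

The frozen dataclass is one simple way to express that an original entry is never edited in place; corrections would be new entries that reference the old one.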

And so, for here, as I started to talk about towards the end, it’s certainly as computer systems really came into place that this conversation around data integrity started to escalate. I don’t know how many of you were in industry at the time when the U.S. FDA put out the original Part 11 regulation and there was a lot of conversation around all different kinds of requirements. But certainly, one of those hot topics, and it is still a very hot topic, is audit trails. And if you think about it, we don’t apply audit trails to paper records because we have that physical signature. If we make a change, we’re actually recording it: we’re all familiar with lining something out with a single line, writing the new entry, and writing a comment to address why we’ve made a change in the paper record.

But for computer systems, what the health authorities have told us is that we need that same level of assurance and trail through the electronic data as well. And so, regardless of where you look in the worldwide regulations, audit trails are required for any type of computerized system. But part of the reason we’re focusing our conversation on audit trails a little bit this afternoon, or this morning, depending on where you are globally, is that there are lots of layers to audit trails. Audit trails are gonna capture changes from the moment a record is created. So, they’re gonna capture who created the record, when the record was created, and the data set that goes with the record and all of its metadata at the time it was created. But then you also have to consider each and every time that record is modified, or if you delete that record: you need to understand who completed the task, when the task was done, and why the record was modified or deleted.

A lot of folks, and especially when you look at the systems themselves, they’re great at capturing the who and the when, but sometimes they get a little bit, for lack of a better word, squishy about capturing the why. And sometimes you might need an extra paper log or some sort of procedural control that helps keep track of why things are getting deleted. The why is just as important as the who and the when, when it comes to modification and deletion. And when I talk about the layers, thank you, Mike, I wanted to draw a picture of how complex audit trails can get in our modern computer systems.
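As a sketch of that point, here is one way, in Python, that an append-only trail could be made to insist on the why as well as the who and the when. The function and field names are hypothetical, purely for illustration of the who/when/why requirement described above.

```python
# A minimal sketch of an append-only audit trail that refuses to record a
# modification or deletion without a documented reason. Names are hypothetical.
from datetime import datetime, timezone

_trail = []  # append-only: entries are added, never edited or removed


def log_event(record_id, action, user, reason=None):
    """Append one audit trail entry capturing who, when, what, and why."""
    if action in ("MODIFY", "DELETE") and not reason:
        raise ValueError(f"{action} on {record_id} requires a documented reason")
    _trail.append({
        "record_id": record_id,
        "action": action,                        # CREATE / MODIFY / DELETE
        "user": user,                            # who
        "timestamp": datetime.now(timezone.utc), # when
        "reason": reason,                        # why (mandatory on change or delete)
    })


log_event("SAMPLE-42", "CREATE", "analyst.jdoe")
log_event("SAMPLE-42", "MODIFY", "analyst.jdoe", reason="Corrected transcription error")
# log_event("SAMPLE-42", "DELETE", "analyst.jdoe")  # would raise: no reason given
```

Where a system cannot enforce the reason itself, the paper log or procedural control mentioned above plays the role of this check.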

So, I’m gonna use an HPLC system as an example. Many of you are probably familiar with this. There’s usually a computer system that’s acquiring the data, and there’s usually a larger processing or data storage system, whether that’s a chromatographic platform, or it ultimately feeds into a LIMS for your sites. But what you’re looking at with audit trails and data integrity is varying layers of complexity. You start first at the system level, and at the system level, you will start to see things like who’s logging into the system, who’s logging off of the system, and system-level configuration changes being audit trailed, tracked, etc. And then when you get beyond the general entry point, you’re gonna start to look at the specific application, right? So, looking at a particular test method, for example, you start to see how that application is set up and what changes to settings and parameters are made there. Along that same level of application-level changes that are audit trailed, you will also see product-level information.

And really what this gets at is: how are you storing the data in the system, and what is that storage mechanism? Are you storing it by product class, by product code, by molecule type? What does that look like? And what is the audit trail around that storage location where the data actually lives? And the most nitty-gritty levels of audit trails, as I tend to refer to them, are the very specific methods in a chromatographic system, the sequences in which we run the data sets and the associated results, the real ins and outs of, if we’re testing a sample, how that sample moved through the system, capturing all of that data. All of this information is audit trailed at some level, and you need to make sure that you capture not just the information from the record creation but also, more importantly, modifications and deletions. And part of the reason I picked a chromatographic system as an example, just to drive this point home, is that we’ve worked with a lot of clients.

And as we work with our clients, one of the things that we get to see is them go through routine GMP inspections, pre-approval inspections, and post-approval inspections, and there’s almost always an element of focus when it comes to QC testing laboratories in particular, and these complex chromatographic systems. And having people who understand how these systems function and this layering approach is important. For example, there are systems out there that create temporary processing files as they’re moving through the data, and then the system itself deletes that temporary file in favor of the final data file. And we often find that if you don’t have people who understand how these applications work, how these audit trails are layered, and what all of those different pieces mean, then you can run into some difficulties as you’re getting questions from a health authority.

They may look at the audit trail, they may ask you to pull it up or print it out for them, and they may see that something’s deleted. And if you can’t explain that to them, then you might end up in a situation where you’re dealing with a formal observation from a health authority. So, you certainly wanna understand, in these super complex systems, how audit trails layer together, and what is ultimately generated and handled by the system automatically versus what is actually tied to user actions or interfaces between the user and the system. And the PDA guidance that we adapted this particular image from has a lot of great detail on exactly this topic, which is part of the reason I wanted to make sure that we touched on this today. So, Mike, if you go to the next one.
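One hedged illustration of that layering: if each audit trail event is tagged with its layer and with whether it was system-generated, the user deletions an inspector will want explained can be separated from routine system housekeeping, such as the temporary-file cleanup described above. The layer names and event fields here are invented for the sketch.

```python
# Illustrative only: classify audit trail events by layer, then surface the
# user-initiated deletions separately from automatic system housekeeping.
LAYERS = ("SYSTEM", "APPLICATION", "PRODUCT", "METHOD_SEQUENCE_RESULT")

events = [
    {"layer": "SYSTEM", "action": "LOGIN",
     "user": "analyst.jdoe", "system_generated": False},
    {"layer": "METHOD_SEQUENCE_RESULT", "action": "DELETE",
     "detail": "temp processing file", "system_generated": True},
    {"layer": "METHOD_SEQUENCE_RESULT", "action": "DELETE",
     "detail": "injection result", "user": "analyst.jdoe", "system_generated": False},
]
assert all(e["layer"] in LAYERS for e in events)  # every event belongs to a known layer

# System-generated deletions (e.g., temp files) are explainable by design;
# user deletions are the ones that need a documented reason and review.
user_deletions = [e for e in events
                  if e["action"] == "DELETE" and not e["system_generated"]]
for e in user_deletions:
    print(f"Review needed: {e['layer']} deletion of {e['detail']} by {e.get('user')}")
```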

Mike: Yeah. And, Stephanie, really to follow up on the complexity that you’ve been discussing, we wanted to highlight the fact that that complexity needs to be analyzed and understood at the point that the system is being validated, and not after the fact when the system is in use. And I think it’s important to understand the complexity that Stephanie just talked about, the layering, and how that’s going to look in use as you begin to analyze the data from those audit trails, but to make sure in the validation that you’ve understood that approach, that you can test the accuracy, and that the audit trails will meet the regulatory and organizational requirements. Audit trails have come a long way. Early on, there were some complexities with trying to utilize an audit trail effectively to be able to look at the data after the fact.

Most systems now are maturing in their approach to that. And understanding that as part of your system validation and how that’s intended to be used is important and certainly should be documented as part of your system acceptance that you have a defined process to examine the audit trail and to use the data for data integrity purposes. And again, as a part of what we talked about earlier, making sure that this is integrated, we should leverage this activity as part of the validation master plan, or, you know, elements of our quality management system tools that already exist and are in place to support the validation activities and make sure that the audit trail and the use and the testing of that audit trail is a part of the overall system acceptance.

And so, if we have a system and it’s validated or gonna be part of our validation, then there are a couple of key steps that we wanted to walk through to look at an audit trail review program. And it really starts with first determining the system risk. Most people have a process in place to do this already, to understand whether the system is GxP or non-GxP and understand the overall risk of the data that’s contained within the system, whether that’s related to patient safety, product efficacy, quality, or business risk, and the overall complexity, right? There are numerous factors that can go into determining the risk of the system that’s being utilized, you know, the laboratory equipment or the computer-based system that’s being evaluated. And that risk is the first step in determining the audit trail review requirements that need to be in place.

And once we understand that overall system risk, then we can dive into the data that it’s collecting, because most systems will have a significant amount of data for review. And the question is: what is the most critical data that I need to look at? Which of that data is most important to me for understanding patient safety or product efficacy, and, you know, which data is it that I want to focus on? So, again, this gets back to the MHRA comment about making the effort commensurate with the risk, right? And we’re probably not going to have the capacity to look at all the data. So, if that’s the case, then how do we determine the critical data we need to evaluate, and what’s the justification that we provide for that decision? Once I know the data, then I can look at my process for reviewing that data.

In many cases, I might already have a process in place to do that. If it’s data that’s looked at in batch release or for lab results, you know, I might already be looking at that data with an existing process, and I can point to that and use that existing process rather than creating something on top of the work that’s already happening. But in other cases, it may be data that I need to review where there is no process, and I need to define what that process is going to be. Once I know what that process is, then I need to understand how often I’m gonna look at it, and to what degree, right? Is it something I’m gonna look at every batch release or every lab test that comes off the equipment, or is this something that I’m gonna review quarterly and look for issues within the data on a quarterly level?

And once those decisions are made and the justifications understood, then I need to document all of this in my audit trail review plan, which is a formal plan for that system that captures the outcomes of these decisions and also becomes a nice training tool for people who might be doing the audit trail review work, or something to hand to an inspector when trying to explain the thought process behind the decisions that were made. And once the plan is in place and I understand what needs to be done, then I also need to make sure that I’m evaluating the effectiveness of the implementation of that plan through my auditing process. And this could be, again, integrated into your existing internal audit program, where you can make sure that through that program you’re auditing a number of these plans throughout the year to make sure that, you know, we have compliance, that we have robust metrics, and that any issues are being escalated and dealt with.
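As a rough illustration, the outcome of those five decisions might be captured in a structure like the following. This is a hypothetical sketch, not a template from any guidance, and every field name and value is invented.

```python
# Sketch of an audit trail review plan capturing the five decisions above.
from dataclasses import dataclass


@dataclass
class AuditTrailReviewPlan:
    system_name: str
    system_risk: str       # step 1: overall GxP risk of the system
    critical_data: list    # step 2: which data the review will focus on
    review_process: str    # step 3: existing QMS process, or a newly defined one
    review_frequency: str  # step 4: per batch, per test, quarterly...
    justification: str     # documented rationale behind each decision
    audited_via: str       # step 5: how effectiveness of the plan is verified


plan = AuditTrailReviewPlan(
    system_name="QC chromatography data system",
    system_risk="High (GxP; data supports batch release)",
    critical_data=["injection sequences", "result modifications", "deletions"],
    review_process="Second-person review at batch release (existing SOP)",
    review_frequency="Every batch release; full trail quarterly",
    justification="Release-critical data reviewed per batch per risk assessment",
    audited_via="Annual internal audit of plan execution and metrics",
)
print(plan.system_name, "-", plan.review_frequency)
```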

So, this five-step process kind of takes us through to get to an appropriate level of work, to review the critical data, and to show that we’re managing our data integrity program in a robust manner. And with that, there are a number of other elements to be considered. If we think about the audit trail itself, I’m not just interested in whether the number 10 in the audit trail is where it’s supposed to be, but also in looking back at that metadata that Stephanie was talking about a bit ago: who entered that data, when was it entered, was it changed, and if it was, why? To look for potential flags that might raise questions around the data, I need to make sure that that metadata is available and being looked at as a part of the overall consideration.

I also need to make sure that, you know, I’m thinking about this in the larger context of that system, and this is a good time to step back and also evaluate my system access. Have I updated my system access to remove people that no longer need it or have left the company, right? Because that also can impact the overall data integrity concerns. But the people that we choose to perform this audit trail review task are important as well, right? We want people that see the various puzzle pieces and are interested in looking at each one to evaluate whether or not the puzzle comes together coherently and compliantly, or whether there are potential issues that need to be flagged and discussed. And so we want these people to be inquisitive. We want them to wear an investigator hat as they take on this task of reviewing the audit trail.
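That access check in particular lends itself to a simple, repeatable comparison. A minimal sketch, assuming you can export the system’s user list and an active-employee roster from your IT or HR systems (both lists here are hypothetical):

```python
# Flag system accounts that are no longer on the active roster for removal.
system_users = {"analyst.jdoe", "analyst.asmith", "contractor.old", "qa.lee"}
active_roster = {"analyst.jdoe", "analyst.asmith", "qa.lee"}

stale_accounts = system_users - active_roster  # set difference: accounts with no active owner
for account in sorted(stale_accounts):
    print(f"Flag for removal: {account} (no longer on active roster)")
```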

And certainly, we need to make sure that those people are trained, that they understand the audit trail review plan, they understand the justification behind it, and they understand the significance of the work that they’re doing. This comes together then to provide a robust process and objective evidence to show that we do have integrity of our data and that the data can be relied upon in an inspection when we get to that point. But if I have this robust program in place and I’ve done all of this work, then, you know, I think it’s important that we also discuss what happens when an issue might arise. And, Stephanie, I think this is something that needs to be planned ahead of time so that once an issue starts to arise, we’re ready to address it.

Stephanie: Yeah, agreed, Mike. And I think that when we look at this question, because, you know, things are gonna go wrong, mistakes are gonna be made, data’s gonna need to be corrected. And, you know, really, as Mike talked about in his key elements, when issues arise, having the right defined escalation pathways and the right investigation pathways in place is really that first level of triage opportunity. But it’s also important to understand that there are ramifications of data integrity issues when they’re uncovered; they can have trickle effects, and they can delay a product approval. We worked with a client a couple of years ago that was dealing with a CDMO that had an inspection issue come up where there were a lot of questions around some laboratory data, and the product approval was being held up.

And, you know, in this case, it was a really valuable medicine that needed to get to a patient population who really could use it. So, to see that product get delayed because of something as simple as their CDMO not having the right level of control around data integrity, and audit trails in particular, was certainly painful for us. And just as an anecdote, they’re still trying to get that product through with that CDMO and address and resolve the issues. So, you know, delays can be months, but they can also be years, right? You can also see in the news that there are cases where recalls can happen and people can lose their jobs. And fundamentally, we look at this loss of confidence and trust, and that really comes from two different angles.

It comes from the public, the astute public that’s paying attention to what’s going on. But it also comes from the health authorities. The health authorities are looking for a relationship with us; they want to trust us to do a good job. But if we don’t, or if we’re not transparent in communicating issues that we’ve uncovered around data integrity, it can etch away and eat away at the confidence and trust that they place in us. And I wanted to walk through a relatively recent example that has been published. I pulled the information from a good partner of ours, Redica, who does a lot of great industry analytics on inspections, enforcement trends, and things like that. And they put together this really nice timeline around AveXis’s product.

And in this case, the data integrity issue was reported internally within the company while the product was actually undergoing review by the agency for approval. And in this case, this was an FDA approval process. And the company was granted approval by the agency, but one of the things I wanna point out is that the company knew about this issue in March. It was brought to their attention, it was raised up. But what you’ll see in the timeline as you move down is that they didn’t disclose to the FDA that there was a potential issue until June. At that point, the agency had already approved the product. So, from that perspective, the agency had to go back and say, “All right, you’re informing us that some of the data that you submitted as part of your licensure application was inaccurate. And now, we need to actually take a look at what the ramifications are.”

So, the FDA and ultimately AveXis’s parent company issued press releases in August that described what had happened and the actions that were taken. And if you go to the link that’s there at the bottom of the image, you can actually get a link to the FDA press release and things like that that are embedded in the article. But certainly, if you advance again, Mike, some of the ramifications that came out of this included a very detailed remediation plan. So, when you have an event like this where a data integrity issue is detected, in this case, it was in their BLA, obviously, there’s a request for a remediation plan. And the agency, the FDA, of course, held approval over that remediation plan.

And part of the core elements that we have now come to understand were in there is the firing of two senior executives who were implicated in the data manipulation. So, there were actually people that they uncovered in their own investigation who actually manipulated the data. This was an intentional data manipulation, so they lost their jobs. Certainly, there was a level of retraining and re-emphasis. They actually created a position for a data integrity officer. I’ll go out there and say that this is not necessarily the norm; most companies don’t create their organizational structures with a data integrity officer in place. However, the second piece of what they did in conjunction with this data integrity officer, encouraging and putting in place speak-up programs, is very commonplace. And if you look at a lot of the industry-generated guidance in particular, you’ll see a lot of ties between cultural excellence frameworks, speak-up cultures, and robust data integrity programs all coming together.

And then they also brought in a third-party consultant, as sometimes when you’re in a little bit more of a pickle, a little bit more hot water with a health authority, you bring in somebody to help build confidence for yourself, but also for the health authorities, that you’re doing the right things, that you’re making the right calls as you actually look at data and data oversight. But what I found most interesting out of all of these commitments was actually the last piece: they actually put into their remediation plan the commitment that they would notify the FDA within five days of any credible allegation regarding data integrity tied to a submitted BLA. What they came back and realized, and part of the discussions if you read through the article, the FDA press release, and Novartis’s press releases around this event, is this admission and acknowledgment that AveXis should have reported this a lot faster. There should have been a bit more transparency with the agency when they detected the issue.

It’s okay to go to a health authority and not have all the answers. What the agency needed, and what they really wanted, was that transparency that says we’ve detected it, these are the steps we’re taking, this is where we are in the process. And instead, they waited until they were done and had their interim investigation report before they actually self-disclosed to the agency. So, I think that that’s an important lesson that we can take from looking at this type of example where, fortunately, it hasn’t resulted in removal of the product. I did check yesterday; the product is still on the market here in the U.S., so that’s good. And the agency doesn’t question the clinical data. The issue that was actually highlighted here was in some of the early tox data that was in the BLA.

But it highlights the importance of the accuracy of data, not just in the day-to-day operations of a company, but all the way through the submissions that actually go to the health authorities. And we really wanted to highlight and be able to share an example with everybody on the webinar today that shows, when it goes really bad, what really bad looks like. And this is really that consequence. I think the only worse outcome would have been the product actually being pulled from the market after it had already been approved, but, fortunately, in this case, that was not the case. And then I think from there, you know, Mike and I wanna wrap up our thoughts talking about data integrity and leave plenty of time to answer what is a very healthy question bank.

Really, one of the key pieces I think Mike and I really want to talk about as we wrap up our prepared comments and notes is, fundamentally, I think there are two messages for me. One, accurate and complete data is what ultimately gives us, the health authorities, and the general public, the patient populations that we serve, confidence in product quality and safety. And if you were in or had the opportunity to watch the webinar that we did with USP Education back in, I think it was August, around the difference between quality and compliance, we talked a lot about this concept and notion of patient safety and product quality. And in that view, data is the core element, right? It is the foundation of the public, the regulatory authorities, and our own selves having confidence in product safety and quality.

And the other piece that I like to make sure people understand is that procedures alone are not sufficient, right? Just having a good procedure on paper isn’t actually gonna be sufficient for a robust program. You really need to tie it into a larger data governance framework. And especially the more electronic, the more computer-intensive your systems and processes are, the more significant that governance framework needs to be. And it needs to be embedded as part of the overall organization and the people, the technology, and the processes. Mike, any final thoughts that you’d like to share?

Mike: You’ve highlighted great points here, Stephanie. And I think the only other thing for me is to make sure that you commit to do what addresses the greatest risk and that you’re compliant with what you define, because it’s important that we do what we say in regards to data integrity, to further these points that Stephanie has talked about. And hopefully, this has presented some tools to help you not only think through it, but also document those decisions in a way that you can explain the decisions that were made and implement them effectively across your company. So, with that, we’ve also provided a slide that has our contact information. After you receive these slides, feel free to reach out to us with the information that’s provided here. And with that, Stephanie, as you said, I think our remaining time was intended to address the Q&A and the questions that are coming in. And so the first one here is, “Is there any mandate, or are pharmaceutical companies encouraged, to have a whistleblower policy and review in-house?” And I think you talked a little bit about that in the example, clearly, with the speak-up mechanism that was implemented and the data integrity officer. But would you like to expand further here?

Stephanie: Yeah, absolutely. And these are, by the way, all great questions, and it’s not too late to put in questions if you’ve got something that’s on your mind. If we don’t get to it in the remaining time, we’ll certainly follow up with USP Education and get answers to everything. But looking at this question around whistleblower policies, I’m gonna start by saying that mandate is a pretty strong word, and with the global audience we have on the call today, really, my best advice is for you to check your local regulations and laws around whistleblower policies, etc. It is certainly a practice that is encouraged in industry. It is a best practice to have those types of reporting mechanisms, not just around data integrity, but really around any issue. And a lot of pharmaceutical companies and biotech companies have practices and processes in place where employees, partners, vendors, etc., can report anonymously any potential concern, including things like data integrity, where maybe they’re concerned that something was intentionally manipulated or intentionally falsified. You can certainly do that.

And I would add to that question, just to add clarification: in many countries, like in the U.S., there is also an option for people to go directly to the U.S. FDA if there is a concern, and the FDA will handle it and work with the company to handle a particular issue. So, you know, you’ll see those kinds of dynamics happen. So, while I can’t guarantee that it is or is not a mandate in the particular locality that each of you might live in, I would certainly encourage your organizations, if they don’t already have a process for people to be able to report, to put one in place, and then for the company to have a process for how they’re gonna handle and investigate those reported complaints or issues. So, absolutely.

Mike: Okay. And the next one is around our thoughts on how regulatory bodies view companies that have older equipment that is not Part 11 compliant, addressing data integrity through SOPs and administrative controls. And I think, you know, part of this goes back to the conversation that this principle applies to both paper and electronic systems. It’s not an electronic-system-only principle. And so I think the agency is certainly aware that some of these thoughts and principles will be defined in such a way as to apply to not only paper records but probably some of the older systems that are not Part 11 compliant. And that is gonna be through people, through training, through administrative controls, through SOPs; all of those things are gonna come into play.

Your quality management system needs to speak to how you’re looking at your data integrity even in those situations where you don’t necessarily have the latest and greatest Part 11 audit trail to rely on. So, I think, certainly, it does apply, and certainly, those controls are acceptable. But again, I think a justified position, in a plan or some other part of your quality management system, written down and signed off and approved and available, is gonna go a long way towards explaining how you keep control in those kinds of situations. But anything else, Stephanie, you’d like to add to that?

Stephanie: No, I think you did a great job answering the question.

Mike: Okay. “How many years should the audit trail be preserved, especially if your memory is getting full?” And I guess I’ll start and then you can jump in, Stephanie, but I guess my thought is that this kind of should align with your document control strategy and your document retention strategy. And, you know, if you’re keeping clinical study information for 15 years after the study ends, then you need to think about keeping the data in a similar kind of way. And so, I think it’s kind of gonna depend a little bit on the type of data in this situation, but I think, you know, leveraging, again, your document retention policies can be an effective way to help. But, Stephanie, I’ll let you add onto that.

Stephanie: No, absolutely, I agree with you, Mike. So, you know, the simple answer is: check your local regulations for the markets in which you’re selling your products, because that’s really gonna govern how long you need to preserve those audit trails. Now, for the second piece of this, I’m gonna read a little bit into what the person who wrote the question meant: there are systems, like PLC-based systems, etc., where memory may get full. And what I would encourage everybody to do is, as they’re specifying their equipment, to really put those data retention requirements in their fundamental user requirements for the system, so that as you’re specifying and selecting and designing those control systems, you can ask those questions and have those conversations with your vendors to make sure that you understand how much the system can really hold live and then what happens, right?

It is possible to archive data if the system is designed with those functionalities present. And if it’s not, then you have to, again, develop the plans and the strategies for what you’re gonna do in that particular instance. It’s kind of an interesting piece of the technology, because I’d probably say 15, 20 years ago this was a huge concern; memory was expensive, storage was expensive. Now, sitting here in 2022, most of the people who are dealing with this are typically dealing with older technology and haven’t upgraded. And one of those things that I always look at with clients is, you know, you don’t have to upgrade today, but what is the plan to get you into something where you don’t have a storage constraint? That puts the investment into the organization so that you don’t potentially run into a data integrity issue where you’ve overwritten data or lost data because you’ve got a system that’s a bit more antiquated.
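To illustrate the kind of plan being described for a storage-constrained system, here is a minimal sketch that archives, rather than overwrites, the oldest records once usage crosses a threshold. The threshold and the live-retention window are invented assumptions; real values would come from your own retention requirements and system capacity.

```python
# Illustrative archive-before-full logic for a storage-constrained system.
from datetime import datetime, timedelta, timezone

LIVE_RETENTION = timedelta(days=365)  # keep at least a year live (assumed value)
CAPACITY_THRESHOLD = 0.80             # archive once 80% full (assumed value)


def archive_if_needed(records, used_fraction, archive):
    """Move records older than the live-retention window into the archive."""
    if used_fraction < CAPACITY_THRESHOLD:
        return records
    cutoff = datetime.now(timezone.utc) - LIVE_RETENTION
    keep = []
    for rec in records:
        if rec["created"] < cutoff:
            archive.append(rec)  # archived, never deleted: the record stays enduring
        else:
            keep.append(rec)
    return keep


archive = []
records = [{"id": 1, "created": datetime(2020, 1, 1, tzinfo=timezone.utc)},
           {"id": 2, "created": datetime.now(timezone.utc)}]
records = archive_if_needed(records, used_fraction=0.9, archive=archive)
print(f"live: {len(records)}, archived: {len(archive)}")
```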

Mike: Okay. Very good. And the next is, “Is it necessary to review the audit trail of each batch released once the system is validated and integrated?” And I’d say here, again, this goes back to the upfront planning around your audit trail and audit trail review approach, right? What’s the relative risk? What’s the data that you’re looking at? How critical is that data? And when it comes to batch release, I think it’s important to consider the fact that if you were to review it three months after the batch was released and you found an issue, what would the impact be? Is that going to potentially lead to a recall or a field alert or some other type of action that certainly is expensive and can bring reputational concerns? So, I think that all needs to factor into the justification for what you’re doing. Is it necessary? You know, I think there are elements to consider that say it’s important to do that to avoid potential negative outcomes. But I don’t know if there’s anything that says that you have to do it, right? It really comes down to the justification, the approach, and the risk that you’re taking. So, anything there, Stephanie, that you’d like to add?

Stephanie: Yeah, I think the only thing I’ll add to what you said is going back to that layered diagram that we put up: the system level, the application or product level, and then the specific, you know, batch-related tests, the actual method sequences and results for that particular batch. I would encourage folks, as they’re doing exactly what you’re talking about, looking at it from a criticality and risk perspective, to actually look at those layers. And maybe it’s not all audit trails; maybe it’s certain pieces of the audit trail that you decide are critical enough to be reviewed as you’re doing batch release. But I think, Mike, your point about understanding, if an issue is detected later, that ramification and that impact may also be a significant factor in driving the decision on how much of an audit trail you review as you’re releasing product.

Mike: Okay. And the next two are kind of both related around true copies of original data. So, Stephanie, I might let you kind of talk a little bit about copies versus keeping the original.

Stephanie: Yeah, so they’re two different questions, but they really do center on this concept of, you know, do you have to keep…if you’re generating something on paper, do you have to keep the original paper? Again, with varying requirements globally, I’m always gonna encourage you to really look at the local requirements you need to comply with, as well as those for each of the individual markets. In general, I would say today in 2022 there is tolerance for archiving non-electronic data, so, paper records, through scanning, as long as there’s a process to verify that the scan is accurate and complete before you would actually go to destroy the record. Some companies will choose to destroy the record at that point, and then that scan is now their original true copy of the record, just for storage space. I mean, I don’t know if you have been around paper storage systems; it usually involves large document rooms or using offsite storage to be able to house all of that.

So, a lot of folks are going in this direction. Now, in terms of the retention length, that’s gonna be the same answer as it was for the paper records: you’re gonna have to keep that scan around as long as the record retention requirement is there. I think the primary difference in some of those decision factors for some organizations is that computer memory, cloud memory, electronic storage is a lot cheaper than physical storage can be. But again, I encourage you to look at your local regulations, make sure that there isn’t something that prevents you from doing that, and then make sure you have the procedures and processes around how you’re actually gonna do that before you go off and start scanning and discarding original records.
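A minimal sketch of that verify-before-destroy step, with hypothetical function names and verification criteria: hash the scan so later tampering or corruption is detectable, check completeness, and only then record that the paper original is eligible for destruction.

```python
# Illustrative true-copy registration: verify the scan before the paper goes.
import hashlib


def register_true_copy(scan_bytes, pages_scanned, pages_expected, verified_by):
    """Return a true-copy record only if the completeness check passes."""
    if pages_scanned != pages_expected:
        raise ValueError("Scan incomplete: original must be retained")
    return {
        "sha256": hashlib.sha256(scan_bytes).hexdigest(),  # fingerprint of the verified scan
        "pages": pages_scanned,
        "verified_by": verified_by,       # who performed the accuracy/completeness check
        "original_destruction_ok": True,  # set only after verification succeeds
    }


copy_record = register_true_copy(b"%PDF-1.7 ...scan bytes...", 12, 12, "qa.lee")
print(copy_record["sha256"][:16], "pages:", copy_record["pages"])
```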

Mike: Okay. And I think the next one also gets into a similar type of conversation around true copies. So, in the interest of time, I may move on to the next one, about working with IT. One of the key points as you’re building your strategy, your justification, and writing this up into your plans, whether it’s your validation plan or your audit trail review plan, is that it’s important to have the right stakeholders in the conversation, and that does include IT and their input, and their need to understand and agree with everybody else around the criticality and the severity of the system, right? And then it also obviously includes QA, quality, and it includes any of the user groups that ultimately would be responsible for that data and potentially reviewing it. So, I do think it’s important to make sure that IT understands the regulatory compliance aspect of data integrity and the reasons that we are concerned about it, and then finding those people that will partner with you to have those conversations and make those decisions. But that needs to be across, you know, numerous groups, bringing everyone together to come out with an agreed and finalized plan.

Stephanie: Yeah, I agree, Mike. And I think that you kind of hit on where I would’ve gone with the question as well, which is that more often than not, with IT professionals, especially if they’re not from the pharmaceutical industry, understanding those requirements is kind of the first starting point with them before you actually get to how you address those specific situations. You wanna ground them in the basics and make sure that they understand the requirements that you’re asking them to meet and why you need to have those paper-based processes around those non-compliant systems.

Mike: All right. And the last one here that we’ll have time for, and the others we’ll address with the presentation material when that becomes available outside of today: “Do you need to verify and validate equipment software that states it’s compliant with Part 11?” And the answer is yes. You may not need to go as deep into the software itself, especially if you have a validation package that came with that system. But you still need to look at the implementation of that into your local hardware and software configuration to make sure that it’s operating correctly and that the communication pathways are working. So, there is still a bit of work to be done, even in your local instance or, you know, the use of the cloud, to make sure that the Part 11 functionality that was designed in is working appropriately within your configuration. So, with that, we will try to provide answers on those other questions, like I said. But it is the top of the hour, and Stephanie and I both would like to thank you all for your time, and attention, and engagement, and we look forward to future webinars where we can continue these kinds of conversations. Thank you.

Stephanie: Yeah, thank you.

 
