#388: Elements of an Effective CAPA Program
In this episode of the Global Medical Device Podcast, Etienne Nichols hosts Georg Digel, a seasoned expert in Corrective and Preventive Action (CAPA) systems.
Georg shares insights into setting up an effective CAPA program, discussing essential topics like identifying CAPA triggers, executing root cause analysis, and implementing corrective actions that not only ensure compliance but also drive meaningful improvement within medical device companies.
With over a decade of experience, Georg brings practical knowledge on avoiding common pitfalls such as "death by CAPA" or failing to recognize high-risk systemic issues. The episode also delves into the importance of verification of effectiveness (VoE) checks, the distinctions between corrective actions and preventive actions, and how to balance a proactive approach with pragmatic solutions.
Key Timestamps:
- [03:15] – Defining CAPA and its critical role in Quality Management Systems
- [10:45] – Common CAPA triggers: Balancing overuse and underuse
- [18:20] – Root cause analysis vs. corrective action: A step-by-step approach
- [25:40] – Containment vs. correction: Key differences in addressing nonconformities
- [38:10] – Verification of effectiveness: Best practices for ensuring long-term solutions
- [50:30] – Continuous improvement through CAPA: Avoiding system overload
- [1:00:05] – Practical tips for balancing CAPA triggers with company priorities
Memorable Quotes:
- “CAPA isn't just about compliance; it's about driving real improvement in your organization.” – Georg Digel
- “The worst thing that can happen is losing oversight on the serious issues because your system is flooded with trivial ones.” – Georg Digel
- “Root cause analysis isn’t about fixing the symptom, it’s about ensuring the issue never comes back.” – Etienne Nichols
Key Takeaways:
MedTech Trends:
- CAPA as a Key to Continuous Improvement – How CAPA systems fuel company growth by addressing both high-risk and systemic issues.
- Data-Driven CAPA Triggers – Integrating post-market surveillance and production data for more proactive corrective actions.
- Regulatory Impact on CAPA – The importance of understanding evolving regulations and their influence on product safety and quality management.
Practical Tips for CAPA Implementation:
- Sharpen Your Triggers – Clearly define CAPA triggers to avoid flooding the system with low-risk issues.
- Effective VoE – Ensure VoE checks are specific to root causes, not just symptoms, for long-term success.
- Cross-Site CAPA Learning – Use internal audits and external findings from similar companies to prevent systemic failures across sites.
Future Questions in MedTech:
- How will advances in AI and machine learning improve CAPA systems in the future?
- Can companies move toward more preventive action frameworks, or is CAPA inherently reactive?
- How will stricter global regulatory updates reshape CAPA strategies in medical device manufacturing?
References:
- FDA 483 and Warning Letters Database – A resource to monitor common CAPA violations and avoid systemic errors.
- LinkedIn Profile - Georg Digel – Follow Georg Digel for daily insights on improving CAPA systems.
- Etienne Nichols LinkedIn – Connect with Etienne Nichols for further discussions on CAPA and MedTech trends.
MedTech 101: CAPA Systems
CAPA stands for Corrective and Preventive Action. It is a crucial part of any Quality Management System (QMS), designed to investigate and address nonconformities (errors or defects) in medical devices. Corrections fix the immediate problem, while corrective actions aim to stop it from happening again. CAPA ensures continuous improvement and regulatory compliance for medical device manufacturers.
Audience Poll:
Which aspect of CAPA do you find most challenging in your company?
- Identifying CAPA triggers
- Conducting root cause analysis
- Implementing corrective actions
- Verification of effectiveness
Share your thoughts or any questions at podcast@greenlight.guru!
Feedback Call-to-Action:
Enjoyed this episode? We’d love to hear your thoughts! Leave a review on iTunes, and feel free to email us at podcast@greenlight.guru with any feedback or suggestions for future topics. Your feedback helps us grow and deliver even better content.
Sponsor Mentions:
- Greenlight Guru – Streamline your quality management with Greenlight Guru's eQMS, a comprehensive system designed for medical device companies. Say goodbye to spreadsheet chaos and hello to efficiency! Check it out at Greenlight Guru.
- Rook QS – Scaling fast? Rook QS offers quality-as-a-service solutions tailored to growing medical device companies, helping you maintain compliance through every stage. Learn more at rookqs.com.
Transcript
Georg Digel: I'm doing fine, and you did it much better than I could have done it. So thanks for that.
Etienne Nichols: Well, I appreciate you coming on, and I appreciate you really sharing your knowledge on LinkedIn. It's really fun to watch what you post and to see the interactions on there. But how to set up an effective CAPA program? Maybe we could start with just talking about what CAPA is and what it isn't.
Georg Digel: Yeah, that's a good way to start it. So the thing is, how it should be is that the CAPA system is, yeah, obviously a part of your quality management system, one of the most important ones, because the intent is to solve either the high-risk issues within your company or systemic issues. Right. And with that, the first questions might start: what is a high-risk issue? What is a systemic issue? The way I try to look at it is, if you're a company, you could break it down into three steps. Step one would be to identify all your events, your CAPA triggers, I call them, so you know when you should follow up. Afterwards, if you escalate your event to the CAPA system, then the question is, how do I execute it properly? So how do I solve the issue, how do I implement meaningful action, and how do I prove that the action taken was effective? And afterwards, if I'm done with my CAPA, I need to have some kind of governance within my company. So the questions should be, obviously, was the CAPA effective? But maybe it could also be in terms of: do problems which I solved in the past come back? Do I see real improvement within my company? Or was it just wasted time?
Etienne Nichols: Yeah. So those different phases, or I guess aspects, of a CAPA system, that's interesting. And I wonder if there's one that stands out to you as more poorly implemented than others. For example, when I think about the CAPA triggers, I used to work in a drug-delivery combination product company, and I remember the tension between pharma and medical device, in that pharma seemed like they wanted to open a CAPA for everything. But on the opposite end of the spectrum, medical device, we didn't want to open a CAPA for anything. Somewhere in the middle seemed to be the safer ground. But I wonder if you could speak to those triggers and the tendency to open or not open CAPAs.
Georg Digel: [portion of the answer lost in the transcript] ...recalls. So if you want to get...
Etienne Nichols: Ambiguity.
Etienne Nichols: [question partially lost in the transcript] ...say you find one record out of... the illustration you gave, one out of...
Georg Digel: Yeah. So what I've seen over the years is that there's a lot of philosophical discussion around that. And the way I was taught it, I learned a lot of the things I do with Johnson & Johnson, so a big MedTech and pharma player, and they are obviously more tailored to FDA requirements than to the ISO standards. And the way I learned it was: if you're in your NC system, you don't really do a root cause investigation. You try to find out what the assignable cause was, but then you stop after correction, usually. If you're in the CAPA system, then you go definitely for root cause investigation, because the idea of the CAPA system is to identify the root cause and ideally eliminate it, or at least mitigate it to an acceptable level, so that the nonconformity due to this root cause doesn't occur anymore. Now, if you look at the ISO standard, it's not exactly stated like that. Right. What the ISO standard says is you have a chapter on improvement. It should be 8.5.1, I think. Sorry if it's not that one. Yeah, it should be 8.5.1. And what it basically says is, if you are in the corrective action subchapter or subpart of it, you try to identify the causes of a nonconformity. Same for preventive action. So it doesn't really separate when to do an investigation if you look in the ISO standard. But if you look at different chapters, for example the one on control of nonconforming product, it wants you to evaluate whether there was a need for investigation. Right. And this is the world I live in, where you have this constant discussion: do I need to do an investigation or not? Right.
Etienne Nichols: Yeah, that makes sense. And then the implementation. You gave some different steps: the triggers, the investigation, and then the implementation of those. Can you speak to that, and some of the downfalls companies run into when they try to implement corrective actions?
Georg Digel: Sure. So maybe I can even go one or two steps back. Let's assume we are clear on the trigger and we want to open up a CAPA for it. A common way to handle it or to deal with it is to have a so-called problem statement. So it's about: what do I try to solve in my company? And what is associated with it is the so-called CAPA scope. In the CAPA scope, you're clear about the product that is affected, the processes, and, if you're in a multi-site setup, maybe which sites are affected or which regions. And the CAPA scope together with the problem statement gives you like an intro to your story, basically. Right. This is the first step where I see many people mess up. The idea of the problem statement is that you can read it and then it's crystal clear what the CAPA is about. Right. There are different frameworks to do it. There's 4W2H, there's Is/Is-Not analysis. So there are many ways to do it, but the ideal output would be something like: on this date, a certain function identified the following issue. We know it's an issue because it's deviating from a requirement; the requirement is either in an ISO standard or in the regulation or in the process description. This is how often it happened. That means we definitely need to follow up. Something like that. And the CAPA scope would say: we only found it for product A and site C. Right. So if you're an external auditor or an investigator, you get a feeling for what this company is trying to solve right now. The next step would be about current containment or initial corrections, initial actions. This is already where I see a lot of confusion happening, because the terms correction and corrective action don't seem to be too clear.
Etienne Nichols: Yeah, why don't we talk about that for a moment? So correction versus corrective action, how do you define them?
Georg Digel: [definition partially lost in the transcript] ...the ISO standard. It's in the...
Etienne Nichols: Can you give a concrete example so it's just a little easier to understand? Yeah, I'll let you come up with an example.
Georg Digel: Yeah, so my favorite example is actually from the production environment. Imagine you have a manufacturing line, and the output of this process should be a certain part which should be 6 cm long. I'm from Germany, right, centimeters, whatever that is. And then you identify, because you have your in-process control, that all of your parts start to be 7 cm. This is obviously a nonconformity, right? It's not the specified 6 cm, it's 7 cm. You would stop the production. You would say, hey, I can't do anything here. You put your NC tag on it, you have your red band across it, you have your communication that no one touches the machine, no one does anything with it. This would be a containment action, because it prevents the current NC from spreading, from being pushed out to the next process step. Actually, you don't find a definition for containment, but many people and companies still use this term, so that's also the reason why I talk about it. And the question about what the correction is: if it's possible, you would cut down your part. It's 7 cm right now, should be 6 cm. So you take it, you cut it down to 6 cm. That's it. That's the correction part. Now, about the corrective action part: with this cutting down, you didn't address the cause. Right. If you don't look into the machine, maybe the setting is wrong, or maybe something else had a hiccup, then the machinery will continue producing 7 cm long parts. So you go into your root cause investigation, you do your due diligence, you try to separate symptoms from contributing factors from root causes, and maybe you find out that the parameters were inaccurate. You had a software update, after the software update all parameters were messed up, no one checked it, and due to that you had the 7 cm long parts. And I think at that point it gets confusing already, because what is actually the root cause? Was it that the machine had the wrong parameters, or was it that the software update resulted in noncompliant outputs? From my point of view, you still need to address both, but the contributing factor would be adjusting the right parameters again. The root cause in this scenario was that the software update process was not done in a due diligent manner, so you need to address this software update process.
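To make Georg's production-line example concrete, here is a minimal sketch of such an in-process control check. All limits, names, and data are hypothetical illustrations, not anything described in the episode; the check only performs containment, since the correction (reworking the part) and the corrective action (fixing the update process) happen elsewhere.

```python
# Minimal sketch of the in-process control check in Georg's 6 cm example.
# All limits, names, and data here are hypothetical illustrations.

SPEC_LENGTH_CM = 6.0   # specified part length
TOLERANCE_CM = 0.1     # allowed deviation (hypothetical)

def part_conforms(measured_cm: float) -> bool:
    """Return True if the measured part is within specification."""
    return abs(measured_cm - SPEC_LENGTH_CM) <= TOLERANCE_CM

def run_in_process_control(measurements: list[float]) -> None:
    """Containment only: stop the line at the first nonconforming part
    so the defect cannot spread to the next process step."""
    for i, length in enumerate(measurements):
        if not part_conforms(length):
            print(f"Part {i}: {length} cm out of spec -> NC tag, line stopped")
            return
    print("All parts conform")

run_in_process_control([6.0, 6.05, 7.0])  # the 7 cm part triggers containment
```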
Etienne Nichols: Yeah. And maybe we can take this on to the preventive action as well, because I think there's really an example here that could potentially be extrapolated out. So for your corrective action, maybe we'll go all the way and say product development has to do a full due diligence analysis anytime there's a software update. So for the preventive action, are there any preventive actions that could be taken from this?
Georg Digel: Yeah, and this will lead to a lot of debate about whether it's possible to come up with meaningful preventive action. One of the first arguments against it would be: the root cause already happened. Right. Or the cause, the nonconformity, let me be specific. The nonconformity happened. And that's also the big separation between corrective and preventive action. Corrective action is to prevent the recurrence of a nonconformity. You had these 7 cm long parts; it already happened. So every action that you take will be corrective when it addresses the root cause. A different case would be: there were never 7 cm long parts coming out, but you identified there's this threat that it might happen. That would have been a preventive action, because it addresses the cause of a first-time occurrence of a nonconformity. Going back to this specific example, I see some companies bound it based on production lines, products, maybe even different sites. So they would say: it only happened for manufacturing line one, but I know that this software update process happens for my other machinery too. So I will check all other machines, whether the programs are still accurate and the parameters are valid. And if I see that it's fine, this will be preventive action. Other companies or processes try to do it in a way that they separate per QMS. So if you're in a multi-site setup and you have separate QMSs, meaning each with its own certificate, you'd say: hey, I had this issue at site A; sites B, C, D, please cross-check, do an internal gap assessment, and see if you find anything. This would be good preventive action, as long as no nonconformity has happened yet at sites B, C, D.
Etienne Nichols: And you know, I think about this preventive action in some different ways. I've heard people try to turn it around from CAPA to PACA and say we should focus more on preventive action. But I don't even know if that's practical or possible. What are your thoughts on that?
Georg Digel: I guess that's one of the most challenging questions in the overall CAPA program a company has: how to identify preventive action in a meaningful manner. What I read often and hear often is that it's linked to risk management. But for me it's always: what does that exactly mean? Because if you look into a company, you have your risk management file, and you have your complaint handling data, and you have your internal process controls, and you see how much scrap you have and whatever else. The idea is that you constantly update your risk management files and that you have the reference. So if you're in the complaint handling unit, you have direct access to the risk management file, right? Or if you're in the quality assurance group, you always have access to the risk management file. I've never seen this link implemented in a meaningful way. Or maybe I did, and I'm too dumb to identify it.
Etienne Nichols: I think it's like that across the medical device industry. I think you're exactly right.
Georg Digel: Yeah, yeah. It's just that I don't know what it exactly should mean, right, with this "turn it around to PACA," because you need to have so many prerequisites to do it. What you could think of is: if you have strong tracking and trending data analytics, you have the ideal overview of your production data. Let's say you have an alarm limit, and you have this upper and lower process specification, and you see you have a negative trend, so maybe it's coming closer and closer to the warning limit or to the upper specification. If you can identify that and implement preventive action, because you didn't breach the upper limit or the lower limit yet, this would be meaningful preventive action. But I just think many people underestimate how challenging it is to do that in a meaningful manner, and also with a pragmatic approach, without hiring a hundred data analysts or the most expensive software there is. Right. So there are too many challenges associated with that. And don't get me wrong, I don't say don't think about preventive action. I just say this "turn it around to PACA" comes with more challenges than people might think.
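As a rough illustration of the trend-based trigger Georg describes, here is a minimal sketch, assuming hypothetical limits, window size, and readings, that flags a drift toward the warning limit before the specification limit is ever breached:

```python
# Sketch of a trend-based preventive-action trigger: flag readings that
# drift toward the warning limit while still inside specification.
# Limits, window size, and data are hypothetical.

WARNING_LIMIT_CM = 6.08  # internal alarm threshold (hypothetical)
UPPER_SPEC_CM = 6.10     # upper specification limit (hypothetical)

def drifting_toward_limit(readings: list[float], window: int = 5) -> bool:
    """True if the last `window` readings rise monotonically and the most
    recent one has reached the warning limit (but not necessarily the spec)."""
    recent = readings[-window:]
    rising = all(a < b for a, b in zip(recent, recent[1:]))
    return rising and recent[-1] >= WARNING_LIMIT_CM

readings = [6.01, 6.03, 6.05, 6.07, 6.09]  # still within spec, trending up
if drifting_toward_limit(readings):
    # No nonconformity has occurred yet, so action taken here can be
    # genuinely preventive rather than corrective.
    print("Trend toward upper spec detected -> evaluate preventive action")
```

The point of the sketch is the ordering: the flag fires while every reading is still conforming, which is exactly the window in which an action still counts as preventive.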
Etienne Nichols: Yeah, you talk about approaching that lower limit on a tolerance band. Or let's say you're drilling a hole and the hole seems to be getting a little bit bigger as parts go through, maybe the drill is starting to wobble; or if it's getting smaller, maybe the tool is wearing down. But I feel like this should be captured to a certain extent with continuous improvement efforts. Yes, those are typically applied to manufacturing efficiency, but they could also be applied to design iteration. We talk about a risk-based approach, researching risk. Like you said, it's part of medical device development, so the search to prevent issues can be a little asymptotic. You can't ever actually reach perfection, which doesn't mean you shouldn't try, obviously. But yeah, that's one of the difficulties I've seen.
Georg Digel: Yeah.
Etienne Nichols: One thing that I think about, because you mentioned some of the data analytics and different things: customer feedback and post-market surveillance could be part of that. What about learnings from other companies, headlines, news, things like that? Is that applicable, or what are your thoughts there?
Georg Digel: I think that's one of the most meaningful and easiest ways for a company, especially with the FDA. They are quite transparent with what they find. Right. So if someone requested a 483, you might find it in the database. Warning letters are published in the database without any request. So what you could do is look for competitors similar to the company you own or work for, and then see what the other company has in terms of recalls, or also 483s and warning letters. And especially with the 483s you can cross-check. You find so many deviations against NC and CAPA requirements, for example, and you could just look at it and then say: okay, this company forgot to verify the effectiveness of implemented actions; how good is my company with that? Or the same for the overall external audit program or FDA inspection management program. If you have a bigger company and you get a finding for one site, for one QMS, it's quite smart to let the other QMSs do an internal gap assessment. Right.
Etienne Nichols: You mentioned the verification of effectiveness check, and I'll just throw one thing out there. When you look at those 483s, the FDA also posts kind of an analysis of how many 483s there were and what their reasons were, and CAPA is consistently one of the top three. So that's a great place to go look and see why these companies are getting those 483s. I kind of want to emphasize what you said there. But the verification of effectiveness check, I want to ask what your opinions are there, and I want to share something briefly before you answer. I can remember a haunting day back when I was a manufacturing engineer. I managed a few different CAPAs, some that I inherited, some that were on my product line. And I remember the quality assurance coordinator, her name was Hannah, came six months after I thought we'd solved this problem and said: hey, do we have that verification of effectiveness? And we had decided what that was going to be. And I just remember thinking, oh no, I've got to go make sure it actually worked. I dreaded going out to the floor a little bit. It worked. But what are the good, the bad, and the ugly about VoEs and verification of effectiveness checks?
Georg Digel: Yeah. So I would even add two things before that. Technically, even before you implement actions, you should do a so-called verification and validation of the actions, paired with a negative impact assessment, or adverse effect assessment. I think this causes a lot of confusion, because I see people throw everything into one bucket: the verification and validation part, the negative impact assessment, the verification of effectiveness, and the verification of implementation. And this will also link to the challenges and the ugly parts about it. Let's go back to the example we just talked about. Let's say it was not the software, but you decided that your current production process is not capable of producing conforming, compliant parts anymore. So you would change the process. Before you can use the process, you need to validate it, right? Or, if you don't need to validate it, at least you need to verify the output, that the output will meet predetermined specifications. This is one part of the CAPA process, but I see many people missing it. The next thing would be this negative impact assessment. You come up with an action, you also know how you would verify or validate it, but you should assess whether the implementation of this action would negatively impact my QMS, my product, or maybe the ability to meet regulatory requirements. It's written in the QSR, it's also specified in the ISO standard, and if you look at the QSIT, this quality system inspection... what is the T? Technique. Yeah. Thanks. You also see that the investigators need to check this, right, whether the company did it or not. And let me give you an example. Let's say you had a complaint, and the root cause showed that people misplaced the screws. There are three holes for screws, but they have different diameters, and the diameters are so close that you can also pick the wrong hole. And you think the easiest way would be to use color coding. So you implement color coding, and you have a yellow screw going into a yellow hole, and green into green, blue into blue. But afterwards, after you've done all of that, in your verification of effectiveness check you realize that you get complaints because you have residue. You used just some kind of color and didn't think about that it should be biocompatible. Then you have infections coming from the procedures. And this is what the negative impact assessment should catch: does your action make sense or not? So those are the things before that. Let me summarize: first the verification and validation of the action, together with the negative impact assessment. And then, the verification of effectiveness part is not a verification of implementation. What I often see is verification of effectiveness as "corrective action is implemented, my CAPA was effective" or "process was trained, my CAPA was effective." Yeah.
Etienne Nichols: When in reality you should be checking to make sure: we've not had this nonconformity again, it's no longer an issue.
Georg Digel: Yeah. If you want to be a little bit more sneaky, you could scope it down to the root cause. So it's not that the overall nonconformity shouldn't happen anymore; it's the combination of the nonconformity due to the root cause. Right. And this links to the way of phrasing verification of effectiveness criteria. A common framework to use is the SMART framework, the one with specific and measurable and all of that. It should be linked back to the initial problem statement; that's also the reason why I'm kind of pedantic about the problem statement and the CAPA scope. But the achievable part should also be in there. Sometimes I see VoE criteria that are either so unspecific that everything would pass, right, or maybe so strict that no company in the world could achieve them without investing so much money that you wouldn't have any money left to operate the company, basically.
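One way to picture a VoE criterion scoped to the root cause, as Georg suggests, is a recurrence check over the nonconformity log. This is a minimal sketch with hypothetical field names, root-cause codes, and dates, not a real system:

```python
# Sketch of a VoE check scoped to the root cause: pass only if no new NC
# attributable to the same root-cause code occurred in the monitoring
# window. Field names, codes, and dates are hypothetical.

from datetime import date

nc_records = [  # hypothetical nonconformity log
    {"opened": date(2024, 1, 10), "root_cause": "SW-UPDATE-PROC"},
    {"opened": date(2024, 6, 2),  "root_cause": "OPERATOR-TRAINING"},
]

def voe_passes(records, root_cause: str, start: date, end: date) -> bool:
    """True if no NC with the given root-cause code recurred in the window."""
    return not any(
        r["root_cause"] == root_cause and start <= r["opened"] <= end
        for r in records
    )

# CAPA closed 2024-03-01; monitor six months for recurrence of the same cause.
print(voe_passes(nc_records, "SW-UPDATE-PROC", date(2024, 3, 1), date(2024, 9, 1)))
# -> True: the pre-CAPA record predates the window; the other has a different cause
```

Note how the filter on the root-cause code mirrors Georg's point: an unrelated nonconformity in the window would not fail this VoE, only a recurrence of the same cause would.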
Etienne Nichols: Yeah. No, I like that specificity of applying it to the root cause, because that's ultimately the purpose of that verification of effectiveness check. That makes sense.
Georg Digel: Yeah.
Etienne Nichols: Yeah. You know, as you were talking, it made me think: when we're applying the principles of a CAPA, an effective CAPA program, maybe one of the downfalls of companies is, oh, we're just going to fix this problem. You know, maybe the assumption is: if we've designed the product correctly and it's run for five years appropriately, and something changes, well, then someone did something wrong, someone broke the system, and we just have to get the system back to where it was. We forget about some of the external factors. And I'm going to throw an illustration out there and see what you think of this. Let's say you have a surgical piece of equipment with a cast aluminum part, and suppose a regulation in California has changed how the casting process can take place, which maybe creates little bubbles inside the casting so it can break more easily now. But due to the regulation for the actual casting process, you can't get cast parts that don't have those high levels of porosity, or whatever. Or, trying to think of another one: maybe the standard of healthcare has changed. Maybe it's a part that supports a part of the body, like a limb when you're doing surgery on the wrist, and they've decided you need to put more force on it, because it's actually a better procedure if you change the way you apply the force. So now the actual user need has changed. I'm just going to throw something out there: it's no longer a 50-pound force requirement on your part, maybe now it's an 80-pound force requirement. So it's not just that the process has always been good and we just have to get back to the original process. There may actually be things that need to change, and we have to understand what the true root cause is. Why are there more breaks in the field? Is it porosity? Is it the standard of healthcare? There are lots of different external factors that could be going on that could require change. What are your thoughts there?
Georg Digel: Yeah, I totally agree, and I'd say that's also one of the reasons why there are so many findings on CAPA. Overall, it's a quality management system, so it's not that CAPA equals QMS. There are also a lot of other subparts, right? Complaint handling, your PMS, you have internal audit and whatever else, right? There are so many parts, and just saying, yeah, we changed something, therefore we now have NCs on the market, or complaints, or whatever, that's lacking problem solving, right? It's like going with the first idea. Which is what I also advise against: if you go into root cause analysis, picking your first plausible cause and then addressing it. So it's the very same. And coming back to the 483s. Yes, CAPA has been cited for ten years; I looked ten years back one time, and it has always been CAPA in the top three. But you can mess up so much with CAPA, right? If you read those 483s, it's: you had an initial complaint, and in the complaint investigation you missed the fact that it's actually a systemic issue, and then you didn't escalate it in a timely manner for corrective action, right? And then the thing is: is it really a flaw of the CAPA system if the investigation in the complaint process was not timely enough or overlooked some patterns? Because usually, especially if you look at bigger companies, those are different departments, right? It's not that the CAPA guys sit in the complaint handling unit, or the other way around. And just like you said, you have those requirements with PMS and vigilance programs and the complaint handling part. And you see you have an increase of MDRs, or MIRs here in Europe, or you just get those signals from the market: maybe you have a rise of a failure code, maybe you have more complaints for a product, maybe you have more complaints from a region. Those are signals. The question is, what do you do with those signals? Just like you said: you had a change of a regulation, therefore the procedure is different. The question is, why was your QMS not capable of identifying this change in regulation? Because in an ideal world, I don't know how it is in the QSR, but definitely in the ISO standard, the regulatory affairs team is also responsible, not the team alone, right, but the company is responsible for identifying changes in the different applicable regulations too, right? And in the ideal world you'd see: hey, there's one major procedure, all of our products are used for this procedure, they will change the force requirement. This means we need to go to our design team and let them redesign it and go through the whole cycle. Maybe we need a field notice to say: hey, don't use product revision B for the new procedure, because you will get a product revision C for it; only use product revision B for this, this, and this procedure. This would be the ideal world. Right. But I think it's quite a stretch.
Etienne Nichols: We talk in theoretical terms sometimes with the ideal world, but there is the real world as well, in that doctors are a bit of rebels: this is a new thing, we're going to do this. So there are interesting things there. I really enjoyed this conversation. We're going to have to do this again sometime. Any last piece of advice you want to give to medical device companies out there, any places you want to point them, directions you want to take them?
Georg Digel: Yeah, I would say less is more. And remember that CAPA is a big part of continuous improvement. What happens if you escalate each and every issue to the CAPA system is that you will flood the CAPA system with so many low-risk issues that you will lose oversight on the important issues. And that's the worst thing that can happen to your company, because first of all, it's about patient safety. Right? Patient safety. The CAPA system should be there for the really high-risk issues, the really systemic issues, the ones where you see patient safety is in jeopardy. So that would be my thing: if you're unsure about your CAPA system as of right now, start by handling the serious ones, the obvious ones, and over time, continue the discussions and sharpen your triggers. Be more clear about what needs to be handled via CAPA, let's say via corrective action or preventive action and whatnot.
Etienne Nichols: I think that's good advice. Those of you listening, definitely check out Georg's LinkedIn. We'll put links in the show notes so that you can find him. He gives advice nearly every day on how to improve your CAPA system, so I really appreciate that. Georg, thank you so much for coming on. I'll let you get back to the rest of your day, but I look forward to future conversations, everybody.
Georg Digel: Take care.
Etienne Nichols: Thank you so much for listening. If you enjoyed this episode, can I ask a special favor from you? Can you leave us a review on iTunes? I know most of us have never done that before, but if you're listening on the phone, look at the iTunes app, scroll down to the bottom where it says leave a review. It's actually really easy. Same thing with the computer, just look for that leave-a-review button. This helps others find us, and it lets us know how we're doing. Also, I'd personally love to hear from you on LinkedIn. Reach out to me. I read and respond to every message, because hearing your feedback is the only way I'm going to get better. Thanks again for listening, and we'll see you next time.