Can we make better use of less controlled data while increasing the quality of trials, reducing costs, and creating more effective drugs? Gunnar Danielsson and Sverre Bengtsson discuss this riddle and ask, should we change our ways of working in the future? Watch the webinar, and read on below.

With a long and illustrious career, Gunnar Danielsson has been in the business of clinical research for more than 40 years. More than half of his working life has centred on the pharmaceutical industry, where he’s led and audited projects. He’s also worked with the Swedish Medical Products Agency, Sweden’s regulatory authority, and served as a GCP inspector for more than 10 years.

“When I first got started in monitoring, we weren't even allowed to view hospital records of the participating patients. They were considered confidential. So we were more like advisors, providing scientific input. The process has changed a lot, of course, especially the regulation itself and the legal implications. But we produced some effective medications back then. I remember the first clinical trial I worked on. It was based on 200 patients and four weeks of treatment. Half of the patients were on placebos, and in the end the drug was approved.”

Sverre Bengtsson has been in the industry for some 30 years, always on the service side, working with the companies that need to carry out clinical trials.

“I started out in what was probably the first company in the world doing electronic patient-reported outcomes (ePRO), and not too much later I co-founded a company that eventually became Viedoc Technologies. I’ve been on a learning curve ever since, as the industry is ever changing.”

Data capture and safety are two matters Gunnar and Sverre have had in common for a long time. Both have always been paramount to good quality studies, and as society has changed dramatically over the last thirty years, their role within clinical studies has also changed.

Source Data Verification is an essential part of a quality trial. Its history dates to the 1980s, when several cases of fraudulent behaviour among clinical staff led to false data, patients who didn’t exist and dishonest results. It became apparent that better oversight of clinical trials was needed, and as anyone in the industry knows, demands for high-quality data and more all-encompassing regulations have grown steadily and dramatically since that time to prevent fraudulent or poor-quality trials.

Gunnar expands.

“People tend to want to believe that the quality of trials is better in certain geographical areas, but you can actually find good quality trials all over the world. The problem is not an absence of good quality. The best study I have ever inspected, with the highest quality of research I've ever encountered, was in the middle of the jungle in Ghana. They were doing a unique sort of research but conducting it in a flawless way. They introduced processes that are impossible to conduct in Europe, and they did a thorough risk assessment and implemented it seamlessly.”

“Most fraudulent trials occur for economic reasons,” Gunnar explains. “In countries where trials generate economic gains, the trials are most often conducted honestly.”

He continues.

“Source Data Verification, a critical part of quality control, ensures that all the players involved in clinical research are checked, so that you can trust the resulting data. But as an inspector, I’ve observed SDV not achieving the desired result, and it's almost always costly and time consuming. Also, there's still a misconception among many monitors that SDV and monitoring are the same thing, but SDV is only one part of monitoring.”

If Source Data Verification is being mismanaged, then what can be done? And what is a reasonable level of Source Data Verification to use in trials?  

Gunnar tries to answer.

“I think there should be flexibility, depending on the design of the study or on the drug. It also depends on the therapeutic area you’re working within. Clinical research and all the processes within clinical research should be based on common sense. You should concentrate on what’s important and avoid spending time on irrelevant tasks. I believe that Source Data Verification is important in terms of making sure the clinical site understands the protocol and the assessments they’re performing, and is documenting them correctly. But what is the purpose in repeating that over and over, 100 times? Good trials are good because they’re conducted in a good way, and SDV should only be used to assure your team and society that the resulting data is credible.”

“Furthermore, the only person who can accurately determine the ideal amount of SDV to use in a trial is the site monitor. They are, or should be at least, the quality control expert. It’s important to use their expertise and competence as they are the only people who actually see what's happening.”

Sverre interjects.

“So, at the end of the day, we want to know that the drug is efficacious, reasonably safe and eases the life of the patient. What’s driving the discussion about reducing SDV then?” 

Gunnar explains.

“Regulators are responding to a concern raised by the industry about the cost of developing drugs, which is increasing so dramatically that a lot of companies can’t afford to conduct trials anymore. I heard, for example, that the cost to develop a major drug nowadays hovers somewhere around 4 billion US dollars. Half of that cost comes from clinical research, and half of those costs are in monitoring activities. So, we are spending at least a billion dollars on monitoring alone and changing on average only 1% of the data. You have to ask, is it worth it?”

He continues.

“I believe that Source Data Verification is an important part of quality control, but it should be used appropriately. And a risk-based monitoring approach gives you a tool and the opportunity to decide which aspects of your trial require complete control. So, risk-based monitoring can be used as an integrated part of quality control and reduce the burden of Source Data Verification. Risk-based monitoring is the hands-on activity that enables clinical researchers to concentrate on the most important indicators, so that you increase the quality of your trial by doing less.”

When asked what concerns, if any, regulators have regarding eSource and Direct Data Capture, especially when the data is directly captured and recorded in the EDC rather than on paper first, Gunnar provided a strikingly simple answer.

“They have no concern whatsoever. Historically, regulators have been very clear in stating that data can be transcribed directly or documented directly into the CRF without any problem, whether using paper or recording electronically. It simply doesn't matter. In regulations, no differentiation is made. The same rules, regulations and concerns apply, and the same quality control is required. I know that there's a tendency among the pharmaceutical industry to stress the importance of a paper version for everything, but that has no real relevance. There is no formal requirement for this within regulations.”

“Furthermore, if you use eSource as your original document, there’s no need to do Source Data Verification, because by default you already have the source data. You don’t need to do Source Data Verification on laboratory data when you’re receiving it directly in your computer system. So, if you enter the data directly into your EDC system, that is by default the source, and that is accepted by the EMA, the FDA and all the other regulators. The important thing is to know what the source is. And remember, there may be legal requirements for medical records. In Sweden, for example, it’s very common.”

As the whole industry moves towards electronic source, Gunnar stresses the need for data sharing to ensure better control.

“We're always looking to the future, and there's always been a pot of gold at the end of the rainbow. A longing, if you will, to find systems that automatically extract hospital records directly into the EDC. We're not there yet, because there are so many different systems and so forth, but I'm pretty sure that it’s all going to be available one day. And I look forward to that moment.”

“The interesting thing about the pharmaceutical industry is that it’s essentially a dinosaur, a colossal enterprise. But when it gets moving, it can move very quickly and very efficiently. It’s hard to change behaviour, but I think we must be open minded in terms of how we conduct clinical trials. We are moving towards more decentralised studies, and the regulators are positive, even supportive of this idea. The industry itself, and specifically the academics feeding it, are positive too. So why not use the resources that we have and look openly to the future?”

Gunnar continues.

“SDV is an important part of the quality control of a clinical study, regardless of whether you’re doing central monitoring, onsite monitoring, a centralised or a decentralised study. We will always need a certain amount of SDV, but we must be prepared to reduce it to a level that makes sense. We need to challenge ourselves to go back to basics and change the design of the studies, so that we can concentrate on saving time and money by doing the right thing at the right time. This won’t be easy, of course. SDV’s role has been discussed back and forth for over 10 years, but the discussion is gradually changing. So, maybe in 10 more years we’ll all see fundamental changes taking place.”

Sverre concludes.

“It seems like we are talking about trust. Trusting the sites, trusting the processes, trusting the experts and the evolution of SDV. Ultimately, trusting ourselves in a time of development and change.”