At our recent webinar, we were joined by Dr. Shirley Sylvester, Therapeutic Area Medical Affairs Leader, Infectious Diseases and Vaccines at Johnson & Johnson, Vardhini Ganesh, Associate Bioinformatician, Immunology - Vaccine at Sanofi, Dr. Richa Goyal Rai, Principal and HEOR expert at IQVIA, and Lee-Anne Bourke, Account Executive, MLIS at Evidence Partners for an insightful discussion about how literature review automation and efficient processes streamline real-world evidence (RWE) health economic outcomes research (HEOR) studies. The conversation was moderated by Dr. Patti Peeples, President at The Peeples Collaborative, LLC.
Q: Real-world evidence (RWE) is used at multiple stages throughout the drug, medical device, and diagnostic development process, from drug discovery through post-approval. How are you using RWE and real-world data (RWD) in your particular job?
Shirley: I mostly use RWE to inform decision-making processes and policy guidelines as they relate to changes in clinical practice, how women seek care, and how they receive this care in the healthcare ecosystem. There is a vast amount of information available, as you rightly pointed out, so it is critical to have a strategy to navigate the sea of data.
Richa: I have seen the evolution of systematic literature reviews (SLRs) and RWE throughout my career. We know RWE is an amalgamation of primary and secondary data. I have the opportunity to work with the secondary data through systematic reviews or targeted reviews that feed into the RWE. This evidence is used by regulatory bodies for reimbursement purposes.
Q: What are some of the major pain points that you are dealing with at the point of delivery?
Richa: Ten years ago, if you typed any disease into Medline, it showed hundreds of articles. Run the same search today and the results have grown exponentially. The challenge is that we know we will not have double the money or double the time to complete the SLRs. So how do we manage them efficiently within the same time frame, maintaining the same quality while keeping up with the speed at which literature is being published? This is one of the major challenges we face today.
Q: As a data visualization scientist, what are the pain points you are experiencing?
Vardhini: RWE in the research and development world typically means that we need to make use of it in the most efficient way in order to produce cutting-edge technology, vaccines, and drugs and make them available to the public in a timely fashion. The COVID-19 pandemic is the best example of how RWE application has accelerated vaccine development and approval efforts. The use of RWE made the emergency use of the vaccine possible; if we hadn’t gone that way, we’d still be waiting for vaccine approval. Now, going back to the actual process of gathering evidence through an SLR, staying on track while navigating the vast amount of literature returned by even a simple keyword search is among the major challenges for literature reviews and for scientific research and development. It’s critical to have clear-cut inclusion and exclusion criteria and to focus on the major outcome you’re trying to demonstrate, rather than relying solely on casting a wide net. Having to do these things manually is another challenge in itself, especially when you have hundreds of reviews in front of you. Employing automation is the correct approach to expand your bandwidth and ensure you’re able to keep up with the pace dictated by R&D.
Q: How did you evolve the “go fish method” to better define your screening process for more accurate results using automation?
Vardhini: My favorite catchphrase is “sanity of search”. It is at the core of everything I do and it keeps me on track. Having a defined protocol and criteria is essential for any type of review. This ensures that you cast a wide net with your search string but set proper boundaries so that the expected outcomes are well captured. We also favor close-ended questions over open-ended ones to narrow down our results.
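The close-ended approach Vardhini describes can be illustrated with a small sketch. This is not how any particular platform implements screening; the field names and criteria below are hypothetical examples of yes/no inclusion questions applied to each reference.

```python
# Illustrative sketch: screening references against close-ended (yes/no)
# inclusion criteria instead of open-ended judgment calls.
# All field names and criteria here are hypothetical examples.

def screen(reference: dict) -> bool:
    """Return True only if the reference passes every inclusion criterion."""
    criteria = [
        reference["is_human_study"],               # Q: human subjects? (yes/no)
        reference["reports_outcome_of_interest"],  # Q: reports the target outcome? (yes/no)
        reference["publication_year"] >= 2015,     # Q: within the date range? (yes/no)
    ]
    return all(criteria)

refs = [
    {"is_human_study": True, "reports_outcome_of_interest": True, "publication_year": 2021},
    {"is_human_study": True, "reports_outcome_of_interest": False, "publication_year": 2020},
]
included = [r for r in refs if screen(r)]
```

Because every question has a binary answer, two reviewers applying the same checklist should reach the same include/exclude decision, which is the point of setting those boundaries up front.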
Q: What are some of the pain points you have as an internal stakeholder of this process, and have they changed as we have become better at automating and developing platforms that can give more accurate results?
Shirley: Indeed, things have changed for the better in terms of automation. Thankfully, gone are the days when we had to use Excel spreadsheets to collect and screen evidence. These automation processes must be in place and continue to evolve. I expect everyone in my ecosystem, even external vendors, to be working efficiently to gather insights faster. I don’t expect them to be fishing in the wrong pond for answers. To give you an example, during the COVID-19 pandemic my team was conducting an SLR internally with over 1 million references. We had six months to go through them. We narrowed the search criteria in an attempt to decrease the number of references to screen, but it wasn’t enough. It was a monumental effort, and we failed because we tried to do it manually.
Q: Automating gives you more freedom as the stakeholder or user of the evidence. If you haven’t asked the correct question, you’re not stuck with sub-optimal results; you can go back and refine the process. Am I right?
Shirley: You are spot on, Patti. Automation gives us the freedom to iterate as we go and change course if we are not fishing in the right pond. It gives you the flexibility to change. You may start with a protocol but realize along the way that the search or inclusion criteria have to be tightened or the keywords refined. Ultimately, automation allows you to change course for the better and gather the right insights to communicate efficiently with the stakeholders or the people you serve.
Q: How do you manage to bring new literature into the process while working on a search strategy and a review in real-time?
Richa: I am relying heavily on automation to be able to meet my current demands. At the moment, I am working on a project based on common diseases, like the flu and COVID-19. This project requires weekly updates and the evidence is being published at record speed. Automation helps us find proper keywords and run the screening efficiently. As we all know, screening is the most tedious and time-consuming part of a systematic review. By utilizing a platform that automates and expedites the screening process, we can meet any deadline.
“Automation gives us the freedom to iterate as we go and change course if we are not fishing in the right pond. It gives you the flexibility to change. You may start with a protocol, but along the way, you may realize that maybe the search or inclusion criteria have to be tightened or the keywords have to be refined. Ultimately, automation helps you gather the right insights to communicate with the stakeholders or the people you serve efficiently.”
Q: What are your thoughts on separating good and bad data?
Vardhini: Literature is out there and at this point, it’s not about whether it’s good or bad. It is about what is relevant to your outcome or research question. Anything else becomes junk. It’s very important for us during the screening process to be able to make that distinction, to bucket those relevant searches and isolate those irrelevant searches. The human mind is tricky; it gets excited at every point it encounters once a protocol is underway. It’s important to bind yourself to the protocol in a systematic fashion and to stick to the relevant articles. Any data can be good or bad, depending on how you use it.
Q: How do you properly capture RWE studies given the limitations in appropriate indexing?
Lee-Anne: The indexing of any data for literature reviews is always a challenge, whether it is done by machines or actual humans. You are looking for a perfect balance between precision and recall when conducting your search. The indexing is not as strong for some publications that include RWE so you might have to broaden your search strategy which means more results to screen.
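The balance Lee-Anne describes can be made concrete with a toy calculation: recall is the share of all relevant studies your search actually retrieves, while precision is the share of retrieved records that turn out to be relevant. The counts below are invented for illustration only.

```python
# Toy sketch of the precision/recall trade-off in search strategy design.
# All counts are made-up numbers, not from any real review.

def precision_recall(relevant_retrieved: int, total_retrieved: int,
                     total_relevant: int) -> tuple:
    """Compute (precision, recall) for one search strategy."""
    precision = relevant_retrieved / total_retrieved  # relevant share of what you screen
    recall = relevant_retrieved / total_relevant      # share of relevant studies captured
    return precision, recall

# A narrow search: little to screen, but it misses half the relevant studies.
narrow = precision_recall(relevant_retrieved=40, total_retrieved=100, total_relevant=80)

# A broad search: captures nearly everything, at the cost of far more screening.
broad = precision_recall(relevant_retrieved=75, total_retrieved=1500, total_relevant=80)
```

This is why weak indexing of RWE publications pushes reviewers toward the second strategy: broadening the search protects recall, and the extra screening burden is what automation is then asked to absorb.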
“Literature is out there; at this point, it’s not about whether it’s good or bad. It is about what is relevant to your outcome or research question. Anything else becomes junk. It’s very important for us during the screening process to be able to make that distinction, to bucket those relevant searches and isolate those irrelevant searches.”
Lee-Anne: DistillerSR allows you to cast your net a bit wider and be more inclusive and comprehensive in your initial search. Once you are in DistillerSR, you can design a smart workflow with labels and filters. You can also create a hierarchical data extraction process. There are many ways to insert logic into your protocol to triage and filter data. At the end of the process, you may identify relationships that you might not have seen otherwise.
Q: Have you found that most RWE is in unpublished or grey literature? What sources of information do you use, and is DistillerSR capable of managing this and importing it?
Richa: In order to conduct a holistic search, we have to go beyond the traditional databases such as PubMed, EMBASE, and Cochrane. Google Scholar is one source we rely on, for example, to include unpublished/grey literature. Social media networks such as Facebook, LinkedIn, and Instagram are a great source for RWE data from patient groups. We then bring all the searches in a common format into DistillerSR and continue the process within the platform.
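One common way to bring references from disparate sources into a single format, as Richa describes, is the RIS bibliographic interchange format, which reference managers and review platforms widely accept on import. The sketch below uses a made-up record and is not tied to any specific platform's importer.

```python
# Minimal sketch: serializing a bibliographic record as an RIS entry,
# a common interchange format for importing references into review tools.
# The record fields below are hypothetical examples.

def to_ris(record: dict) -> str:
    """Serialize one record as a single RIS entry."""
    lines = [
        "TY  - JOUR",                # record type: journal article
        f"TI  - {record['title']}",  # title
        f"PY  - {record['year']}",   # publication year
    ]
    lines += [f"AU  - {author}" for author in record["authors"]]
    lines.append("ER  - ")           # end-of-record marker
    return "\n".join(lines)

entry = to_ris({
    "title": "Example study",
    "year": 2022,
    "authors": ["Smith J", "Doe A"],
})
```

Normalizing grey-literature and social-media sourced records into a shape like this is what makes it possible to continue the whole process inside one platform.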
Q: What are the options for displaying and reporting data at the end of the data extraction process?
Vardhini: It is fine to report data in Excel spreadsheets in rows and columns; the major obstacle is reshaping the data to display the outcome you want to visualize. DistillerSR helps with the data extraction process through its ability to create forms, so you don’t waste time reshaping the data. You can then use these forms for in-depth analyses.
Lee-Anne: With DistillerSR, you can easily make changes to your forms or workflows, and all the references that require an extra data point can be automatically assigned to a specific reviewer; nothing will be lost. Furthermore, you get to choose how you want to report the data.
Q: Today, the focus is on building a reliable, reusable evidence base with an understandable interface. What about efficiency and performance?
Lee-Anne: Automating every stage of a literature review eliminates manual administrative work while significantly reducing the chance of error. The reviewer is more efficient with numerous tools at their disposal, such as those for risk of bias, quality control, and highlighting.
Richa: DistillerSR can reuse existing forms across projects, which also speeds up literature reviews and systematic reviews. This saves time and keeps efficiency intact since a template is already available.
Q: Does DistillerSR have any specific benefits for rare diseases and for understanding the rare disease landscape?
Lee-Anne: You can design a workflow protocol in DistillerSR that allows you to cast your net a little wider. In rare diseases, we don’t always have enough information, so your initial search should be broader. DistillerSR can create buckets of information for things that are related but not on topic; the data are kept where they can be compared at a later time. This makes the work manageable.
Shirley: It’s important to expand your search to include other sources of information that will provide you with insights on failed products/compounds and on pain points of patient groups living with rare disorders.