1 Department of Experimental Neurology, Charité Universitätsmedizin Berlin, Germany
2 NeuroCure Clinical Research Center, Charité - Universitätsmedizin Berlin, Germany
3 Center for Stroke Research, Charité Universitätsmedizin Berlin, Germany
4 German Center for Neurodegenerative Diseases (DZNE), Berlin Site, Berlin, Germany
5 Berlin Institute of Health, Berlin, Germany
6 German Center for Cardiovascular Research (DZHK), Berlin Site, Berlin, Germany
Address for correspondence:
Dept. Neurology and Experimental Neurology and Center for Stroke Research Berlin,
Charité Universitätsmedizin Berlin
10117 Berlin, Germany
Based on research, mainly in rodents, tremendous progress has been made in our basic understanding of the pathophysiology of stroke. After many failures, however, few scientists today deny that bench to bedside translation in stroke has a disappointing track record. Here I summarize a number of measures to improve the predictiveness of preclinical stroke research, some of which are currently in various stages of implementation: We must reduce preventable (‘detrimental’) attrition. Key measures for this revolve around improving preclinical study design. Internal validity must be improved by reducing bias; external validity will improve by including aged, comorbid rodents of both sexes in our modeling. False positives and inflated effect sizes can be reduced by increasing statistical power, which necessitates increasing group sizes. Compliance with reporting guidelines and checklists needs to be enforced by journals and funders. Customizing study designs to exploratory and confirmatory studies will leverage the complementary strengths of both modes of investigation. All studies should publish their full data sets. On the other hand, we should embrace inevitable ‘NULL results’. This entails planning experiments in such a way that they produce high quality evidence when NULL results are obtained, and making these results available to the community. A collaborative effort is needed to implement some of these recommendations. Just as in clinical medicine, multicenter approaches help to obtain sufficient group sizes and robust results. Translational stroke research is not broken, but its engine needs an overhaul to render more predictive results.
Over the last few decades, preclinical research on stroke has led to tremendous progress in our basic understanding of the pathophysiological events that follow focal cerebral ischemia1,2. Numerous cellular and molecular targets have been identified for brain protective and restorative therapies, and many of these are highly effective in rodents. The incidence as well as morbidity and mortality of stroke3 have decreased. Stroke units4 and recanalization via intravenous t-PA5 or thrombectomy6 are impressive clinical success stories benefitting many patients. Intriguingly, however, practically none of these clinical breakthroughs are the result of ‘bench to bedside’ translation. On the contrary, almost all therapies that were preclinically successful have failed in actual stroke patients7. This exceedingly high rate of attrition in translational stroke research has already been the subject of a number of articles. There are no simple explanations, and stroke research is certainly not the only biomedical field struggling with a translational roadblock. In the following I would like to emphasize factors for which quantitative meta-analytical evidence suggests a contribution to attrition. My selection is biased towards items in the preclinical realm that pose straightforward opportunities for improvement. In addition, inspired by work from bioethics8,9 and meta-research10, I would like to propose, somewhat counter-intuitively, that there are instances where we need to embrace attrition. Collectively, I argue for an update of our intellectual framework for translational research.
To a large extent, bench to bedside translation is a ‘black box’. Innumerable factors impact whether and to what extent preclinical evidence is transferable to clinical evidence. Attrition lurks at all levels: preclinically, when moving to first-in-man studies, when trying to obtain safety or initial signs of efficacy, and in large clinical trials aiming at regulatory approval.
Can mice mimic human stroke pathophysiology?
The most basic and dramatic threat to the validity of bench to bedside translation concerns the question of whether preclinical models can predict human pathophysiology and therapeutic outcomes. This relates to the concept and construct validity of how we model the different types of strokes, and in a more general sense to whether non-human (in particular rodent) physiology and pathobiology are sufficiently similar to those of humans. There are indeed very few best practice cases which unequivocally demonstrate translational success for a treatment. Unfortunately, tPA, which was effective in a rabbit model of embolic stroke11 before its clinical efficacy had been established in the seminal NINDS trial12, has proved to be the exception rather than the rule. We therefore have to rely on indirect evidence, such as similar phenotypes of pathophysiologic phenomena in experimental and human stroke. Examples include immunodepression after stroke13 or spreading depolarization14. More examples, and a more elaborate argument for why modeling of stroke in rodents can indeed be predictive for human pathobiology and treatments, can be found in Dirnagl and Endres15.
Internal validity: Keeping cognitive biases in check
To provide a solid basis for clinical development, evidence at the bench must be robust and reliable. Lack of these attributes leads to attrition and wasted resources, results in unethical use of animals in research, and can potentially put patients at risk. Robustness and reliability of research are threatened by a number of biases, which lower internal validity. Selection bias is controlled by randomization, which safeguards that experimental groups are similar except for the experimental manipulation. Concealing the allocation to experimental groups, a form of blinding, prevents performance bias. Finally, detection bias is kept in check by blinded assessment of outcomes. The conceptual framework of these measures, which are intended to ‘keep all other things equal’ (save the intervention), was well developed decades ago for clinical trials. In this highly regulated area of biomedical research, internal validity is a central consideration when planning and reporting a study, and a key criterion for review boards and regulators. Surprisingly, internal validity appears to be much less of a concern in experimental biomedicine. Indeed, in the wake of what has been termed a ‘reproducibility crisis’, the internal validity of preclinical research has recently been called into question16. Meta-research10 has provided ample evidence that despite an international discussion and the introduction of guidelines17,18, improvements in experimental design, conduct, analysis and reporting are overdue and very slow in coming19. We recently studied the effect of attrition bias, which has so far received relatively little attention although it appears highly prevalent and may substantially skew experimental evidence. We reviewed 100 randomly selected reports published between 2000 and 2013, describing 522 experiments that used rodents to test cancer or stroke treatments, and compared the numbers of animals reported in the papers’ methods and results sections.
In close to two-thirds of the experiments it was impossible to trace the flow of animals through the study, and thus to decide whether any animals had been dropped from the final analysis. Of the experiments that did report numbers, around 30% had dropped rodents from the analysis, but fewer than 25% of those explained why. Using simulated data we demonstrated that this can lead to a major distortion of the results20, especially when group sizes are small.
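The distortion caused by outcome-dependent attrition can be illustrated with a minimal simulation. This is an illustrative sketch, not the simulation code from ref. 20; the group size, number of dropped animals, and outcome model are arbitrary assumptions chosen for demonstration:

```python
import random
import statistics

random.seed(1)

def mean_group_difference(n=8, true_effect=0.0, drop=0, reps=2000):
    """Average observed control-vs-treatment difference when the `drop`
    animals with the worst outcomes are removed from the treatment group
    (outcome-dependent attrition). Lower values mean smaller infarcts."""
    diffs = []
    for _ in range(reps):
        control = [random.gauss(0, 1) for _ in range(n)]
        treated = [random.gauss(-true_effect, 1) for _ in range(n)]
        if drop:
            # biased attrition: silently drop the largest infarcts
            treated = sorted(treated)[:n - drop]
        diffs.append(statistics.mean(control) - statistics.mean(treated))
    return statistics.mean(diffs)

print(mean_group_difference(drop=0))  # ~0: no true effect, no attrition
print(mean_group_difference(drop=2))  # spurious 'protection' from attrition alone
```

Under these assumptions, dropping just two of eight animals per experiment manufactures an apparent protective effect of roughly 0.4 standard deviations even though the true effect is zero; the smaller the groups, the larger the distortion.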
Power failure: False positives and inflated effect sizes
Group sizes in preclinical medicine are exceedingly small. An analysis of more than 2000 experimental stroke studies performed over the last few decades reveals a mean group size of 8 animals21. The CAMARADES database21 also contains the normalized effect sizes of all these studies, which can be used to calculate the mean statistical power. At a mere 45%, the statistical power of the preclinical stroke literature is slightly lower than the odds of correctly calling a coin toss (50%). Yet, perplexingly, this is still superior to the 23% median power calculated for over 700 primary neuroscience studies22. A power of 45% means not only that a true effect will be detected in just 45% of cases (a high false negative rate). It also means that when an effect is detected, its size will be overestimated by more than 40% (the ‘Winner’s curse’22,23). In addition, and most worryingly, given reasonable prior probabilities for the effectiveness of the tested compounds (or hypotheses), a power of only 45% will lead to false positive rates of around 50%. John Ioannidis concluded in his 2005 landmark paper24 that due to the synergistic effects of insufficiently controlled bias (e.g. non-blinding, see above) and low statistical power, ‘most published research findings are false’.
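The arithmetic behind this false positive rate follows directly from Bayes’ rule. The sketch below assumes a 10% prior probability that a tested hypothesis is true; that prior is an illustrative figure chosen for this example, not a number taken from the cited analyses:

```python
def false_discovery_rate(prior, power, alpha=0.05):
    """Expected share of statistically significant findings that are
    false positives, given the prior probability that a tested
    hypothesis is true (cf. the argument in ref. 24)."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return false_positives / (false_positives + true_positives)

# 45% power (the preclinical stroke literature) with an assumed 10% prior:
print(false_discovery_rate(prior=0.10, power=0.45))  # → 0.5
# Raising power to 90% cuts the rate to one third:
print(false_discovery_rate(prior=0.10, power=0.90))
```

Note that increasing power alone does not eliminate false positives; with a low prior, only a combination of higher power, bias control, and confirmation can bring the rate down substantially.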
Publication bias: Show me the evidence
In clinical medicine, where lawmakers, regulators, and journal editors mandate preregistration of trials, it is estimated that the results of only 50% of all studies are eventually published25. In preclinical medicine, where preregistration is virtually non-existent and studies often have no clearly defined beginning or end, we cannot know what and how much high quality evidence is produced but never reported. However, judging from the almost complete dearth of preclinical studies that fail to reject the NULL hypothesis, we can only speculate that there is an exceedingly strong publication bias towards ‘effective drugs’ and ‘confirmed hypotheses’. Sena et al. systematically reviewed 525 original publications in the preclinical stroke literature26. Only ten publications (2%) reported no significant effects on infarct volume, and only six (1.2%) did not report even one significant finding. If this correctly reflected the success rate of hypothesizing and drug treatment in the experimental stroke field, I argue, it would be wasteful and potentially unethical to conduct experiments at all: the experiments would be all but destined to confirm the hypothesis (or show that the drug works), with the only risk being a false negative! Meta-analytical evidence has unequivocally demonstrated the prevalence of publication bias and its detrimental effects. In the experimental stroke literature, publication bias accounts for at least 30% of the published effect sizes26.
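How selective publication inflates the apparent efficacy in the literature can be sketched in a few lines. This is an illustrative simulation with arbitrary assumptions (a one-sided z-test approximation, a true effect of 0.3 standard deviations, groups of 8), not a model of the meta-analysis in ref. 26:

```python
import random
import statistics

random.seed(7)

def literature_effect(true_effect=0.3, n=8, sims=5000):
    """Return (mean effect across all simulated studies, mean effect among
    the 'significant' studies that alone would get published)."""
    all_effects, published = [], []
    se = (2 / n) ** 0.5                # SE of a difference of two group means
    for _ in range(sims):
        treated = [random.gauss(true_effect, 1) for _ in range(n)]
        control = [random.gauss(0, 1) for _ in range(n)]
        d = statistics.mean(treated) - statistics.mean(control)
        all_effects.append(d)
        if d / se > 1.645:             # one-sided p < 0.05 → 'published'
            published.append(d)
    return statistics.mean(all_effects), statistics.mean(published)

true_mean, published_mean = literature_effect()
print(round(true_mean, 2), round(published_mean, 2))
```

Under these assumptions, the published record overstates the true effect several-fold, because underpowered studies only reach significance when random variation happens to exaggerate the effect. The 30% figure from ref. 26 is an empirical estimate, not an output of this toy model.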
Low external validity: Stroke treatments for healthy young male rodents
With few exceptions, stroke research is conducted in healthy, very young, male inbred rodent strains raised under specific pathogen-free (SPF) conditions and fed a diet optimized for high fertility and overall health. The equivalent human cohort would be healthy pubertal twins raised in 6 m2 isolator tents on an enriched granola diet (Fig. 1). Stroke patients, in contrast, are elderly, of both sexes, comorbid, on multiple medications, and have been exposed to numerous pathogens and antigens throughout their lives. Studies on outbred stocks, different strains, comorbidities, aging, sex and diet, as well as housing conditions, have demonstrated the strong impact of these factors on stroke outcome in experimental animals and on treatment efficacy. As a general rule, the closer stroke models have mimicked stroke patients, for example when aged or comorbid animals were studied, the more substantial the loss in efficacy of the treatment effects has been (e.g. 27,28,29). Counterintuitively, in view of the lack of efficacy of the same drugs in clinical trials, this may serve as a further indicator of the good predictiveness of rodent models of stroke!
Figure 1: Questionable external validity. Top: Human cohort equivalent to rodents studied in stroke models. Bottom: Typical cohort of humans at risk for stroke
Timing of treatment: Same tissue clock in mice and men?
Clearly, the exceedingly high attrition rate of clinical stroke trials cannot be blamed on low internal or external validity of preclinical stroke research alone. A multitude of reasons may have led to false negative clinical results, for example insufficient sample sizes paired with overoptimistic expectations concerning effect size. Another frequently quoted problem may have been study designs in which time to treatment exceeded the biological time window within which brain tissue is not yet fully committed to cell death. In a recent review30 we looked at the time windows of major recent neuroprotection trials and found a median targeted inclusion time window of 16 hours. In animal models of stroke, robust protection is usually observed only within the first 1-2 hours after occlusion of the artery. The fact that the time window for efficacy of thrombolysis is almost identical in rodents31 and humans is indirect evidence that the tissue clock of brain tissue after focal cerebral ischemia is indeed similar across species. Except for the FAST-MAG trial32, none of the plethora of stroke trials aimed at treatment within the ‘golden hour’. Although this may be understandable from a practical standpoint, it may be speculated that a number of potent neuroprotectants have been ‘wasted’ because treatment came too late. Due to the neutral study results, these drugs will probably never be tested again. Fortunately, innovations and improvements in hyperacute stroke care and trial methodology now allow us to put neuroprotection to the ultimate test in the golden hour. Several ongoing trials use randomization in the field and ultra-acute treatment either by paramedics (Frontier trial, ClinicalTrials.gov Identifier: NCT02315443, and ref. 33) or dedicated mobile stroke units with specialist teams deployed to the home of the patient34. These approaches enable a one hour time window for treatment and thus offer the ultimate test for neuroprotection.
Two study modes: discovery and confirmation
Most preclinical stroke researchers aim at finding new pathophysiological mechanisms or drugs (‘exploration’, ‘discovery’). To emphasize the clinical relevance of their findings, they often at the same time use the results of these exploratory studies to support inferences confirming the utility of their discovery for treatment in humans. This may increase the chances of publishing the work in prestigious journals. But these claims are often not backed up by the sort of robustness and external validity required to advocate translation to humans. Measured against their promise, all attempts to translate preclinically effective treatments into guideline-based stroke therapy have failed. I propose that confounding exploration with confirmation is a major contributor to the translational roadblock. In a recent article8 we posited that distinguishing between exploratory and confirmatory preclinical research will improve translation. In exploratory investigation, researchers should aim at generating robust pathophysiological theories of disease. In confirmatory investigation, researchers should collect strong and reproducible treatment effects in relevant animal models. We should disentangle these two modes and customize design and reporting guidelines for each. Table 1 gives a tentative overview of how discriminating between exploratory and confirmatory studies might entail different study designs. Such a policy can leverage the complementary strengths of both modes and will help to improve the refinement of pathophysiological theories, as well as the generation of reliable evidence in disease models for the efficacy of treatments in humans. Adopting this approach would also reveal that most preclinical stroke research data are heavily biased towards exploration, and that confirmation is often missing.
Table 1: Suggested differences between exploratory and confirmatory preclinical study designs.
Exploratory: High sensitivity (high type I error rate, low type II error rate): find what might work
Confirmatory: High specificity (low type I error rate, high type II error rate): weed out false positives
Can failure foster progress?
The standard model of bench to bedside translation is linear. It leads from a novel mechanism or compound to a new and effective therapy in a straight line, and as quickly as possible. Disruptions of this process at any stage are called attrition. The metrics for translational success are the time from discovery to licensing and the ratio of licensed drugs to failed attempts (the ‘attrition rate’). In this model, attrition equals failure. Provocatively, London and Kimmelman35,36, as well as Ioannidis37, argue that failures in the translational process are not only necessary, but may often be more important than successes. Although the quest for novel effective treatments remains the prime mover, they argue that the ultimate goal should be to increase useful information, which includes high quality negative results. Any negative result from a well designed study that provides good quality evidence yields information which can, among other benefits, correct mechanistic concepts, define dosing and timing of treatments, and free up resources for other avenues of investigation. Failures can rule out dead ends, and identify research lines that should be modified and methods that are inappropriate. In particular in the clinical stages of translation, unsuccessful translation trajectories can be critical for maximizing efficient healthcare. Drugs are only clinically useful if we know dosages, treatment schedules, and timing at the bedside. We collect this knowledge by probing the windows beyond which a drug is no longer clinically useful. London and Kimmelman36 remind us that in basic and preclinical biomedical research
‘[…] identifying promising interventions is akin to exploring a vast, multidimensional landscape of agents, doses, disease indications and treatment schedules. The methods used to explore this landscape […] often rely on small sample sizes and/or surrogate endpoints. This allows large areas of the landscape to be explored quickly and at relatively low cost. However, economy and speed come at a cost, since small and less rigorous studies tend to produce more false positives (i.e., studies that show spurious clinical promise due to bias or random variation). […] In particular, base rates for discovering truly effective interventions are likely to be low in areas where our knowledge of disease process, mechanism and pharmacology is underdeveloped. As is well known in diagnosis, when base rates are low, false positive tests due to random variation are frequent, even if tests are sensitive.’
As discussed above, biomedical research is heavily biased against NULL results. As a consequence, and if Kimmelman and London are correct, this must be highly wasteful, since available information remains un- or underutilized. Embracing attrition has a number of important implications for experimental design and reporting: We should plan experiments so that they lead to useful information even when the NULL hypothesis cannot be rejected; we should not stop our experiments as soon as the first signs of a potential NULL result appear; and of course we should publish NULL results.
Overcoming the roadblock
Few scientists today deny that bench to bedside translation in stroke has a disappointing track record. Analyzing the strengths, weaknesses, opportunities, and threats of this complex process can help to improve its efficiency. Based on the evidence provided by recent meta-research and several best-practice examples I propose the following measures, many of which are currently in various stages of implementation:
We should try to reduce preventable (‘detrimental’) attrition. Key measures to achieve this revolve around improving preclinical study design. Internal validity should be improved by reducing bias, e.g. through randomization, blinded treatment allocation and outcome assessment, and predefined inclusion/exclusion criteria. External validity can be improved by including aged, comorbid rodents of both sexes in our modeling. False positives and inflated effect sizes can be reduced by increasing statistical power, which essentially means increasing group sizes. Adherence to reporting guidelines and checklists (such as ARRIVE) is currently low and needs to be enforced by journals and funders. When planning, conducting and reporting preclinical studies we should discriminate between exploration and confirmation. Customizing study designs can leverage the complementary strengths of both modes of investigation. In particular for confirmatory studies we should consider publishing study protocols. All studies should publish their full data sets (‘Open science’).
We should embrace inevitable ‘NULL results’. Failures are a necessary element of the translational research enterprise, and may actually promote our understanding of pathophysiology and successful translation in the long run. This implies planning experiments in such a way that they produce high quality evidence when NULL results are obtained, and of course, then making these available to the community.
Some of these recommendations are hard or even impossible to implement for individual laboratories. A collaborative effort is needed to overcome these bottlenecks. Just as in clinical medicine, multicenter approaches help to obtain sufficient group sizes and robust results. MULTIPART38, a European Union funded project with participation of NIH/NINDS, will provide a scalable framework for such efforts. The first multicenter preclinical stroke studies have recently been published39,40,41.
Translational stroke research is not broken, but in need of an overhaul. Its engine must be made more efficient, its results more predictive. I have focused on a few potential remedies informed by recent meta-research, and limited my analysis to preclinical research. Researchers, funders, journals, and professional societies must work together to develop desperately needed novel and effective therapies for a disease which puts a tremendous burden on patients, their families, as well as health systems and economies.
I would like to thank John Ioannidis, Malcolm Macleod, and Jonathan Kimmelman for inspiration and guidance.
The author declares no conflict of interest.
1. Dirnagl U, Iadecola C, Moskowitz MA. Pathobiology of ischaemic stroke: an integrated view. Trends Neurosci. 1999;22:391–7.
2. Moskowitz MA, Lo EH, Iadecola C. The science of stroke: mechanisms in search of treatments. Neuron. 2010;67:181–98.
3. Feigin VL, Mensah GA, Norrving B, Murray CJL, Roth GA, GBD 2013 Stroke Panel Experts Group. Atlas of the Global Burden of Stroke (1990-2013): The GBD 2013 Study. Neuroepidemiology. 2015;45:230–6.
4. Stroke Unit Trialists’ Collaboration. Organised inpatient (stroke unit) care for stroke. Cochrane Database Syst. Rev. 2013;9:CD000197.
5. Wardlaw JM, Murray V, Berge E, del Zoppo GJ. Thrombolysis for acute ischaemic stroke. Cochrane Database Syst. Rev. 2014;7:CD000213.
6. Yarbrough CK, Ong CJ, Beyer AB, Lipsey K, Derdeyn CP. Endovascular Thrombectomy for Anterior Circulation Stroke: Systematic Review and Meta-Analysis. Stroke. 2015;46:3177–83.
7. O’Collins VE, Macleod MR, Donnan GA, Horky LL, Van Der Worp BH, Howells DW. 1,026 Experimental treatments in acute stroke. Ann. Neurol. 2006;59:467–477.
8. Kimmelman J, Mogil JS, Dirnagl U. Distinguishing between exploratory and confirmatory preclinical research will improve translation. PLoS Biol. 2014;12:e1001863.
9. Kimmelman J, London AJ. The structure of clinical translation: efficiency, information, and ethics. Hastings Cent. Rep. 2015;45:27–39.
10. Ioannidis JPA, Fanelli D, Dunne DD, Goodman SN. Meta-research: Evaluation and Improvement of Research Methods and Practices. PLoS Biol. 2015;13:e1002264.
11. Zivin JA, Fisher M, DeGirolami U, Hemenway CC, Stashak JA. Tissue plasminogen activator reduces neurological damage after cerebral embolism. Science. 1985;230:1289–92.
12. The National Institute of Neurological Disorders and Stroke rt-PA Stroke Study Group. Tissue plasminogen activator for acute ischemic stroke. N. Engl. J. Med. 1995;333:1581–7.
13. Meisel C, Schwab JM, Prass K, Meisel A, Dirnagl U. Central nervous system injury-induced immune deficiency syndrome. Nat Rev Neurosci. 2005;6:775–786.
14. Dreier JP, Reiffurth C. The stroke-migraine depolarization continuum. Neuron. 2015;86:902–22.
15. Dirnagl U, Endres M. Found in translation: preclinical stroke research predicts human pathophysiology, clinical phenotypes, and therapeutic outcomes. Stroke. 2014;45:1510–8.
16. Begley CG, Ioannidis JPA. Reproducibility in science: improving the standard for basic and preclinical research. Circ. Res. 2015;116:116–26.
17. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8:e1000412.
18. Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012;490:187–91.
19. Macleod MR, Lawson McLean A, Kyriakopoulou A, Serghiou S, de Wilde A, Sherratt N, et al. Risk of Bias in Reports of In Vivo Research: A Focus for Improvement. PLoS Biol. 2015;13:e1002273.
20. Holman C, Piper S, Grittner U, Diamantaras A, Kimmelman J, Siegerink B, et al. Where Have All the Rodents Gone? The Effects of Attrition in Experimental Research on Cancer and Stroke. PLoS Biol. 2016;14:e1002331.
21. Camarades. Collaborative Approach to Meta Analysis and Review of Animal Data from Experimental Studies [Internet]. [cited 2015 May 23];Available from: http://www.dcn.ed.ac.uk/camarades/
22. Button KS, Ioannidis JPA, Mokrysz C, Nosek BA, Flint J, Robinson ESJ, et al. Power failure: why small sample size undermines the reliability of neuroscience. Nat. Rev. Neurosci. 2013;14:365–76.
23. Colquhoun D. An investigation of the false discovery rate and the misinterpretation of P values. R. Soc. Open Sci. 2014;1:140216.
24. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2:0696–0701.
25. Chan A-W, Song F, Vickers A, Jefferson T, Dickersin K, Gøtzsche PC, et al. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383:257–266.
26. Sena ES, van der Worp HB, Bath PMW, Howells DW, Macleod MR. Publication bias in reports of animal stroke studies leads to major overstatement of efficacy. PLoS Biol. 2010;8:e1000344.
27. Liang AC, Mandeville ET, Maki T, Shindo A, Som AT, Egawa N, et al. Effects of Aging on Neural Stem/Progenitor Cells and Oligodendrocyte Precursor Cells After Focal Cerebral Ischemia in Spontaneously Hypertensive Rats. Cell Transplant. 2016;25:705–14.
28. Fouda AY, Kozak A, Alhusban A, Switzer JA, Fagan SC. Anti-inflammatory IL-10 is upregulated in both hemispheres after experimental ischemic stroke: Hypertension blunts the response. Exp. Transl. Stroke Med. 2013;5:12.
29. Sandu RE, Buga A-M, Balseanu AT, Moldovan M, Popa-Wagner A. Twenty-four hours hypothermia has temporary efficacy in reducing brain infarction and inflammation in aged rats. Neurobiol. Aging. 2016;38:127–40.
30. Chamorro Á, Dirnagl U, Urra X, Planas AM. Neuroprotection in acute stroke: targeting excitotoxicity, oxidative and nitrosative stress, and inflammation. Lancet Neurol. 2016;0:245–254.
31. Sena ES, Briscoe CL, Howells DW, Donnan GA, Sandercock PAG, Macleod MR. Factors affecting the apparent efficacy and safety of tissue plasminogen activator in thrombotic occlusion models of stroke: systematic review and meta-analysis. J. Cereb. Blood Flow Metab. 2010;30:1905–13.
32. Saver JL, Starkman S, Eckstein M, Stratton SJ, Pratt FD, Hamilton S, et al. Prehospital use of magnesium sulfate as neuroprotection in acute stroke. N. Engl. J. Med. 2015;372:528–36.
33. Ankolekar S, Fuller M, Cross I, Renton C, Cox P, Sprigg N, et al. Feasibility of an ambulance-based stroke trial, and safety of glyceryl trinitrate in ultra-acute stroke: the rapid intervention with glyceryl trinitrate in Hypertensive Stroke Trial (RIGHT, ISRCTN66434824). Stroke. 2013;44:3120–8.
34. Ebinger M, Kunz A, Wendt M, Rozanski M, Winter B, Waldschmidt C, et al. Effects of golden hour thrombolysis: a Prehospital Acute Neurological Treatment and Optimization of Medical Care in Stroke (PHANTOM-S) substudy. JAMA Neurol. 2015;72:25–30.
35. Kimmelman J, London AJ. The Structure of Clinical Translation: Efficiency, Information, and Ethics. Hastings Cent. Rep. 2015;45:27–39.
36. London AJ, Kimmelman J. Why clinical translation cannot succeed without failure. Elife. 2015;4:e12844.
37. Ioannidis JPA. Translational research may be most successful when it fails. Hastings Cent. Rep. 2015;45:39–40.
38. MULTIPART. Multicentre Preclinical Animal Research Team [Internet]. [cited 2016 May 23];Available from: http://www.dcn.ed.ac.uk/multipart/
39. Llovera G, Hofmann K, Roth S, Salas-Pérdomo A, Ferrer-Ferrer M, Perego C, et al. Results of a preclinical randomized controlled multicenter trial (pRCT): Anti-CD49d treatment for acute brain ischemia. Sci. Transl. Med. 2015;7:299ra121.
40. Kleikers PWM, Hooijmans C, Göb E, Langhauser F, Rewell SSJ, Radermacher K, et al. A combined pre-clinical meta-analysis and randomized confirmatory trial approach to improve data validity for therapeutic target validation. Sci. Rep. 2015;5:13428.
41. Maysami S, Wong R, Pradillo JM, Denes A, Dhungana H, Malm T, et al. A cross-laboratory preclinical study on the effectiveness of interleukin-1 receptor antagonist in stroke. J. Cereb. Blood Flow Metab. 2016;36:596–605.