listserv messages

Replied by David Wojick on 06/27/2017 - 08:09
This Confidence Levels RFI strikes me as wrongheaded. If a claim, conclusion, hypothesis, theory, etc., is controversial (as many related to public policy are) then there is no way to assign a Confidence Level. That is typically what the controversy is about.  Even in the general case, expert opinion is likely to vary greatly. Their implied theory of the nature of science and evidence seems oversimplified and wrong. What can they be thinking? David David Wojick, Ph.D.... [Read more]  
Replied by Phillip Phan on 06/28/2017 - 01:00
Seems to me that this problem has already been solved with meta-analysis. Phil   [Read more]  
Replied by David Wojick on 06/28/2017 - 03:13
How is that? How does meta-analysis assign non-controversial confidence levels to controversial claims? David [Read more]  
Replied by Phillip Phan on 06/28/2017 - 07:41
A meta-analysis is designed to objectively report the confidence intervals of a body of empirical research on a (controversial or non-controversial) question. Because it involves a thorough review of the published (and done correctly, non-published) research it provides the scientific basis for making a claim of confidence. If there is insufficient evidence to complete an MA, then the question is under-researched and discussions of confidence intervals are premature anyway. If a question continues to be controversial in the light of an unequivocal meta-analytic result,... [Read more]  
Replied by David Wojick on 06/28/2017 - 04:35
I see two issues here. First, how does the MA of a body of literature translate to a CL for each individual claim in every article? Second, are you claiming that every MA result is independent of who does it, such that everyone must get the same result? I doubt this very much, since the MA depends on personal judgement. In controversial cases I can easily see analysts on different sides getting very different results. This is because the weight of evidence is relative to the observer (a principle that I recently formulated after careful study of complex issues). David [Read more]  
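[Editor's note: the sensitivity of a pooled confidence interval to the analyst's inclusion decisions, which the exchange above turns on, can be illustrated numerically. This is a minimal sketch assuming a standard fixed-effect inverse-variance model; the five studies, their effect sizes, and their standard errors are invented for illustration and do not come from the thread.]

```python
import math

def pooled_estimate(effects, ses):
    """Fixed-effect inverse-variance pooling: each study is weighted
    by 1/SE^2, so which studies an analyst includes directly shifts
    the pooled mean and its 95% confidence interval."""
    weights = [1.0 / se**2 for se in ses]
    total = sum(weights)
    mean = sum(w * e for w, e in zip(weights, effects)) / total
    se = math.sqrt(1.0 / total)
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical effect sizes and standard errors for five studies.
effects = [0.30, 0.25, 0.40, -0.10, 0.05]
ses     = [0.10, 0.15, 0.20, 0.12, 0.08]

# Pooling all five studies vs. pooling only the three "convincing"
# positive ones yields noticeably different intervals from the same
# literature, which is the observer-dependence point in miniature.
mean_all, ci_all = pooled_estimate(effects, ses)
mean_subset, ci_subset = pooled_estimate(effects[:3], ses[:3])
```

The arithmetic itself is objective; the judgement about which studies belong in `effects` is where the disagreement the thread describes enters.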
Replied by Matthew Shapiro on 06/28/2017 - 22:06
What you are suggesting, and really both of your issues are related, is that certain findings will be excluded, intentionally or otherwise. A meta-analysis is as complete as resources allow, so any failing on the part of the meta-analysis author(s) is due to shoddy and/or constrained research and not a selection bias. When the pace of research is expanding rapidly, I suspect that these constraints could lead to crucial omissions of the most recent findings. Matt [Read more]  
Replied by Klochikhin, Evgeny on 06/28/2017 - 17:15
Hi Matt, I can't entirely agree with your statement that selection bias in MA is unusual. In fact some work that we've done on systematic literature reviews shows that a major constraint comes on the part of information retrieval, i.e. incomplete or biased literature search that precedes the meta-analysis per se. Social scientists and economists who conduct SLRs and MAs do not always have good computational (and in fact computer science) resources to estimate how complete their literature search is before implementing the actual analysis. The issue is that databases... [Read more]  
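[Editor's note: one simple way to put a number on search completeness of the kind Evgeny describes is a capture-recapture estimate over two independent search strategies. This is a sketch under that assumption; the retrieval counts are invented for illustration, and real SLR tooling uses more elaborate methods.]

```python
def chapman_estimate(n1, n2, overlap):
    """Chapman capture-recapture estimator: given two independent
    samples of sizes n1 and n2 that share `overlap` items, estimate
    the size of the underlying population (here, the full body of
    relevant literature)."""
    return (n1 + 1) * (n2 + 1) / (overlap + 1) - 1

# Hypothetical: two independent search strategies retrieve 120 and
# 150 papers, with 90 papers found by both.
n1, n2, overlap = 120, 150, 90
estimated_total = chapman_estimate(n1, n2, overlap)
retrieved = n1 + n2 - overlap          # found by at least one search
coverage = retrieved / estimated_total  # estimated search completeness
```

A low estimated coverage is a warning that the meta-analysis rests on an incomplete retrieval, before any question of how the retrieved evidence is weighed.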
Replied by David Wojick on 06/29/2017 - 02:42
I am more concerned about interpretation than selection. When a hypothesis is controversial the proponents and opponents weigh the evidence differently. A good example from the physical sciences (which I am more familiar with) is the debate over wave versus particle theories of light, which lasted over 100 years. Proponents of the wave theory thought certain evidence was telling but the particle proponents disagreed, and vice versa. Many studies and experiments were done. How would MA have handled this? More generally, the scientific frontier is a realm of complex controversy. I see... [Read more]  
Replied by Jeffrey Alexander on 06/29/2017 - 04:49
I can see what the proposed program is getting at. Note that the RFI does not state that respondents are expected to offer methods for assigning quantitative confidence levels, so the term "confidence" is being used very loosely. In a sense, the RFI seems to be seeking ways to evaluate the degree to which individual studies are trustworthy, especially relative to other studies. To the program manager's credit, he also is looking to "unpack" this notion of confidence--so, for example, one could imagine a system where a certain study is noted as using rigorous methods but produces a... [Read more]  
Many thanks to David, Phil, Matt, Jeff, and Evgeny for discussing this in a public forum. For me this has been the most engaging debate in recent memory. Is this an important enough issue that it would be worth the effort to write a paper which takes the reader step-by-step through a meta-analysis and calculates how various upstream search strategies, points of view, initial assumptions, etc. would result in differing downstream confidence intervals or even conflicting conclusions? Aaron... [Read more]  
Replied by Belter, Christopher (NIH/OD/ORS) [E] on 06/29/2017 - 11:18
Aaron,   Such papers have already been written and have been available in the biomedical literature for some time. See, for example,   Higgins, J. P. T., & Green, S. (Eds.). (2011). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]: The Cochrane Collaboration. Available from   National Research Council. (2011). Finding What Works in Health Care: Standards for Systematic Reviews. Washington, DC: The National Academies Press... [Read more]  
Replied by Aaron Sorensen on 06/29/2017 - 07:56
Chris, Thanks for this list of papers. I'm wondering if you think they are detailed enough on the nuances raised in this conversation. In other words, do you think the debate would have been avoided if all participants had previously read the references you cite? Aaron [Read more]  
Replied by Belter, Christopher (NIH/OD/ORS) [E] on 06/29/2017 - 12:31
Yes and no. On the one hand, some of these publications like the National Academies book and the Cochrane Handbook do go into some detail about the effects of the methodology on the results and reliability of meta-analyses. And the short answer there is perhaps unsurprising: the more rigorous the meta-analysis is, the more reliable the results tend to be. And the opposite is also true: less rigorous data collection, synthesis, and analysis tends to result in less reliable recommendations.   But the whole question of systematic review and meta-analysis... [Read more]  
Replied by Stephen Fiore on 07/01/2017 - 16:29
Hi Everyone - I'm glad to see a lot of discussion around this topic.  Remember that this is a "Request for Information".  One of the purposes of these is to see what folks think about the topic and how to help evolve the topic as something researchable (and help clear up misconceptions or anything vague in the topic).  Given the variety of ideas/insights discussed in this thread, I'd encourage some of you to consider drafting a response to this RFI as I'm sure the program manager would appreciate your input on the topic. I'm attaching a PDF of the full RFI as it provides... [Read more]  
Replied by Stephen Fiore on 07/31/2017 - 18:00
Given that this email about the recent DARPA RFI generated a bit of discussion last month, I thought folks might be interested in the coverage it got in Wired.  It includes some comments by the PM as well as by some submitting to the RFI. Best, Steve DARPA WANTS TO BUILD A BS DETECTOR FOR SCIENCE -------- Stephen M. Fiore, Ph.D. Professor,... [Read more]  

Posted by Bornmann, Lutz on 06/27/2017 - 00:16
It would definitely be interesting to study empirically the quality of available publication lists. However, it is best practice in bibliometrics that publication lists of single researchers which are used for research evaluation purposes are validated by the researchers themselves. Thus, I expect higher quality lists from databases for which I know that researchers have produced/controlled their lists. Sent from my iPad. On 26.06.2017 at 20:37, William Gunn <> wrote: Please... [Read more]  

Posted by Julia Ingrid Lane on 06/26/2017 - 13:20
Hi, Those who are interested in working with the UMETRICS data being collected and integrated at the Institute for Research on Innovation and Science (IRIS) will be delighted to know that the first tranche of data are also now available through the Census Federal Statistical Research Data Center network. Additional tranches are expected to be added as the currently 62 university campuses that have committed to join IRIS (accounting for over half of federal university R&D expenditures) send data. ... [Read more]  

Posted by Bornmann, Lutz on 06/21/2017 - 08:26
Dear colleague,   You might be interested in the following paper:   Can the Journal Impact Factor Be Used as a Criterion for the Selection of Junior Researchers? A Large-Scale Empirical Study Based on ResearcherID Data   Early in researchers' careers, it is difficult to assess how good their work is or how important or influential the scholars will eventually be. Hence, funding agencies, academic departments, and others often use the Journal Impact Factor (JIF) of where the authors have published to assess their work and provide resources and... [Read more]  

Posted by Jing Liu on 06/19/2017 - 15:10
Hello, all, Could someone point me to literature (methodology) on looking at research productivity (number of publications and citations) based on English AND non-English journals? I know databases like Scopus have non-English journals, but they are not as well indexed or comprehensive as English journals. Thanks in advance. Jing Liu -- Jing Liu, PhD, Research Area Specialist Lead, Michigan Institute for Data Science, University of Michigan, Ann Arbor, MI. Tel: 7347642750. Email: ... [Read more]