listserv messages

Posted by Dave Rench McCauley on 10/24/2015 - 13:17
Hello all,   I wanted to draw everyone's attention to the public input request regarding the reauthorization of the America COMPETES Act. The request is active until October 30. Of particular note to this group is the following question:   "What factors should federal agencies consider to measure the impact and success of the federal STEM education portfolio, and to decide whether to expand, modify, or replace individual programs given limited resources?"       -- Dave Rench McCauley, Ph.D.... [Read more]  
Replied by Dave Rench McCauley on 10/27/2015 - 11:30
Hello again everyone,   I apologize; the link I originally sent appears not to work. Please use this link to access the request:   http://www.commerce.senate.gov/public/index.cfm/pressreleases?ID=B7CF9776-704E-4D91-BCD7-7C597616C877&dm_i=1ZJN,3RCFY,E29NFR,DJDJF,1   -- Dave Rench McCauley, Ph.D. AAAS Science & Technology Policy Fellow | Solar Energy... [Read more]  

Posted by Loet Leydesdorff on 10/24/2015 - 05:59
Large Networks among Institutional Addresses: InstPlus.EXE This website provides a routine to generate a large network of institutional addresses based on co-authorship relations in a set of papers downloaded from the Web of Science (v5). There is no limit on the number of institutions in the input. The output is written as a file “mtrx.net” in the Pajek format (edge list). A further routine generates a second matrix “fmtrx.net” containing the same information as “... [Read more]  
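For readers curious what such a routine does under the hood, here is a minimal Python sketch of the basic step, not the InstPlus.EXE implementation itself: it assumes a tab-delimited Web of Science export with addresses in a "C1" column, links every pair of institutions that co-occur on a paper, and writes the weighted network as a Pajek file. The file names and parsing rules are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def read_institutions(path):
    """Yield the set of institutions on each record of a tab-delimited
    Web of Science export; addresses are assumed to sit in a 'C1' column."""
    with open(path, encoding="utf-8-sig") as fh:
        header = fh.readline().rstrip("\n").split("\t")
        col = header.index("C1")
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            if len(fields) <= col:
                continue
            # Take the first comma-separated token of each address as the institution.
            insts = {a.split(",")[0].strip()
                     for a in fields[col].split(";") if a.strip()}
            if insts:
                yield insts

def write_pajek(records, out_path="mtrx.net"):
    """Count co-occurrences of institution pairs and write a weighted Pajek file."""
    weights = Counter()
    ids = {}
    for insts in records:
        for inst in insts:
            ids.setdefault(inst, len(ids) + 1)
        for pair in combinations(sorted(insts), 2):
            weights[pair] += 1
    with open(out_path, "w", encoding="utf-8") as out:
        out.write(f"*Vertices {len(ids)}\n")
        for inst, idx in sorted(ids.items(), key=lambda kv: kv[1]):
            out.write(f'{idx} "{inst}"\n')
        out.write("*Edges\n")
        for (a, b), w in weights.items():
            out.write(f"{ids[a]} {ids[b]} {w}\n")

write_pajek(read_institutions("savedrecs.txt"))  # hypothetical export file
```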

Posted by Gregoire Cote on 10/22/2015 - 14:04
We are expanding our bibliometrics team!   Sorry for cross-posting. If you know interested candidates, please let them know!   Science-Metrix is a leading international provider of bibliometric services, based in Montreal, Canada, but with a worldwide client base. We specialize in the assessment of science and technology activities, working with the government, education, research performing organization and research funding sectors. In the process of developing new indicators and new quantitative methods, our team... [Read more]  

Posted by Susan Fitzpatrick on 10/19/2015 - 11:19
As someone who has selected and managed review panels (somewhat broad in expertise) for a private foundation for 25 years – I agree that we need more research and less intuition – but I must introduce some notes of caution about interpreting results.  1) There are good reasons for having face-to-face conversations even if the effects on outcomes might be small.   The key is articulating the reasons.  2) The funding organization's review criteria and outcome goals matter – a mismatch in outcomes when reviewers impose their own criteria is a common occurrence.   3) Reviewer selection... [Read more]  
Replied by Stephen Gallo on 10/19/2015 - 12:59
Thank you, Dr. Fitzpatrick, for your comments. And I agree, there are some challenges with teleconferencing and video teleconferencing that result in subtle but nevertheless important effects on review outcomes. We are looking into lessons learned from the team science community to explore the reasons behind these effects.    And certainly more work must be done to understand the variables that affect the decision-making processes of reviewers and how well these decisions align with programmatic funding goals. If peer review can be improved upon, it is crucial that this... [Read more]  
Replied by Susan Fitzpatrick on 10/19/2015 - 12:08
Absolutely agree!    We need less ad-hocing (is that a word?) and less intuition.    While we need to use processes to achieve desired ends – that means we have to be very clear on the goals.   Thanks for your efforts!   We are always looking for evidence and scholarship to help us be better at what we do.   Because I am now notorious on this listserv for anecdotes, here’s one:    I was in a room where a reviewer on a panel supporting research to identify new treatments for brain tumors asked – what we really care about is the excellence of the science, right?   It does not matter if... [Read more]  
Replied by Zemankova, Maria on 10/19/2015 - 19:44
Interesting reading.   However, I find the NIH study design quite strange. Why not have the same number of proposals and panelists in all 4 panels? Why not have the same proposals in all 4 panels, with 2 in-person and 2 teleconferencing panels – although even these numbers would be rather small for any significant confidence levels.   Rather than focusing on the scores, it would perhaps be more informative to compare the ranking of the proposals across the panels (if the proposals are the same).   I also would not use PIs' names from... [Read more]  
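To make the suggestion concrete, here is a minimal Python sketch of such a ranking comparison, assuming the same proposals were scored by an in-person and a virtual panel. The proposal names and scores below are invented, and, per Maria's point, with so few proposals the confidence level would be weak.

```python
from scipy.stats import kendalltau

# Invented scores for five proposals reviewed by two hypothetical panels
# (lower score = better, as in NIH scoring).
in_person = {"P1": 2.1, "P2": 3.4, "P3": 1.8, "P4": 4.0, "P5": 2.9}
virtual   = {"P1": 2.5, "P2": 3.1, "P3": 1.9, "P4": 3.8, "P5": 3.3}

proposals = sorted(in_person)          # same proposals in both panels
tau, p = kendalltau([in_person[k] for k in proposals],
                    [virtual[k] for k in proposals])
# tau near 1 means the panels ranked the proposals almost identically,
# even if their raw scores drifted apart; with n = 5 the p-value is weak.
print(f"Kendall's tau = {tau:.2f} (p = {p:.3f})")
```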
Replied by Holly Falk-Krzesinski on 10/20/2015 - 08:01
As part of the research, we need to also examine the benefits (or not) to the reviewers who participate in person.  Anecdotally, I have made important connections, and even gained one collaborator, via the people I've met while serving as a reviewer.     Regards, Holly       Holly J Falk-Krzesinski, PhD | Vice President, Strategic Alliances Global Academic Relations | Elsevier 453 Cedar Ct S | Buffalo Grove, IL 60089 | USA... [Read more]  
Replied by Stephen Gallo on 10/20/2015 - 05:33
Thank you, Maria, for providing your perspective from your NSF experience. This was very informative!    I completely agree that reviewer convenience, carbon footprint, and cost are all extremely important variables. However, while the NIH study is relatively small, we have seen in our data subtle but statistically significant differences in discussion time and scoring shifts between teleconference and face-to-face review panels (BMJ Open 2015;5:e009138). How this translates to reviewer/panel decision-making processes (which so far are poorly described in the literature) is... [Read more]  
Replied by Stephen Gallo on 10/20/2015 - 05:52
Excellent point Holly! In a recent poll of AIBS reviewers, 70 percent felt that their participation in peer review was particularly useful in exposing them to emerging scientific areas and technologies (http://www.the-scientist.com/?articles.view/articleNo/35608/title/Opinion--Learning-from-Peer-Review/). No doubt this exposure is enhanced by face-to-face interactions and likely leads to some collaborations.     Steve   [Read more]  
Replied by Susan Fitzpatrick on 10/20/2015 - 05:40
Indeed – this attitude was at one time an academic norm – one participated in peer review (be it grants or publications) because of a sense of professionalism to the community (when asked to review it was seen as a sign of recognition and prestige) and because it was a great way to keep abreast of emerging ideas and findings.    It is also one way to advance particular scientific approaches and ways of thinking.    Perhaps removing the “social” component (of meeting, breaking bread together, forming community with a shared purpose) extracts a cost external to the review process but... [Read more]  
Replied by Susan Fitzpatrick on 10/20/2015 - 06:08
Agree with all Maria wrote – we also use a process that departs from the “score” approach – triaging via ranking, discussion, and final recommendations – but then JSMF is a boutique, not a retail, funder.    Our advisory panels report that they find the process rewarding.   But then we do always look at what studies we can find to improve our processes.    We also run some experiments – albeit with a very small n.   Susan M. Fitzpatrick, Ph.D. President, James S. McDonnell Foundation Past-President, Association for Women in Science   awis.org Visit JSMF... [Read more]  
Replied by Richard J Bookman on 10/20/2015 - 06:12
And I would add to this, based on many years on various NIH and NSF study sections, that nothing improves your ability to instantly recognize great grants and great grant writing like reviewing and learning about 20-30 at a time. (Of course, the same can be said for recognizing bad ones…)    For me (an anecdote of 1, to harmonize w/ Susan…), reviewer experience was instrumental in improving my grant-writing skills.  As a mentor, I always encourage post-docs and junior faculty to get as much reviewer experience as early as they can.   Clearly, that... [Read more]  
Replied by Stephen Gallo on 10/20/2015 - 12:22
Thanks Mike and Richard. It would be interesting to get a sense of how much reviewer learning goes on in face-to-face versus teleconference meetings of the same size.    [Read more]  
Replied by Norris Krueger on 10/20/2015 - 10:29
There are, of course, big positives to virtual panels -- having a strong culture in a group is not necessarily a plus. Is it easier to dissent, especially against a strong personality (and/or Big Name Person)?   Norris  "How can I help you to grow entrepreneurs?"  Norris Krueger, Ph.D. Entrepreneurship Northwest      208.440.3747 Blog: http://bit.ly/NKblog2 Presentations: http://bit.ly... [Read more]  
Replied by Joshua Rosenbloom on 10/20/2015 - 16:58
This conversation is very much at the heart of what SciSIP needs to be elucidating.     The findings about how different review formats lead to different process dimensions are intriguing, and Maria’s observation about the need to actually experiment with parallel reviews of the same proposals is an excellent idea, but one that neither NSF nor NIH is likely to be willing to do (despite their advocacy of evidence-based policy).   Richard’s observation that “…nothing improves your ability to instantly recognize great grants…” is probably something most of us... [Read more]  

Posted by Stephen Fiore on 10/17/2015 - 22:08
Good Evening Everyone - Here is a new discovery for the ‘citizen science’ (CS) files.  For those studying CS from the standpoint of collaboration and science, the publication resulting from this ‘find’ has only three citizen scientists among its authors (about 10% of the entire author list).  So it would be interesting to see the varying levels and/or forms of collaboration and contribution in this group (e.g., what did these three do differently from the rest of the 'planet hunting' citizen scientists?).  Below the arXiv link, I've cut-and-pasted the key text from the actual scientific article that... [Read more]  
Replied by Kevin G Crowston on 10/20/2015 - 16:04
My group has been studying talk in Planet Hunters for a couple of years now. I don't know the details of this paper, but science team members interact with the participants in these (and other) fora regularly. So I doubt they'd have to analyse the discussions to know who had contributed to the discovery; they'd already know from working with them.    Kevin Crowston | Distinguished Professor of Information Science | School of Information Studies Syracuse University  348 Hinds Hall, Syracuse, New York 13244 t... [Read more]  
