Should the NIH allow AI for peer review?

Of course, as a partial aid. Instead, we get this, as outlined to me by correspondent Raghuveer Parthasarathy:

On the same day as your post on a Nature editorial on AI, the NIH blogged about its new policy statement, “The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process”; its blog post is “Using AI in Peer Review Is a Breach of Confidentiality.” It’s bizarre. There are legitimate arguments to be made for avoiding AI when reviewing grants, but these aren’t present; rather, the claim is that using AI would violate confidentiality:

“Reviewers are trusted and required to maintain confidentiality throughout the application review process. Thus, using AI to assist in peer review would involve a breach of confidentiality. In a recently released guide notice, we explain that NIH scientific peer reviewers are prohibited from using natural language processors, large language models, or other generative AI technologies for analyzing and formulating peer review critiques for grant applications and R&D contract proposals. Reviewers have long been required to certify and sign an agreement that says they will not share applications, proposals, or meeting materials with anyone who has not been officially designated to participate in the peer review process.”

The comments at the blog are good, with nearly everyone pointing out that one can host AI locally, avoiding sharing confidential information. The official policy (second link above) is as unclear as the blog post, prohibiting AI (“the NIH prohibits NIH scientific peer reviewers from using natural language processors, large language models, or other generative Artificial Intelligence (AI) technologies”) but then later referring to “online generative AI tools.” The lack of understanding, or of clear writing, is striking.

My own blog comment: “As others have pointed out, this policy rationale makes little sense, regardless of one’s thoughts on whether AI should or shouldn’t be used in peer review… Presumably the authors mean non-local AI services, but if so, that should be stated. Or do the authors mean ideas that come from unsourced others? (If that’s the case, why is confidentiality the issue?)”

He is absolutely correct, and dare we expect just a wee bit better from our scientific authorities?
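
For what it is worth, here is a minimal sketch of the local-hosting point the blog commenters raise: an open-weight model can run entirely on the reviewer’s own machine, so no application text is ever transmitted to an outside service. The model name and prompt below are my own illustrative choices, not anything drawn from the NIH notice or the correspondence:

```python
# A sketch of the "host it locally" idea from the blog comments, not anything
# the NIH notice endorses. The model name and prompt are illustrative
# assumptions; any locally hosted open-weight model would make the same point.
from transformers import pipeline

# The weights are downloaded once and then run on the reviewer's own hardware;
# at inference time no application text leaves the machine.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

prompt = (
    "List apparent strengths and weaknesses of the following specific aims:\n\n"
    "<aims text would be pasted here>"
)

result = generator(prompt, max_new_tokens=200)
print(result[0]["generated_text"])
```

Whether that would satisfy the NIH’s confidentiality concern is, of course, exactly the ambiguity the policy never resolves.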
