
Feedback on reviewer recommendation notebook #1649

Open
@kbarnhart


I just tried out the reviewer recommendation notebook. Very cool and very easy to use. My feedback covers two topics: (A) assessment of the results and (B) the interface.

A. Assessment of the results.

I don't know much about NLP, so I'm not sure how to provide feedback in the most constructive way. I'll try.

I tried the tool out on the dorado submission (openjournals/joss-reviews/issues/2568) and compared its results with my standard practice for finding reviewers (a combination of people I know are working in related areas, author recommendations, GitHub search, and the reviewer list search). I would rate the notebook results as neutral in that all of the recommended reviewers have expertise in water (as opposed to, say, transportation or astrophysics), so I would say the results provided a good first-order match.

However, in finding reviewers there are some additional things I take into account. For example, this was a submission about water and a passive particle method, so I sought out a reviewer who knows about particle methods. It was also a submission about water and earth surface processes, so I sought out a reviewer who works in that area. I suspect these higher-order considerations often include the methods used and the way the package is applied, and that these aspects of a submission are very important for finding appropriate reviewers.

I don't know how this will compare the next time I try the notebook on a new paper, but I will report back then.

B. Interface feedback

This feedback is minor and doesn't need to be acted on, but here are some ideas in case anyone wants to improve the interface.

In the Binder interface, the most useful improvement would be some mechanism for exporting the results (.txt or email). Another idea would be to require only the DOI and then create/fetch the PDF internally.
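To make the export/DOI idea concrete, here is a minimal sketch, assuming the recommendations end up in a pandas DataFrame and using the public Crossref API to resolve a DOI. The table, its column names, and the example DOI are hypothetical, not the notebook's actual interface:

```python
import pandas as pd
import requests

# Hypothetical recommendation output: reviewer handles plus a similarity score.
recommendations = pd.DataFrame(
    {"reviewer": ["@alice", "@bob"], "similarity": [0.83, 0.79]}
)

# Idea 1: export the results to a .txt file that can be pasted into a
# pre-review issue or attached to an email.
recommendations.to_csv("reviewer_recommendations.txt", sep="\t", index=False)

# Idea 2: given only a DOI, pull the submission's metadata (and a full-text
# link, when the publisher exposes one) from the public Crossref API instead
# of asking the editor to supply a PDF by hand.
doi = "10.21105/joss.02568"  # assumed DOI for the dorado submission
resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

print(work["title"][0])
pdf_links = [
    link["URL"]
    for link in work.get("link", [])
    if link.get("content-type") == "application/pdf"
]
print(pdf_links or "no direct PDF link in Crossref metadata")
```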

If the interface were streamlined, I could envision two interface options that would work really well for how I handle submissions.

  1. Being able to call @whedon recommend 10 reviewers, which would list the top ten recommended reviewers on a pre-review issue.

  2. Something like the test paper generation page: you would plug in a repo, it would then find/build the PDF, compare it against the corpus, and print out a table of recommendations.

tag: @arfon
