How Many Articles Does an Editor Review Each Day


Version 2. F1000Res. 2016; 5: 683.

The effects of an editor serving as one of the reviewers during the peer-review process

Marco Giordan

1Fondazione Edmund Mach, San Michele, Italy

Attila Csikasz-Nagy

2King's College London, London, UK

4Faculty of Information Technology and Bionics, Peter Pazmany Catholic University, Budapest, H-1083, Hungary

Andrew M. Collings

3eLife Sciences Publications Ltd, Cambridge, UK

Federico Vaggi

1Fondazione Edmund Mach, San Michele, Italy

Version Changes

Revised. Amendments from Version 1

In this new version, we try to address the excellent comments we received from reviewers. We fixed a few minor errors in figure labels (Figures 1 and 2), we tried to use the same terms more consistently, and we improved all the figures, especially Figure 4. Our previous version included an analysis of citation rates as measured on the 29th of February. We downloaded updated citation rates for all papers, and recalculated all the analyses with the new citations, comparing the different results (Figure 3).

Peer Review Summary

Review date | Reviewer name(s) | Version reviewed | Review status
2016 Oct 20 | Ivan Oransky and Alison Abritis | Version 2 | Approved
2016 Jul 26 | Ivan Oransky and Alison Abritis | Version 1 | Approved with Reservations
2016 Jul 5 | Bernd Pulverer | Version 1 | Approved with Reservations
2016 May 12 | Alesia Zuccala | Version 1 | Approved

Abstract

Background

Publishing in scientific journals is one of the most important ways in which scientists disseminate research to their peers and to the wider public. Pre-publication peer review underpins this process, but peer review is subject to various criticisms and is under pressure from growth in the number of scientific publications.

Methods

Here we examine an element of the editorial process at eLife, in which the Reviewing Editor usually serves as one of the referees, to see what effect this has on decision times, decision type, and the number of citations. We analysed a dataset of 8,905 research submissions to eLife since June 2012, of which 2,747 were sent for peer review. This subset of 2,747 papers was then analysed in detail.

Results

The Reviewing Editor serving as one of the peer reviewers results in faster decision times on average, with the time to final decision ten days faster for accepted submissions (n=1,405) and five days faster for papers that were rejected after peer review (n=1,099). Moreover, editors acting as reviewers had no effect on whether submissions were accepted or rejected, and a very modest (but significant) effect on citation rates.

Conclusions

An important aspect of eLife's peer-review process is shown to be effective, given that decision times are faster when the Reviewing Editor serves as a reviewer. Other journals hoping to improve decision times could consider adopting a similar approach.

Keywords: peer review, decision times, eLife

Background

Although pre-publication peer review has been strongly criticised – for its inefficiencies, lack of speed, and potential for bias (for example, see 1 and 2) – it remains the gold standard for the assessment and publication of research 3. eLife was launched to "improve [...] the peer-review process" 4 in the life and biomedical sciences, and one of the journal's founding principles is that "decisions about the fate of submitted papers should be fair, constructive, and provided in a timely manner" 5. However, peer review is under pressure from the growth in the number of scientific publications, which increased by 8–9% annually from the 1940s to 2012 6, and growth in submissions to eLife would inevitably challenge the capacity of their editors and procedures.

eLife (https://elifesciences.org/) was launched in 2012 to publish highly influential research across the life sciences and biomedicine; research articles in eLife are published within 15 broad subject areas, ranging from cell biology and neuroscience (with the most publications) through to ecology and epidemiology/global health (with fewer publications; https://elifesciences.org/search).

eLife's editorial process has been described before 7, 8. In brief, each new submission is assessed by a Senior Editor, normally in consultation with one or more members of the Board of Reviewing Editors, to identify whether it is appropriate for in-depth peer review. Traditionally, editors recruit peer reviewers and, based on their input, make a decision about the fate of a paper. Once a submission is sent for in-depth peer review, however, the Reviewing Editor at eLife has extra responsibilities.

First, the Reviewing Editor is expected to serve as one of the peer reviewers. Once the full submission has been received, it is assigned by staff to the Reviewing Editor who agreed to handle it, both as the handling editor and as one of the reviewers, unless the Reviewing Editor actively decides against serving as a referee. A common reason for not serving as one of the referees is workload: for example, a Reviewing Editor already handling two papers as an editor and a reviewer may be less likely to take on a third, unless they can take the third one without providing a review. Another common reason for not serving as one of the referees is when the paper is outside of the Reviewing Editor's immediate area of expertise: however, eLife editors are still encouraged to serve as a reviewer in these circumstances, as a review from this perspective can be informative in helping to assess a paper's broad appeal. We cannot rule out the possibility that some Reviewing Editors self-select the most interesting submissions to provide a review themselves, but the journal takes various steps to encourage the practice of providing a review wherever possible: for example, by tracking the trend on a monthly basis, by explaining this expectation when a Reviewing Editor first joins, and by asking for a justification when a Reviewing Editor decides against providing a review of his or her own.

Second, once the reviews have been submitted independently, the Reviewing Editor should engage in discussions with the other reviewers to reach a decision they can all agree with. Third, when asking for revisions, the Reviewing Editor should synthesise the separate reviews into a single set of revision requirements. Fourth, wherever possible, the Reviewing Editor is expected to make a decision on the revised submission without re-review. At other journals, the Reviewing Editor may instead be known as an Academic Editor or Associate Editor.

Since editors have extra responsibilities in eLife's peer-review process, here we focus our analysis on the effect of the Reviewing Editor serving as one of the peer reviewers, and we examine three outcomes: 1) the effect on decision times; 2) the effect on the decision type (accept, reject or revise); and 3) the citation rate of published papers. The results of the analysis are broken down by the round of revision and the overall fate of the submission. We do not consider the effect of the discussion between the reviewers or the effect of whether the Reviewing Editor synthesises the reviews or not.

Methods

We analysed a dataset containing information about 9,589 papers submitted to eLife since June 2012 in an anonymised format. The dataset contained the date each paper was first submitted, and, if it was sent for peer review, the dates and decisions taken at each step in the peer-review process. Data about authors had been removed, and the identity of reviewers and editors was obfuscated to preserve confidentiality. This dataset was obtained in collaboration with the editorial staff at eLife, who contributed to and collaborated on this manuscript.

As a pre-processing step, we removed papers that had been voluntarily withdrawn, or where the authors appealed a decision, as well as papers where the records were corrupted or otherwise unavailable. After clean-up, our dataset consisted of a total of 8,905 submissions, of which 2,747 were sent for peer review. For the rest of the paper, we focus our analysis on this subset of 2,747 papers, of which 1,405 had been accepted, 1,099 had been rejected, and the rest were still under consideration. The article types included are Research Articles (MS type 1), Short Reports (MS type 14), Tools and Resources (MS type 19), and Research Advances (MS type 15). Registered Reports are subject to a slightly different review process and have not been included.

Before discussing the results, we introduce a few definitions: the "eLife Decision Time" is the amount of time taken by eLife from the version of the submission being received until a decision has been reached for a particular round of review. The "Author Time" is the amount of time taken by the authors to revise their article for that round of revision. The "Total Time" is the time from first submission to acceptance, or the amount of time taken for eLife to accept a paper from the moment it was first received for consideration. By definition, the "Total Time" is equal to the sum of the "eLife Decision Time" and the "Author Time" across all rounds, including the initial submission step. "Revision Number" indicates the round of revision. We distinguish between Reviewing Editors who served as one of the reviewers during the first round of review and Reviewing Editors who did not serve as one of the reviewers (i.e., did not personally try to critically evaluate the scientific content of the article) with the "Editor_As_Reviewer" variable (True or False).

We illustrate the variables with a real example taken from the dataset (Table 1).

Table 1.

An example from the dataset.

MS Type | Revision Number | Received Date | Decision Date | eLife Decision Time | Total Time | Author Time | Editor_As_Reviewer
5 | 1 | 2012-06-20 | 2012-06-21 | 1 | N/A | N/A |
1 | 1 | 2012-06-27 | 2012-07-25 | 28 | N/A | 6 | True
  | 2 | 2012-09-05 | 2012-09-05 | 0 | 77 | 42 | True

The example submission from Table 1 was received as an "initial submission" (MS TYPE 5) on 20th June 2012. One day later, the authors were encouraged to submit a "full submission" (MS TYPE 1) that would be sent for in-depth peer review. The full submission was received on 27th June 2012, when the Reviewing Editor was assigned and reviewers were contacted. In this example, the Reviewing Editor also served as one of the reviewers (indicated by the "Editor_As_Reviewer" variable).

On 25th July (28 days later), the Reviewing Editor sent out a decision requesting revisions from the authors, who submitted their revised manuscript on 5th September. The paper was accepted on the same day that it was resubmitted. In this case, the total eLife Decision Time was 29 days (including the pre-review stage), the Author Time was 48 days, and the Total Time (eLife Decision Time plus Author Time) was 77 days. Total Time refers to the total time across all rounds and revisions for each paper, and therefore does not vary across rounds. Since we are focusing on the role of the editors in the peer-review process, in the rest of the paper we will ignore the time spent in the pre-review stage.
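
To make the bookkeeping concrete, here is a minimal sketch in Python (the language used for part of the analysis) that reproduces the arithmetic of this example; the dataframe layout and column names are illustrative assumptions, not the schema of the released dataset.

```python
import pandas as pd

# Hypothetical per-round records for the Table 1 example; the column names are
# assumptions made for illustration only.
rounds = pd.DataFrame({
    "ms_no":               [101, 101, 101],  # hypothetical manuscript id
    "elife_decision_time": [1, 28, 0],       # days from (re)submission to decision
    "author_time":         [0, 6, 42],       # days spent by the authors before that round
})

# Total Time = sum of eLife Decision Time and Author Time across all rounds.
per_paper = rounds.groupby("ms_no")[["elife_decision_time", "author_time"]].sum()
per_paper["total_time"] = per_paper.sum(axis=1)
print(per_paper)  # eLife Decision Time 29, Author Time 48, Total Time 77 days
```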

All of the statistical analyses were performed using R and Python. On the Python side, we used statsmodels, scipy, numpy, and pandas for the data manipulation and analysis. To plot the results we used bokeh, matplotlib, and seaborn. Details of all the analyses, together with code to reproduce all images and tables in the paper, are available in the companion repository of this paper: https://github.com/FedericoV/eLife_Editorial_Process.

To obtain the citation numbers, we used BeautifulSoup to scrape the eLife website, which provides detailed data about citations for each published paper.
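
The exact markup of the eLife pages is not described here, so the following is only a rough sketch of this kind of scraping; the URL pattern and the CSS class used to locate the citation count are hypothetical and would need to be replaced after inspecting the real pages.

```python
import requests
from bs4 import BeautifulSoup

def fetch_citation_count(article_url):
    """Return a citation count scraped from an article page, or None if not found.

    Both the URL passed in and the "citation-count" class below are illustrative
    assumptions; the real eLife markup must be inspected before scraping.
    """
    response = requests.get(article_url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    tag = soup.find(class_="citation-count")  # hypothetical element
    return int(tag.get_text(strip=True)) if tag else None

# Hypothetical usage:
# fetch_citation_count("https://elifesciences.org/articles/00000")
```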

Results and discussion

First, we examined the effect of the Reviewing Editor serving as one of the reviewers on the time from submission to acceptance or from submission to rejection after peer review (Total Time). When the Reviewing Editor served as a reviewer (Editor_As_Reviewer = True), the total processing time was ten days faster in the case of accepted papers and more than five days faster in the case of papers rejected after peer review (Figure 1). Both differences are statistically significant (see Table 2 for details). Intuitively, regardless of the role of the Reviewing Editor, rejection decisions are typically much faster than acceptance decisions, as they go through fewer rounds of revision and are not usually subject to revisions from the authors.

Figure 1.

Decision times are faster when the Reviewing Editor serves as one of the reviewers.

We compare the total time from submission to acceptance and submission to rejection after peer review. Light blue indicates submissions where the Reviewing Editor served as one of the peer reviewers, while orange indicates submissions where the Reviewing Editor did not serve as one of the reviewers (i.e., the editors had more of a supervisory role).

Table 2.

Effect of a Reviewing Editor serving as a reviewer (Editor_As_Reviewer) on eLife Decision Time and Author Time.

Decision_Type | Revision Number | Counts (False) | Counts (True) | eLife Decision Time (False) | eLife Decision Time (True) | M-W | Author Time (False) | Author Time (True) | M-W
Accept Full Submission | 0 | 5.000 | 12.000 | 21.600 | 23.333 | 1.000 | 4.000 | 4.833 | 0.915
Accept Full Submission | 1 | 440.000 | 650.000 | 8.802 | 6.949 | 0.006 | 52.209 | 51.168 | 0.402
Accept Full Submission | 2 | 115.000 | 164.000 | 4.339 | 3.238 | 0.175 | 16.487 | 14.652 | 0.402
Accept Full Submission | 3 | 6.000 | 11.000 | 3.667 | 2.455 | 0.747 | 6.833 | 9.909 | 0.811
Reject Full Submission | 0 | 461.000 | 616.000 | 36.104 | 30.981 | 0.000 | 6.182 | 6.430 | 0.267
Reject Full Submission | 1 | 10.000 | 10.000 | 22.200 | 31.600 | 0.148 | 64.900 | 101.800 | 0.402
Reject Full Submission | 2 | 1.000 | N/A | 17.000 | N/A | 0.000 | 60.000 | N/A | 0.000
Reject Full Submission | 3 | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A
Revise Full Submission | 0 | 705.000 | 946.000 | 36.018 | 31.053 | 0.000 | 5.786 | 5.744 | 0.402
Revise Full Submission | 1 | 129.000 | 182.000 | 19.651 | 15.747 | 0.006 | 66.930 | 64.110 | 0.915
Revise Full Submission | 2 | 6.000 | 12.000 | 7.833 | 7.167 | 0.747 | 21.333 | 35.250 | 0.730
Revise Full Submission | 3 | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A

(False and True refer to the Editor_As_Reviewer variable.)
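
The M-W columns in Table 2 report p-values for the False vs. True comparisons, presumably from Mann-Whitney tests. A minimal sketch of such a comparison with scipy, assuming a dataframe with hypothetical "decision_time" and "editor_as_reviewer" columns, might look like this:

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def compare_decision_times(df):
    """Two-sided Mann-Whitney comparison of decision times by Editor_As_Reviewer.

    `df` is assumed to hold one row per submission and round, with hypothetical
    columns "decision_time" (days) and "editor_as_reviewer" (True/False).
    """
    with_editor = df.loc[df["editor_as_reviewer"], "decision_time"].dropna()
    without_editor = df.loc[~df["editor_as_reviewer"], "decision_time"].dropna()
    _, p_value = mannwhitneyu(with_editor, without_editor, alternative="two-sided")
    return p_value
```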

One possible reason why submissions reviewed by the Reviewing Editor have a faster turnaround is that fewer people are involved (e.g., the Reviewing Editor in addition to two external reviewers, rather than the Reviewing Editor recruiting three external reviewers), and review times are limited by the slowest person. To test this, we built a linear model to predict the total review time as a function of editor type (whether the Reviewing Editor served as a reviewer or not), decision (accept or reject), and the number of unique reviewers across all rounds (see Table S1). Indeed, the total review time did increase with each reviewer (7.4 extra days per reviewer, p < 0.001) and the effect of a Reviewing Editor serving as one of the reviewers remained significant (–9.3 days when a Reviewing Editor served as one of the reviewers, p < 0.0001). Additionally, another possibility is that the papers with more reviewers were more technically challenging, and so required more review time to fully examine all the complexity.
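
A minimal sketch of this linear model with the statsmodels formula API, using the variable names that appear in Table S1 (the dataframe layout itself is an assumption):

```python
import statsmodels.formula.api as smf

def fit_total_time_model(df):
    """OLS model of Total Time as a function of editor type, decision and reviewer count.

    Mirrors the model summarised in Table S1; "Total_Time" and the predictor
    columns are assumed to be present in `df` under these names.
    """
    model = smf.ols(
        "Total_Time ~ C(Editor_As_Reviewer) + C(Decision_Type) + Unique_Reviewers",
        data=df,
    ).fit()
    return model.summary()
```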

Next, we examined this effect across all rounds of review (rounds 0, 1, 2) and decision types (accept, reject and revise). The results are shown in Figure 2 and summarised in Table 2. Again, we see that processing times are consistently faster across almost every round when the editor serves as one of the peer reviewers, except in the cases where the sample size was very small.

Figure 2.

Decision times are faster when the Reviewing Editor serves as one of the reviewers across different rounds of review.

Boxplots showing decision times for different rounds of review, depending on decision type and whether the Reviewing Editor served as one of the reviewers (light blue) or not (orange). Full data available in Table 2.

Interestingly, when the Reviewing Editor serves as one of the peer reviewers, the eLife Decision Time is reduced, but the time spent on revisions (Author Time) does not change. This suggests that the actual review process is more efficient when the Reviewing Editor serves as a reviewer, but the extent of revisions being requested from the authors remains constant.

We next examined the chances of a paper being accepted, rejected or revised when a Reviewing Editor served as one of the reviewers. We found no significant difference when examining the decision type on a round-by-round basis (Table 3) (chi-squared test, p = 0.33).

Table 3.

Effect of a Reviewing Editor serving as one of the reviewers on editorial decisions.

Editor_As_Reviewer | False | True | Totals
Accept Full Submission | 566 | 837 | 1403
Reject Full Submission | 472 | 626 | 1098
Revise Full Submission | 840 | 1140 | 1980
Totals | 1878 | 2603 | 4481
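
The reported chi-squared test can be reproduced directly from the counts in Table 3 with scipy; the sketch below shows that calculation (the paper may have computed the statistic on a slightly different breakdown of the decisions):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Decision counts from Table 3 (rows: Accept / Reject / Revise Full Submission;
# columns: Editor_As_Reviewer = False / True).
counts = np.array([
    [566, 837],
    [472, 626],
    [840, 1140],
])

chi2, p_value, dof, _ = chi2_contingency(counts)
# For these counts: chi2 is roughly 2.2 with 2 degrees of freedom and p is roughly
# 0.34, consistent with the p = 0.33 reported in the text.
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.2f}")
```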

To test whether eLife's acceptance rate changed over time, we built a logit model including as predictive variables the number of days since eLife began accepting papers and whether the Reviewing Editor served as one of the reviewers. In this model, we test whether the dependent variable (the probability that a paper is published by eLife) is affected by the number of referees reviewing a paper (Unique_Reviewers), whether the Reviewing Editor was also serving as a reviewer (Editor_As_Reviewer), and the number of days since eLife began accepting papers (Publication_Since_Start).

The only significant variable in our analysis was the number of days since publication (Publication_Since_Start), which had a very small (-0.003) but significant effect (p < 0.02) (see Table S2). That is to say that the chances of a paper submitted to eLife being accepted have declined over time. It is important, however, to highlight that we cannot say whether this trend reflects changes in eLife's acceptance criteria without assuming that the average quality of papers submitted to eLife has remained constant. As the volume of papers processed by eLife has greatly increased over three years, this is a very difficult factor to independently verify - as such, while we report all the analysis and include the full dataset as well as the scripts to reproduce them, we suggest caution when interpreting the results.
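
A minimal sketch of this logit model with statsmodels, using the predictor names from Table S2; the binary "Accepted" outcome column and the overall dataframe layout are assumptions:

```python
import statsmodels.formula.api as smf

def fit_acceptance_model(df):
    """Logit model for the probability of acceptance, in the spirit of Table S2.

    "Accepted" is an assumed 0/1 outcome column; the predictors mirror the
    variable names reported in Table S2.
    """
    model = smf.logit(
        "Accepted ~ C(Editor_As_Reviewer) + Publication_Since_Start + Unique_Reviewers",
        data=df,
    ).fit()
    return model.summary()
```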

The final outcome we examined was the number of citations (as tracked by Scopus) received by papers published by eLife. Papers accumulate citations over time, and, as such, papers published earlier tend to have more citations (Figure 3).

Figure 3.

Effect of different factors on citation rates.

We compare the effect of different parameters on the log1p citation rate, log(1 + number of citations / days since the paper was published). The values of the coefficients on the left reflect the citation numbers (as indexed by Scopus) on 29th February 2016. We now repeat this analysis using more recent citation numbers obtained on 11th July 2016, which are the values on the right.

We examined this outcome using a generalised linear model. As variables, we considered whether the Reviewing Editor served as a reviewer (Editor_As_Reviewer), the total amount of time taken to review the paper (Total_Decision_Time), as well as the number of reviewers examining the paper (Unique_Reviewers).

We take advantage of the ability to upload new manuscript versions to repeat our analysis using updated citation information. We report the coefficients calculated using the original citation dataset, obtained on 29th February 2016, as well as using more recent citation information obtained on 11th July. The coefficients estimated in both instances have overlapping confidence intervals, but, if we only look at the latest dataset, then the effect of the number of reviewers on citation is statistically significant (although barely; see Table S4) and papers with more reviewers tend to gather slightly more citations over time. The presence of a Reviewing Editor serving as a reviewer led to a small increase in citations using both citation datasets (see Table S4). Papers with longer total review times tended to be cited less (this result is small but significant). We counsel caution when interpreting these results: the confidence intervals are quite large, and the effect size is small (Figure 3, red dots).
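
As a sketch of this analysis, the transformed citation rate can be modelled with statsmodels. Ordinary least squares on the log1p-transformed rate is used here for simplicity, since the exact family and link of the generalised linear model are not spelled out in the text; the "Citations" and "Days_Since_Published" column names are assumptions.

```python
import numpy as np
import statsmodels.formula.api as smf

def fit_citation_model(df):
    """Model the log1p citation rate, in the spirit of Table S4.

    "Citations" and "Days_Since_Published" are assumed column names; the
    predictors (Editor_As_Reviewer, Total_Decision_Time, Unique_Reviewers)
    mirror those named in the text.
    """
    df = df.copy()
    # log1p citation rate: log(1 + citations / days since the paper was published)
    df["log1p_citation_rate"] = np.log1p(df["Citations"] / df["Days_Since_Published"])
    model = smf.ols(
        "log1p_citation_rate ~ C(Editor_As_Reviewer) + Total_Decision_Time + Unique_Reviewers",
        data=df,
    ).fit()
    return model.params
```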

One of the most noticeable effects of a Reviewing Editor serving as one of the peer reviewers at eLife is the faster decision times. However, serving as a Reviewing Editor and one of the reviewers for the same submission is a significant amount of work. As the volume of papers received by eLife has increased, the fraction of editors willing to serve as a reviewer has decreased. While in 2012 almost all editors also served as reviewers, that percentage decreased in 2013 and 2014. There are signs of a mild increase in the percentage of editors willing to serve as reviewers in 2015 (Figure 4).

Figure 4.

A decreasing proportion of Reviewing Editors served as one of the reviewers as submission volumes increased.

The average number of papers per editor, and the number of editors willing to act as a reviewer on papers, has decreased over time. The total number of papers and the number of active reviewers have both been increasing over time, although the number of active reviewers has not quite kept up with the number of papers.

Conclusions

Due to an increasingly competitive funding environment, scientists are under immense pressure to publish in prestigious scientific journals, yet the peer-review process remains relatively opaque at many journals. In a systematic review from 2002, the authors conclude that "Editorial peer review, although widely used, is largely untested and its effects are uncertain" 9. Recently, journals and conferences (e.g., 10) have launched initiatives to improve the fairness and transparency of the review process. eLife is one such example. Meanwhile, scientists are frustrated by the time it takes to publish their work 11.

We report the analysis of a dataset consisting of articles received by eLife since launch and examine factors that affect the duration of the peer-review process, the chances of a paper being accepted, and the number of citations that a paper receives. In our analysis, when an editor serves as one of the reviewers, the time taken during peer review is significantly decreased. Although there is additional work and responsibility for the editor, this could serve as a model for other journals that want to improve the speed of the review process.

Journals and editors should also think carefully about the optimum number of peer reviewers per paper. With each extra reviewer, we found that an extra 7.4 days are added to the review process. Editors should of course consider subject coverage and ensure that reviewers with different expertise can collectively comment on all parts of a paper, but where possible there may be advantages, certainly in terms of speed and easing the pressure on the broader reviewer pool, of using fewer reviewers per paper overall.

Insofar as the editor serving as a reviewer is concerned, we did not detect any difference in the chances of a paper being accepted or rejected, but we did notice a modest increase in the number of citations that a paper receives when an editor serves as one of the reviewers, although this effect is very small. An interesting result from our analysis is that a longer peer-review process or more referees does not lead to an increase in citations (note: using the updated citation information, there is an effect which is barely greater than zero – see Table S4, part two), so this is another reason for journals and editors to carefully consider the impact of the number of reviewers involved, and to strive to communicate the results presented in a timely manner for others to build upon. As eLife is a relatively young journal, we can verify whether the citation trend we observe will hold over longer periods as different papers accumulate citations.

Acknowledgements

We gratefully acknowledge discussions and input from Mark Patterson (eLife's Executive Director) and Peter Rodgers (eLife's Features Editor). We thank James Gilbert (Senior Production Assistant at eLife) for extracting data from the submission system for analysis.

We also thank Stuart King (Associate Features Editor at eLife) for extensive critical reading and reviewing of the manuscript as well as the analysis.

We also thank all of our reviewers for very useful critical feedback. We tried to incorporate their numerous suggestions in this revised version.

Notes

[version 2; referees: 2 approved]

Funding Statement

Andy Collings is employed by eLife Sciences Publications Ltd. eLife is supported by the Howard Hughes Medical Institute, the Max Planck Society and the Wellcome Trust.

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Supplementary material

Table S1.

Linear model for Total Time.

We built a linear model of the Total Time as a function of whether the Reviewing Editor served as a reviewer (Editor_As_Reviewer) (categorical variable, two levels), the final decision made on a paper (Decision_Type), and the number of unique reviewers. The revision time increased with the number of reviewers, but it decreased when a Reviewing Editor served as one of the reviewers.

Variable | coef | std err | z | P>|z| | [95.0% Conf. Int.]
Intercept | 85.7868 | 4.462 | 19.225 | 0.000 | 77.041 94.533
C(Editor_As_Reviewer)[T.True] | -9.2265 | 1.701 | -5.425 | 0.000 | -12.560 -5.893
C(Decision_Type)[T.Reject Full Submission] | -65.3500 | 1.622 | -40.285 | 0.000 | -68.529 -62.171
Unique_Reviewers | 7.1892 | 1.620 | 4.438 | 0.000 | 4.014 10.364

Table S2.

Linear model for the chances of a paper being accepted.

We used logit regression to estimate the chances of a paper being accepted as a function of whether the Reviewing Editor served as one of the reviewers (Editor_As_Reviewer), the number of unique reviewers, and the number of days between when a paper was published and the first published paper by eLife. The only significant variable is the days since eLife started accepting papers for publication (although the effect on the chances of a paper being accepted is very small).

Variable | coef | std err | z | P>|z| | [95.0% Conf. Int.]
Intercept | 0.6080 | 0.257 | 2.368 | 0.018 | 0.105 1.111
C(Editor_As_Reviewer)[T.True] | 0.0822 | 0.087 | 0.946 | 0.344 | -0.088 0.252
Publication_Since_Start | -0.0003 | 0.000 | -2.393 | 0.017 | -0.001 -6.08e-05
Unique_Reviewers | -0.0380 | 0.081 | -0.468 | 0.640 | -0.197 0.121

Table S3.

Effect of a Reviewing Editor serving as a reviewer on the number of rounds of revision.

We used a GLM with a log link function to model the number of revisions that a paper undergoes prior to a final decision as a function of whether a Reviewing Editor served as one of the reviewers (Editor_As_Reviewer), the number of unique reviewers, the decision type, and the number of days since eLife started accepting papers. The only variable that had a significant effect was the decision type, as papers that are rejected tend to be overwhelmingly rejected early on and thus undergo fewer rounds of revision.

Variable | coef | std err | z | P>|z| | [95.0% Conf. Int.]
Intercept | 0.6856 | 0.099 | 6.951 | 0.000 | 0.492 0.879
Editor_As_Reviewer[T.True] | -0.0134 | 0.033 | -0.403 | 0.687 | -0.078 0.052
C(Decision_Type)[T.Reject Full Submission] | -0.7762 | 0.035 | -22.216 | 0.000 | -0.845 -0.708
Publication_Since_Start | 1.058e-05 | 5.33e-05 | 0.198 | 0.843 | -9.4e-05 0.000
Unique_Reviewers | 0.0382 | 0.031 | 1.230 | 0.219 | -0.023 0.099
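
A minimal sketch of the model described above with statsmodels; a Poisson family with its default log link is assumed here, since the text only specifies a GLM with a log link, and the "Revisions" count column is an assumed name:

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_revision_count_model(df):
    """GLM with a log link for the number of revision rounds, in the spirit of Table S3.

    The Poisson family (with its default log link) is an assumption; the
    predictors mirror the variable names reported in Table S3.
    """
    model = smf.glm(
        "Revisions ~ Editor_As_Reviewer + C(Decision_Type) + Publication_Since_Start + Unique_Reviewers",
        data=df,
        family=sm.families.Poisson(),
    ).fit()
    return model.summary()
```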

Table S4.

Citation rates.

Linear model for the citation rates after log1p transform. We repeated the analysis using Scopus citation information collected on two different dates about 6 months apart. We report the coefficients for both datasets:

February 29th 2016:

Variable | coef | std err | z | P>|z| | [95.0% Conf. Int.]
Intercept | 0.0038 | 0.002 | 2.538 | 0.011 | 0.001 0.007
Editor_As_Reviewer[T.True] | 0.0025 | 0.001 | 4.465 | 0.000 | 0.001 0.004
Total_Decision_Time | -2.812e-05 | 5.27e-06 | -5.338 | 0.000 | -3.84e-05 -1.78e-05
Unique_Reviewers | 0.0007 | 0.001 | 1.241 | 0.215 | -0.000 0.002

July 11th 2016:

Variable | coef | std err | z | P>|z| | [95.0% Conf. Int.]
Intercept | 0.0052 | 0.002 | 2.448 | 0.014 | 0.001 0.009
Editor_As_Reviewer[T.True] | 0.0024 | 0.001 | 3.059 | 0.002 | 0.001 0.004
Total_Decision_Time | -3.081e-05 | 7.42e-06 | -4.155 | 0.000 | -4.54e-05 -1.63e-05
Unique_Reviewers | 0.0017 | 0.001 | 2.302 | 0.021 | 0.000 0.003

References

1. Smith R: Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99(4):178–182. 10.1258/jrsm.99.4.178 [PMC free article] [PubMed] [CrossRef] [Google Scholar]

4. Schekman R, Patterson M, Watt F, et al.: Scientific publishing: Launching eLife, Part 1. eLife. 2012;1:e00270. 10.7554/eLife.00270 [PMC free article] [PubMed] [CrossRef] [Google Scholar]

5. Schekman R, Watt F, Weigel D: Scientific publishing: Launching eLife, Part 2. eLife. 2012;1:e00365. 10.7554/eLife.00365 [PMC free article] [PubMed] [CrossRef] [Google Scholar]

6. Bornmann L, Mutz R: Growth rates of modern science: A bibliometric analysis based on the number of publications and cited references. 2014; arXiv:1402.4578. Reference Source [Google Scholar]

7. Schekman R, Watt F, Weigel D: Scientific publishing: The eLife approach to peer review. eLife. 2013;2:e00799. 10.7554/eLife.00799 [PMC free article] [PubMed] [CrossRef] [Google Scholar]

8. Schekman R, Watt FM, Weigel D: Scientific publishing: A year in the life of eLife. eLife. 2013;2:e01516. 10.7554/eLife.01516 [PMC free article] [PubMed] [CrossRef] [Google Scholar]

9. Jefferson T, Alderson P, Wager E, et al.: Effects of editorial peer review: a systematic review. JAMA. 2002;287(21):2784–2786. 10.1001/jama.287.21.2784 [PubMed] [CrossRef] [Google Scholar]

10. Francois O: Arbitrariness of peer review: A Bayesian analysis of the NIPS experiment. 2015; arXiv:1507.06411. Reference Source [Google Scholar]

11. Powell K: Does it take too long to publish research? Nature. 2016;530(7589):148–151. 10.1038/530148a [PubMed] [CrossRef] [Google Scholar]

12. Jones E, Oliphant T, Peterson P, et al.: SciPy: Open Source Scientific Tools for Python. 2001. Reference Source [Google Scholar]

13. Waskom M, Botvinnik O, Okane D, et al.: Seaborn Plotting Library. 2016. 10.5281/zenodo.45133 [CrossRef] [Google Scholar]

14. Seabold S, Perktold J: "Statsmodels: Econometric and statistical modeling with Python." Proceedings of the 9th Python in Science Conference. 2010. Reference Source [Google Scholar]

15. Vaggi F: eLife_Editorial_Process: Review_Version. Zenodo. 2016. Data Source

Referee response for version 2

Ivan Oransky

1Retraction Watch, The Center For Scientific Integrity, New York, NY, USA

Alison Abritis

1Retraction Watch, The Center For Scientific Integrity, New York, NY, USA


Competing interests: AA is an employee of The Center For Scientific Integrity, which operates Retraction Watch. IO is executive director of The Center For Scientific Integrity.

Thanks to the authors for responding thoughtfully to all of our suggestions and questions. We are pleased to approve this article.

We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.

Referee response for version 1

Ivan Oransky

1Retraction Watch, The Center For Scientific Integrity, New York, NY, USA

Alison Abritis

1Retraction Watch, The Center For Scientific Integrity, New York, NY, USA


Competing interests: AA is an employee of The Center For Scientific Integrity, which operates Retraction Watch. IO is executive director of The Center For Scientific Integrity.

Thanks for the opportunity to review this manuscript. This is a well-done report, and the conclusions follow from the results. We would recommend accepting the article once all clarifications and revisions have been made, or the lack of doing so adequately justified.

A. While brevity is generally to be admired, we would recommend a bit more detail about the statistical analyses. These are critical, but are reduced to three sentences and a referral to the programming language through an external link. We would suggest that the main text include a (brief) discussion of the analyses done, and the rationale for them, rather than have those relegated to the external link.

B. The interpretation of the findings seems to be attributing causal factors - an A leads to B consideration - for which the control of variables is too limited. We believe that interpreting these as associations would be more consistent with the findings.

Consider the statement: "Journals and editors should also think carefully about the optimum number of peer reviewers per paper. With each extra reviewer, we found that an extra 7.4 days are added to the review process." Given that there appeared to be no inclusion of either article quality or complexity in the evaluation, is it not possible that issues within the article itself required the use of additional reviewers (i.e. a B leads to A perspective)? Perhaps extra reviewers with specific expertise were required, or concerns with potential problems in the manuscript led to consultations with other reviewers. It does not seem safe to presume that it was the addition of the reviewer that added extra days.

Similarly, the study centers around the role of the editor in the reviewing process, and the discussion suggested that the involvement of the reviewing editor as a peer reviewer expedited the process. There was little discussion of other factors that could have accounted for the statistical results. For instance, perhaps the reviewing editor selected articles that piqued his or her interest, or were more clearly presented. Perhaps the reviewing editor elected to review at times more convenient to his or her workload, while other reviewers did not have such an option. The reviewing editor might select to review articles perceived to be of greater or timelier value to the journal itself, which may increase the speed of the review.

Specific questions:

A. According to their methods section, the authors state that they began with an initial N=9,589. After purging other articles they had an N=8,905. They then isolated a total of 2,750 articles subjected to the peer review process for the study: "For the rest of the paper, we focus our analysis on this subset of 2,750 papers, of which 1,405 had been accepted, 1,099 had been rejected, and the rest [which would equal 246] were still under consideration."

Looking at the Excel spreadsheet for citation counts, there are 1407 lines with entry numbers. For peer-reviewed papers, the Excel spreadsheet has 2747 entries (after removing duplicate entries based on the MS NO column) for manuscripts numbered up to 12621. The Excel spreadsheet for unique reviewers has 2747 entries, with a final MS NO of 12621.

The numbers do not appear to match, and there is no explanation for that in the methods. Exactly how many manuscripts were reviewed, how many rejected and why, and how many were tracked?

B. In the Excel spreadsheet for citations, the second column was titled "Citations," but these figures do not appear to have any relation to the Scopus citation numbers. What numbers were used for the actual citation counts?

We also note that we find the suggestions by other reviewers compelling, and would be happy to review a revision of this manuscript should that be considered useful.

We have read this submission. We believe that we have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however we have significant reservations, as outlined above.

Referee response for version 1

Bernd Pulverer

1European Molecular Biology Organization (EMBO), Heidelberg, Germany


Competing interests: BP is head of scientific publications at EMBO and chief editor of The EMBO Journal.

Giordan et al analyzed 2,750 manuscripts sent out at the journal eLife for peer review (of which 1,405 ended up published). The authors compare papers in which the editor functions as 'reviewing editor', that is, as one of three referees. Globally, and at almost every decision stage, the process is accelerated significantly if the reviewing editor functions as one of the referees, with no or very small impact on author revision time and citation rates, respectively.

The authors calculate that every additional external referee adds 7.4 days to the process and advise that journals strive to balance the need for covering all the required expertise carefully with the negative effect on the speed of evaluation.

The quality and speed of the peer review process are topics of active debate. Despite widespread criticism, publication in certain peer reviewed journals continues to directly impact research assessment by both funders and institutions. The quality and fairness of the process is therefore paramount not only to assure the reliability of the literature, but also to inform research assessment in a balanced manner. Notwithstanding the slow delivery of this particular referee report, speed matters in particular in fast moving and highly competitive research areas like the biosciences.

Quantitative evidence that well-defined aspects of an editorial process have a positive effect on quality and/or speed is therefore of significant importance.

The authors have carefully analyzed a decent-sized dataset and report a statistically significant effect of a well-defined change in the editorial process, while also showing evidence that this change has no detrimental effect on the quality of the editorial assessment, at least as far as the outcome is analyzed (here, in terms of two parameters: revision time and citation rate).

While this manuscript makes a significant contribution, I have a number of suggestions I would invite the authors to consider in revision:

Textual:

  1. Abstract/main text; Background: It is not only the growth of the number of publications that puts the system under pressure (after all, in principle the editorial/peer review process may well be able to scale with increased research output), but rather the increased pressure to publish in a small number of high Impact Factor journals in an attempt to optimize chances of a positive impact on research assessment.

  2. Please introduce the journal eLife, including the scientific scope, as different communities have widely different peer review and citation cultures and this will likely affect the findings reported here.

  3. Abstract: Results. As presented, I found it confusing that the first sentence describes an apparently sizeable difference between accepted (10 days faster) and rejected (5 days faster), while the next sentence states 'there was no effect on whether submissions were accepted or rejected'.

  4. Abstract: the dataset is described as consisting of an analysis of 8,905 submissions, when in reality the 2,750 papers sent for review were analyzed. This could be formulated more clearly.

  5. I would advise for clarity to remove the 'False' and 'True' nomenclature and change to 'reviewing editor' and 'editor' assessment or similar.

Analysis:

  1. It is unclear to me if the authors can exclude any biases in terms of which manuscripts were selected for formal review by an editor vs. outside-only refereeing. Have the authors attempted to assess possible bias? For example, maybe the reviewing editors tend to review the manuscripts themselves that strike them as the more interesting ones, or maybe certain subject areas are preferentially subject to one approach.

  2. I am missing a clear definition of what editorial peer review is and what it is not. It is clear that this is likely not a completely binary situation and the authors do not describe how the decisions were parsed so clearly into the two groups (on p3, 'more of a supervisory role' vs. less of such a role sounds quite vague).

  3. Why were appealed manuscripts removed from the analysis? This may introduce a bias as perhaps erroneous decisions are excluded. How many appeals were excluded?

  4. It is unclear to me if all papers invariably had three referees (i.e. 3 outside or 2 outside + reviewing ed.). I presume some papers only had two referees. Were they excluded? If not, how did these score for speed, revision time and citation?

  5. The definition of 'Total Time' on p3 is unclear: it is stated to be both 'first submission-acceptance' and 'first submission to publication': which one is it?

  6. Please state that Scopus citations were assessed when first introducing the topic. I would recommend using the same time window for all papers (e.g. 12 months after publication) as this renders citation rates more comparable. Why was the eLife website scraped for citation rates, and not the main Scopus database – that information may in principle be more reliable.

  7. The 10 days (accepted) vs. 5 days (rejected) faster: is this simply the additive effect of two rounds vs. one round of review?

  8. Please include basic stats data in the figure legends – in particular fig 2, where the numbers will decrease dramatically for 'revision 1' and 'revision 2'.

  9. It is unclear to me if reviewing editors were invariably faster than the outside referees. It would be useful to quantify this and, assuming there is a striking difference, to speculate why – is it the individuals selected by eLife or due to policing or incentives provided by the journal? After all, similar strategies could be applied to outside referees. On a related point, it would be useful to quantify if the reports by the reviewing editors were qualitatively different (e.g. length). One assumes the ultimate decision on the manuscript was also much better correlated with the reports by reviewing editors than those of the outside referees.

  10. I am confused: in fig 2 'Reject full submission' in revision 0 and revision 1 is slower than 'accept'. This seems to be the opposite of fig 1 and in fact less intuitive than the results in fig 1. Since manuscripts are rarely re-reviewed (see p1), are all the datapoints displayed in 'revision 1' and 'revision 2' for re-review processes?

  11. For fig 4, I would advise plotting the manuscript load per editor.

Non-essential further-reaching analysis (suggestions):

  1. It would have been useful to measure and present the acceptance/rejection rates of manuscripts assessed by three outside referees compared with two referees + reviewing editor.

  2. It would have been useful to quantify the % of agreement between the reviewing editor and the outside referees, compared with agreement between the outside referees.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Referee response for version 1

Alesia Zuccala

1Royal School of Library and Information Science, University of Copenhagen, Copenhagen, Denmark


Competing interests: No competing interests were disclosed.

I enjoyed reading this article, and I liked the fact that it was concisely written. I would just like to share a few comments.

The authors of this paper have chosen to test the peer review process for eLife, which is an electronic journal for the life and biomedical sciences. They note that two papers have previously described the editorial process of eLife; however, for their own study, I think it would have been useful to include a link to the eLife website (https://elifesciences.org/). I was curious about whether or not it was an open access journal, and I looked for the website to obtain this information. More importantly, I also looked to the website because I wanted to know how many persons serve as Senior Editors or sit on the editorial board of eLife. What is interesting here is that the term "editor" for this journal is stretched to include one Editor-in-Chief, three Deputy Editors, thirty-two Senior Editors and a 282-member Board of Reviewing Editors. The peer review system for this journal is quite different from the 'traditional' journal, but to be more precise, it differs because "Reviewing Editors" are specialists who have agreed to review for the journal on a regular basis, and may in some cases call upon additional 'outside' reviewers.

What we do not know from this paper is whether or not two or more of the 282 Reviewing Editors sometimes choose to review the same paper. At the eLife website, the following is noted: "The Reviewing editor usually reviews the article him or herself, calling on one or two additional reviewers as needed". Are the additional reviewers always from the outside? If not, how would this change the authors' hypothesis related to the 'effects of an editor serving as one of the reviewers'?

The methods used for the data analysis are explained very well, with the exception of one detail: How did the authors acquire the initial dataset of 9,589 papers? This information is presented in the 'Acknowledgements' section, but could have also been added to the Methods section, for more clarity.

The graphs related to the authors' findings are clear and present interesting information, but I am not certain how the citation data were collected from Scopus for the peer-reviewed papers in eLife, and whether or not 'citation windows' were used for the papers depending on the year in which they were published. Essentially the authors are correct in saying that "papers accumulate citations over time, and, as such, papers published earlier tend to have more citations", hence citation windows are used to correct for this. The highest rates of citation (especially in the life sciences and biomedicine) will appear within three-to-five years following an article's date of publication. For this reason, bibliometricians usually count citations within this three-to-five year time-frame to determine an article's initial impact. Since the articles used in this study had been "submitted to eLife since June 2012", the authors should have focused on three things: 1) the involvement of a Reviewing Editor as a peer reviewer or not, 2) the number of days between the submitted paper's acceptance and publication, and 3) the papers' citation rate following 3-5 years after final publication.

I have read this submission. I believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard.


Articles from F1000Research are provided here courtesy of F1000 Research Ltd



Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4962294/
