
    Six statistical issues in scientific writing that might lead to rejection of a manuscript

    2022-06-04 11:42:46 · Evgenios Agathokleous · Lei Yu
    Journal of Forestry Research, 2022, Issue 3

    Evgenios Agathokleous · Lei Yu

    Abstract Communication plays an important role in advancing scientific fields and disciplines, defining what knowledge is made accessible to the public, and guiding policymaking and regulation by public authorities for the benefit of the environment and society. Hence, what is finally published is of great importance for scientific advancement, social development, environmental and public health, and economic agendas. In recognition of this, the goal of a researcher is to communicate research findings to the scientific community and, ultimately, to the public. However, this may often be challenging due to competition for publication space, although to a lesser extent nowadays that online-only publications have expanded. This editorial introduces six statistics-related issues in scientific writing that you should be aware of. These issues can lead to desk rejection or rejection following a peer review, but even if papers containing such issues are published, they may prevent cumulative science, undermine scientific advancement, mislead the public, and result in incorrect or weak policies and regulations. Therefore, addressing these issues from the early research stages can facilitate scientific advancement and prevent rejection of your paper.

    Keywords Journal editor · Peer review · Rejection · Science communication · Scientific writing

    Introduction

    Owing to the dedicated work of its editorial office, the diligent work of its academic editors and peer reviewers, and contributions of authors from around the world, the Journal of Forestry Research (JFR) has been transformed into a prominent forestry journal. With a 2020 CiteScore¹ of 2.8, JFR ranks 40th among 142 journals of forestry, agricultural and biological sciences, while the updated 2021 tracker value increased to 3.8 (www.scopus.com; last updated 6 March 2022; accessed 17 March 2022). As the journal raises its profile among the world's forestry journals, more submissions are expected, resulting in a decreasing percentage of manuscripts that can be accepted and published.

    ¹ https://www.elsevier.com/connect/what-is-citescore-and-why-should-you-care-about-it. Note: CiteScore is used as an index of journal ranking for illustrative purposes, and its use does not imply it is considered the best or more appropriate than any of the many other existing indexes.

    JFR is published by non-profit, China-based academic societies and institutions and is not subject to publishing policies that aim at maximizing economic profit (Agathokleous 2022). Hence, the journal publishes a specific maximum number of manuscripts annually, which means that no additional papers may be published, even if all are excellent, report cutting-edge scientific findings, or are game changing. For example, there were over 1600 submissions in 2021, of which only 7% were accepted for publication. With limited space, the number of papers that may be desk rejected (rejected by editors without being assigned to peer reviewers) is increasing. A desk rejection decision does not always have to do with the science itself or the manuscript quality; it may simply be that the paper is not considered competitive enough among other submissions, or that the journal has different publishing priorities at a given time. However, in a journal with competition for space, there are always reasons that can lead to a desk rejection, and statistics-related issues in scientific writing are among the top ones. The following is a list of issues based on ones I² have encountered frequently as an associate editor and then as an associate editor-in-chief of JFR, as well as in the framework of my editorial³ and review⁴ work in other scientific journals (Fig. 1). When such issues exist in an original manuscript, several of them are commonly observed together. However, as mentioned, these are based on our own experience (L.Y. is Deputy Editor-in-Chief of JFR) and the fields of expertise in which we actively engage in peer review, and they do not cover all statistical areas, such as mathematical modeling, computing systems like artificial neural networks, and machine learning. Moreover, we focus on statistics-related issues in scientific writing, not on technical aspects of statistical procedures themselves, such as the nature of the dataset and data distribution, or code availability, i.e., the issue of making data and the programming code of data analyses publicly available.

    ² When 'I' and 'my' appear hereafter, they indicate a personal view of the first author (E.A.). The subjective personal pronoun 'we' and the possessive 'our' refer to both authors hereafter.

    ³ Associate Editor of Forestry Research. Editorial Board Member of Science of The Total Environment (STOTEN); Current Opinion in Environmental Science & Health (COESH); Journal of Environmental Science and Health, Part A: Toxic/Hazardous Substances and Environmental Engineering; Journal of Environmental Science and Health, Part B: Pesticides, Food Contaminants, and Agricultural Wastes; Plant Stress; Climate; Sci; Frontiers in Forests and Global Change; and Water Emerging Contaminants & Nanoplastics. Guest Managing Editor and Guest Editor in several journals, including STOTEN; Agriculture, Ecosystems and Environment; Current Opinion in Toxicology; COESH; Atmosphere; and Agronomy.

    ⁴ Reviewed approximately 600 papers for 85 journals (https://publons.com/researcher/1194915/evgenios-agathokleous).

    Fig.1 Statistics-related issues in scientific writing

    Results claimed are not in line with statistical results (the issue of p values)⁵

    ⁵ Note: this does not imply that biologically important results are statistically significant results. Statistically non-significant results can be biologically or practically important, and vice versa.

    Inference is made regarding the differences between experimental conditions, whereas either there is no statistical support for the comparison or the statistical result is not in agreement with the conclusion. The latter is more prevalent. In this case, authors often claim 'marginal' differences when the p value approaches or exceeds 0.1; the worst I have observed was a p value higher than 0.2 that was considered significant. Conversely, other authors have asserted that there was no difference when the p value was approximately 0.05. The former case is more severe. Regarding the latter, "surely, God loves the .06 nearly as much as the .05" (Rosnow and Rosenthal 1989). However, it is my view that if p values are to be used, there should be some acceptable range as a reference point. For example, numbers are commonly rounded up if the digit that follows is ≥ 5. I see no reason to treat a p value in the range of 0.051–0.054 as statistically different from a p value of 0.045–0.050. A p value of approximately 0.05 is enlightening: it suggests that the findings warrant further investigation. But these are my views, and journals rarely have specific guidelines regarding the use of p values. Therefore, it remains highly subjective, resting with the editor's understanding, knowledge, and ultimately opinion regarding what he/she finds acceptable. Nevertheless, I believe most, if not all, editors would find unacceptable a claim of significance when p values approach or exceed 0.1. If statistics are to mean whatever we like, why are we producing statistics at all?
    As an independent editor, I cannot force authors to replace p values with other measures or to use them alongside complementary, more informative metrics, but I expect authors to reach conclusions based on p values in a logical and justified manner. Above all, we should remember that how p values are used defines what results are published, and thus directs science and the progress of social and environmental development. Considering the widespread, subjective, and highly personalized interpretation and use of p values, and how this can affect scientific progress (Dorey 2011; Masicampo and Lalande 2012), all biology journals, including JFR, should set precise guidelines for the interpretation and use of p values in consultation with editorial board members and statisticians.

    It should be added that the use of p values in biology has long been criticized by numerous statisticians. There is a famous quote: "scientists the world over use them, but scarcely a statistician can be found to defend them. Bayesians in particular find them ridiculous, but even the modern frequentist has little time for them." (Senn 2001). Some scientists believe that the bar for statistical significance should be raised to 0.005 or 0.001 (Johnson 2013), while others call for the retirement of statistical significance and the use of confidence intervals instead (Amrhein et al. 2019). In fact, p values can be replaced by, or used together with, other, more integrated indexes, such as effect size estimates and their intervals (e.g., Agathokleous et al. 2016), which can lead to better informed decisions (Connor 2004; Nakagawa 2004; Muff et al. 2022a, b), while Bayesian counterparts (e.g., the Bayes factor) perform better (Goodman 2008; Johnson 2013; Wiens and Nilsson 2017). p values were not meant to be the sole criterion for attributing differences and comparing magnitudes (Lew 2012; Nuzzo 2014; Agathokleous 2022; Alexander and Davis 2022). However, I do not believe that the replacement of p values will occur soon, and since they have become the backbone of biology, the so-called 'gold standard' of validity (Nuzzo 2014), they should be used correctly. More details about statistical inference and bad practices, including the problematic hybrid interpretation of statistical results between Fisher's p values and the strict Neyman–Pearson approach, can be found in the literature (Connor 2004; Goodman 2008; Lew 2012; Nuzzo 2014; Muff et al. 2022b).

    Concluding this section, we would draw attention to some final points regarding the reporting of p values, should p values be used. First, no p value should be reported as being equal to zero. It could be < 0.001 or < 0.0001, but never = 0.000. Second, reporting only p values without other information is impractical. A minimum requirement would be the simultaneous reporting of the value of the test statistic (e.g., F, t, or U). For p values over 0.05, the exact value should be stated instead of writing p > 0.05. As a point of reference, reading a widely used publication manual, such as the Publication Manual of the American Psychological Association (APA 2019), would be enlightening and helpful.

    Issues with multiple tests or comparisons

    Scientific research has become more demanding in the twenty-first century due to the increased need for multi-factorial experimental designs in some disciplines (Rillig et al. 2019, 2021). This reflects a considerable increase in statistical testing within a study. For example, ecological research is often multidimensional, including numerous variables. If one examines the association of 15 soil quality parameters with the alpha diversity of communities of microorganisms, the probability of detecting one or more p values smaller than 0.05 increases from 5% to approximately 54%! And if more than one index of alpha diversity is considered, this probability increases further. This leads to the question of how much uncertainty lies behind the results and conclusions of an array of studies. As the number of statistical tests and comparisons increases, the uncertainty and the probability of rejecting or accepting null hypotheses can increase, depending on how this was accounted for. But this is another issue that remains largely subjective and personalized, as journals rarely have specific guidelines on it.
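    The jump from 5% to roughly 54% follows from the family-wise error probability for independent tests, 1 − (1 − α)^k. A minimal Python sketch of this arithmetic (the function name is ours, for illustration):

    ```python
    def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
        """Probability of at least one false positive across k independent
        tests, each run at significance level alpha."""
        return 1.0 - (1.0 - alpha) ** k

    print(round(familywise_error_rate(1), 3))   # 0.05 for a single test
    print(round(familywise_error_rate(15), 3))  # 0.537 for 15 soil parameters
    ```

    With a second alpha-diversity index (30 tests), the same formula gives about 0.785, illustrating how quickly the risk of spurious 'discoveries' grows.
    
    
    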

    Modifications of traditional statistical testing procedures are widely applied across research fields to decrease Type I errors (i.e., rejection of a null hypothesis that is true). Perhaps the most widely used modification is the Bonferroni correction, a modification of alpha (α) obtained by dividing it by k, the number of statistical tests or comparisons. That is, for a study with 10 tests, the corrected α would be 0.005 (α = 0.05/10) if α was set at 0.05. The application of Bonferroni corrections reduces statistical power, greatly increasing Type II errors (i.e., acceptance of a null hypothesis that is false) and potentially contributing to a publication bias, which eventually can thwart scientific advancement (Nakagawa 2004). For example, researchers who find many variables to be non-significant might simply omit them from their paper; these variables are then never covered by future meta-analyses, contributing to a 'file-drawer effect'⁶ and publication bias (Nakagawa 2004; Fanelli 2010). If the accumulation of knowledge is thwarted, an entire scientific field may be suppressed (Nakagawa 2004). A further type of correction is the sequential Holm–Bonferroni method (Holm 1979), which controls the family-wise error rate while reducing statistical power to a lesser extent than the standard Bonferroni correction; however, the probability of Type II errors remains considerably high (Nakagawa 2004). Including less relevant or biologically irrelevant variables in a study unnecessarily increases the probability of Type I errors, which often results in reviewers pointing to the need for corrections such as Bonferroni (Nakagawa 2004). Based on these issues, Nakagawa proposed that "the practice of reviewers demanding Bonferroni procedures should be discouraged, (and also, researchers should play their part in carefully selecting relevant variables in their study)" (Nakagawa 2004). These are not new issues and have long been known. For example, ending the use of Bonferroni procedures and starting to report effect sizes and/or confidence intervals for effect sizes, or alternatives, was proposed two decades ago in animal behavior and behavioral ecology research (Nakagawa 2004).

    ⁶ Meaning that negative or non-significant results are permanently stored in the researchers' drawer instead of being published, thus favoring the publication of positive and easier-to-publish results.
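    The two corrections just described can be sketched in a few lines; this is an illustrative implementation with names of our choosing, assuming a simple list of p values (in practice, library routines such as those in statsmodels would be used):

    ```python
    def bonferroni_alpha(alpha: float, k: int) -> float:
        """Standard Bonferroni correction: each of k tests is judged
        against a per-test threshold of alpha / k."""
        return alpha / k

    def holm_reject(pvalues, alpha=0.05):
        """Holm's sequential (step-down) procedure: sort p values ascending
        and compare the i-th smallest against alpha / (k - i + 1); stop at
        the first failure. Less conservative than plain Bonferroni."""
        k = len(pvalues)
        order = sorted(range(k), key=lambda i: pvalues[i])
        reject = [False] * k
        for rank, i in enumerate(order):  # rank is 0-based
            if pvalues[i] <= alpha / (k - rank):
                reject[i] = True
            else:
                break  # all larger p values also fail
        return reject

    print(bonferroni_alpha(0.05, 10))        # 0.005, as in the text's example
    print(holm_reject([0.001, 0.03, 0.04]))  # [True, False, False]
    ```

    Note how Holm rejects the smallest p value at 0.05/3 but then stops at 0.03 > 0.05/2, leaving the remaining hypotheses accepted.
    
    
    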

    Some journals have specific guidelines about multiple testing or comparisons. An example is the Annals of Applied Biology, the journal of the Association of Applied Biologists, where more specific author guidelines regarding statistics have been developed and put into effect, and where statistics editors also evaluate relevant submissions (Kozak and Powers 2017; Powers and Kozak 2019; Butler 2021). This example can serve as a reference point for further development in JFR as well as in other journals. The author guidelines of the Annals of Applied Biology discourage comparisons not based on a biological hypothesis, stating: "In particular, the use of multiple comparison adjustments such as Duncan's or Tukey's is not acceptable, nor is the use of letters to denote treatments which are 'not significantly different from each other'." (https://onlinelibrary.wiley.com/page/journal/17447348/homepage/forauthors.html; accessed 19 February 2022). Instead, it has been suggested to conduct only the post hoc comparisons that are of most interest, using the value of the least significant difference (LSD) based on the relevant standard error of the difference (SED) from the analysis of variance (ANOVA) (Kozak and Powers 2017). Similarly, in unbalanced studies with an unequal number of experimental units (replicates) among experimental conditions or treatments, SED values may differ among comparisons; again, only the post hoc comparisons of most interest should be made, but LSDs and SEDs should be reported for each comparison (Kozak and Powers 2017). Another suggestion is that, where a large number of variables exists, controlling the 'false discovery rate', the expected fraction of rejected null hypotheses that are actually true, may be more appropriate than controlling the probability of even one false rejection of a null hypothesis (Nakagawa 2004).
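    One standard way to control the false discovery rate (not prescribed by the guidelines quoted here, but widely used) is the Benjamini–Hochberg step-up procedure. A minimal sketch, with an illustrative function name of ours:

    ```python
    def benjamini_hochberg(pvalues, q=0.05):
        """Benjamini-Hochberg step-up procedure: find the largest rank r
        (1-based, over p values sorted ascending) such that
        p_(r) <= (r / k) * q, and reject the r smallest p values."""
        k = len(pvalues)
        order = sorted(range(k), key=lambda i: pvalues[i])
        cutoff = 0
        for rank, i in enumerate(order, start=1):
            if pvalues[i] <= rank / k * q:
                cutoff = rank  # keep the largest qualifying rank
        reject = [False] * k
        for i in order[:cutoff]:
            reject[i] = True
        return reject

    # Five hypothetical p values: the two smallest clear their thresholds
    # (0.01 and 0.02), the rest do not.
    print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.30]))
    ```

    Unlike Bonferroni, the threshold grows with the rank, so power is lost far more slowly as the number of variables increases.
    
    
    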

    There are further options that can help with the trade-off between Type I and Type II errors. For example, the use of orthogonal or non-orthogonal linear contrasts is a good alternative, although their use is often complicated, difficult, or even impractical in terms of application, interpretation, and presentation, especially in light of the current publishing policies of many journals. In fact, based on my experience as a reviewer, editor, referee, and author of literature reviews of numerous scientific papers, the use of post hoc comparisons is in most cases incorrect and problematic, and often planned (a priori) comparisons should be made instead. In highly multi-factorial studies, the number of biologically irrelevant comparisons is also high, and many of them provide little or no useful information. This may be illustrated by a hypothetical example. A researcher studies the effect of various doses of the antibiotic tetracycline (0, 0.001, 0.01, 0.1, 1, 10, 100, 1000, 10,000 μM L−1) on saplings of a poplar clone grown in either charcoal-filtered air (i.e., air pollutants are eliminated; a clean atmosphere) or ozone-enriched air (a polluted atmosphere). However, many of the possible comparisons are biologically irrelevant. For example, it is irrational to compare the effects of 0.001 μM tetracycline L−1 on plants raised in charcoal-filtered air with the effects of concentrations of 0.01–10,000 μM tetracycline L−1 on plants in ozone-enriched air. Researchers should strive for planned comparisons wherever possible (Ruxton and Beauchamp 2008; see also Wiens and Nilsson 2017). If reviewers criticize the use of correctly applied a priori comparisons, it is important to address their comments and justify why the a priori comparisons are correct and should be retained. In a paper my colleagues and I published six years ago, contrasts were used to examine the most biologically relevant questions/comparisons (Agathokleous et al. 2016). There were three reviewers, and while endorsing the work, all had some comments on the statistics and/or the way the results were presented; had post hoc comparisons among all means been done, the reviewers would have been satisfied. In fact, one of the issues raised was that the use of different specific questions, and thus contrasts, made the interpretation of figures and results more difficult and required repeatedly returning to the questions/contrasts. The reviewers' comments helped us thoroughly revise the manuscript by completely changing the presentation of the results, including the display elements. However, this is an example where a major revision would have been a minor one if post hoc comparisons had been used. It could also have been a rejection if there had been other critical deficiencies in the paper, or if one or more reviewers had recommended rejection and the handling editor were unqualified.

    Incorrect claims of sizes of differences

    As noted previously, p values alone do not indicate variations in the size of differences among experimental conditions (Agathokleous 2022). For example, if the p values of the effects of treatments A and B compared to control C were 0.011 and 0.002, no inference should be made that treatment B had a larger effect than treatment A; yet such claims frequently occur in manuscripts submitted to journals. An inference that may be made in this case is that, if treatments A and B had no real effect, a difference from the controls of equal or larger magnitude would be observed in 1.1% and 0.2% of study repetitions, respectively, due to random error.⁷ In another example, the null hypothesis is rejected for the effects of liquid chemical treatments D and E on the mycorrhizal colonization of roots of pine seedlings grown in a cambisol soil, and the arithmetic means of treatments D and E were 50% and 10% greater than the arithmetic mean of the water-treated control. The speculation that "chemical treatments D and E significantly increased mycorrhizal colonization, and chemical D had a more pronounced⁸ effect" is inappropriate and misleading. The point is that p values say nothing about the magnitude of the effects or the differences among experimental conditions. They only indicate the probability of a finding similar to or more extreme than the one obtained in the study, given that the null hypothesis is true and the assumptions underlying the analysis hold to some extent (Butler 2021). A practice I often observe in manuscript submissions is drawing conclusions about effect size based only on p values, or even on differences in arithmetic means, such as denoting differences in treatment effects or ranking the susceptibility/tolerance of different organisms or groups of organisms (Agathokleous and Saitanis 2020). Such a practice is not only harmful for the progress of science but also misleading, and it thus has societal implications (Agathokleous and Saitanis 2020). Whenever inference needs to be made about the size of differences between experimental conditions, p values are insufficient. In fact, statistical significance or non-significance does not translate to biological importance (Ziliak and McCloskey 2008; Butler 2021), but effect sizes and their improving indexes can be used for biological or practical importance (Agathokleous et al. 2016). There is a variety of effect size indexes, each with its own characteristics (Sullivan and Feinn 2012; Solla et al. 2018). Analysis of these indexes is beyond the scope of this paper, but there are various user-friendly software packages, operating online or offline, for the estimation of effect sizes as well as their improving indexes (e.g., Lenhard and Lenhard 2016; Agathokleous and Saitanis 2020; https://lbecker.uccs.edu/; https://goodcalculators.com/effect-size-calculator/; https://effect-size-calculator.herokuapp.com/). The availability of such computational tools makes calculation easy, even for those who might dislike making such calculations; the only task is to input the required data.

    ⁷ Note: the error rate is tightly linked with the p value (Sellke et al. 2001).

    ⁸ Any synonym may be used.
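    As an illustration of how simple such a calculation is, here is a minimal sketch of one common effect size index, Cohen's d for two independent groups (the function name and data are ours; in practice one of the tools listed above would typically be used):

    ```python
    from math import sqrt

    def cohens_d(group_a, group_b):
        """Cohen's d: difference in means scaled by the pooled standard
        deviation, so the effect is expressed in SD units rather than
        depending on the measurement scale."""
        na, nb = len(group_a), len(group_b)
        ma = sum(group_a) / na
        mb = sum(group_b) / nb
        va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
        vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
        pooled_sd = sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
        return (ma - mb) / pooled_sd

    # Hypothetical seedling-height data (arbitrary units):
    treatment = [14, 16, 15, 17, 13]
    control = [10, 12, 11, 13, 9]
    print(round(cohens_d(treatment, control), 2))  # 2.53, a very large effect
    ```

    Reported alongside a p value (and ideally a confidence interval), such an index answers the question the p value cannot: how large the difference actually is.
    
    
    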

    Redundant statistics

    A problematic practice is conducting redundant statistics. Although it might seem surprising, this problem still exists in manuscript submissions today. For example, in one study of the single and combined effects of two factors, each with two levels, the researchers carried out a two-way analysis of variance but also conducted independent t tests between experimental conditions within each factor. As a researcher, you may want to ensure that your manuscript contains no redundant statistics. Ask yourself whether a statistical test you have already conducted can answer the questions your new statistical test is going to answer. If the answer is yes, then you should not conduct the additional test.

    A situation I have observed many times is reviewers asking authors to conduct different tests to trace more significant results (and editors passively passing such reviewer comments on to authors). Such a practice reflects fishing for significant results: the more statistical tests/comparisons one runs, the more significant results will likely be found. As a basic principle, no changes to the statistics (by adding further analyses) should be made without a clear purpose, such as correcting a problematic or incorrect methodology. Conducting different statistical tests on the same question also constitutes redundancy, even if not all the results are reported. As mentioned before, as long as you can justify why you did what you did, the chances that you will be asked to change your statistics are lower. Even if you are asked, it does not mean you must make changes, but doing so might enhance the chances of having your paper accepted.

    Mixing up association with causation

    It might be difficult to believe, but mixing up association with causation occurs frequently. Association is a relationship between two variables: a variation X in the values of one variable is associated with a variation Y in the values of another. Association can represent causation, but in many cases it does not. If your study does not account for causation, no inference should be made to claim or imply causation. For example, you could state that "factor A was negatively associated with factor B," but you should not state that "factor B decreased due to factor A." If you want to claim causation based on association, you need to distinguish between causal and non-causal associations (Stovitz et al. 2019; Kukull 2020). Otherwise, if your study does not support causation, be careful not to state or imply causation.
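    The point can be demonstrated with a small simulation: a hidden confounder drives two variables that have no causal link to each other, yet they correlate strongly (all variable names and numbers here are hypothetical, for illustration only):

    ```python
    import random

    def pearson_r(xs, ys):
        """Pearson correlation coefficient computed from scratch."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    random.seed(42)
    # A hidden confounder (say, site fertility) drives both A and B;
    # neither causes the other, yet they are strongly associated.
    fertility = [random.gauss(0, 1) for _ in range(500)]
    factor_a = [f + random.gauss(0, 0.5) for f in fertility]
    factor_b = [f + random.gauss(0, 0.5) for f in fertility]
    print(round(pearson_r(factor_a, factor_b), 2))  # strong positive correlation
    ```

    A correct statement about these data would be "factor A was positively associated with factor B"; "factor A increased factor B" would be false by construction.
    
    
    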

    Lack of sufficient information

    Insufficient statistical information is among the most important factors that may determine the fate of a manuscript submission, yet insufficient information about statistics appears widely in the literature (Kramer et al. 2016). As noted, it is often about justifying what one did in the scientific process. If what you did is correct, it cannot be rejected. Even if there are cases where alternatives might be advantageous, the question for an editor would be whether a potential change in the statistical procedures would be beneficial (beneficial does not mean more 'statistical significances'). What would such a change add to the scientific content of the paper? Is such a change really needed? Would such a change be harmful, such as by violating basic principles of statistics, fishing for significance, or favoring Type I errors over Type II errors or vice versa? These are some of the questions, among many, that an editor must answer when performing evaluations or re-assessments following peer review. The point is that if you explain adequately why you acted as you did, and perhaps why you did not do something else,⁹ you facilitate the work of editors and can prevent possibly unfair or incorrect criticisms by reviewers, thus enhancing the chances of a smooth peer review process. However, if the information about the experimental design and/or data analysis is insufficient to evaluate the robustness and validity of the study and does not permit its replication, a desk rejection is very likely. Here, I draw attention to some issues I encounter frequently; those with a keen interest in more detailed explanations can refer to the guidelines of the Annals of Applied Biology (Kozak and Powers 2017; Powers and Kozak 2019; Butler 2021) or Science (https://www.science.org/content/page/science-journals-editorial-policies#statistical-analysis).

    ⁹ Excessive information is discouraged. There is no need to explain why you did not use each of the candidate alternative tests/procedures. This should be justified only in cases where what you did may be disputed, such as when it comes to trade-offs between Type I and Type II errors.

    The first issues that immediately come to mind are a lack of clarification of sample sizes, experimental and statistical units, and measures of dispersion around the mean, which should be given for each type of analysis. Without this information, the validity of the study cannot be assessed and the study cannot be replicated, which are minimum requirements of scientific research. The meaning of 'replicate' is often unclear, or what is claimed to be a replicate is not valid. The author guidelines of the Annals of Applied Biology state that "Particular care should be taken to explain what is meant by a replicate; only biological replication from independent units can be used to assess variation within and between treatments. Authors should consult a statistician if they require assistance in making inferences from designed experiments" (https://onlinelibrary.wiley.com/page/journal/17447348/homepage/forauthors.html; accessed 19 February 2022). Special attention should be given to the correct experimental unit, and thus to the real replicates. Real replicates and the issue of pseudoreplication have been discussed extensively in the literature (Hurlbert 1984, 2004, 2013; Hawkins 1986; Potvin and Tardif 1988; Heffner et al. 1996; Oksanen 2001; Cottenie and De Meester 2003). Numerous reviewers recommend that a paper be rejected because the study was based on pseudoreplicates rather than real ones. In some cases, authors do not identify what the replicates were. In other instances, however, a study may be acceptable and equally important even if there were no real replicates, assuming there was still statistical support. For these reasons, the experimental and statistical units should be properly identified, and, where real replicates did not exist in a study, it should be clarified why the study is still valid and important. Finally, reporting arithmetic means without any measure of dispersion around the mean is unscientific; arithmetic means by themselves are of little, if any, value either biologically or statistically. Hence, these are the first issues we suggest taking care to explain explicitly.

    A frequently occurring issue is a lack of clarification of whether data transformation was applied. This is important information and should be made clear, especially for statistical tools that lead to false conclusions if the data were not transformed, as is often the case in multivariate statistics.

    Another recurring issue is the lack of specification of the type of statistical model applied and/or the type of effects/factors, and care should be taken to specify these. Failure to conduct a dependent-samples analysis when the experimental design requires one also occurs, and it is sometimes unclear whether a study was based upon a dependent-samples design at all. Hence, it is important to clarify whether the design involves dependent samples.

    The failure to clarify which post hoc test was applied is another well-known issue (Ruxton and Beauchamp 2008). Therefore, if a post hoc test is applied, it is important to identify the test. [As noted in Sect. 2, specific guidelines regarding p values, α values, and multiple testing and comparisons are difficult to find.] In the absence of specific guidelines, the peer review process and acceptance of a manuscript for publication depend on academic editors. Independent academic editors, however, should remain objective and not let their personal opinions about what is correct or appropriate bias their evaluation. To help the editor and enhance your publication chances, it is important to justify why you did or did not apply an α correction, especially because the selection and use of an α correction is multi-dimensional and depends on a series of factors (Armstrong 2014).

    Repeated-measures analyses can provide more biological information in several cases (Powers and Kozak 2019), which is often true for research papers submitted to JFR. However, I frequently encounter (across journals) papers where repeated measures (or a dependent-samples analysis) could have been applied to provide comprehensive biological information but was not, and/or where it is unclear whether it was applied.

    Finally, there is no harm in clarifying whether your hypothesis testing was one- or two-tailed. Although most journals rarely request this clarification, some do (e.g., Science; https://www.science.org/content/page/science-journals-editorial-policies#statistical-analysis). Commonly, a two-tailed hypothesis test is the case; however, if testing was one-tailed, it is important to ensure that the p values reported, if you used p values, are the correct ones. That is, in many cases the p values should be divided by two, because most traditional data analysis software provides results for two-tailed hypothesis testing.
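    The conversion itself is trivial, but the direction of the observed effect matters; a sketch assuming a symmetric test statistic (function name ours, for illustration):

    ```python
    def one_tailed_p(two_tailed_p, effect_in_predicted_direction=True):
        """Convert a two-tailed p value to a one-tailed p value for a
        symmetric test statistic. If the observed effect lies opposite
        to the predicted direction, the one-tailed p is 1 - p/2, not p/2."""
        half = two_tailed_p / 2
        return half if effect_in_predicted_direction else 1 - half

    print(one_tailed_p(0.08))         # 0.04: effect in the predicted direction
    print(one_tailed_p(0.08, False))  # effect opposite to prediction: near 1
    ```

    This is also why a one-tailed test must be specified before seeing the data; halving p values after the fact is another form of fishing for significance.
    
    
    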

    Conclusion

    The purpose of this paper is not to create more questions than answers. However, as academic editors, we can raise authors' awareness of these issues, thus helping them select statistical tools properly from the earliest research stages. Authors cannot be forced to follow specific protocols, but we can provide a basis for authors to consider and follow in order to make a correct selection of statistical procedures. No editor would reject a paper whose procedures are justified simply because their opinions differ. Justifying the procedures that you follow shows awareness of the issues and permits a proper evaluation of the study and the paper itself. We believe that any editor would appreciate a careful selection of tests or comparisons that considers how Type I and Type II errors are affected.

    It should be mentioned that this editorial should not be interpreted as suggesting that authors simply satisfy the requirements of editors and journals, although publishing is often about compromise. Authors obtain funding, conduct research, and write up their results. This effort is often supported by governments (i.e., taxpayer funded), and authors should always bear in mind that the best choice is the one that contributes to cumulative knowledge and society overall, not one that facilitates the profit agenda of a publisher. You are free to follow or not follow any editor's or journal's guidelines. The ultimate decision should be based on what is ethically correct and fairer with respect to cumulative science and society, not on what would give a pass to a specific journal. If reviewers require the exclusion of specific data because they are not 'statistically significant', or for any other reason, you should ask yourself whether this is honest and ethically correct and what the implications for cumulative science and society overall might be. If you disagree with a particular guideline and can provide robust scientific justification, you can always attempt a rebuttal, even if such attempts are rarely successful. We hope you find this information useful. Editors look for good reasons to accept papers (Binkley et al. 2020) rather than searching for reasons to reject them, and the methodology behind the statistics, beginning with the experimental design, is often a core determinant. Therefore, give editors reasons for a peer review that eventually accepts, rather than rejects, your paper.

    Acknowledgements The authors are grateful to Dr. Ricardo Antunes Azevedo, Editor-in-Chief of the Annals of Applied Biology and Professor at the Departamento de Genética, Escola Superior de Agricultura "Luiz de Queiroz"/Universidade de São Paulo (ESALQ/USP), Brazil, for sharing information about the editorial policies of the Annals of Applied Biology. Information about the Chinese Journal of Forestry Research was provided by the journal's office.

    Declarations

    Conflict of interest Any commercial name cited in this manuscript (e.g., of a journal or software) is not cited for advertisement, and the authors neither encourage nor discourage the use of the associated services. Readers should make their own search and select the products or services that suit them. The views presented herein are those of the authors and do not represent the views of the editorial board or the editorial office of the journal, the publisher, the authors' institutions, or the funding bodies that supported the authors. The authors declare that there are no conflicts of interest.

    Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
