
What does “Scientists rise up against statistical significance” mean? (Comment in Nature)


The Comment in Nature, Scientists rise up against statistical significance, begins with:




Valentin Amrhein, Sander Greenland, Blake McShane and more than 800 signatories call for an end to hyped claims and the dismissal of possibly crucial effects.




and later contains statements like:




Again, we are not advocating a ban on P values, confidence intervals or other statistical measures — only that we should not treat them categorically. This includes dichotomization as statistically significant or not, as well as categorization based on other statistical measures such as Bayes factors.




I think I can grasp that the image below does not say that the two studies disagree because one "rules out" no effect while the other does not. But the article seems to go into much more depth than I can understand.



Towards the end there seems to be a summary in four points. Is it possible to summarize these in even simpler terms for those of us who read statistics rather than write it?




When talking about compatibility intervals, bear in mind four things.



  • First, just because the interval gives the values most compatible with the data, given the assumptions, it doesn’t mean values outside it are incompatible; they are just less compatible...


  • Second, not all values inside are equally compatible with the data, given the assumptions...


  • Third, like the 0.05 threshold from which it came, the default 95% used to compute intervals is itself an arbitrary convention...


  • Last, and most important of all, be humble: compatibility assessments hinge on the correctness of the statistical assumptions used to compute the interval...





Nature: Scientists rise up against statistical significance


























Tags: statistical-significance p-value bias






      asked 1 hour ago









      uhohuhoh

1 Answer


















I'll try.

1. The confidence interval (which the authors rename the compatibility interval) shows the values of the parameter that are most compatible with the data. But that doesn't mean values outside the interval are absolutely incompatible with the data; they are just less compatible.

2. Values near the middle of the confidence (compatibility) interval are more compatible with the data than values near the ends of the interval.

3. 95% is just a convention. You can compute 90%, 99%, or intervals at any other level.

4. Confidence/compatibility intervals are only helpful if the experiment was done properly, the analysis followed a preset plan, and the data conform to the assumptions of the analysis methods. If you've got bad data analyzed badly, the compatibility interval is not meaningful or helpful.
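Point 3 is easy to demonstrate: the level is just a parameter of the calculation. Here is a minimal sketch (the sample data are made up for illustration, and a normal approximation is used rather than the t-distribution, which would give a slightly wider interval for a sample this small) that computes intervals for the same mean at three conventional levels:

```python
from statistics import NormalDist, mean, stdev

# Hypothetical sample -- illustration only, not data from the article.
data = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 4.9, 5.1, 5.0]
n = len(data)
m = mean(data)
se = stdev(data) / n ** 0.5  # standard error of the mean

# The 95% level is only a convention; any level yields a valid interval.
for level in (0.90, 0.95, 0.99):
    z = NormalDist().inv_cdf(0.5 + level / 2)  # two-sided critical value
    lo, hi = m - z * se, m + z * se
    print(f"{level:.0%} interval: ({lo:.3f}, {hi:.3f})")
```

The three intervals are nested around the same estimate; widening the level from 90% to 99% simply trades precision for compatibility with more parameter values.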










                answered 1 hour ago









Harvey Motulsky
