Is it correct to say that neural networks are an alternative way of performing Maximum Likelihood Estimation? If not, why?




We often say that minimizing the (negative) cross-entropy error is the same as maximizing the likelihood. So can we say that neural networks are just an alternative way of performing Maximum Likelihood Estimation? If not, why?
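
For concreteness, the equivalence referred to here can be written out for binary classification (an assumed example, with labels $y_i \in \{0, 1\}$ and predicted probabilities $\hat{p}_i(\theta)$): the cross-entropy loss is exactly the negative log-likelihood of a Bernoulli model,

$$\mathcal{L}(\theta) = -\sum_{i=1}^{n} \left[ y_i \log \hat{p}_i(\theta) + (1 - y_i) \log\bigl(1 - \hat{p}_i(\theta)\bigr) \right] = -\log \prod_{i=1}^{n} \hat{p}_i(\theta)^{y_i} \bigl(1 - \hat{p}_i(\theta)\bigr)^{1 - y_i},$$

so minimizing $\mathcal{L}$ over the parameters $\theta$ is the same as maximizing this likelihood.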










  • Possible duplicate of Can we use MLE to estimate Neural Network weights? – Sycorax, 3 hours ago
neural-networks maximum-likelihood






asked 5 hours ago by aca06




2 Answers
Answer by Tim (score 3), answered 2 hours ago:

In abstract terms, neural networks are models, or if you prefer, functions with unknown parameters, where we try to learn the parameters by minimizing a loss function (not just cross-entropy; there are many other possibilities). In general, minimizing the loss is in most cases equivalent to maximizing some likelihood function, but as discussed in this thread, it's not that simple.



You cannot say that they are equivalent, because minimizing a loss (or maximizing a likelihood) is a method of finding the parameters, while the neural network is the function defined in terms of those parameters.
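
To make the distinction concrete, here is a minimal sketch in NumPy (the tiny one-hidden-layer network, the toy data, and all names are illustrative assumptions, not anything from this thread): the network is just the function $f(x; \theta)$, while "minimize the binary cross-entropy" and "maximize the Bernoulli log-likelihood" are the same fitting criterion applied to it.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data (illustrative): 100 points, 2 features, binary labels.
    X = rng.normal(size=(100, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)

    # The neural network is the *function*: a tiny one-hidden-layer model f(x; theta).
    def forward(theta, X):
        W1, b1, w2, b2 = theta
        h = np.tanh(X @ W1 + b1)                  # hidden layer
        logits = h @ w2 + b2                      # output logit
        return 1.0 / (1.0 + np.exp(-logits))      # predicted P(y = 1 | x)

    # Two ways of writing the *criterion* used to choose theta:
    def cross_entropy(theta, X, y):
        p = forward(theta, X)
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    def log_likelihood(theta, X, y):
        p = forward(theta, X)
        return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    theta = [rng.normal(size=(2, 5)), np.zeros(5), rng.normal(size=5), 0.0]

    # Identical up to sign and a 1/n factor: minimizing one maximizes the other.
    print(cross_entropy(theta, X, y))
    print(-log_likelihood(theta, X, y) / len(y))

Whatever optimizer then drives that criterion down, the network itself remains the model; the criterion (equivalently, the likelihood) is what defines the estimation method.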






  • I'm trying to parse the distinction that you draw in the second paragraph. If I understand correctly, you would approve of a statement such as "My neural network model maximizes a certain log-likelihood" but not the statement "Neural networks and maximum likelihood estimators are the same concept." Is this a fair assessment? – Sycorax, 2 hours ago

  • @Sycorax Yes, that is correct. If it is unclear and you have an idea for better phrasing, feel free to suggest an edit. – Tim, 2 hours ago

  • What if, instead, we compare gradient descent and MLE? It seems to me that they are just two methods for finding the best parameters. – aca06, 2 hours ago

  • @aca06 Gradient descent is an optimization algorithm; MLE is a method of estimating parameters. You can use gradient descent to find the minimum of the negative likelihood function (or gradient ascent to maximize the likelihood). – Tim, 1 hour ago
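
Following up on that last comment, here is a tiny sketch (an assumed example, not from the thread) that separates the two roles: MLE is the estimation principle (pick the parameter minimizing the negative log-likelihood), while gradient descent is merely one optimizer you can point at that objective.

    import numpy as np

    # Observed coin flips (illustrative data); the closed-form MLE is simply y.mean().
    y = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1], dtype=float)

    # Estimation principle (MLE): minimize the negative log-likelihood of a Bernoulli model.
    # Parameterize p through a logit so the optimization is unconstrained.
    def neg_log_likelihood(logit):
        p = 1.0 / (1.0 + np.exp(-logit))
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

    # Optimization algorithm (gradient descent): just one way to carry out that minimization.
    logit, lr = 0.0, 0.1
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-logit))
        grad = np.sum(p - y)                      # d(NLL)/d(logit) for the Bernoulli model
        logit -= lr * grad

    print(1.0 / (1.0 + np.exp(-logit)))           # ~0.7, matching the closed-form MLE y.mean()
    print(neg_log_likelihood(logit))              # the minimized objective value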
Answer by Cliff AB (score 0), answered 8 mins ago:

These are fairly orthogonal topics.



Neural networks are a type of model with a very large number of parameters. Maximum Likelihood Estimation is a very common method for estimating the parameters of a given model from data. Typically, a model allows you to compute a likelihood function from the model, the data, and the parameter values. Since we don't know what the actual parameter values are, one way of estimating them is to use the values that maximize that likelihood. Neural networks are our model; maximum likelihood estimation is one method for estimating the parameters of our model.



One slightly technical note: often, Maximum Likelihood Estimation is not exactly what is used in neural networks, because many of the regularization methods in common use mean we are not actually maximizing a likelihood function. These include:



(1) Penalized maximum likelihood. This one is a bit of a cop-out, as it doesn't take much effort to think of a penalized likelihood as just a different likelihood (i.e., one with priors) that one is maximizing; see the sketch after this list.



(2) Random dropout. Especially in a lot of the newer architectures, parameter values are randomly set to 0 during training. This procedure is more clearly outside the realm of maximum likelihood estimation.



(3) Early stopping. It's not the most popular method, but one way to prevent overfitting is simply to stop the optimization algorithm before it converges. Again, this is technically not maximum likelihood estimation; it's really just an ad hoc solution to overfitting.
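
As a concrete version of point (1), here is a minimal sketch (the linear-Gaussian setup, the data, and the penalty strength are all illustrative assumptions): adding an L2 penalty to the negative log-likelihood is the same as doing MAP estimation under a Gaussian prior on the weights, so the resulting estimate is no longer the one that maximizes the likelihood.

    import numpy as np

    rng = np.random.default_rng(1)

    # Illustrative linear-Gaussian data: y = X @ w_true + noise.
    X = rng.normal(size=(50, 3))
    w_true = np.array([1.0, -2.0, 0.5])
    y = X @ w_true + rng.normal(scale=0.5, size=50)

    lam = 5.0  # assumed penalty strength (equivalently, the precision ratio of a Gaussian prior)

    # Plain maximum likelihood for this model is ordinary least squares.
    w_mle = np.linalg.solve(X.T @ X, X.T @ y)

    # Penalized "maximum likelihood" (ridge): argmin ||y - X w||^2 + lam * ||w||^2.
    # This is the MAP estimate under w ~ N(0, (sigma^2 / lam) * I), not the MLE.
    w_penalized = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

    print(w_mle)         # the unpenalized maximum-likelihood estimate
    print(w_penalized)   # shrunk toward zero: it no longer maximizes the likelihood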





