Talk:1838: Machine Learning
The comment that SVMs would be a better paradigm, rather than neural networks, is kind of wrong. Anyone who's worked with neural networks knows they're still essentially a linear algebra problem, just with nonlinear activation functions. Play around with TensorFlow (it's fun and educational!) and you'll find most of the linear algebra isn't abstracted away as it might be in Keras, scikit-learn or caret (R). That being said, interpretability is absolutely a problem with these complex models. This is partly because the world doesn't like conforming to the nice modernist notion of a sensible theory (i.e. one that can be reduced to a nice linear relationship), but even things like L1 regularisation often leave you wondering "but how does it all fit together?". On the other hand, while methods like SVMs still have a bit of machine-learning magic in how the hyperplane divides the hyperspace (i.e. the value is derived empirically, not theoretically), the results are typically human-interpretable, for a given definition of interpretable. It's no y = wx + b, but it's definitely possible. The same goes for most methods short of very deep neural nets with millions of parameters. Most machine learning experts I've met have a pretty good idea of what is going on in the simpler models, such as CARTs, SVMs, boosted models, etc. The only reason neural nets are blackbox-y is that there's a huge amount going on inside them, and it's too much effort to do more than analyse outputs! [[Special:Contributions/172.68.141.142|172.68.141.142]] 22:43, 17 May 2017 (UTC)
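To make the "still linear algebra, just with nonlinear activations" point above concrete, here is a minimal NumPy sketch of a two-layer forward pass; the dimensions and names like <code>hidden_dim</code> and <code>relu</code> are made up for illustration, not taken from TensorFlow or Keras:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    # Elementwise nonlinearity; everything else below is plain linear algebra.
    return np.maximum(z, 0.0)

# Illustrative layer sizes (invented for this example).
in_dim, hidden_dim, out_dim = 4, 8, 2

W1 = rng.standard_normal((hidden_dim, in_dim))   # first-layer weights
b1 = np.zeros(hidden_dim)                        # first-layer bias
W2 = rng.standard_normal((out_dim, hidden_dim))  # second-layer weights
b2 = np.zeros(out_dim)

x = rng.standard_normal(in_dim)                  # one input vector

hidden = relu(W1 @ x + b1)  # matrix-vector product, add bias, apply nonlinearity
y = W2 @ hidden + b2        # another matrix-vector product: it's all Wx + b
print(y)
</syntaxhighlight>

Frameworks like TensorFlow mainly add automatic differentiation and accelerated execution on top of exactly these matrix operations.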
Does anyone else think the topic may have been influenced by Google's recent (May 17) featured article about machine learning? [https://www.google.com/intl/en/about/main/gender-equality-films/]
:: I lovingly think of this site as "Over-Explain XKCD" [[Special:Contributions/172.68.54.112|172.68.54.112]] 17:44, 20 May 2017 (UTC)
The fuck is "{{w|Pinball}}"? [[Special:Contributions/162.158.122.66|162.158.122.66]] 03:59, 19 May 2017 (UTC)
On the topic of 'Stirring', I'm not sure why it's being associated with neural networks. It's a common thing in machine learning to randomize starting conditions to avoid local minima. This does exist in neural networks, as edge weights are typically randomized, but it's also the first step in many different algorithms, such as k-means, where the initial centroid locations are randomized, or decision trees, where randomized ensembles (random forests) are sometimes used. [[Special:Contributions/173.245.50.186|173.245.50.186]] 13:18, 19 May 2017 (UTC) sbendl
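As a rough illustration of 'stirring' as random restarts, here is a sketch using scikit-learn's <code>KMeans</code>; the dataset and seed values are invented for the example. With a single random initialisation per run, different seeds can settle into different local minima, and the usual fix is to re-run with fresh starts and keep the best result:

<syntaxhighlight lang="python">
# Random restarts ("stirring") with k-means: a single random initialisation
# (n_init=1) can get stuck in a poor local minimum, so we re-run with several
# seeds and keep the best (lowest-inertia) result.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)

inertias = []
for seed in range(5):
    km = KMeans(n_clusters=4, init='random', n_init=1, random_state=seed).fit(X)
    inertias.append(km.inertia_)  # within-cluster sum of squares for this run

print(inertias)       # runs can differ: some starts land in worse local minima
print(min(inertias))  # "stirring": keep the best of the random restarts
</syntaxhighlight>

Scikit-learn's default <code>n_init</code> performs several such restarts internally for exactly this reason.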
====Fixing the explanation====
Right now, the explanation has two parts, one that is simply trying to explain it for the casual reader, and another that goes into the details of machine/deep learning, linear algebra, neural networks etc. (I almost forgot composting!) The way the two parts are jumbled together makes no sense. Perhaps having a simple initial explanation, with subsections for more detailed explanations of individual topics relevant to the comic, would fix the mess. [[User:Nialpxe|Nialpxe]] ([[User talk:Nialpxe|talk]]) 14:08, 19 May 2017 (UTC)