Talk:1838: Machine Learning


Apparently, there is the issue of people "training" intelligent systems based on their gut feeling: say, for example, a system should determine whether or not a person should be promoted to fill a currently vacant business position. If the system is taught by the humans currently in charge of that very decision, and it weakens the candidates the humans would decline and strengthens the ones they wouldn't, all these people might do is feed the machine their own irrational biases. Then, down the road, some candidate may be declined because "computer says so". One could argue that this, if it happens, is just bad usage and no inherent issue of machine learning itself, so I'm not sure if this thought can be connected to the comic. In my head, it's close to "stirring the pile until the answers look right". What do you people think? 162.158.88.2 05:39, 17 May 2017 (UTC)

It's a good point but I don't think it's relevant to the comic. 141.101.107.252 13:55, 17 May 2017 (UTC)

Up the creek *with* a paddle. 162.158.111.121 07:52, 17 May 2017 (UTC)

It's a compost pile! Stir it and keep it moist until something useful comes out. 162.158.75.64 11:40, 17 May 2017 (UTC)

Actually I don't think the paddle has anything to do with canoes - paddles like that are often used when stirring large quantities. In Louisiana it's called a crawfish or gumbo paddle.

I think the entire paragraph that goes "One of the most popular paradigms of..." needs to be cleaned up to make it human readable. Nialpxe (talk) 12:09, 17 May 2017 (UTC)

The comment that SVMs would be a better paradigm, rather than neural networks, is kind of wrong. Anyone who's worked with neural networks knows they're still essentially a linear algebra problem, just with nonlinear activation functions. Play around with tensorflow (it's fun and educational!) and you'll find most of the linear algebra isn't abstracted away as it might be in Keras, SkLearn or Caret (R). That being said, interpretability is absolutely a problem with these complex models. This is partly because the world doesn't like conforming to the nice modernist notion of a sensible theory (i.e. one that can be reduced to a nice linear relationship), but even things like L1 regularisation often leave you wondering "but how does it all fit together?". On the other hand, while methods like SVMs still have a bit of machine learning magic in how the hyperplane divides the feature space (i.e. it is derived empirically, not theoretically), the results are typically human-interpretable, for a given definition of interpretable. It's no y = wx + b, but it's definitely possible. The same goes for most methods short of very deep neural nets with millions of parameters. Most machine learning experts I've met have a pretty good idea of what is going on in the simpler models, such as CARTs, SVMs, boosted models etc. The only reason neural nets are black-box-y is that there's a huge amount going on inside them, and it's too much effort to do more than analyse the outputs! 172.68.141.142 22:43, 17 May 2017 (UTC)
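
To illustrate the "it's still linear algebra" point above, here's a minimal sketch in plain NumPy (not TensorFlow; the shapes and names are made up for illustration) of a tiny two-layer network: each layer is just a matrix-vector product plus a bias, wrapped in a nonlinear activation.

    import numpy as np

    def relu(z):
        # elementwise nonlinearity; without it the stacked layers
        # would collapse into a single linear map
        return np.maximum(0.0, z)

    def dense(x, W, b):
        # one fully connected layer: relu(W x + b)
        return relu(W @ x + b)

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                                 # input vector
    W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)   # hidden layer weights
    W2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)   # output layer weights

    h = dense(x, W1, b1)    # hidden activations
    y = W2 @ h + b2         # raw outputs, no activation
    print(y)

Interpretability then becomes the question of what the numbers in W1 and W2 collectively mean, which already isn't obvious even at this toy scale.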


Does anyone else think the topic may have been influenced by Google's recently featured (May 17) article about machine learning? [1] --162.158.79.35 12:17, 17 May 2017 (UTC)

Maybe one day bots will learn to create entire explanations for xkcd. 141.101.99.179 12:38, 17 May 2017 (UTC)

Good, then maybe we won't have over-thought explanations anymore.
"That was a joke, haha" Elektrizikekswerk (talk) 07:36, 18 May 2017 (UTC)

The fuck is "Pinball"? 162.158.122.66 03:59, 19 May 2017 (UTC)