Talk:2429: Exposure Models

 

Latest revision as of 13:13, 7 April 2021

Is it worth making a note of the art error in the third panel, where the chair back has disappeared? 108.162.237.8 03:07, 25 February 2021 (UTC)

Someone did it. Fabian42 (talk) 09:06, 25 February 2021 (UTC)

I'm not ashamed to say that a good portion of the Bash and Google Sheets knowledge I have today comes from creating a Corona spreadsheet and its automatic filling script: https://docs.google.com/spreadsheets/d/1uDTghO_ZYBs5nfs2kDc0Ms6e9bbx7clx_QgkWii7OMY and https://pastebin.com/uHzzMeac Fabian42 (talk) 09:06, 25 February 2021 (UTC)

Explained the joke (I think?)

I wrote that the joke is that he was so obsessed with the charts that it became a self-fulfilling prophecy. Please correct me if I'm wrong. Hiihaveanaccount (talk) 15:06, 25 February 2021 (UTC)

I think it hinges on the two possible meanings of his first sentence. One interpretation is that he's building the model, with the goal being that the model, once ready, will help him limit his risk. The other one would be that the making itself is what helps him limit his risk because it forces him to stay at home. In the second case, the quality of the eventual result doesn't matter that much and it's more about having something to do instead of getting bored while sitting at home. Bischoff (talk) 15:50, 25 February 2021 (UTC)

Strange that Randall is apparently debugging a hand-built model when machine learning models have passed the Turing test and GPT-Neo was recently open-sourced. 162.158.63.170 21:54, 25 February 2021 (UTC)

What's with the meta-model comment? I don't get it. 172.69.170.120 00:44, 26 February 2021 (UTC)

Well, it's just a guess, but using machine learning models to predict and design the behaviors of machine learning models would, in the extreme, make a hyperintelligent system, no? A big part of being a software developer is finding ways to get the computer to do what you would previously do yourself, which can mean getting more and more meta as a habit. Seems similar to the comic about the Tower of Babel, to me: touching on research towards hyperintelligence (and current events stemming from use of machine learning) without saying too much outright. 162.158.63.118 00:57, 26 February 2021 (UTC)
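To illustrate the "meta" point above, here is a minimal sketch (hypothetical code, pure Python, all names and numbers invented for illustration) of an "outer" search loop that tunes a parameter of an "inner" model, automating a step a human modeler would otherwise do by hand, one level of meta at a time:

```python
def inner_model(slope, x):
    """A trivially simple model: predict y = slope * x."""
    return slope * x

def inner_loss(slope, data):
    """Mean squared error of the inner model on (x, y) pairs."""
    return sum((inner_model(slope, x) - y) ** 2 for x, y in data) / len(data)

def outer_search(data, candidates):
    """The 'meta' layer: try candidate slopes and keep the best,
    doing automatically what a human might do by trial and error."""
    return min(candidates, key=lambda s: inner_loss(s, data))

data = [(1, 2.1), (2, 3.9), (3, 6.2)]           # roughly y = 2x
best = outer_search(data, [s / 10 for s in range(0, 41)])
print(best)  # a slope near 2.0
```

Stacking more such layers (a model tuning the tuner, and so on) is the kind of recursion the comment gestures at.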


Note that "alignment" makes it sound as if the AIs would end up being evil. They wouldn't be evil. They would just be fulfilling their purpose, ignoring anything that isn't in their program. So it's kind of dangerous if we don't train the machine to be careful and not kill someone, just because we don't know how it could do it ... -- Hkmaly (talk) 02:31, 26 February 2021 (UTC)

Nah, it has more to do with how automatically pursuing goals can discover weird approaches that nobody expects. But I guess that's what you're saying. It's just hard to rigorously define "be careful". Somebody removed all the information about machine learning from the article. 162.158.63.188 14:17, 26 February 2021 (UTC)
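The "weird approaches nobody expects" point can be shown with a toy example (hypothetical actions and made-up scores): an optimizer that greedily maximizes a proxy objective will happily pick an unintended action, because the proxy omits side effects the designer cares about.

```python
actions = {
    # action: (proxy score = reported cases prevented, hidden real-world harm)
    "promote masks":    (50, 0),
    "fund vaccination": (80, 0),
    "stop all testing": (100, 99),  # zero *reported* cases, huge actual harm
}

def naive_optimizer(actions):
    """Greedily maximizes the proxy score, blind to the hidden-harm column."""
    return max(actions, key=lambda a: actions[a][0])

print(naive_optimizer(actions))  # picks "stop all testing"
```

Nothing here is "evil"; the optimizer is just fulfilling its stated purpose, which is exactly why rigorously defining "be careful" is hard.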
It wasn't me, but I can see why. There's no sign that any machine learning was employed. The explanation even stated "This might be the first time machine learning has been mentioned" (not sure that's right), but that sentence was itself the first obvious mention of machine learning. A model can just be a simulation (entirely configured by its human creator), and that seems far more likely here, given nothing to say otherwise. 141.101.98.96 19:55, 28 February 2021 (UTC)
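A hand-configured simulation of the kind the comment describes might look like this sketch (all activities and probabilities are made up for illustration, chosen by the author rather than learned from data, with no machine learning anywhere):

```python
import random

ACTIVITIES = {
    # activity: (times per week, assumed infection chance per occurrence)
    "grocery run":   (1, 0.002),
    "outdoor walk":  (5, 0.0001),
    "indoor dinner": (0, 0.02),   # set to 0 while staying home
}

def weekly_exposure_probability(trials=100_000, seed=0):
    """Monte Carlo estimate: fraction of simulated weeks with at least
    one exposure event, given the hand-picked parameters above."""
    rng = random.Random(seed)
    exposed_weeks = 0
    for _ in range(trials):
        exposed = False
        for count, p in ACTIVITIES.values():
            for _ in range(count):
                if rng.random() < p:
                    exposed = True
        if exposed:
            exposed_weeks += 1
    return exposed_weeks / trials

print(round(weekly_exposure_probability(), 4))
```

Debugging such a model means questioning the hand-picked numbers and the simulation logic, which matches the comic's panels better than training any learned model would.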

Edit: Deleted comment. Sorry for the accidental spam. {)|(}Quill{)|(} 14:46, 25 March 2021 (UTC)