2173: Trained a Neural Net
Title text: It also works for anything you teach someone else to do. "Oh yeah, I trained a pair of neural nets, Emily and Kevin, to respond to support tickets."
An artificial neural network, or neural net, is a computing system loosely inspired by the human brain. Rather than being programmed with explicit rules, it "learns" by adjusting its internal parameters as it processes many examples. For instance, neural networks are widely used in image recognition: by analyzing thousands or millions of labeled photos, a network learns to identify particular objects. Such networks typically start with no prior knowledge and are "trained" by being fed examples of whatever they are meant to analyze.
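The "learning from examples" idea can be sketched with a toy program. The code below is a minimal illustration, not how real image classifiers are built: a single artificial neuron (a perceptron) learns the logical AND function purely from labeled examples, with no rule programmed in. Real networks chain millions of such units, but the principle is the same.

```python
# Toy "training by examples": a single neuron adjusts its weights
# whenever it misclassifies an example, until it gets them all right.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (inputs, label) pairs, where label is 0 or 1."""
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for inputs, label in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction          # 0 when the guess is correct
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# "Train" the neuron on all four examples of the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, (1, 1))` returns 1 and the other three inputs return 0; the network has recovered the rule from examples alone, which is exactly what Cueball claims to have done with the photo archive.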
Here, Cueball tells White Hat that he trained a neural net to sort photos into categories. The joke is delivered by the engineering tip in the caption: since a human brain is already a neural network, albeit a biological one rather than an artificial one, teaching yourself (or someone else) to do a task means you have, technically, trained a neural net to do it. So instead of designing and training an artificial neural net for the job, all Cueball did was sort the photos into categories by hand (although he could later use those sorted images as training data for an actual artificial neural network). This is the first comic to offer an "engineering tip", continuing the trend of tips that Protip began long ago.
It is not advisable to say this in real life, because you might then be expected to use your already-trained neural net to do a similar task (or redo the same task) with much greater speed, thus ruining the façade. However, presenting work done by humans as work done by machines has been done in real life, perhaps starting with The Turk in 1770 and continuing into the present day by various AI-themed startups. For example, Engineer.ai described itself as using "natural language processing and decision trees" to automate app development, but was actually employing humans.
The title text is a continuation of this joke, as instead of designing and training two artificial neural nets named "Emily" and "Kevin", all he has done is train two people with those names to manually respond to support tickets. Again, doing this in real life is not advisable, as people are offended when they are referred to by programmers as deterministic automata with no free will.
Neural networks have been trained to perform other tasks that are routine for humans but were formerly difficult for computers, such as driving cars; playing games like chess, Go, and Jeopardy!; and communication skills such as extracting phonological information from speech. In 1897: Self Driving, Randall suggested that crowdsourced applications like reCAPTCHA, which have been used to train neural nets to recognize objects relevant to safe driving in photographs, might also be used for Wizard of Oz experiments, in which humans secretly stand in for a supposedly automated system. Such Wizard of Oz experiments have been explored for phonological training as a form of peer learning, and related work is underway on automating vocational training.
The extent to which artificial neural nets are analogous to human neurobiology fascinates scientists and laypeople alike. While there is no universal consensus on the matter, at least one longstanding theoretical paradigm has recently received renewed attention.
- [White Hat is looking at a smartphone in his hand, while he talks to Cueball, who lifts a hand palm up towards White Hat.]
- White Hat: Oh, hey, you organized our photo archive!
- Cueball: Yeah, I trained a neural net to sort the unlabeled photos into categories.
- White Hat: Whoa! Nice work!
- [Caption below the panel:]
- Engineering Tip: When you do a task by hand, you can technically say you trained a neural net to do it.
- Cueball is depicted abusing the training of such a chatbot in 1696: AI Research.