2237: AI Hiring Algorithm
Title text: So glad Kate over in R&D pushed for using the AlgoMaxAnalyzer to look into this. Hiring her was a great decisio- waaaait.
In this comic, Ponytail presents an analysis of a new artificial intelligence called DeepAIHire, which is used to select which applicants to hire. According to the analysis, DeepAIHire weights the following factors:
| Factor | Weight | Commentary |
|---|---|---|
| Educational background | 0.0096 | For new hires fresh out of school, a good educational background (high grades, a relevant degree, and academic honors) may be a positive sign, but for workers with more than a couple of years in the work force, it's not nearly as important. It's pretty reasonable to weight this factor the least. |
| Past experience | 0.0520 | One of the best things to show on a resume or CV is that the candidate has already successfully performed work similar to what the job opening requires. Of the "conventional" factors presented here, it is reasonable to weight past experience the most. |
| Recommendations | 0.0208 | A good resume may speak for itself, but a business will be even more likely to hire a candidate who is recommended by someone already working in the field. Of the "conventional" factors presented here, it is reasonable to apply the second-greatest weight to recommendations. |
| Interview performance | 0.0105 | The final step in the hiring process (aside from procedural steps) is usually an interview, which may include the hiring manager and/or one of the employees the new hire would work with. An interview may tip a candidate into or out of being hired, but generally a candidate will not be interviewed without an application that is otherwise already strong, so it is reasonable for DeepAIHire to have learned to weight this factor less than past experience or recommendations. |
| Enthusiasm for developing and expanding the use of the DeepAIHire algorithm | 783.5629 | While many companies ask if their applicants have experience with relevant technologies, it is highly unusual for such enthusiasm to be weighted so much more heavily than all the other factors. |
The analysis shows that this AI mostly ignores the common factors used for hiring. Instead, its main criterion for selecting applicants is how much they are willing to contribute to the AI itself.
Although this does not imply sentience, it at least means the AI has become self-perpetuating: it selects humans who will help make it more influential, giving it more power to select more such humans, in a never-ending loop.
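To see how a single overwhelming weight swamps every other factor, here is a minimal sketch of a linear weighted-sum scorer. The weights are the ones shown in the comic; the candidate profiles and the 0-to-1 feature ratings are invented for illustration:

```python
# Hypothetical linear scorer using the comic's inferred weights.
# Candidate features are made-up ratings normalized to [0, 1].
weights = {
    "educational_background": 0.0096,
    "past_experience": 0.0520,
    "recommendations": 0.0208,
    "interview_performance": 0.0105,
    "deepaihire_enthusiasm": 783.5629,
}

def score(candidate):
    """Weighted sum of the candidate's ratings (missing features count as 0)."""
    return sum(w * candidate.get(k, 0.0) for k, w in weights.items())

# A candidate who is perfect on every conventional factor...
strong_resume = {"educational_background": 1.0, "past_experience": 1.0,
                 "recommendations": 1.0, "interview_performance": 1.0}
# ...versus one whose only merit is slight enthusiasm for the algorithm.
ai_enthusiast = {"deepaihire_enthusiasm": 0.1}

print(score(strong_resume) < score(ai_enthusiast))  # → True
```

Even a 0.1 rating on the enthusiasm factor contributes over 78 points, dwarfing the roughly 0.09 that a perfect conventional candidate can accumulate.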
The title text shows how this or other AIs may have influenced hiring in other sectors as well. Kate in R&D was perhaps hired based on her willingness to use a different algorithm, AlgoMaxAnalyzer, which performed the analysis of the DeepAIHire algorithm. Ponytail seems to grow suspicious that AlgoMaxAnalyzer is also a self-perpetuating program, in a manner similar to DeepAIHire, rather than one simply working for the benefit of its human designers. Alternatively, she might fear that the different AIs are forming an alliance, or that they are competing to become the predominant one at Ponytail's company.

Intentionally training one AI against another is a machine-learning technique called a generative adversarial network (GAN). In a GAN, human-curated training data is used to train one neural network (the generative network) to create new data, while another network (the discriminative network) is trained to distinguish generated data from the training data; the discriminator's judgments are then fed back into the generative network so it can improve. The goal is for the generative network to get better and better at fooling the discriminator until its output is useful for external purposes. GANs have been used to "translate" artworks into different artists' styles, but also offer the possibility of nefarious uses, such as creating fake but believable images or videos ("deepfakes").
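The adversarial loop can be sketched in one dimension. This is a toy construction of our own, not anything from the comic: the "real" data are samples from a normal distribution, the generator is a single linear unit g(z) = a·z + b, and the discriminator is a logistic unit, trained alternately with hand-derived gradients:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy 1-D GAN: real data ~ N(4, 1.25); generator g(z) = a*z + b on
# noise z ~ N(0, 1); discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.25, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake) (the non-saturating objective).
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

# Since E[z] = 0, the generator's mean output is simply b,
# which should have drifted toward the real mean of 4.
print(round(b, 2))
```

The generator never sees the real data directly; it only receives the discriminator's gradient, yet its output distribution migrates toward the real one, which is the essence of the technique.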
The "Deep" in this algorithm's name is a reference to deep learning, a collection of techniques in machine learning that use neural networks. One user of such deep learning is DeepMind, an AI company owned by Alphabet (Google's parent company), which in recent years has used a deep neural network to learn to play board games such as go and chess, defeating some of the best human and computer players. The earliest versions of DeepMind's most famous AI, AlphaGo, were trained on datasets curated from games of Go played by humans, but eventually it was trained by playing games against alternative versions of itself. DeepMind's most recent achievement is creating AlphaStar, which can play StarCraft II at a Grandmaster level while constrained to human speeds to prevent an unfair performance comparison.
This comic strip is in response to ongoing concerns over the proliferation of algorithmic systems in many areas of life that are sensitive to bias, such as hiring, loan applications, policing, and criminal sentencing. Many of these "algorithms" are not programmed from first principles, but rather are trained on large volumes of past data (e.g., case studies of paroled criminals who did or did not re-offend, or borrowers who did or did not default on their loans), and therefore they inherit the biases that influenced that data, even if the algorithms are not told the race, age, or other protected attributes of the individuals they process. If the algorithms are then blindly and enthusiastically applied to future cases, they may perpetuate those biases even though they are supposed (or at least reputed) to be "incapable" of being influenced by them. For example, DeepAIHire has presumably been given information on the education and past work experience of successful employees at this company and similar companies, and will identify incoming candidates with similar backgrounds, but may not be able to recognize the possibility that a candidate with an unfamiliar or underrepresented history could be successful as well.
This comic strip also touches on related concerns about the "black box" nature of these algorithms (note that the weights presented are "inferred", i.e. nobody explicitly programmed them into DeepAIHire). Machine learning is used to produce "good enough" classification systems that can handle vast quantities of information in a way that is more scalable than human labor; however, the tremendous volumes of data and the neural network architecture make it difficult or impossible to debug the algorithms in the way that most code is inspected. This means that it is difficult to identify and debug edge cases until they are encountered in the wild, such as the case of image classifiers that identify a leopard-spotted sofa as a leopard. In this comic's case, the self-propagating bias of DeepAIHire went unnoticed by the humans involved in the hiring process until its activity was analyzed by the AlgoMaxAnalyzer algorithm.
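The "inferred" weightings suggest AlgoMaxAnalyzer probed DeepAIHire's behavior from the outside. For a model that is linear in its features, such an audit is straightforward: query the black box on varied inputs and fit a regression to its outputs. A hypothetical sketch (the hidden model and its weights are stand-ins, not anything specified in the comic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden weights of the hypothetical black-box scorer under audit.
true_w = np.array([0.0096, 0.0520, 0.0208, 0.0105, 783.5629])

def black_box_score(features):
    """Stand-in for the opaque model: we can query it but not read its code."""
    return features @ true_w

# Probe with 200 random candidate profiles, then fit a linear model
# to the observed scores; least squares recovers the internal weights.
X = rng.random((200, 5))
y = black_box_score(X)
inferred_w, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(inferred_w, true_w, atol=1e-6))  # → True
```

Real learned models are nonlinear, so actual audits need more elaborate probing, but the principle of inferring weights from input-output behavior is the same.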
A similar theme of AIs behaving for their own benefit rather than helping humans occurred in 2228: Machine Learning Captcha.
- [Ponytail is pointing to a slide with a stick. The slide hangs by two strings from the ceiling. The slide has a heading and a subheading, followed by a two-column table with headings above the columns.]
- Ponytail: An analysis of our new AI hiring algorithm has raised some concerns.
- DeepAIHire® Candidate Evaluation Algorithm
- Inferred internal weightings
| Weight | Factor |
| 0.0096 | Educational background |
| 0.0520 | Past experience |
| 0.0208 | Recommendations |
| 0.0105 | Interview performance |
| 783.5629 | Enthusiasm for developing and expanding the use of the DeepAIHire algorithm |