Artificial Intelligence in Labor Market Matching



Bibliographic Details
Main Author: Wiles, Emma Benz
Other Authors: Horton, John
Format: Thesis
Published: Massachusetts Institute of Technology, 2024
Online Access: https://hdl.handle.net/1721.1/155874
Description
Summary: In my dissertation I study three applications of AI in labor market matching. In my first chapter I show that resumes improved, but not entirely written, by AI make workers more likely to be hired, with no negative downstream consequences for employers or for match quality. In my second chapter, however, I show that when employers are given entirely AI-written drafts of a job post, the posted jobs are more generic and less likely to result in a hire. Lastly, I provide evidence that non-technical workers can use AI to upskill into data science, but that those skills do not persist in the absence of AI assistance.

My first chapter investigates the association between writing quality in the resumes of new labor market entrants and whether those entrants are ultimately hired. I show this relationship is, at least partially, causal: in a field experiment in an online labor market with nearly half a million jobseekers, treated jobseekers received algorithmic writing assistance on their resumes. The writing on treated jobseekers' resumes had fewer errors and was easier to read. Treated jobseekers were hired 8% more often, at 10% higher wages. Contrary to concerns that the assistance removes a valuable signal, I find no evidence that employers were less satisfied with the quality of work done, as measured by star ratings, the sentiment of reviews, and the probability of rehiring a worker. The analysis suggests that digital platforms and their users could benefit from incorporating algorithmic writing assistance into text-based descriptions of labor services or products without negative downstream consequences.

In my second chapter, I study a randomized experiment conducted on an online labor market that encouraged employers to use a Large Language Model (LLM) to generate a first draft of their job post. Treated employers are 20% more likely to post the job and spend 40% less time writing it. Among the posted jobs, treated employers receive 5% more applications. Despite this, they are 18% less likely to hire. I find no evidence that this is driven by treated employers receiving lower-quality applicants. Moreover, despite the large increase in the number of jobs posted, there is no difference in the overall number of hires between treatment and control employers. These results imply that the treatment lowered the probability of hiring among at least some jobs that would otherwise have made a hire. I rationalize these results with a model in which employers with heterogeneous values of hiring can attract better matches by exerting effort to precisely detail required skills. I show how a technology that lowers the cost of writing and imperfectly substitutes for effort causes more posts but lowers the average hiring probability through both marginal posts (as these are less valuable) and inframarginal posts (as the technology crowds out effort and makes job posts more generic). I provide evidence for these mechanisms using employer screening behavior and the embeddings of the job posts' texts.

In my third chapter, we investigate whether LLMs can help non-technical workers adapt to technology-induced, rapidly changing skill demands by "upskilling" into a more technical skillset. With coauthors at Boston Consulting Group (BCG), we run a randomized controlled trial on knowledge workers with no data science experience to test whether workers paired with LLMs can perform data science tasks at the level of real data scientists. We give consultants at BCG data science problems representative of what the data scientist role at the company demands, but which GPT-4 cannot solve on its own. We find that treated workers, given access to and training in using ChatGPT, are more likely to correctly solve all three tasks, and on the coding task can perform at the level of real data scientists working without GPT-4. These results suggest that LLMs can help workers gain new skills to meet the evolving, more technical demands of the labor market, but that for some types of tasks the work of non-technical workers is not interchangeable with that of data scientists.