Summary: This dataset is the result of a field experiment designed to test whether job applicants who disclose a criminal record in the British labour market have a lower probability of success than equivalent applicants who do not. The dataset also allows analysis of whether the stigma of a criminal record interacts with race and gender. It additionally records whether the job opening to which the candidates applied included specific questions on criminal records. These questions may be used to test the potential effects of introducing a Ban-the-Box policy in the United Kingdom, which would prohibit questions on prior convictions in the first stages of the recruitment process.
1. Research design
This study uses an experimental method known as an online paired correspondence audit study. In such studies, the researcher applies to real job openings online with two fictitious candidates. The CVs and cover letters of the candidates show equivalent traits, skills, and work experience except for one differentiating factor called the ‘experimental condition’. The researcher then compiles the responses for each type of candidate; if an average difference is detected between candidates disclosing the experimental condition and those who do not, this is taken as causal evidence of discrimination based on that condition. This method has proven efficient and robust in documenting discrimination against groups with stigmatising characteristics, including people with criminal records.
I submitted two fictitious candidate applications to 1,053 online job advertisements. In each pair, one application specified or suggested a criminal record (henceforth also referred to as the ‘experimental candidate’), and the other (henceforth referred to as the ‘control candidate’), with equivalent characteristics, skills and job experience, did not.
2. Signs of criminal records for fictitious candidates
Previous research suggested that employers’ responses might diverge depending on the severity of the sentence disclosed. Accordingly, in half of the applications, the experimental candidate’s CV and cover letter were made to suggest or signal that he or she had served a prison sentence, the most severe punishment in the UK. In the other half of the applications, the experimental candidate’s application materials were made to suggest or signal that they had served an alternative sentence, namely ‘community payback’, a type of community sentence involving activities such as removing graffiti or clearing wasteland.
The existence of a prior conviction was signalled in up to three ways. First, for all jobs, the ‘Education’ section of the experimental candidate’s CV gave an indirect signal of having served a sentence, by including recent completion of an IT training course at a detention centre or probation institution. Second, when the application requested a cover letter (24.4% of the job openings), a paragraph of the letter explained the recent completion of a 12-month sentence and the candidate’s re-entry efforts. Third, for job openings with questions on prior convictions (12.9% of the openings), I ticked ‘Yes’ on the corresponding box. To ensure comparability, all control candidates’ CVs mentioned an equivalent IT course completed at an educational institution with no reference to the criminal justice system. The control candidate’s cover letter contained an equivalent paragraph stating the motivation to find a job again after a spell of unemployment, and ‘No’ was always ticked on questions about the applicant’s prior criminal record. In both experimental and control CVs, the applicant’s last job had finished between twelve and fourteen months earlier. I expected that employers would interpret the gap between the previous job and the present either as time spent serving a sentence, for candidates with a criminal record, or as an unemployment spell, for candidates without one.
3. Signs on gender and ethnicity of fictitious candidates
For each job opening, the two fictitious applicants shared the same gender and race profile, assigned randomly. I used four gender and race profiles: Black female, Black male, White female and White male. I decided to focus on White British ethnicity since this is, by far, the largest ethnic group in the UK (86.0% in the 2011 census). To contrast the effect of race, I focused on Black ethnicity, since Black people are the minority group with the most disproportionate contact with the criminal justice system (Ministry of Justice 2021). Financial and time constraints limited the possibility of introducing other gender and race profiles.
The candidate’s race and gender were signalled through the name. I used three different names for each race and gender profile. For White profiles, I used the three most common names for White male inmates in the prison system in England and Wales in December 2019. For Black profiles, I started from the ten most frequent names for babies born in the year 2000 in the UK from the list of ‘Black sounding’ names suggested by Gaddis (2017). I then ran these names in descending order through Google image searches in private mode until I found three names for each gender for which only individuals of Black ethnicity appeared in the first scroll of results. I used the same family names for the four gender and race profiles, based on the most common surnames for inmates in the prison system in England and Wales in December 2019. A list of the names and surnames can be accessed in Annexe 1 of the Online Supplementary Materials.
4. Other Characteristics of Fictitious Applicants
I developed applicant profiles and corresponding CVs and cover letters to ensure similarity across background characteristics, but without exactly duplicating pieces of information or style. For residential addresses, I used real street names but fictitious street numbers in two different lower-layer super output areas located within three kilometres of the city centre, with a high concentration of the Black population and in the first decile of the index of multiple deprivation. Regarding previous job experience, the fictitious applicants had worked for 3.5 to 4.5 years in different jobs in the retail, construction and catering/hospitality sectors. For ethical reasons, I used the names of fictitious companies. To make it more difficult to detect that this was an experiment, I situated these companies in two different UK cities not included in the sample. Both applicants stated they had been awarded a General Certificate of Secondary Education. Both had graduated from secondary school in 2016, so employers would infer that they were in their early twenties.
5. Sample
I created a library of functions in Python 3.8.3 to call the API of the job search engine ‘reed.co.uk’. The code first created a database of job listings that met the inclusion criteria. Specifically, I programmed the inclusion criteria to search for entry-level job openings in the retail, catering/hospitality, construction, manufacturing, agriculture/horticulture and logistics sectors. The first five sectors have been identified as priorities for training ex-offenders, specifically to cover labour shortages identified in the ‘Education and Employment Strategy’ of the Ministry of Justice (2018, p. 30). I also included jobs in the logistics sector since previous research stated that US gig companies in this sector could be exporting background-checking practices offshore. The code also limited the search to job openings situated within 10 kilometres of the centre of the most populated city in each of the 12 UK ‘NUTS 1’ statistical areas (Belfast, Birmingham, Bristol, Cardiff, Glasgow, London, Leeds, Manchester, Newcastle, Norwich, Nottingham, Southampton). I only sampled large cities to ensure a sufficiently large number of potential jobs to apply for and to avoid diluting the job market with fictitious applications. The code excluded job openings with salaries above £30,000 per annum, approximately the median annual earnings for full-time employees in the UK in 2019; openings in a company branch to which I had already applied in the past, to avoid overburdening recruiters; and positions in a company I had already applied to in the last six months, to avoid detection. From the resulting list, the code then selected a random sample of job openings to apply for daily.
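The automated filtering and daily sampling described above can be sketched as follows. This is a minimal illustration, not the study’s actual code: the listing fields, thresholds, and function names are assumptions, and the real pipeline also handled the reed.co.uk API calls that populate the listing database.

```python
import random

# Illustrative inclusion rules; field names and values are assumptions.
SECTORS = {"retail", "catering/hospitality", "construction",
           "manufacturing", "agriculture/horticulture", "logistics"}
SALARY_CAP = 30_000  # exclude openings above this annual salary

def meets_criteria(job, past_branches, recent_companies):
    """Return True if a job listing passes the automated inclusion rules."""
    if job["sector"] not in SECTORS:
        return False
    if job.get("salary") is not None and job["salary"] > SALARY_CAP:
        return False
    if job["branch_id"] in past_branches:    # never re-apply to the same branch
        return False
    if job["company"] in recent_companies:   # no same company within six months
        return False
    return True

def daily_sample(listings, past_branches, recent_companies, k, seed=None):
    """Draw a random daily sample of eligible openings to apply for."""
    eligible = [j for j in listings
                if meets_criteria(j, past_branches, recent_companies)]
    rng = random.Random(seed)
    return rng.sample(eligible, min(k, len(eligible)))
```

The two-stage structure (deterministic filter, then random draw) mirrors the description above: eligibility is decided first, and randomness enters only in which eligible openings receive applications on a given day.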
Next, a research assistant and I read the job description for each opening. We excluded openings that required skills or experience beyond those of the fictitious candidates (59.4% of the openings). We also excluded jobs unrelated to the sectors above (6.9%), job openings advertised by companies that posted openings under more than one name (4.3%), and job openings that involved answering specific questions, taking a psychological test, or applying through a video interview (0.6%). We then applied to the remaining job openings on behalf of the two fictitious candidates, submitting the second candidate’s application the day after the first. In about 1.5% of cases, the job opening had already closed, so it was impossible to apply with the second fictitious candidate. Errors were made in applying to about 4% of the openings, and these were excluded from the analysis. In total, we applied with two candidates to about a quarter (22.9%) of the job openings from the original randomly selected list.
The study was conducted between May 2021 and July 2022 in three periods: mid-May 2021 to late July 2021, mid-December 2021 to late February 2022, and mid-June 2022 to late July 2022. These periods coincided with the last stages of the social-distancing measures imposed during the COVID-19 lockdown. The unemployment rate ranged from 4.8% at the beginning of the period to 3.6% at the end.
6. Randomisation Procedure
As described in the preceding paragraphs, I used randomisation across several aspects of my experimental design. To summarise, I randomised: (1) the information placed on the CVs and cover letters, (2) the experimental condition used (i.e., having served a prison or a community sentence), and (3) the demographic profile of the applicants (i.e., Black female, Black male, White female, or White male). I also randomised the order of the paired applications (i.e., whether the application from the individual with a criminal record was submitted first or second). Specifically, for each month of data collection, I used Lahey and Beasley’s (2018) Resume Randomizer program to create 100 different batches of résumés for each demographic profile of fictitious job applicants and each city. For each batch, I created two CVs, one for an individual with a criminal record (prison or community sentence) and another without. The information on CVs and cover letters and the experimental condition included in the batch were assigned randomly by the Resume Randomizer program. I then randomly assigned which batch (i.e., from 1 to 100) of the paired résumés to use when applying for a particular opening using the Python random function.
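The per-opening random assignments above can be sketched as a small Python routine. This is an illustrative assumption about how the pieces fit together, not the study’s code: the batch contents themselves come from the Resume Randomizer program, and the names below are hypothetical.

```python
import random

# Illustrative assignment values; the actual labels are assumptions.
PROFILES = ["Black female", "Black male", "White female", "White male"]
CONDITIONS = ["prison sentence", "community sentence"]

def randomise_application(seed=None):
    """Sketch of the random assignments made for each job opening."""
    rng = random.Random(seed)
    return {
        "profile": rng.choice(PROFILES),           # shared by both paired candidates
        "condition": rng.choice(CONDITIONS),       # experimental candidate only
        "batch": rng.randint(1, 100),              # which pre-built CV pair to use
        "experimental_first": rng.random() < 0.5,  # order of the paired applications
    }
```

Seeding the generator, as in the sketch, makes each assignment reproducible, which is useful for auditing the randomisation afterwards.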
7. Outcome Variable (Interview_offered)
Each race, gender and criminal-record profile had its own email account and a unique phone number, which could receive texts and voicemails. The binary outcome variable records whether the applicant received a response from the employer requesting an interview, asking the applicant to call back, or proposing a chat about the job or the applicant’s background. Auto-generated emails acknowledging receipt of the application, or stating that the candidate had passed the initial stage of the recruitment process, were not coded as positive responses unless they included a request for an interview. Responses in which a recruiter contacted the candidate about a job different from the one applied for were also excluded.
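The coding rule for the outcome variable can be expressed as a small function. The keyword list and response fields below are illustrative assumptions, not the study’s actual codebook; in practice responses were coded by hand.

```python
# Hypothetical cues for a positive response; the study's real coding
# was manual, so this list is an illustration only.
POSITIVE_CUES = ("call us back", "call me back", "have a chat",
                 "invite you to an interview", "discuss your application")

def code_response(text, auto_generated=False, same_job=True):
    """Return 1 if a response counts as an interview offer, else 0."""
    if auto_generated or not same_job:
        return 0  # acknowledgements and wrong-job contacts are excluded
    text = text.lower()
    return int(any(cue in text for cue in POSITIVE_CUES))
```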