Responsible AI in Practice: Resume Matching


Steve Cooper
Head of Enterprise Data and Analytics Program at Amazon Web Services (AWS)

Bias in algorithms is a big topic. It is a necessary conversation and, as with most important matters, it draws out the full range of emotions and perspectives.

The business press and mainstream media present a consistent, albeit limited, view of the challenge. The Economist names it as a mega-trend for 2019, defining “algorithmic bias” as “the worry that when systems are trained using historical data, they will learn and perpetuate the existing biases. To ensure fairness, AI systems need to be better at explaining how they reach decisions” (The Economist, The World in 2019, November 7, 2018). The Financial Times has discussed well-known cases such as the State of Idaho’s Medicaid payments and Amazon’s experimental recruitment algorithm, regarding both explainability and data as sources of issues: “[I]t is impossible to know how an algorithm arrived at its conclusion and the programs are only as good as the data they are trained on” (The Financial Times, February 13, 2019).

Academia and AI practitioners, understandably, have a more nuanced view of algorithmic bias. As the Financial Times recognizes, in the past couple of years experts have started to focus on mitigating bias in data; it is now a dominant theme at AI conferences, and many packages are available to support explainability. Last year, our colleagues at Accenture launched a tool to help customers identify and fix unfair bias in AI algorithms.

A final school of thought is that, once tuned, algorithms will likely be far less biased than humans. This points to the recognition that humans are inherently biased creatures. Authors such as Malcolm Gladwell (Gladwell, Malcolm (2005). Blink: The Power of Thinking Without Thinking) and Daniel Kahneman (Kahneman, Daniel (2011). Thinking, Fast and Slow) have written beautifully about our necessary ability to make fast, instinctive and emotional decisions. We all love to think of ourselves as unbiased, but that is not the case. If you haven’t already, I encourage you to take Harvard’s Project Implicit test that Gladwell refers to.

The Responsible AI Framework (“Responsible AI by Design”)

To say that algorithmic bias is a complex area of our field would be an understatement. Many solutions will take years to fully mature. However, for data analytics practitioners, inaction is not an option. Within our Enterprise Platform and Studio work, we have anchored to five fundamental principles for “Responsible AI” to ensure that we pursue our journey to an “intelligent enterprise” with diligence:

  1. Data Security – ensure compliance with enterprise data security requirements such as administration authorizations, role-based access controls (e.g. column/row-level security), the least-privilege principle, maintenance and review of user access rights, network access control, and auditing and logging of access, together with full governance control and audit support, including democratization of the data catalog.
  2. Data Privacy – privacy at the core, complying with global legal and data privacy concepts and principles such as legitimacy, fairness, transparency and data minimization; defining what types of data can be ingested into the lake (non-personally identifiable information and non-sensitive personally identifiable information), with explicit approval required for sensitive personally identifiable information (PII). Particular care and attention is always required for modelling with non-anonymized or sensitive PII.
  3. Explainability – explainable AI (XAI), or interpretability, has been a core principle of our Studio since inception. It requires that our models provide the user with some understanding of how a result was reached; this is typically served by showing the key features that drove that result.
  4. Fairness in AI – remove or reduce the weight of any features or values that may produce prejudiced results for a person or group. For example, recommendations that are biased by gender.
  5. Minimum Viable Product (MVP) – finally, we fundamentally believe in the “human in the loop”. As The Economist says: “help humans make better decisions, rather than making decisions for them” (The Economist, The World in 2019, November 7, 2018). No model is deployed directly to production; the focus of the Studio is to produce MVPs that are immediately put into the hands of our users. We’ve seen in previous blogs that this ensures we get to value sooner rather than later. Another benefit is that we get humans looking at the results, helping us understand the accuracy of the model and any potential unintended consequences.

The Resume Matching Challenge

A fantastic example of our framework in action is our recent resume matching product. Accenture receives approximately 2 million candidate applications a year. At that scale, candidates can easily feel overwhelmed: it is often difficult to find the right job (we have 1,500 openings in the US alone), and, depending on the channel through which they approach us, it may take multiple searches and multiple steps to apply. In the hyper-competitive environment in which we operate, the candidate experience is mission critical for Accenture.

Our challenge was to simplify the experience, giving candidates the ability to simply submit their resume or LinkedIn profile, answer a few simple questions and get help in finding open jobs that are a good match for their interests and skills.

Our Solution

We applied our Responsible AI framework in developing the solution to this challenge:

  1. Data Security – in line with our data lake strategy, all data was ingested and secured on the Accenture data lake. Multiple source applications send job description data to the lake via AWS Lambda functions, and the AI matching solution is exposed through a single, secure, externally facing API.
  2. Data Privacy – as part of our Studio inception phase, a full review of the data (resumes and job descriptions) and the modelling approach was approved by our global data privacy team. As is typical with early modelling and MVP work, the initial scope was approved for US data. In addition, we built a differential privacy framework into the data engineering pipeline that adds noise to the underlying data to strengthen privacy protection (a minimal sketch of this idea follows this list).
  3. Explainability – now comes the modelling phase. Resumes and job descriptions are converted to vectors, and TF-IDF is used to compare the frequency of terms across both resumes and job descriptions. Some high-level questions (e.g. travel preference) are used to filter results. The benefits of this approach are that it is fast to train, simple and scalable. It employs an unsupervised learning technique and is therefore less liable to reinforce biases in past hiring behaviour, for example the staffing of certain profiles of people or skills to specific roles. An additional benefit of TF-IDF is that it should prove relatively language independent, able to adapt to different languages in future phases. The simplicity of TF-IDF also means that features (n-gram terms) are easily shown to users in order to explain their match to a role (a sketch of this matching and explanation step follows this list).
  4. Fairness in AI – the first step was to agree on a definition of bias in the context of resume matching. With our stakeholders, we agreed it was a difference of more than +/- 5 percentage points between the gender ratio of applicants applying to a role and the gender ratio of candidates proposed for that role. For example, if 60% of applicants to a role were women, our recommender should propose no less than 55% and no more than 65% women. Given the unstructured nature of our data, the team needed to create a new pipeline for measuring and reducing bias (a debiasing sketch follows this list). The corpus of resumes and job descriptions is passed to an embedding algorithm. A gender axis is computed by subtracting the embedded vector for “woman” from the embedded vector for “man”. The cosine similarity between each word and the gender axis is then calculated, ranging from -1 to +1, with the sign indicating whether a word leans toward the male or female pole. An absolute bias threshold is applied, beyond which (on either side) words are penalized through a reduction in their weights. For example, if the absolute threshold is 0.1, words with a cosine similarity above 0.1 or below -0.1 will be debiased. The results have been highly promising: the aggregate gender ratio of applicants and the post-bias-reduction ratio of proposed candidates are almost identical. More importantly, as we drill into the nodes of our talent competency model (called Talent Segments at Accenture), we see that the majority of skewed segments have been correctly treated. For example, if a segment has 70% female applicants, our model proposes approximately 70% women.
  5. MVP – we are now at the point where we will formally start a pilot of the model. In line with our principle of learning products, we will put this model, in a controlled manner, into the hands of recruiters. We will use their feedback to determine the robustness of the model and to inform our next sprints as we progress towards a full production solution.
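
The post does not specify which differential privacy mechanism the pipeline uses, so the following is only a minimal sketch of the general idea, assuming a standard Laplace mechanism applied to a numeric aggregate before it leaves the pipeline; the function name and the sensitivity/epsilon values are illustrative, not the production settings.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return the value with Laplace noise calibrated to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative use: perturb a per-segment applicant count before downstream analysis.
noisy_count = laplace_mechanism(true_value=128, sensitivity=1.0, epsilon=0.5)
print(round(noisy_count))
```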
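As a concrete illustration of the matching and explainability steps described above, here is a minimal sketch using scikit-learn's TF-IDF vectorizer and cosine similarity. The job descriptions, resume text and the unigram/bigram setting are made up for the example; the production pipeline is more involved.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: job descriptions keyed by role, plus one candidate resume.
jobs = {
    "data-engineer": "Build data pipelines on AWS using Python, Spark and Lambda.",
    "ux-designer": "Design user journeys, wireframes and prototypes for web products.",
}
resume = "Python developer with experience in Spark, AWS Lambda and data pipelines."

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
job_matrix = vectorizer.fit_transform(jobs.values())   # one TF-IDF row per job description
resume_vector = vectorizer.transform([resume])          # project the resume into the same space

scores = cosine_similarity(resume_vector, job_matrix)[0]
best = int(scores.argmax())

# Explainability: surface the shared n-grams that contributed most to the best match.
terms = vectorizer.get_feature_names_out()
contribution = resume_vector.multiply(job_matrix[best]).toarray()[0]
top_terms = [terms[i] for i in contribution.argsort()[::-1][:5] if contribution[i] > 0]

print(list(jobs)[best], round(float(scores[best]), 3), top_terms)
```

Because the explanation is just the overlapping weighted terms, it can be shown to a candidate directly, in line with the explainability principle above.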
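And here is a minimal sketch of the gender-axis debiasing step, assuming `embed(word)` is a lookup into whatever pre-trained word-embedding model the team used (the post does not name it), treating each term as a single token for simplicity, and using illustrative threshold and penalty values rather than the production settings.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def debias_term_weights(term_weights, embed, threshold=0.1, penalty=0.5):
    """Reduce the weight of terms whose embeddings lean strongly toward either gender pole.

    term_weights: dict mapping terms to their TF-IDF weights.
    embed:        callable returning the embedding vector for a word (assumed, not from the post).
    """
    gender_axis = embed("man") - embed("woman")        # gender axis as described in the post
    adjusted = {}
    for term, weight in term_weights.items():
        lean = cosine(embed(term), gender_axis)        # sign shows which pole the term leans toward
        # Terms beyond the absolute threshold on either side get their weight reduced.
        adjusted[term] = weight * penalty if abs(lean) > threshold else weight
    return adjusted

# Toy usage with made-up 3-d vectors, purely to show the mechanics.
toy_vectors = {
    "man": np.array([1.0, 0.0, 0.0]),
    "woman": np.array([-1.0, 0.0, 0.0]),
    "nurse": np.array([-0.6, 0.5, 0.2]),
    "engineer": np.array([0.05, 0.9, 0.3]),
}
weights = {"nurse": 0.8, "engineer": 0.7}
print(debias_term_weights(weights, embed=lambda w: toy_vectors[w]))
```

The penalization is symmetric, matching the post's description of treating words on both sides of the absolute threshold.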

The progress in this product is testament to a highly passionate team that came together to create the MVP. Thanks to Abhishek, Ben, Rebekah, Itziar, Pranay, Xin, Aonghus and Jango from the Studio, Diana for being a fantastic product owner, excellent coaching from Rumman, Molly, Sangita, Merel and Monica, and our sponsors Tracey and Rinku. This team is illustrative of perhaps a sixth element of our Responsible AI framework – diversity. As Fei-Fei Li says, “human-centered AI recognizes that computer science alone cannot address all the AI opportunities and issues… we need to have a very humanistic view of this and recognize that for any technology to have this profound impact, we need to invite all sectors of life and society to participate” (Martin Ford, Architects of Intelligence: The Truth About AI from the People Building It, 2018).

Next

The journey has only just begun. Based on the feedback from our pilot, we will continue to evolve the product:

  • Additional dimensions of bias reduction: race, experience, socioeconomic status and geography
  • Additional language support

As ever, feel free to provide feedback and/or let us know the parts of our AI Journey that you’d like us to focus on.

Source: Responsible AI in Practice: Resume Matching
