
Artificial intelligence: How can HR use it ethically?

By HRSG Team on November 25, 2022

Article table of contents (jump to section):

  1. Bias in the machine
    1. Explainable AI
    2. Human mediation
    3. Data sets
    4. Governance frameworks
    5. Continual oversight
  2. Questions to ask your vendor
  3. Enriching HR practices with AI

Artificial intelligence can be a powerful force for good in the field of human resources, but without the right checks and balances, it can easily undermine basic trust between HR and the workforce.

Applying machine learning to a discipline with "human" right in the title may seem counterintuitive, but the trend is here to stay. Talent platforms powered by artificial intelligence are helping HR managers work faster and smarter in myriad ways across the talent lifecycle.

The use of artificial intelligence (AI) in HR has surged in recent years. According to the World Economic Forum, there are now over 250 different AI-based HR tools on the market, and data from SHRM indicates that nearly one in four organizations use automation or AI to support HR-related activities. But while HR has embraced AI capabilities, the enthusiasm needs to be balanced by an awareness of the harm AI can cause when it’s not governed by ethical principles.

If you're a talent professional who uses AI to drive your talent processes, or if you plan to adopt AI technology in the future, here is what you need to know.

 

Bias in the machine

There are excellent reasons for AI’s popularity in HR. Adopting AI-driven technologies can save time, reduce bias, and deliver valuable insights. AI can sift through hundreds or even thousands of resumes in seconds to identify high-quality candidates, for example. It can reduce the impact of personal preferences and biases in hiring, assessment, and promotion. And it can analyze and spot patterns in workplace data that humans would almost certainly miss.

But the technology is far from foolproof. In some cases, AI-based HR technologies have been found to:

  • Discriminate against disabled workers by labeling their performance "not-standard"
  • Exclude qualified but disadvantaged candidates (such as immigrants, veterans, or neurodiverse applicants)
  • Show bias against female candidates applying for technical roles

The bottom line is that AI has come a long way in a short time, but HR leaders and software developers alike need to take a close look at what AI should do as well as what it can do.
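
One widely used check in this area, offered purely as an illustration rather than a description of any specific vendor's tooling, is the "four-fifths rule" from US employment practice: compare selection rates across demographic groups and flag any group whose rate falls well below the highest one. Here is a minimal Python sketch with invented groups and numbers:

    # Illustrative adverse-impact check: compare selection rates across
    # groups and flag ratios below the "four-fifths" threshold.
    # The groups and decisions below are made up for demonstration only.
    from collections import Counter

    def selection_rates(decisions):
        """decisions: list of (group, was_selected) tuples."""
        applied = Counter(group for group, _ in decisions)
        selected = Counter(group for group, ok in decisions if ok)
        return {group: selected[group] / applied[group] for group in applied}

    def adverse_impact_flags(rates, threshold=0.8):
        """Flag any group whose selection rate is less than the threshold
        (four-fifths) of the highest group's rate."""
        best = max(rates.values())
        return {group: rate / best < threshold for group, rate in rates.items()}

    decisions = [("A", True), ("A", True), ("A", False),
                 ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(decisions)           # A ≈ 0.67, B ≈ 0.33
    print(adverse_impact_flags(rates))           # B falls below four-fifths of A's rate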

Here are some of the ways the industry is bringing greater transparency and accountability to this rapidly evolving technology.

Explainable AI

AI can deliver impressive results, but it also has a reputation for being a "black box." In other words, it's not always clear exactly how those impressive results were achieved. And if we can’t see the journey, how can we trust the destination?

Explainable AI is a set of tools and frameworks that helps developers and users break open that black box so that they can see and understand the processes, defend the outcomes, and troubleshoot problematic areas.

As AI becomes more sophisticated and its calculations become more complex, the need for transparency increases.

HERE’S AN EXAMPLE: We use AI to generate job descriptions in CompetencyCore, and over time, it has evolved to the point where it can analyze, contextualize, and interpret every word of the content, not just every sentence. This results in greater precision, but also a more complex calculation. To monitor its accuracy, we keep a detailed record of the way the algorithm was trained, the model it creates, and the results it returns so that we and our users can feel confident in the outcomes.
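
To make the idea of explainability a little more concrete, here is a minimal, hypothetical Python sketch (not HRSG's actual implementation) that uses permutation importance to reveal which inputs a screening model is actually relying on:

    # Hypothetical explainability check using permutation importance
    # (scikit-learn). The "resume features" and model are invented; the point
    # is that reviewers can see which inputs drive the model's decisions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    feature_names = ["years_experience", "skill_match_score", "education_level"]
    X = rng.normal(size=(200, 3))
    y = (X[:, 1] + 0.2 * X[:, 0] > 0).astype(int)   # outcome driven mostly by skill match

    model = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

    # Report the drivers so reviewers can confirm they are job-relevant
    # rather than proxies for bias.
    for name, importance in zip(feature_names, result.importances_mean):
        print(f"{name}: {importance:.3f}")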

Human mediation

By scanning and analyzing high volumes of data automatically, AI enables HR teams to save time and resources. But AI still needs humans to guide the learning process, monitor the quality of the output, and adjust the process as needed. This "human-in-the-loop" process is essential to the quality of the data today and the quality and sophistication of the AI engine over time.

HERE’S AN EXAMPLE: The algorithms used in CompetencyCore are carefully trained and extensively reviewed by senior subject matter experts. This ensures that the advanced machine learning processes are guided and monitored by specialized human expertise. And because competency profiles are never a “one size fits all” proposition, the platform also offers built-in collaboration and validation tools that enable HR managers to fine-tune AI-generated content with input from job incumbents, managers, and experts.
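
As an illustration of the general pattern (the workflow, threshold, and data below are assumptions, not CompetencyCore's internals), here is a minimal human-in-the-loop sketch in Python where only high-confidence AI output is auto-accepted and everything else waits for an expert:

    # Minimal human-in-the-loop sketch: AI-generated content is published
    # automatically only when the model is confident; everything else is
    # routed to a reviewer. Threshold and items are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class ReviewQueue:
        threshold: float = 0.85
        pending: list = field(default_factory=list)
        published: list = field(default_factory=list)

        def submit(self, item: str, confidence: float) -> None:
            if confidence >= self.threshold:
                self.published.append(item)   # high confidence: auto-accept
            else:
                self.pending.append(item)     # low confidence: human review

        def approve(self, item: str) -> None:
            self.pending.remove(item)
            self.published.append(item)

    queue = ReviewQueue()
    queue.submit("Competency profile: Data Analyst", confidence=0.92)
    queue.submit("Competency profile: Marine Welder", confidence=0.61)
    print(queue.pending)   # the low-confidence profile awaits an expert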

Data sets

AI actively learns from the data it ingests. If that data is of poor quality or incomplete, the outcome will be compromised, and so will all future analyses. To strengthen and accelerate machine learning, AI needs access to the most complete, inclusive, and representative data sets. This is sometimes referred to as "big data," or data sets that are so large, varied, and dynamic that traditional data processing methods can't manage them.

HERE’S AN EXAMPLE: CompetencyCore integrates job data directly from one of the largest job-posting sites in the world to create a robust data set for the algorithm to learn from. The system ingests approximately 30,000 job posts a day (nearly 10 million a year).
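
A simple way to picture this kind of data-quality work (the categories and numbers below are invented for illustration) is a representativeness check that compares a day's ingested job posts against a reference mix and flags anything badly under-represented:

    # Illustrative data-quality check: compare the occupational mix of a
    # day's ingested job posts against a reference distribution and flag
    # under-represented categories. All figures are invented.
    from collections import Counter

    reference_share = {"technology": 0.25, "healthcare": 0.25,
                       "trades": 0.25, "administration": 0.25}

    ingested = ["technology"] * 500 + ["healthcare"] * 350 + ["administration"] * 150

    counts = Counter(ingested)
    total = sum(counts.values())

    for category, expected in reference_share.items():
        observed = counts.get(category, 0) / total
        if observed < 0.5 * expected:   # flag anything below half its expected share
            print(f"Under-represented: {category} ({observed:.0%} vs {expected:.0%} expected)")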

Governance frameworks

Recognizing the potential for poorly managed AI to cause harm, technology vendors are establishing formal governance frameworks to guide the development and oversight of the AI technologies they create.

Many national and international standard-setting bodies have developed frameworks, guidelines, and best practices for AI, and HRSG consults several of these as we refine our own AI guidelines.

Continual oversight

AI is like a living organism. It learns and evolves over time, and the data sets it analyzes also change. Because of this, the quality of the outcomes can drift (degrade) over time if the system isn't checked regularly. When it comes to HR, many influential factors can change rapidly, from the economy to the job market to role requirements. That means it’s even more important that humans monitor the outcomes, collect feedback, and adjust the algorithms on an ongoing basis.

HERE’S AN EXAMPLE: HRSG collects feedback from CompetencyCore customers as well as from its senior HR consultants and its customer support, customer success, and implementation teams to monitor the quality of the job descriptions, competency profiles, and career paths generated by AI.
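
One common way to operationalize this kind of oversight (shown here as an illustrative sketch with simulated ratings, not HRSG's actual process) is to compare recent quality scores against a historical baseline and flag statistically significant drift:

    # Illustrative output-drift check: compare recent reviewer ratings of
    # AI-generated content against a historical baseline with a two-sample
    # Kolmogorov-Smirnov test. The ratings here are simulated.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(42)
    baseline_scores = rng.normal(loc=4.2, scale=0.4, size=500)   # historical ratings (1-5)
    current_scores = rng.normal(loc=3.9, scale=0.5, size=200)    # recent ratings

    result = ks_2samp(baseline_scores, current_scores)
    if result.pvalue < 0.01:
        print(f"Possible drift detected (KS statistic={result.statistic:.2f}); flag for human review.")
    else:
        print("Output distribution looks stable.")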

 

Questions to ask your vendor

If you currently use AI-based tools or plan to do so, make sure your organization is investing in technology that is ethically designed and maintained. Asking your vendor these questions can help you identify any potential areas of risk in your current HR technology stack or evaluate future additions to the stack.

  • Have you ever identified evidence of bias in the results generated by your AI?
  • How is the data used to train your AI generated? Is it created by experts in the field? What steps do you take to ensure it doesn’t promote personal biases?
  • How do you ensure that the results your AI generates are interpretable and explainable to both developers and users?
  • How are human review and monitoring processes built into the management of your AI?
  • How do you ensure that the data sets ingested by your AI are representative, complete, and reliable? Are there plans to review or expand the data sets at any point?
  • Do you have an AI governance framework in place? How and when was this framework developed?
  • How frequently do you check the quality of the outputs generated by your AI? What does this review process look like?

 

Enriching HR practices with AI

AI brings exciting capabilities to HR, especially in the field of competency-based management, which has historically been one of the more labor-intensive approaches to defining and empowering talent.

However, HR managers need to pay attention to the risks as well as the rewards. By recognizing the checkpoints and processes that make AI more transparent, accurate, and accountable, you can make informed choices about how to integrate this powerful technology into your talent lifecycle.

Read more on this topic: AI, LLM, and Ethical HR

Learn more about CompetencyCore, the first AI-driven platform for defining and growing talent using competencies.