Emerging Intelligence columnist John Sumser is the principal analyst at HRExaminer. He researches the impact of data, analytics, AI and associated ethical issues on the workplace. John works with vendors and HR departments to identify problems, define solutions and clarify the narrative. He can be emailed at [email protected]

When I look at my news feed, I see a lot of charged words that beg for a click. Headlines that drive traffic seem more important than the actual content. Revenue comes from clicks, not from communication. Attention-grabbing is at the heart of online content.

Then there are the words and ideas that have the opposite effect. They drive people away as soon as they are uttered. They rarely pop into my daily data flow. It’s as if there were a hidden part of language.

Ethics is one of those words. The term invariably brings to mind harshly communicated rules governing conduct. Ethics guides (or codes of conduct) usually contain sections about rule enforcement and disciplinary action. They describe the line that defines acceptable behavior. But I never see articles titled “10 Ethical Standards That Will Improve Integrity.”

And I definitely never see “8 Ethical Issues With Using HR Technology and How to Talk About Them.” But the use of intelligent tools opens a Pandora’s box of questions, consequences and opportunities that are hard to see in advance. Employees are rightly concerned about potential job loss and new operating conditions, including fairness and privacy. Employers want to see the new technology expand productivity and reduce cost. Vendors want to cram their products with as much value as possible. Describing these issues as ethics problems tends to shut the conversation down quickly.

See also: Does your AI need a background check and a reference?

The technology changes rapidly. Our ability to understand its consequences advances more slowly. Hard and fast rules are impossible to develop in the early days of anything. We all want to understand and do the right thing. We just need to call the learning process something other than ethics.

In the coming months and years, expect to see statements, manifestos, guidelines and audits intended to help you understand how this or that entity is approaching the use of intelligent tools.

Vendors will provide assurances of fitness for purpose, explainability, development processes, privacy and data integrity. Employers will deliver documents that describe how to interact with intelligent tools, what to do if the tools make mistakes, how to use the output of the tools and the organization’s commitments to fairness and equity. Employees will organize behind strongly worded assertions of rights, privileges, the desire for intelligibility and the requirement for redress. We are already seeing some of that as employees protest their employers’ contracts with projects or entities they object to.

We will all be left wondering if any of it is useful and whether or where we have control. The sorts of underlying principles that matter (correcting mistakes, calibrating models, using probabilistic information and accounting for unintended consequences) are more organic than classic statements of ethics. Underneath all of the fuss, we will be learning 21st-century management techniques while we continue to run, work in or supply our organizations.

Along the way, we will continue to discover that our questions just get deeper. The very early days of intelligent tools featured outlandish claims about the ability of technology to eliminate bias. In hindsight, it’s clear that the vendors were talking about the earliest stages of sourcing (finding resumes), not the entire hiring process. It turns out that the bulk of hiring bias happens after sourcing, once candidates move into screening and interviews.

See also: How to use machine learning to reduce recruiting bias

For example, there are many offerings that automate the interview-scheduling process. Interview scheduling is full of repeated, predictable patterns. The automated tools use machine learning to understand who the usual interviewers are for a given kind of job. They are very good at finding the “right” interviewers and scheduling them, and they save money.
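
To make that concrete, here is a minimal sketch of the pattern in Python. Everything in it is invented for illustration: real products use richer signals and far more data, but the core move, counting who has interviewed for a kind of job before and suggesting them again, looks roughly like this.

```python
# A minimal, hypothetical sketch of learning the "usual" interviewers
# for a job family from historical scheduling records. The data, names
# and fields are invented for illustration.
from collections import Counter, defaultdict

# Hypothetical history: (job_family, interviewer) pairs from past schedules.
history = [
    ("software_engineer", "alice"),
    ("software_engineer", "bob"),
    ("software_engineer", "alice"),
    ("account_manager", "carol"),
    ("account_manager", "dave"),
    ("account_manager", "carol"),
]

# Count how often each interviewer has handled each job family.
counts = defaultdict(Counter)
for job_family, interviewer in history:
    counts[job_family][interviewer] += 1

def suggest_interviewers(job_family, k=2):
    """Return the k most frequent past interviewers for a job family."""
    return [name for name, _ in counts[job_family].most_common(k)]

print(suggest_interviewers("software_engineer"))  # ['alice', 'bob']
```

Notice that the suggestion list is nothing more than a mirror of past practice.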

That’s where the trouble begins. Reducing hiring bias has to begin with the idea that the interview process is, at a minimum, suspect. Automating it just institutionalizes the existing organizational norms. Historical data always carry the overtones of historical biases.

I’ve begun a project to categorize and document organizational approaches to solving the thorny questions hinted at here. Already, we have the alpha version of a scoring rubric for AI guidelines. We are well on the way to developing a framework.
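
To give a feel for what such a rubric might look like, here is a hypothetical sketch in Python. The criteria, weights and rating scale below are illustrative assumptions, not the actual alpha rubric.

```python
# A hypothetical weighted rubric for scoring AI guideline documents.
# The criteria and weights are invented for illustration; they are not
# the actual rubric described above.
RUBRIC = {
    "explainability":        0.25,  # can outputs be explained to a layperson?
    "error_correction":      0.25,  # is there a process for fixing mistakes?
    "privacy_and_data":      0.20,  # data handling and retention commitments
    "fairness_commitments":  0.20,  # explicit equity and bias provisions
    "redress_for_employees": 0.10,  # a route to contest a tool's decision
}

def score_guidelines(ratings):
    """Weighted average of per-criterion ratings on a 0-5 scale."""
    return sum(RUBRIC[c] * ratings.get(c, 0) for c in RUBRIC)

# Example: a document strong on explainability, weak on redress.
print(score_guidelines({
    "explainability": 5,
    "error_correction": 3,
    "privacy_and_data": 4,
    "fairness_commitments": 3,
    "redress_for_employees": 1,
}))  # 3.5 out of 5
```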

The groundwork was laid in the 2020 Index of Intelligent Tools in HR Technology: The Birth of HR as a Systems Science, a topic I’ll be talking about in my master class at Select HR Tech.
