Alternate Title: The First Time I Ever Spoke to an Employee of the EEOC Where I Wasn’t Simultaneously Working on Perfecting my RBF
I wrote my last EEOC charge response in the winter of 2014 – I believe the total came to 24 charge responses in 8 years’ time (all no cause determinations, thank you very much). If you had told me back then that I would get to talk to the Commissioner of the EEOC – and enjoy the conversation – I would have quite literally laughed in your face.
Yet here I am…
In our HRIS class on Monday night, our guest speaker was the Commissioner of the EEOC – Keith Sonderling. He came to talk to us about bias in AI: how the EEOC is working to combat it, what companies can do to mitigate risk, and, more importantly, what they need to do to prepare for the inevitable arrival of AI in the workplace and guard against bias and discrimination in the output these tools generate.
There were a few common themes within the discussion. The most prominent one (besides the obvious theme of the immense risk of bias) was that regardless of which vendor provided the AI tool, the company is 100% liable for the decisions that the tool makes or assists the company in making. That is a rather powerful assertion, as The Vendor often contractually takes on some of the risk when you use their products, and that risk is typically concentrated on compliance.
This is an interesting position for the EEOC to take, and one that frankly makes sense when you really contemplate it. On its face, though, it is not obvious. The Vendor takes on liability for items such as failing to transmit payroll on time, data breaches, or falling out of compliance with HIPAA regulations, so one might expect The Vendor to pick up the tab when its AI tool gets a client into legal trouble. The Vendor writes and owns the algorithms and the data models, so it would seem to follow that The Vendor would be liable should the company make a poor decision based on the information those AI tools generate. Fundamentally, however, these are not meant to be decision-making tools, but rather guides to help companies make better decisions or to optimize time spent on certain tasks. It is still incumbent upon the practitioner to use the skills they were hired for to make a judgment based on the AI-generated data. As such, the EEOC (rightfully so) places the onus strictly on the company making the decisions, rather than on the owner and author of the algorithms behind that decision. The beauty of this – which probably few people see – is that the argument from many that “AI is going to take my job” falls flat, as responsible use of AI tools in the HR space should always require judgment on the part of the company agent using the tool. And sound judgment is still something that is, and frankly always will be, fundamentally a human trait.
Another issue Commissioner Sonderling discussed was my main takeaway from the presentation he gave at the Conference Board’s People 2030 Conference I attended in NYC back in November. He really reinforced the immense need to ensure your organization’s policies and procedures adequately address the company’s stance on AI, how it is used, and a declaration that your company will use it in a lawful and responsible way. He also mentioned putting statements in your policies to ensure employees use it responsibly. It seems like an obvious bit of advice, but in my career I have many times seen first-hand how a company will deliberate a rather straightforward policy amendment to the point where the initial intent has been lost in the exercise, and the change comes too late, or is already obsolete by the time it is published. His advice stopped just short of saying that an organization without current, robust AI policies in place needs to get in its DeLorean and make these policy changes yesterday. If that describes your organization, keep in mind that since you are going back in time, your flux capacitor needs to be fully functional. I need not remind you that Doc Brown’s DeLorean did not evolve to run on garbage until it went to the future. BIG difference there…
Of course, it would not be a conversation with the EEOC without discussion of my very favorite function of the EEOC – enforcement, and charges claiming discrimination resulting from these AI tools. Oh, how I miss those envelopes I would receive with 50 pages stuffed into a regular-sized envelope that barely sealed, most of which were just rules, regulations, and instructions rather than the charge itself, which was rarely more than 2 pages, if that. They never took a page out of the NLRB playbook by keeping the page count short and sweet and the envelopes adequately sealed without the need for Scotch tape. My favorite was the irony of the disclosure packet about the Paperwork Reduction Act – it instructed the company’s response to be succinct and not waste paper, yet it was not uncommon to receive the equivalent of two sequoia trees in every envelope from the agency – OR to receive multiple mailings that were either redundant or otherwise unnecessary. But I digress, and sincerely apologize for that drive down memory lane that took an odd detour down a dirt road. Back to the topic at hand.
EEOC enforcement here is still very much in its infancy, as AI is a revolution that is challenging for the average person to keep up with, let alone a heavily regulated agency of our remarkably efficient federal government. Nonetheless, setting aside my typical commentary, some of the reasons for this lack of enforcement are understandable. Some of it is COVID, which, okay…I get it. The workplace landscape changed overnight, nobody was prepared for it, there was an immense amount of fallout as a result, yadda yadda. (Question – Is that script out in paperback and in the bargain bin at Barnes and Noble yet? Because it should be by now.) But beyond the COVID aspect is the more formidable hurdle here – the knowledge gap, both on the part of the investigators (and the agency itself) as they develop investigation methodology around AI, and on the part of the people unknowingly subject to these AI tools. Often the employees or applicants do not even know they are using AI tools or are otherwise subject to AI technology in an interview or assessment. Consequently, Commissioner Sonderling advised that it is a good practice for companies to be transparent whenever employees or applicants are subject to AI that is not noticeably AI.
Enforcement is indeed coming, and the EEOC does expect a large volume of charges to follow. There have been some very recent cases regarding AI and discrimination which the Commissioner mentioned, the first involving everyone’s favorite HRIS system. Mobley v Workday is a recently dismissed class-action suit against one of the most prominent enterprise systems out there – Workday. The suit asserts that Workday’s algorithm-based screening tool discriminates against several protected classes and interferes with the employment prospects of applicants in those classes. A quick read of this case was fascinating, and while it was dismissed, there is little doubt in my mind that Mobley will refile an amended complaint with additional evidence to support his claim. This is for sure one to watch closely.
The other case the Commissioner mentioned was ACLU v Facebook, which resulted in a change to the way Facebook targeted ads for housing, employment, or credit in a manner that had a disparate impact on people in multiple protected classes. One interesting facet of this case is that most Facebook users most likely had no idea this targeted exclusionary practice was even happening. As a result (in the case of job postings), many qualified potential applicants did not receive targeted ads for jobs they may have been interested in, simply because the algorithms wrongly assumed who would be interested in or qualified for which jobs, credit, and so on. And they did not even know they were not receiving these postings – something that will have to be addressed as employers begin using AI tools in their HR practices. There must be transparency with the people subject to the actions of the AI tool, as well as constant oversight of the tool’s functionality and the actions taken as a result of the information the AI tool generates.
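For anyone wondering what that kind of oversight might actually look like day to day, here is a minimal sketch – my own illustration, not anything the Commissioner prescribed – of the classic four-fifths (80%) rule check from the EEOC’s Uniform Guidelines, run against the selection rates an AI screening tool produces. The group labels and numbers below are entirely hypothetical.

```python
# Minimal sketch: four-fifths (80%) rule check on selection rates
# produced by an AI screening tool. Group labels and counts are
# hypothetical; a real audit would be far more rigorous than this.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> (selected, total_applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the
    highest group's rate -- a common first signal of adverse impact."""
    rates = selection_rates(outcomes)
    highest = max(rates.values())
    return {
        group: {
            "rate": round(rate, 3),
            "ratio_to_highest": round(rate / highest, 3),
            "flagged": rate / highest < threshold,
        }
        for group, rate in rates.items()
    }

if __name__ == "__main__":
    # Hypothetical screening results from an AI tool for one requisition
    outcomes = {
        "group_a": (48, 120),   # 40% selected
        "group_b": (30, 110),   # ~27% selected -> likely flagged
    }
    for group, result in four_fifths_check(outcomes).items():
        print(group, result)
```

The four-fifths rule is only a first-pass signal, not a legal conclusion, but running something this simple against every cycle of the tool’s output is one concrete way to turn “constant oversight” from a policy statement into an actual practice.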
Other than learning some new terms (“algorithmic discrimination” and “dataset discrimination”) and coming to the personal realization that not every interaction with the EEOC has to end in utter frustration and curse words muttered somewhat under my breath, I left that conversation with one very important notion: companies absolutely can control the proper and ethical use of AI, mitigate risk, and in turn be fair and equitable in the process, all while using a powerful and immensely useful tool.
