
2024 – the year I finally decided what I want to be when I grow up. How sad that it took me well over 40 years to figure that out. So, it seems only fitting that a course I took at the beginning of the year (HR Information Systems) would lead me to that decision, and that a class at the end of the year (Algorithmic Responsibility) would confirm that it is indeed what I want to do after graduation (or soon after).
As the title suggests – the class focused on the critical application of algorithms in decision making. We discussed at length the ethical considerations around big data, and the responsible management not only of the decisions made based on the outputs of LLMs, but of the data itself, and how mishandling that data can have dire consequences.
I spent a lot of my spare time this past year reading books on the ethical application of AI, watching vendor videos, reading white papers, and taking endless webinars on the subject. But what the class really impressed upon me is the necessity of a dedicated AI governance role within any HR department whose AI functionality touches the decision-making process. This must be a full-time role: not an HR manager with governance as one duty among many, not the HRIS manager, and not someone outside HR (I'm looking at you, IT and Legal). The reason it belongs inside HR is not just the sheer volume of work that goes into ensuring your outcomes aren't biased or otherwise corrupted, but the ability to recognize those problems in the first place. Conventional wisdom says everyone knows how to do HR's job; the fact of the matter is that they don't, and an HR practitioner auditing the results is far better situated to detect failure.
Case in point: I wrote my final paper in this class on the pitfalls of predictive analytics, focusing on flight risk models. Flight risk models are all the rage right now, but they're far from perfect, sometimes getting it wrong or even reinforcing existing biases. Getting it wrong can be counterproductive at the very least, causing employees who weren't flagged as flight risks to quickly become flight risks, to say nothing of the other consequences that fall into the ethical category. This is precisely why HR should be the one governing these systems. IT might understand the code, and Legal might ensure compliance, but neither can spot when an algorithm's output doesn't align with workforce realities. HR practitioners bring the context: they understand the nuances of human behavior and organizational dynamics, and they're better equipped to identify when results are skewed or biased. Plus, we're the ones tasked with maintaining employee trust, which means ensuring transparency and the ethical use of their data. Governing AI in HR isn't just about fixing algorithms; it's about making sure these tools serve people, not the other way around. And that's a responsibility HR is uniquely suited to handle.
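To make the auditing idea concrete, here is a minimal sketch of one check an HR practitioner might run against a flight risk model's output: the four-fifths (80%) rule for adverse impact. This is not from my paper or any particular vendor's tooling; the column names, data, and threshold here are illustrative assumptions, and a real audit would go much further.

```python
# Hypothetical sketch: checking whether a flight-risk model flags one
# group of employees disproportionately, using the four-fifths rule.
# Column names ("group", "flagged") and the toy data are assumptions.

import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, flag_col: str) -> pd.Series:
    """Return each group's flag rate divided by the highest group's rate.

    A ratio below 0.8 is a common (not definitive) signal that the
    model's output deserves a closer human look.
    """
    rates = df.groupby(group_col)[flag_col].mean()  # share flagged per group
    return rates / rates.max()

# Toy example: which employees the model flagged as flight risks
predictions = pd.DataFrame({
    "group":   ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged": [1,   1,   0,   1,   0,   1,   0,   0],
})

ratios = adverse_impact_ratios(predictions, "group", "flagged")
print(ratios[ratios < 0.8])  # groups that trip the 80% threshold
```

A statistical screen like this can only flag a disparity; deciding whether the disparity reflects bias, a data problem, or a genuine workforce pattern is exactly the judgment call that requires HR context.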
So, given all that I have said above, I am not entirely sure I would have come to these conclusions before this class, at least not right away, or to this degree. Or if I did, I don't think I would be able to explain why I know any of it to be the case. This class didn't just reinforce what I've been learning about AI governance all year; it clarified why it matters so much. And on a more personal note, I end this class and this semester with a much better understanding of just how complex the ethical challenges are in utilizing AI functionality within the HR landscape. It also left me really excited in knowing that AI governance is exactly what I want to do when I grow up.
