3 AI trends businesses should follow in 2024

By: Katrina Ingram

An artificial intelligence (AI) ethicist and subject matter expert with PowerED™ by Athabasca University (AU) weighs in on key changes and trends organizations need to follow

It can be hard to keep up with the many ways that AI is impacting our personal and professional lives. This is true even for someone who spends much of their time paying attention to this space. While new AI developments are sure to pose new challenges, a few trends from the past year stand out.

Here are three areas to pay attention to if your organization has deployed or is thinking about deploying AI.

1. Responsible AI practices will become a ‘must do,’ not just a ‘nice to have’

AI-generated image of people learning to use artificial intelligence

In late 2023, U.S. retailer Rite Aid was banned from using facial recognition technology for five years. Facial recognition is a controversial technology, and so was Rite Aid’s use of it to prevent shoplifting. But this case has implications for all companies using AI, not just those using facial recognition.

In its complaint against Rite Aid, the Federal Trade Commission noted that the company had failed to implement several due diligence measures, such as testing, assessing, measuring, and documenting the deployed technology.

The company also failed to monitor the system over time, to confirm the input data (in this case, photos) were of high enough quality for accurate results, and to ensure employees were adequately trained to use the system.

It doesn’t take much to imagine how these same complaints might be levied against any company that uses an AI system without these kinds of measures in place.

“Organizations that want to use AI must prove they’ve done so responsibly.”

– Katrina Ingram, CEO of Ethically Aligned AI, PowerED™ by Athabasca University subject matter expert

Organizations that want to use AI must prove they’ve done so responsibly and have taken concrete steps to mitigate ethical risks. Implementing a responsible AI program is core to demonstrating reasonable due diligence and care.

Key takeaway: If you’re deploying AI in your organization, ensure you have done your due diligence with testing, assessing, measuring, and monitoring the system, as well as appropriate staff training for using it. AI requires an ongoing commitment to ensure it’s used responsibly.
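To make that concrete, monitoring can be as simple as routinely spot-checking the system’s decisions against human review and tracking the results over time. The Python sketch below is a hypothetical illustration of that idea; the AccuracyMonitor class, window size, and threshold are assumptions made for the example, not a method prescribed by the FTC or any regulation.

```python
from collections import deque
from datetime import datetime, timezone

class AccuracyMonitor:
    """Rolling spot-check monitor for a deployed AI system.

    Feed it the outcomes of periodic human reviews (was the
    system's decision correct?); it flags when rolling accuracy
    drops below a chosen threshold. Window size and threshold
    are illustrative, not prescriptive.
    """

    def __init__(self, window: int = 100, threshold: float = 0.95):
        self.results = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.results.append(correct)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def check(self) -> None:
        acc = self.accuracy()
        if len(self.results) == self.results.maxlen and acc < self.threshold:
            # In production this might open a ticket or page a team;
            # a timestamped log entry also documents due diligence.
            print(f"{datetime.now(timezone.utc).isoformat()} ALERT: "
                  f"rolling accuracy {acc:.2%} is below {self.threshold:.0%}")

# Simulated review results: the system's accuracy has degraded.
monitor = AccuracyMonitor(window=50, threshold=0.90)
for outcome in [True] * 40 + [False] * 10:
    monitor.record(outcome)
monitor.check()  # alerts: 80.00% accuracy is below 90%
```

Whatever form it takes, the timestamped record such a check produces is the kind of ongoing testing and documentation the FTC faulted Rite Aid for lacking.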

2. AI in hiring = a high risk that requires more safeguards

AI-generated image of people learning about artificial intelligence ethics

Two regulations now categorize the use of AI in employment decisions as high risk.

These are the New York City bias audit law, passed in 2021, and the European Union AI Act, passed in 2023. Other proposed regulations that take a risk-based approach, such as Canada’s Artificial Intelligence and Data Act, are also likely to deem the use of AI in employment a higher-risk use case.

If your organization is using AI as part of its hiring process, or plans to, it would be prudent to understand the existing laws and to take pre-emptive steps, even if you are not subject to these two regulations right now.

Understanding AI in the context of human resources is one of the use cases covered in AI Ethics: An Introduction. This course from PowerED™ by Athabasca University provides a simulated scenario where students can explore the ethical risks of a potentially biased AI system while using a purpose-built chatbot.

Key takeaway: HR is a high-risk use case that may require you to implement audits to satisfy regulations. Be proactive in understanding upcoming laws, and take the necessary steps to ensure compliance.
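For teams wondering what such an audit actually measures: under the New York City law, a bias audit centers on impact ratios, each group’s selection rate divided by the most-selected group’s rate. Here is a minimal Python sketch of that calculation on hypothetical applicant data; a real audit must be performed by an independent auditor against specific demographic categories, so treat this as an illustration only.

```python
from collections import defaultdict

def impact_ratios(outcomes):
    """Selection rate and impact ratio per demographic group.

    `outcomes` is a list of (group, selected) pairs, where `selected`
    is True if the AI tool advanced the candidate. A group's impact
    ratio is its selection rate divided by the highest group's rate.
    """
    total = defaultdict(int)
    advanced = defaultdict(int)
    for group, selected in outcomes:
        total[group] += 1
        if selected:
            advanced[group] += 1

    rates = {g: advanced[g] / total[g] for g in total}
    top = max(rates.values()) or 1.0  # avoid divide-by-zero if nobody advanced
    return {g: (rate, rate / top) for g, rate in rates.items()}

# Hypothetical screening log: (demographic group, advanced by the tool?)
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]

for group, (rate, ratio) in impact_ratios(log).items():
    note = "  <- review" if ratio < 0.8 else ""  # four-fifths heuristic
    print(f"group {group}: selection rate {rate:.2f}, impact ratio {ratio:.2f}{note}")
```

The 0.8 cutoff flagged here is the long-standing “four-fifths rule” from U.S. employment guidance, not a pass mark set by the New York City law, which requires computing and publishing the ratios rather than meeting a fixed threshold.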

3. AI ethics training is essential to managing AI risks

AI-generated image of a robot acting as a human resources hiring assistant

AI is likely to come into your organization not only through official channels but also through unofficial ones. Some call this “shadow AI,” given its similarity to shadow IT.

Having an official policy on your company’s approved use of AI is a necessary first step. However, training your staff to understand why, where, and when it’s appropriate—or inappropriate—to use AI in the context of their work is key to your ability to manage risks.

We’ve seen this play out at big companies like Samsung, where employees inadvertently shared sensitive information, and with lawyers whose use of AI put their professional reputations at risk. We also saw a university face community backlash after its staff used AI inappropriately to generate content.

Organizations will not be able to effectively manage AI through technical controls or policies alone. Given the pervasiveness of AI in enterprise software and freely available tools such as ChatGPT, all staff need to be on board with using AI wisely.

The safe and responsible use of AI relies on your staff’s knowledge of AI ethics.

Key takeaway: Develop an official policy on AI use and then train your staff to ensure they use AI in accordance with organizational standards.

Main banner credit: AI-generated image

Katrina Ingram, CEO of Ethically Aligned AI, brings over two decades of experience in the technology and media sectors, as well as public service. Recognized as one of the 100 Brilliant Women in AI Ethics, Ingram holds degrees in business administration and communications and technology. She is an International Association of Privacy Professionals (IAPP) certified information privacy professional and actively contributes to AI ethics organizations. Ingram hosts the podcast AI4Society Dialogues and helped develop Canada’s first AI Ethics micro-credential with PowerED™ by Athabasca University. She serves on the Calgary Police Service Technology Ethics Committee and has advised the City of Edmonton on data ethics.

Published: February 21, 2024

Guest Blog from: Katrina Ingram