In popular science fiction, artificial intelligence is portrayed as an entity compelled by pure logic, with objective fact the only thing that factors into its considerations. The same cannot be said of the artificially intelligent tools currently in use in the real world. Much like their flawed creators, many of today’s AI systems have shown considerable biases in their operations. Let’s discuss this issue and how it can be resolved.
There are a variety of biases that have been observed in artificially intelligent systems, including:

- Sampling bias, where the data is drawn from a group that doesn’t represent the whole population
- Voluntary response bias, where only the people who opt in to provide data are represented in it
- Exclusion bias, where relevant data is left out, whether intentionally or not
- Prejudice bias, where human assumptions and stereotypes are baked into the data itself
Looking over this list, a clear pattern emerges: in the vast majority of cases, the bias doesn’t come from the system itself. Rather, it comes from the people who build and use it. Whether made in error or based on some prejudice or assumption, the majority of biases (particularly those that could impact the function of an artificially intelligent system) come from the user.
For instance, let’s say that we wanted to figure out which aspect of our services our clientele considers most important. While this greatly oversimplifies what AI is capable of, it gets the point across. Even if the algorithm powering the AI were perfectly put together, the data fed into it could easily skew the results. Say your data was collected specifically and exclusively from your business’ Facebook followers. Drawing your data only from that group is clear sampling bias, compounded by voluntary response bias, as those followers also had to opt into providing you with the data. Either one could seriously undermine the accuracy of your conclusions.
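To make the sampling-bias problem concrete, here’s a minimal sketch in Python. The client counts, the "follower" flag, and the priority labels are all hypothetical; the point is simply that surveying only one channel can flip the apparent top answer.

```python
def top_priority(clients):
    """Return the most common 'most important aspect' among a group of clients."""
    counts = {}
    for c in clients:
        counts[c["priority"]] = counts.get(c["priority"], 0) + 1
    return max(counts, key=counts.get)

# Hypothetical client base: most clients value reliability, but the
# Facebook followers (a small, self-selected slice) value social engagement.
clientele = (
    [{"priority": "reliability", "follower": False}] * 600
    + [{"priority": "price", "follower": False}] * 250
    + [{"priority": "social engagement", "follower": True}] * 150
)

# Sampling bias: survey only the Facebook followers.
followers_only = [c for c in clientele if c["follower"]]

print(top_priority(clientele))       # reliability
print(top_priority(followers_only))  # social engagement
```

A flawless algorithm applied to the followers-only sample would confidently report the wrong answer for the business as a whole.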
That’s just one example. We’ve all seen the news stories about AI-powered facial recognition systems that can’t recognize people of certain races, or, in one particularly egregious case, a system that labeled members of certain races as criminals.
Obviously, not ideal.
AI presents a few additional biases as well, particularly when it comes to predicting unprecedented events; after all, the data needed to consider such an event simply isn’t there (an unintentional exclusion bias). The big problem is that, like most biases, it takes awareness to avoid them, and awareness is something an AI system unfortunately lacks.
There are a few steps that need to be taken to mitigate the issues that bias can create in AI, and the approach needs to cover two fronts.
In terms of creating an AI in the first place, a human being needs to be able to observe the program’s processes and catch its mistakes, with (as we always promote) frequent updates to ensure any issues are addressed and the system in general is improved upon. There also need to be standards in place for the data collected and used, so that the biases described above are minimized as much as possible.
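As a rough sketch of that human-oversight idea, one common pattern is to auto-accept only high-confidence outputs and route everything else to a person for review. The threshold and the names used here are assumptions for illustration, not a prescribed implementation:

```python
# Hypothetical sketch: route low-confidence model outputs to a human
# reviewer instead of acting on them automatically.
REVIEW_THRESHOLD = 0.80  # assumed cutoff; tune for the real system

def triage(prediction, confidence):
    """Return an action: auto-accept confident results, flag the rest."""
    if confidence >= REVIEW_THRESHOLD:
        return ("accept", prediction)
    return ("human_review", prediction)

print(triage("approve_application", 0.95))  # ('accept', 'approve_application')
print(triage("deny_application", 0.55))     # ('human_review', 'deny_application')
```

The value of a check like this is less the threshold itself than the fact that a person regularly sees the system’s borderline decisions, which is exactly where hidden biases tend to surface.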
Likewise, your team members need to keep these kinds of biases in mind when they’re working without the assistance of AI. As we’ve established, the biases present in AI are generally sourced from human biases, which means they can influence your business even if you aren’t using an artificially intelligent system at all. You need to make sure that your team members keep an open mind as they process the data you collect and generate.
As artificial intelligence and machine learning become more accessible and commonplace, it’s likely that businesses of all sizes will be able to embrace more advanced tools. In the meantime, we’ll be here to support the tools you currently rely on. To find out more about our fully managed IT services, give us a call at (770) 448-5400.