
Who’s responsible: You or the machine? Everything you should consider about AI

From start-ups and large corporations to grocery stores, every single brand, business and outlet wants to get in on artificial intelligence. After all, those who fail to embrace it are destined to fall behind. But while we all know that we need AI — the question on everyone’s minds now is: how can we use it the right way… and are we?

From search engine algorithms and satellite navigation to digital home assistants, artificial intelligence plays a significant role in all of our lives. We self-diagnose our summer flu using Google, argue with chatbots about missed deliveries, and even ask our robot vacuum cleaners — which we’ve painstakingly named — to clean up after us.

Without a doubt, if we stopped to think about it for a moment, we would all realize just how deeply AI is ingrained in everything we do.

Yet despite the prevalence and widespread integration of AI, it’s only due to more recent advancements that an age-old question has been brought back into sharp focus. And that is:

Can we trust machines?

Now, we’ve all seen Ex Machina, Blade Runner (the original and 2049), and perhaps most importantly of all: The Matrix — but science fiction is just that, right? Fiction.

But while we’re a ways off from sentient robots that are completely indistinguishable from humans, the rate at which AI’s evolving and being used — if left unchecked — could result in more harm than good.

The great rush for AI

Virtually every organization on the planet is talking about, thinking about, and planning to implement AI. By allowing it to manage low-level functions, teams are empowered to focus on adding value or delivering new or better services.

But that’s just the tip of the iceberg.

Look: it’s helping pharmaceutical companies to synthesize brand-new medications for serious illnesses by rapidly ingesting and analyzing large amounts of test data.

It’s helping manufacturers to carefully monitor machine applications and usage across sites, as well as highlight when maintenance is required to prevent breakdowns.

And it’s even helping retailers with optimal product pricing, packaging and in-store placement, ultimately driving more interest and connecting customers with what they need most.

According to statistics from the IDC, AI spending is expected to increase to $154bn in 2023, an increase of 26.9% over 2022.

Just let that sink in for a moment: $154bn.

But just as we’re seeing the positives, we’re also starting to weigh up the negatives. And we have one recent development to thank in particular.

Disrupting the industry

In November 2022, OpenAI released a chatbot that would stir the AI pot like no other before it — one that could understand the context, nuance and even the humor behind human language.

ChatGPT.

Built on OpenAI’s foundational large language models (LLMs), ChatGPT completely redefined the standards of AI by demonstrating one thing: that machines can indeed learn the complexities of human language.

Using deep learning techniques, it generates responses that are reminiscent of what you or I would say.

When ChatGPT was released, it gained stardom (or infamy) almost overnight as users shared what it could do on social media. Suddenly, there was a chatbot that could write stories, code computer programs, and even provide travel advice.

There’s more — users discovered that it’s fine-tuned for a variety of language tasks, including translation, summarization, and question answering. It also responds in less than a second, making it perfect for real-time conversations.

And it does all of this in a way that sounds, well, human.
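For a sense of how accessible this power is to developers, here is a minimal sketch of calling a chat model through OpenAI's official Python SDK. The model name, prompt and email placeholder are illustrative assumptions, not a prescription:

```python
# A minimal sketch, assuming the official OpenAI Python SDK (openai >= 1.0).
# The model name and prompt below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # expects an OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this customer email in one sentence: <email text>"},
    ],
)

# The model's reply comes back as plain, human-sounding text
print(response.choices[0].message.content)
```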

But getting to this point with any kind of AI requires continuously feeding it a lot of data. Conversations. Emails. Preferences. Texts. Anything that helps it to achieve a fraction of humanity.

Unfortunately, this kind of approach starts to raise questions in a business environment: are we using it ethically? What do our employees think? Will we get in trouble?

It all depends on how you use it

The fundamental challenge with any kind of AI is using it ethically. We all know that AI requires large, rich datasets to work — but how much and what kind of data? And where and what are the boundaries?

For a long time, AI development and its implementation in life and business have gone largely unchecked. After all, no one’s going to complain about a voice-activated Roomba (unless, of course, it’s listening to you).

However, where many draw the line is when organizations use these tools to leverage personal and/or private information, listen in on conversations across channels without permission, or reinforce biases that perpetuate things like racism.

Why? Because it is at that point that the “humanity” of AI — the ethics of AI — is called into question. We start to apply the same principles we believe in to the technologies we use.

And this brings us to the most salient fact of all: we are ultimately responsible for the outcomes of AI. A machine is neither inherently good nor evil — the hand that feeds it is. The information we provide it is.

So while legislation around AI is relatively new, the parameters around privacy and data protection, health and safety, HR and more — are not.

If we don’t have clear guidelines to establish our ethics and considerations when using these technologies, how can we ever expect good outcomes?

Only a bad workman blames his tools

When it comes to business, the question isn’t so much whether you should or shouldn’t take the leap, but rather how you should take it.

We’re all familiar — to varying degrees — with the benefits of AI across industries, so the challenge isn’t so much choosing a solution but more implementing and using it correctly.

For example, most organizations use some form of robotic process automation (RPA — a form of AI) to scale operations, reduce admin and free up vital people power for more beneficial tasks. Think customer call centers, accounts payable processes, and sending out requests to replace lost debit or credit cards.

Sure, all of this is great, but it only works if the processes themselves are clearly defined, robust and, well, sensible. The moment you apply any kind of automation to something that doesn’t work, or can’t withstand the rigors of scale, holes start to appear and are often incredibly costly to fix.

Imagine for a moment a car production line. The plant delivers on time, every time, every day — but for every 100 parts created, one is defective.

That’s an error rate of 1%. Seemingly insignificant, right? Well, what if we add automation to the equation? Suddenly, the plant is able to produce 10,000 parts… but 100 are defective. Yes, a foreman on site could quality check these parts and remove them from the cycle, but this place is fully automated — including quality control.

Moreover, the AI responsible for quality control hasn’t received the right set of parameters to sort defective parts from good parts. See how quickly it unravels?
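To put rough numbers on it, here is a back-of-the-envelope sketch in Python. The figures are illustrative only, but they show how a constant error rate turns into a much larger pile of defects once volumes scale, and how a badly parameterized quality-control step lets a share of them slip through:

```python
# A back-of-the-envelope sketch of the scenario above. All figures are
# illustrative: the point is that the error *rate* stays constant while
# automation multiplies the absolute number of defects.

error_rate = 0.01  # 1 defective part per 100 produced

for daily_output in (100, 10_000):
    defects = daily_output * error_rate
    print(f"{daily_output:>6} parts/day -> ~{defects:.0f} defective parts/day")

# And if the automated quality-control step has been given the wrong
# parameters and only catches, say, half of the defects (a hypothetical
# figure), the rest roll straight off the line.
qc_catch_rate = 0.5
escaped = 10_000 * error_rate * (1 - qc_catch_rate)
print(f"Defective parts slipping past QC each day: ~{escaped:.0f}")
```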

Getting it right

The above example is nothing new, but as you think about implementing cutting-edge AI into your organization, here are a few things to consider:

How will you manage issues in real time?

Scaling processes helps to drive efficiency, but doing so also exacerbates issues — especially if you haven’t enhanced your issue resolution or quality controls.

You might find that you have to adapt your infrastructure to support your new capabilities — or that you have to hire new talent and build new teams to manage your AI-based programs.

Whatever the reason, it’s vital you consider the possibilities ahead of time, as this is what will help you deliver an ethical, high-performance solution.

What kind of data will you feed to your machines?

One of the things organizations struggle with — especially in today’s climate of data scrutiny and protection — is feeding AI algorithms and systems dense, high-quality data.

As you know, AI is simply a means to an end; something that empowers organizations to do what they already do (and then some) in less time and with less effort. But to achieve the best possible results, you have to provide the right datasets.

How will you protect customer data?

As artificial intelligence evolves, its ability not just to capture personal data, but to use it in ways that invade privacy, grows with it. Take facial recognition systems, for example. These systems are designed to automatically discern who you are by searching databases of existing images — whichever databases happen to be connected to the recognition system — to find a match.

Now say, for a moment, that one of those databases is leaked or altered. Suddenly, someone can fool a machine into thinking they’re someone or something else. The ramifications are potentially catastrophic.
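To make the risk concrete, here is a deliberately simplified sketch of the lookup at the heart of such a system: a nearest-neighbour search over stored face embeddings. Every name, vector and threshold below is invented for the illustration, but it shows how tampering with the enrolled database changes who the system thinks you are:

```python
# A deliberately simplified illustration (not any vendor's actual system).
# All names, vectors and thresholds are made up for the example.
import numpy as np

# Hypothetical "database" of enrolled identities -> face embeddings
enrolled = {
    "alice": np.array([0.12, 0.88, 0.35]),
    "bob":   np.array([0.91, 0.10, 0.42]),
}

def identify(probe: np.ndarray, database: dict, threshold: float = 0.95) -> str:
    """Return the enrolled identity whose embedding best matches the probe."""
    best_name, best_score = "unknown", threshold
    for name, ref in database.items():
        # Cosine similarity between the probe face and the stored reference
        score = float(probe @ ref / (np.linalg.norm(probe) * np.linalg.norm(ref)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

probe = np.array([0.13, 0.87, 0.36])   # a new face scan
print(identify(probe, enrolled))        # -> "alice"

# If the database itself is altered, the same probe "becomes" someone else:
tampered = dict(enrolled, bob=np.array([0.13, 0.87, 0.36]))
print(identify(probe, tampered))        # -> "bob": the same face now matches a different identity
```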

This is why the final stage of scrutiny has to involve human beings. No form of AI capability can run effectively and ethically without us. That’s just a fact.

What’s important is that you consider AI a value-add. Certainly, machines can automate more arduous and repetitive processes, but when there’s a requirement for empathy or critical understanding of what customers, employees and others might want, you must ensure that a human is on the opposite end of those decisions.

Until a time comes when AI can act with nuance and emotion — in simpler terms, until it recognizes what makes us human — it’s up to us, as leaders, managers, trailblazers, customers and, most importantly, people, to set the right boundaries and act with empathy.

Aaron Carpenter // Experience Management Content Strategist

Aaron is a highly skilled and accomplished content strategist specializing in experience management. With a keen understanding of the ever-evolving landscape of digital content, Aaron brings a unique perspective to the art of crafting engaging and impactful experiences for users.
