If ChatGPT has made anything clear to me, it’s that automation is coming for us.
And, with that automation—whatever form it may take—will come automation bias. I’m assuming, of course, that AI marketing bots won’t be able to work entirely on their own. (At least not for a few years!) In the interim, an AI marketing bot will mediate my A/E/C marketing work.
I imagine that my work will look more and more like quality assurance. Does the output adhere to the RFP? Did the bot understand my prompt about that narrative? Did it incorporate the anecdote about the principal and her alternative high school experience?
I’d like to think I’ll be the exception, but I’m sure that, over time, I will start to trust my bot. I have no doubt that the more I work with it, the more it will learn about my preferences, and the more it will offer output that I can’t find fault with.
Trust is where the danger comes in. Enter automation bias.
Automation bias is the tendency to trust the decision-making output of an automated system and to discount contradictory information from a non-automated source, even when that contradictory information is correct.
This is already a very real problem in industries like healthcare and aviation. Autopilot is great, except when it’s wrong. When autopilot malfunctions, pilots may continue to trust its output even when their own senses tell them otherwise. Automation bias has been documented as a contributing factor in several airplane crashes.
My job is less mission-critical than piloting an aircraft, of course, but just because no one dies when I make a mistake doesn’t mean I’m comfortable making one. I don’t want to make mistakes!
But my trust in the bot will challenge me at times. I will come to accept its output as essentially accurate and aligned with my favored strategy. It won’t always be!