One of my favorite shows, “The Good Place,” had its series finale last year at the end of its fourth season. I watched every episode through to the end, and the finale neatly buttoned up the storylines in a way that was satisfying and complete, a privilege reserved for shows that get to choose when their stories end.
Yet, the other day, my news app delivered a headline offering a sneak peek into season five casting, episodes and release date. Surprised, I clicked through, only to be met with an article that had clearly been written as clickbait by a robot. There was no season five, of course, because the show was over, which I was certain of before I clicked. Yet I fell for it for just a few minutes and rewarded the sneaky bot with my click.
I have always been a techy person. As a kid, I had computers in my house before most people did, and I went online years before most of my friends. I’m the one in my friend group who always has a new app recommendation, and I’m the one people turn to for support when they can’t figure something out on their phones or computers.
But as technology advances and becomes increasingly capable of natural language processing, automated content creation and deepfakes, we’re at the beginning of an era when separating truth from fiction is increasingly difficult, and nearing impossible.
As a digital marketer, it’s easy for me to answer the question, “How are those boots I just looked at following me around the internet?” or, creepier yet, “How does Facebook know that I was just discussing that beauty product with my friend?” While many people believe our phones are listening to us (and, who knows, maybe they are), there’s a simpler, more direct answer to those questions: data. Data is now often called a more valuable resource than oil.
The more data a company has on you, the more tailored it can make the content it delivers to you. When you get more relevant content, you’re more likely to engage; the more you engage, the more the platform earns from its advertisers, and the more money you spend with the advertisers who meet you there.
We’re consuming content at a staggering pace, and faster each year. The stats on data consumption and creation bear this out: every minute, Americans use over four million gigabytes of data, ranging from Google searches to biometrics to voice commands to tweets. And that data is stored, analyzed, aggregated and sold.
We’re living in a time when we can be identified by our data, with millions of data points that create a robust profile of exactly who we are and what we care about, who we know, where we go, what we do, anything really.
We’re met online by companies that have the technology not only to deliver exactly what we want to see or hear, but to create that content as well, even if it’s not true. As rapidly as the platforms try to catch up to “fake news,” the bots get more human-like and harder to discern.
When we think about division in our culture over nearly every issue, ranging from human rights to health to politics to religion, both sides find themselves wondering how on earth the other side could possibly believe what they believe.
What if we’re stuck in an echo chamber not of our own design, and we don’t even realize it? What if we’re being inundated with propaganda on both sides of the issue? What if truth is somewhere in the middle? If that’s the case, how would we know, and how would we begin to figure it out, without having a productive, honest dialogue based on primary sources and lived experiences?
My hope is that we can begin to build an awareness of how information is cultivated for us and delivered to us, and that in response we seek out civil discourse with those with whom we disagree. We read primary sources. We humble ourselves enough to recognize that our firmly held opinions might not be based in fact just because we read them online. And, above all else, we assume positive intent in those around us.