The Sheep in our Head

Nov 7, 2023

What happens when technology makes us follow blindly?

Photo credit: Sam Carter, Unsplash

By Adriana Nugter & Shannon Mullen O’Keefe

Introduction

From the earliest days of our lives, we learn through communication. Children learn and imitate behaviors by watching and listening to others. They look to their parents, caregivers, siblings and others to learn how to do things. Later in life it might be a friend or a colleague. Our minds are wired this way; it seems like a sort of social shortcut, doesn’t it? We learn about the behavioral choices of others and also about the consequences of those behaviors.

Influenced also by reinforcement, we model the behaviors we think will get us through life. By imitating what others are doing, we don’t have to do all the trial and error ourselves. And if we act like others, it usually helps us belong. Over time, the modeled behaviors we adopt become automated and are applied ‘empty-headed’.

Neuro-linguistic programming, the psychological approach of analyzing the strategies used by successful individuals and applying them to reach a personal goal, is an adult version of such childhood learning. Copying those who are successful, we no longer need to ask questions; we simply ‘apply’, unless something unexpected makes us re-think. It is a rather comfortable place to be. Many other things in our lives already demand attention. So thank God, a lot of essential stuff ‘just happens’, mindlessly.

But this is the thing.

Mindlessness can lead to, well, mindless behavior. Remember that rhetorical question? If your friends jumped off a cliff, would you too? It’s the classic thing a parent might say to an adolescent, to test their willpower to resist the social pressures of that age. We know that the humans around us have a large social influence on our lives. And so we have checks in place, like that parent who checks in on our willpower.

But, what about technology?

Technology has a similar way of getting into our habits. We install it, we apply it, in the same way we learned how to eat with a knife and fork. Or chopsticks for that matter. We no longer think about it. So if Google Maps told you to drive off a cliff, would you?

We are always looking for guidance to tell us what to do. And in the same way we trust the social cues we get from our caregivers, parents, peers and social networks, we mindlessly follow the cues technology offers up, too. This is even more the case when we are guided by humanized technologies, like chatbots or voice-controlled virtual assistants.

Take your car.

Consider the navigation system: it is easy to follow and (mostly) reliable. We are used to plugging in the address we’re headed for and hitting the road. We no longer worry about that big old map, the pocket guidebook, about taking notes in a notebook, or about having a human co-pilot in the car…we just “plug and play,” for the most part.

A navigation system makes it very easy to avoid the risk of driving in the wrong direction. It also simplifies our lives by eliminating the need to know how to get somewhere. Are we passing by Cambridge on our way to London? There is no longer any need to know. It is all about arriving in London.

But, here is the thing… Remember that question? If your friends all jump off a cliff, will you, too?

Could we become overly dependent on, overly trusting of, overly reliant on technology?

Could mindlessness sneak up on us? Enough to navigate us over the cliff?

Consider the North Carolina man who died on a rainy evening after his daughter’s birthday party, when he followed GPS instructions onto a defunct bridge. This sad case is a worst-case scenario, but one worth mentioning, because it is so easy to simply rely on the technologies we use. While we might rarely end up in the water, the thing is, we realize the error only once we get there.

Even though, all along the way, there were (road) signs for us to notice.

What do we give up when we follow?

So, what do we give up when we follow? There is some indication that we might be giving up quite a lot. There is evidence that what we do, and how we act, matters to our brains. A recent article about the impact of ChatGPT technology on literacy points out that “Thanks to modern neuroscience, we know the brain is ‘plastic,’ meaning it is capable of reorganizing its structure or laying down new pathways, depending upon our physical or mental activities.”

For example, “London cabbies with ‘the knowledge’ of thousands of routes, streets, and landmarks have larger posterior hippocampus (the area responsible for physical navigation) than control groups.”

Thus, keeping an eye on the street signs and routes actually matters to our brains. There is more. The article points out how the process of writing matters in and of itself: “The literate brain empowers us to use writing as a canvas for witnessing our thoughts,” quoting Flannery O’Connor, who said, “I write because I don’t know what I think until I read what I say.” The article suggests that it is not only the brain itself that may be affected as we lean away from using it to find our way. There may be something more to paying attention to the signs while we’re driving, or even fumbling around with that map, or talking to a human co-pilot in the passenger seat. If paper is a writer’s canvas for witnessing thoughts, what else might a journey represent for a human?

What if we needed to think more about our surroundings, about finding our way and paying attention to the signs?

Paying attention matters for another reason

The new Lincoln Navigator commercial features Serena Williams letting go of the steering wheel to allow the vehicle’s driverless capability to kick in. It runs a parallel track in which a man teaching a child to swim lets go of her hand as she learns to float on her own. This is an interesting juxtaposition: in one case, the mentor lets go and the child learns to swim, gaining even more capability on her own. In the case of the car, there is a letting go too, but it is not the human who gains more capability; it is the machine.

In both cases there seems to be an implied trust: the child trusts the man as he lets her hand go; she trusts his confidence in her newly gained capability.

But, can we really ever trust a machine?

Consider recent BCG (Boston Consulting Group) research investigating the added value that generative AI (in this case GPT-4) can provide in the workplace. It found, among other things, that in the area of creative ideation, a competence that sits firmly within generative AI’s current ‘frontier’ of capability, 90% of participants improved their performance. This was the case even though participants mistrusted the use of GPT-4 in this area. However, “When our participants used the technology for business problem solving, a capability outside this frontier, they performed 23% worse than those doing the task without GPT-4. And even participants who were warned about the possibility of wrong answers from the tool did not challenge its output.”

This case example demonstrates that using technology ‘without thinking’ is not, in all circumstances, so straightforward. It also brings to light how easy it is to trust the technology. Even though the participants in this study had been warned not to trust it, they still did.

And turned up wrong answers because of it.

Conclusion

So, what do we give up when we hand what our minds have always done over to technology? Will we still think for ourselves? Or will we just blindly follow?

For now, the liability for the choices we make rests with us, whether we outsource them to technology or not. The business analyst who trusts the machine to give them the right output will be the only one in the room who has to own the machine’s mistake; the machine itself probably won’t worry too much about it. And most likely, in the case of a car crash, we will not be able to blame the navigation system, as the legal responsibility is considered to remain with us. How tenable is this going forward, when technologies are designed in such a way that we are led to believe that the sheep in us can take over? That we can trust? And under what circumstances can such reliance be safely justified?

The real question perhaps is about our own agency.

Do we want to keep it? Do we need to keep it? When?

Maybe we ought not to always trust the machine. Maybe not everything should be implemented. Maybe we shouldn’t do away with agency so easily.

So mindlessly.

Perhaps we need to talk about when and when not to lose our mind.

Written by Shannon Mullen O'Keefe

A lover of wisdom, dedicated to imagining what we can build and achieve together. Chief Curator |The Museum of Ideas https://www.themuseumofideas.com/
