In Ray Bradbury’s 1953 short story, “The Flying Machine,” a Chinese emperor in 400 AD learns that one of his subjects has built a contraption that enables him to fly. Once the emperor realizes no one else knows of this feat, he sentences the man to death and destroys his creation.
In the inventor’s last moments, the emperor explains his decision: “There are times when one must lose a little beauty if one is to keep what little beauty one already has.” This flying device might fall into the hands of an evil man: “who is to say that someday just such a man in just such an apparatus of paper and reed might not fly in the sky and drop huge stones upon the Great Wall of China?”
We may be tempted to laugh at the absurdity of the emperor’s stance. But this old story can teach us a thing or two about navigating new technology. ChatGPT is the most recent example, stirring up worries and fears that rival the emperor’s. Love it or hate it, the AI-powered chatbot is a disruptive technology. A Deloitte survey revealed that 94% of leaders see AI as critical to success in the next five years. But large language models like ChatGPT raise the stakes, and the risks, considerably. They have the power to outpace us at the things we do well, and to replace us at the things we don’t do well enough. The reception has been mixed, ranging from enthusiasm to amusement to outright fear. Schools and colleges have prohibited their students from using it to write papers, and recently a growing list of large companies, including JPMorgan, Accenture, and Amazon, have banned its use at work.
No question, there are significant regulatory and policy implications raised by the use of a tool like ChatGPT. It will take time and intense scrutiny to sort these out. But if, like the emperor in 400 AD, you kill the technology in your organization, you will never know where the pain points are, or even more importantly, the opportunities. Of course you need guardrails: policies, guidelines, and practices. But what you need even more than protection measures is a healthy culture of experimentation in your company.
If you have this kind of culture, you don’t need to ban ChatGPT (or whatever comes next) because you have what you need to support the experimentation and exploration that disruptive technology requires. Here are three things to look for to help you decide whether your culture is ready to grasp the opportunity of ChatGPT, rather than ban the threat.
- You have high-quality connections among your teammates.
Experimentation is not a solo sport. It requires colleagues who work well together. Strong relationships support innovation by enabling the cross-fertilization of new concepts. This means tapping into each other’s expertise and experience, building on each other’s ideas, and considering each other’s perspectives. Cultural diversity is a plus here: new ideas develop where different experiences collide. A team approach to experimentation also provides the checks and balances of multiple mental models, protecting a new idea from being overly shaped by any one individual’s point of view. Hallmarks of these high-quality connections include demonstrable trust and respect, strong empathy, and teammates who make room for multiple perspectives.
- Your employees regularly question their assumptions.
The most effective teams avoid we’ve-always-done-it-this-way thinking. Challenging assumptions means being willing to let go of firmly held beliefs long enough to try different ideas and consider discordant points of view. This isn’t easy; human beings often have trouble questioning their own expertise. In fact, research suggests that the more expert we become, the more cognitively entrenched we grow, losing our flexibility to adapt to new ideas. We also tend toward selective attention: when we focus on too narrow a task or challenge, we significantly reduce our peripheral vision. Radically new technologies require an open mindset and a willingness to adopt a scientific approach: forming hypotheses while being willing to be wrong, and asking humble questions that yield unexpected answers. Your best experimenters are often found at all levels of your organization. Clear signs of this skill are healthy skepticism, a passion for evidence, and a resistance to falling in love with an idea.
- Your colleagues have a strong sense of psychological safety.
Perhaps the most important evidence of a culture of experimentation is a palpable sense of psychological safety: the shared belief that a team is safe for taking risks and speaking up, even when others disagree. When wading into unknown waters, psychological safety lays the groundwork for learning and ensures the freedom to ask hard questions and make mistakes. It’s about acknowledging individual fallibility and honoring collective accountability. No question is too challenging or too obstructive when it’s born of a genuine desire to learn. Where large language models are concerned, this is a non-negotiable attribute. AI is notorious for its embedded inequities and outright fabrications. Psychological safety and a strong moral compass will help ensure your team has the tools required to surface the threat of dangerous algorithms and blow the whistle when something is amiss.
Bradbury’s myopic emperor reminds us that, in the face of technological advancement, we’ve been here before, and the natural human response is fear. We’re constantly fighting a timeless resistance to new ideas that imperil our status quo. The doomed flying machine encourages us to reframe our response as one of cautious curiosity: to explore a future that offers a beauty we cannot yet foresee, while protecting against consequences we cannot yet predict.
To move forward at the speed of change, we don’t have the luxury of waiting to see how the wind will blow. But building a strong culture of experimentation and innovation can help us weather uncertainty, allowing us to take to the skies in awe and wonder instead of killing the flying machine.
First published on Forbes.com.