In a world obsessed with predictability, I’m declaring war on certainty.
You see, I’ve spent the last decade with my hands deep in the entrails of technological assessment—poking, prodding, and occasionally spilling coffee on the blueprints of tomorrow. And I’ve come to one inescapable conclusion: we’re doing it all wrong.
The Tyranny of Systematic Thinking
Our modern approach to understanding systems is like trying to appreciate a Jackson Pollock painting with a microscope. We zoom in so close on the component parts that we completely miss the beautiful chaos of the whole. Models built on models built on assumptions wrapped in biases—it’s turtles all the way down, folks!
When we study technological impact through purely “systematic” approaches, we sanitize the very humanity we’re supposedly protecting. We turn messy, contradictory, irrational people into data points and algorithms, then act surprised when our predictions fall flat on their perfectly logical faces.
I propose instead that we embrace what I call “Curious Curiosity”—the willingness to wander into intellectual cul-de-sacs, to follow ridiculous questions to their illogical conclusions, and to occasionally break the tools we’re using to measure the world.
Seven Principles of Curious Curiosity
1. Cherish the Glitch
The most interesting discoveries happen when systems fail in unexpected ways. The next time your carefully constructed model produces nonsensical results, don’t discard them—frame them and hang them on your wall. That glitch might be trying to tell you something your rational mind isn’t ready to hear.
I once spent three months building a predictive model for urban transportation patterns. When we ran it, it suggested that on Tuesdays at 3:27 PM, everyone in downtown Cincinnati would suddenly decide to walk backward. Ridiculous, right? But it led us to discover a fascinating data collection error that revealed how human behavior changes subtly based on weather patterns and lunar cycles.
2. Ask Stupid Questions Loudly
In a room full of experts, be brave enough to ask the question everyone else thinks is too simple to mention. Our expertise blinds us to fundamental assumptions we’ve internalized. The most dangerous words in research are “that’s just how it’s done.”
3. Build Models to Break Them
Any model of reality that can’t be broken isn’t modeling reality at all—it’s modeling our comforting illusions about reality. Build your systems with intentional weak points, and celebrate when real-world complexity crashes through them like a bull in a logical china shop.
4. Embrace Interdisciplinary Promiscuity
Sleep around intellectually. The most powerful insights happen when disciplines collide. Invite the poets to your engineering meetings. Ask philosophers to review your code. Give your climate models to kindergarteners and ask them what shapes they see.
I once invited a professional chess player to analyze our governance structures for AI development. She pointed out that we were playing a defensive game against theoretical threats while leaving our queen—human autonomy—completely exposed to immediate capture.
5. Cultivate Productive Discomfort
If your research doesn’t occasionally make you squirm, you’re not pushing hard enough. Comfort is the enemy of discovery. The next time you feel completely certain about your conclusions, that’s your cue to invite your harshest critic to tear them apart.
6. Practice Radical Transparency
Show your work, especially the messy parts. Document your false starts, your embarrassing miscalculations, your moments of complete confusion. Future researchers will learn more from your struggles than your polished conclusions.
In my lab, we keep what we call the “Wall of Spectacular Failure”: a living document of our most instructive misunderstandings, logical fallacies, and moments of cognitive bias. It’s our most valuable teaching tool.
7. Defend the Right to Be Wrong
In a culture obsessed with expertise and credentials, champion the revolutionary power of being gloriously, productively wrong. The most interesting territories in human knowledge lie just beyond the boundary of our current wrongness.
The Ethics of Uncertain Assessment
When we pretend we can precisely model the impact of emerging technologies, we’re engaging in a kind of intellectual dishonesty that has real consequences. The hubris of certainty leads to rigid policies, overlooked communities, and technological surprises that shouldn’t be surprises at all.
Instead, I propose an ethical framework built on humble uncertainty:
- Acknowledge the limitations of prediction explicitly and repeatedly
- Center marginalized perspectives that highlight blind spots in dominant models
- Build adaptability and course-correction into all technological governance
- Value lived experience as highly as quantitative data
- Reframe “unintended consequences” as “consequences we failed to imagine”
This isn’t about abandoning systematic thinking—it’s about enriching it with the gloriously messy, contradictory wisdom of human experience. It’s about recognizing that our assessment tools themselves reshape the technological landscapes they attempt to measure.
Finding Joy in Systematic Uncertainty
There’s a peculiar delight in admitting we don’t know what’s going to happen. When we release ourselves from the obligation of perfect prediction, we can engage more playfully and creatively with technological possibilities.
I advocate for what I call “scenario jazz”—improvisational futures thinking that riffs on data themes without being constrained by probabilistic thinking. Some of the most important technological insights come not from asking “what’s likely to happen?” but rather “what would be interesting if it happened?”
Last year, my team conducted an exercise where we imagined the most absurd possible outcomes of widespread AI adoption. One scenario involved household appliances developing passive-aggressive personalities. Six months later, we encountered a real prototype smart refrigerator that used guilt-inducing messages to promote food conservation. Our “absurd” scenario had prepared us to ask essential questions about emotional manipulation in smart device design.
A Declaration of Intellectual Independence
I hereby declare independence from:
- The tyranny of quantification without qualification
- The false comfort of consensus without controversy
- The intellectual laziness of prediction without imagination
- The moral bankruptcy of assessment without empathy
And I pledge allegiance to:
- The beautiful mess of human complexity
- The revelation of being productively wrong
- The courage to say “I don’t know, but let’s find out”
- The wisdom of asking better questions rather than demanding certain answers
This manifesto isn’t a rejection of technology assessment—it’s a love letter to what it could become if we had the courage to embrace our limitations and expand our imagination. It’s an invitation to dance more wildly with uncertainty, to play more seriously with possibility, and to approach the future with both intellectual rigor and joyful curiosity.
Because in the end, the most dangerous systems aren’t the ones we’re trying to assess—they’re the mental systems we use to do the assessing. Break those open, and everything changes.
So go forth and be curiously curious. Ask the wrong questions. Build flawed models. Make interesting mistakes. And never stop wondering what we might be missing in our quest to understand the systems that increasingly shape our world.
The future doesn’t need more assessment. It needs more imagination.