Woo, Quantum Storytelling, Time Crystals and Misallocation
A clever friend of mine who still works at perhaps the most visible quantum computing industrial effort once described his paper-writing as “quantum storytelling”. He meant, somewhat cynically, that what he needed to do was tell stories with the science in order to gain profile. Neither he nor I ever really enjoyed the vain self-promotion that passes for most research papers, but we found ourselves stuck in a system filled with poorly informed dilettantes, with no real choice but to hype. It’s an inefficient equilibrium in which you can’t do much but play along. Then again, it’s impossible for everyone to be an expert at everything, and it’s good that people get excited about things they don’t have time to understand. In a perfect world it would simply be hard to abuse that curious enthusiasm. Let’s call the abuse of curious enthusiasm “woo”. It’s a perversion of the human reward system, which releases more endorphins in anticipation of a reward than after an achievement. This post is about my ideas for avoiding woo.
To be honest I’m writing in response to a headline: “Why Google’s time crystals might be the most important development of our time”. Which is complete and total bulls**t hype. Literally every author of the paper had full faith it could be written long before it was, because the work is straightforward. It was probably sitting in a years-long queue of papers to be released to keep supporting the Google quantum computing effort, and that’s fine. The paper isn’t really about new ideas; it’s about showing they could easily implement something. Meanwhile, they published an actually substantial paper which struggled its way into PRX; the difference is that the latter paper actually tries to solve problems.
Why woo abuse is bad
Resource allocation in the face of uncertain risk is perhaps the most important thing a society does. In a Malthusian attention economy where the loudest hype wins, even earnest developers of new ideas must devote more time to hype than to doing things. If the fathers of AI had spent their time on hype, I doubt we’d have PyTorch. Woo thrives on a suspension of gravity, where the dirty details of reality can never stop a new technology or idea from changing your life. This kind of thinking disincentivizes the unglamorous work of actually solving problems, which is all about surmounting tedious barriers. Anyone who has ever been through a Y Combinator interview round will understand instantly what I mean ;P. We live in a world democratized by technology, where large groups of people can have outsized impact, but contagious diseases of thought like woo can likewise do outsized harm. Woo leads to non-sober thinking, excessive risk-taking, and so on.
Why we are weak to woo
Despite the way it feels when we wake up, people are not isolated evolutionary units. There’s a species-level, social benefit to individuals taking on large, stupid risks. Evolution probably even puts a cost on thinking versus just throwing a body at a problem, and the cost of thinking is probably higher, in units of lives, than we’re comfortable admitting. Other people can quickly be born to eat the fruit that grows, should a ship carry away hundreds on some ill-fated expedition. Unfortunately, this reward system was probably tuned by evolution for exploring the savannah, more than for making resource allocation decisions in a modern economy.
How to identify woo-risk
It’s not even necessary for other people to woo you: you can woo yourself. I regularly work with new quants at my job, and I watch each one succumb to self-woo the first time they develop a new trading strategy. The pattern is this:
- Person uses technology (which could be anything the human mind doesn’t do naturally, so confusion is possible) to develop a trading signal
- Person measures success of the signal in units of reward (money)
- By a random fluctuation, bug, etc., the person hits on a version with high promised risk-reward (the sketch after this list shows how easily pure noise produces exactly this).
- Evolution takes over, shutting down the critical-thinking faculties of the individual. They usually want to drink their own Kool-Aid immediately.
- I soberly explain that if their woo were correct, they would be astronomically smarter than legions of Ph.D.s, and that the consequences of what they are projecting are so unlikely they are difficult to imagine.
- If they are smart, they identify their bug; if they are not, they may lose money and quit. The worst possible outcome is that they randomly make money and never learn.
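To make the “random fluctuation” step concrete, here is a minimal sketch in Python (my own illustration, not anyone’s actual desk code; every number here is made up): backtest enough pure-noise signals and one of them will always promise a spectacular risk-reward, which then evaporates on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_candidates = 252, 1000

# Each "candidate strategy" is pure i.i.d. daily noise: zero true edge.
returns = rng.normal(loc=0.0, scale=0.01, size=(n_candidates, n_days))

# Annualized in-sample Sharpe ratio of each candidate.
sharpes = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(252)

best = np.argmax(sharpes)
print(f"best in-sample Sharpe of {n_candidates} noise strategies: {sharpes[best]:.2f}")

# Fresh data for the chosen strategy: there was no persistent edge to carry
# over, so the out-of-sample Sharpe is just another draw centered on zero.
oos = rng.normal(loc=0.0, scale=0.01, size=n_days)
print(f"same strategy out of sample: {oos.mean() / oos.std() * np.sqrt(252):.2f}")
```

With a thousand candidates, the winner typically shows an annualized Sharpe around 3 in-sample despite having zero true edge; that fluke is exactly the reward signal that evolution then takes over and defends.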
Watching this process play out is an incredible lesson; really, I think everyone should go through it. After seeing it several times, you can clearly pick out the recurring features of woo:
- A person under the sway of woo thinks more about their reward than their thought process.
- A person under the sway of woo invokes a technology or oracle of some kind to solve problems without a detailed solution.
- A person under the sway of woo tries to stimulate woo in other people even though this would imply sharing the reward. (They likely do this to avoid the other people dragging them back to reality.)
- A person under the sway of woo does not attempt to estimate the risk or cost of their enterprise.
- A person under the sway of woo indulges in self-legendary thinking and starts folding the woo into their identity, e.g. “Woo has chosen me.”
- A person under the sway of woo dismisses alternative investments of time and energy, even when those come with far greater certainty of reward.
When a single person begins a quixotic woo quest, it’s probably not even a bad thing. When thousands of people do so without understanding the risks, it’s a disaster.