|
|
|
|
|
|
“No man is a failure who has friends”
—Clarence (It’s a Wonderful Life)
|
|
|
|
|
|
|
If you enjoy or get value from The Interesting Times, I'd really appreciate it if you would support it by forwarding it to a friend or sharing it wherever you typically share this sort of thing (Twitter, LinkedIn, Slack groups, etc.)
|
|
|
|
|
Happy New Year!
This edition is podcast heavy as most of my consumption time was at the gym or in the car this month.
|
|
|
Hidden Forces
McGilchrist is best known for his book The Master and His Emissary, which looks at brain hemispheric differences as the source of much of what feels off about the world today.
To briefly summarize the thesis - the left hemisphere sees discrete pieces in detail while the right hemisphere sees connections and the big picture. Both perspectives are necessary, but McGilchrist sees our civilization as overfocusing on the left hemisphere's narrow, mechanistic view at the expense of the right's capacity for context, nuance, and meaning.
In this podcast, he dives into some examples that resonated with me.
When McGilchrist trained, physicians formulated probable diagnoses before touching the patient—the patient's history and the physician's relationship with them were the most important inputs for the diagnosis.
Today, the first step is often putting the patient in a scanner or doing bloodwork. Instead of experiencing another human being paying attention to their suffering, patients are treated more like cars - faulty vehicles going in for service that can be handled as individual parts rather than complex wholes.
Feeling that your doctor cares about you obviously shapes how you experience the treatment and receive their recommendations. And how a patient talks about an issue likely carries meaning and significance beyond what a transcript would reveal.
Another way McGilchrist sees this in our society is the elevation of ‘science’ to a moral authority. Science cannot provide meaning, purpose, or value because good science starts by saying “we're not going to suppose a meaning, a purpose, or a value, and now we're going to see what we can find out about the mechanism.”
That's not a condemnation of mechanics or scientists. We need mechanics and scientists. The problem is we've elevated technique into authority where "Science says" ends discussions, leading to an unexamined philosophy of scientific reductivism as the water we swim in.
I'll leave you with this view on self-actualization that resonated with me, and something I'm thinking about entering the New Year:
“There is your self-actualization as a scientist, as a doctor, as a teacher, as a lawyer, as a policeman, as a whatever. These are things that you feel are callings and you really like to do them. They are not done for wealth, or at least they shouldn't be. I'm afraid nowadays, a lot of them are.
But it's not about the accumulation of capital. It's not about a marketplace. It's about the fabric of a society. The whole idea that a doctor provides a commodity in a market is an appalling idea to me. I mean, that's part of the breakdown of the idea of a cohesive society in which we are held together by duties, by obligations which come from affect, from knowing that there are things that are important here and that you can, up to a point, trust somebody. By emphasizing only one thing, which is the bottom line, trust breaks down. And that is the source of so much that's gone wrong.”
|
|
|
Art of Accomplishment
This episode framed a useful distinction: choices happen constantly and unconsciously, while decisions emerge when fear enters the equation. You choose what to eat for lunch, when to brush your teeth, and what shirt to wear without much thought or fear.
The moment you're weighing options with significant mental energy, you're already operating from some fear. It could be fear of particular consequences, of being wrong, or of an emotional experience you're trying to avoid.
The practical intervention is elegant: do the next most obvious thing. Don't try to make the big decision. If you're stuck asking "should I invest?" there's some unaddressed fear. What is it? Maybe you need to call references. Maybe you need to test the product. Maybe you’re afraid of people judging you for making a bad investment. Maybe you're investing an amount you can't afford to lose. Keep asking what the next obvious step is and the big decision often evaporates.
For some decisions, the obvious thing often comes down to having some set of principles. "I don't work with assholes" isn't just a preference—it's a decision-compression algorithm. When enough decisions flow through consistent principles, you end up in a reality shaped by those principles rather than by the fears you were trying to avoid.
The Joe Walker Podcast
Fukuyama’s view in the 1990s was that advances in biotechnology were going to transform the biological basis for the liberal democratic order and he believes we may now be close to seeing that emerge.
Human rights are grounded in what Fukuyama calls "Factor X"—a bundle of uniquely human traits including consciousness, emotional depth, and moral agency. The trouble is that this bundle isn't binary. Would we have granted full rights to Neanderthals? They likely felt pain, experienced emotions, and mourned their dead. But would we let them vote?
We don't let seven-year-olds vote because we feel their mental capabilities haven't sufficiently developed to make good decisions. A proto-human race that never develops past that stage might warrant protection from cruelty without warranting political participation.
This creates a disturbing possibility by present day standards: genetic engineering could produce multiple tiers of natural rights within a single society. Huxley saw this in Brave New World with his Alphas, Betas, and Gammas.
The likely path isn't engineering a slave race—that's morally abhorrent in a way that would make it politically impossible. A plausible path though is elites gradually separating themselves genetically as well as socially. Historical class differences were already partly biological; medieval aristocrats were literally taller and more cognitively developed than malnourished peasants. Biotechnology (e.g. CRISPR) could make such divergences permanent and heritable.
Our current emotional and cognitive architecture results from hundreds of thousands of years of evolutionary pressure—a winning combination for species survival. Deliberately manipulating that system will produce effects no one predicted. Intelligence seems like the obvious target for enhancement, but boosting IQ might alter risk tolerance, empathy, or compliance in ways that reshape political possibilities entirely.
Fukuyama is skeptical of the impact of AI on politics. Political intelligence differs fundamentally from mathematical intelligence because it's entirely contextual. What works in China fails in India; what works in one Indian state fails in another. The best political leaders possess lived experiences that allow them to empathize and recognize pitfalls in how people actually think and act. For a computer to extract proper weightings from this contextual mess and synthesize workable solutions seems extraordinarily difficult.
Tyler Cowen has an alternate take that I find compelling: given that the largest and most popular AI models are built in the West, they subtly and intrinsically reflect Western values in a way that will perhaps shape anyone who uses them.
One of the most interesting observations he made was how the difference between Asian and Western cultures could lead to Asia using gene editing technology much sooner and more aggressively.
Asian cultures, lacking transcendental religious traditions like Christianity, view humans and non-humans as more of a continuum rather than sharply distinct categories. Daoism and Shinto hold that spirits inhabit all material objects—desks, temples, computer chips. This produces both more respect for the non-human world and fewer inhibitions around biotechnology. It's probably not coincidental that China produced the three CRISPR babies born so far.
|
|
|
|
|
As always, if you're enjoying The Interesting Times, I'd love it if you shared it with a friend (or three). You can send them here to sign up. I try to make it one of the best emails you get every week and I'm always open to feedback on how to better do that.
If you'd like to see everything I'm reading, you can follow me on Twitter or LinkedIn for articles and podcasts. I'm on Goodreads for books. Finally, if you read anything interesting this week, please hit reply and send it over!
|
|
|
|
|
The Interesting Times is a short note to help you better invest your time and money in an uncertain world as well as a digest of the most interesting things I find on the internet, centered around antifragility, complex systems, investing, technology, and decision making. Past editions are available here.
|
|
|
|
|
Here are a few more things you might find interesting:
Interesting Essays: Read my best, free essays on topics like bitcoin, investing, decision making and marketing.
Consulting & Advising: Are you looking for help with making decisions around scaling your company from $500k to $5 million? I’ve been working with authors, entrepreneurs, and startups for half a decade to help them get more out of their businesses.
Internet Business Toolkit: An exhaustive list of all the online tools I use to be more productive.
|
|
|
|
|