Huberman & Harris

Mindfulness & well-being in a digital age.

Howdy from Durham,

Welcome to the 2 new subscribers from last week.

Andrew Huberman and Tristan Harris are two of the people I look up to the most.

This week, we're investigating the intersection of their work and what we can learn from it.

Please read this article in its entirety. There’s a twist at the end.

Meet Huberman & Harris.

Andrew Huberman and Tristan Harris are both renowned figures in the field of technology and its impact on society.

While they come from different backgrounds, their work is centered around a common theme of examining the impact of technology on our behavior and well-being.

Andrew Huberman - a neuroscientist and professor at Stanford University. His research focuses on the neural mechanisms underlying vision, including how the brain processes information from the eyes to form images. He’s a big advocate of the benefits of mindfulness and meditation, and he has studied the ways in which these practices can help improve our brain function and well-being.

Tristan Harris - former design ethicist at Google and the co-founder of the Center for Humane Technology. He is known for his work on the psychological effects of technology and the ways in which it can be designed to manipulate our behavior. Harris is a strong advocate for the importance of digital well-being and has called for more responsible design practices in the tech industry.

Huberman (left) and Harris (right).

The intersection of their work.

Combining the work of Huberman and Harris, we can see how technology can have both positive and negative effects on our behavior and well-being.

On the one hand, technology has revolutionized the way we live and work, providing us with access to a wealth of information and opportunities.

On the other hand, technology can also be designed to manipulate our behavior, causing us to become addicted to our devices and the content we consume.

Huberman's work on mindfulness and meditation provides us with a way to counter the negative effects of technology. By practicing mindfulness and meditation, we can train our brains to be more present and focused, and less susceptible to the distractions and temptations of our devices.

Harris's work, on the other hand, highlights the importance of responsible design practices in the tech industry, calling for designers to prioritize our well-being and happiness over their bottom line.

In conclusion, the work of Andrew Huberman and Tristan Harris provides us with a deeper understanding of the impact of technology on our lives.

By combining their insights, we can see the importance of both mindfulness and responsible design practices in helping us create a more balanced and fulfilling relationship with technology.

Andrew Huberman publishes neuroscience research out of the Huberman Lab within Stanford University while Tristan Harris advocates for ethical tech design as the Executive Director of the Center for Humane Technology.

Here’s the twist: I didn’t write this article. AI did.

I typed the following prompt into ChatGPT, an AI text generator: “Write an article that combines Andrew Huberman and Tristan Harris' work.”

I then chopped up the AI output into my normal newsletter format (adjusted spacing, added headings and images) so that you wouldn’t know the difference.

That’s my point. You wouldn’t know the difference.

Outside of the output saying “on the other hand” twice within a three-paragraph span and starting two consecutive paragraphs with the word “by,” the piece was pretty well-written.

Consider this in the context of Huberman and Harris’ work. They’re both advocating for the ethical use of emerging technology.

Pros: with AI, you’ve got an assistant by your side that helps you push past creative barriers, provides new perspectives that you hadn’t originally considered, and leads to the generation of new ideas.

Cons: people’s ability to produce deceptive content has increased. I vetted the piece above for accuracy, but consider this - what if someone used ChatGPT to amplify their ability to produce misleading information? What if someone used an AI image generator like DALL·E 2 or an AI video generator to produce deepfakes (fake media that shows someone doing or saying something that they didn’t)?

What’s being done: OpenAI (creator of ChatGPT, the model I used to generate this article) released a tool yesterday that can determine whether a piece of text was generated by AI (it currently has a false positive rate of 9%, so it still needs work).

My hypothesis: we’ll see a rise in verification tools (things that authenticate the origin of text, images, and videos). I think the future state of this will be some sort of “blue check mark” (like those you see on social media platforms) for all forms of digital media. This would likely be implemented using blockchain technology (unalterable records of a work’s history).
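To make that hypothesis concrete, here’s a minimal sketch of the first half of such a verification tool: fingerprinting a media file with a cryptographic hash. This is my own illustrative example, not an existing product - the `fingerprint` function is hypothetical, and the ledger it would write to is assumed, not shown. The idea is that a creator records the digest at publication time, and anyone can later re-hash the file to confirm it hasn’t been altered.

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a media file's bytes.

    Hypothetical first step of a verification tool: the digest would
    be recorded on an unalterable ledger when the work is published,
    so anyone can later re-hash the file and compare.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large video files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```

Even a one-byte edit to the file produces a completely different digest, which is what makes this kind of fingerprint useful for spotting tampered media.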

Don’t get me wrong. AI can be an incredible tool. The piece above was accurate, informative, and could’ve served as a great first draft to work off of.

This article was simply a reminder that, as we develop more powerful technology that can be used for good, we have a shared responsibility to consider how these same tools can be used by bad actors.

My interaction with ChatGPT which created this article.

What I’m paying attention to:

David Bowie was onto something back in ‘99

“Esse quam videri”

GIF of the week:

DJ Jazzy Jeff & Jalen Hurts are Super Bowl-bound. Fly Eagles Fly

Thanks for reading

What are your thoughts on the emergence of AI tools?

Reply and let me know!

Josh

Want more Build content? Check out the links below
