Following our second gathering, we're excited to announce our third AI Meetup at the Ahmanson Lab on March 18th, 2-3pm. 

For tomorrow's meeting, we're going to revisit a question that emerged in our first meeting:

Do you believe that AI and human progress are the same thing? 

"In the culture we live in, technological innovation does not need to be justified, does not need to be explained. It is an end in itself because most of us believe that technological innovation and human progress are exactly the same thing." - Neil Postman

As we embrace emerging technologies, we risk overlooking Postman's wisdom. He offered six essential questions for evaluating new technologies, questions that remain highly relevant as we consider AI:

1. What problem does this technology claim to solve?

Many algorithmic "solutions" exacerbate the very problems they claim to solve.

Take recommendation algorithms. They promise to ease "information overload" by filtering content, yet they often worsen the problem. As platforms like Facebook (2004) and the iPhone (2007) took hold, these algorithms came to optimize for engagement, not enlightenment, turning information abundance into "information glut."

If efficiency becomes our only aim, it's difficult to resist innovation. But we should be asking not only "Can we do this?" but "Ought we to do this?"

2. Whose problem is it?

Marshall McLuhan noted that technology extends human capabilities while simultaneously “amputating” others. The key question: Whose capabilities are enhanced, and whose reality is diminished?

The framing of AI as a broad societal good often masks the ways in which it serves specific economic interests.

3. What new problems will be created because we have solved this problem?

Smartphones solved disconnection by making us perpetually available, but in doing so they fragmented discourse and attention.

When evaluating “efficiency,” we must ask: Are these ‘inefficient frictions’ actually preserving our social fabric?

4. Which people and institutions will be most harmed by this solution?

An important case study: generative AI and other technologies used in education raise critical questions. When students rely on AI instead of engaging in the cognitive struggle of reading, writing, problem-solving, or collaborating, they lose the intellectual resilience those struggles build.

We repeatedly see technology positioned as the solution to challenges, but what is the risk of outsourcing many of the cognitive processes and struggles that define deep learning? 

5. What changes in language are being promoted?

Consider how profoundly our understanding of community has transformed in the social media era (2003 to today). Most people who talk about joining a community mean joining people who share their interests. This directly contradicts the traditional meaning, in which a community consisted of people who don't necessarily have the same interests but who must negotiate and accommodate their differences for the sake of social cohesion. This inversion matters profoundly.

Technological shifts are restructuring what human relationships are, what communication is, and how people know what they know.

6. What shifts in economic and political power are likely to result?

New technologies always create winners and losers. When social media companies and other technologists seize control over information, they reshape societal narratives while accumulating immense power with minimal accountability.


References

Center for Humane Technology. (2025, March 6). The man who predicted the downfall of thinking [Audio podcast episode]. In Your Undivided Attention. https://podcasts.apple.com/us/podcast/the-man-who-predicted-the-downfall-of-thinking/id1460030305?i=1000698059945

McLuhan, M. (1964). Understanding media: The extensions of man. McGraw-Hill.

Postman, N. (2005). Amusing ourselves to death: Public discourse in the age of show business. Penguin.

Postman, N. (1993, October 11). Informing ourselves to death. Speech presented at the Meeting of the German Informatics Society, Stuttgart, Germany.