The Paper Clip Problem takes its name from a well-known thought experiment in AI ethics, proposed by philosopher Nick Bostrom. In it, a superintelligent AI is programmed with a seemingly harmless goal—making paper clips. But without proper constraints, it might pursue that objective with such single-minded intensity that it consumes all available resources, including human life, to achieve it.
In this newsletter, I use the “paper clip problem” as a jumping-off point to explore how artificial intelligence—when embedded in institutions like businesses and universities—can amplify both promise and peril. Drawing on my work as a law professor specializing in corporate governance and higher education, I examine the legal, organizational, and policy decisions that shape how AI is developed, deployed, and resisted. My aim is to shed light on how small, often technical choices can scale into major societal shifts—and to help readers navigate those shifts with clarity, skepticism, and curiosity.
I’m not an AI engineer. But I’ve spent my career thinking about how rules, incentives, and institutional design influence power and decision-making. That’s the lens I bring to these conversations. Whether you're trying to adopt AI in your workplace, regulate it responsibly, or simply understand it better, I hope this space becomes a useful, thoughtful companion.
(Oh, and all the images on this Substack were created using ChatGPT with the GPT-4o model. I plan to add more every so often by using the same prompts I started with, just to see how they evolve. For the image above, I asked for a “painting of the paper clip problem in the style of Caravaggio.”)