AI's Role in Modern Society
Reflecting on Fathom's The Ashby Workshops
I spent last week at The Ashby Workshops, which were hosted by Fathom. The premise of the workshops is rooted in Ashby’s Law of Requisite Variety, which holds that “the greater the complexity of a system, the greater the need for nuance and variety in its controls.” To that end, the Fathom team did a remarkable job of pulling in a diverse group of domain experts from across the political spectrum representing government, academia, and multiple industries to engage in some very sophisticated and nuanced conversations about artificial intelligence.
If you think this sounds like every other conference on artificial intelligence (AI) that you’ve been to, I think there are two reasons it wasn’t:
1. The conversation wasn’t dominated by “the usual suspects,” and
2. The Fathom team created time and space for conversations and connections among participants that might not otherwise occur.
Fathom’s Goal and My Experience
Fathom’s goal for the event was “to have diverse societal voices contribute their expertise and perspectives to the national conversation—and ultimately federal decision-making—about AI and its role in society.”
I left the week feeling that, as much as the event was about informing, expanding, and enriching the national conversation about AI, it was something more: it was a catalyst for thinking differently about the future (or the many possible converging futures) in which artificial intelligence expands (and/or encroaches) into our personal and professional lives.
Fathom’s Methodology
Like many conferences, there were expert speakers, panel discussions, small issue-focused breakout groups, and team exercises to start framing “What now?” and “What next?”
Again, this might sound like other conferences, but the pace and pacing (coupled with the diverse backgrounds of my fellow participants) set this event apart: I felt like the group became more closely knit as the event progressed. I left the event looking to continue engaging with, and hopefully working alongside, several of my fellow attendees.
Is the Law Ready for AI?
Spoiler alert: No. It’s not.
My alternate title for this section was “Tort, Liabilities, and Indemnities, Oh My!”
It’s good to be reminded periodically of the breadth and depth of one’s ignorance: I used this small-group workshop to spend two and a half hours reflecting on how little I know about the law.
As I listened to the conversation, though, it occurred to me that regulation in other large, complex, and highly consequential industries (e.g., the financial industry) is multi-tiered: there is law and regulation, there is the Securities and Exchange Commission, and there is an industry-funded self-regulatory organization, FINRA.
If you’re not familiar with FINRA, let me give you a quick sense of scale: as of 2023, FINRA had 4,200 employees, a budget of $1.4B, and issued $85.5M in fines to its members for harming their customers.
What if the major players decided that there needed to be a self-regulatory organization for AI to stave off a government-led “Manhattan Project for AI” that might encroach on their interests? What would that organization look like?
What if this organization analyzed anonymized versions of every query asked of major public and B2B AIs and every response given (complete with the accompanying metadata)? Or investigated allegations of harm that could result in fines?
What if its staff were composed not just of computer scientists and data scientists, but also (critically) of economists, analysts, psychologists, sociologists, ethicists, and practitioners from an array of industries such as healthcare, education, and real estate, with a mandate to police the AI industry for patterns of harm (e.g., UnitedHealthcare’s use of nH Predict) or instances of (illegal) algorithmic collusion (e.g., the Department of Justice’s lawsuit against RealPage and some of its clients; also “DOJ backing appeal of price-fixing lawsuit against Las Vegas hotel operators”)?
What if those fines were redirected back into academic research into AI? Or into public awareness programs (to include the development of curricula and classroom materials) explaining the opportunities (and risks) associated with AI?
(A second conceptual model, I realize, already exists: the Department of Defense’s Test and Evaluation Enterprise, which is complemented by the Institute for Defense Analyses’ Operational Evaluation Division. I am sure that every service has its own comparable test and evaluation structure.)
Again, this is very much the deep end of the pool for me, but the realization that analogous regulatory structures already exist struck me as useful.
My Takeaways…and Hopes
One week on, I still find myself thinking about the conversations I participated in and the connections I made at Ashby.
I am an AI optimist…with my skepticism flowing more from the business models shaping AI’s growth than from the technology itself. I often think back to Ezra Klein’s conversation with Ted Chiang, who remarked:
I tend to think that most fears about A.I. are best understood as fears about capitalism. And I think that this is actually true of most fears of technology, too. Most of our fears or anxieties about technology are best understood as fears or anxiety about how capitalism will use technology against us. And technology and capitalism have been so closely intertwined that it’s hard to distinguish the two.
While no one at The Ashby Workshops argued against AI or suggested halting its development, there seemed to be a pervasive sense that the pace and breadth of AI’s development should serve as a call to urgent and serious action: to date, that development has proceeded largely unimpeded by known failures (see Stanford University’s “Exploring the Impact of AI on Black Americans”) and shortcomings (see Scientific American’s “AI Chatbots Will Never Stop Hallucinating”), and absent the legal, regulatory, ethical, and safety frameworks present in most other industries. AI’s present and potential impact is too significant to leave unchecked.

