AI, Analysis, and Leadership Pipelines
A friend sent me Rob Copeland’s recent piece in The New York Times, “The Worst Part of a Wall Street Career May Be Coming to an End.” In it, Copeland writes that generative AI seems poised to “…supplant entire ranks of workers.” The workers in question? The junior staff at investment banks who build PowerPoint decks, populate Excel spreadsheets, and finesse “esoteric language on financial documents that may never be read by another soul.”
My Perspective
Before I get into Copeland’s article, I think it’s fair to say that a variation of this conversation is taking place in every industry; I’ve certainly heard it echoed in intelligence circles. The more I think about it, the more I see three dimensions at play:
The personnel costs of operating in the near term;
The robustness of the leadership pipeline in the mid to long term; and
The flexibility and resilience of the organization in the long term.
The “Endless Hours” of Developing Expertise
The assertion that AI is causing many, if not all, organizations to reconsider their staffing needs is nothing new. As a recovering political analyst who participates in a lot of conversations about AI and analysis, I found myself focusing on a later passage:
Those jobs most immediately at risk are those performed by analysts at the bottom rung of the investment bank business, who put in endless hours to learn the building blocks of corporate finance, including the intricacies of mergers, public offerings and bond deals.
‘The structure of these jobs have remained largely unchanged at least for a decade,’ said Julia Dhar, head of BCG’s Behavioral Science Lab and a consultant to major banks experimenting with A.I. The inevitable question, as she put it, is ‘do you need fewer analysts?’
Like I said, this conversation parallels some of the conversations I’ve heard taking place in intelligence circles, and I think it’s worth unpacking.
The “endless hours” junior staff spend learning the ins and outs of their job (even if the output of their labor has no discernible effect on business operations) are the price organizations in both the public and private sectors pay to develop the expertise and, ultimately, the sources of the expert insights that will give them competitive advantage 12, 36, 60, and 120 months down the road.
So long as AI is informed, in part or whole, by human thinking, the demand for expertise and novel insights will persist…though how we develop that expertise in the face of, and alongside, AI is an open question worthy of thoughtful discussion and debate.
Today’s generative AIs are not panaceas.
For all the promise we assign to generative AI, LLMs typically fail to meet the professional standards associated with knowledge work.
In intelligence circles, questions about how an LLM weighs and uses information remain significant impediments to wide-scale adoption and use:
What sources were used in an analysis?
Why were those sources, over all other sources, used?
How are those sources described?
How much confidence do we have in those sources?
What don’t we know about the issue we’re analyzing, and what risks do we associate with those information gaps?
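To make those standards concrete, here is a minimal sketch of the provenance record an analytic judgment might be expected to carry. The structure, field names, and Confidence scale are my illustration only, not any real tradecraft standard or LLM API:

```python
# A sketch of the provenance metadata an analytic judgment could carry.
# Everything here is illustrative; no real agency standard is implied.
from dataclasses import dataclass, field
from enum import Enum


class Confidence(Enum):
    LOW = "low"
    MODERATE = "moderate"
    HIGH = "high"


@dataclass
class Source:
    description: str          # how the source is characterized to the reader
    selection_rationale: str  # why this source was used over alternatives
    confidence: Confidence    # how much trust we place in it


@dataclass
class AnalyticJudgment:
    claim: str             # the analytic line itself
    sources: list[Source]  # what informed the claim
    information_gaps: list[str] = field(default_factory=list)  # what we don't know
    gap_risks: list[str] = field(default_factory=list)         # risks those gaps carry
```

Today’s off-the-shelf LLMs can generate the claim; reliably populating, and standing behind, the rest of the record is where adoption stalls.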
So, can generative AI be trained to do routine, mundane tasks as well as a junior employee? Sure. Probably. I think there is a fair amount of “irrational exuberance” at play, but I also think there’s plenty of room for informed suspension of disbelief.
Spoiler Alert: Junior Staff Aren’t Junior Forever
The implementation of AI likely represents a strategic shift in how an organization thinks about its mission, vision, operations, and future. If we move past the tactical issue of entry-level work, we start to touch on more strategic issues.
For example, unless the organization in question has parallel investments in vat-grown middle management, reducing the number of new analysts (any entry-level developmental job, really) might reduce the potential quality contained in the organization’s leadership pipeline. According to Statista, for example, the U.S. Army had 118 brigadier generals in 2022. The Army also had some 10,000 second lieutenants that same year. Over the course of 20+ years, the officer cadre is reduced through a promotion process from 10,000 to 118. The math is not wholly accurate for a variety of reasons but, if you let your eyes blur, the U.S. Army’s math boils down to “You need 10,000 new hires to get 100 potential Chiefs of Staff 20 years down the road.”
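To put rough numbers on that blur, here is a minimal back-of-the-envelope sketch in Python. The six-step pipeline and the uniform per-step attrition are my assumptions, purely for illustration; real promotion timelines are lumpier, and the 118 figure is a snapshot of generals on hand, not the survivors of a single cohort:

```python
# Back-of-the-envelope leadership pipeline model.
# Assumptions (mine, for illustration): six promotion steps from second
# lieutenant (O-1) to brigadier general (O-7), with the same survival
# rate at every step. Real attrition is anything but uniform.

ENTRY_COHORT = 10_000   # second lieutenants (Statista, 2022)
TARGET_LEADERS = 118    # brigadier generals (Statista, 2022)
PROMOTION_STEPS = 6     # O-1 -> O-2 -> ... -> O-7

overall_survival = TARGET_LEADERS / ENTRY_COHORT               # ~1.2%
per_step_survival = overall_survival ** (1 / PROMOTION_STEPS)  # ~47.7%

print(f"Overall survival: {overall_survival:.2%}")
print(f"Implied per-step survival: {per_step_survival:.2%}")

# Walk the cohort through the pipeline, one promotion step at a time.
cohort = float(ENTRY_COHORT)
for step in range(1, PROMOTION_STEPS + 1):
    cohort *= per_step_survival
    print(f"After promotion step {step}: ~{cohort:,.0f} officers remain")
```

Invert the arithmetic and the staffing question falls out: holding attrition constant, halving the entry cohort halves the pool of potential general officers two decades later.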
I doubt there are comparable numbers for most private-sector industries, but I think the question is valid: “How many new hires does an organization need to recruit, train, and develop to populate an effective, efficient, and innovative leadership pipeline?” If you reduce the number of people at the base of an organization, how does that impact organizational performance in 3, 5, and 10 years?
Reframing the Conversation
I think the more pertinent narratives around the implications of generative AI are:
How do entry-level developmental jobs change in the face of AIs that will become more sophisticated over time? The “endless hours” spent learning the foundational skills of a job mean that, even after an undergraduate or graduate education, new hires do not know enough to function at full performance. No surprise there: they’re new hires. However, the power of new hires is perspective: their newness and eagerness to prove their value mean they’re likely to explore novel ideas in ways that someone who has been working, and building domain expertise, for 5+ years might not. The question then becomes: at what point does novelty of thought or analytic creativity (i.e., crafting unique interpretations of information that expand decision-maker thinking) become a hallmark of a solid or exceptional analyst? If the mundane tasks can be offloaded to AI, how might organizations tap into new hires’ creativity and burgeoning expertise in productive and/or profitable ways?
What do we do with middle managers? First-line supervisors typically provide on-the-job training and coaching. More than thinning the ranks of new hires, I think there’s a really interesting conversation to be had about AI and middle management, particularly when the promotion culture includes an “up and out of the way” solution for managers who have been promoted beyond their ability but, for one reason or another, cannot be (or aren’t) fired. I do not know what the role of middle managers will be in organizations with mature AIs, but it’s a topic worth considering. What do middle managers bring to the table that will complement or augment enterprise AIs? When and how might we start developing those skills?
I think it is absolutely valid to ask how AI might make a company more efficient and effective, but that question needs to be asked up, down, and across business operations. Barring an AI winter, AI likely will continue to become more sophisticated, explainable, and trustworthy. Another way to think about AI is to acknowledge that every organization’s operations are, by definition, resource-constrained, and to start asking what opportunities might be created as resources are freed up. Every organization should be thinking about how AI might affect its operations, but that thinking should be reasonably comprehensive and strategic.