There's an AI 'brain drain' in academia


As one might expect, lots of students who graduate with a doctorate in an AI-related field end up joining an AI company, whether a startup or Big Tech giant.

According to Stanford’s 2021 Artificial Intelligence Index Report, the number of new AI PhD graduates in North America entering the AI industry post-graduation grew from 44.4% in 2010 to around 48% in 2019. By contrast, the share of new AI PhDs entering academia dropped from 42.1% in 2010 to 23.7% in 2019.

Private industry’s willingness to pay top dollar for AI talent is likely a contributing factor.

Job listings from the biggest AI ventures, like OpenAI and Anthropic, advertise eye-popping salaries ranging from $700,000 to $900,000 for new researchers, per data from salary negotiation service Rora. Google has reportedly gone so far as to offer large grants of restricted stock to incentivize leading data scientists.

While AI graduates are no doubt welcoming the trend — who wouldn’t kill for a starting salary that high? — it’s having an alarming impact on academia.

A 2019 survey co-authored by researchers at the Hebrew University of Jerusalem and Cheung Kong Graduate School of Business in Beijing found that close to 100 AI faculty members left North American universities for industry jobs between 2018 and 2019 — an outsized cohort in the context of a specialized computer science field. Between 2004 and 2019, Carnegie Mellon alone saw 16 AI faculty members depart, and the Georgia Institute of Technology and University of Washington lost roughly a dozen each, the study found.

The effects of the mass faculty exodus have been far-reaching. The Hebrew University and Cheung Kong survey concludes that it has had an especially stark impact on AI companies founded by students graduating from the universities those professors left: entrepreneurship chills in the years following faculty departures at a college, and the effect intensifies when the departing AI professors are replaced by faculty from lower-ranked schools or by untenured professors.

That’s perhaps why AI companies and labs are increasingly recruiting talent from industry — not universities.

A new report from VC firm SignalFire suggests that the percentage of AI hires coming from top schools such as Caltech, Harvard, Princeton, Yale and Stanford — or those with doctorates — has dropped significantly from a peak of around 35% in 2015. In 2023, the percentage was closer to 18%, as AI companies began to look for and hire more non-graduate candidates.

“We discovered a high concentration of top AI talent amongst a few startups when historically we saw this clustering at public giants like Google,” Ilya Kirnos, SignalFire’s co-founder and CTO, told TechCrunch+ via email. “That led us to look at where top AI talent was moving across the industry, and whether talent was more correlated with top universities or top startups.”

To arrive at its findings, SignalFire identified a subset of top AI talent via two routes: academic publications and open source project contributions. (Kirnos acknowledges that many AI researchers don’t publish papers or contribute to open source, but says that the report is meant to show a “representative slice” of the AI talent ecosystem rather than the whole picture.)

SignalFire cross-referenced authors at major AI conferences like NeurIPS and ICML with university employment listings to identify AI faculty, and then matched the contributors to popular AI software projects on GitHub with public employment feeds (like LinkedIn) to identify top overall contributors.
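In spirit, that methodology amounts to a name-matching join between two datasets: one of authorship (conference papers or open source commits), the other of employment records. The minimal Python sketch below illustrates the general idea only; the dataset contents, field names and classification rule are invented for illustration and are not SignalFire's actual pipeline.

```python
# A minimal sketch of cross-referencing author lists with employment records.
# All data, field names, and the academia/industry rule below are hypothetical
# stand-ins, not SignalFire's real data or method.

conference_authors = [
    {"name": "A. Researcher", "venue": "NeurIPS"},
    {"name": "B. Scholar", "venue": "ICML"},
]

employment_records = {
    "A. Researcher": "Example University",   # e.g., scraped from a faculty page
    "B. Scholar": "Example AI Startup",      # e.g., from a public profile feed
}

def classify(author):
    """Look up an author's employer and tag them as academia or industry."""
    employer = employment_records.get(author["name"], "unknown")
    sector = "academia" if "University" in employer else "industry"
    return {**author, "employer": employer, "sector": sector}

for row in (classify(a) for a in conference_authors):
    print(row)
```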

Kirnos says that SignalFire’s data shows a growing preference on the part of AI companies (e.g., Hugging Face, Stability AI, Midjourney) for alternatives to prestigious graduate hiring pools, one of those alternatives being the open research communities spawned by emerging AI tradecraft (see: prompt engineering). And this, Kirnos claims, is a good thing for its potential to lower the industry's barrier to entry for non-PhDs.

“This will create demand for new ways to assess recruiting candidates for real-world software engineering experience,” Kirnos said. “Instead of filtering by university brand names, we may see employers seek out new ways to screen applicants for expertise in building functional products out of the stack the company actually uses.”

Diversity is in the eye of the beholder, of course.

According to the Stanford study, AI PhD programs were decidedly homogeneous as of 2019, with white students accounting for 45% of AI PhD graduates. But so were AI teams in industry: in its State of AI in 2022 report, McKinsey found that the average share of employees developing AI solutions who identify as racial or ethnic minorities was a paltry 25%, and that 29% of organizations had no minority employees working on AI whatsoever.

Kirnos stands behind his assertion, but he does suggest that universities could provide students with more opportunities to apply their research in real-world scenarios that more closely mirror work experience in the sector.

“Engineering is increasingly moving away from building whole products from scratch in a vacuum,” he added, “and toward cobbling together stacks of AI models, APIs, enterprise tools and open source software.”

This writer is hopeful that universities will sit up, pay attention to the alarming trend, and then do something about it. Prestigious AI doctorate programs certainly deserve criticism for their exclusivity and for the ways it concentrates power and accelerates inequality. But I, for one, am loath to embrace a future where industry, through hiring and other means of influence, commands increasing control over the AI field's direction.
