Revisiting the Stanford AI100 study
The first homework assignment for the Intro to AI class ended up being an essay. We were each tasked with writing a 500–600-word critique of the Stanford AI100 article. In the end, I decided not to critique the article on any technical basis; I didn't feel qualified to do that. Instead, I focused on some aspects of the article's scope. Here's the result:
The Stanford AI100 Study seeks to record and influence the impact of artificial intelligence in a series of articles over the course of 100 years. This ambitious endeavor is overseen by a Standing Committee, which periodically selects a Study Panel and charges it with producing a written article. The first Study Panel, appointed in 2015, recently published the first installment of this series in a report titled "Artificial Intelligence and Life in 2030". Its scope is to describe the opportunities and challenges that may arise from AI in the next fifteen years, restricted to society in North American cities. Given the uncertainty inherent to technological advancement and the universality of AI's significance for humanity, the Standing Committee's framing choices are overly speculative and overly parochial.
The Study Panel's mission to anticipate the status of technology in 2030 is misguided, given the uncertainty inherent to technological progress. Most technological advancement is accounted for by unexpected, improbable discoveries: "black swan" events, as Nassim Taleb calls them. As such, there are strong limitations on anyone's ability to predict technological progress. To the Study Panel's credit, they never seem to let this forecasting requirement guide their analyses in any concrete way; every reference they make to the year 2030 could easily be replaced by a more conservative reference to "the near future". However, the fact that the Standing Committee framed the report in this way suggests an unfounded optimism about the Study Panel's ability to forecast technological advancement. It would be more honest to frame future AI100 reports in a way that emphasizes assessing the current state and identifying immediate opportunities, rather than attempting forecasts ten or more years into the future.
The report's restriction to North American cities unnecessarily limits its perspective; it reflects the homogeneity of the Standing Committee and Study Panel. Artificial intelligence presents opportunities for all of mankind, and confronts all of mankind with important questions. It is important to avoid parochialism when surveying its implications. The provincialism of the first report is undoubtedly related to the homogeneous composition of the people involved. Five of the six Standing Committee members are Americans; the sixth is Canadian. Furthermore, fourteen of the seventeen Study Panel members are current long-term residents of the US; a fifteenth is Canadian. None of them are from East Asia or continental Europe. Developing regions, such as Africa or South America, have no representation. It is unclear whether the Study Panel was appointed before or after the scope was specified; whatever the case, the Standing Committee's choices centered myopically on their own piece of the world. While there is some rationale for starting small and "sticking to what you know", there is also risk associated with a homogeneous perspective: it blinds the group to "unknown unknowns", increasing the fragility of its work with respect to unforeseen contingencies.
"Artificial Intelligence and Life in 2030" is only the first of many AI100 reports, and the next will likely be more global in scope. Still, it was disappointing to see the first report framed in such an awkward fashion: culturally narrow and temporally distant. The Standing Committee ought to frame future reports in a more universal fashion and ought to appoint more diverse Study Panels to write them. This may require the Standing Committee to form new connections with people who are unfamiliar to them. However, if AI advancement is of fundamental importance, and if the Stanford AI100 Study wishes to provide authoritative guidance in this field, then it is crucial for the Standing Committee to avoid tunnel vision and unproductive speculation as it frames these studies.
\( \blacksquare\)