AI2050 by Schmidt Futures
Percy Liang, 2022 Senior Fellow

Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011). His two research goals are (i) to make machine learning more robust, fair, and interpretable; and (ii) to make computers easier to communicate with through natural language. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), the IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), and a Microsoft Research Faculty Fellowship (2014).

AI2050 Project

We are entering a new era of AI dominated by foundation models (e.g., GPT-3), which are trained on broad data and can be adapted to a range of downstream tasks. Percy’s project will unpack foundation models, with a focus on language models. He will develop metrics to characterize models from a sociotechnical point of view, run experiments to understand how and why capabilities emerge from the training process, and build new foundation models that are more reliable, interpretable, modular, and efficient. Finally, Percy will reimagine what foundation models should look like from first principles, with an eye toward the implications for centralization of power.

Project Artifacts

Lee et al. Holistic evaluation of text-to-image models. NeurIPS. 2023.

D. Narayanan, K. Santhanam, P. Henderson, R. Bommasani, T. Lee, and P. Liang. Cheaply evaluating inference efficiency metrics for autoregressive transformer APIs. NeurIPS. 2023.

R. Bommasani, K. Klyman, S. Longpre, S. Kapoor, N. Maslej, B. Xiong, D. Zhang, and P. Liang. The foundation model transparency index. Stanford University Center for Research on Foundation Models. 2023.

P. Liang et al. Holistic evaluation of language models. arXiv. 2023.

C. Toups, R. Bommasani, K. Creel, S. Bana, D. Jurafsky, and P. Liang. Ecosystem-level analysis of deployed machine learning reveals homogeneous outcomes. arXiv. 2023.

R. Bommasani, P. Liang, and T. Lee. Holistic evaluation of language models. Annals of the New York Academy of Sciences. 2023.

N. Liu, T. Zhang, and P. Liang. Evaluating verifiability in generative search engines. arXiv. 2023.

R. Bommasani, P. Liang, and T. Lee. Language models are changing AI: the need for holistic evaluation. Stanford University Center for Research on Foundation Models. 2022.

AI2050 Community Perspective — Percy Liang (2023)

© Schmidt Futures 2023 All Rights Reserved.