IQIM Postdoctoral and Graduate Student Seminar

Friday, February 16, 2024
12:00pm to 1:00pm
East Bridge 114
Interpretable Expressivity Separations in Trainable Quantum Machine Learning
Eric Anschuetz, Sherman Fairchild Postdoctoral Scholar, Preskill Group

Abstract: Quantum mechanical systems naturally exhibit quantum correlations that are difficult to capture classically. Due to this property, there have been many proposals for using quantum systems to construct generative machine learning models capable of sampling from distributions beyond the reach of (efficient) classical models. Unfortunately, recent theoretical results have demonstrated that the resulting models typically take a time to train that is exponential in the associated model size; a corollary of these results is that practical exponential separations in expressive power over classical machine learning models are believed to be infeasible. Here, we circumvent these negative results by constructing a hierarchy of efficiently trainable quantum neural networks (QNNs) that exhibit unconditionally provable, polynomial memory separations of arbitrary constant degree over classical neural networks in performing a classical sequence modeling task. The classical networks we prove a separation over include well-known examples such as recurrent neural networks and Transformers. We show that quantum contextuality is the source of the expressivity separation, suggesting that other sequence learning problems with long-time correlations may be a regime where achievable separations in quantum machine learning exist.

Lunch will be provided following the talk on the lawn north of the Bridge Building.

For more information, please contact Marcia Brown by phone at 626-395-4013 or by email at marcia.brown@caltech.edu.