Foundry scientists have recently demonstrated the ability of neural networks to learn time-dependent protocols for materials self-assembly and synthesis. This approach, based on a branch of machine learning called reinforcement learning, focuses on real-time control of instruments and is distinct from approaches that attempt to identify promising initial conditions for synthesis. We are currently implementing these algorithms in autonomous materials discovery workflows on several robotic synthesis systems, including liquid- and gas-phase synthesis tools. The result will allow autonomous experiments in which a user specifies an objective and the learning algorithm performs multiple syntheses, iteratively improving its time-dependent protocol, until the objective is attained.
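As an illustration of the closed loop described above, the sketch below shows an agent that repeatedly runs a synthesis with a time-dependent protocol (here, a sequence of temperature setpoints), scores the outcome against an objective, and keeps refining the best protocol found. Everything here is hypothetical: the toy reward, the hill-climbing update (a stand-in for a real reinforcement-learning algorithm), and all names are illustrative, not the Foundry's actual code.

```python
import random

N_STEPS = 5                          # protocol length (number of time steps)
TARGET = [300, 350, 400, 350, 300]   # hidden "ideal" schedule the toy reward encodes

def run_synthesis(protocol):
    """Toy stand-in for one robotic synthesis run: reward is higher the
    closer the protocol is to the hidden target schedule."""
    return -sum((p - t) ** 2 for p, t in zip(protocol, TARGET))

def propose(protocol, scale=10.0):
    """Perturb the current best protocol (simple hill climbing; a real
    system would use a learned, state-dependent policy update)."""
    return [p + random.uniform(-scale, scale) for p in protocol]

random.seed(0)
best = [325.0] * N_STEPS             # initial guess for the schedule
best_reward = run_synthesis(best)
for episode in range(200):           # each episode = one automated synthesis
    candidate = propose(best)
    reward = run_synthesis(candidate)
    if reward > best_reward:         # keep only improvements to the protocol
        best, best_reward = candidate, reward

print(round(best_reward, 1))
```

In a real workflow, `run_synthesis` would drive the robotic instrument and return a measured figure of merit, and the loop would terminate once the user's objective threshold is reached rather than after a fixed episode count.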