Stanford AI researchers introduce LILAC, reinforcement learning for dynamic environments

Researchers from the Stanford AI Lab (SAIL) have devised a way to deal with data and environments that change over time in a manner that outperforms some leading approaches to reinforcement learning. Lifelong Latent Actor-Critic, aka LILAC, uses latent variable models and a maximum entropy policy to leverage past experience for better sample efficiency and performance in dynamic environments.
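
To make the idea concrete, here is a minimal sketch (not the authors' code) of a maximum-entropy, latent-conditioned actor in the spirit of LILAC: the policy is conditioned on both the observation and a per-episode latent variable, and its loss trades off expected value against an entropy bonus, as in SAC. The network sizes, dimensions, and the way the latent z is obtained are illustrative assumptions.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, LATENT_DIM = 8, 2, 4   # assumed toy dimensions
ALPHA = 0.2                              # entropy temperature, as in SAC

class LatentConditionedActor(nn.Module):
    """Gaussian policy conditioned on the observation and an episode latent z."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 2 * ACT_DIM),          # outputs mean and log-std
        )

    def forward(self, obs, z):
        mean, log_std = self.net(torch.cat([obs, z], dim=-1)).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.clamp(-5, 2).exp())

class LatentConditionedCritic(nn.Module):
    """Q-function that also takes the episode latent z as input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM + LATENT_DIM, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs, act, z):
        return self.net(torch.cat([obs, act, z], dim=-1)).squeeze(-1)

def actor_loss(actor, critic, obs, z):
    """Maximum-entropy objective: maximize Q(s, a, z) plus an entropy bonus."""
    dist = actor(obs, z)
    action = dist.rsample()                    # reparameterized sample
    log_prob = dist.log_prob(action).sum(-1)
    q_value = critic(obs, action, z)
    return (ALPHA * log_prob - q_value).mean() # minimized by the optimizer

# Usage: in LILAC, z would be inferred by a latent variable model from past
# episodes; here it is a random placeholder.
actor, critic = LatentConditionedActor(), LatentConditionedCritic()
obs, z = torch.randn(32, OBS_DIM), torch.randn(32, LATENT_DIM)
actor_loss(actor, critic, obs, z).backward()
```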

“On a variety of challenging continuous control tasks with significant non-stationarity, we observe that our approach leads to substantial improvement compared to state-of-the-art reinforcement learning methods,” they wrote in a paper about LILAC. Reinforcement learning capable of adapting to changing environments could, for example, let robots or autonomous vehicles keep operating when weather conditions shift and rain or snow are introduced.

The authors conducted four tests in dynamic reinforcement learning environments, including a Sawyer robot from the Meta-World benchmark, a Half-Cheetah in OpenAI Gym, and a 2D navigation task. They found that in all domains, LILAC attains higher and more stable returns compared to some top reinforcement learning approaches, like the Soft Actor-Critic (SAC) introduced by Berkeley AI Research (BAIR) in 2018 and the Stochastic Latent Actor-Critic (SLAC), which UC Berkeley researchers introduced earlier this year.

Stanford researchers Annie Xie, James Harrison, and Chelsea Finn published a paper on LILAC two weeks ago in the preprint repository arXiv. Lead author Xie also worked with UC Berkeley professor Sergey Levine on SAC and SLAC.

“In contrast to these methods, LILAC infers how the environment changes in future episodes and steadily maintains high rewards over the training procedure, despite experiencing persistent shifts in the environment in each episode,” the paper reads.

The authors say the LILAC approach shares similarities with lifelong learning and online learning algorithms. Meta-learning and meta-reinforcement learning algorithms also attempt to adapt quickly to new settings.

In other recent reinforcement learning news, AI researchers from Google Brain, Carnegie Mellon University, the University of Pittsburgh, and UC Berkeley, including Levine again, recently introduced a new approach to domain adaptation, the technique of adjusting the reward function for agents in reinforcement learning environments. The approach attempts to make a source domain in a simulator more like a target domain, such as the real world.

“The agent is penalized for taking transitions which would indicate whether the agent is interacting with the source or target domain,” according to the domain adaptation paper released last week. “Experiments on a range of control tasks show that our method can leverage the source domain to learn policies that will work well in the target domain, despite observing only a handful of transitions from the target domain.”

Researchers modified the reward function with classifiers trained to distinguish between source and target domain transitions. They tested their approach on three tasks in OpenAI Gym. Benjamin Eysenbach of Google Brain and CMU is the lead author of that paper.
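
Below is a hedged sketch of the reward correction described above: two classifiers guess whether a transition came from the source or target domain, and their log-probability gap is added to the source-domain reward so that transitions that look distinguishable from the target dynamics are penalized. The architectures, dimensions, and exact form of the correction are simplified assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 8, 2   # assumed toy dimensions

class DomainClassifier(nn.Module):
    """Outputs log-probabilities over {source, target} for its input features."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, x):
        return torch.log_softmax(self.net(x), dim=-1)

# One classifier sees the full transition (s, a, s'); the other sees only (s, a).
clf_sas = DomainClassifier(2 * OBS_DIM + ACT_DIM)
clf_sa = DomainClassifier(OBS_DIM + ACT_DIM)

def modified_reward(reward, obs, act, next_obs):
    """Add a correction measuring how target-like the transition dynamics look."""
    log_p_sas = clf_sas(torch.cat([obs, act, next_obs], dim=-1))  # [..., (source, target)]
    log_p_sa = clf_sa(torch.cat([obs, act], dim=-1))
    delta_r = (log_p_sas[..., 1] - log_p_sas[..., 0]) - (log_p_sa[..., 1] - log_p_sa[..., 0])
    return reward + delta_r
```

In a full pipeline, the two classifiers would first be fit on transitions labeled by domain; here they are untrained stand-ins meant only to show where the reward correction plugs in.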
