AI with AI
Episode 1.47: Curiosity Killed the Poison Frog, Part I
Andy and Dave briefly discuss the results of the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN) for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous-looking poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh examines curiosity-driven learning across 54 benchmark environments (including video games and physics engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”