
AI with AI

Episode 3.6: Some Superintelligent Assembly Required

In news, the Defense Innovation Board releases AI Principles: Recommendations on the Ethical Use of AI by the Department of Defense. The National Institute of Standards and Technology’s National Cybersecurity Center of Excellence releases a draft for public comment on adversarial machine learning, which includes an in-depth taxonomy of the possibilities. Google adds BERT to its search algorithm, with its capability for bidirectional representations, in an attempt to let users “let go of some of your keyword-ese.”

In research, Stanford University and Google demonstrate a method for explaining how image classifiers make their decisions, with Automatic Concept-based Explanations (ACE), which extracts visual concepts such as colors and textures, or objects and parts. And GoogleAI, Stanford, and Columbia researchers teach a robot arm the concept of assembling objects with Form2Fit, which is also capable of generalizing its learning to new objects and tasks.

In other items, Danielle Tarraf pens the latest response to the National Security Commission on AI’s call for ideas, with “Our Future Lies in Making AI Robust and Verifiable.” Jure Leskovec, Anand Rajaraman, and Jeff Ullman make the second edition of Mining of Massive Datasets available. The Defense Innovation Board posts a video of its public meeting from 31 October at Georgetown University. And Maciej Ceglowski’s “Superintelligence: The Idea That Eats Smart People” examines the arguments against superintelligence as a risk to humanity.

CNA Office of Communications

John Stimpson, Communications Associate