AI Scientists in the Lab

Today, we introduce Periodic Labs. Our goal is to create an AI scientist.

Science works by conjecturing how the world might be, running experiments, and learning from the results.

Intelligence is necessary, but not sufficient. New knowledge is created when ideas are found to be consistent with reality. And so, at Periodic, we are building AI scientists and the autonomous laboratories for them to operate.

…Autonomous labs are central to our strategy. They provide huge amounts of high-quality data (each experiment can produce GBs of data!) that exists nowhere else. They generate valuable negative results which are seldom published. But most importantly, they give our AI scientists the tools to act.

…One of our goals is to discover superconductors that work at higher temperatures than today’s materials. Significant advances could help us create next-generation transportation and build power grids with minimal losses. But this is just one example — if we can automate materials design, we have the potential to accelerate Moore’s Law, space travel, and nuclear fusion.

Our founding team co-created ChatGPT, DeepMind’s GNoME, OpenAI’s Operator (now Agent), the neural attention mechanism, and MatterGen; scaled autonomous physics labs; and contributed to important materials discoveries of the last decade. We’ve come together to scale up and reimagine how science is done.

The AIs can work 24 hours a day, 365 days a year, and with labs under their control the feedback will be quick. In nine hours, AlphaZero taught itself chess and then trounced Stockfish 8, then the strongest chess engine (Elo around 3378, compared with Magnus Carlsen’s peak of 2882). That was in 2017. In general, experiments are more open-ended than chess, but not necessarily in every domain. Moreover, context windows and capabilities have grown tremendously since 2017.

In other AI news, AI can be used to design proteins that mimic dangerous toxins such as ricin, and current safeguards are not very effective:

Microsoft bioengineer Bruce Wittmann normally uses artificial intelligence (AI) to design proteins that could help fight disease or grow food. But last year, he used AI tools like a would-be bioterrorist: creating digital blueprints for proteins that could mimic deadly poisons and toxins such as ricin, botulinum, and Shiga.

Wittmann and his Microsoft colleagues wanted to know what would happen if they ordered the DNA sequences that code for these proteins from companies that synthesize nucleic acids. Borrowing a military term, the researchers called it a “red team” exercise, looking for weaknesses in biosecurity practices in the protein engineering pipeline.

The effort grew into a collaboration with many biosecurity experts, and according to their new paper, published today in Science, one key guardrail failed. DNA vendors typically use screening software to flag sequences that might be used to cause harm. But the researchers report that this software failed to catch many of their AI-designed genes—one tool missed more than 75% of the potential toxins.

Solve for the equilibrium?
