DeepMind AI Lab releases 200 million 3D images of proteins

Matt Higgins and his team of researchers at Oxford University had a problem.

For years, they had been studying the parasite that causes malaria, a disease that still kills hundreds of thousands of people every year. They had identified an important protein on the surface of the parasite as a focal point for a potential future vaccine, and they knew its underlying chemical code. But the all-important 3D structure of the protein eluded them. That shape was the key to designing a vaccine that could bind to the protein and prevent the parasite from infecting human cells.

The best way for the team to take a “photograph” of the protein was to use X-rays, an imprecise tool that returned only the blurriest of images. Without a clear 3D image, their dream of developing truly effective malaria vaccines was just that: a dream. “We were never able, despite many years of work, to see in sufficient detail what this molecule looked like,” Higgins told reporters on Tuesday.

Then came DeepMind. The artificial intelligence lab, a subsidiary of Google’s parent company Alphabet, set out to solve one of science’s longstanding “grand challenges”: accurately predicting the 3D structures of proteins and enzymes. DeepMind built a program called AlphaFold that, by analyzing the chemical compositions and 3D shapes of thousands of known proteins, learned to predict the shapes of unknown proteins with surprising accuracy.

When DeepMind gave Higgins and his colleagues access to AlphaFold, the team marveled at the results. “Using AlphaFold has been truly transformational, giving us a very clear view of this surface protein of malaria,” Higgins told reporters, adding that the new clarity has allowed his team to start testing new vaccines targeting the protein. “AlphaFold has transformed the speed and capacity of our searches.”

On Thursday, DeepMind announced that it would make its predictions of the 3D structures of 200 million proteins – nearly every protein known to science – available to the entire scientific community. The release, DeepMind CEO Demis Hassabis told reporters, would energize the field of biology, enabling faster work in areas as diverse as sustainability, food security and neglected diseases. “Now you can look up a 3D structure of a protein almost as easily as doing a keyword search on Google,” Hassabis said. “It’s kind of like unlocking scientific exploration at digital speed.”
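In practice, “looking up” a structure means querying the AlphaFold Protein Structure Database that DeepMind hosts with EMBL-EBI. The short Python sketch below is a minimal illustration of such a lookup, not DeepMind’s documented workflow: it assumes the database’s public REST endpoint at https://alphafold.ebi.ac.uk/api/prediction/ and a “pdbUrl” field in its response, either of which may differ from the live service.

import requests

# Minimal sketch, assuming the AlphaFold database's public REST endpoint;
# the endpoint path and response fields are assumptions and may change.
accession = "P69905"  # example UniProt accession: human hemoglobin subunit alpha

# Ask the database for prediction metadata for this accession.
resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{accession}", timeout=30
)
resp.raise_for_status()
record = resp.json()[0]  # assumed: the API returns a list of model records

# Download the predicted structure as a PDB file.
pdb = requests.get(record["pdbUrl"], timeout=30)
pdb.raise_for_status()
with open(f"AF-{accession}.pdb", "wb") as f:
    f.write(pdb.content)
print(f"Saved predicted 3D structure for {accession}")

The saved .pdb file can then be opened in any standard molecular viewer, such as PyMOL, to inspect the predicted structure.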

The AlphaFold project is good publicity for DeepMind, whose stated end goal is to build an “artificial general intelligence”: a theoretical computer that could perform most imaginable tasks with more skill and speed than any human. Hassabis has described solving scientific grand challenges as necessary steps toward that goal, which, if achieved, could transform scientific progress and human prosperity.

DeepMind’s CEO described AlphaFold as a “gift to humanity”. A DeepMind spokesperson told TIME that the company is making AlphaFold’s code and data freely available for any use, commercial or academic, under irrevocable open-source licenses, to benefit humanity and the scientific community. But some AI researchers and experts have raised concerns that while machine learning research is accelerating the pace of scientific progress, it could also concentrate wealth and power in the hands of a few businesses, threatening fairness and political participation in society at large.

The allure of “artificial general intelligence” may explain why DeepMind’s owner Alphabet (then known as Google), which paid more than $500 million for the lab in 2014, has historically allowed it to work in areas it considers beneficial to humanity as a whole, even at a high immediate cost to the business. DeepMind ran at a loss for years, with Alphabet writing off $1.1 billion of debt incurred from those losses in 2019, but it made a modest profit of $60 million for the first time in 2020. That profit came entirely from selling its AI to other arms of the Alphabet empire, including technology that improves the efficiency of Google’s voice assistant, its Maps service and the battery life of its Android phones.

The complicated role of AI in scientific discovery

The combination of masses of data and computing power, coupled with powerful pattern-detection methods called neural networks, is rapidly transforming the scientific landscape. These technologies, often referred to as artificial intelligence, are helping scientists in areas as diverse as understanding how stars evolve and accelerating drug discovery.

But this transformation is not without risks. In a recent study, researchers at a drug discovery company reported that, with just small modifications, their drug discovery algorithm could generate toxic molecules such as the nerve agent VX, as well as compounds unknown to science that could be even more deadly. “We have spent decades using computers and AI to improve human health, not to degrade it,” the researchers wrote. “We were naive in thinking about the potential misuse of our craft.”

For its part, DeepMind says it carefully considered the risks of releasing the AlphaFold database to the public, and made the decision after consulting more than 30 bioethics and security experts. “The assessment came back saying that [with] this release, the benefits far outweigh the risks,” Hassabis told TIME during a briefing with reporters on Tuesday.

Hassabis added that DeepMind made some adjustments in response to the risk assessment, to be “careful” with the structures of viral proteins. A DeepMind spokesperson later clarified that viral proteins had been excluded from AlphaFold for technical reasons, and that the consensus among experts was that AlphaFold would not significantly lower the barrier to entry for those seeking to cause harm with proteins.

According to Ewan Birney, director of EMBL’s European Bioinformatics Institute, which partnered with DeepMind on the research, even if AlphaFold were to help a bad actor design a dangerous compound, the same technology in the hands of the scientific community at large could be a force multiplier for antidote or vaccine design efforts. “I think, like with all risks, you have to think about the balance here and the upside,” Birney told reporters on Tuesday. “The accumulation of human knowledge is just a huge advantage. And the entities that could be risky are likely to be a very small handful. So I think we’re comfortable.”

But DeepMind recognizes that the balance of risk may play out differently in the future. Artificial intelligence research has long been characterized by a culture of openness, with researchers from competing labs often sharing their source code and results publicly. But Hassabis told reporters on Tuesday that as machine learning advances into other, potentially riskier areas of science, that open culture may have to be scaled back. “Coming [systems], if they are risky, the whole community should consider different ways to provide access to this system – not necessarily all open source – because it could allow bad actors,” Hassabis said.

“Open-sourcing is not some sort of panacea,” Hassabis added. “It’s great when you can do it. But there are often cases where the risks may be too great.”

Write to Billy Perrigo at [email protected]