AI and Biosecurity: Experts Call for Mandatory Rules to Prevent Pandemic Risks

  • Dr. Kurtis Gottlieb
  • August 29, 2024 02:03pm

Recent advancements in artificial intelligence (AI) have raised concerns about the technology's potential impact on biosecurity. A new paper published in the journal Science warns that AI models could create the next "enhanced pathogens capable of causing major epidemics or even pandemics."

The paper's co-authors, from Johns Hopkins University, Stanford University, and Fordham University, say that AI models are being "trained on or [are] capable of meaningfully manipulating substantial quantities of biological data, from speeding up drug and vaccine design to improving crop yields."

"However, as with any powerful new technology, such biological models also pose considerable risks. Because of their general-purpose nature, the same biological model able to design a benign viral vector to deliver gene therapy could be used to design a more pathogenic virus capable of evading vaccine-induced immunity," the researchers wrote in their abstract.

The paper's authors argue that voluntary commitments among developers to evaluate biological models' potential dangerous capabilities are meaningful and important but cannot stand alone. They propose that national governments, including the United States, pass legislation and set mandatory rules to prevent advanced biological models from substantially contributing to large-scale dangers, such as the creation of novel or enhanced pathogens capable of causing major epidemics or even pandemics.

Although today's AI models likely do not "substantially contribute" to biological risks, the "essential ingredients to create highly concerning advanced biological models may already exist or soon will," Time quoted the paper's authors as saying.

They reportedly recommend that governments create a "battery of tests" that biological AI models must undergo before being released to the public, after which officials can determine how restricted access to the models should be.

"We need to plan now," Anita Cicero, the deputy director at the Johns Hopkins Center for Health Security and one of the co-authors of the paper, said according to Time. "Some structured government oversight and requirements will be necessary in order to reduce risks of especially powerful tools in the future."

Cicero reportedly added that biological risks from AI models could become a reality "within the next 20 years, and maybe even much less" without the proper oversight.

Paul Powers, an AI expert and CEO of Physna – a company that helps computers analyze 3D models and geometric objects – told Fox News Digital, "If the question is, can AI be used to engineer pandemics, 100%. And as far as how far down the road we should be concerned about it, I think that AI is advancing at a rate that most people are not prepared for."

Powers highlights the concern that it's not just governments and large businesses that have access to these increasingly powerful capabilities, but also individuals and small businesses. However, he also acknowledges the difficulty in regulating AI due to its rapid pace of advancement.

"The problem with regulation here is that one, as much as everyone wants a global set of rules for this, the reality is that it is enforced nationally. Secondly, is that regulation doesn't move at the speed of AI. Regulation can't even keep up with technology as it has been, with traditional speed," Powers explained.

Powers suggests focusing on restricting access to the building blocks of potential pathogens, and he emphasizes that mandatory rules and government oversight are crucial to mitigating the biosecurity risks posed by AI.
