Abstract
The convergence of artificial intelligence (AI) and biology has raised increasing concerns about generative AI, such as large language models (LLMs), and its potential for harm and misuse in the context of biosecurity. President Biden’s Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order 14110) has prompted numerous discussions that may lead to recommendations from government agencies and other bodies tasked with investigating these issues in the United States. For example, the National Academies of Sciences, Engineering, and Medicine (NASEM) is assessing the ways in which AI can increase biosecurity risks, particularly pandemic threats. A recent meeting that brought together scientists from industry and academia illustrated how generative AI has been used to design proteins, small molecules, and CRISPRs, showcasing its clear potential to aid innovation. At the same time, misuse of AI is secondary to the science itself, and most scientists do not consider misuse potential unless they are required to. The meeting also discussed several steps that could be taken to improve the security of LLMs and that may prevent or slow misuse, as well as the limitations of such measures. The general ease with which these generative AI technologies can be used opens them to the public, although a college education may be necessary to fully understand and benefit from their capabilities. At this stage, it is unclear what recommendations will come from NASEM or the US government, or what impact these will have on the rapidly growing AI field.
