Lawyers say US cybersecurity law too ambiguous to protect AI security researchers
Been injecting prompts to test the safety of large language models? Better call Saul
Black Hat Existing US laws targeting those who illegally break into computer systems don't accommodate modern large language models (LLMs) and can open researchers up to prosecution for what ought to be sanctioned security testing, say a trio of Harvard scholars. …