The NIST AI 600-1 “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” (GenAI Profile) is a “companion” publication to the NIST AI Risk Management Framework. Among its contributions, 600-1 identifies and defines risks that are “unique to or exacerbated by” GenAI, including chemical, biological, radiological, or nuclear (CBRN) information; confabulation; dangerous, violent, or hateful content; data privacy; environmental impacts; and human-AI configuration. My focus in this note is to expand on human-AI configuration.

As a preliminary matter, it is worth highlighting that this risk is captured in the Human-Centered core principle and, more broadly, that 600-1 is properly viewed as part of the Governance core principle. As part of dealing with the human-AI configuration risk, 600-1 recommends having in place processes and procedures that address a “user’s emotional entanglement with GAI functions,” which refers to the tendency of humans to develop an emotional attachment to an AI application. We can see how this plays out, for example, with Replika, a chatbot dubbed by its developer an “AI Friend” that is “Always here to listen and talk.” When the developer removed “erotic” features from the app, part of its user base was vocally unhappy, complaining that the company had interfered with their romantic relationship with the chatbot.

As GenAI applications become more computationally powerful, the risk of emotional entanglement tends to increase. This ties in with a phenomenon I have previously described as the “power of augmentation.” (The TL;DR version: once you introduce AI into an application, its capabilities are magnified, and the more computationally powerful the AI is, the greater the effect of the magnification.)

Emotional entanglement issues are complex to deal with, in part because entanglement is not always an obvious outcome. Sure, in the Replika example it can be expected to occur. But it is far less obvious in applications that are not designed as an AI friend. Consider, for example, OpenAI’s GPT-4o. Its capabilities are certainly there, and it would not be surprising if some users found ways to use it as an adult AI friend or companion, or in other ways OpenAI did not intend.
