by Bogdana R. Rakova, Senior Trustworthy AI fellow at Mozilla Foundation
The “I consent” in a Terms of Service (ToS) agreement may be the most fraudulent form of consent given on the Internet. The recent Zoom ToS controversy, and the backlash that followed it, brought to light how technology companies can invisibly change their terms to build proprietary AI models. As startups and tech companies race to deploy ever more powerful models, contracts such as ToS and data governance policies become more common and more consequential.
This flawed form of consent does not have to be the norm. In the context of AI risks and harms, new consent models are emerging that emphasize trust, transparency, and human agency. These mechanisms draw on interdisciplinary areas such as human-computer interaction, privacy, legal design, and feminist science and technology studies. The Terms We Serve With (TwSw) initiative, together with a related academic article, presents a vision for implementing a multi-stakeholder agreement framework and an alternative user contract. Adopting TwSw results in:
- Human-centered reparative user agreements
- User studies and user experience research
- An ontology of AI failure modes as perceived by users
- Contestability mechanisms for continuous AI monitoring, grounded in that ontology
- Mechanisms for mediating algorithmic risks, harms, and failures-to-function when they occur
This blog post outlines starting points for how rewriting clauses within a ToS could enable a reparative user agreement. Building on the call for algorithmic reparation made by Jenny L. Davis, Apryl Williams, and Michael W. Yang, we define a reparative approach as one that identifies, unmasks, and undoes sociotechnical harms.
Image from the Mozilla Responsible AI Challenge workshop on prototyping norms and agreements for AI communities. This workshop, along with engagements with communities of practice and technology companies, allowed us to put the TwSw framework into practice and refine our recommendations.
Human-centered contextual disclosure of data and AI governance
Based on our experience of putting the TwSw framework into practice, we suggest that user agreements and design interfaces should contextually disclose how a product or service uses algorithms, machine learning, or other kinds of automated decision-making, along with potential failure modes and downstream risks. This would include disclosures about what data users provide and how that data is used, for example to build or improve algorithms. Human-centered disclosure would also enable AI companies to respond meaningfully to calls for critical AI education. Maha Bali, Kathryn Conrad, and other educators have argued that users need improved literacy about AI systems, including the ability to know when, where, and why to use them, and what they are used for.
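To make this concrete, one could imagine such a contextual disclosure backed by a small machine-readable manifest that the product interface renders at the point of data collection. The Python sketch below is purely illustrative; the schema and field names are assumptions of ours, not part of the TwSw framework or any existing standard.

```python
from dataclasses import dataclass

@dataclass
class AIDisclosure:
    """Hypothetical machine-readable disclosure a UI could render in context."""
    feature: str                    # where in the product the AI is used
    uses_automated_decisions: bool  # does this feature rely on AI/ML?
    data_collected: list[str]       # what user data the feature ingests
    data_purposes: list[str]        # e.g. "improve the suggestion model"
    retention_days: int             # how long the data is kept
    known_failure_modes: list[str]  # drawn from an ontology of failure modes
    downstream_risks: list[str]     # potential harms disclosed up front

# Example: a disclosure shown next to a hypothetical "smart reply" feature.
smart_reply_disclosure = AIDisclosure(
    feature="smart_reply",
    uses_automated_decisions=True,
    data_collected=["message text", "language setting"],
    data_purposes=["generate reply suggestions", "improve the suggestion model"],
    retention_days=30,
    known_failure_modes=["tone misclassification", "biased phrasing"],
    downstream_risks=["sending an unintended or inappropriate reply"],
)
```

A design like this keeps the disclosure adjacent to the feature it describes, rather than buried in a monolithic policy document.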
Jordan Famularo outlines disclosure prompts for companies to follow in a recent template for voluntary corporate reporting about data governance, cybersecurity, and AI:
- Provide a privacy or data protection policy that covers the entire organization, including its third parties. Include the types of information the organization collects; how that data is collected, processed, and shared; the purposes for which it is used; how long the organization retains user data; how it responds to third-party requests (both government and private) to share information; and how it assesses the risks of targeted advertising practices.
- Disclose whether the organization has an information and/or cyber security team, and whether it has developed a plan for incident management, which may include disaster recovery, business continuity, and disclosure of the impact of security vulnerabilities and data breaches.
- Disclose the organization’s AI governance policy, including the purposes for which algorithms are used and how racial, gender-based, and other types of bias are mitigated. Also disclose whether and how the organization performs human rights due diligence and/or audits to identify potential risks associated with algorithmic systems.
Incident reporting, contestability mechanisms, and third-party oversight
Academic scholars define contestability as the ability of people to challenge predictions made by machines. We expand this definition to include contestations throughout the entire life cycle of an algorithmic system, including the data it relies upon.
Systems designed to be contestable can offer users more transparency and agency, thereby helping to build trust. Contestability mechanisms include incident reporting systems, customer feedback forms, and community forums, as well as design choices that invite user feedback, such as thumbs up/down options or other feedback opportunities within the interface through which people interact with a system. A large body of research examines these mechanisms. User agreements for AI products and services should make contestability interventions explicit. Doing so would legitimize their use and improve companies’ ability to conduct algorithmic audits. There is also a need for external review of the data such contestability mechanisms report. The following terms could be added to the ToS of technology companies:
If you have any concerns about the Service, please let us know. You can provide feedback through our contestability mechanism. The feedback you provide is verified and reviewed by ___, an independent external accountability forum.
We agree to resolve any dispute between you and [company] that arises out of this Agreement or relates to the Services through arbitration or mediation.
In addition, limitation of liability clauses in ToS agreements must specifically address harms caused by algorithmic systems.
[THE COMPANY] AND ITS AFFILIATES AND EACH OF THEIR LICENSORS, AND SUPPLIERS WILL NOT BE LIABLE FOR ANY… INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL, EXEMPLARY, OR PUNITIVE DAMAGES TO THE EXTENT THAT THERE IS NO DEMONSTRABLE HARM, EITHER INDIRECT OR DIRECT, SHOWN.
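As a minimal sketch of how the contestability mechanism referenced in the clauses above might be structured, the snippet below records user contestations against a hypothetical ontology of AI failure modes and serializes them for review by an independent accountability forum. All names and categories here are illustrative assumptions, not an existing API.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical ontology of AI failure modes, as perceived by users.
FAILURE_MODES = {"biased_output", "factual_error", "privacy_violation", "other"}

@dataclass
class Contestation:
    """One user-submitted challenge to an algorithmic decision."""
    user_id: str
    decision_id: str   # identifier of the contested model output
    failure_mode: str  # category from the ontology above
    description: str   # the user's own account of the harm
    submitted_at: str

def submit_contestation(user_id: str, decision_id: str,
                        failure_mode: str, description: str) -> Contestation:
    """Validate a report against the ontology and timestamp it."""
    if failure_mode not in FAILURE_MODES:
        raise ValueError(f"unknown failure mode: {failure_mode}")
    return Contestation(
        user_id=user_id,
        decision_id=decision_id,
        failure_mode=failure_mode,
        description=description,
        submitted_at=datetime.now(timezone.utc).isoformat(),
    )

def export_for_external_review(reports: list[Contestation]) -> str:
    """Serialize reports for an independent accountability forum to audit."""
    return json.dumps([asdict(r) for r in reports], indent=2)
```

Routing the exported reports to a third party, rather than keeping them internal, is what distinguishes this from an ordinary feedback form.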
Such revisions to ToS agreements would allow for fundamental changes in how we interact with AI systems, encouraging companies to move away from click-through agreements that create the illusion of consent and push users into tradeoffs without their knowledge.
Participation and active co-design of user agreements
Innovation in user agreements can also give rise to new social relations, organizing structures, and collective guardrails for the adoption of technology. Our goal is to highlight the possibility of alternatives, building on the work of scholars who have called on regulators to consider how current contracting regimes can be dehumanizing.
What if, instead of spending nearly 76 days in a single year reading the digital privacy policies you have agreed to (as estimated in research by Aleecia McDonald and Lorrie Cranor), engagement could happen through a quiz or game that improves data and AI literacy? Contextual scenarios could also encourage users to consider underlying values and social norms, and whether these align with the products they use.
You can use a third-party tool to negotiate your terms, provide your explicit consent, and participate in the co-constitution of and direct intervention in the agreement, including limitations of liability, dispute resolution, and other issues. We hope you will use our services for a long time, so you will have the opportunity to voice your opinion as circumstances change. We recognize that your agreement to use our service is not permanent, and we actively and continuously engage with how you want to interact with our service.
When AI products or services are offered to users, the ability to discuss the terms of agreement will invariably introduce friction. When anticipated, friction can improve interaction and build trust by allowing for more meaningful forms of participation, mutual consent, and genuine choice over contract terms.
We will let you express your reasons for refusing to negotiate the terms of the user agreement, opting out, or refusing any particular term. This creates a feedback loop that helps us improve our system. We will also provide a forum called ____ that aims to repair any harms experienced. We will ask for your explicit consent to use any information collected from this forum in algorithmic bug-bounty programs and algorithmic audits, including community-led initiatives.
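One hypothetical shape for the term-level negotiation and refusal feedback loop described in the clauses above: track consent per clause rather than as a single click-through, and surface refusal reasons back to the company. This is an illustrative sketch under our own assumptions, not a prescribed implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class TermDecision:
    """A user's standing decision on one clause of the agreement."""
    term_id: str           # e.g. "limitation_of_liability"
    accepted: bool
    reason: Optional[str]  # captured when the user opts out or refuses
    decided_at: str

class NegotiableAgreement:
    """Consent tracked per clause instead of a single click-through."""

    def __init__(self, term_ids: list[str]):
        self.term_ids = set(term_ids)
        self.decisions: dict[str, TermDecision] = {}

    def record(self, term_id: str, accepted: bool,
               reason: Optional[str] = None) -> None:
        """Record acceptance or refusal of one term; decisions can be revisited."""
        if term_id not in self.term_ids:
            raise ValueError(f"unknown term: {term_id}")
        self.decisions[term_id] = TermDecision(
            term_id, accepted, reason,
            datetime.now(timezone.utc).isoformat(),
        )

    def refusal_reasons(self) -> list[str]:
        # Feedback loop: refusal reasons feed back into improving the terms.
        return [d.reason for d in self.decisions.values()
                if not d.accepted and d.reason]
```

Because decisions carry timestamps and can be re-recorded, consent here is an ongoing relationship rather than a one-time event, which is the point of the clauses above.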
The suggested terms speak to a move away from transactional and toward relational interactions: contractual agreements that center equity, inclusion, and meaningful participation, and that build long-term relationships based on trust.
Conclusion
As increasingly capable AI models are put into production, it is important to create new opportunities for third-party actors who want to be involved in constructing sociotechnical security guardrails. By legitimizing new models of engagement and participation in user agreements, tech companies can signal to their users a proactive approach to AI risks and harms, and lay the groundwork for community-driven, justice-oriented AI governance models. Human-centered user agreements that enable actionable AI transparency must include (1) contextual disclosure of data and AI governance; (2) contestability mechanisms and third-party oversight; and (3) active co-design of and engagement with user agreements.
You can find more information at https://termsweservewith.org/. Please let us know what you think are the biggest challenges and opportunities in evolving human-centered user agreements for building trustworthy AI systems – [email protected].