This article was originally published in The AI Journal.

Artificial intelligence (AI) regulation has elements that must be tailored to each jurisdiction, and elements that can be applied across all jurisdictions.

This note was prompted by the Brazilian Senate’s AI Bill of Law, but it centers on the “universal,” global elements. Regulating a quicksilver-like object that we cannot see or define is a challenge for all of us.

There is no way to put the genie back in the bottle. Brazilians, like people everywhere, will continue to use AI products, effects, and detritus (secondary applications) in a large and growing number of ways, whether explicitly or covertly. The technology’s power and social impact will increase at an unpredictable rate, and it will compound: each gain in power feeds further innovation.

As with certain astronomical phenomena, we can see the edge of the compounding analytic engines now emerging, but not past it. This level of automated analysis has never before been available to humans, and the novelty itself compounds. It is an “AI Event Horizon” situation.

We cannot predict what AI will look like, or how it will best be regulated. We are trying to frame something we cannot see.

Humility, however, cuts its own path, and can help the Senate be more effective. I. Apply new and existing law adaptively and incrementally, in the manner of the common law. II. Leverage civil and professional groups to create an AI auditing ecosystem. Such an ecosystem will support the Senate’s intent while preserving economic efficiency and the adaptability of primary law, secondary regulation, and implementing codes.

While national law can establish broad principles and boundaries, empowered and well-managed technical teams can implement legislative intent with speed and scale. Nation states already do this efficiently and effectively in many areas, from financial and brokerage licensing to building and electrical safety codes. III. To match the speed and scale of civil and commercial AI, the Senate and other branches should finance, develop, and leverage public and privately owned “Governance AI.” This could be a socially beneficial and efficient way to regulate the AI explosion.

BACKGROUND DEFINITIONS

It is useful to define key concepts before turning to the recommendations. Not everyone will accept these definitions. But, as with the defined terms of a merger agreement or other complex contract, definitions simplify the analysis.

AI: Software that performs analysis.

Law: The regulation of human behavior, states, and relationships.

Engineering: The art and science of getting things done.

Legal Engineering: The art and science of getting things done legally.

AI Event Horizon: A metaphor for the difficulty of predicting, and therefore regulating, the effects of a new, compounding intelligence phenomenon.

All of these definitions are elaborated in On Legal AI, Part I, Chapter 5.

1. The Present Regulatory Dilemma

Brazil already regulates many forms of analysis in many contexts, including housing, medicine, finance, and discrimination. There is no need to reinvent the wheel for every new machination. To some extent, a “dynamic-steady” approach can adapt existing systems to new phenomena.

AI nonetheless poses a rapidly growing problem. Its speed and scale in conducting analysis (whether informationally “extractive,” “generative,” or “predictive”) are new: it will outpace manual analysis. The infrastructure of legal regulation, rather than its content (which will change as the people do), must keep up with the speed and scale AI is achieving.

There is only one way to achieve that goal: use AI to regulate AI. Human regulators attempting to police AI events manually, one at a time, are bound to fail. Creating AI to audit and regulate AI requires three levels.

(1) Implementers. Those who create AI for any purpose (e.g., commercial and civil-society AI used by consumers).

(2) Auditors. Independent audit groups should be authorized to audit the algorithmic and execution details of implementers. Professional licensing should be required. The auditors can use the same technology as the implementers themselves; federal disclosure requirements and human cross-checks may also be required (see, for example, US securities laws). Legal-technical auditors can analyze commercial AI activity quickly, diligently, and with great acuity. The author suggests that lawyers be required to participate in these auditing groups, because such requirements have demonstrably deterred financial fraud and corruption in the US and other countries. There is, of course, no guarantee: the auditors themselves will also need to be monitored and held accountable.

(3) Government. Government agencies will require AI tools of their own to audit legal conformity, either because implementers or independent auditors have failed to meet requirements, or because the Senate or other bodies are confronting new and important issues that need government investigation.

This interconnected series of Governance AIs (and of Legal Engineering control systems embedded in AI systems by all users) can modulate and adaptively regulate AI implementations and related phenomena. Note that implementers could evade AI-focused laws simply by adding de minimis human involvement before analysis; the Senate bill and other regulatory models may need to adapt to this and other evasions. Government AI can be deployed selectively where necessary. Independent AI auditors, themselves using legal AI, can make regular, periodic compliance tests relatively painless for large numbers of AI implementers of every size. The bill’s implementation control requirements can thus make AI development safer and more productive for implementers and final clients alike.
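
To make the idea of automated, periodic compliance testing concrete, here is a minimal sketch of how an independent auditor’s tool might probe an implementer’s decision system without ever seeing its source code. All names, thresholds, and the use of the US “four-fifths rule” are illustrative assumptions, not prescriptions from this note:

```python
# Hypothetical auditor-side compliance probe. The auditor queries the
# implementer's decision function with a controlled test set and measures
# outcome disparity between groups; no access to the algorithm is needed.

def disparate_impact_ratio(decide, applicants, group_key="group"):
    """Ratio of the lowest to the highest approval rate across groups."""
    rates = {}
    for g in {a[group_key] for a in applicants}:
        cohort = [a for a in applicants if a[group_key] == g]
        rates[g] = sum(decide(a) for a in cohort) / len(cohort)
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

def passes_audit(decide, probe_set, threshold=0.8):
    # 0.8 mirrors the US "four-fifths rule", used here only as an example.
    return disparate_impact_ratio(decide, probe_set) >= threshold
```

In use, the auditor would run `passes_audit` against a probe set of synthetic applicants who are identical except for group membership; a decision function that favors one group fails, while one keyed to legitimate criteria passes.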

This nested system of regulatory controls is likely to work better than the algorithmic repository approach of the People’s Republic of China (“PRC”), for several reasons. First, the PRC requires all AI implementers to deposit their algorithms (the heart of their AI systems) into a government repository. This is a taking of trade secrets on a massive scale, and it puts every depositing business or organization at risk. The repository would be extremely valuable to the government and to individual government agents; corruption, misuse, and “algorithmic theft” are all possible, since any agent working on the repository side would gain incomparable competitive knowledge. Such systems are not infrequently misused. Second, an algorithmic deposit does not tell the government how the AI is being used. The same algorithm can operate in discriminatory or non-discriminatory ways depending on how it is applied and which data feeds are “silenced” or activated. That distinction can be caught by an independent auditor skilled in both law and AI engineering, but not by a static repository.
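
The “silenced/activated data feeds” point can be illustrated with a toy sketch. Everything here (the function, the zone list, the feed names) is hypothetical, invented only to show how one deposited algorithm can behave differently across deployments:

```python
# Hypothetical: one algorithm, two deployments. Whether it discriminates
# depends on the runtime configuration (which feeds are active), not on
# the deposited code itself.

PREFERRED_ZONES = {"01310", "04538"}  # stand-in for a proxy attribute

def credit_score(record, active_feeds):
    """Identical code in every deployment; behavior varies by config."""
    score = 0.0
    if "income" in active_feeds:
        score += record["income"] / 1000
    if "postal_code" in active_feeds:  # postal code can proxy a protected class
        score += 50 if record["postal_code"] in PREFERRED_ZONES else 0
    return score
```

Two applicants with identical incomes but different postal codes receive identical scores when only the `income` feed is active, and divergent scores when the `postal_code` feed is switched on. A repository holding only the algorithm would show the same code either way; only an auditor examining the live configuration and data feeds can see which variant is running.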

My group recommends that the Brazilian government financially and legally promote a three-tiered, interconnected, and overlapping system of Governance AI: Implementer, Independent Auditor, and Government/Standards-Setting. Creating such an ecosystem need not be costly; on the compliance side, it could be significantly cheaper than other regulatory options. First, national technical-legal standards can reduce compliance costs enormously, much as well-designed and locally, professionally enforced electrical codes do. Good software is rare and expensive to develop, but its marginal cost of reuse is close to zero per copy or use. To help AI implementers comply efficiently, the government could sponsor a national source-code repository of compliance tools, subject to security controls and to penalties for spoofing. Brazil is home to a highly developed and sophisticated computer-science ecosystem; this legal environment could harness that academic rigor for independent testing at civil scale.

This note’s primary recommendation is to create a three-tiered Governance AI ecosystem. That ecosystem, which can implement and realize the Senate’s intent with greater efficiency, adaptability, and cost-effectiveness, is a complement to legislation.

ACKNOWLEDGEMENTS

Thanks to Professor Juliano Maranhão (Universidade de São Paulo) and his team for prompting the original comment, and especially to Anthony Novaes (Universidade Presbiteriana Mackenzie) for coordinating and translating this note, and for his leadership and scholarship in general.

AUTHOR

Joshua Walker is the author of On Legal AI (Full Court Press, 2019) and the CEO and co-founder of System.Legal, which provides full-service legal technology consulting and AI services for lawyers, civil society, governments, and other entities worldwide. He was also a co-founder, architect, and leader of Lex Machina, the most trusted analytic platform for US lawsuits, relied upon by the US government, lawyers, journalists, and scholars around the world, and a co-founder of CodeX (the Stanford Center for Legal Informatics). He received his undergraduate degree, magna cum laude, from Harvard College, and his J.D. from The University of Chicago Law School, where he was a Cornerstone Scholar.
