Drilling Down into Venture Capital Financing in Artificial Intelligence

It should come as no surprise that venture capital (VC) investors are drilling down into startups building businesses with Artificial Intelligence (AI) at the core. New data from PitchBook shows that AI startups make up 22% of first-time VC financing: according to its data through Q3 of 2024, $7 billion of the first-time funding raised by startups in 2024 went to AI and machine learning (ML) startups.
Crunchbase data also showed that in Q3 of 2024, AI-related startups raised $19 billion in funding, accounting for 28% of all venture dollars for that quarter. Notably, this figure excludes the $6.6 billion round raised by OpenAI, which was announced after Q3 closed. With this unprecedented level of investment in the AI vertical, there is increasing concern that i) some startups might be using AI more as a buzzword to raise capital than as a genuine focus of their business, and/or ii) bubbles are forming in certain sub-verticals.
PitchBook analysts also note that with limited funding available for startups, integrating AI into their offerings is crucial for founders to secure investment. However, this also makes it harder to distinguish which startups are genuinely engaging in meaningful AI work. For investors, the challenge lies in sifting through the AI “noise” to identify those startups that are truly transformative and focusing on key areas within the sector, which will be vital as we move into 2025.
A recent article in Forbes examined the themes that early-stage investors were targeting for the new year. For AI startups, these included the use of AI to help pharmaceutical companies optimize clinical trials, AI in fintech and personal finance, AI applications in healthcare to improve the patient-to-caregiver experience, and AI-driven vertical software that will disrupt incumbents.
According to the Financial Times (FT), this boom in AI investment comes at a time when the industry still has an “immense overhang of investments from venture’s Zirp era” (Zirp referring to the zero-interest-rate-policy environment that existed between 2009 and 2022). That overhang has left approximately $2.5 trillion trapped in private unicorns, and it remains unclear what exit events or IPOs will materialize and what valuations they will return to investors. Will investors get their capital back and see the returns they hope for? Only time will tell, but investors do not seem ready to slow their investment in AI startups any time soon. As the FT says, this could be a pivotal year for the fate of VC investment in AI. We will all be watching closely.

Bridging the Gap: How AI is Revolutionizing Canadian Legal Tech

While Canadian law firms have traditionally lagged behind their American counterparts in adopting legal tech, the AI explosion is closing the gap. This slower adoption rate isn’t due to a lack of innovation—Canada boasts a thriving legal tech sector. Instead, factors like a smaller legal market and stricter privacy regulations have historically hindered technology uptake. This often resulted in a noticeable delay between a product’s US launch and its availability in Canada.
Although direct comparisons are challenging due to the continuous evolution of legal tech, the recent announcements and release timelines for major AI-powered tools point to a notable shift in how the Canadian market is being prioritized. For instance, Westlaw Edge was announced in the US in July 2018, but the Canadian launch wasn’t announced until September 2021—a gap of over three years. Similarly, Lexis+ was announced in the US in September 2020, with the Canadian announcement following in August 2022. However, the latest AI products show a different trend. Thomson Reuters’ CoCounsel Core was announced in the US in November 2023, with the Canadian announcement following in February 2024. The announcement for Lexis+ AI came in October 2023 in the US and July 2024 in Canada. This rapid succession of announcements suggests that the Canadian legal tech market is no longer an afterthought.
The Canadian federal government has demonstrated a strong commitment to fostering AI innovation. It has dedicated CAD$568 million to its national AI strategy, with the goals of fostering AI research and development, building a skilled workforce in the field, and creating robust industry standards for AI systems. This investment should help Canadian legal tech companies such as Clio, Kira Systems, Spellbook, and Blue J Legal. With the Canadian government’s focus on establishing Canada as a hub for AI and innovation, these companies stand to benefit significantly from increased funding and talent attraction.
While the Canadian government is actively investing in AI innovation, it’s also taking steps to ensure responsible development through proposed legislation, which could impact the availability of AI legal tech products in Canada. In June 2022, the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA), which aims to regulate high-impact AI systems. While AI tools used by law firms for tasks like legal research and document review likely fall outside this initial scope, AIDA’s evolving framework could still impact the sector. For example, the Act’s emphasis on mitigating bias and discrimination may lead to greater scrutiny of AI algorithms used in legal research, requiring developers to demonstrate fairness and transparency.
While AIDA may present hurdles for US companies entering the Canadian market with AI products, it could conversely provide a competitive advantage for Canadian companies seeking to expand into Europe. This is because AIDA, despite having some material differences, aligns more closely with the comprehensive approach in the European Union’s Artificial Intelligence Act (EU AI Act).
While US companies are working to comply with the EU AI Act, Canadian companies may have an advantage. Although AIDA isn’t yet in force and has some differences from the EU AI Act, it provides a comprehensive regulatory framework that Canadian legal tech leaders are already engaging with. This engagement with AIDA could prove invaluable to Canadian legal tech companies as AI regulation continues to evolve globally.
Canadian companies looking to leverage their experiences with AIDA for European expansion will nonetheless encounter some material differences. For instance, the EU AI Act casts a wider net, regulating a broader range of AI systems than AIDA. The EU AI Act’s multi-tiered risk-based system is designed to address a wider spectrum of concerns, capturing even “limited-risk” AI systems with specific transparency obligations. Furthermore, tools used for legal interpretation could be classified as “high-risk” systems under the EU AI Act, triggering more stringent requirements.
In conclusion, the rise of generative AI is not only revolutionizing Canadian legal tech and closing the gap with the US, but it could also be positioning Canada as a key player in the global legal tech market. While AIDA’s impact remains to be seen, its emphasis on responsible AI could shape the development and deployment of AI-powered legal tools in Canada.

Litigation Minute: A Look Back and Ahead

What You Need to Know in a Minute or Less
Throughout 2024, we published three series highlighting emerging and evolving trends in litigation. From generative AI to ESG litigation, our lawyers continue to provide concise, timely updates on the issues most critical to our clients and their businesses.
In a minute or less, find our Litigation Minute highlights from the past year—as well as a look ahead to 2025.
Beauty and Wellness
Our first series of the year covered trends in the beauty and wellness industry, beginning with products categorized as “beauty from within,” including oral supplements focused on wellness. We outlined the risks of FDA enforcement and class action litigation arising from certain marketing claims associated with these products.
We next reviewed the use of “clean” and “natural” marketing terminology. We assessed these labeling claims across a range of potentially impacted products and brands, as well as regulatory and litigation risks associated with such claims. 
Alongside these marketing-focused issues, companies also face increased regulatory scrutiny, including new extended producer responsibility laws and the FTC Green Guides. We concluded our series by assessing product packaging and end-of-life considerations for beauty and wellness brands.
Generative AI
One of the most-discussed developments of 2024, generative AI was the focus of our second series of the year, which examined key legal, regulatory, and operational considerations associated with generative AI. We outlined education, training, and risk management frameworks in light of litigation trends targeting these systems.
2024 also saw several new state statutes regulating generative AI. From mandatory disclosures in Utah to Tennessee’s ELVIS Act, we examined how new state approaches would remain at the forefront of attention for companies currently utilizing or considering generative AI.
With the need for compliance and training in mind, we next discussed the potential for generative AI in discovery. Given its ability to rapidly sort through data and produce requested outputs in a timely manner, we provided an overview of how generative AI has created valuable tools for lawyers as well as their clients.
ESG Litigation
2024 highlighted the impacts of extreme weather, as well as the importance of preparation for such natural disasters. With extreme weather events expected to increase in both frequency and intensity around the world, we provided insurance coverage considerations for policyholders seeking to restore business operations following these events and weather the consequential financial storms. 
Further ESG headlines this year focused on the questions surrounding microplastics—including general definition, scientific risk factors, potential for litigation, and the hurdles complicating this litigation.
Greenwashing claims, on the other hand, have experienced fewer setbacks, with expanded litigation targeting manufacturers, distributors, and retailers of consumer products. These claims allege that companies falsely represent themselves or their products as “environmentally friendly,” and we reviewed how the risk of such claims can be mitigated through proper substantiation and documentation of company claims and certifications.

The Texas Responsible AI Governance Act and Its Potential Impact on Employers

On 23 December 2024, Texas State Representative Giovanni Capriglione (R-Tarrant County) filed the Texas Responsible AI Governance Act (the Act),1 adding Texas to the list of states seeking to regulate artificial intelligence (AI) in the absence of federal law. The Act establishes obligations for developers, deployers, and distributors of certain AI systems in Texas. While the Act covers a variety of areas, this alert focuses on the Act’s potential impact on employers.2 
The Act’s Regulation of Employers as Deployers of High-Risk Intelligence Systems
The Act seeks to regulate employers’ and other deployers’ use of “high-risk artificial intelligence systems” in Texas. High-risk AI systems include AI tools that make, or are a contributing factor in, “consequential decisions.”3 In the employment space, this could include hiring, performance, compensation, discipline, and termination decisions.4 The Act does not cover several common technologies, such as technology intended to detect decision-making patterns, anti-malware and antivirus programs, and calculators.
Under the Act, covered employers would have a general duty to use reasonable care to prevent algorithmic discrimination—including a duty to withdraw, disable, or recall noncompliant high-risk AI systems. To satisfy this duty, the Act requires covered employers and other covered deployers to do the following:
Human Oversight
Ensure human oversight of high-risk AI systems by persons with adequate competence, training, authority, and organizational support to oversee consequential decisions made by the system.5 
Prompt Reporting of Discrimination Risks
Report discrimination risks promptly by notifying the Artificial Intelligence Council (which would be established under the Act) no later than 10 days after the date the deployer learns of such issues.6 
Regular AI Tool Assessments
Assess high-risk AI systems regularly, including conducting a review on an annual basis, to ensure that the system is not causing algorithmic discrimination.7 
Prompt Suspension 
If a deployer considers or has reason to consider that a system does not comply with the Act’s requirements, suspend use of the system and notify the system’s developer of such concerns.8 
Frequent Impact Assessments
Complete an impact assessment on a semi-annual basis and within 90 days after any intentional or substantial modification of the system.9 
Clear Disclosure of AI Use
Before or at the time of interaction, disclose to any Texas-based individual:

That they are interacting with an AI system.
The purpose of the system.
That the system may or will make a consequential decision affecting them.
The nature of any consequential decision in which the system is or may be a contributing factor.
The factors used in making any consequential decisions.
Contact information of the deployer.
A description of the system.10 

Takeaways for Employers
The Act is likely to be a main topic of discussion in Texas’s upcoming legislative session, which is scheduled to begin on 14 January 2025. If enacted, the Act would establish a consumer protection-focused framework for AI regulation. Employers should track the Act’s progress and any amendments to the proposed bill while also taking steps to prepare for the Act’s passage. For example, employers using or seeking to use high-risk AI systems in Texas can:

Develop policies and procedures that govern the use of AI systems to make or impact employment decisions:
Include in these policies and procedures clear explanations of (i) the systems’ uses and purposes, (ii) the systems’ decision-making processes, (iii) the permitted uses of such systems, (iv) the approved users of such systems, (v) training requirements for approved users, and (vi) the governing body overseeing the responsible use of such systems. 
Develop and implement an AI governance and risk-management framework with internal policies, procedures, and systems for review, flagging risks, and reporting.
Ensure human oversight over AI systems.
Train users and those tasked with overseeing the AI systems.
Ensure there are sufficient resources committed to, and an adequate budget assigned to, overseeing and deploying AI systems and complying with the Act. 
Conduct due diligence on AI vendors and developers before engagement and on AI systems before use, including how the vendors, developers, and systems test for, avoid, and remedy algorithmic bias, and confirm that AI vendors and developers comply with the Act’s requirements for developers of high-risk AI systems.

Footnotes

1 A copy of HB 1709 is available at: https://capitol.texas.gov/tlodocs/89R/billtext/pdf/HB01709I.pdf (last accessed: 9 January 2025). 
2 Section 551.001(8). 
3 Section 551.001(13). The Act defines a “consequential decision” as “a decision that has a material, legal, or similarly significant, effect on a consumer’s access to, cost of, or terms of: (A) a criminal case assessment, a sentencing or plea agreement analysis, or a pardon, parole, probation, or release decision; (B) education enrollment or an education opportunity; (C) employment or an employment opportunity; (D) a financial service; (E) an essential government service; (F) residential utility services; (G) a health-care service or treatment; (H) housing; (I) insurance; (J) a legal service; (K) a transportation service; (L) constitutionally protected services or products; or (M) elections or voting process.”
4 Id.
5 Section 551.005. 
6 Section 551.011. 
7 Section 551.006(d). 
8 Section 551.005. 
9 Section 551.006(a). 
10 Section 551.007(a). 

New Jersey Attorney General: NJ’s Law Against Discrimination (LAD) Applies to Automated Decision-Making Tools

This month, the New Jersey Attorney General’s office (NJAG) added to nationwide efforts to regulate artificial intelligence technologies, or at least to clarify how existing law, in this case the NJ Law Against Discrimination, N.J.S.A. § 10:5-1 et seq. (LAD), applies to them. In short, the NJAG’s guidance states:
the LAD applies to algorithmic discrimination in the same way it has long applied to other discriminatory conduct.
If you are not familiar with it, the LAD generally applies to employers, housing providers, places of public accommodation, and certain other entities. The law prohibits discrimination on the basis of actual or perceived race, religion, color, national origin, sexual orientation, pregnancy, breastfeeding, sex, gender identity, gender expression, disability, and other protected characteristics. According to the NJAG’s guidance, the LAD protections extend to algorithmic discrimination (discrimination that results from the use of automated decision-making tools) in employment, housing, places of public accommodation, credit, and contracting.
Citing a recent Rutgers survey, the NJAG pointed to high levels of adoption of AI tools by NJ employers. According to the survey, 63% of NJ employers use one or more tools to recruit job applicants and/or make hiring decisions. These AI tools are broadly defined in the guidance to include:
any technological tool, including but not limited to, a software tool, system, or process that is used to automate all or part of the human decision-making process…such as generative AI, machine-learning models, traditional statistical tools, and decision trees.
The NJAG guidance examines some ways that AI tools may contribute to discriminatory outcomes.

Design. Here, the choices a developer makes in designing an AI tool could, purposefully or inadvertently, result in unlawful discrimination. The results can be influenced by the output the tool provides, the model or algorithms the tool uses, and the inputs the tool assesses, any of which can introduce bias into the automated decision-making tool.
Training. As AI tools need to be trained to learn the intended correlations or rules relating to their objectives, the datasets used for such training may contain biases or institutional and systemic inequities that can affect the outcome. Thus, the datasets used in training can drive unlawful discrimination.
Deployment. The NJAG also observed that AI tools could be used to purposely discriminate, or to make decisions for which the tool was not designed. These and other deployment issues could lead to bias and unlawful discrimination.

The NJAG notes that its guidance does not impose any new or additional requirements beyond those in the LAD, nor does it establish any rights or obligations for any person beyond what exists under the LAD. However, the guidance makes clear that covered entities can violate the LAD even if they have no intent to discriminate (or do not understand the inner workings of the tool) and, just as the EEOC noted in guidance it issued under Title VII, even if a third party was responsible for developing the AI tool. Importantly, under NJ law, this includes disparate treatment and disparate impact that may result from the design or use of AI tools.
As we have noted, it is critical for organizations to assess, test, and regularly evaluate the AI tools they seek to deploy for many reasons, including to avoid unlawful discrimination. These measures should include working closely with developers to vet the design and testing of their automated decision-making tools before they are deployed. In fact, the NJAG specifically cited many of these steps as ways organizations can decrease the risk of liability under the LAD. Maintaining a well-thought-out governance strategy for managing this technology can go a long way toward minimizing legal risk, particularly as the law develops in this area.
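One concrete form this kind of testing can take is a selection-rate comparison across demographic groups, in the spirit of the “four-fifths rule” heuristic long used in adverse impact analyses. The Python sketch below is purely illustrative and is not something the NJAG guidance prescribes; the group labels, data layout, and the 0.8 review threshold are assumptions.

from collections import Counter

def selection_rates(decisions):
    # decisions: iterable of (group, selected) pairs, e.g., ("A", True)
    totals, selected = Counter(), Counter()
    for group, picked in decisions:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact_ratios(decisions):
    # Compare each group's selection rate with the highest-rate group;
    # ratios below roughly 0.8 are commonly flagged for further review.
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(adverse_impact_ratios(sample))  # {'A': 1.0, 'B': 0.5}

A check like this is only a screening step; it does not establish or rule out discrimination under the LAD, but running it regularly on a tool's outcomes is one way to operationalize the assessment and testing described above.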

SEC Priorities for 2025: What Investment Advisers Should Know

The US Securities and Exchange Commission (SEC) recently released its priorities for 2025. As in recent years, the SEC is focusing on fiduciary duties and the development of compliance programs as well as emerging risk areas such as cybersecurity and artificial intelligence (AI). This alert details the key areas of focus for investment advisers.

1. Fiduciary Duties and Standards of Conduct
The Investment Advisers Act of 1940 (Advisers Act) established that all investment advisers owe their clients the duties of care and loyalty. In 2025, the SEC will focus on whether investment advice to clients satisfies an investment adviser’s fiduciary obligations, particularly in relation to (1) high-cost products, (2) unconventional investments, (3) illiquid assets, (4) assets that are difficult to value, (5) assets that are sensitive to heightened interest rates and market conditions, and (6) conflicts of interest.
For investment advisers who are dual registrants or affiliated with broker-dealers, the SEC will focus on reviewing (1) whether investment advice is suitable for a client’s advisory accounts, (2) disclosures regarding recommendations, (3) account selection practices, and (4) disclosures regarding conflicts of interest.
2. Effectiveness of Advisers Compliance Programs
The Compliance Rule, Rule 206(4)-7, under the Advisers Act requires investment advisers to (1) implement written policies reasonably designed to prevent violations of the Advisers Act, (2) designate a Chief Compliance Officer, and (3) annually review such policies for adequacy and effectiveness.
In 2025, the SEC will focus on a variety of topics related to the Compliance Rule, including marketing, valuation, trading, investment management, disclosure, filings, and custody, as well as the effectiveness of annual reviews.
Among its top priorities is evaluating whether compliance policies and procedures are reasonably designed to prevent conflicts of interest. Such examination may include a focus on (1) fiduciary obligations related to outsourcing investment selection and management, (2) alternative sources of revenue or benefits received by advisers, and (3) fee calculations and disclosure.
Review under the Compliance Rule is fact-specific, meaning it will vary depending on each adviser’s practices and products. For example, advisers who utilize AI for management, trading, marketing, and compliance will be evaluated to determine the effectiveness of compliance programs related to the use of AI. The SEC may also focus more on advisers with clients that invest in difficult-to-value assets.
3. Examinations of Private Fund Advisers
The SEC will continue to focus on advisers to private funds, which constitute a significant portion of SEC-registered advisers. Specifically, the SEC will prioritize reviewing:

Disclosures to determine whether they are consistent with actual practices.
Fiduciary duties during volatile markets.
Exposure to interest rate fluctuations.
Calculations and allocations of fees and expenses.
Disclosures related to conflicts of interest and investment risks.
Compliance with recently adopted or amended SEC rules, such as Form PF (previously discussed here).

4. Never Examined Advisers, Recently Registered Advisers, and Advisers Not Recently Examined
Finally, the SEC will continue to prioritize recently registered advisers, advisers not examined recently, and advisers who have never been examined.
Key Takeaways
Investment advisers can expect SEC examinations in 2025 to focus heavily on fiduciary duties, compliance programs, and conflicts of interest. As such, advisers should review their policies and procedures related to fiduciary duties and conflicts of interest, and evaluate the effectiveness of their compliance programs.

China’s National Intellectual Property Administration Issues Guidelines for Patent Applications for AI-Related Inventions

On December 31, 2024, China’s National Intellectual Property Administration (CNIPA) issued the Guidelines for Patent Applications for AI-Related Inventions (Trial Implementation) (人工智能相关发明专利申请指引(试行)). The Guidelines follow CNIPA’s draft for comments issued December 6, 2024, for which only a week was provided for comments. The short comment period suggests CNIPA did not actually want comments, and it contravenes the not-yet-effective Regulations on the Procedures for Formulating Regulations of the CNIPA (国家知识产权局规章制定程序规定(局令第83号)), which requires a minimum 30-day comment period. Highlights follow, including several examples regarding subject matter eligibility.
There are four types of AI-related patent applications:
Patent applications related to AI algorithms or models themselves
Artificial intelligence algorithms or models, that is, advanced statistical and mathematical model forms, include machine learning, deep learning, neural networks, fuzzy logic, genetic algorithms, etc. These algorithms or models constitute the core content of artificial intelligence. They can simulate intelligent decision-making and learning capabilities, enabling computing devices to handle complex problems and perform tasks that usually require human intelligence.
Accordingly, this type of patent application usually involves the artificial intelligence algorithm or model itself and its improvement or optimization, for example, model structure, model compression, model training, etc.
Patent applications related to functions or field applications based on artificial intelligence algorithms or models
Patent applications related to the functional or field application of artificial intelligence algorithms or models refer to the integration of artificial intelligence algorithms or models into inventions as an intrinsic part of the proposed solution for products, methods or their improvements. For example: a new type of electron microscope based on artificial intelligence image sharpening technology. This type of patent application usually involves the use of artificial intelligence algorithms or models to achieve specific functions or apply them to specific fields.
Functions based on artificial intelligence algorithms or models refer to functions implemented using one or more artificial intelligence algorithms or models. They usually include: natural language processing, which enables computers to understand and generate human language; computer vision, which enables computers to “see” and understand images or videos; speech processing, including speech recognition, speech synthesis, etc.; knowledge representation and reasoning, which represents information and enables computers to solve problems, including knowledge graphs, graph computing, etc.; data mining, which calculates and analyzes massive amounts of data to identify information or laws such as potential patterns, trends or relationships. Artificial intelligence algorithms or models can be applied to specific fields based on their functions.
Field applications based on artificial intelligence algorithms or models refer to the application of artificial intelligence to various scenarios, such as transportation, telecommunications, life and medical sciences, security, commerce, education, entertainment, finance, etc., to promote technological innovation and improve the level of intelligence in all walks of life.
Patent applications involving inventions made with the assistance of artificial intelligence
Inventions assisted by artificial intelligence are inventions made using artificial intelligence technology as an auxiliary tool in the invention process. In this case, artificial intelligence plays a role similar to that of an information processor or a drawing tool. For example, artificial intelligence may be used to identify specific protein binding sites, ultimately yielding a new drug compound.
Patent applications involving AI-generated inventions
AI-generated inventions refer to inventions and creations generated autonomously by AI without substantial human contribution, for example, a food container autonomously designed by AI technology.

AI cannot be an inventor:
1. The inventor must be a natural person
Section 4.1.2 of Chapter 1 of Part 1 of the Guidelines clearly states that “the inventor must be an individual, and the application form shall not contain an entity or collective, nor the name of artificial intelligence.”
The inventor named in the patent document must be a natural person. Artificial intelligence systems and other non-natural persons cannot be inventors. When there are multiple inventors, each inventor must be a natural person. The inventor’s property right to receive income and personal right to be named as inventor are civil rights, and only civil subjects that meet the provisions of the civil law can hold them. Because artificial intelligence systems cannot currently enjoy civil rights as civil subjects, they cannot be inventors.
2. The inventor should make a creative contribution to the essential features of the invention
For patent applications involving artificial intelligence algorithms or models, functions or field applications based on artificial intelligence algorithms or models, the inventor refers to the person who has made creative contributions to the essential features of the invention.
For inventions assisted by AI, a natural person who has made a creative contribution to the substantive features of the invention can be named as the inventor of the patent application. For inventions generated by AI, it is not possible to grant AI inventor status under China’s current legal framework.

Examples of subject matter eligibility:
The solution of the claim should reflect the use of technical means that follow the laws of nature to solve technical problems and achieve technical effects
The “technical solution” stipulated in Article 2, Paragraph 2 of the Patent Law refers to a collection of technical means that utilize natural laws to solve the technical problem to be solved. When a claim records that technical means utilizing natural laws are used to solve the technical problem to be solved, and a technical effect that conforms to natural laws is thereby obtained, the solution defined in the claim is a technical solution. Conversely, a solution that does not use technical means utilizing natural laws to solve a technical problem and obtain technical effects conforming to natural laws is not a technical solution.
By way of example and not limitation, the following describes several common situations in which the relevant solutions constitute technical solutions.
Scenario 1: AI algorithms or models process data with specific technical meaning in the technical field
If the claim is drafted to reflect that the object processed by the artificial intelligence algorithm or model is data with a definite technical meaning in the technical field, such that those skilled in the art can recognize that executing the algorithm or model directly reflects the process of solving a certain technical problem by using natural laws and obtains a technical effect, then the solution defined in the claim is a technical solution. For example, consider a method for identifying and classifying images using a neural network model. Image data is data with a definite technical meaning in the technical field. If those skilled in the art can see that the steps of processing image features in the solution are closely related to the technical problem of identifying and classifying objects, and corresponding technical effects are obtained, then the solution is a technical solution.
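As a purely illustrative aside (not part of the Guidelines), a minimal sketch of the kind of neural network image classifier this example contemplates might look like the following. The architecture, input shape, and number of classes are assumptions, and PyTorch is used only as a convenient framework.

import torch
from torch import nn

class TinyImageClassifier(nn.Module):
    # Minimal convolutional network: image pixels in, class scores out.
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

model = TinyImageClassifier()
image_batch = torch.randn(4, 3, 32, 32)  # stand-in for real image data
predicted_classes = model(image_batch).argmax(dim=1)
print(predicted_classes)

The point of the Guidelines’ example is not the particular architecture but that the processed object, image data, carries a definite technical meaning, so the feature-processing steps map onto the technical problem of identifying and classifying objects.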
Scenario 2: There is a specific technical connection between the AI algorithm or model and the internal structure of the computer system
If the drafting of a claim can reflect the specific technical connection between the artificial intelligence algorithm or model and the internal structure of the computer system, thereby solving the technical problem of how to improve the hardware computing efficiency or execution effect, including reducing the amount of data storage, reducing the amount of data transmission, increasing the hardware processing speed, etc., and can obtain the technical effect of improving the internal performance of the computer system in accordance with the laws of nature, then the solution defined in the claim belongs to the technical solution.
This specific technical association reflects the mutual adaptation and coordination between algorithmic features and features related to the internal structure of a computer system at the technical implementation level, such as adjusting the architecture or related parameters of a computer system to support the operation of a specific algorithm or model, making adaptive improvements to the algorithm or model based on a specific internal structure or parameters of a computer system, or a combination of the two.
For example, a neural network model compression method for a memristor accelerator includes: step 1, adjusting the pruning granularity according to the actual array size of the memristor during network pruning through an array-aware regularized incremental pruning algorithm to obtain a regularized sparse model adapted to the memristor array; step 2, reducing the ADC accuracy requirements and the number of low-resistance devices in the memristor array through a power-of-two quantization algorithm to reduce overall system power consumption.
In this example, in order to solve the problem of excessive hardware resource consumption and high power consumption of ADC units and computing arrays when the original model is mapped to the memristor accelerator, the solution uses pruning algorithms and quantization algorithms to adjust the pruning granularity according to the actual array size of the memristor, reducing the number of low-resistance devices in the memristor array. The above means are algorithm improvements made to improve the performance of the memristor accelerator. They are constrained by hardware condition parameters, reflecting the specific technical relationship between the algorithm characteristics and the internal structure of the computer system. They use technical means that conform to the laws of nature to solve the technical problems of excessive hardware consumption and high power consumption of the memristor accelerator, and obtain the technical effect of improving the internal performance of the computer system that conforms to the laws of nature. Therefore, this solution belongs to the technical solution.
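To make the quantization step more concrete, the following minimal Python sketch shows power-of-two weight quantization in isolation. It is illustrative only and is not the claimed method, which also ties pruning granularity to the actual memristor array size; the exponent range used here is an assumption.

import numpy as np

def power_of_two_quantize(weights, min_exp=-8, max_exp=0):
    # Quantize each nonzero weight to a signed power of two by rounding
    # the log2 of its magnitude; zero-valued (pruned) weights stay zero.
    signs = np.sign(weights)
    magnitudes = np.abs(weights)
    nonzero = magnitudes > 0
    exponents = np.zeros_like(magnitudes)
    exponents[nonzero] = np.clip(np.round(np.log2(magnitudes[nonzero])), min_exp, max_exp)
    return np.where(nonzero, signs * np.power(2.0, exponents), 0.0)

w = np.array([[0.30, -0.07, 0.00], [0.55, -0.90, 0.12]])
print(power_of_two_quantize(w))

In the example, it is this restriction of weight values that reduces the ADC accuracy requirements and the number of low-resistance devices in the memristor array, and hence overall system power consumption.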
Specific technical associations do not mean that changes must be made to the hardware structure of the computer system. For solutions to improve artificial intelligence algorithms, even if the hardware structure of the computer system itself has not changed, the solution can achieve the technical effect of improving the internal performance of the computer system as a whole by optimizing the system resource configuration. In such cases, it can be considered that there is a specific technical association between the characteristics of the artificial intelligence algorithm and the internal structure of the computer system, which can improve the execution effect of the hardware.
For example, a training method for a deep neural network model includes: when the size of training data changes, for the changed training data, respectively calculating the training time of the changed training data in preset candidate training schemes; selecting a training scheme with the shortest training time from the preset candidate training schemes as the optimal training scheme for the changed training data, the candidate training schemes including a single-processor training scheme and a multi-processor training scheme based on data parallelism; and performing model training on the changed training data in the optimal training scheme.
In order to solve the problem of slow training speed of deep neural network models, this solution selects a single-processor training solution or a multi-processor training solution with different processing efficiency for training data of different sizes. This model training method has a specific technical connection with the internal structure of the computer system, which improves the execution effect of the hardware during the training process, thereby obtaining the technical effect of improving the internal performance of the computer system in accordance with the laws of nature, thus constituting a technical solution.
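The scheme-selection logic this example describes can be illustrated with a short Python sketch. The cost models below are hypothetical placeholders, since the application only says that training times are computed for each candidate scheme without specifying how.

def select_training_scheme(data_size, time_estimators):
    # time_estimators maps a scheme name to a function that estimates
    # training time for a given data size (hypothetical cost models).
    estimates = {name: estimate(data_size) for name, estimate in time_estimators.items()}
    return min(estimates, key=estimates.get)

# Hypothetical estimators: the multi-processor scheme pays a fixed
# communication overhead but scales better as the data grows.
candidate_schemes = {
    "single_processor": lambda n: 0.002 * n,
    "multi_processor_data_parallel": lambda n: 5.0 + 0.0006 * n,
}

for n in (1_000, 50_000):
    print(n, select_training_scheme(n, candidate_schemes))

The eligibility analysis turns on the fact that choosing between single-processor and data-parallel execution is tied to how the hardware is used, so the improvement is to the internal performance of the computer system rather than to the training task alone.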
However, if a claim merely utilizes a computer system as a carrier for implementing the operation of an artificial intelligence algorithm or model, and does not reflect the specific technical relationship between the algorithm features and the internal structure of the computer system, it does not fall within the scope of Scenario 2.
For example, a computer system for training a neural network includes a memory and a processor, wherein the memory stores instructions and the processor reads the instructions to train the neural network by optimizing a loss function.
In this solution, the memory and processor in the computer system are merely conventional carriers for algorithm storage and execution. There is no specific technical association between the algorithm features involved in training the neural network using the optimized loss function and the memory and processor contained in the computer system. This solution solves the problem of optimizing neural network training, which is not a technical problem. The effect obtained is only to improve the efficiency of model training, which is not a technical effect of improving the internal performance of the computer system. Therefore, it does not constitute a technical solution.
Scenario 3: Using artificial intelligence algorithms to mine the inherent correlations in big data in specific application fields that conform to the laws of nature
When artificial intelligence algorithms or models are applied in various fields, data analysis, evaluation, prediction or recommendation can be performed. For such applications, if the claims reflect that the big data in a specific application field is processed, and artificial intelligence algorithms such as neural networks are used to mine the inherent correlation between data that conforms to the laws of nature, and the technical problem of how to improve the reliability or accuracy of big data analysis in a specific application field is solved, and the corresponding technical effects are obtained, then the solution of the claim constitutes a technical solution.
The means of using artificial intelligence algorithms or models to conduct data mining and train artificial intelligence models that can obtain output results based on input data cannot directly constitute technical means. Only when the inherent correlation between the data mined based on artificial intelligence algorithms or models conforms to the laws of nature, the relevant means as a whole can constitute technical means that utilize the laws of nature. Therefore, it is necessary to clarify in the scheme recorded in the claims which indicators, parameters, etc. are used to reflect the characteristics of the analyzed object in order to obtain the analysis results, and whether the inherent correlation between these indicators, parameters, etc. (model input) mined by artificial intelligence algorithms or models and the result data (model output) conforms to the laws of nature.
For example, a food safety risk prediction method obtains and analyzes historical food safety risk events to obtain header entity data and tail entity data representing food raw materials, edible items, and food sampling poisonous substances, and their corresponding timestamp data; based on each header entity data and its corresponding tail entity data, and its corresponding entity relationship carrying timestamp data representing the content level, risk or intervention of each type of hazard, corresponding four-tuple data is constructed to obtain a corresponding knowledge graph; the knowledge graph is used to train a preset neural network to obtain a food safety knowledge graph model; and the food safety risk at the prediction time is predicted based on the food safety knowledge graph model.
The background section of the application states that the existing technology uses static knowledge graphs to predict food safety risks, which cannot reflect the fact that food data changes over time in actual situations and ignores the influence between data. Those skilled in the art know that food raw materials, edible items, or food sampling poisons gradually change over time. For example, the longer a food is stored, the more microorganisms it contains, and the content of food sampling poisons increases accordingly; when the food contains multiple raw materials that can react chemically, the reaction may also cause food safety risks at some point in the future. This solution predicts food safety risks based on the inherent characteristic that food changes over time: timestamps are added when constructing the knowledge graph, and a preset neural network is trained on entity data related to food safety risks at each moment to predict the food safety risk at the time to be predicted. It uses technical means that follow the laws of nature to solve the technical problem of inaccurate prediction of food safety risks at future time points, and can obtain corresponding technical effects, thus constituting a technical solution.
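For illustration only, the timestamped quadruple structure described in this example might be represented as follows. The field names and sample values are assumptions; the Guidelines describe only the general head entity, relation, tail entity, and timestamp structure.

from dataclasses import dataclass

@dataclass
class Quadruple:
    head: str       # e.g., a food raw material or edible item
    relation: str   # e.g., a hazard's content level, risk, or intervention
    tail: str       # e.g., a substance found in food sampling
    timestamp: str  # when the observation was recorded

def build_quadruples(events):
    # events: dicts describing historical food-safety risk events
    # (hypothetical field names).
    return [
        Quadruple(e["head_entity"], e["relation"], e["tail_entity"], e["timestamp"])
        for e in events
    ]

events = [
    {"head_entity": "raw milk", "relation": "contaminant_level_high",
     "tail_entity": "aflatoxin M1", "timestamp": "2024-06-01"},
]
print(build_quadruples(events))

Training a temporal knowledge graph model over such quadruples, rather than a static graph, is what lets the solution capture how contamination evolves over time, which is the natural-law correlation the eligibility analysis relies on.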
If the intrinsic correlation between the indicator parameters mined by artificial intelligence algorithms or models and the prediction results is governed only by economic or social laws, the solution does not follow the laws of nature. For example, a method of estimating a regional economic prosperity index using a neural network mines the intrinsic correlation between economic data and electricity consumption data, on the one hand, and the economic prosperity index, on the other, and predicts the regional index based on that correlation. Since this correlation is governed by economic laws rather than natural laws, the solution does not use technical means and does not constitute a technical solution.

The full text is available here (Chinese only).

FTC Blog Outlines Factors for Companies to Consider About AI — AI: The Washington Report

The FTC staff recently published a blog post outlining four factors for companies to consider when developing or deploying AI products to avoid running afoul of the nation’s consumer protection laws.
The blog post does not represent formal guidance but it likely articulates the FTC’s thinking and enforcement approach, particularly regarding deceptive claims about AI tools and due diligence when using AI-powered systems.
Although the blog post comes just days before current Republican Commissioner Andrew Ferguson becomes FTC Chair on January 20, the FTC is likely to continue the focus on AI-related consumer protection issues that it has maintained under Chair Khan. Ferguson has voted in support of nearly all of the FTC’s AI consumer protection actions, but his one dissent underscores how he might dial back some of the current FTC’s aggressive AI consumer protection agenda.

The FTC staff in the Office of Technology and the Division of Advertising Practices in the FTC Bureau of Consumer Protection released a blog outlining four factors that companies should consider when developing or deploying an AI-based product. These factors are not binding, but they underscore the FTC’s continued focus on enforcing the nation’s consumer protection laws as they relate to AI.
The blog comes just under two weeks before current Republican Commissioner Andrew Ferguson will become the FTC Chair. However, under Ferguson, as we discuss below, the FTC will likely continue its same focus on AI consumer protection issues, though it may take a more modest approach.
The Four Factors for Companies to Consider about AI
The blog post outlines four factors for companies to consider when developing or deploying AI:

Doing due diligence to prevent harm before and while developing or deploying an AI service or product   

In 2024, the FTC filed a complaint against a leading retail pharmacy alleging that it “failed to take reasonable measures to prevent harm to consumers in its use of facial recognition technology (FRT) that falsely tagged consumers in its stores, particularly women and people of color, as shoplifters.” The FTC has “highlighted that companies offering AI models need to assess and mitigate potential downstream harm before and during deployment of their tools, which includes addressing the use and impact of the technologies that are used to make decisions about consumers.”  

Taking preventative steps to detect and remove AI-generated deepfakes and fake images, including child sexual abuse material and non-consensual intimate imagery   

In April 2024, the FTC finalized its impersonation rule, and the FTC also launched a Voice Cloning Challenge to create ways to protect consumers from voice cloning software. The FTC has previously discussed deepfakes and their harms to Congress in its Combatting Online Harms Report.  

Avoiding deceptive claims about AI systems or services that result in people losing money or harm users   

The FTC’s Operation AI Comply, which we covered, as well as other enforcement actions have taken aim at companies that have made false or deceptive claims about the capabilities of their AI products or services. Many of the FTC’s enforcement actions have targeted companies that have falsely claimed that their AI products or services would help people make money or start a business.  

Protecting privacy and safety   

AI models, especially generative AI ones, run on large amounts of data, some of which may be highly sensitive. “The Commission has a long record of providing guidance to businesses about ensuring data security and protecting privacy,” as well as taking action against companies that have failed to do so.  

While the four factors highlight consumer protection issues that the FTC has focused on, FTC staff cautions that the four factors are “not a comprehensive overview of what companies should be considering when they design, build, test, and deploy their own products.”
New FTC Chair: New or Same Focus on AI Consumer Protection Issues?
The blog post comes under two weeks before President-elect Trump’s pick to lead the FTC, current FTC Commissioner Andrew Ferguson, becomes the FTC Chair. Under Chair Ferguson, the FTC’s focus on the consumer protection side of AI is unlikely to undergo significant changes; Ferguson has voted in support of nearly all of the FTC’s consumer protection AI enforcement actions.
However, Ferguson’s one dissent in a consumer protection case brought against an AI company illuminates how the FTC under his leadership could take a more modest approach to consumer protection issues related to AI. In his dissent, Commissioner Ferguson wrote: 
The Commission’s theory is that Section 5 prohibits products and services that could be used to facilitate deception or unfairness because such products and services are the means and instrumentalities of deception and unfairness. Treating as categorically illegal a generative AI tool merely because of the possibility that someone might use it for fraud is inconsistent with our precedents … and risks strangling a potentially revolutionary technology in its cradle.
Commissioner Ferguson’s point seems well taken. Less clear is where he would draw the line. Moreover, as a practical matter, his ability to move the needle would likely need to wait until President Trump’s other nominee, Mark Meador, is confirmed, as expected, later this year.
Matthew Tikhonovsky also contributed to this article.

Black Box Issues [Podcast]

In part three of our series on potential pitfalls in the use of artificial intelligence (or AI) when it comes to employment decisions, partner Guy Brenner and senior counsel Jonathan Slowik dive into the concept of “black box” systems—AI tools whose internal decision-making processes are not transparent. The internal workings of such systems may not be well understood, even by the developers who create them. We explore the challenges this poses for employers seeking to ensure that their use of AI in employment decisions does not inadvertently introduce bias into the process. Be sure to tune in for a closer look at the complexities of this conundrum and what it means for employers.

McDermott+ Check-Up: January 10, 2025

THIS WEEK’S DOSE

119th Congress Begins. The new Congress began with key membership announcements for relevant healthcare committees.
Cures 2.1 White Paper Published. The document outlines the 21st Century Cures 2.1 legislative proposal, focusing on advancing healthcare technologies and fostering innovation.
Senate Budget Committee Members Release Report on Private Equity. The report, released by the committee’s chair and ranking member from the 118th Congress, includes findings from an investigation into private equity’s role in healthcare.
HHS OCR Proposes Significant Updates to HIPAA Security Rule. The US Department of Health & Human Services (HHS) Office for Civil Rights (OCR) seeks to address current cybersecurity concerns.
HHS Releases AI Strategic Plan. The plan outlines how HHS will prioritize resources and coordinate efforts related to artificial intelligence (AI).
CFPB Removes Medical Debt from Consumer Credit Reports. The Consumer Financial Protection Bureau (CFPB) finalized its 2024 proposal largely as proposed.
President Biden Signs Several Public Health Bills into Law. The legislation includes the reauthorization and creation of public health programs related to cardiomyopathy, autism, and emergency medical services for children.

CONGRESS

119th Congress Begins. The 119th Congress began on January 3, 2025. Lawmakers reelected Speaker Johnson in the first round of votes and adopted the House rules package. The first full week in session was slow-moving due to a winter storm in Washington, DC; funeral proceedings for President Jimmy Carter; and the certification of electoral college votes. Committees are still getting organized, and additions to key health committees include:

House Energy & Commerce: Reps. Bentz (R-OR), Houchin (R-IN), Fry (R-SC), Lee (R-FL), Langworthy (R-NY), Kean (R-NJ), Rulli (R-OH), Evans (R-CO), Goldman (R-TX), Fedorchak (R-ND), Ocasio-Cortez (D-NY), Mullin (D-CA), Carter (D-LA), McClellan (D-VA), Landsman (D-OH), Auchincloss (D-MA), and Menendez (D-NJ).
House Ways & Means: Reps. Moran (R-TX), Yakym (R-IN), Miller (R-OH), Bean (R-FL), Boyle (D-PA), Plaskett (D-VI), and Suozzi (D-NY).
Senate Finance: Sens. Marshall (R-KS), Sanders (I-VT), Smith (D-MN), Luján (D-NM), Warnock (D-GA), and Welch (D-VT).
Senate Health, Education, Labor & Pensions: Sens. Scott (R-SC), Hawley (R-MO), Banks (R-IN), Crapo (R-ID), Blackburn (R-TN), Kim (D-NJ), Blunt Rochester (D-DE), and Alsobrooks (D-MD).

Congress has a busy year ahead. The continuing resolution (CR) enacted in December 2024 included several short-term extensions of health provisions (and excluded many others that had been included in an earlier proposed bipartisan health package), and these extensions will expire on March 14, 2025. Congress will need to complete action on fiscal year (FY) 2025 appropriations by this date, whether by passing another CR through the end of the FY, or by passing a full FY 2025 appropriations package. The short-term health extenders included in the December CR could be further extended in the next appropriations bill, and Congress also has the opportunity to revisit the bipartisan, bicameral healthcare package that was unveiled in December but ultimately left out of the CR because of pushback from Republicans about the overall bill’s size.
The 119th Congress will also be focused in the coming weeks on advancing key priorities – including immigration reform, energy policy, extending the 2017 tax cuts, and raising the debt limit – through the budget reconciliation process. This procedural maneuver allows the Senate to advance legislation with a simple majority, rather than the 60 votes needed to overcome the threat of a filibuster. Discussions are underway about the scope of this package and the logistics (will there be one reconciliation bill or two?), and we expect to learn more in the days and weeks ahead. It is possible that healthcare provisions could become a part of such a reconciliation package.
Cures 2.1 White Paper Published. Rep. Diana DeGette (D-CO) and former Rep. Larry Bucshon (R-IN) released a white paper on December 24, 2024, outlining potential provisions of the 21st Century Cures 2.1 legislative proposal expected to be introduced later this year. This white paper and the anticipated legislation are informed by responses to a 2024 request for information. The white paper is broad, discussing potential Medicare reforms relating to gene therapy access, coverage determinations, and fostering innovation. With Rep. Bucshon’s retirement, all eyes are focused on who will be the Republican lead on this effort.
Senate Budget Committee Members Release Report on Private Equity. The report contains findings from an investigation into private equity’s role in healthcare led by the leaders of the committee in the 118th Congress, then-Chair Whitehouse (D-RI) and then-Ranking Member Grassley (R-IA). The report includes two case studies and states that private equity firms have become increasingly involved in US hospitals. They write that this trend impacts quality of care, patient safety, and financial stability at hospitals across the United States, and the report calls for greater oversight, transparency, and reforms of private equity’s role in healthcare. A press release that includes more documents related to the case studies can be found here.
ADMINISTRATION

HHS OCR Proposes Significant Updates to HIPAA Security Rule. HHS OCR released a proposed rule, HIPAA Security Rule to Strengthen the Cybersecurity of Electronic Protected Health Information (ePHI). HHS OCR proposes minimum cybersecurity standards that would apply to health plans, healthcare clearinghouses, most healthcare providers (including hospitals), and their business associates. Key proposals include:

Removing the distinction between “required” and “addressable” implementation specifications and making all implementation specifications required with specific, limited exceptions.
Requiring written documentation of all Security Rule policies, procedures, plans, and analyses.
Updating definitions and revising implementation specifications to reflect changes in technology and terminology.
Adding specific compliance time periods for many existing requirements.
Requiring the development and revision of a technology asset inventory and a network map that illustrates the movement of ePHI throughout the regulated entity’s electronic information system(s) on an ongoing basis, but at least once every 12 months and in response to a change in the regulated entity’s environment or operations that may affect ePHI.
Requiring notification of certain regulated entities within 24 hours when a workforce member’s access to ePHI or certain electronic information systems is changed or terminated.
Strengthening requirements for planning for contingencies and responding to security incidents.
Requiring regulated entities to conduct an audit at least once every 12 months to ensure their compliance with the Security Rule requirements.

The HHS OCR fact sheet is available here. Comments are due on March 7, 2025. Because this is a proposed rule, the incoming Administration will determine the content and next steps for the final rule.
HHS Releases AI Strategic Plan. In response to President Biden’s Executive Order on AI, HHS unveiled its AI strategic plan. The plan is organized into five primary domains:

Medical research and discovery
Medical product development, safety and effectiveness
Healthcare delivery
Human services delivery
Public health

Within each of these chapters, HHS discusses in depth the context of AI, the stakeholders engaged in the domain’s AI value chain, opportunities for the application of AI in the domain, trends in AI for the domain, potential use cases and risks, and an action plan.
The report also highlights efforts related to cybersecurity and internal operations. Lastly, the plan outlines responsibility for AI efforts within HHS’s Office of the Chief Artificial Intelligence Officer.
CFPB Removes Medical Debt from Consumer Credit Reports. The final rule removes $49 billion in unpaid medical bills from the credit reports of 15 million Americans, building on the Biden-Harris Administration’s work with states and localities. The White House fact sheet can be found here. Whether the incoming Administration will intervene in this rulemaking remains an open question.
President Biden Signs Several Public Health Bills into Law. These bills from the 118th Congress include:

H.R. 6829, the HEARTS Act of 2024, which mandates that the HHS Secretary work with the Centers for Disease Control and Prevention, patient advocacy groups, and health professional organizations to develop and distribute educational materials on cardiomyopathy.
H.R. 6960, the Emergency Medical Services for Children Reauthorization Act of 2024, which reauthorizes through FY 2029 the Emergency Medical Services for Children State Partnership Program.
H.R. 7213, the Autism CARES Act of 2024, which reauthorizes, through FY 2029, the Developmental Disabilities Surveillance and Research Program and the Interagency Autism Coordinating Committee in HHS, among other HHS programs to support autism education, early detection, and intervention.

QUICK HITS

ACIMM Hosts Public Meeting. The HHS Advisory Committee on Infant and Maternal Mortality (ACIMM) January meeting included discussion and voting on draft recommendations related to preconception/interconception health, systems issues in rural health, and social drivers of health. The agenda can be found here.
CBO Releases Report on Gene Therapy Treatment for Sickle Cell Disease. The Congressional Budget Office (CBO) report did not estimate the federal budgetary effects of any policy, but instead discussed how CBO would assess related policies in the future.
CMS Reports Marketplace 2025 Open Enrollment Data. As of January 4, 2025, 23.6 million consumers had selected a plan for coverage in 2025, including more than three million new consumers. Read the fact sheet here.
CMS Updates Hospital Price Transparency Guidance. The agency posted updated frequently asked questions (FAQs) on hospital price transparency compliance requirements. Some of the FAQs relate to new requirements that took effect January 1, 2025, as finalized in the Calendar Year 2024 Outpatient Prospective Payment System/Ambulatory Surgical Center Final Rule, and others modify existing requirements addressed in previous FAQs.
GAO Releases Reports on Older Americans Act-Funded Services, ARPA-H Workforce. The US Government Accountability Office (GAO) report recommended that the Administration for Community Living develop a written plan for its work with the Interagency Coordinating Committee on Healthy Aging and Age-Friendly Communities to improve services funded under the Older Americans Act. In another report, the GAO recommended that the Advanced Research Projects Agency for Health (ARPA-H) develop a workforce planning process and assess scientific personnel data.
VA Expands Cancers Covered by PACT Act. The US Department of Veterans Affairs (VA) will add several new cancers to the list of those presumed to be related to burn pit exposure, lowering the burden of proof for veterans to receive disability benefits. Read the press release here.
HHS Announces $10M in Awards for Maternal Health. The $10 million in grants from the Substance Abuse and Mental Health Services Administration (SAMHSA) will go to a new community-based maternal behavioral health services grant program. Read the press release here.
Surgeon General Issues Advisory on Link Between Alcohol and Cancer Risk. The advisory includes a series of recommendations to increase awareness of the connection between alcohol consumption and cancer risk and update the existing Surgeon General’s health warning label on alcohol-containing beverages. Read the press release here.
SAMHSA Awards CCBHC Medicaid Demonstration Planning Grants. The grants will go to 14 states and Washington, DC, to plan a Certified Community Behavioral Health Clinic (CCBHC). Read the press release here.
HHS Announces Membership of Parkinson’s Advisory Council. The Advisory Council on Parkinson’s Research, Care, and Services will be co-chaired by Walter J. Koroshetz, MD, Director of the National Institutes of Health’s National Institute of Neurological Disorders and Stroke, and David Goldstein, MS, Associate Deputy Director for the Office of Science and Medicine for HHS’s Office of the Assistant Secretary for Health. Read the press release here.

NEXT WEEK’S DIAGNOSIS

The House and Senate are in session next week and will continue to organize for the 119th Congress. Confirmation hearings are expected to begin in the Senate for President-elect Trump’s nominees, although none in the healthcare space have been announced yet. On the regulatory front, CMS will publish the Medicare Advantage rate notice.

5 Trends to Watch: 2025 EU Data Privacy & Cybersecurity

Full Steam Ahead: The European Union’s (EU) Artificial Intelligence (AI) Act in Action — As the EU’s landmark AI Act officially takes effect, 2025 will be a year of implementation challenges and enforcement. Companies deploying AI across the EU will likely navigate strict rules on data usage, transparency, and risk management, especially for high-risk AI systems. Privacy regulators are expected to play a key role in monitoring how personal data is used in AI model training, with potential penalties for noncompliance. The interplay between the AI Act and the General Data Protection Regulation (GDPR) may add complexity, particularly for multinational organizations.
Network and Information Security Directive (NIS2) Matures: A New Era of Cybersecurity Regulation — The EU’s NIS2 Directive will enter its enforcement phase, expanding cybersecurity obligations for critical infrastructure and key sectors. Companies must adapt to stricter breach notification rules, risk management requirements, and supply-chain security mandates. Regulators are expected to focus on cross-border coordination in response to major incidents, with early cases likely setting important precedents. Organizations will likely face increasing scrutiny of their cybersecurity disclosures and incident response protocols.
The Evolution of Data Transfers: Toward a Unified Framework — After years of turbulence, 2025 may mark a turning point for transatlantic and global data flows. The EU-U.S. Data Privacy Framework will face ongoing reviews by the European Data Protection Board (EDPB) and potential legal challenges, but it offers a clearer path forward. Meanwhile, the EU may continue striking adequacy agreements with key trading partners, setting the stage for a harmonized approach to cross-border data transfers. Companies will need robust mechanisms, such as Standard Contractual Clauses and emerging Transfer Impact Assessments (TIAs), to maintain compliance.
Consumer Rights Expand Under the GDPR’s Influence — The GDPR continues to set the global benchmark for privacy laws, and 2025 will see the ripple effect of its influence as EU member states refine their own data protection frameworks. Enhanced consumer rights, such as the right to explanation in algorithmic decision-making and stricter opt-in requirements for data use, are anticipated. Regulators are also likely to target dark patterns and deceptive consent mechanisms, driving companies toward greater transparency in their user interfaces and data practices.
Digital Markets Act Meets GDPR: Privacy in the Platform Economy — The Digital Markets Act (DMA), fully enforceable in 2025, will bring sweeping changes to large online platforms, or “gatekeepers.” Interoperability mandates, restrictions on data combination across services, and limits on targeted advertising will intersect with GDPR compliance. The overlap between DMA and GDPR enforcement will challenge platforms to adapt their practices while balancing privacy obligations. This regulatory synergy may reshape data monetization strategies and set a precedent for digital market governance worldwide.

AI Versus MFA

Ask any chief information security officer (CISO), cyber underwriter or risk manager, or cybersecurity attorney what controls are critical for protecting an organization’s information systems, and you’ll likely find multifactor authentication (MFA) at or near the top of every list. Government agencies responsible for helping to protect the U.S. and its information systems and assets (e.g., CISA, FBI, Secret Service) send the same message. But that message may be evolving a bit as criminal threat actors have started to exploit weaknesses in MFA.
According to a recent report in Forbes, for example, threat actors are harnessing AI to break through multifactor authentication strategies designed to prevent new account fraud. “Know Your Customer” procedures are critical for validating the identity of customers in certain industries, such as financial services and telecommunications. Employers increasingly face similar issues when recruiting: after making a hiring decision, they may find that the person doing the work is not the person they interviewed for the position.
Threat actors have leveraged a new AI deepfake tool, available on the dark web, to bypass the biometric systems that have been used to stop new account fraud. According to the Forbes article, the process goes something like this:
“1. Bad actors use one of the many generative AI websites to create and download a fake image of a person.
2. Next, they use the tool to synthesize a fake passport or a government-issued ID by inserting the fake photograph…
3. Malicious actors then generate a deepfake video (using the same photo) where the synthetic identity pans their head from left to right. This movement is specifically designed to match the requirements of facial recognition systems. If you pay close attention, you can certainly spot some defects. However, these are likely ignored by facial recognition because videos are prone to have distortions due to internet latency issues, buffering or just poor video conditions.
4. Threat actors then initiate a new account fraud attack where they connect to a cryptocurrency exchange and proceed to upload the forged document. The account verification system then asks to perform facial recognition where the tool enables attackers to connect the video to the camera’s input.
5. Following these steps, the verification process is completed, and the attackers are notified that their account has been verified.”
Sophisticated AI tools are not the only MFA vulnerability. In December 2024, the Cybersecurity & Infrastructure Security Agency (CISA) issued best practices for mobile communications. Among its recommendations, CISA advised mobile phone users, in particular highly targeted individuals:
Do not use SMS as a second factor for authentication. SMS messages are not encrypted—a threat actor with access to a telecommunication provider’s network who intercepts these messages can read them. SMS MFA is not phishing-resistant and is therefore not strong authentication for accounts of highly targeted individuals.
In its 2023 Internet Crime Report, the FBI reported more than 1,000 “SIM swapping” investigations. A SIM swap is another technique threat actors use, involving the “use of unsophisticated social engineering techniques against mobile service providers to transfer a victim’s phone service to a mobile device in the criminal’s possession.”
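For readers who want a concrete sense of why CISA distinguishes SMS codes from authenticator-app codes, the short sketch below (purely illustrative, not drawn from the CISA guidance) shows a time-based one-time password (TOTP) computed under RFC 6238 using only the Python standard library. The shared secret and parameters are assumptions chosen for demonstration. The key point is that the server and the user’s app each derive the code locally from the same secret, so no code travels over the carrier network for a threat actor to intercept or redirect via a SIM swap.

# A minimal TOTP (RFC 6238) sketch using only the Python standard library.
# Assumptions for illustration: a base32-encoded shared secret, SHA-1,
# 30-second time steps, and 6-digit codes (the common defaults).
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, step=30, for_time=None):
    """Derive the current one-time code from the shared secret."""
    key = base64.b32decode(secret_b32.upper())
    counter = int((for_time if for_time is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# Both the server and the user's authenticator app hold the same secret and
# compute the code independently -- nothing is sent over the carrier network,
# so there is no SMS message to intercept or to redirect via a SIM swap.
print(totp("JBSWY3DPEHPK3PXP"))  # illustrative secret only

Even an app-generated code like this is not phishing-resistant, however, which is why CISA points highly targeted individuals toward FIDO-based (hardware- or passkey-backed) authentication.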
In December, Infosecurity Magazine reported on yet another MFA vulnerability; indeed, reports of various weaknesses in MFA are not hard to find.
Are we recommending against the use of MFA? Certainly not. Our point is simply to remind readers that there are no silver bullets for securing information systems and that AI is not used only by the good guys. An information security program, preferably a written one (a WISP), requires continuous vigilance, and not just from the IT department, as new technologies are leveraged to bypass older ones.