SEC Crypto 2.0: SEC Announces New Crypto Task Force

On January 21, 2025, the SEC announced the formation of a new Crypto Task Force. Styled “Crypto 2.0” in the SEC press release, the announcement signals a shift in the agency’s approach to the digital asset sector coincident with the change in presidential administrations.
The task force will be led by Commissioner Hester Peirce and draw on staff from around the agency. Its mission is to “collaborate with Commission staff and the public to set the SEC on a sensible regulatory path that respects the bounds of the law.” The task force anticipates future roundtables and invites the submission of public comments. It will also coordinate with other state and federal agencies, including the Commodity Futures Trading Commission.
The SEC press release announcing the task force’s creation is somewhat critical of the agency’s prior approach to regulating digital assets, noting that the agency “relied primarily on enforcement actions to regulate crypto retroactively and reactively, often adopting novel and untested legal interpretations along the way.” The press release noted, “Clarity regarding who must register, and practical solutions for those seeking to register, have been elusive.” The announcement concludes, “The SEC can do better.”
The crypto industry heavily supported the candidacy of President Trump, and the President’s nominee for SEC chairman, Paul Atkins, is likely to support a reset of the SEC’s approach to regulating the sector. After the crypto winter, it appears spring is coming to the SEC.

5 Trends to Watch: 2025 Financial Services Litigation

Increasing Focus on Payments — Payments litigation will likely continue and increase in 2025 in the United States and globally, along with increased use of Automated Clearing House (ACH) transfers and wires, bank and non-bank competition, state regulation, and more sophisticated fraud schemes. This trend should continue regardless of the incoming administration’s enforcement priorities. Borrowing from Europe, the United States could see increasing pressure for a Payment Services Regulation or other laws to shift more risk of payment fraud to financial institutions. State-based efforts to regulate interchange fees may create additional risk.
Increasing Use of Mass Arbitration and Rise of International Arbitration — Mass arbitration in the United States is likely to continue and increase, particularly as plaintiffs’ counsel become more equipped, efficient, and coordinated at lodging these attacks. International arbitration also is likely to increase, driven by globalization, diversification, and the growing complexity of cross-border issues. The strategic advantage of leveraging global litigation offices in regions like Latin America, Europe, and the Middle East will be crucial, as these areas continue to be hot spots for international business activities and disputes. Reliance on local knowledge will become increasingly important as parties seek more efficient and culturally sensitive resolutions.
Anti-Money Laundering (AML), Know Your Customer (KYC), and Compliance-Related Issues — Activity on AML-related matters increased globally over the past year, and this trend appears likely to continue. The increase also is likely to carry over to civil litigation, including complex fraud and Ponzi schemes and allegations relating to improper asset management or trust disputes, where financial institutions are being more heavily scrutinized over actions taken by their customers and the plaintiffs’ bar is expected to try to create more hospitable case law and jurisdictions. As regulatory scrutiny intensifies globally, financial institutions will continue to find themselves at the intersection of civil litigation and concurrent regulatory/criminal investigations, creating additional risks. The growing complexity of these cases underscores the need for banks to maintain vigilance and adaptability.
Changing Enforcement and Regulatory Risks — A slowdown of Consumer Financial Protection Bureau (CFPB)-related activity, including a relative slowdown of crypto enforcement, could take place over the course of the year due to the change of administration and agency leadership, but there could be an increase in activity by certain states’ attorneys general. State-based regulation and legislation could pose additional risks, creating jurisdictional and other challenges. State regulatory agencies may continue enforcement efforts related to consumer protections in the financial services space. There also may be continued focus on fair lending practices, with potential litigation concerning artificial intelligence’s (AI) role in lending and other decisions. The rise of digital currencies also has introduced new legal challenges: cryptocurrency exchanges are being held accountable for frauds occurring on their platforms, and ongoing uncertainties in digital asset regulations are resulting in compliance challenges and related litigation.
Information Use and Security — The increasing use of new technologies and AI likely will result in increased risks and a rise in civil litigation. Litigation may emerge over AI tools allegedly infringing on copyrights. AI-based pricing algorithms also may be scrutinized for potential collusion and antitrust violations or for discrimination and bias. More U.S. states are proposing and passing comprehensive AI and other laws that do not have broad financial institution or Gramm-Leach-Bliley Act-type exemptions, so there could be additional regulation. States also could continue efforts to pass new privacy laws to address areas not currently regulated through federal laws.

CFPB Seeks Public Comment on Digital Payment Privacy and Consumer Protections

On January 10, 2025, the U.S. Consumer Financial Protection Bureau (“CFPB”) invited public comment on strengthening privacy protections for emerging payment mechanisms and on a proposed interpretive rule extending financial consumer protections to them. The agency’s request for information (“RFI”) aims to clarify how existing financial privacy laws apply to emerging consumer payment mechanisms, including digital currencies and gaming platforms. The proposed interpretive rule (“Proposed Rule”) is meant to extend financial consumer protections against errors and fraud to those emerging payment mechanisms.
The CFPB’s RFI focuses on how companies collect, use and share consumer financial data. The agency’s research indicates that some digital payment platforms collect more data than necessary to complete transactions, often integrating this data with broader consumer information such as location and browsing history. This practice raises concerns about personalized pricing and potential consumer harm. The CFPB is evaluating whether existing regulations, such as the Gramm-Leach-Bliley Act and the Fair Credit Reporting Act, sufficiently address modern data surveillance practices.
In addition to privacy concerns, the CFPB has issued the Proposed Rule to clarify the application of Regulation E of the Electronic Fund Transfer Act to emerging payment mechanisms. Regulation E provides consumer protections against errors and unauthorized transactions in electronic fund transfers. The proposed rule would expand key definitions within Regulation E to include:

Financial Institutions: Extending coverage to nonbank entities that facilitate electronic fund transfers.
Funds: Broadening the definition to encompass digital assets that function as a medium of exchange, including stablecoins and similar payment instruments.
Accounts: Expanding the definition to cover virtual currency wallets, gaming accounts and credit card rewards points used for transactions.

The CFPB’s proposal highlights the growing role of digital payment options beyond traditional banking systems and seeks to ensure consumer protection measures apply consistently across emerging platforms.
Public comments may be submitted on the Proposed Rule by March 31, 2025, and on the RFI by April 11, 2025.

Managing Artificial Intelligence: The Monetary Authority of Singapore’s Recommendations on AI Model Risk Management

Introduction and Background
On 5 December 2024, as part of the Monetary Authority of Singapore’s (MAS) incremental efforts to ensure responsible use of artificial intelligence (AI) in Singapore’s financial sector, MAS published recommendations on AI model risk management in an information paper[1] following a review of AI-related practices of selected banks.
In the information paper, MAS stressed that the good practices highlighted should also apply to other financial institutions. This alert briefly outlines the key recommendations in the information paper, organized around three key focus areas that MAS expects banks and financial institutions to keep in mind when developing and deploying AI: (1) oversight and governance of AI, (2) key risk management systems and processes for AI, and (3) development, validation and deployment of AI.
Key Focus Area 1: Oversight and Governance of AI[2]
Existing risk governance frameworks and structures (such as those related to data, technology and cyber; third-party risk management; and legal and compliance) remain relevant for AI governance and risk management. In tandem with these existing control functions, MAS deems it good practice for banks to do the following: 

Establish cross-functional oversight forums to avoid gaps in AI risk management and to ensure that the bank’s standards and processes are aligned across the bank and keep pace with the state of the bank’s AI usage.
Update control standards, policies and procedures to keep pace with increasing AI use and new AI developments, including performance testing of AI for new use cases, and clearly set out roles and responsibilities for addressing AI risk.
Develop clear statements and guidelines to govern areas such as fair, ethical, accountable and transparent use of AI across the bank to prevent potential harms to consumers and other stakeholders arising from the use of AI.
Build capabilities in AI across the bank to support both innovation and risk management.

Key Focus Area 2: Key Risk Management Systems and Processes[3]
MAS also observed that most banks recognised the need to establish or update key risk management systems and processes for AI, particularly in the following areas:

Policies and procedures for identifying AI usage and risk across the bank, so that commensurate risk management can be applied to the respective AI model.
Systems and processes to ensure the completeness of a bank’s AI inventories, which also capture the approved scope of use for that particular AI (e.g., the purpose, use case, application, system and other relevant conditions) and provide a central view of AI usage to support oversight.
Assessment of the risk materiality of AI that covers key risk dimensions, such as AI’s impact on the customer, bank and stakeholders; the complexity of AI model or system used; and the bank’s reliance on AI, which takes into account the autonomy granted to AI and the involvement of humans, so that relevant controls can be applied proportionately. 

Key Focus Area 3: Development, Validation and Deployment of AI[4]
Most banks have established standards and processes for development, validation and deployment of AI to address key risks. MAS deems it good practice for banks and financial institutions to do the following:

In relation to the development of AI, to focus on data management, model selection, robustness and stability, explainability and fairness, as well as reproducibility and auditability. 
In relation to the validation of AI, to require independent validations or reviews of AI of higher risk materiality prior to deployment and to ensure that development and deployment standards have been adhered to. For AI of lower risk materiality, most banks conduct peer reviews that are calibrated to the risks posed by the use of AI prior to deployment. 
In relation to the deployment, monitoring and change management of AI, to perform predeployment checks, closely monitor deployed AI based on appropriate metrics, and apply the appropriate change management standards and processes to ensure that AI would behave as intended when deployed.

Generative AI and Third-Party AI[5]
MAS has noted that the use of generative AI is still in its early stages in banks and financial institutions. However, MAS suggests that banks and financial institutions should generally try to apply existing governance and risk management structures and processes where relevant and practicable. Innovation and risk management should be balanced by adopting the following: 

Strategies and approaches in which a bank leverages the general-purpose nature of generative AI for key enabling modules or services but limits its current scope to use cases that assist or augment human work and operational efficiency and are not directly customer-facing. 
Process controls, such as setting up cross-functional risk control checks at key stages of the generative AI life cycle and requiring human oversight of generative AI decisions, with attention to user education and training on the limitations of generative AI tools.
Technical controls, such as selection, testing and evaluation of generative AI models in the bank’s use cases; developing reusable modules to facilitate testing and evaluation; assessing different aspects of generative AI model performance and risks; establishing input and output filters as guardrails to address toxicity, bias and privacy issues; and mitigating data security risk via measures such as the use of private clouds or on-premise servers and limiting the access of generative AI to sensitive information. A minimal illustrative sketch of an input/output filter appears after this list.

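By way of illustration only, the following minimal sketch shows one way the input and output filters mentioned above might be structured. It is our own simplified example, not drawn from the MAS information paper; the deny patterns, function names and placeholder model are hypothetical, and production guardrails would rely on far more sophisticated toxicity, bias and privacy classifiers.

```python
import re

# Hypothetical deny patterns; real deployments would use trained classifiers
# and curated policy lists rather than two regexes.
DENY_PATTERNS = [
    re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),  # card-like numbers
    re.compile(r"\b[A-Z]\d{7}[A-Z]\b"),  # NRIC-like identifiers (illustrative)
]

def passes_guardrails(text: str) -> bool:
    """Return False if the text matches any deny pattern."""
    return not any(p.search(text) for p in DENY_PATTERNS)

def guarded_generate(prompt: str, generate) -> str:
    """Wrap a model call with an input filter and an output filter."""
    if not passes_guardrails(prompt):
        return "[blocked: input failed guardrail checks]"
    output = generate(prompt)
    if not passes_guardrails(output):
        return "[blocked: output failed guardrail checks]"
    return output

# Example with a stand-in model function:
print(guarded_generate("Summarise our AML policy", lambda p: "Policy summary..."))
```
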
In relation to third-party AI, existing third-party risk management standards and processes continue to play an important role in banks’ efforts to mitigate risks. As far as practicable, MAS suggests that banks extend controls for internally developed AI to third-party AI. Banks should also supplement controls for third-party AI with other approaches to mitigate additional risks. These include the following:

Conducting compensatory testing to verify the third-party AI model’s robustness and stability and detect potential biases.
Developing robust contingency plans to address potential failures, unexpected behaviour of third-party AI or discontinued vendor support.
Updating legal agreements and contracts with third-party AI providers to include clauses that provide for performance guarantees, data protection, the right to audit and notification when AI is introduced in third-party providers’ solutions to the banks and financial institutions.
Improving staff training on AI literacy, risk awareness and mitigation. 

Conclusion
MAS has highlighted that robust oversight and governance of AI, supported by comprehensive identification and recording of AI inventories, appropriate risk materiality assessments, and sound development, validation and deployment standards, are important areas of focus for financial institutions and banks using AI. Financial institutions and banks will need to keep in mind that the AI landscape will continue to evolve, and that existing standards and processes will need to be reviewed and updated in line with MAS guidance and industry best practices to ensure proper governance and risk management of AI and generative AI.

Footnotes

[1] “Artificial Intelligence Model Risk Management: Observations from a Thematic Review,” accessible at https://www.mas.gov.sg/publications/monographs-or-information-paper/2024/artificial-intelligence-model-risk-management (the Information Paper).
[2] Information Paper paras. 4.1–4.5.
[3] Information Paper paras. 5.1–5.3.
[4] Information Paper paras. 6.1–6.5.
[5] Information Paper paras. 7.1–7.2.

Dubai Court of Cassation Recognises the Concept of Without Prejudice Settlement Discussions

Introduction
In a recent judgment in Case No. 486 of 2024 (issued on 22 October 2024), the Dubai Court of Cassation (Court of Cassation) upheld the decision of the Dubai Court of Appeal (Court of Appeal) (issued on 3 April 2024 in Case No. 31 of 2024) that parties’ unsuccessful settlement discussions are inadmissible as evidence of a party’s liability. 
Background
The claimant filed a case in the Dubai Court of First Instance (Court of First Instance) arising out of an agreement to purchase cryptocurrency. The claimant alleged that the agreed amount of cryptocurrency had not been transferred following payment and claimed compensation, plus interest. The Court of First Instance only awarded a small part of the claimed amount and dismissed the rest of the claim. The claimant appealed to the Court of Appeal on the basis that the Court of First Instance had failed to take into consideration WhatsApp communications between the parties during settlement discussions in which the defendant admitted to owing the claimed amount. The Court of Appeal held that statements made during amicable settlement discussions are not evidence of liability, as they are given on a “without prejudice” basis and such statements are protected from being used as evidence of liability. The claimant appealed that judgment to the Court of Cassation. 
Judgment of the Court of Cassation
The Court of Cassation upheld the decision of the Court of Appeal and dismissed the appeal. The Court of Cassation reiterated that settlement discussions, if unsuccessful, are inadmissible as evidence of a party’s liability. 
Analysis
Although the concept of without prejudice communication is well established in common law jurisdictions, the laws of the United Arab Emirates (UAE) do not expressly recognise the concept, and the onshore UAE courts have historically been open to receiving evidence of parties’ settlement discussions. As there is no system of binding precedent in the UAE, it remains to be seen whether this judgment marks a change in approach by the onshore UAE courts. If followed, this would be a welcomed development, as it would allow contracting parties to seek to negotiate a settlement without the risk of any settlement offers being used as evidence of liability. It is also worth noting that none of the judgments at any level are clear as to whether the correspondence at issue was marked “without prejudice.” This may suggest that no specific designation is required, provided the correspondence was sent as part of a genuine effort to settle the dispute. Nonetheless, parties may have more success asserting privilege over correspondence that has been clearly marked as such. 

FCA Consults on Second Phase of Enforcement Investigation Proposals

Following the February 2024 consultation by the UK Financial Conduct Authority (FCA) on changes to its Enforcement Guide and publicising enforcement investigations (CP 24/2) (First Consultation), the FCA issued a further consultation on its proposals in November 2024 (CP 24/2, part 2) (Second Consultation). The Second Consultation sets out changes to the FCA’s initial proposals in response to feedback from the First Consultation.
Background
The First Consultation outlined the FCA’s proposed changes to how it publicises its enforcement investigations. The FCA aims to increase transparency about its enforcement work and its deterrent effect, and to disseminate best practice. The FCA also proposed wider changes to its Enforcement Guide to reduce duplication and make information about its processes more accessible.
Many stakeholders considered the proposals set out in the First Consultation to be somewhat controversial, sparking a high volume of comments and concerns. Chapter 2 of the Second Consultation summarises the common issues raised in responses to the First Consultation.
Since the First Consultation closed to responses on 30 April 2024, the FCA has been extensively engaging with stakeholders and responding to requests for information from parliamentary committees. In an oral evidence session before the House of Lords Financial Services Regulation Committee, Nikhil Rathi, FCA Chief Executive, and Ashley Alder, FCA Chair, noted the FCA recognised the strength of feedback and would be seeking further consultation on “fundamentally reshaped” proposals. 
Second Consultation
In response to that feedback, the FCA launched the Second Consultation, setting out revised proposals intended to address the concerns raised and provide greater clarity.
In particular, the Second Consultation made a number of key changes to the original proposals:

Negative Impact Considerations. The FCA proposes to consider the potential negative impact on a firm (such as reputational harm) when applying the public interest test to determine whether the firm should be named and the investigation announced.
Public Confidence Considerations. The FCA will have regard to any serious disruption to public confidence in the UK financial system when weighing the public interest in announcing an investigation or enforcement action. 
Staged Consideration. The FCA will apply its public interest test at each stage of the announcement process, including whether there should be any announcement at all, when the announcement should be made and what details it should include. 
Greater Notice to Firms. The firm under investigation will have ten days’ notice ahead of any public announcements and an additional two days’ notice if the FCA proceeds with any announcement, which is a significant increase compared to the one day’s notice to firms under the First Consultation. 
Proposal Timelines. The FCA also confirmed that once the proposals take effect, it will not make proactive announcements about investigations already ongoing at that time, but it may confirm an ongoing investigation in response to enquiries where the public interest test is satisfied.

Next Steps
The Second Consultation closes on 17 February 2025. The FCA Board expects to take a decision on the revised proposals in Q1 2025. 
 
Larry Wong contributed to this article.

Hong Kong’s Security Tokenization Support Initiative – A Subsidy Program

Recently, the Hong Kong Monetary Authority (HKMA) began accepting applications for the Digital Bond Grant Scheme (the Grant Scheme), which will financially support digital bond issuers for a period of three years. The Grant Scheme aims to encourage broader adoption of “tokenization technology” in capital markets and foster the development of digital securities markets in Hong Kong.
“Digital bond” is defined as a bond that utilizes distributed ledger technology (DLT) to digitally represent ownership, which may encompass legal titles and/or beneficial interests in the bond. Each eligible issuer, including its associates, may receive subsidies under the Grant Scheme for a maximum of two digital bond issuances.
The Grant Scheme subsidizes up to 50% of the eligible expenses of each digital bond issuance (the arithmetic is sketched after this list), capped at:

HK$1.25 million (Half Grant) for issuances meeting the basic requirements; and
HK$2.5 million (Full Grant) for issuances meeting both the basic and additional requirements, which are summarized below.

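For illustration only, the subsidy arithmetic can be sketched as follows. The function and inputs below are our own simplification of the scheme as summarized above, not the HKMA’s own terminology, and actual grant amounts depend on the HKMA’s assessment of eligible expenses.

```python
def digital_bond_grant(eligible_expenses_hkd: float, full_grant: bool) -> float:
    """Illustrative sketch: the Grant Scheme covers up to 50% of eligible
    expenses, capped at HK$1.25 million (Half Grant) or HK$2.5 million
    (Full Grant) per issuance."""
    cap = 2_500_000 if full_grant else 1_250_000
    return min(0.5 * eligible_expenses_hkd, cap)

# Example: HK$4 million of eligible expenses on a Half Grant issuance is
# capped at HK$1.25 million (50% would otherwise be HK$2 million).
print(digital_bond_grant(4_000_000, full_grant=False))  # 1250000.0
```
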
Eligibility Requirements
Half Grant
A Half Grant is available when the issuance meets the following basic requirements:

The bond must be issued in Hong Kong with at least half of the lead arrangers recognized as having substantial Hong Kong debt capital market operations; and
The DLT platform’s development and/or operations team must have a substantial Hong Kong presence, or the issuance must use a DLT platform operated by the Central Moneymarkets Unit (CMU).

Full Grant
For a Full Grant, in addition to the basic requirements, the issuance must meet additional requirements, including:

Being issued on a DLT platform provided by an independent entity;
Having a minimum issuance size of HK$1 billion equivalent;
Being issued to five or more non-associated investors; and
Being listed on the Stock Exchange of Hong Kong (SEHK) or on licensed virtual asset trading platforms (VATP).

Eligible Expenses
The Grant Scheme subsidizes expenses related to the issuance of digital bonds, including:

Fees to non-associated DLT platform providers;
Fees to local arrangers (non-associated), legal advisors, auditors, and rating agencies;
Listing fees on the SEHK or licensed VATPs; and
CMU lodging and clearing fees.

Additionally, if the digital bond qualifies as a green, social, or sustainability bond, the following grants will be available:

Eligible general bond issuance costs: covered by either the Grant Scheme or Track I of the Green and Sustainable Finance Grant Scheme (GSF Grant Scheme), up to HK$2.5 million, and
External sustainability review costs: covered by Track II of the GSF Grant Scheme, up to HK$800,000 for all pre-issuance and post-issuance external reviews combined.

How To Apply
Potential applicants may start with an “optional pre-application consultation” with the HKMA for preliminary feedback on their eligibility.
Formal applications must be submitted within three months of the bond’s issuance.
Conclusion
As tokenization of securities is expected to become more popular this year and the HKMA is providing a flexible subsidy program with Half Grant and Full Grant options, foreign as well as Hong Kong companies may wish to take advantage of the Grant Scheme to issue digital bonds and reduce their issuance costs.

Scrutiny of Financial Institutions’ Compliance Expected to Increase During Trump Administration

Key Takeaways: 

Federal bank regulators plan for vigorous review of safety and soundness and consumer compliance functions.
BSA/AML, fair lending and mapping compliance to controls are expected to be the focus of examiners’ review.
State regulators are expected to fill the void if the Trump Administration restrains federal supervision.

In the final months of 2024, federal regulators were outlining their respective supervisory priorities for 2025 in terms of personnel allocation, budget needs and examination priorities. As in the past several years, these regulatory previews served as potent prognosticators of supervisory surveillance to come. The agencies’ own predictions of their supervisory priorities serve as helpful aids for financial institutions in allocating their own legal and compliance resources. Then, Donald J. Trump was re-elected as the 47th President of the United States.
With sweeping power to appoint agency heads of pivotal federal financial regulators such as the Federal Deposit Insurance Corporation (“FDIC”), the Office of the Comptroller of the Currency (“OCC”) and the Consumer Financial Protection Bureau (“CFPB”), President Trump has broad authority to shape the future of federal regulatory examinations and enforcement. However, if the regulatory environment of the previous Trump Administration is any indication, this is no time for financial institutions to give their compliance functions a sabbatical.
What Are the Agencies Predicting?
In the past few months, the federal bank regulatory agencies have offered their expectations for supervisory priorities. For the OCC, the regulator for most large banks, the agency has indicated that it will focus its 2025 resources on, among other things, compliance. The OCC has indicated that it will focus on BSA/AML/OFAC compliance, assessing whether “operations and systems are reasonably designed and implemented to mitigate and manage money laundering, terrorist financing and other illicit financial activity risks from business activities, including products and services offered and customers and geographies served.” In the area of consumer compliance, the OCC has indicated that it will focus on “banks’ risk management processes to determine compliance with consumer protection laws and statutory requirements, including those prohibiting unfair, deceptive or abusive acts or practices that aim to address potential instances of fraud tied to consumer accounts.” Moreover, the OCC notes: “Examiners also evaluate whether banks can identify and manage in a timely manner compliance risks presented by person-to-person and real-time payment product offerings, including dispute and error resolution.”
However, merely knowing what the relevant laws are is not enough, according to the OCC. Rather, the OCC will expect institutions to map these varying statutes and regulations to internal controls: “Examiners should consider banks’ systems and controls to ensure that relevant aspects of products or services, including those offered through third-party relationships, are communicated to consumers in a clear, consistent manner with accurate, complete information.”
The OCC has also indicated that it will heavily scrutinize banks’ fair lending compliance programs. According to the OCC: “Examiners focus on assessing fair lending risk and whether banks are ensuring adequate risk management and equal access to products and services. Risk assessments consider all factors affecting a bank’s fair lending risk, including changes to strategy, personnel, products, services, credit underwriting, CRA assessment areas or market areas, and operating environments since the previous risk assessment. Examiners will use data-driven, risk profile-based approaches to identify focal points for fair lending examinations.” In conducting fair lending risk assessments, the OCC examiners are likely to look for deficiencies in linking compliance requirements to internal controls. Examiners tend to allege that such deficiencies have led to compliance lapses, and such lapses must be remedied through severe and public enforcement actions.
The CFPB’s Supervisory Highlights for Winter 2024 similarly predicted a robust supervisory posture for 2025, focusing on topics such as unanticipated overdraft fees and representment NSF fees, core processor practices, improper representment processing practices, stop payment services, obligations of furnishers of credit report information and short-term small dollar lending, among others.
More recently, FDIC Acting Chairman Travis Hill, a Republican, offered his insights on the new direction that the agency may take following President Trump’s inauguration. In a speech before the American Bar Association on January 10, 2025, Mr. Hill noted that “the agency needs a new direction,” and called for reorienting bank supervision away from what he described as a misplaced focus on “process-related issues that have little bearing on a bank’s core financial condition or solvency.” Mr. Hill noted that despite what other changes will come with the FDIC, “basic controls and risk management infrastructure still matter.”
How Might the Trump Administration Respond?
With the incoming administration and the new agency appointments that are likely to quickly follow, the degree to which the federal agencies may be willing to implement the above policies remains uncertain. President Trump is likely to appoint at least acting leadership at the OCC and CFPB within the first few weeks of the new administration.
Once these acting leaders assume their roles, we are likely to see the rescission of informal agency guidance, bulletins, advisory notices and similar materials, none of which had to go through notice-and-comment rulemaking.
One example of this may be CFPB Circular 2024-05, which clarified that consumers must affirmatively opt into overdraft protection services before a financial institution charges overdraft fees. In that informal agency guidance, the CFPB declared that charging such fees without consent may constitute a violation of the Electronic Funds Transfer Act (“EFTA”). Moreover, proposed rulemakings that have not yet been promulgated in final form are likely to stall and die without further action.
Resurgence in State Enforcement Actions
If the notion of state enforcement activities surging in response to the Trump Administration sounds familiar, it should: the same thing happened during the first Trump Administration. President Trump’s first appointed CFPB leader was Acting Director Mick Mulvaney, who had been an outspoken critic of the CFPB. Mr. Mulvaney issued a letter on January 23, 2018, detailing how his administration of the agency would differ from Richard Cordray’s. Mr. Mulvaney believed that Mr. Cordray had promoted enforcement actions that far exceeded the agency’s statutory authority.
Mr. Mulvaney’s attempt to refocus and restrain the CFPB did not result in lessened focus on compliance. It just shifted the direction from where the scrutiny came. On December 12, 2017, a coalition of state attorneys general, led by then-New York Attorney General Eric Schneiderman, vigorously opposed the Trump Administration’s attempts to, in the AGs’ view, restrain the CFPB’s consumer compliance enforcement function. The AGs offered the following warning against lessening federal consumer compliance enforcement:
As you know, state attorneys general have express statutory authority to enforce federal consumer protection laws, as well as the consumer protection laws of our respective states. We will continue to enforce those laws vigorously regardless of changes to CFPB’s leadership or agenda. As attorneys general, we retain broad authority to investigate and prosecute those individuals or companies that deceive, scam, or otherwise harm consumers. If incoming CFPB leadership prevents the agency’s professional staff from aggressively pursuing consumer abuse and financial misconduct, we will redouble our efforts at the state level to root out such misconduct and hold those responsible to account.
The above warning from state attorneys general to “redouble” their efforts is not without foundation. Under the Dodd-Frank Act, state attorneys general are empowered to enforce various federal consumer financial protection statutes, including the Truth-in-Lending Act, the Fair Credit Reporting Act, the Equal Credit Opportunity Act and prohibitions on unfair, deceptive or abusive acts or practices under 12 U.S.C. § 5531 and regulations promulgated by the CFPB under 12 U.S.C. § 5552. As a result of this, state attorneys general have established “mini-CFPBs” to enforce consumer protection laws—both state and federal laws.
Summary and Recommendations
As discussed above, although the change in administrations will undoubtedly bring some measure of uncertainty, what is certain is that if the federal regulators in Washington attempt to dilute the vigilance of federal superintendence, the states are waiting in the wings to fill the void. In fact, with the changing political environment and zealous and ambitious state prosecutors and regulators, there is no shortage of state actors eager to make an example of a financial institution.
Moreover, although financial institutions serve an important public function, providing depository and lending services to a broad cross-section of Americans, they also draw bipartisan criticism in Washington and among the states. The likely result is that now more than ever, financial institutions, both banks and non-bank providers, must work toward being as pristine as possible in their compliance efforts and results. This will require thorough knowledge of the applicable statutes, regulations and agency guidance; the ability to map those requirements to the institution’s compliance functions; the ability to track and implement changes in the law; and engagement of outside counsel with the knowledge and experience to defend the institution’s position when challenged by regulators, as those challenges will undoubtedly come.

CFPB’s Recent Rule Eliminates Medical Debt from Credit Reports

On January 7, 2025, the Consumer Financial Protection Bureau (“CFPB”) published a final Rule (the “Rule”) that prohibits consumer reporting agencies from including individuals’ medical debt on consumer credit reports.
The CFPB states that this Rule, which amends Regulation V of the Fair Credit Reporting Act, aims to ease financial burdens placed on individual consumers seeking loans by preventing medical debt from negatively impacting credit scores. Additionally, the Rule prohibits creditors from considering consumer medical debt information in credit eligibility determinations and decisions.
The Rule has been published in the Federal Register and is scheduled to become effective March 17, 2025. A recent Executive Order, however, may delay or affect whether the Rule is implemented and, if it is, when it becomes effective.
The Rule is currently facing at least two legal challenges. In ACA International, et al. v. Consumer Financial Protection Bureau, et al., which was filed in the U.S. District Court for the Southern District of Texas, and Cornerstone Credit Union League, et al. v. Consumer Financial Protection Bureau, et al., which was filed in the U.S. District Court for the Eastern District of Texas, plaintiffs allege that the CFPB has overstepped its rulemaking authority, which ultimately rests with Congress. In both actions, the plaintiffs also argue that excluding medical debt from credit reporting could undermine the accuracy of credit reporting systems and make it more difficult to determine individuals’ creditworthiness. Additionally, the plaintiffs in ACA contend that the Rule will ultimately harm the medical field by failing to hold individuals fully responsible for medical bills that support doctors, nurses, and hospitals. 
While lenders and individuals should continue to monitor this change, they should also be aware of states and other local jurisdictions that have already imposed restrictions on credit reporting as it relates to medical debt. For example, under New York’s Fair Medical Debt Reporting Act, hospitals, health care providers and ambulance services are prohibited from reporting, either directly or indirectly, medical debt information to consumer reporting agencies.

House Bipartisan Task Force on Artificial Intelligence Report

In February 2024, the House of Representatives launched a bipartisan Task Force on Artificial Intelligence (AI). The group was tasked with studying and providing guidance on ways the United States can continue to lead in AI and fully capitalize on the benefits it offers while mitigating the risks associated with this exciting but still-emerging technology. On 17 December 2024, after nearly a year of holding hearings and meeting with industry leaders and experts, the group released the long-awaited Bipartisan House Task Force Report on Artificial Intelligence. This robust report touches on how the technology impacts almost every industry, from rural agricultural communities to energy to the financial sector, to name just a few. It is clear that the AI policy and regulatory space will continue to evolve while remaining front and center for both Congress and the new administration as lawmakers, regulators, and businesses continue to grapple with this exciting new technology.
The 274-page report highlights “America’s leadership in its approach to responsible AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.” Specifically, it outlines the Task Force’s key findings and recommendations for Congress to legislate in over a dozen different sectors. The Task Force co-chairs, Representative Jay Obernolte (R-CA) and Representative Ted Lieu (D-CA), called the report a “roadmap for Congress to follow to both safeguard consumers and foster continued US investment and innovation in AI,” and a “starting point to tackle pressing issues involving artificial intelligence.” 
There was a high level of bipartisan work on AI in the 118th Congress, and although most of the legislation in this area did not end up becoming law, the working group report provides insight into what legislators may do this year and which industries may be of particular focus. Our team continues to monitor legislation, Congressional hearings, and the latest developments writ large in these industries as we transition into the 119th Congress. See below for a sector-by-sector breakdown of a number of findings and recommendations from the report.
Data Privacy
The report’s section on data privacy discusses advanced AI systems’ need to collect huge amounts of data, the significant risks this creates for the unauthorized use of consumers’ personal data, the current state of US consumer privacy protection laws, and recommendations to address these issues. 
It begins with a discussion of AI systems’ need for “large quantities of data from multiple diverse sources” to perform at an optimal level. Companies collect and license this data in a variety of ways, including collecting data from their own users, scraping data from the internet, or some combination of these and other methods. Further, some companies collect, package, and sell scraped data, “while others release open-source data sets.” These collection methods raise their own set of issues. For example, according to the report, many websites state, following “a voluntary standard,” that they should not be scraped, but those requests are ignored and litigation ensues. The report also notes that some companies “are updating their privacy policies in order to permit the use of user data to train AI models” without otherwise informing users that their data is being used for this purpose, a practice the European Union and the Federal Trade Commission have challenged. It notes that in response, “some companies are turning to privacy-enhanced technologies, which seek to protect the privacy and confidentiality of data when sharing it.” They also are looking at “synthetic data.”
In turn, the report discusses the types of harms that consumers frequently experience when their personal and sensitive data is shared intentionally or unintentionally without their authorization. The list includes physical, economic, emotional, reputational, discrimination, and autonomy harms.
The report follows with a discussion of the current state of US consumer privacy protection laws. It kicks off with a familiar tune: “Currently, there is no comprehensive US federal data privacy and security law.” It notes that there are several sector-specific federal privacy laws, such as those intended to protect health, financial and children’s data, but, as has become clear from this year’s Congressional debate, even these laws need to be updated. It also notes that 19 states have adopted state privacy laws but that their standards vary; as in the case of state data breach laws, the result is that they have “created a patchwork of rules and regulations with many drawbacks.” This has caused confusion among consumers and resulted in increased costs and lawsuits for businesses. It concludes with the statement that “Federal legislation that preempts state data privacy laws has advantages and disadvantages.” The report outlines three Key Findings: (1) “AI has the potential to exacerbate privacy harms;” (2) “Americans have limited recourse for many privacy harms;” and (3) “Federal privacy laws could potentially augment state laws.”
Based on its findings, the report recommends that Congress should: (1) help “in facilitating access to representative data sets in privacy-enhanced ways” and “support partnerships to improve the design of AI systems” and (2) ensure that US privacy laws are “technology neutral” and “can address the most salient privacy concerns with respect to the training and use of advanced AI systems.”
National Security 
The report highlights both the potential benefits of emerging technologies to US defense capabilities and the risks, especially if the United States is outpaced by its adversaries in development. The report discusses the status and successes of current AI programs at the Department of Defense (DOD), the Army, and the Navy. The report categorizes issues facing development of AI in the national security arena into technical and nontechnical impediments. The technical impediments include increased data usage, infrastructure/compute power, attacks on algorithms and models, and talent acquisition, especially when competing with the private sector for talent. The report also identifies perceived institutional challenges facing DOD, saying “acquisition professionals, senior leaders, and warfighters often hesitate to adopt new, innovative technologies and their associated risk of failure. DOD must shift this mindset to one more accepting of failure when testing and integrating AI and other innovative technologies.” The nontechnical challenges identified in the report revolve around third-party development of AI and the inability of the United States to control systems it does not create. The report notes that advancements in AI are driven primarily by the private sector and encourages DOD to capitalize on that innovation, including through more timely procurement of AI solutions at scale with nontraditional defense contractors. 
Chief among the report’s findings and recommendations is a call to Congress to explore ways that the US national security apparatus can “safely adopt and harness the benefits of AI” and to use its oversight powers to home in on AI activities for national security. Other findings focus on the need for advanced cloud access, the value of AI in contested environments, and the ability of AI to manage DOD business processes. The additional recommendations were to expand AI training at DOD, continue oversight of autonomous weapons policies, and support international cooperation on AI through the Political Declaration on Responsible Military Use of AI. The report indicates that Congress will be paying much more attention to the development and deployment of AI in the national security arena going forward, and now is the time for impacted stakeholders to engage on this issue.
Education and the Workforce
The report also highlights the role of AI technologies in education and the promise and challenges they could pose for the workforce. The report recognizes that despite the worldwide demand for science, technology, engineering, and mathematics (STEM) workers, the United States has a significant gap in the talent needed to research, develop, and deploy AI technologies. As a result, the report found that training and educating US learners on AI topics will be critical to continuing US leadership in AI technology. The report notes that training the future generations of talent in AI-related fields needs to start with AI and STEM education. Digital literacy has extended to new literacies, such as media, computer, data, and now AI literacy. Challenges include providing adequate resources for AI literacy. 
US leadership in AI will require growing the pool of trained AI practitioners, including people with skills in researching, developing, and incorporating AI techniques. The report notes that this will likely require expanding workforce pathways beyond the traditional educational routes and a new understanding of the AI workforce, including its demographic makeup, changes in the workforce over time, employment gaps, and the penetration of AI-related jobs across sectors. A critical aspect to understanding the AI workforce will be having good data. US leadership in AI will also require public-private partnerships as a means to bolster the AI workforce. This includes collaborations between educational institutions, government, and industries with market needs and emerging technologies.
While the automation of human jobs is not new, using AI to automate tasks across industries has the potential to displace jobs that involve repetitive or predictable tasks. In this regard, the report notes that while AI may displace some jobs, it will augment existing jobs and create new ones. Such new jobs will inevitably require more advanced skills, such as AI system design, maintenance, and oversight. Other jobs, however, may require less advanced skills. The report adds that harnessing the benefits of AI systems will require a workforce capable of integrating these systems into their daily jobs. It also highlights several existing programs for workforce development, which could be updated to address some of these challenges.
Overall, the report found that AI is increasingly used in the workplace by both employers and employees. US AI leadership would be strengthened by utilizing a more skilled technical workforce. Fostering domestic AI talent and continued US leadership will require significant improvements in basic STEM education and training. AI adoption requires AI literacy and resources for educators.
Based on the above, the report recommends the following:

Invest in K-12 STEM and AI education and broaden participation.
Bolster US AI skills by providing needed AI resources.
Develop a full understanding of the AI workforce in the United States.
Facilitate public-private partnerships to bolster the AI workforce.
Develop regional expertise when supporting government-university-industry partnerships.
Broaden pathways to the AI workforce for all Americans.
Support the standardization of work roles, job categories, tasks, skill sets, and competencies for AI-related jobs.
Evaluate existing workforce development programs.
Promote AI literacy across the United States.
Empower US educators with AI training and resources.
Support National Science Foundation curricula development.
Monitor the interaction of labor laws and worker protections with AI adoption.

Energy Usage and Data Centers
AI has the power to modernize our energy sector, strengthen our economy, and bolster our national security, but only if the grid can support it. As the report details, electrical demand is predicted to grow over the next five years as data centers, among other major energy users, continue to come online. If these technologies outpace new power capacity, they can “cause supply constraints and raise energy prices, creating challenges for electrical grid reliability and affordable electricity.” While data centers take only a few years to construct, new sources of power, such as power plants and transmission infrastructure, can take a decade or more to complete. To meet growing electrical demand and support US leadership in AI, the report recommends the following:

Support and increase federal investments in scientific research that enables innovations in AI hardware, algorithmic efficiency, energy technology development, and energy infrastructure.
Strengthen efforts to track and project AI data center power usage.
Create new standards, metrics, and a taxonomy of definitions for communicating relevant energy use and efficiency metrics.
Ensure that AI and the energy grid are a part of broader discussions about grid modernization and security.
Ensure that the costs of new infrastructure are borne primarily by those customers who receive the associated benefits.
Promote broader adoption of AI to enhance energy infrastructure, energy production, and energy efficiency.

Health Care
The report highlights that AI technologies have the potential to improve multiple aspects of health care research, diagnosis, and care delivery. The report provides an overview of use to date and its promise in the health care system, including with regard to drug, medical device, and software development, as well as in diagnostics and biomedical research, clinical decision-making, population health management, and health care administration. The report also highlights the use of AI by payers of health care services, both in the coverage of AI-provided services and devices and in the use of AI tools in the health insurance industry.
The report notes that the evolution of AI in health care has raised new policy issues and challenges. These include issues involving data availability, utility, and quality, as the data required to train AI systems must exist, be of high quality, and be able to be transferred and combined. They also include issues concerning interoperability and transparency: AI-enabled tools must be able to integrate with health care systems, including electronic health record (EHR) systems, and they need to be transparent enough for providers and other users to understand how an AI model makes decisions. Data-related risks also include the potential for bias, which can be introduced during development or as the system is deployed. Finally, there is a lack of legal and ethical guidance regarding accountability when AI produces incorrect diagnoses or recommendations. 
Overall, the report found that AI’s use in health care can potentially reduce administrative burdens and speed up drug development and clinical diagnosis. When used appropriately, these uses of AI could lead to increased efficiency, better patient care, and improved health outcomes. The report also found that the lack of standards for medical data and algorithms impedes system interoperability and data sharing. The report notes that if AI tools cannot easily connect with all relevant medical systems, their adoption and use could be impeded.
Based on the above, the report recommends the following:

Encourage the practices needed to ensure AI in health care is safe, transparent, and effective.
Maintain robust support for health care research related to AI.
Create incentives and guidance to encourage risk management of AI technologies in health care across various deployment conditions to support AI adoption and improve privacy, enhance security, and prevent disparate health outcomes.
Support the development of standards for liability related to AI issues.
Support appropriate payment mechanisms without stifling innovation.

Financial Services
With respect to financial services, the report emphasizes that AI has been used within the financial services system for decades, by industry and financial regulators alike. Key examples of use cases have included fraud detection, underwriting, debt collection, customer onboarding, real estate, investment research, property management, customer service, and regulatory compliance, among other things. The report also notes that AI presents both significant risks and opportunities to the financial system, so it is critical to be thoughtful when considering and crafting regulatory and legislative frameworks in order to protect consumers and the integrity of the financial system while not stifling technological innovation. As such, the report states that lawmakers should adopt a principles-based approach that is agnostic to technological advances, rather than a technology-based approach, in order to preserve the longevity of the regulatory ecosystem as technology evolves over time, particularly given the rapid rate at which AI technology is advancing. Importantly, the report notes that small financial institutions may be at a significant disadvantage with respect to adoption of AI, given a lack of sufficient resources to leverage AI at scale, and states that regulators and lawmakers must ensure that larger financial institutions are not inadvertently favored in policies so as not to limit the ability of smaller institutions to compete or enter the market. Moreover, the report stresses the need to maintain relevant consumer and investor protections with AI utilization, particularly with respect to data privacy, discrimination, and predatory practices.
A Multi-Branch Approach to AI/Next Steps
The Task Force recognizes that AI policy will not fall strictly under the purview of Congress. Co-chair Obernolte shared that he has met with David Sacks, President Trump’s “AI Czar,” as well as members of the transition team to discuss what is in the report. 
We will be closely following how both the administration and Congress act on AI in 2025, and we are confident that no industry will be left untouched.
 
Vivian K. Bridges, Lauren E. Hamma, and Abby Dinegar contributed to this article.

SEC Settlement Highlights Importance of Proper Disclosure Requirements for Private Fund Managers

On January 10, 2025, the Securities and Exchange Commission (SEC) settled charges against two fund managers (collectively, the “Fund Managers”)[1] and their sole owner, chief executive officer, chief compliance officer and founder (the “Founder”)[2].
The SEC alleged that the Founder and the Fund Managers had breached the fiduciary duties owed to the private equity funds managed by the Fund Managers (the “Private Funds”) and that there were related compliance program deficiencies. Specifically, the SEC asserted that the Founder and the Fund Managers: (i) impermissibly charged certain expenses to the Private Funds from January 2019 through December 2023 instead of paying such expenses themselves, and in so doing failed to disclose the resulting conflicts of interest; and (ii) improperly submitted vague and unsubstantiated invoices to the Private Funds without taking reasonable steps to confirm the Private Funds were the proper payees.
Improper Expenses
The SEC identified three specific categories of expenses that it viewed as Fund Manager costs improperly charged to the Private Funds.
Prior to January 2019, the Fund Managers employed and paid the salary of a full-time, in-house chief financial officer (the “CFO”), who provided services to the Fund Managers and not to the Private Funds. When the CFO left, the Fund Managers outsourced those financial services (totaling approximately US$1.3 million from January 2019 to December 2023) to third-party financial firms and charged those services to the Private Funds. Similarly, in May 2019, a public relations provider engaged by one of the Fund Managers to provide strategic communications and public relations services was paid by that Fund Manager. When the provider was re-engaged in 2022, however, that expense (totaling approximately US$214,000) was instead charged to the Private Funds. Lastly, a legal expense (approximately US$91,000) was charged to one of the Private Funds, even though, the SEC asserted, more than 70% of those expenses were for services performed for the Fund Manager.
In each case, the SEC noted the expenses at issue were not listed or disclosed as permitted fund expenses in the applicable Private Fund governing documents or private placement memorandum. Further, when the applicable Fund Manager changed its prior practice and instead held the applicable Private Fund responsible for such expenses, it failed to fully and fairly disclose the payment and the resulting conflict of interest to the investors of the corresponding Private Fund.
Unsupported and Unspecified Expenses
The SEC also took issue with the Fund Managers’ supporting documentation and approval processes for the improper expenses allocated to the Private Funds. It noted that the invoices for amounts to be borne by the Private Funds were vague and unsubstantiated, including generic invoices that described the expenses only as “various expenses”, “expense reimbursement”, or “due to management Co.”, and generic credit card reimbursements with insufficient or no backup or further description, including for the Founder’s living and business expenses and for credit cards held by his family members.
The settlement order censured the Fund Managers and the Founder for violating the anti-fraud provisions of Sections 206(2) and 206(4) of the Investment Advisers Act of 1940 and Rules 206(4)-7 and 206(4)-8(a)(2) thereunder. Without admitting or denying the SEC’s findings, the Fund Managers and the Founder consented to the entry of the order and agreed to pay a civil money penalty of US$250,000, in addition to disgorgement of over US$1.5 million and prejudgment interest of approximately US$272,000.
This order highlights the importance of:

Clearly drafted private fund governing document provisions outlining, in detail, the expenses to be borne by the private fund and expenses to be borne by the manager and its affiliates.
Policies and procedures reasonably designed to ensure that expenses are allocated in accordance with the applicable private fund governing documents and that require appropriate, clear supporting records and documented approval processes.
Established processes to timely review expense allocation practices and related recordkeeping, particularly following changes in a manager’s favor, such as allocating to a fund ongoing expenses previously paid by the manager, and to consider whether such changes require disclosure to the investors of the impacted private fund.

It is notable that the issues for the Fund Managers appear to have begun with the departure of the Fund Managers’ CFO. Fund managers should ensure that they consistently maintain the appropriate internal staffing and third-party professional services firms’ support to operate their businesses in accordance with the governing documents of their private funds and applicable law.

[1] During the period in question, through March 2024, one Fund Manager was an investment adviser registered with the SEC, and the other Fund Manager elected to file as a relying adviser thereof.
[2] In the Matter of ONE THOUSAND & ONE VOICES MANAGEMENT, LLC; FAMILY LEGACY CAPITAL CREDIT MANAGEMENT, LLC and HENDRIK F. JORDAAN.

U.S. Treasury Department’s Final Rule on Outbound Investment Takes Effect

On January 2, 2025, the U.S. Department of the Treasury’s Final Rule on outbound investment screening became effective. The Final Rule implements Executive Order 14105 issued by former President Biden on August 9, 2023, and aims to protect U.S. national security by restricting covered U.S. investments in certain advanced technology sectors in countries of concern. Covered transactions with a completion date on or after January 2, 2025, are subject to the Final Rule, including the prohibition and notification requirements, as applicable.
The Final Rule targets technologies and products in the semiconductor and microelectronics, quantum information technologies, and artificial intelligence (AI) sectors that may impact U.S. national security. It prohibits certain transactions and requires notification of certain other transactions in those technologies and products. The Final Rule has two primary components:

Notifiable Transactions: A requirement that notification of certain covered transactions involving both a U.S. person and a “covered foreign person” (including but not limited to a person of a country of concern engaged in “covered activities” related to certain technologies and products) be provided to the Treasury Department. A U.S. person subject to the notification requirement must file on Treasury’s Outbound Investment Security Program website by specified deadlines. The Final Rule specifies the detailed information and certification required in the notification and imposes a 10-year record retention period for the filing and supporting information.
Prohibited Transactions: A prohibition on certain U.S. person investments in a covered foreign person that is engaged in a more sensitive sub-set of activities involving identified technologies and products. A U.S. person is required to take all reasonable steps to prohibit and prevent its controlled foreign entity from undertaking a transaction that would be a prohibited transaction if undertaken by a U.S. person. The Final Rule contains a list of factors that the Treasury Department will consider in determining whether the relevant U.S. person took all reasonable steps.

The Final Rule focuses on investments in “countries of concern,” which currently include only the People’s Republic of China, including Hong Kong and Macau. The Final Rule targets U.S. investments in Chinese companies involved in three sensitive technology sub-sets: semiconductors and microelectronics, quantum information technologies, and artificial intelligence. The Final Rule sets forth prohibited and notifiable transactions in each of the three sectors:
Semiconductors and Microelectronics

Prohibited: Covered transactions relating to certain electronic design automation software, fabrication or advanced packaging tools, advanced packaging techniques, and the design and fabrication of certain advanced integrated circuits and supercomputers.
Notifiable: Covered transactions relating to the design, fabrication and packaging of integrated circuits not covered by the prohibited transactions.

Quantum Information Technologies

All Prohibited: Covered transactions involving the development of quantum computers and production of critical components, the development or production of certain quantum sensing platforms, and the development or production of quantum networking and quantum communication systems.

Artificial Intelligence (AI) Systems

Prohibited:

Covered transactions relating to AI systems designed exclusively for or intended to be used for military, government intelligence or mass surveillance end uses.
Covered transactions relating to development of any AI system that is trained using a quantity of computing power meeting certain technical specifications and/or using primarily biological sequence data.

Notifiable: Covered transactions involving AI systems designed or intended to be used for cybersecurity applications, digital forensics tools, penetration testing tools, or the control of robotic systems, or that are trained using a quantity of computing power meeting certain technical specifications.

The Final Rule specifically defines the key terms “country of concern,” “U.S. person,” “controlled foreign entity,” “covered activity,” “covered foreign person,” “knowledge” and “covered transaction” and other related terms and sets forth the prohibitions and notification requirements in line with the national security objectives stated in the Executive Order.  The Final Rule also provides a list of transactions that are excepted from such requirements.
U.S. investors intending to invest in China, particularly in the sensitive sectors set forth above, should carefully review the Final Rule and conduct robust due diligence to determine whether a proposed transaction would be covered by the Final Rule (either prohibited or notifiable) before undertaking any such transaction. 
Any person subject to U.S. jurisdiction may face substantial civil and/or criminal penalties for violation or attempted violation of the Final Rule, including civil fines of up to $368,137 per violation (adjusted annually for inflation) or twice the amount of the transaction, whichever is greater, and criminal penalties of up to $1 million in fines, up to 20 years in prison, or both, for willful violations. In addition, the Secretary of the Treasury can take any authorized action to nullify, void, or otherwise require divestment of any prohibited transaction.