Virginia Governor Recommends Amendments to Strengthen Children’s Social Media Bill

On March 24, 2025, Virginia Governor Glenn Youngkin asked the Virginia state legislature to strengthen the protections provided in a bill (S.B. 854) passed by the legislature earlier this month that imposes significant restrictions on minors’ social media use.
The bill would amend the Virginia Consumer Data Protection Act (“VCDPA”) to require social media platform operators to (1) use commercially reasonable methods (such as a neutral age screen) to determine whether a user is a minor under the age of 16; and (2) limit a minor’s use of the social media platform to one hour per day, unless a parent consents to increase the limit. The bill would prohibit social media platform operators from altering the quality or price of any social media service due to the law’s time use restrictions.
The Governor declined to sign the bill and recommended that the legislature make the following amendments to enhance the protections in the bill: (1) raise the covered user age from 16 to 18; and (2) require social media platform operators to, in addition to the time use limitations, also disable (a) infinite scroll features (other than music or video the user has prompted to play) and (b) auto-playing videos (i.e., where videos automatically begin playing when a user navigates to or scrolls through a social media platform), absent verifiable parental consent.

China’s National Intellectual Property Administration: Accelerate the Use of AI in Examination

On March 28, 2025, China’s National Intellectual Property Administration (CNIPA) held a press conference to discuss its work goals in light of the recent National People’s Congress and to “implement the spirit of General Secretary Xi Jinping’s important speech” at the Congress. Heng Fuguang, spokesperson and director of CNIPA, stated that the agency will implement that spirit in seven aspects: 1. improve the legal system of intellectual property rights; 2. improve the quality of intellectual property creation; 3. improve the efficiency of intellectual property use; 4. strengthen intellectual property protection; 5. optimize public services for intellectual property rights; 6. expand international cooperation and exchanges on intellectual property; and 7. consolidate the foundation for the development of intellectual property.
Some highlights of the press conference include:

Accelerate the revision of the Trademark Law (this was also mentioned in the Opinions on Further Improving the Business Environment in the Field of Intellectual Property) and the Regulations on the Protection of Integrated Circuit Layout Designs;
Accelerate the application of artificial intelligence models in examination work;
Deepen the implementation of the trademark brand strategy, create more “national fashion brands”, promote the transformation of Made in China to Created in China, and the transformation of Chinese products to Chinese brands;
Comprehensively strengthen foreign-related intellectual property protection, improve the overseas intellectual property dispute response guidance system, and effectively safeguard the legitimate interests of Chinese companies overseas;
Plan the development of the “15th Five-Year Plan” for intellectual property;
Continue to improve the overseas risk early warning mechanism, strengthen the early warning monitoring of disputes such as 337 investigations, cross-border e-commerce litigation, and malicious trademark registration;
345,000 patent pre-examination requests were accepted, and the average authorization period for invention patents that passed pre-examination was less than three months.

A full transcript is available here (Chinese only).

US State AI Legislation: Virginia Vetoes, Colorado (Re)Considers, and Texas Transforms

Virginia’s Governor, Glenn Youngkin, vetoed a bill this week that would have regulated “high-risk” artificial intelligence systems. HB 2094, which narrowly passed the state legislature, aimed to implement regulatory measures akin to those established by last year’s Colorado AI Act. At the same time, Colorado’s AI Impact Task Force raised concerns about the Colorado law, which may therefore undergo modifications before its February 2026 effective date. And in Texas, the proposed Texas Responsible AI Governance Act was recently modified.
The Virginia law, like the Colorado Act, would have imposed various obligations on companies involved in the creation or deployment of high-risk AI systems that influence significant decisions about individuals in areas such as employment, lending, health care, housing, and insurance. These obligations included conducting impact assessments, keeping detailed technical documentation, adopting risk management protocols, and offering individuals the chance to review negative decisions made by AI systems. Companies would have also needed to implement safeguards against algorithmic discrimination. Youngkin, like Colorado’s Governor Polis, worried that HB 2094 would stifle the AI industry and Virginia’s economic growth. He also noted that existing laws related to discrimination, privacy, data usage, and defamation could be used to protect the public from potential AI-related harms. Whereas Polis ultimately signed the Colorado law, Youngkin did not.
However, even though Polis signed the Colorado law last year, he urged legislators in his signing statement to assess the law and provide additional clarity and revisions. And, last month, the AI Task Force issued a report with its recommendations. The task force identified potential areas where the law could be clarified or improved, dividing them into four categories: (1) where consensus exists about changes to be made; (2) where consensus needs additional time and stakeholder engagement; (3) where consensus depends on resolving multiple interconnected issues; and (4) where there is “firm disagreement.” The first category contains only a handful of relatively minor changes. The second includes, for example, clarifying the definition of “consequential decisions” – important because AI tools used to make such decisions are the ones subject to the law. The third includes defining “algorithmic discrimination” and the obligations developers and deployers should have in preventing it. And the fourth includes, by way of example, whether or not to offer an opportunity to cure incidents of non-compliance.
Texas, like Colorado and Virginia, has been considering legislation that addresses high-risk AI systems that are a “substantial factor” in consequential decisions about people’s lives. That bill was recently modified to remove the concept of algorithmic discrimination, and as currently drafted it prohibits AI systems that are developed or deployed with the “intent to discriminate.” It has also been modified to state expressly that disparate impact alone is not sufficient to prove an intent to discriminate. The proposed Texas law is similar to Utah’s AI legislation (which went into effect on May 1, 2024) insofar as it would require notice when individuals are interacting with AI (though this obligation applies only to government agencies). Lastly, the law would also prohibit the intentional development of AI systems to “incite harm or criminality.” The bill was filed on March 14 and, as of this writing, was pending in the House Committee.
Putting it into Practice: The veto of HB 2094 underscores the complex journey toward comprehensive AI regulation at the state level. We anticipate ongoing action as state legislatures continue to weigh in, and it may be some time before we see a consensus approach to AI governance. As a reminder, AI laws are currently in effect in New York (likenesses and employment), California (several different topics), Illinois (employment), and Tennessee (likenesses); passed AI legislation is set to go into effect at various times from 2024 through 2026; and bills are sitting in committee in at least 17 states.

Virginia’s Governor Vetoes AI Bill

On March 24, 2025, Virginia’s Governor vetoed House Bill (HB) 2094, known as the High-Risk Artificial Intelligence Developer and Deployer Act. This bill aimed to establish a regulatory framework for businesses developing or using “high-risk” AI systems.
The Governor’s veto message emphasized concerns that HB 2094’s stringent requirements would stifle innovation and economic growth, particularly for startups and small businesses. The bill would have imposed nearly $30 million in compliance costs on AI developers, a burden that could deter new businesses from investing in Virginia. The Governor argued that the bill’s rigid framework failed to account for the rapidly evolving nature of the AI industry and placed an onerous burden on smaller firms lacking large legal compliance departments.
The veto of HB 2094 in Virginia reflects a broader debate in AI legislation across the United States. As AI technology continues to advance, both federal and state governments are grappling with how to regulate its use effectively.
At the federal level, AI legislation has been marked by contrasting approaches between administrations. Former President Biden’s Executive Orders focused on ethical AI use and risk management, but many of these efforts were revoked by President Trump this year. Trump’s new Executive Order, titled “Removing Barriers to American Leadership in Artificial Intelligence,” aims to foster AI innovation by reducing regulatory constraints.
State governments are increasingly taking the lead in AI regulation. States like Colorado, Illinois, and California have introduced comprehensive AI governance laws. The Colorado AI Act of 2024, for example, uses a risk-based approach to regulate high-risk AI systems, emphasizing transparency and risk mitigation. While changes to the Colorado law are expected before its 2026 effective date, it may emerge as a prototype for other states to follow.
Takeaways for Business Owners

Stay Informed: Keep abreast of both federal and state-level AI legislation. Understanding the regulatory landscape will help businesses anticipate and adapt to new requirements.
Proactive Compliance: Develop robust AI governance frameworks to ensure compliance with existing and future regulations. This includes conducting risk assessments, implementing transparency measures, and maintaining proper documentation.
Innovate Responsibly: While fostering innovation is crucial, businesses must also prioritize ethical AI practices. This includes preventing algorithmic discrimination and ensuring the responsible use of AI in decision-making processes.

Virginia Governor Vetoes Rate Cap and AI Regulation Bills

On March 25, Virginia Governor Glenn Youngkin vetoed two bills that sought to impose new restrictions on “high-risk” artificial intelligence (AI) systems and fintech lending partnerships. The vetoes reflect the Governor’s continued emphasis on fostering innovation and economic growth over introducing new regulatory burdens.
AI Bias Bill (HB 2094)
The High-Risk Artificial Intelligence Developer and Deployer Act would have made Virginia the second state, after Colorado, to enact a comprehensive framework governing AI systems used in consequential decision-making. The proposed law applied to “high-risk” AI systems used in employment, lending, and housing, among other fields, requiring developers and deployers of such systems to implement safeguards to prevent algorithmic discrimination and provide transparency around how automated decisions were made.
The law also imposed specific obligations related to impact assessments, data governance, and public disclosures. In vetoing the bill, Governor Youngkin argued that its compliance demands would disproportionately burden smaller companies and startups and could slow AI-driven economic growth in the state.
Fintech Lending Bill (SB1252)
Senate Bill 1252 targeted rate exportation practices by applying Virginia’s 12% usury cap to certain fintech-bank partnerships. Specifically, the bill sought to prohibit entities from structuring transactions in a way that evades state interest rate limits, including through “rent-a-bank” models, personal property sale-leaseback arrangements, and cash rebate financing schemes.
Additionally, the bill proposed broad definitions for “loan” and “making a loan” that could have reached a wide array of service providers. A “loan” was defined to include any recourse or nonrecourse extension of money or credit, whether open-end or closed-end. “Making a loan” encompassed advancing, offering, or committing to advance funds to a borrower. In vetoing the measure, Governor Youngkin similarly emphasized its potential to discourage innovation and investment across Virginia’s consumer credit markets.
Putting It Into Practice: The vetoes of the High-Risk Artificial Intelligence Developer and Deployer Act (previously discussed here) and the Fintech Lending Bill signal Virginia’s preference for more flexible, innovation-friendly oversight. This development aligns with a broader pullback by federal agencies from oversight of fintech and related emerging technologies (previously discussed here and here). Fintechs and consumer finance companies leveraging AI should continue to monitor what has become a rapidly evolving regulatory landscape.

SEC Creates New Tech-Focused Enforcement Team

On February 20, the SEC announced the creation of its Cyber and Emerging Technologies Unit (CETU) to address misconduct involving new technologies and strengthen protections for retail investors. The CETU replaces the SEC’s former Crypto Assets and Cyber Unit and will be led by SEC enforcement veteran Laura D’Allaird.
According to the SEC, the CETU will focus on rooting out fraud that leverages emerging technologies, including artificial intelligence and blockchain, and will coordinate closely with the Crypto Task Force established earlier this year (previously discussed here). The unit comprises approximately 30 attorneys and specialists across multiple SEC offices and will target conduct that misuses technological innovation to harm investors and undermine market confidence.
The CETU will prioritize enforcement in the following areas:

Fraud involving the use of artificial intelligence or machine learning;
Use of social media, the dark web, or deceptive websites to commit fraud;
Hacking to access material nonpublic information for unlawful trading;
Takeovers of retail investor brokerage accounts;
Fraud involving blockchain technology and crypto assets;
Regulated entities’ noncompliance with cybersecurity rules and regulations; and
Misleading disclosures by public companies related to cybersecurity risks.

In announcing the CETU, Acting Chairman Mark Uyeda emphasized that the unit is designed to align investor protection with market innovation. The move signals a recalibration of the SEC’s enforcement strategy in the cyber and fintech space, with a stronger focus on misconduct that directly affects retail investors.
Putting It Into Practice: Formation of the CETU follows Commissioner Peirce’s statement on creating a regulatory environment that fosters innovation and “excludes liars, cheaters, and scammers” (previously discussed here). The CETU is intended to reflect that approach, redirecting enforcement resources toward clearly fraudulent conduct involving emerging technologies like AI and blockchain.

Virginia Governor Vetoes Artificial Intelligence Bill HB 2094: What the Veto Means for Businesses

Virginia Governor Glenn Youngkin has vetoed House Bill (HB) No. 2094, a bill that would have created a new regulatory framework for businesses that develop or use “high-risk” artificial intelligence (AI) systems in the Commonwealth.
The High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) had passed the state legislature and was poised to make Virginia the second state, after Colorado, with a comprehensive AI governance law.
Although the governor’s veto likely halts this effort in Virginia, at least for now, HB 2094 represents a growing trend of state regulation of AI systems nationwide. For more information on the background of HB 2094’s requirements, please see our prior article on this topic.
Quick Hits

Virginia Governor Glenn Youngkin vetoed HB 2094, the High-Risk Artificial Intelligence Developer and Deployer Act, citing concerns that its stringent requirements would stifle innovation and economic growth, particularly for startups and small businesses.
The veto maintains the status quo for AI regulation in Virginia, but businesses contracting with state agencies still must comply with AI standards under Virginia’s Executive Order No. 30 (2024), and any standards relating to the deployment of AI systems that are issued pursuant to that order.
Private-sector AI bills are currently pending in twenty states. So, regardless of Governor Youngkin’s veto, companies may want to continue proactively refining their AI governance frameworks to stay prepared for future regulatory developments.

Veto of HB 2094: Stated Reasons and Context
Governor Youngkin announced his veto of HB 2094 on March 24, 2025, just ahead of the bill’s deadline for approval. In his veto message, the governor emphasized that while the goal of ethical AI is important, it was his view that HB 2094’s approach would ultimately do more harm than good to Virginia’s economy. In particular, he stated that the bill “would harm the creation of new jobs, the attraction of new business investment, and the availability of innovative technology in the Commonwealth of Virginia.”
A key concern was the compliance burden HB 2094 would have imposed. Industry analysts estimated the legislation would saddle AI developers with nearly $30 million in compliance costs, which could be especially challenging for startups and smaller tech firms. Governor Youngkin, echoing industry concerns that such costs and regulatory hurdles might deter new businesses from innovating or investing in Virginia, stated, “HB 2094’s rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.”
Virginia Executive Order No. 30 and Ongoing AI Initiatives
Governor Youngkin’s veto of HB 2094 does not create an AI regulatory vacuum in Virginia. Last year, Governor Youngkin signed Executive Order No. 30 on AI, establishing baseline standards and guidelines for the use of AI in Virginia’s state government. This executive order directed the Virginia Information Technologies Agency (VITA) to publish AI policy standards and IT standards for all executive branch agencies. VITA published the policy standards in June 2024. Executive Order No. 30 also created the Artificial Intelligence Task Force, currently comprised of business and technology nonprofit executives, former public servants, and academics, to develop further “guardrails” for the responsible use of AI and to provide ongoing recommendations.
Executive Order No. 30 requires that any AI technologies used by state agencies—including those provided by outside vendors—comply with the new AI standards for procurement and use. In practice, this requires companies supplying AI software or services to Virginia agencies to meet certain requirements with regard to transparency, risk mitigation, and data protection defined by VITA’s standards. Those standards draw on widely accepted AI ethical principles (for instance, requiring guardrails against bias and privacy harms in agency-used AI systems). Executive Order No. 30 thus indirectly extends some AI governance expectations to private-sector businesses operating in Virginia via contracting. Companies serving public-sector clients in Virginia may want to monitor the state’s AI standards for anticipated updates in this quickly evolving field.
Looking Forward
Had HB 2094 become law, Virginia would have joined Colorado as one of the first states with a broad AI statute, potentially adding a patchwork compliance burden for firms operating across state lines. In the near term, however, Virginia law will not explicitly require algorithmic impact assessments, new disclosure methods, or the formal risk-management programs that HB 2094 would have prescribed.
Nevertheless, companies in Virginia looking to embrace or expand their use of AI are not “off the hook,” as general laws and regulations still apply to AI-driven activities. For example, antidiscrimination laws, consumer protection statutes, and data privacy regulations (such as Virginia’s Consumer Data Protection Act) continue to govern the use of personal information (including through AI) and the outcomes of automated decisions. Accordingly, if an AI tool yields biased hiring decisions or unfair consumer outcomes, companies could face liability under existing legal theories regardless of Governor Youngkin’s veto.
Moreover, businesses operating in multiple jurisdictions should remember that Colorado’s AI law is already on the books and that similar bills have been introduced in many other states. There is also ongoing discussion at the federal level about AI accountability (through agency guidance, federal initiatives, and the National Institute of Standards and Technology AI Risk Management Framework). In short, the regulatory climate around AI remains in flux, and Virginia’s veto is just one part of a larger national picture that warrants careful consideration. Companies will want to remain agile and informed as the landscape evolves.

Building the Blueprint: The Foundation of South Florida’s Tech Evolution Part 3 [Podcast]

In part 3 of the Building the Blueprint podcast miniseries, host Jaret Davis, Senior Vice President of Greenberg Traurig and Co-Managing Shareholder of the Miami office, is joined by GT Shareholder Joshua Forman, who leads the firm’s team in Miami focused on digital infrastructure and data centers. Together, they explore the critical role of data centers in Miami’s tech growth and their broader impact on the global tech ecosystem.
As demand for data continues to grow – fueled by AI, IoT, and remote work – Miami and South Florida are poised to play a key role in this evolving industry. This episode offers insight into the advancements shaping the future of data centers and examines the intersection of tech infrastructure, investment, and innovation in one of the fastest-growing tech hubs in the U.S. Tune in!

Privacy Tip #437 – 23andMe Files for Bankruptcy—What to Do If It Has Your Genetic Information

Genetic testing company 23andMe has filed for Chapter 11 bankruptcy protection, and its CEO has resigned. It is seeking to sell “substantially all of its assets” through a reorganization plan that will have to be approved by a federal bankruptcy judge.
Mark Jensen, Chair and member of the Special Committee of the Board of Directors, stated: “We are committed to continuing to safeguard customer data and being transparent about the management of user data going forward, and data privacy will be an important consideration in any potential transaction.” The company has also stated that any buyer must comply with applicable law in using the data.
That said, privacy professionals are concerned about the sale of the data in 23andMe’s possession, including the sensitive genetic information of over 15 million people. People often assume that the information is protected by HIPAA or the Genetic Information Nondiscrimination Act, but as my students know, neither applies to genetic information collected and used by a private company. State laws may apply, and consumers could be offered the ability to request the deletion of their data.
The company has said that customers can delete their data and terminate their accounts. The California Attorney General “urgently” suggests that consumers request the deletion of their data and destruction of the genetic materials in its possession and offers a step-by-step guide on how to do so.
Apparently, so many people have followed the suggestion that the 23andMe website crashed. The site is now back up and running, so 23andMe customers may wish to log in and request the deletion of their data and termination of their accounts.

AI Governance: Steps to Adopt an AI Governance Program

There are many factors to consider when assisting clients with assessing the use of artificial intelligence (AI) tools in an organization and developing and implementing an AI Governance Program. Although adopting an AI Governance Program is a no-brainer, no single form of governance program fits every organization. Each organization has to evaluate how it will use AI tools, whether (and how) it will develop its own, whether it will allow third-party tools to be used with its data, the associated risks, and what guardrails and guidance to provide to employees about their use.
Many organizations don’t know where to start when thinking about an AI Governance Program. I came across a guide that I thought might be helpful in kickstarting your thinking about the process: Syncari’s “The Ultimate AI Governance Guide: Best Practices for Enterprise Success.”
Although the article only scratches the surface of how to develop and implement an AI Governance Program, it is a good start to the internal conversation regarding some basic questions to ask and risks that may be present with AI tools. Although the article mentions AI regulations, including the EU AI Act and GDPR, it is important to also consider the state AI regulations being introduced and passed daily in the U.S. In addition, when considering third-party AI tools, it is important to question the third party about how it collects, uses, and discloses company data, and whether company data is being used to train the AI tool.
Now is the time to start discussing how you will develop and implement your AI Governance Program. Your employees are probably already using AI tools, so assess the risk and put some guardrails around their use.

Virginia Governor Vetoes High-Risk Artificial Intelligence Developer and Deployer Act

On March 25, 2025, Virginia Governor Glenn Youngkin vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (the “Act”), which had been passed by the Virginia legislature. The Act would have imposed accountability and transparency requirements with respect to the development and deployment of “high-risk” AI systems. In explaining the veto, the Governor stated that the Act’s “rigid framework fails to account for the rapidly evolving and fast-moving nature of the AI industry and puts an especially onerous burden on smaller firms and startups that lack large legal compliance departments.” The Governor also noted that Virginia’s existing laws “protect consumers and place responsibilities on companies relating to discriminatory practices, privacy, data use, libel, and more,” and that an executive order issued by the Governor’s Administration had “establish[ed] safeguards and oversight for AI use, and assembl[ed] a highly skilled task force comprised of industry leading experts to work closely with [the] Administration on key AI governance issues.”

FERC’s Co-Location Conundrum: Balancing Grid Reliability with Data Center Development as PJM’s Tariff Faces Scrutiny

Key Points

FERC’s Order Signals Transformative Change While Navigating Jurisdictional Limits: While FERC recognizes the urgent need to address co-location arrangements (particularly given the AI/data center boom), the intricate interplay of federal and state authority means any solution must carefully navigate jurisdictional boundaries. The Order reflects FERC’s attempt to maximize its impact within the framework of the Federal Power Act’s cooperative federalism. 
Cost Allocation and Reliability Concerns Drive Reform: FERC’s primary concerns center on preventing cost-shifting to other ratepayers and ensuring grid reliability. The current Tariff’s lack of clear provisions for ancillary services, different co-location configurations, and sudden load shifts poses risks that FERC seeks to address through this proceeding. 
Industry Response Suggests High Stakes for Multiple Stakeholders: The approximately 100 intervention motions filed indicate that stakeholders view this proceeding as potentially industry-reshaping. The outcome will likely influence how data center developers approach power supply strategies and could affect the viability of co-location as a solution to grid connection challenges.

Last year, the Federal Energy Regulatory Commission (“FERC”) convened a technical conference to discuss issues related to large loads being co-located with generating facilities (Docket No. AD24-11-000), which we summarized in the following client alert. In a related development late last year, Constellation Energy Generation, LLC (“Constellation”) filed a complaint against PJM Interconnection, LLC (“PJM”) pursuant to Section 206 of the Federal Power Act (“FPA”), arguing that PJM’s Open Access Transmission Tariff is “unjust, unreasonable and unduly discriminatory” due to the absence of guidance on co-located configurations where the generating asset is completely isolated from the grid (Docket No. EL25-20-000).
The importance of this topic is underscored by nearly daily announcements of new data center projects, such as the proposed $500 billion investment in AI infrastructure by OpenAI, SoftBank, and Oracle highlighted by President Trump on the day after his inauguration. The massive power demands from both training and inference applications of AI are anticipated to place significant strains on power grids, while grid operators are contending with lengthy interconnection queues and insufficient buildout of transmission networks. To secure power supply for their projects, many data center developers are exploring co-location opportunities with new and existing generating facilities.
On February 20, 2025, FERC issued an order (the “Order”) consolidating the two dockets mentioned above and instituting show cause proceedings under Section 206 of the FPA, finding that PJM’s tariff appears to be unjust, unreasonable, unduly discriminatory or preferential (Docket No. EL25-49-00). FERC ordered PJM and the relevant transmission owners to either:

“show cause as to why the Open Access Transmission Tariff, the Amended and Restated Operating Agreement of PJM, and Reliability Assurance Agreement Among Load Serving Entities in the PJM Region (the “Tariff”) remains just and reasonable and not unduly discriminatory or preferential without provisions addressing with sufficient clarity or consistency the rates, terms and conditions of service that apply to co-location arrangements; or 
explain what changes to the Tariff would remedy the identified concerns if [FERC] were to determine that the Tariff has in fact become unjust and unreasonable or unduly discriminatory or preferential and, therefore, proceeds to establish a replacement Tariff.”

On March 24, 2025, PJM and the transmission owners filed their responses to the Order; both PJM and a significant majority of the transmission owners (in a joint answer) argued that the Tariff remains just and reasonable. The transmission owners urged FERC to clarify that co-located load served by generation interconnected to the transmission or distribution system is network load for purposes of the Tariff. PJM presented a number of different configurations under the existing Tariff, while noting jurisdictional concerns based on shared federal/state jurisdiction and differences in regulation among the states.
Interested parties may respond with comments by April 23, 2025. Approximately 100 entities have filed motions to intervene, which indicates the significance industry players are placing on these proceedings and FERC’s ultimate resolution.
FERC’s Analysis
Although the Order relates specifically to the complaint initiated by Constellation under Section 206 of the FPA, FERC is clearly conscious of many policy considerations that need to be addressed in the context of co-located large load configurations.
Jurisdiction. Although FERC has indicated that it is aware of the nationwide importance of co-located large load configurations, particularly with respect to the national security interests identified in facilitating the rapid buildout of AI infrastructure, it is also plainly conscious of its jurisdictional limitations. The Order highlights that the FPA allocates jurisdiction to FERC only for transmission and wholesale sales of electricity in interstate commerce, whereas retail sales, intrastate transmission and wholesaling, as well as siting authority, are all subject to state jurisdiction. Accordingly, there are jurisdictional limits to how transformative FERC’s guidance can be on this issue. The Order invites comments on when and under what circumstances co-located load should be considered interconnected to the transmission system in interstate commerce. Specifically, FERC asks whether fully isolated load should be understood as being connected to the transmission system and, if so, what characteristics would support such a determination.1
Tariff Provisions. The Order preliminarily finds that the Tariff appears to be “unjust and unreasonable or unduly discriminatory or preferential” due to its lack of clarity and consistency regarding the rates, terms and conditions of service applicable to co-location arrangements. For example, FERC comments that the Tariff does not account for costs associated with ancillary services that the co-located generator would be unable to provide, such as black start capabilities and load following services. There is also significant discussion of how the Tariff does not account for different co-location configurations and, specifically, how those differences may affect overall costs. Given these ambiguities, FERC appears acutely concerned with the potential for parties to a co-location arrangement to shift costs to other ratepayers.
Reliability and Resource Adequacy. The concerns raised in the Order with respect to reliability and resource adequacy were identified and thoroughly discussed at the technical conference. For example, if a generator co-located with a large load customer temporarily goes offline, the large load customer could suddenly begin drawing from the grid, potentially affecting overall network performance. Grid operators would be better placed if they had the ability to model such scenarios. Further, concerns that removing existing generating assets from capacity markets would increase rates for other consumers (at least in the short term) were raised by many participants at the technical conference and restated in the Order. On the other hand, FERC notes that many of the concerns associated with serving large load customers would be present even if the customer were treated as network load rather than in a behind-the-meter configuration.
Questions. The Order directs PJM and the transmission owners to respond to a number of questions relating to: 1) transmission service, 2) ancillary or other wholesale services, 3) interconnection procedures and cost allocation, 4) the PJM capacity market, reliability and resource adequacy, and 5) general and miscellaneous questions that do not fall under any of these headings. The responses to these questions will assist FERC in framing its analysis of how the Tariff can be revised to ensure it is just, reasonable and not unduly discriminatory.
Final Thoughts
Electricity infrastructure is already being built out at a rapid pace in the United States. This trend is set to continue, particularly to meet the needs of increased electrification across numerous sectors such as industry and transportation, along with the anticipated expansion of the data center fleet. Developers have pursued co-located arrangements as a potential means of shortening the time frame for getting projects online. FERC’s guidance should provide developers greater certainty on the costs and timing associated with co-location, which in turn should clarify the role of co-location in the ongoing data center build-out.

1 Key questions about jurisdiction, cost allocation, and reliability turn on what it means for load to be isolated from the grid. For example, in the Complaint, Constellation describes “Fully Isolated Co-Located Loads” as behind-the-meter load with system protection facilities designed to ensure power does not flow from the grid to the load, while the PJM transmission owners refer to “fully isolated” load as arrangements in which both the load and the generator serving it are islanded from the transmission and distribution systems. PJM cautioned that the nuances of different co-location arrangements risk introducing regulatory gaps between federal and state jurisdiction.