Throughout 2025, artificial intelligence has shifted from buzzword to critical infrastructure for workflows across nearly every industry. Generative AI tools have moved from experiments to everyday workplace utilities, with Microsoft integrating Copilot into Office, and Google and OpenAI rolling out enterprise-grade assistants. NVIDIA has become the poster child for the AI boom, with governments and Fortune 500 companies racing to secure computing power for their own initiatives, a rush that has made it the world's most valuable company. Analysts credit AI investment with driving the stock market, but will there be a similar bump in job growth, particularly in the legal profession? Against this backdrop, The National Law Review spoke with Garrett Rosen, a senior vice president of legal recruiting at Larson Maddox, about how AI's rapid ascent is reshaping the legal hiring market.
Eli: To start us off, could you give a quick introduction to who you are and what you do?
Garrett: Sure. I’m a recruiter focused on in-house legal roles across tech, media, and telecom, and I effectively cover all major U.S. markets. Most of my work is in New York, San Francisco, Los Angeles, Denver, Chicago, Dallas/Austin, and Boston, among others. At Larson Maddox, we have offices in New York, Charlotte, Tampa, Dallas, Chicago, and Los Angeles, and I work closely with colleagues in those locations on things like product counsel roles, privacy, IP/compliance, and adjacent positions in the broader tech and infrastructure space.
Eli: Thank you for taking the time to speak with me. From where you sit, how would you describe the legal hiring market right now, especially for tech and AI-related roles? Are things looking good? Not so good? What’s going on?
Garrett: It’s a loaded question.
The last few years have been a real swing. Coming out of 2020–2022, we had a very candidate-friendly market. There was heavy investment, tons of growth hiring, and it felt like people could move around pretty freely.
Then we hit a cooldown—you saw the big downturn, a lot of uncertainty, and waves of layoffs. Over the last 12 months or so, it’s started to stabilize and pick back up. I’m definitely busier this year across industries than I was last year.
That said, it’s still competitive. You have people who were laid off, people who moved during that 2020–2022 window who are now questioning those moves, and people who are employed but cautiously testing the market. So there’s a lot of candidate activity relative to the number of truly great roles.
Eli: You mentioned product counsel, privacy, IP, and compliance. When you look at AI-related legal roles right now—AI product counsel, AI governance, “ethical tech” officers, that sort of thing—what are companies actually hiring for? What are you seeing on the ground?
Garrett: A lot of what I’m seeing is hybrid.
There’s growing demand for AI-focused product counsel roles—people who sit at the intersection of product development, privacy, and regulatory risk. You also see roles framed as AI governance, AI policy, or broader “responsible AI” positions, but often they’re essentially privacy-plus: privacy and data protection, plus AI policy and compliance layered on top.
In many companies, especially those earlier in their AI product development, AI is being folded into existing privacy, product, or commercial roles. Larger, more mature organizations are more likely to spin out explicit "AI" or "governance" titles, particularly where there's a heavy regulatory or compliance overlay.
Eli: Are employers mostly trying to upskill their existing in-house lawyers into AI roles, or are they going out and hiring people with deeper AI backgrounds?
Garrett: It’s a mix, but the market has definitely shifted toward wanting proven experience.
There are companies willing to say, “We’ll take a really strong tech or privacy lawyer who’s genuinely interested in AI and help them grow into it.” That happens.
But right now, I’d say more clients want someone who can demonstrate they’ve already done the work—that they’ve handled AI-related issues, worked with product and engineering teams on these questions, and can hit the ground running.
The risk tolerance is lower than it was a few years ago. Candidates who want to pivot into AI from adjacent fields need to make a very strong case for how their existing experience translates, and they need to show they’ve invested in learning—not just that they’re “curious about AI.”
Eli: That ties into my next question. When companies talk to you about ideal candidates, what are they looking for in terms of background, undergrad, practice area, certifications?
Garrett: There are a few layers to it.
First, there’s the core legal training: strong law school credentials, time at a reputable firm, and experience with privacy, product counseling, consumer protection, or regulatory work. That’s still the baseline.
Then there’s the cross-functional piece. The best product and AI lawyers are deeply plugged into the business. They work closely with product, engineering, data science, marketing, and trust & safety. They understand how the product actually works, who the internal stakeholders are, and what their day-to-day looks like. That cross-functional experience is huge.
On the credential side, we're seeing more interest in privacy certifications like the CIPP, and the same will likely happen with AI. AI-specific certifications or structured coursework signal that someone has put in the effort to formalize their knowledge rather than just reading headlines. Over the next couple of years, I expect those to become a more common way for candidates to stand out.
Eli: Let’s say a candidate already has some of that background. From your perspective, what do recruiters and hiring managers look for most as a sign of true technical fluency?
Garrett: It’s less about writing code and more about being able to “speak the language.”
Hiring managers want lawyers who can sit in a room with product and engineering and ask intelligent questions—who understand data flows, how a model is trained and deployed at a high level, where the data is coming from, and what the user journey looks like.
The candidates who do well are the ones who can tell concrete stories:
- “Here’s a product I supported.”
- “Here’s how we were using machine learning or AI.”
- “Here are the risks we identified and the guardrails we put in place.”
It’s that ability to articulate the work, connect it to business outcomes, and explain how their legal advice actually shaped the product. That kind of narrative really resonates.
Eli: Another thing I’m curious about: given all the buzz and opportunity, do you feel like the market is too hot for candidates, in the sense that people might be tempted to jump around a lot? Or is it actually more constrained than it looks from the outside?
Garrett: I don’t think we’re in an “everyone’s job-hopping constantly” phase right now.
If anything, job-hopping has cooled compared to five years ago. With the macro environment and recent layoffs, there’s more caution on both sides. Companies are wary of candidates who look like they’ve bounced too much, and candidates are more thoughtful about whether they really want to move.
There are lots of interesting roles, especially in AI and privacy, but it’s not unlimited. And because there’s so much interest in the space, the bar is higher. You’re competing not only with folks who are actively unemployed, but also with well-credentialed people who are secure in their jobs and just selectively looking for the “right” next step.
Eli: You mentioned geography earlier—markets like New York, the Bay Area, LA, and so on. How has remote work changed the competition for AI and tech-adjacent legal roles?
Garrett: Remote and hybrid work have definitely reshaped things.
On one hand, remote roles let companies tap into broader talent pools. Someone sitting in, say, the Midwest can now compete for jobs at a Bay Area or New York company that used to hire only locally. On the other hand, that also means a candidate in a smaller market is now competing with people from every major tech hub.
Some clients are still committed to particular hubs—they want people in-office in New York, San Francisco, or Seattle a certain number of days a week. Others are more flexible and will hire fully remote if they find the right person.
So for candidates, the question is often: “Am I willing to relocate or commit to a hub city?” If not, they can still find opportunities, but the competition for fully remote, high-end AI/privacy roles is intense.
Eli: For law students or early-career lawyers who are watching all this AI change and feeling anxious about their careers, what advice would you give them?
Garrett: First, don’t panic.
AI is changing a lot, but the fundamentals still matter: strong training, solid writing and analysis, good judgment, and the ability to work well with people. If you build that core skill set, you’ll be able to adapt as the technology evolves.
Second, be intentional about exposure. If you’re at a firm, try to get staffed on matters involving privacy, data, product counseling, or emerging tech. If you’re in-house, volunteer for projects that touch AI or data governance.
Third, show that you’re investing in yourself. That could mean taking relevant courses, getting a privacy or AI-related certification, writing or speaking about the issues, or just building a thoughtful point of view about the space.
The candidates who will do best are the ones who can say, “I understand the basics, I’ve seen some of this work up close, and I’m genuinely engaged with how AI is reshaping my practice area.”
Eli: My final big question is about the road ahead. We’ve talked about uncertainty. We’ve talked about growth. What do you think the path forward looks like for AI-adjacent legal roles over the next few years?
Garrett: I think we’re still in a growth phase, but it won’t be a straight line.
There are a lot of forces at play: regulation catching up, companies figuring out sustainable business models around AI, and startups competing hard for market share. Before we ever hit a true “bubble bursting,” I suspect we’ll see a wave of consolidation—more M&A as larger players acquire smaller ones with strong technology or teams.
For legal, that means continued demand for people who understand AI, privacy, and regulatory frameworks—especially around safety, consumer protection, and data. I’d expect 2026 and 2027 to be pretty active years from an M&A and regulatory standpoint, which usually translates to sustained need for strong in-house counsel in this space.
Could things slow at some point? Sure. But I don’t see AI-related legal work disappearing. I see it becoming more embedded in how companies operate over time.
DISCLAIMER:
The views and opinions expressed in this interview are those of the speaker and not necessarily those of The National Law Review (NLR). The NLR does not answer legal questions, nor will we refer you to an attorney or other professional if you request such information from us. If you require legal or professional advice, kindly contact an attorney or other suitable professional advisor. Please see NLR’s terms of use.