The past year has been a busy one for lawmakers and lobbyists concerned about AI. Most notably in California, Governor Gavin Newsom vetoed a high-profile AI bill while signing 18 other new AI laws.
According to Mark Weatherford, 2025 could see just as much activity, especially at the state level. Weatherford has seen, in his words, the “sausage making of policy and legislation” at both the state and federal levels: he served as chief information security officer for both California and Colorado, and as deputy under secretary for cybersecurity at the Department of Homeland Security under President Barack Obama.
Weatherford says that although he has held a variety of job titles, his role usually involves “elevating the conversation around security and privacy and influencing how policy is done. It boils down to figuring out how to be able to help.”
Last fall, he joined synthetic data company Gretel as vice president of policy and standards, so I was excited to talk to him about what he thinks comes next in AI regulation and why he thinks states are likely to lead the way.
This interview has been edited for length and clarity.
The goal of elevating the conversation will likely resonate with many in the technology industry, who have watched congressional hearings about social media and related topics, held their heads in their hands, and despaired over what elected officials do and don’t know. How optimistic are you that lawmakers can get the context they need to make informed regulatory decisions?
Well, I’m confident they can get there. What I’m less confident about is the timeline to get there. You know, AI is changing daily. It’s mind-blowing to me that issues we were talking about just a month ago have already evolved into something else. So I am confident the government will get there, but they need people to help guide them, to staff them, to educate them.
Earlier this week, the U.S. House of Representatives released a report from its Task Force on Artificial Intelligence, which got started about a year ago. It’s a 230-page report, and I’m working my way through it right now. (Weatherford and I first spoke in December.)
When it comes to the sausage making of policy and legislation, you have two very partisan organizations trying to come together and create something that makes everybody happy. It’s been a long time coming, but as we move into a new administration, everything is up in the air in terms of how much attention certain things are going to get or not.
Your sense is that in 2025 we may see more regulatory action at the state level than at the federal level. Is that right?
I absolutely believe that. In California, I think Governor (Gavin) Newsom has signed maybe 12 pieces of legislation in the last couple of months that have to do with AI. (Eighteen, by TechCrunch’s count.) He vetoed the big AI bill.
In fact, I spoke yesterday at the California Cybersecurity Education Summit in Sacramento, and I talked a little bit about the AI legislation that’s happening across the United States, the bills that have been introduced at the state level over the last 12 months. So there’s a lot going on there.
And I think that’s one of the big concerns, not just in AI but in technology in general and in cybersecurity, and we’re seeing it right now on the artificial intelligence side: there’s a requirement for harmonization. Harmonization is the word Harry Coker, the (Biden) White House’s national cyber director, has been using to (describe the effort). Right now we have a (situation) where everybody is doing their own thing, and then businesses have to figure out how to comply with all these different laws and regulations across different states.
So I think there’s going to be a lot more activity on the state side, and hopefully, if we can harmonize these a little bit, businesses won’t face such a diverse set of regulations to comply with.
I hadn’t heard that term, but that was going to be my next question: I imagine most people would agree harmonization is a good goal, but are there mechanisms by which it’s actually happening? What incentive do states have to make sure their laws and regulations actually line up with one another?
Honestly, there isn’t much incentive to harmonize regulations, except that I can see the same kind of language popping up in different states, which to me indicates they’re all watching what each other is doing.
But from a purely “Let’s take a strategic-planning approach to this across all the states” standpoint, that’s not going to happen. I don’t have a lot of hope for that.
Do you think other states might follow California’s lead in terms of a general approach?
A lot of people don’t want to hear this, but California does kind of do all the heavy lifting: they do a lot of the research that goes into some of these laws, and they push the envelope (on tech legislation) in a way that helps other states come along.
The 12 bills Governor Newsom just signed covered everything from pornography to the use of data to all kinds of different things, and they were pretty comprehensive about leaning forward there.
My understanding is that the more targeted, specific measures passed, while the larger regulation that got most of the attention was ultimately vetoed by Governor Newsom.
I could see both sides of it. There’s a privacy component that was initially driving the bill, but you also have to consider the cost of doing these things and the requirements they would impose on AI companies’ ability to innovate. There’s a balance there.
I fully expect that (in 2025) California will pass something a little more stringent than what they did (in 2024).
And your sense is that at the federal level there’s certainly interest, like the House report you mentioned, but it’s not necessarily going to be as much of a priority, and we shouldn’t expect to see major legislation (in 2025)?
Well, I don’t know. It depends on how much emphasis Congress brings to it. I think we’ll see. From what I’ve read, the emphasis is going to be on less regulation. But technology in many respects, certainly around privacy and cybersecurity, is kind of a bipartisan issue; it’s good for everybody.
I’m not a big fan of regulation; there’s a lot of duplication and a lot of wasted resources that come with so many different laws. But at the same time, when the safety and security of society is at stake, as it is with AI, there’s definitely a place for more regulation.
You said it’s a bipartisan issue. My sense is that when there is a split, it isn’t always predictable: it isn’t simply all the Republican votes versus all the Democratic votes.
That’s a great point. Geography matters, whether we want to admit it or not, and that’s why places like California have been really proactive on some of their laws compared with some other states; that’s the way things tilt.
Obviously, this is an area Gretel works in, but it sounds like you, or the company, believe that as there’s more regulation, it will push the industry in the direction of more synthetic data.
Perhaps. One of the reasons I’m here is that I believe synthetic data is the future of AI. Without data, there is no AI, and data quality is becoming more of an issue as the pool of available data is used up or shrinks. There’s going to be an increasing need for high-quality synthetic data that ensures privacy, eliminates bias, and takes care of all of those kinds of nontechnical, soft issues. We believe synthetic data is the answer to that; in fact, I’m 100 percent convinced of it.
I’d love to hear more about what brought you to that point of view, because I’m sure there are others who recognize the problem you’re describing but think of synthetic data as amplifying the biases and problems in the original data rather than solving them.
Sure, that’s the technical part of the conversation. Our customers feel we’ve solved it. There’s this concept of the flywheel of data generation: if you generate bad data, it gets worse and worse, but the controls you build into the flywheel verify that the data isn’t getting worse, that it’s staying the same or getting better with each spin of the flywheel. That’s the problem Gretel has solved.
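Weatherford describes the flywheel only at a high level, but the "control" he refers to can be pictured as a simple accept/reject loop. The toy Python sketch below is my illustration of that idea, not Gretel's actual method; the generator and quality metric are placeholder stand-ins for whatever a real synthetic-data system would use. Each spin of the loop regenerates data from the previous batch, but a new batch is kept only if a quality check says it is as good as or better than the last one:

```python
import random
import statistics

def generate_synthetic(seed_data, n=100):
    # Placeholder generator: sample around the seed data's mean and spread.
    # A real system would use a trained generative model instead.
    mean = statistics.mean(seed_data)
    stdev = statistics.pstdev(seed_data) or 1.0
    return [random.gauss(mean, stdev) for _ in range(n)]

def quality_score(real, synthetic):
    # Toy metric: how closely the synthetic mean tracks the real mean
    # (higher is better; real metrics would check distributions, privacy, bias).
    return -abs(statistics.mean(real) - statistics.mean(synthetic))

def flywheel(real_data, iterations=5):
    # Each iteration feeds synthetic data back into the generator, but a
    # candidate batch is accepted only if quality stays even or improves --
    # the control that stops the flywheel from degrading over time.
    best = generate_synthetic(real_data)
    best_score = quality_score(real_data, best)
    for _ in range(iterations):
        candidate = generate_synthetic(best)
        score = quality_score(real_data, candidate)
        if score >= best_score:
            best, best_score = candidate, score
    return best
```

The key design point is the gate in the loop: without it, generating data from previously generated data compounds errors, which is exactly the degradation Weatherford says the flywheel's controls are there to prevent.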
Many of the Trump-aligned figures in Silicon Valley have warned about AI “censorship,” meaning the various weights and guardrails that companies put around the content generative AI creates. Do you think that’s likely to be regulated? Should it be?
When it comes to concerns around AI censorship, the government has a number of levers it can pull, and when there’s a perceived risk to society, it’s almost certain to take action.
However, finding the sweet spot between reasonable content moderation and restrictive censorship will be a challenge. The incoming administration has been pretty clear that “less regulation” will be the modus operandi, so whether it’s through formal legislation or executive orders, or through less formal means like (National Institute of Standards and Technology) guidelines and frameworks, joint statements, and interagency coordination, we should expect some guidance.
I want to come back to this question of what good AI regulation looks like. There’s this huge spread in how people talk about AI: it’s either the most amazing technology or wildly overhyped, it’s going to save the world or destroy it. There are so many divergent opinions about the technology’s potential and its risks. How can one, or even several, pieces of AI regulation cover all of that?
I think we have to be very careful about how we manage the sprawl of AI. We’ve already seen some of the really negative aspects with deepfakes; it’s disturbing to see high school kids producing deepfakes and getting in trouble with the law. So I think there’s a place for legislation that controls how people can use artificial intelligence: new laws that reinforce current laws but simply add the AI component to them.
All of us who have been in the technology world have to remember that when we talk to family and friends who aren’t in technology, the things we consider second nature often mean literally nothing to them; most of the time they have no clue what I’m talking about. We don’t want people to feel like big government is over-regulating, so it’s important to talk about these things in language that nontechnical people can understand.
But on the other hand, as you can probably tell from talking to me, I’m optimistic about the future of AI. I see so much good coming. I do think we’re going to have a couple of bumpy years while people come to understand it better, and legislation has a place there, around AI.