As privacy concerns regarding artificial intelligence skyrocket, its future and place in the law feel a little uncertain. The good news is that we’re not doomed – in fact, there’s never been a better time to get involved, say Whistler Partners.
“This is going to be called ‘Whistler’s Doomsday Take on AI!’ or ‘It’s Too Late! AI and Privacy Article!’” jokes Whistler Partners’ founding partner Sean Burke – though we’ve taken the creative liberty of tweaking the title just a tad. Burke’s comment was made in jest, but there’s an undeniable, underlying truth to it. The proliferation of artificial intelligence in our modern world has been a hot topic since the turn of the decade, but increasingly frequent developments in technology mean that the discussion has ramped up significantly in the last two years. Naturally, people (including us) are harboring some concerns about AI’s future, particularly around privacy and the use of our data. So, to avoid a descent into mass hysteria, we caught up with the folks at Whistler, as well as Rick Borden, privacy partner at Frankfurt Kurnit Klein & Selz, and Janel Thamkul, deputy general counsel at AI startup Anthropic, in the hopes that they’d alleviate our anxieties. And they did… kind of!
Consent
Let’s set the record straight: privacy concerns aren’t new, and neither is AI. “I’ve been dealing with this since the mid-2000s, but machine learning has been around since the 1970s,” Rick Borden explains, to our disbelief. “This is not a new technology. We have new implementations of it, but the core technologies have been around for a very, very long time.” Janel Thamkul seconds that: “I’ve been counseling AI research and development for six-plus years now, and we’ve had AI machine learning models, classification systems, and recommendation systems in development. AI is what surfaces the next YouTube video that you might want to watch, or it’s what’s used to recognize your face and unlock your phone.” AI is already more integrated into our society than many of us realize. Josh Bilgrei, managing director at Whistler, tells us, “Privacy and the use of people’s data, which escalated with social media and smartphones, have always been a big deal. AI is a powerful tool and it’s going to be the next big driver of technology. But people’s concerns, understandably, are that AI is just the use case to take data collection to the next level.”
"AI is what surfaces the next YouTube video that you might want to watch, or it’s what’s used to recognize your face and unlock your phone.”
What exactly does Bilgrei mean by people’s concerns? It’s simple: consent. “We’re in a system now where it says you can opt out, right? So, people have to proactively say they’re not giving consent,” Burke details. “The idea that us regular people have to make decisions and keep track of when and where we’ve given consent, and the potential for our data to be used a million different ways, is pretty unrealistic.” Look, we’ll hold our hands up and say that we’re not reading the terms and conditions, but be honest – neither are you. Because the literature on consent is often “a set of like five questions buried somewhere in every single app,” we’re not likely to read what our data is being used for, as many of us either can’t find the information, don’t understand what we’re consenting to, or both.
What about making ‘opted out’ the default? Wolf Konstant, senior consultant at Whistler and former general counsel and chief privacy officer of Turn (an adtech startup that was sold to Amobee Inc. in 2017), lets us down gently. “It’s almost too late,” he admits, “because Google and IBM have already scraped all the photos and information in different apps. The AI has already been trained on our personal data, so how do we reverse that now?”
Admittedly, the future is sounding a little unnerving, isn’t it? Well, as Burke warns us, “buckle up!” He poses a hypothetical: “What if AI analyzes your social media posts to determine your creditworthiness? They’re going to compile that and say, ‘oh, Sean goes to Europe a lot, and it looks like he eats at expensive restaurants. We’re going to factor that into his creditworthiness,’” to which Konstant quips, “And your health insurance premiums!”
“We sat down in 2009 – two years after the iPhone came out – and worked on a patent for the use of data from the Apple Watch (which didn’t exist yet) and other data to price medical insurance.”
What’s that saying – life imitates art? “We sat down in 2009 – two years after the iPhone came out – and worked on a patent for the use of data from the Apple Watch (which didn’t exist yet) and other data to price medical insurance,” Borden admits of his time running the patent office of an insurance company (just one of many paths he’s traveled – more on that later, though). He continues, “We predicted that the Apple Watch would exist – an insurer could use it to price medical insurance. For example, it might look at blood sugar levels; if you kept things within certain boundaries, you might get a discount. We did that in 2009.”
For a sense of just how mind-blowing that is, here are some things that happened in 2009: Barack Obama was sworn into office, Hannah Montana: The Movie was released, the first episode of Glee aired, and you couldn’t escape the Black Eyed Peas’ “Boom Boom Pow” no matter how hard you tried. We’re sure you weren’t thinking about how your data was being used back then, because we certainly weren’t. So, while consent is definitely a huge point of contention, “the FTC is moving away from that, the EU is a little confused on it too. I’ve heard the FTC Commissioner speak about it directly,” Borden continues, recounting that “she thinks consent is meaningless, and the California Attorney General’s office has said that too, because how many privacy policies can you read?”
Behavioral Advertising
Accordingly, Borden puts the growing issue of behavioral advertising on the table as a primary privacy concern. “The easiest way to see that is the proposed regulations in California surrounding automated decision-making,” Borden states. “They’ve said that behavioral advertising is a high-risk thing.” Behavioral advertising – when advertisers use your data to feed you personalized, targeted marketing messages – is best illustrated by Burke, who is particularly perturbed by the phenomenon: “At some point, we’ve all joked around with each other about how we talked about something, and it starts appearing in our Instagram feed. There’s an obscure horror movie reviewer named Joe Bob Briggs – totally a cult figure. I randomly was talking to a bartender about him and that night, I got a sponsored ad on Instagram about his new book.” Sound familiar? Burke exclaims in disbelief, “Someone walk me through – on what level did I consent to that?! Where the heck is the button that turns that off?!”
“You can’t have an advertiser create interest-based ads and deliver them in real time without AI.”
Borden might have an answer for us: “Where it comes from is a very different concept than you might think – I don’t think they’re listening to your phone... It’s much more complicated.” Hypothetically, “you have a conversation with somebody about something, and your phone knows you two were together or talking or something based on other data. Then, they go and look something up. Or you look something up or do something, and inferences are drawn regarding what you talked about.” And AI is at the center of this type of data collection: “You can’t have an advertiser create interest-based ads and deliver them in real time without AI,” Borden lets us know.
That said, it’s important to remember that not all companies collect your personal data for advertising purposes. Yes, most of these companies are for-profit – balancing “the economic but also ethical issue,” as Bilgrei supplements – and “a large part of their revenue is based on advertising,” Thamkul contextualizes. But, comparing companies like Google and Facebook to Anthropic, she explains that “Anthropic’s use of data is really about text and understanding how language is used; how words are formulated and how concepts interrelate so that technology can better communicate with humans in the same vernacular that the user’s employing. We don’t use personal data for the purpose of advertising or personalizing our service to the individual user.” Still, she’s sympathetic to people’s concerns, as hearing that your data is being collected can feel “invasive.” Just like the thought of companies listening to our phones!
Thankfully, Thamkul says, “It would be highly unlikely that commonly used products are ambiently listening to you without you turning on a particular functionality or knowing that’s happening.” Phew! Our relief may be short-lived, though, as “there are certainly emerging technology products that do want to do that,” Thamkul continues, and “the idea is that they’re essentially an extension of your brain and external perception. To the extent that the human brain has a hard time retaining certain volumes of data or remembering things, this technology can ambiently sense things for you, making you more productive and allowing you to operate at a higher level.”
"LLMs are going to be used more broadly because the technology is suddenly more flexible. It’ll just be more ubiquitous – not so much a change in the technology itself, but a change in how it can be used and applied.”
Don’t worry if that sounds equal parts cool and terrifying – we’re still quite a way away from this tech being a reality, but it does raise the question of how it would work. Huge amounts of information are right at our fingertips, and that’s not new; it’s been the case for decades. So, what’s the difference now? Enter large language models, or LLMs – foundation models trained on huge amounts of data, making them capable of understanding and generating natural language and other types of content. “You can do a broader set of things with those,” Borden explains, so “what may be a larger change is that LLMs are going to be used more broadly because the technology is suddenly more flexible. It’ll just be more ubiquitous – not so much a change in the technology itself, but a change in how it can be used and applied.”
Transparency & Regulation
While LLMs may make the problems surrounding AI seem as wide-reaching as the technology itself, the crux of the issue is largely transparency. “A theme of privacy legislation is being transparent,” Konstant notes. “It’s about giving the consumer as much information as possible, and ideally giving them the right to opt in or out to have a say in where their personal data’s being used.” Transparency is “a really critical consideration,” as Thamkul stresses; we need “disclosure awareness for the end user so that they know how to interact with them, as well as any technical mitigations.”
“It’ll be more challenging if you have a patchwork of regulation. Oftentimes, as you’ve seen with state privacy laws in the US, what happens is you sink to the lowest common denominator...”
It’s something that regulators are having to think about too, and Thamkul believes that “there’s an open question right now around how various privacy laws impact, infer, or relate to the development of AI.” To this, Bilgrei points out that “a lot of times in privacy, both with AI and non-AI, there are a million different jurisdictions. Not only do you have the EU, which usually takes a leading stance, but now, you also have different states trying to come in and do different stuff in the US.” Thamkul agrees, adding that “it’ll be more challenging if you have a patchwork of regulation. Oftentimes, as you’ve seen with state privacy laws in the US, what happens is you sink to the lowest common denominator or the most conservative privacy regulation. You run the risk of states rolling out regulation that actually might not make sense for how the technology’s developed.” She goes on to say that a commonly accepted standard of regulation would be useful, but it’s important that compliance strikes a balance by “not being so burdensome that companies actually can’t develop something useful for society, customers, or users.” Essentially, a lack of “unification, real thought processes and thought leaders” in this space makes things difficult not just for regulators, but also for companies that “want to do right” and are more inclined to respect our right to privacy.
Case in point: “Apple made the decision to only use on-device AI, meaning that your data isn’t being added to a large data set on the cloud. It’s great, but it may put them at a competitive disadvantage,” says Burke. “It definitely illustrates why the AI community is so concerned with ethics and how for-profit companies use AI. The most ethical practices might not be the most profitable. If you were a philosophy major like me, it’s fascinating.”
“Given how long it takes for regulation to go into effect and for compliance to happen, by the time certain regulations take effect, the whole landscape may look completely different.”
On the bright side, Borden thinks that a change in legislation to accommodate privacy concerns in AI is doable. After all, “we had the internet before the creation of the World Wide Web, and suddenly, you could put a graphical user interface on top of something that used to be text-based. There was a lot of time and money spent there in the 90s, and we changed the law based on that.” Thamkul is equally optimistic, citing the GDPR’s legitimate interest balancing test as a prime example of legislation that “creates discussion and encapsulates the very broad policy-related questions.” She takes care to emphasize that, when advising US policymakers on how to craft similar legislation, she cautions them against “overregulating for what the technology looks like right now.” Today, “generative AI looks like a chatbot interface that you talk to – maybe it helps you summarize documents and is helpful in terms of productivity. Where that will be in the next three months is going to change drastically,” so while creating legislation to regulate this technology is possible, “given how long it takes for regulation to go into effect and for compliance to happen, by the time certain regulations take effect, the whole landscape may look completely different.”
Becoming an AI lawyer
Speaking of a changing landscape, the next few years will be a crucial turning point for attorneys who are looking to break into privacy – and more specifically, privacy within AI. “There’s a bigger need for companies and law firms to have lawyers who understand how data is used,” Bilgrei shares. “Because these are newer fields, there aren’t as many people who are experts, so there’s more need for attorneys who understand privacy and AI – both from a transactional and a litigation perspective.” Even if your firm doesn’t have marquee AI clients, Konstant assures us that “you can definitely start moving into the space by working with clients in other verticals. Almost every tech company is using AI in some capacity now.” Konstant urges associates “to raise your hand. If you really want to be an AI lawyer, now’s the time to start positioning yourself to make a change and focus more on that.” Borden echoes that: “I promise you that you can do something else,” he affirms, “because I’ve had to.”
"If you really want to be an AI lawyer, now’s the time to start positioning yourself to make a change..."
It’s true – Borden’s career trajectory is astounding. When the tech market crashed in the early 2000s, he moved from being general counsel of a wireless startup to working at an insurance company; “everything I was doing was out the window,” he recalls. “I ended up in an insurance company, and guess what? Insurance companies are data companies. They spend more on IT than you would imagine. They’re technology companies that are bound by regulation, and they’re using information in all kinds of ways for very specific purposes – their product is a contract,” and that’s how Borden learned patents and IP. He also worked at Bank of America, where he learned cybersecurity from the information security team, even having a few of them teach him cryptography. “My resume is insane,” Borden confirms proudly, “people have no idea what to do with me!”
All that’s to say that everything he’s done throughout his career is related; “I had to learn different pieces of it. Now, I understand it from the technical side, so I’m able to work with people on the legal issues in a better way,” and that’s the point that Borden wants legal professionals to understand. “I think if you try to be an AI lawyer without understanding privacy, it won’t work,” he states resolutely. “The largest part of the regulation on AI right now has to do with the use of personal data. I run into this all the time – people say, ‘oh, I don’t want to be a privacy lawyer,’ but you need to know this stuff if you’re going to advise on any of these things!” Thamkul agrees, telling us, “It’s funny because, really, being an AI lawyer is just about having a holistic picture of a bunch of different existing legal issues. You’re a mini privacy lawyer, you’re a mini copyright lawyer. Having expertise in privacy and copyright will make you a great AI lawyer.”
"Being an AI lawyer is just about having a holistic picture of a bunch of different existing legal issues. You’re a mini privacy lawyer, you’re a mini copyright lawyer."
Beware, though: “old-school privacy – responding to data breaches, government investigations, litigation – that work is getting more commoditized,” Burke cautions. “It’s no longer as lucrative to be a privacy generalist because there’s a lot more competition now.” Burke adds, “If you can be the privacy person in AI, or even a vertical that’s using AI – private equity, financial services, healthcare – that can be lucrative.”
When it comes down to it, “being a good lawyer also means being a good businessperson,” Bilgrei informs us. “Someone who takes the time to understand and use the technology, who really understands it from both a user and developer perspective, and who can speak to that is, I think, the winner.” Ending on a high note, he goes on to say, “While it’s easy to go the Doomsday route – and I get it – there is so much potential here for young lawyers.” Burke supports this notion, leaving us with some encouraging advice: “There is a huge opportunity to be a leader in this space. I don’t think we can even imagine where this technology will be in ten years, but who wouldn’t want a hand in shaping it?”