Hi privacy navigators,
This week I want to dive deeper into two resources and round up all other news and resources in a quick list. Let me know how you like it.
Here’s what I’ll cover this week:
- The Austrian DSB Slaps Down Google’s Controllership Denial
- California’s Privacy Laws Meet AI in Healthcare
The Austrian DSB Slaps Down Google’s Controllership Denial
A data subject submitted a Data Subject Access Request (DSAR) directly to Google LLC, demanding access to their personal data under GDPR.
Google LLC dodged responsibility, passing the request off to Google Ireland Ltd., claiming the latter was the sole controller for EEA and Swiss operations.
This triggered an investigation by the Austrian DSB, which didn’t buy Google LLC’s claim that they were just a bystander.
Evidence uncovered showed Google LLC wasn’t just “helping out”: they were the mastermind behind key data processing decisions.
Why Can’t Google LLC Escape Being a Controller?
Let’s be clear — the DSB saw right through Google LLC’s attempt to paint themselves as a processor. Google LLC sets the tone for product development, infrastructure, and the rules of the game for how personal data is handled globally. That’s textbook controllership.
DSARs Are a Controller’s Problem, Period.
Here’s the deal: GDPR Article 4(7) says controllers are responsible for everything—from why data is collected to what’s done with it. And under Articles 12–23, responding to DSARs is non-negotiable. By directing data processing globally, Google LLC effectively made themselves accountable for these requests.
What nailed Google LLC?
- They control the playbook for EEA processing.
- They design the systems that collect and process personal data.
- Their contracts with Google Ireland Ltd. didn’t effectively hand off responsibilities.
In short, the DSB ruled: “You can’t be this involved and not call yourself a controller.”
Signs You’re a Controller (Even If You Deny It):
- You decide what data gets collected and why.
- You build the systems and infrastructure for processing.
- You set the rules — from storage to security to compliance.
- You enforce standards across global operations.
- You call the shots when it comes to how personal data is used, shared, or accessed.
Sponsored Message
Handling GDPR compliance is complex, but it doesn’t have to be overwhelming.
Conformally automates compliance tasks, saving time and avoiding fines.
Map data categories to purposes, track vendors, and much more with Conformally.com.

California’s Privacy Laws Meet AI in Healthcare: A Step Forward or a Half-Measure?
A legal advisory was released exploring how California privacy laws, particularly CCPA and CPRA, apply to artificial intelligence (AI) in healthcare.
The document zeroed in on how AI systems process sensitive patient data—everything from diagnostics and predictive analytics to treatment recommendations.
It reviewed compliance requirements while calling out gaps in the legal framework when dealing with the sheer complexity of AI-driven data use.
California Privacy Laws Take a Swing at AI
CCPA and CPRA require transparency, purpose limitation, and data minimization, even for AI systems. But let’s be real—these laws weren’t written with AI in mind. AI systems operating in healthcare, often processing sensitive personal information, must navigate these laws, but it’s far from seamless.
Accountability for Black Box Decisions
Transparency and accountability sound great, but how do you explain a decision made by an AI that’s essentially a black box? California law says you need to. Organizations using AI in healthcare are playing catch-up, building governance systems and audit trails that should’ve been there from day one.
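To make this concrete, here’s a minimal sketch of generating per-prediction explanations with the open-source SHAP library. The model, features, and labels are hypothetical stand-ins (random data in place of patient features), not anything from the advisory or a real clinical system:

```python
# Minimal sketch: per-prediction feature attributions with SHAP.
# All data, model choices, and names here are hypothetical stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # stand-in for de-identified patient features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in label, e.g. "flag for clinician review"

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to individual input features,
# giving a per-decision artifact you can attach to an audit record.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])
print(attributions)
```

Attributions like these don’t fully open the black box, but they give you a per-decision record to show a regulator or a patient, which is far better than “the model said so.”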
Is This Really Enough?
Sure, California’s privacy laws are ahead of the curve in the U.S., but are they enough for the pace of AI innovation? The advisory raises critical questions: Can existing laws truly account for the ethical dilemmas, biases, and unpredictability of AI systems? Or are we just papering over the cracks?
Actions Organizations Should Take When Using AI in Healthcare
Here is what I consider the bare minimum that organizations should do when using AI in their systems:
- Perform a deep-dive DPIA for all AI-driven processes, not just a checkbox exercise. AI can go rogue fast.
- Build audit trails that track every step of the decision-making process to prove accountability (a minimal sketch follows this list).
- Make AI explainable—no one cares how sophisticated your algorithm is if you can’t explain its outputs.
- Update your privacy notices to clearly spell out how AI systems impact patients—don’t hide behind legal jargon.
- Plan for AI failures. If the system makes a bad call, you need a clear strategy to fix it, fast.
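On the audit-trail point, here’s a minimal sketch of an append-only decision log in Python. The function, field names, and file format are my own assumptions, not anything prescribed by CCPA/CPRA:

```python
# Minimal sketch: append-only, tamper-evident audit trail for AI-assisted decisions.
# Function and field names are hypothetical, not mandated by any statute.
import datetime
import hashlib
import json


def log_decision(trail_path, model_version, inputs, output, reviewer=None):
    """Append one decision record, chained to the prior file state by hash."""
    try:
        with open(trail_path, "rb") as f:
            prev_hash = hashlib.sha256(f.read()).hexdigest()
    except FileNotFoundError:
        prev_hash = None  # first record in a new trail

    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,            # ideally pseudonymized references, never raw PHI
        "output": output,
        "human_reviewer": reviewer,  # supports the "plan for AI failures" point above
        "prev_hash": prev_hash,      # editing earlier records breaks the chain
    }
    with open(trail_path, "a") as f:
        f.write(json.dumps(record) + "\n")


log_decision(
    "audit_trail.jsonl",
    model_version="triage-model-v1",
    inputs={"patient_ref": "p-123"},
    output="flag_for_clinician_review",
    reviewer="dr_smith",
)
```

Chaining each record to a hash of the prior file state is a cheap way to make after-the-fact edits detectable, which is exactly the property an auditor will probe.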