Monday, January 5, 2026

Santa’s Naughty and Nice list for privacy 2025

Santa has been an early adopter of large-scale data mining and AI predictive algorithms to compile this year’s list of Naughty and Nice.  A (naughty) elf secretly shared a copy with me.  Santa is an older North European guy who’s not afraid to call out the naughty ones, and he knows he’s not particularly popular in the world of Big Tech, especially since Santa doesn’t deliver the $100 million support yachts on the Silicon Valley wish list.  Santa knows 2025 has been a terrible, terrible year for online privacy.

Naughty

The AI industry:  engineered the biggest data theft in human history to train its AI models, running roughshod over intellectual property, copyright and personal data rights.  It took all that data without anyone’s consent, without paying for it, and without any transparency about what it was taking.  And despite a lawsuit here or there, the industry seems to be getting away with it.  But Santa saw it.  AI-generated slop and AI scams started infiltrating every aspect of our lives, and things keep getting worse, while the AI industry that made the tools to create the slop and scams washed its hands of the damage.  Even AI companies have trouble distinguishing real from fake; good luck to the rest of us.  But Santa isn’t resentful, and he gave the AI tech gurus a free one-week vacation to learn AI scam techniques in Nigeria, and their teenage daughters are particularly welcome too.

Santa knows that AI models need to be trained on data, like human brains, learning, e.g., to associate the word “zebra” with a picture of a zebra.  Neither the machines nor the humans need to have actually seen a living zebra to learn the association.  But to enable the current state of generative AI, the AI models must be trained on vast amounts of data.  Santa knows there are legal ways for AI companies to get this data to train their models.

Santa knows that nice AI players could license curated data sets:  some companies have built businesses creating data sets specifically to sell to AI companies for training their models.  Imagine a company building a vast library of cards, each combining a picture of a zebra with the word zebra.  Scale AI is an example of such a data annotation company.  These companies build their databases by employing lots of very low-paid humans, and then license the results to the AI industry.  Privacy laws around the world would permit such databases to label public figures, like Bill Clinton, by name, but not private citizens, like most of us.
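
To make that concrete, here’s a toy sketch of what such an annotated record, and a privacy-aware licensing filter, might look like.  This is purely my own illustration (the field names and filter are hypothetical, not Scale AI’s or anyone’s actual schema):

```python
# A toy sketch (my own illustration, not any vendor's actual schema) of the
# kind of labeled records a data-annotation company might license to AI firms.
from dataclasses import dataclass

@dataclass
class AnnotatedImage:
    image_uri: str                 # where the picture lives
    label: str                     # what the annotators say it depicts
    contains_person: bool
    person_is_public_figure: bool  # e.g., Bill Clinton: yes; you or me: no

def licensable(record: AnnotatedImage) -> bool:
    """Privacy-aware filter: animals and objects are fine; people are
    only labeled by name if they are public figures."""
    return (not record.contains_person) or record.person_is_public_figure

dataset = [
    AnnotatedImage("img/zebra_001.jpg", "zebra", False, False),
    AnnotatedImage("img/clinton_speech.jpg", "Bill Clinton", True, True),
    AnnotatedImage("img/street_photo.jpg", "unknown pedestrian", True, False),
]

clean = [r for r in dataset if licensable(r)]
print([r.label for r in clean])  # ['zebra', 'Bill Clinton']
```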

Santa also knows that AI players could license other people’s copyrighted data:  companies that own the copyright on their datasets can license (sell) them.  For example, The New York Times could license its archive of articles.  But AI companies have in the past simply taken the data, and sometimes they’ve been sued for it.  One of them, Anthropic, settled with copyright owners for a very large amount of money.

https://www.reuters.com/legal/government/new-york-times-reporter-sues-google-xai-openai-over-chatbot-training-2025-12-22/  

These lawsuits, and that settlement, were based on copyright.  But what about privacy?  Santa doesn’t like it when naughty people steal someone’s property.  Santa has never seen an AI company pay individual humans for taking their personal data to train its models.

The online ads industry:  a giant ecosystem of publishers, advertisers and online ad exchanges continues to develop ever more intrusive and secretive data collection, monitoring, profiling and targeting methods.  The industry increasingly uses “fingerprinting” to uniquely identify you and me and everyone online.  For example, I clicked on some random news website called Daily Mail:  it informed me that I had the choice of “subscribing” (paying money to the Daily Mail) or alternatively “consenting” to share my data (basically everything it was technically able to collect) with its 1436 partners for profiling and targeted ads…and that I was invited to visit the websites of each of those 1436 partners to understand what they would do with my data.  (This absurd farce passes as “consent” to the processing of my personal data, in the grotesque world of online advertising.)  The online ads industry has become a giant data orgy, and guess the role that you and I play in that orgy.  Santa offered the online ads industry honchos a free one-week vacation to Jeffrey Epstein’s Caribbean island, to join a seminar on the legal concept of “consent” in the context of a “deal” between rich and powerful older men claiming to obtain the consent of much younger women and girls for intimacy.  The online ads world repeats that pattern, where rich and powerful players claim to have obtained our “consent” to share and abuse our personal data with all their thousands of partners.
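
Since “fingerprinting” sounds abstract, here’s a minimal sketch of why it works:  none of these attributes identifies you on its own, but hashed together they form a near-unique identifier that follows you around without any cookie.  The attributes and code are my own illustration, not any ad-tech vendor’s actual implementation:

```python
# A minimal illustration (mine, not any ad-tech vendor's code) of why
# "fingerprinting" works: none of these attributes identifies you alone,
# but hashed together they are close to unique across millions of browsers.
import hashlib

def fingerprint(attrs: dict) -> str:
    # Deterministic ordering so the same browser always hashes the same way.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

browser = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "2560x1440",
    "timezone": "Europe/Brussels",
    "language": "en-US",
    "installed_fonts": "Arial,Calibri,Garamond,...",
    "gpu": "ANGLE (NVIDIA GeForce RTX 3060)",
}

print(fingerprint(browser))  # a stable ID that follows you, no cookie required
```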

Third-party cookies:  everyone’s favorite online tracking tech was on death watch as a privacy-invasive, untenable technology, until Google decided to cease its efforts to phase it out.  Google abandoned its project to improve privacy online.  https://www.theverge.com/news/653964/google-privacy-sandbox-plans-scrapped-third-party-cookies  Why?  I’m guessing because the ecosystem of publishers, advertisers and ad exchanges just enjoyed the current data orgy too much.  Too bad for the individual humans whose data will continue to be shared (or abused) amongst thousands of “partners”.

E.xtremely L.oud O.bnoxious N.arcissist tech billionaires:  Santa just wants them to STFU.  Santa offered E.L.O.N.s a free one week vacation to Nelson Mandela’s former cell on Robben Island, in solitary confinement.  Santa also invited them to consider moving, or returning to, South Africa, permanently, if they want to escape California’s proposed “billionaire tax”.  

Nice  

The Regulators:  in particular, the Irish Data Protection Commission continued its pretense of regulating privacy on behalf of all of Europe in its role as the “one stop shop”...flop…and successfully finished another year doing nothing to upset the Big Tech geese laying golden eggs in Dublin.  You can’t be nicer than that to Big Tech.  Its sister regulator in the US, the Federal Trade Commission, spent the year doing even less, stuck in a Trump/Musk-induced organizational coma after Trump fired its Democratic commissioners.  The competition authorities had even less impact:  the most important antitrust case of the last quarter century (Google) ended with judge-imposed remedies so gentle that Google’s stock rocketed 9% on the news.  Santa offered the regulators a one-week vacation to a Buddhist retreat:  the world will end, you can't do anything about it, just accept it.

Individual humans are starting to wake up and fight back to preserve their world:  real people are waking up to real threats to their freedoms and privacy.  Real people are starting to object to large industrial data centers being built in their backyards.  Real people are starting to object to AI companies taking their data to train models, deepfaking their voices or faces, making basic human decisions about their lives, like getting hired or fired, or scamming them.  Real people are starting to object to the mass surveillance system that the open web has become.  Santa offered them heart, brain, and courage…just like the Wizard of Oz.

How about you?:  have you been naughty or nice this year?  Santa knows the naughty ones have gotten much, much richer this year, by stealing data from the other 8 billion people, little by little, invisibly, leaving everyone else a little poorer.  This trend is not your friend.


Thursday, December 4, 2025

How the One Stop Shop Flop undermined Europe’s grand privacy law

I believe in privacy, and I believe in the need for privacy laws to guarantee privacy.  I’ve been calling for serious global privacy laws for two decades:

http://news.bbc.co.uk/2/hi/technology/6994776.stm


Europe has been the continent with the world’s most serious and strict privacy law, but only on paper.  It’s been almost a decade since Europe’s landmark privacy law, the GDPR, was passed.  It’s fair now to assess its impact.  The European Commission is taking a look at certain revisions, mostly irrelevant and mostly in the direction of loosening some of its paperwork provisions.  Meanwhile, leading privacy advocates are already warning against weakening the GDPR:  https://www.theguardian.com/commentisfree/2025/nov/12/eu-gdpr-data-law-us-tech-giants-digital


I have a lot of experience working with these laws, and I have a clear opinion:  the GDPR has failed.  It has not met any of its key goals.  If you have any doubt, ask yourself the simple question:  do I have more or less privacy today than a decade ago?  There are many reasons for this failure, but number one is the One Stop Shop Flop.  


The European lawmakers passed a strict privacy law, on paper, with massive potential fines (up to 4% of worldwide turnover) for non-compliance, on paper.  But then, in a massive blunder, the European lawmakers created the “one stop shop” notion, meaning that non-European companies, like Chinese and US Big Tech, could pick any one EU country to regulate them.  Guess what:  they picked Ireland, their longstanding tax haven, as their regulatory haven too.  Before this law, all 28 European countries could enforce privacy laws against Big Tech.  After this law, only Ireland.  Hallelujah for American and Chinese Big Tech…


But let’s keep things simple.  A law with no enforcement will not be respected.  And here, a small regulator, with a small staff, based in a small country that makes its money by being a tax haven to American and Chinese Big Tech, was tasked with enforcing this law on behalf of 450 million citizens of the EU?...  I’m hardly the only person to criticize this farce:  https://noyb.eu/en/former-meta-lobbyist-named-dpc-commissioner-meta-now-officially-regulates-itself


I love to take early morning walks in a park near my home.  Every morning I see the same spectacle:  a few adorable dogs chase the local squirrels.  The dogs bark and wag their tails, the squirrels scamper and scurry, and the humans chuckle.  It’s fun for all, because everyone knows…the dogs will never catch a squirrel.  And indeed, the “lead” privacy regulator for Europe, in Ireland, has never caught a Big Tech squirrel, and never will.  The Irish regulator has never imposed the fines that the European lawmakers envisaged.


Meanwhile, the European Commission proposes to fix this iconic privacy law…not by asking why this key law has failed, but by suggesting its paperwork documentation obligations could be streamlined.  Meanwhile, the one stop shop flop continues, and the Big Tech squirrels are not worried about any privacy law enforcement.  The losers, of course, are the 450 million Europeans who were promised a strict privacy law.  


The lesson for the future, in particular for AI, is clear.  You can pass laws (as Europe already has done in its AI Act), but a law with no enforcement won’t be respected.  Ask the squirrels. 


Wednesday, November 12, 2025

Data centers coming to a town near you

You may think of your personal data as a digital asset, but it also has a physical home.  At rest, it lives in a data center (or several).  As you’ve read in the press, tech companies are building data centers at a frenzied pace.  The massive computing needs of AI push them to build as fast as they can, regardless of the cost.  It’s now debatable whether we’re living through a boom or a bubble in data center construction:  https://www.theguardian.com/technology/2025/nov/02/global-datacentre-boom-investment-debt


Google plans to spend something like 90 billion dollars this year, mostly on data centers (not humans).  This week alone it announced something like 5 billion to be spent on data centers in Germany.  This made the German politicians very happy, crowing about Germany’s high-tech investment environment.  https://www.tagesschau.de/wirtschaft/unternehmen/google-investition-deutschland-klingbeil-100.html


But should local people really be happy to see a data center built in their backyard?  Should politicians really welcome them?  The benefits are few:  data centers create very few long-term jobs, mostly for junior technicians.  The servers, the chips and the high-value tech work come from elsewhere, often the other side of the planet.  The local harms can be more significant:  data centers consume vast amounts of power and water (to run and to cool them).  This can stress the electricity grid, the water supply, or both, and sometimes leads to rising electricity costs for everyone.  https://fortune.com/2025/11/08/voter-fury-ai-bubble-high-electricity-prices-offseason-elections-openai/


The environmental impact of data centers is now the subject of study.  https://www.theguardian.com/technology/2025/nov/10/data-centers-latin-america


Back in 2007, I was asked by my then-employer Google to publicly officiate at the opening of its big new data center in Belgium.  The investment was valued at around 250 million.  (Compare that to Facebook’s recent announcement of plans to spend 600 billion on data centers…https://finance.yahoo.com/news/meta-plans-600-billion-us-175901659.html ).  But back then, 250 million was a big deal.  Even the Belgian Minister-President (later Prime Minister) Elio Di Rupo came along to celebrate it, and he and I took a long, pleasant stroll through the town of Mons after the ceremony.  Just like the German politicians welcoming Google’s data center announcement this week.


But shouldn’t we all know better now?  Data centers are the physical concrete of the digital world’s industrial infrastructure.  The industrial revolution needed coal mines and steel mills, until it didn’t.  The Google data center I opened in Belgium was in a depressed region full of abandoned coal mines and steel mills.  I fear history repeats itself.


I understand that local politicians will welcome “investment” in their countries, even if 99% of the value of that investment comes from servers and chips and software created far away.  But we all need to look at the local environmental cost of these data centers.  If not, you’re just breathing the pollution of someone else’s beautiful shiny Ferrari.  


Sunday, November 9, 2025

Taking the Humans out of Human Resources

We all know the trend:  AI is replacing human workers.  We all know that trend is accelerating.  One of the industries where AI is replacing humans is...ironically, human resources.  And even more ironically, we humans will increasingly be hired, or not hired, based on decisions made by an AI agent.  I think we deserve to know why.  Privacy law requires it.

Let’s take the example of a company called Workday.  It’s a human resources outsourcing company using AI to replace humans in human resources.  https://www.workday.com/en-us/homepage.html


Lots of companies, including my former employer Google, have outsourced human resources functions to Workday.  When I worked at Google, I was required (like all employees) to provide very detailed, sensitive information to Workday.  I had little idea what Workday did with that information, or where it sent it (back to the US, I’m guessing, since it’s a US-based company?).  Workday has been sued in the US on allegations that its tech discriminated against people over 40 in hiring.  https://www.cnn.com/2025/05/22/tech/workday-ai-hiring-discrimination-lawsuit


Companies like Workday are now using AI to replace humans in the human resources functions, like filtering CVs or conducting “interviews” between AI agents and human job applicants.  You will be hired, or not, based on the automated decision made by an AI agent.  


AI agents work by defining “success” based on the models on which they were trained.  They then assess job candidates, or current workers, by how closely they correlate to those models of “success”.  They can collect and assess hundreds or thousands of characteristics, far more than any human could.  I’m concerned that AI models will reinforce whatever forms of bias exist in the training data’s model of “success”.  Imagine an AI “interview”:  what data is it collecting?  It’s not collecting data like a human interviewer.  An AI “interviewer” can measure pupil dilation, eye movements, head angles, speech rhythms, vocabulary spectrum, verbal biometrics…you get the idea:  vastly more sensitive data than a human could collect, all to build an assessment.
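
To see how bias rides along, here’s a deliberately simplified sketch of correlation-to-“success” scoring.  Everything here (the features, the numbers, the cosine-similarity rule) is my own hypothetical, not Workday’s or anyone’s actual system:

```python
# A simplified sketch (my hypothetical, not any vendor's actual system) of
# correlation-to-"success" scoring. If the "success" profile was learned from
# a biased sample, the bias transfers straight into the score.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Features per candidate, e.g. [speech_rate, eye_contact, vocab_breadth, age_norm]
success_profile = [0.9, 0.8, 0.7, 0.3]   # distilled from past "successful" hires

candidates = {
    "A": [0.85, 0.75, 0.8, 0.25],   # resembles the historical profile
    "B": [0.9, 0.8, 0.9, 0.9],      # at least as strong, but older (high age_norm)
}

for name, feats in candidates.items():
    print(name, round(cosine(feats, success_profile), 3))
# Candidate B scores lower purely because age diverges from the profile.
```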


Let’s take the example of an AI agent tasked with hiring a CEO for a tech start-up.  The AI agent will search the world’s public data about successful CEOs of tech companies.  Which characteristics will it find?  How will it weight each of those characteristics?  Let’s run a thought experiment, based on fairly obvious assumptions about what an AI agent would find in the current crop of successful tech CEOs.  I’m not going to name names, but they’re all household names anyway.


Ethnicity:  high correlation for being Jewish or Indian.  Negative for being Hispanic or Black.


Gender:  high correlation for male.  Negative for being female. 


LGBT:  positive correlation for being a cis gay male.  Negative for all other categories of LGBT.


Neurodivergence:  high correlation for being on the “high functioning” autism spectrum.  Negative correlation for all other categories of neurodivergence.


Personality:  high correlation for narcissistic personality disorder.  Negative correlation for introversion.  


Age:  high correlation for 25-40.  Negative correlation for over 40, increasing by age.


You may want to dispute some of my assumptions above.  I’ll accept that.  The list of characteristics above may vary from function to function.  What’s “good” in the list above for a tech CEO may be “bad” for a mid-level tech worker.  


As companies turn over their human resources functions to outsourcing companies that have in turn outsourced those functions to machines…it’s essential that we ask these companies what criteria they’re using to make automated decisions about our most basic human aspirations, like getting or keeping a job.


I know privacy regulators are under-resourced.  So, to my friends in the privacy regulatory community, I’ve drafted the simple questions you should send to companies in your countries.  


Do you engage third party companies to provide human resources functions?  If so, please provide their names.


Do you provide these companies with criteria to apply to recruiting or evaluating your employees?  If so, please list them.  Can your employees or applicants object to sharing their data with these companies?  


Do these companies transfer such data outside of your country, e.g., to the US?  If so, under what legal basis?  Can your employees object to this?


Do these companies use AI?  If so, on what data has it been trained?  


What steps do these companies take to eliminate bias (relating to race, age, gender) in their automated decisions?


What transparency do these companies provide about how they make their automated decisions?  List the criteria used to make these automated decisions.  Can an employee or applicant object to the use of such automated decisions?  What would be the consequences of such an objection?  Is there an ability to appeal such automated decisions?  Is there a human in the loop?


Look.  I know we (humans) are on a path to automating more and more of our human jobs.  Human resources is one of them.  Technology is taking humans out of human resources.  But it’s up to us to keep humanity in the machines.  Privacy law is central to this.  Let me conclude by citing Google’s own AI-generated overview of the topic:


European privacy laws, primarily the EU's General Data Protection Regulation (GDPR) and the upcoming AI Act, regulate automated decision-making by providing individuals with the right to an explanation, to human intervention, and a qualified right to opt-out of certain automated decisions. The GDPR's Article 22 prohibits decisions based solely on automated processing that produce legal or similarly significant effects, unless specific conditions are met, such as explicit consent or a necessary contract with safeguards. Key rights include transparency, the right to a human review, and the right to challenge such decisions.

Thursday, October 30, 2025

Worried about AI? Don’t worry, it will cure cancer

You have surely read the dire predictions of the leaders of the field of AI:  everyone from the leaders of OpenAI and Anthropic to Bill Gates has been speaking about the massive challenges that AI will thrust onto our societies, in particular the massive imminent wave of job destruction.  So, it’s refreshing to hear from Google’s President that AI “will cure cancer”.  https://fortune.com/2025/10/26/google-ruth-porat-cure-cancer-in-our-lifetime-with-ai/

Some of you will be cynical and suggest that she is just whitewashing AI in the PR interest of her employer, and her own.  She’s repeated that same line about AI curing cancer more times than Britney Spears has gotten wasted.


I’d like to hear more leaders of companies building AI tools engage in a public discussion about the good and bad consequences of their inventions.  The tech industry is famous for privatizing gains and socializing losses.  In other words, building their businesses and their share prices on the good use cases of their inventions, while letting other people, governments or societies deal with the negative fallout.  Heads I win, tails you lose.  For example, a company could make a fortune “curing cancer”, but would it be held responsible if the same AI tool it built to cure cancer were re-purposed to build bio-weapons?


Let’s take the example of using AI to screen CVs.  Many people looking for jobs today will be auto-rejected by an AI bot.  There’s no transparency, and they won’t be told why.  One reason might be that they’re screened out for being “old”, amongst many other possible, but non-transparent, reasons.   https://www.wired.com/story/ageism-haunts-tech-workers-layoffs-race-to-get-hired/  And indeed, in the US, a lawsuit on precisely this topic has been launched, accusing Workday of using AI tools to discriminate against older job applicants:  https://www.hklaw.com/en/insights/publications/2025/05/federal-court-allows-collective-action-lawsuit-over-alleged  


AI in recruitment is pretty simple.  Ask the AI to study the characteristics of “successful” employees, then tell it:  go find me more like them.  So, if the data about “successful” employees skews heavily to the 25-39 age range, the AI will look for more of the same and auto-reject the rest.  That’s how AI will reinforce discrimination in our societies.  Ageism is not unique:  sexism, homophobia and racism (against some races, but not others) will all be reinforced by AI, simply in pursuit of advancing the types of people who match its training model of “success”.
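
Here’s how simple that feedback loop is in code.  A toy sketch of “find me more like our successful employees”, with made-up numbers and a made-up similarity rule, not any vendor’s actual model:

```python
# A toy sketch (mine, not any vendor's) of "find me more like our successful
# employees". Training data that skews 25-39 makes age a learned reject signal,
# even though age was never an explicit criterion.
from statistics import mean

past_successful_hires = [  # (age, years_experience) of historical "successes"
    (27, 4), (31, 7), (29, 5), (35, 10), (26, 3), (38, 12),
]

mean_age = mean(a for a, _ in past_successful_hires)   # 31.0
mean_exp = mean(e for _, e in past_successful_hires)   # ~6.8

def screen(age: int, exp: int, tolerance: float = 10.0) -> str:
    # "Similarity" = distance from the historical average profile.
    distance = abs(age - mean_age) + abs(exp - mean_exp)
    return "advance" if distance <= tolerance else "auto-reject"

print(screen(30, 6))    # advance: fits the historical mold
print(screen(55, 25))   # auto-reject: strong candidate, wrong demographics
```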


Europe’s privacy law (GDPR Article 22) has a very clear provision dealing with machines that make automated decisions: 


  • The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

I’m not aware of any serious regulatory attempt to enforce GDPR Article 22 against companies using AI to screen job applicants.  But they should.  If companies are using machines to decide whether to hire or auto-reject an applicant, then those decisions “significantly affect him or her” under the law.    


AI’s replacement of human workers has already begun to accelerate.  Companies are already replacing entry-level functions with AI agents, making it hard for young workers to reach the first rung of the employment ladder.  Older workers have long been pushed out of Silicon Valley.  The trend is just starting, but even today, hardly a day goes by without some big company announcing plans to slash human jobs and hire machines.  https://seekingalpha.com/news/4508911-amazon-plans-to-cut-up-to-30000-jobs---reuters


You may despair of getting or keeping a job in this new environment.  But you don’t have to accept, in Europe at least, that a machine will make an automated decision not to hire you, in violation of existing law.  Regulators often don’t act until complaints are filed.  If your CV is being auto-rejected by a machine, you can file a complaint with your local data protection authority, which might prompt them to intervene.


AI is a tool for automated decision making:  that’s the whole point of it.  I cited one example from the world of job applications, but there are thousands of other examples, today and soon.  It’s high time to start applying and enforcing the laws against automated decision making.  The biggest imminent disruption in human history is around the corner, and so far, I can confidently state that privacy regulators have had very close to zero impact on how AI is being developed and used.  The Italian Garante is one of the few to have tried, and I applaud them for their leadership.  The others seem content with conducting blah blah public consultations.  If we want AI to respect human values and the laws, we need to speed up, urgently, because the machines won’t slow down.  Enforcing the laws on automated decision making would be a good place to start.


Thursday, October 23, 2025

The Age of Discovery, seen from Seville

I enjoyed a week in enchanting, intriguing Seville.  The photo is the tomb of Christopher Columbus in the Seville Cathedral.  In the Age of Discovery, Seville held the Spanish monopoly on shipping to and from the New World.  Historic Seville can teach us a lot about our own AI-driven age of discovery.  The two eras have a lot in common, driven by science, greed and missionary zeal.

Europe won that technological race:  compared to the indigenous populations of the “New World”, it had superior sailing, navigation and mapping tech, superior military tech, deep capital markets to fund the expeditions, and a belief in its own cultural and religious superiority.  That’s a good description of the people leading the AI race today.


Europeans in the Age of Discovery expanded human knowledge and science dramatically, and AI will do the same now.  But even though some actors were driven by science and a pure search for knowledge, most were driven by greed.  Leaders in the field of AI are now accumulating vast (perhaps bubble) riches, just as the riches of the New World poured into Seville in the Age of Discovery.  As a tourist in Seville, you can still visit the architectural gems financed by the plunder of the New World’s indigenous populations.  Then as now, some people got very rich, and most people got much poorer.  The Spanish royal house got rich; the indigenous populations were plundered.  The tech bros of today have gotten obscenely rich; the legions of workers about to be replaced by AI agents will get poor.


The biggest losers of the Age of Discovery were the indigenous populations, 90% of whom were wiped out by European-introduced diseases within a century of contact with the colonizers.  AI will probably do the same to us:  a consensus is forming that superintelligence (whenever that happens) will eventually, similarly, cull or eliminate Homo sapiens.  Lots of leaders are calling for a (temporary) ban on developing superintelligence until our species can figure out how to build it safely.  My former colleague Geoffrey Hinton, a Nobel Prize-winning AI genius, is amongst them.


History tends to repeat itself.  As we enter our own new AI-driven age of discovery, ask yourself whether you and your society will be winners or losers.  A lot of people today think they’ll be winners:  tech bros (obviously), governments and businesses looking for new tech-driven growth and profits, scientists, entire countries like China or the US that are currently leading the race.  But lots of people will be losers:  in particular, those facing the looming job destruction and unemployment, which leads to social disruption, which in turn historically tends to lead to revolutions.  Which do you think you’ll be, winner or loser?  Even if AI doesn’t destroy humanity, yet, it may well destroy democracy.  It will destroy privacy too (I’ll blog about that separately).


Privacy is anchored in the idea of the dignity of the individual human being.  There wasn’t much dignity in being an indigenous person dying of smallpox during the Age of Discovery, or an African victim of the trade routes that evolved into the slave trade.  Can we do better today?  Machines don’t believe in privacy:  they consume data to output data to accomplish a task.  The rise of AI is the challenge of our age.  You might ask where to start:  how about stopping private companies from plundering other people’s intellectual property and personal data to train their AI models, the way the Spanish conquistadors plundered the wealth of the indigenous populations?


Lots of us need to step up to confront this challenge.  Or we can leave it in the hands of the tech bros and gullible politicians and impotent regulators, who are welcoming AI like Montezuma welcoming the Spanish.  


Wednesday, October 1, 2025

The world’s largest surveillance system…hiding in plain sight

The world’s largest surveillance system is watching you. 


It’s capturing (almost) everything you do on the web on (almost) every website.  And it’s hiding in plain sight. 


And it’s “legal” because it claims that you know about it, and that you consented to it.

But do you know what it is?  

Do you know what “analytics” is?  Websites use analytics services to gain insights into how their users interact with their sites.  Every website wants to know that, and analytics providers can give them that information.  For example, an analytics provider can give a website detailed statistical reports about its users and how they interact with the site:  how many people visited, where they came from, what they viewed or clicked on, how they navigated the site, when they left and returned, and many, many other characteristics.  This data can be collected and collated over years, across thousands or millions of users.
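
To make the mechanics concrete, here’s a minimal sketch of how raw, user-level events get rolled up into the statistical reports a website sees.  This is my own illustration of the general idea, not any analytics vendor’s actual code:

```python
# Minimal sketch of analytics aggregation: individual click-level events go in,
# and the website only ever sees the rolled-up statistics.
from collections import Counter

events = [  # one record per user interaction, as captured by the tracking code
    {"user": "u1", "page": "/home", "referrer": "search"},
    {"user": "u1", "page": "/pricing", "referrer": "internal"},
    {"user": "u2", "page": "/home", "referrer": "social"},
    {"user": "u3", "page": "/home", "referrer": "search"},
]

report = {
    "unique_visitors": len({e["user"] for e in events}),
    "views_per_page": Counter(e["page"] for e in events),
    "traffic_sources": Counter(e["referrer"] for e in events),
}
print(report)
# The site sees these aggregates; the analytics provider saw every raw event.
```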

There are many providers of analytics services, but according to analysts, there is only one 800-pound gorilla:  Google Analytics.

“Google Analytics has market share of 89.31% in analytics market. Google Analytics competes with 315 competitor tools in analytics category.

The top alternatives for Google Analytics analytics tool are Tableau Software with 1.17%, Vidyard with 0.78%, Mixpanel with 0.59% market share.”

And according to other third-party analysts:  “As of 2025, Google Analytics is used by 55.49% of all websites globally. This translates to approximately 37.9 million websites using Google Analytics.”

You get the point:  one company, one service is capturing the bulk of the web traffic on the planet.  Websites get statistical reports on the user interactions on their sites.  Google gets individual-level information on the actions of most everyone on the web, on most websites, click-by-click, globally.  Wow. 

Legally, a website that uses Google Analytics is contractually obligated to obtain "consent" from its visitors to apply Google Analytics.  But often the disclosure on those websites is cursory, or even incomprehensible:  “we use analytics”, or “we use analytics software for statistical purposes”...which sounds harmless, but hardly explains to the average user what’s happening.  Technically, it’s simple, but invisible:  a site using Google Analytics incorporates a small piece of code that auto-transfers to Google, in real time, information about every interaction its users have with the site:  every visit, every click, plus information about each of those visitors, on an identifiable basis.
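
Conceptually, that embedded code behaves like a beacon:  every interaction fires a request to the analytics provider carrying the details of that interaction.  The sketch below is my own illustration of that general mechanism; the endpoint and parameter names are hypothetical, not Google’s actual protocol:

```python
# A conceptual sketch of what an analytics beacon transmits on every page view.
# This illustrates the general mechanism only; the endpoint and parameter
# names are hypothetical, NOT Google's actual protocol.
import time
import urllib.parse

def build_beacon(collector: str, site_id: str, client_id: str,
                 page: str, referrer: str) -> str:
    """The snippet on the page fires one of these, in real time, per interaction."""
    params = {
        "site": site_id,         # which client website sent this
        "cid": client_id,        # pseudonymous ID stored in a first-party cookie
        "page": page,            # what you are looking at right now
        "ref": referrer,         # where you came from
        "ts": int(time.time()),  # when
    }
    return f"{collector}?{urllib.parse.urlencode(params)}"

print(build_beacon(
    collector="https://collector.example.com/hit",  # hypothetical endpoint
    site_id="SITE-12345",
    client_id="1712598452.1699999999",
    page="/articles/how-to-garden",
    referrer="https://search.example.com/",
))
```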

In fairness, Google Analytics has some privacy protections.  Its reports to its client websites are statistical, rather than reports on individual users.  But even if the websites don’t get information about users at an individually-identifiable level, Google does….  And Google does not do cross-site correlation for Analytics purposes, i.e., it does not profile users across sites.  (Note:  Google does exactly this cross-site correlation in the context of its Ads businesses, but that’s a different topic from this blog.)

All this is “legal” if it’s based on consent.  A phrase disclosed in a privacy policy or a cookie notice, which you’ve no doubt seen, and maybe clicked on, is deemed to constitute “consent”.  But really, did you or the average user have a clue?

I’m of the school that believes analytics tools represent a relatively low privacy risk to individual users.  But what do you think of one company getting real-time information about how most of humanity engages with websites, on a planetary level?  A user goes to any random site, and their data auto-transfers to Google:  did they know?  Since the scale of this service vastly exceeds any other service on the web, the scale of this data collection is the largest on the web.  Please respond with a comment if you can think of anything of similar surveillance scale.  I know you can’t, but let’s engage in the thought experiment.  I’m not picking on Google (I love my former employer), but in this field, which is essential to privacy, it’s the 800-pound gorilla, surrounded by a few mice.

And the photo, if you’re interested, is Chartres Cathedral, built in the era when we believed only God was all-knowing.  

Wednesday, September 24, 2025

The Irish Backdoor

It’s not a gay bar in Dublin, the Irish Backdoor, sorry if that’s why you clicked on this blog.  It’s how non-EU companies, like tech companies from the US and China, use the “one stop shop” mechanism to evade the privacy regulators of 26 countries and be regulated instead by the Irish regulator, the gentle golden retriever of privacy enforcement.

I am expanding on my blogpost below.  But now I’m revealing something new:  most of the non-EU companies, like tech companies from the US and China, have no legal right to claim to be regulated under the one stop shop.  Fiction or fraud?  Let me explain.


Legally, a non-EU company can only claim the benefits of the one stop shop in a given EU country if its decisions regarding data processing in Europe are actually made there.


Let me suggest a reality test.  Most companies from outside the EU claim the benefit of the one stop shop in Ireland by doing the following:  1) creating a corporate entity in Ireland, 2) writing a privacy policy (or asking ChatGPT to write one) that tells users that the Irish corporate entity is the “controller” of their data in Europe, and 3) maintaining some minimal presence in Ireland, like appointing some employee as a “data protection officer” for the entity.  All this can be done in a day, and with a tiny local Irish staff.  But does this meet the legal test, namely that the data processing operations in Europe are actually decided by this Irish entity?


Most tech companies build products at home, in Silicon Valley, China, etc.  They then roll out these products globally.  Usually these products are identical worldwide, except for language interface translations.  In those cases, does anyone really believe that their Irish subsidiaries are the decision-makers for how data will be processed for their millions of European users?  Perhaps that is the case for a few large non-EU companies with large operations in Ireland.  For all the others, it’s hard to believe.


Maybe it’s an innocent fiction for a company from China or the US to claim it is “established” in Ireland to evade the privacy laws of 26 EU countries with millions of users.  Or maybe it’s a fraud…?


(Final note:  as a former employee of Google, I must point out that nothing in this blogpost is meant to suggest anything regarding that particular company.  Google has a huge workforce in Ireland.)


Meanwhile, non-EU companies are getting an easy ride in Europe, while their EU competitors aren’t.  I just don’t think that’s fair to EU companies or to EU users.


Monday, September 22, 2025

Why does every US and Chinese company want to go to Ireland?

Ireland is one of the biggest winners of the EU-27 construct.  It has established itself as a tax and regulatory haven for foreign (non-EU) companies.  Virtually all Chinese and American companies, in particular in tech, rush to “establish” themselves in Ireland.  In exchange, they get to pay a low corporate tax rate (even though their users are, and their money is made, in the other 26 EU countries), and they get to benefit from the light-touch privacy regulation of Ireland.

You’ll recall that Europe’s tough (on paper) General Data Protection Regulation of 2018 created the concept of a one-stop shop for foreign companies:  any Chinese or American company could pick one EU country as its “establishment”.  Of course, they all picked Ireland, given its universal reputation for light-touch tax and regulation.  Why and how Europe made this blunder is a different debate entirely:  in effect, it handed a massive advantage to foreign companies over domestic European ones.  A French/Italian/Spanish company is regulated by its domestic French/Italian/Spanish regulator, which takes privacy seriously and will sanction non-compliance.  But a Chinese or American tech company can do business in all those countries while benefiting from the Irish regulatory culture, as gentle as an Irish mist.


Occasionally, a European regulator would try to take on an American or Chinese company in the field of privacy.  https://www.cnil.fr/en/cookies-placed-without-consent-shein-fined-150-million-euros-cnil

But this action wasn’t based on the core European privacy law, the GDPR, but on the older ePrivacy rules governing cookies, which fall outside the GDPR’s one-stop-shop mechanism.


The Trump administration has defended American companies in Europe against what it claims are discriminatory regulatory actions.  https://www.lemonde.fr/en/international/article/2025/09/06/eu-commission-reluctantly-fines-google-nearly-3-billion-despite-trump-threat_6745092_4.html#  It was therefore not a surprise to see the French regulator announce fines at the same time against one American and one Chinese company.  But it is surprising to see the Trump administration rushing to defend one of the most Democratic-leaning companies in the US.


Indeed, Europe does discriminate, in the field of privacy, in favor of non-EU Chinese and American companies, thanks to the one-stop-shop Irish backdoor.  One can only assume dysfunctional European politics led to this absurd result, from a European perspective.  Hundreds of millions of Europeans depend on a small Irish privacy regulator to ensure that the gigantic American and Chinese tech companies respect European privacy laws.  Hilarious.


All of this might seem like trivial corporate politics, but the consensus is growing that humanity is allowing the tech industry to put us (I mean, our entire Homo sapiens species) on a path to doom.  https://www.theguardian.com/books/2025/sep/22/if-anyone-builds-it-everyone-dies-review-how-ai-could-kill-us-all  Even if we’re doomed, can we at least put up a fight?