EU law could allow US prosecutors to scan phones for abortion texts


The push to protect children online will soon run up against an equal and opposite political force: the criminalization of abortion. In a country where many states will soon treat fetuses like children, surveillance tools designed to protect children will be exploited to target abortion. And one of the greatest threats to reproductive freedom will come, unwittingly, from its ardent defenders within the European Union.

Last week the EU unveiled draft regulation that would effectively ban end-to-end encryption and force internet companies to search for abusive content. Regulators would not only require chat app makers to scan every message for child sexual abuse material (CSAM), a controversial practice companies like Meta already employ on Facebook Messenger, but would also require platforms to scan every sentence of every message for signs of illegal activity. Such rules would affect anyone using a chat app from a company that does business within the EU. Virtually all US users would be subject to these scans.

Regulators, companies and even staunch opponents of surveillance on both sides of the Atlantic have called CSAM a unique threat. And while many of us might aspire to a future in which algorithms magically detect harm to children, even the EU admits that detection would require “human oversight and scrutiny.” The EU ignores the technical reality of such scanning: if we allow a surveillance tool to target one set of content, it can easily target another. The very same technology can be trained to flag religious content, political messages or abortion information.

Earlier child protection technologies provide a cautionary tale. In 2000, the Children’s Internet Protection Act (CIPA) required schools and federally funded libraries to block content “harmful to children.” More than 20 years later, school districts from Texas to progressive Arlington, Virginia, have exploited this legislation to block the sites of Planned Parenthood and other abortion providers, along with a wide range of progressive, anti-racist and LGBTQ content. Congress never said that medically accurate information about abortion was “harmful material,” but that is what some states are claiming today, even with Roe still on the books.

In a post-Roe America, many states will not just treat abortion as child abuse but, in several states, likely as murder, prosecuted to the full extent of the law. European regulators and tech companies are unprepared for the next civil rights catastrophe. Whatever pro-choice values companies espouse, they will behave very differently in the face of an anti-choice court ruling and the threat of jail. An effective ban on end-to-end encryption would allow US courts to force Apple, Meta, Google and others to search for abortion-related content on their platforms, and to hold them in contempt if they refuse.

Even with abortion still constitutionally protected, police are already pursuing pregnant women with all the surveillance tools of modern life. As Cynthia Conti-Cook of the Ford Foundation and Kate Bertash of the Digital Defense Fund wrote in a Washington Post editorial last year, “The use of digital forensic tools to investigate pregnancy outcomes…presents an insidious threat to our fundamental freedoms.” Police have used search histories and text messages to charge pregnant women with murder following a stillbirth. This technique is not only invasive but also highly error-prone: medical questions are easily misinterpreted as evidence of criminal intent. For years we have seen digital payment and purchase records, even PayPal histories, used to arrest people for buying and selling abortifacients like mifepristone.

Pregnant women don’t just have to worry about which companies currently hold their data, but about whoever else those companies might sell it to. According to a 2019 lawsuit I helped file against data broker and news service Thomson Reuters, the company sells information on millions of Americans to police, private businesses and even U.S. Immigration and Customs Enforcement (ICE). Even some state regulators are sounding the alarm, like a recent “consumer alert” from New York State Attorney General Letitia James warning that period-tracking apps, text messages and other data can be used to target pregnant women.

We need to reevaluate every surveillance tool, public and private, with an eye to the pregnant people who will soon be targeted. For tech companies, this means revisiting what it means to promise privacy to their customers. Apple has long been lauded for the way it protects user data, especially when it went to federal court in 2016 to oppose government demands to hack into a suspect’s iPhone. Its uncompromising stance on privacy was all the more striking because the court order was part of a terrorism investigation.

But the company has been much less willing to take up the same fight when it comes to CSAM. Last summer, Apple proposed integrating CSAM monitoring into every iPhone and iPad, analyzing content on more than a billion devices. The Cupertino giant quickly backed down in the face of what the National Center for Missing and Exploited Children initially dismissed as “the screeching voices of the minority,” but it never completely abandoned the effort, recently announcing CSAM scanning for UK users. Apple is not alone: it joins companies like Meta, which not only actively scans the content of unencrypted messages on its Facebook platform but also circumvents its claims of “end-to-end encryption” to monitor messages on WhatsApp by accessing decrypted, user-reported copies. Google, too, integrates CSAM detection into several of its platforms, making hundreds of thousands of reports to authorities each year.
