Tinder Scans, AI Court Errors, and Google’s Privacy Retreat
From dating apps to cloud servers, the past week has been a rollercoaster for the tech industry. Tinder is rolling out mandatory facial scans to curb fake profiles, Amazon has explained how a single glitch brought large parts of the internet to its knees, a major U.S. law firm is apologizing after AI-generated citations derailed a bankruptcy case, and Google has officially pulled the plug on its long-promised cookie replacement project. Together, these stories show a tech landscape caught between innovation and accountability, and shaped by the growing tension between convenience and privacy.
We Now Support PayPal and Stripe!
As a privacy-focused company, we know that many of our users value anonymity and control over their personal data. That commitment remains at the heart of everything we do. But we also understand that some customers prefer more traditional payment methods. So starting now, you also have the option to pay through PayPal or Stripe.
Tinder Rolls Out Mandatory Face Scans to "Fight Romance Scams and Fake Profiles"
Source
Tinder has introduced a new facial verification feature called Face Check, requiring all new users in the United States to confirm their identity before creating a profile. The system scans a user’s face and cross-references it against existing profiles to prevent duplicates and ensure authenticity. This marks the dating app’s most aggressive step yet in addressing the rise of bots, catfishing, and fraudulent accounts that have plagued the platform in recent years.
The decision comes amid growing concern over online romance scams, which have cost victims billions of dollars over the past decade. By implementing biometric verification, Tinder aims to restore user trust and create a safer digital dating environment. The company says it will expand the tool globally in the coming months, though privacy advocates are already raising questions about data security and the implications of storing sensitive biometric information.
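Tinder has not published Face Check's internals, but duplicate detection of this kind is typically done by converting each face image into a numeric embedding and comparing embeddings by similarity. The sketch below illustrates that general approach with made-up toy vectors; the `is_duplicate` function, the 0.9 threshold, and the embeddings are all hypothetical, not Tinder's actual system.

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_duplicate(new_face: list[float],
                 enrolled: list[list[float]],
                 threshold: float = 0.9) -> bool:
    """Flag a new profile if its face embedding is too close to any
    already-enrolled embedding (threshold is an illustrative value)."""
    return any(cosine_similarity(new_face, e) >= threshold for e in enrolled)

# Toy 4-dimensional embeddings; real systems use hundreds of dimensions
# produced by a face-recognition model.
enrolled = [[0.1, 0.9, 0.2, 0.4], [0.7, 0.1, 0.6, 0.3]]
print(is_duplicate([0.11, 0.88, 0.21, 0.39], enrolled))  # near the first -> True
print(is_duplicate([0.9, 0.05, 0.02, 0.1], enrolled))    # unlike both -> False
```

In practice the hard problems are elsewhere: choosing a model whose embeddings are robust to lighting and aging, and deciding where and for how long those biometric vectors are stored, which is exactly what privacy advocates are questioning.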
Major US Law Firm Apologizes After AI-Generated Errors Tarnish Bankruptcy Case
Source
Prominent law firm Gordon Rees Scully Mansukhani, which has offices in all 50 states, issued a public apology after one of its attorneys submitted a bankruptcy court filing riddled with fake and inaccurate legal citations generated by artificial intelligence. The firm told a U.S. bankruptcy judge in Alabama that it was “profoundly embarrassed” by the incident and accepted full responsibility, pledging to comply with any sanctions the court may impose. The filing, related to the Jackson Hospital & Clinic bankruptcy, prompted the judge to question the firm and attorney Cassie Preston, who initially denied using AI but later admitted she had known the technology was involved.
In response to the debacle, Gordon Rees agreed to pay over $55,000 in legal fees to other parties affected and announced new internal policies on AI usage, including mandatory training and stricter citation checks. The case highlights growing tension between legal ethics and generative AI, as courts increasingly penalize lawyers for unverified or fabricated AI-assisted filings. The firm’s misstep joins a string of high-profile incidents showing how reliance on AI without proper oversight can have costly professional and reputational consequences.
Google Pulls the Plug on Privacy Sandbox, Ending Its Cookie Replacement Dream
Source
Google has officially terminated its Privacy Sandbox project, once hailed as the company’s long-term solution to replace third-party cookies with a more privacy-friendly ad system. The initiative, launched in 2019, aimed to group users into anonymized “interest cohorts” processed locally on their devices, letting advertisers target audiences without tracking individuals. But after years of pushback from both advertisers and privacy advocates—and minimal adoption—Google confirmed the project’s end in a blog post, citing “low ecosystem adoption” and shifting industry priorities. The decision follows Google’s earlier retreat from its plan to phase out cookies entirely in Chrome, marking a major reversal in its privacy strategy.
Instead, Google will maintain its current third-party cookie settings in Chrome, giving users control over cookie blocking options while preserving existing ad systems. Although the Privacy Sandbox framework itself is being wound down, a few of its components, such as CHIPS (Cookies Having Independent Partitioned State), Federated Credential Management (FedCM), and Private State Tokens, will continue development. The failure underscores Google’s struggle to balance privacy, user trust, and advertising revenue, even as Chrome retains a dominant global market share. With AI-powered browsers emerging as potential competitors, Google’s next move in digital advertising and privacy control remains uncertain.
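CHIPS, one of the surviving components, is a small but concrete mechanism: a site opts in by adding the `Partitioned` attribute to a `Set-Cookie` response header, so the browser stores the cookie keyed to the top-level site the user is visiting rather than sharing it across sites. A minimal sketch of building such a header follows; the cookie name and value are made up, but the required attributes (`Secure`, and `Path=/` with no `Domain` for the `__Host-` prefix) match the CHIPS design.

```python
def chips_set_cookie(name: str, value: str) -> str:
    """Build a Set-Cookie header for a partitioned (CHIPS) cookie.

    CHIPS requires the Partitioned attribute alongside Secure; the
    __Host- prefix additionally requires Path=/ and no Domain attribute,
    which binds the cookie to the exact host that set it.
    """
    return (f"Set-Cookie: __Host-{name}={value}; "
            "Secure; Path=/; SameSite=None; Partitioned")

header = chips_set_cookie("session", "abc123")
print(header)
```

A browser that supports CHIPS will store this cookie separately for each top-level site that embeds the setter, which is why it survives the Privacy Sandbox wind-down: it is useful even in a world where unpartitioned third-party cookies remain available.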