Why Tech Companies Keep Selling “Safety” Instead of Solving Problems

“Safety” has become one of the most profitable words in modern technology. It appears in press releases, product demos, keynote speeches, and glossy trust centers. It shows up as badges, dashboards, “AI-powered protection,” and an endless stream of features promising to reduce harms such as fraud, abuse, misinformation, privacy violations, cyberattacks, and everything else that makes people feel uneasy online.

And yet the basic pattern does not change. The harms persist. Public frustration deepens. The solution set remains heavily tilted toward selling safety rather than building it in.

This is not because engineers are uniquely indifferent. It is because the incentives of the tech industry are structurally misaligned with what real safety requires. Safety is expensive. Safety is slow. Safety creates friction. Safety often reduces the engagement metrics that drive revenue. “Safety,” by contrast, is a brand asset that can be packaged, priced, and deployed precisely when reputational risk spikes.

To understand why safety features multiply while foundational problems remain, it helps to separate two very different concepts: safety as an outcome versus safety as a product.

Safety as a product is easy. Safety as an outcome is hard.

Safety as a product is legible. It is something you can list on a slide. We added new detection. We launched a reporting tool. We now use AI. It is measurable in the shallow way quarterly updates prefer: number of flagged items, number of accounts actioned, number of warnings issued.

Safety as an outcome is messier. It asks whether fewer people were harmed, whether systems are less gameable, whether vulnerable users are genuinely protected, whether scams become harder to run, and whether security defaults prevent entire classes of mistakes. It demands baseline measurements, error rates, external audits, and a willingness to admit failure.

That is why safety is so often marketed as capability rather than proven impact.

It is also why the most visible safety investments frequently land in the same place. They focus on moderation layers, detection layers, and compliance layers, all things that can be added without rewriting the underlying business model.

The business model often profits from the conditions that create harm

A large share of consumer tech revenue is built on attention and conversion. That means optimizing for speed, ease, and completion. Click the button. Keep scrolling. Stay in the funnel.

Many forms of real safety do the opposite. They add friction. They require clearer disclosures, slower flows, stronger verification, more warnings, fewer manipulative nudges, and fewer default-on permissions. They reduce conversion. They reduce engagement. They reduce the “magic” that product teams are paid to maximize.

This is why regulators keep finding the same thing: interfaces designed to appear user-friendly while steering users into decisions they did not fully intend.

The U.S. Federal Trade Commission has documented how design tricks, often called dark patterns, manipulate users into subscriptions, data sharing, or purchases, and how companies exploit behavioral vulnerabilities rather than respect informed choice. The issue is not merely consumer annoyance. Safety and autonomy become secondary to growth.

Enforcement often arrives only after the safety narrative collapses. In its action against Vonage, the cloud communications provider, the FTC alleged the company trapped customers with illegal dark patterns and junk fees during cancellation, ultimately requiring refunds and simpler cancellation flows. If safety were treated as a core design constraint, meaning clarity, consent, and easy exits, these outcomes would not require regulatory intervention.

“Safety theater” thrives because it is cheaper than redesign

Most safety rollouts share a defining feature. They can be layered on top of existing systems without changing what the system is optimized to do.

Sometimes this is triage. Often it is convenience. Either way, the result is safety theater: high visibility, confident language, and low willingness to confront root causes.

A useful test is whether a safety feature changes the incentives that produced the harm in the first place.

If a recommendation engine amplifies extreme content because it maximizes watch time, adding a reporting tool does not change the engine’s incentive structure. If a service profits from difficult cancellation flows, adding a help center does not change the revenue incentives behind friction. If products ship insecure by default because speed wins, publishing patch notes does not change the release culture that created the vulnerability.
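
To make that test concrete, here is a deliberately toy sketch in Python. Every field name in it is hypothetical and belongs to no real system; the point is only structural. A reporting tool lives outside the ranking objective, so the objective keeps amplifying the same content. Changing the incentive means changing the objective itself.

```python
# A toy ranking objective. Nothing in the score penalizes harm, so a
# reporting tool bolted on elsewhere never changes what gets amplified.
# All names are hypothetical illustrations.

def engagement_score(item):
    # Optimized purely for watch time.
    return item["predicted_watch_time"]

def realigned_score(item, harm_weight=5.0):
    # Changing the incentive: predicted harm enters the objective
    # itself rather than a side channel like a reporting queue.
    return item["predicted_watch_time"] - harm_weight * item["predicted_harm"]

items = [
    {"id": "calm",    "predicted_watch_time": 40, "predicted_harm": 0.0},
    {"id": "extreme", "predicted_watch_time": 55, "predicted_harm": 4.0},
]

print(max(items, key=engagement_score)["id"])  # -> extreme
print(max(items, key=realigned_score)["id"])   # -> calm
```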

Regulators and standards bodies increasingly point to the same conclusion. Safety must be built into design, not added later as reputation management.

Cybersecurity offers a preview of where safety is headed

For years, the industry’s implicit message was: buy more tools, train your users, and patch faster. This approach pushed responsibility downstream onto customers rather than treating insecure products as the root problem.

The U.S. Cybersecurity and Infrastructure Security Agency has pushed back on this model through its Secure by Design initiative, which frames product safety as a manufacturer responsibility rather than a user burden. Its Secure by Design Pledge asks technology providers to embed security into development and shipping practices by reducing default passwords, improving patching, adopting vulnerability disclosure programs, and treating secure defaults as a baseline requirement.
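
What “secure defaults as a baseline” can look like in code is worth spelling out. The following is a minimal sketch of a hypothetical server configuration; the class and field names are illustrative assumptions, not drawn from CISA’s materials. The idea is that the system refuses known-bad states instead of merely documenting them.

```python
# A sketch of "secure by default," under assumed requirements echoing
# the pledge's themes: no default passwords, encryption on unless
# explicitly and auditably disabled. Hypothetical, not CISA code.

from dataclasses import dataclass

@dataclass
class ServerConfig:
    admin_password: str           # no default value: setup fails without one
    tls_enabled: bool = True      # secure default; turning it off is the exception
    allow_insecure: bool = False  # explicit, auditable opt-out flag

    def __post_init__(self):
        if not self.admin_password or self.admin_password == "admin":
            raise ValueError("set a unique admin password; defaults are refused")
        if not self.tls_enabled and not self.allow_insecure:
            raise ValueError("disabling TLS requires allow_insecure=True")

# Fails fast instead of shipping a known-bad configuration:
# ServerConfig(admin_password="admin")  -> ValueError
config = ServerConfig(admin_password="long-unique-secret")
```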

This is what real safety looks like. Not better detection after harm occurs, but systems that are harder to compromise in the first place.

Notice what that implies. It requires changes in engineering practices, release timelines, internal metrics, and accountability. It requires companies to absorb costs they have historically externalized onto users.

Artificial intelligence is now facing the same fork in the road

AI vendors increasingly sell safety as a feature: guardrails, filters, detection, and policy enforcement. Meanwhile, core risks remain unresolved, including bias, misuse, hallucination, privacy leakage, unsafe automation, and systemic harms that cannot be solved by surface-level controls.

The National Institute of Standards and Technology’s AI Risk Management Framework treats AI safety as an organizational responsibility tied to governance, measurement, transparency, and real-world impact. It emphasizes context, documentation, accountability, and tradeoffs across a system’s lifecycle. This approach stands in direct contrast to safety as branding. It is safety as discipline, and discipline is slower, less marketable, and harder to fake.

Regulation is forcing safety to become more than marketing

Technology companies did not invent safety theater, but regulatory pressure is shrinking the space for cosmetic trust gestures.

The European Union’s Digital Services Act imposes due diligence obligations on online intermediaries, including requirements around transparency, risk assessment, and accountability measures, especially for large platforms. The importance of the DSA is not that it solves every problem. It is that it moves safety away from voluntary public relations and toward enforceable governance.

International policy work reflects a similar shift. The OECD’s guidance on digital security risk management frames safety as a shared responsibility requiring structured risk management rather than ad hoc features. The direction is clear. Regulators want safety outcomes, not safety slogans.

Measurement remains inconvenient

One of the most under-discussed reasons tech keeps selling safety instead of solving problems is that real safety demands metrics companies often hesitate to publish.

If detection is strong, error rates should be public. If scams are reduced, there should be measurable declines in successful fraud and user losses. If reporting systems work, response times and appeal accuracy should be transparent.
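
None of these numbers require new science. A minimal sketch, assuming a hypothetical moderation-log format with made-up field names, shows how little machinery separates a company from the error rates and response times it declines to publish.

```python
# A hedged sketch: computing publishable safety metrics from a
# hypothetical log of moderation decisions. Field names are assumptions.

from statistics import median

decisions = [
    # {"flagged": bool, "actually_harmful": bool, "hours_to_respond": float}
    {"flagged": True,  "actually_harmful": True,  "hours_to_respond": 2.0},
    {"flagged": True,  "actually_harmful": False, "hours_to_respond": 30.0},
    {"flagged": False, "actually_harmful": True,  "hours_to_respond": 72.0},
]

flagged = [d for d in decisions if d["flagged"]]
harmful = [d for d in decisions if d["actually_harmful"]]

precision = sum(d["actually_harmful"] for d in flagged) / len(flagged)
recall = sum(d["flagged"] for d in harmful) / len(harmful)
median_response = median(d["hours_to_respond"] for d in decisions)

print(f"precision={precision:.2f} recall={recall:.2f} "
      f"median_response_hours={median_response}")
```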

Transparency invites scrutiny, and scrutiny can reveal that safety sometimes functions as cover for business decisions.

Design-related harms are receiving growing attention because the damage often lies not in content but in the architecture of choice. The UK Competition and Markets Authority has documented how online choice architecture harms consumers through nudges and pressure tactics that undermine genuine consent.

When business models benefit from manipulation, safety becomes a story companies tell rather than a reality they prove.

What solving the problem would actually look like

If tech companies were serious about safety as an outcome, secure-by-default configurations would replace user blame. Transparent measurement would replace vague claims. Independent auditing and governance would become standard practice, especially for large platforms and AI systems. User autonomy would become a design constraint rather than an obstacle. Fewer bolt-on tools would be launched, and more incentives would be changed at the core.

The uncomfortable conclusion

Tech companies keep selling safety because it fits existing incentive structures. It is marketable, modular, and often compatible with growth-first design. Solving problems is different. It requires changing defaults, slowing shipping, rejecting manipulative design, and accepting accountability even when it costs revenue.

Safety as marketing is easy to scale. Safety as reality is harder, slower, and more expensive. The gap between the two is where public trust erodes.

The path forward is not another dashboard. It is a different definition of success that prioritizes outcomes over optics, secure defaults over user blame, and governance over slogans.

About Greg Collier:

Greg Collier is a seasoned entrepreneur and advocate for online safety and civil liberties. He is the founder and CEO of Geebo, an American online classifieds platform established in 1999 that became known for its proactive moderation, fraud prevention, and industry leadership on responsible marketplace practices.
