When Artificial Intelligence Gets It Wrong: Protecting the Public in an Age of Algorithms
The holiday season brings celebration, generosity, and community. It also brings an increase in online scams, identity theft, data breaches, and financial fraud. While families focus on gift-giving and travel, criminals often take advantage of the digital world’s vulnerabilities. At the same time, public institutions and private companies are rapidly adopting artificial intelligence to automate decisions, manage data, and deliver services.
Most people do not realize how often AI is already influencing their daily lives. It screens resumes, flags benefit applications, evaluates risk, interprets video footage, and shapes how organizations interact with the public. When AI is well governed, it can strengthen service delivery. When it is not, the consequences are serious.
This makes 2026 a defining year for understanding the risks, responsibilities, and public rights associated with AI.
AI Can Be Biased, and Biased AI Can Harm the Public
AI systems learn from data created by people. When that underlying data contains racial, socioeconomic, or historical biases, the resulting model tends to reproduce and even amplify those patterns rather than correct them. Research has repeatedly demonstrated this point.
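To make that concrete, the short sketch below shows the kind of check an auditor might run: comparing a model’s false-positive rates across demographic groups. The records, group names, and numbers are hypothetical, invented purely for illustration, not drawn from any real system.

```python
# Minimal, illustrative sketch of a bias audit: compare a model's
# false-positive rates across demographic groups. All records and
# numbers here are hypothetical.

from collections import defaultdict

# Hypothetical audit records: (group, model_prediction, ground_truth)
records = [
    ("group_a", 1, 0), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

false_positives = defaultdict(int)  # flagged, but actually negative
negatives = defaultdict(int)        # all truly negative cases

for group, predicted, actual in records:
    if actual == 0:
        negatives[group] += 1
        if predicted == 1:
            false_positives[group] += 1

for group in sorted(negatives):
    rate = false_positives[group] / negatives[group]
    print(f"{group}: false-positive rate = {rate:.0%}")

# If one group's rate is far higher than another's, the system is
# misidentifying members of that group disproportionately -- the
# pattern the facial recognition studies below documented.
```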
Use Case 1: Facial Recognition Misidentifying Black Citizens
Studies from the MIT Media Lab and the National Institute of Standards and Technology found that commercial facial analysis and recognition systems have significantly higher error rates for people with darker skin, including Black and Brown individuals, than for white individuals (Buolamwini & Gebru, 2018; Grother et al., 2019). These errors have contributed to several wrongful arrests, including documented cases in Detroit and New Jersey.
This is not simply a technology failure. It is a failure of public governance and oversight. When algorithms incorrectly identify citizens, the harm is personal, reputational, and deeply unjust.
Use Case 2: Automated Systems Wrongly Flagging Public Benefits as Fraud
Michigan’s automated fraud detection system, MiDAS, falsely accused tens of thousands of residents of unemployment insurance fraud between 2013 and 2015, leading to widespread hardship, debt, and emotional trauma (Michigan Auditor General, 2021). Similar issues have been documented in disability benefits, SNAP administration, and nonprofit service screening.
These errors show that automated decisions can carry real human consequences. The harm falls disproportionately on vulnerable populations who rely on public services.
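One widely recommended safeguard is a human-in-the-loop gate: the model may flag a claim, but only a person can impose a penalty. The sketch below illustrates the idea; the threshold, field names, and scores are hypothetical assumptions, not drawn from any real benefits system.

```python
# Illustrative sketch of a human-in-the-loop safeguard: an automated
# fraud score never triggers a penalty by itself; uncertain or
# high-stakes cases go to a human caseworker. The threshold and
# fields are hypothetical.

from dataclasses import dataclass

@dataclass
class Claim:
    claimant_id: str
    fraud_score: float  # 0.0 (benign) to 1.0 (suspicious), from a model

REVIEW_THRESHOLD = 0.5  # assumption: set by policy, not by the model

def triage(claim: Claim) -> str:
    """Decide what happens to a flagged claim. Note that no branch
    imposes a penalty automatically; humans make that call."""
    if claim.fraud_score >= REVIEW_THRESHOLD:
        return "route to human caseworker for review"
    return "pay claim normally"

for claim in [Claim("A-1001", 0.92), Claim("A-1002", 0.12)]:
    print(claim.claimant_id, "->", triage(claim))
```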
AI Can Support Ethical, Inclusive Public Service When Governed Correctly
AI is not inherently harmful. When public institutions ensure transparency, fairness, and human oversight, AI can:
reduce administrative delays
detect genuine fraud without punishing innocent people
improve access to services
support overburdened workforces
identify inequities that humans may overlook
Ethical AI requires intentional design, clear accountability, and inclusive representation in the data that trains these systems. Professional guidance from privacy and governance organizations, including the International Association of Privacy Professionals (IAPP), emphasizes the need for risk assessments, fairness reviews, and ongoing monitoring.
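As one concrete example of a fairness review, many practitioners start with the “four-fifths rule” drawn from U.S. employment guidelines: the approval rate for any group should be at least 80 percent of the rate for the most-favored group. The sketch below applies that check to hypothetical approval counts; the groups and figures are illustrative assumptions only.

```python
# Sketch of a simple fairness review using the "four-fifths rule":
# compare each group's approval (selection) rate to the highest
# group's rate. All counts below are hypothetical.

approvals = {"group_a": 90, "group_b": 60}    # applications approved
applicants = {"group_a": 100, "group_b": 100} # applications received

rates = {g: approvals[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "REVIEW: possible disparate impact"
    print(f"{group}: approval rate {rate:.0%}, impact ratio {ratio:.2f} -> {status}")
```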
AI becomes a public asset when leaders govern it responsibly.
Why Proprietary Data Matters
Organizations often underestimate the value and sensitivity of the data they hold. Proprietary data includes internal operational knowledge, performance trends, service histories, and community-specific patterns. This data provides a competitive advantage, but it also carries tremendous responsibility.
If organizations do not understand how their proprietary data shapes AI decisions, they risk:
discriminatory outcomes
inaccurate predictions
violations of public trust
exposure of sensitive information
compromised decision quality
For public institutions, proprietary data is not merely an asset. It is a form of stewardship.
The Rising Threat of Data Crimes
The holiday season often sees increased digital theft and identity fraud. Criminal groups exploit stolen data to commit financial crimes, impersonate individuals, and create synthetic identities. The Federal Trade Commission reports significant annual increases in identity theft, particularly during high-traffic digital shopping periods (Federal Trade Commission, 2024).
AI tools now make these crimes faster and more scalable. Fraudsters can generate convincing documents, mimic voices, and combine leaked data to bypass security systems. This reality makes the protection of personal data more urgent than ever.
PII: What It Is and Why It Matters
Personally identifiable information (PII) includes:
legal names
addresses
phone numbers
financial account data
Social Security numbers
biometric identifiers
photos and voice recordings
geolocation data
PII is extremely valuable to criminals and extremely sensitive for individuals. Once stolen, it can be sold, altered, or used for fraudulent activity for years.
What Citizens Can Do to Protect Themselves
Although organizations carry the primary responsibility for protecting data, citizens have rights and options to reduce their risk.
Know Your Rights
Many states now provide:
the right to know what data companies have about you
the right to request deletion
the right to correct inaccurate information
the right to opt out of data sales
the right to know whether your data trains AI systems
These rights empower citizens to hold institutions accountable.
Report Violations
If you suspect data misuse, privacy violations, or unauthorized data collection, contact:
your state attorney general
the Federal Trade Commission
your state consumer protection agency
Public reporting strengthens oversight.
Take Preventive Steps
freeze your credit at all three major bureaus
use multi-factor authentication
avoid sharing biometric data unnecessarily
limit what apps can access
use strong, unique passwords
monitor accounts during the holiday season
These steps reduce exposure without requiring major technical knowledge.
Why This Matters as We Enter 2026
AI will shape public life whether citizens are aware of it or not. Without proper safeguards, AI can deepen inequality and erode trust. With strong governance, it can strengthen institutions, lighten workloads, and improve service for millions.
As families celebrate, travel, shop, and reconnect, it is important to remain aware of digital vulnerabilities. Public institutions must govern AI responsibly. Organizations must protect proprietary and personal data. And citizens must use the rights available to them to stay informed and safe.
Public trust is fragile. Protecting it requires effort from leaders, institutions, and communities alike.
References
Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of Machine Learning Research, 81, 1–15.
Federal Trade Commission. (2024). Consumer Sentinel Network data book.
Grother, P., Ngan, M., & Hanaoka, K. (2019). Face recognition vendor test (FRVT) Part 3: Demographic effects (NISTIR 8280). National Institute of Standards and Technology.
Michigan Auditor General. (2021). Unemployment insurance agency: Office of the Auditor General report on fraud claim determinations. State of Michigan.