Call to Action: Protecting People from AI

“The more you automate away human tasks, the more you accrue return and earnings to the holders of capital. I could build a company right now, and I could say I’m not hiring anybody, and I’m just going to invest in machines and tools. That’s a problem.”
Ryan Carrier, Founder of ForHumanity

The AI era is here. Transformative shifts in the way humans work are already underway, and more disruption is coming. Ensuring that AI-powered tools and technologies are developed safely and ethically is one of the 21st century’s greatest challenges.

Without proper regulation, AI’s development will be guided predominantly by market forces. That leaves the public unprotected from critical risks like algorithmic bias, data privacy violations, and massive job disruption. The more AI is integrated into our society, the stronger the need for guardrails and oversight. Governmental guidelines, like the AI Bill of Rights, are a step in the right direction but lack strong enforcement mechanisms.

Protecting people from AI means mitigating downside risks while also encouraging innovation. Getting it right is essential. To learn more, read on.

Meet the Expert: Ryan Carrier

Defining and Analyzing AI Risks

ForHumanity’s mission is to examine and analyze the downside risks associated with AI, algorithmic, and autonomous systems, and then to engage in the maximum amount of risk mitigation. But the organization isn’t prescriptive about which types of AI should or should not exist.

“We want to accept technology where it is,” Carrier says. “We don’t recommend prohibitions of any kind. But we do think there are systems that are not robust. And we do think there are systems that are very risky.”

The territory of risk, when it comes to AI, is vast. Carrier breaks the risks down into five categories: ethics, bias, privacy, trust, and cybersecurity. Any of these can result in detrimental impacts on humans. As AI and other automated systems become more integrated into society, the risks grow in both their frequency of occurrence and their size of impact.

Everyone is at risk, but the impacts might be unevenly distributed. Those who are already experiencing income inequality, job displacement, and/or discrimination are also the most likely to suffer disproportionately from the downside risks of AI, algorithmic, and autonomous systems.

At the same time, AI has the potential to help reduce the burden on some of those same disadvantaged communities. AI-powered tools can help boost sight and sound accessibility, promote safe transportation with autonomous vehicles, increase access to educational services, and even encourage new forms of creative expression.

So how do you design effective risk mitigation while also encouraging positive innovation?

Promoting Safe and Ethical AI

“The primary mechanism for us supporting the reduction of risk is what we call independent audit of AI systems,” Carrier says. “We’re trying to create an infrastructure of trust where third parties validate that the providers and deployers of these tools are doing so in a trustworthy, safe, ethical manner. Increasingly, we have laws and regulations that describe what that means. And what we do is we codify those laws into auditable rules so that third-party independent validators can come in and basically document and prove that a particular provider of a tool has or has not complied with the rules.”

AI development has proceeded relatively unchecked for much of the 21st century, and many developers have been incentivized to ask forgiveness rather than permission. Large language models (LLMs) like ChatGPT, for example, have been trained on large swaths of data that include copyrighted material; even a multibillion-dollar fine for such a practice might be seen by a behemoth like Microsoft as simply the cost of doing business.

“A lot of people want to be ethical,” Carrier says. “They want to be responsible. They want to produce safe tools. They recognize the risks and want to do the right thing—until they get busy, until they get distracted, until market share becomes more important than being safe. We have all these tensions and trade-offs in how we operate as individuals.”

The tricky part, according to Carrier, is taking laws, guidelines, best practices, and standards, and translating them into binary rules. Those binary rules are what would allow an auditor—a third-party validator—to look at an AI system and determine whether it has complied or not. Making sure the lines are drawn distinctly is a challenge, and it’s precisely what ForHumanity has set out to do.
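To make the idea of binary, auditable rules concrete, here is a minimal sketch in Python. It is not ForHumanity’s actual methodology or certification scheme; the rule IDs, descriptions, evidence fields, and thresholds below are all hypothetical illustrations of how a guideline might be reduced to checks that an auditor can mark as compliant or non-compliant.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AuditRule:
    """A single binary rule: the system either complies or it doesn't."""
    rule_id: str
    description: str
    check: Callable[[dict], bool]  # evaluates evidence gathered by the auditor

# Hypothetical rules, loosely inspired by common AI-governance guidelines.
RULES = [
    AuditRule(
        "DOC-01",
        "A designated individual is accountable for the system's outcomes.",
        lambda evidence: bool(evidence.get("accountable_owner")),
    ),
    AuditRule(
        "BIAS-01",
        "Disparate-impact testing was performed within the last 12 months.",
        lambda evidence: evidence.get("months_since_bias_test", 999) <= 12,
    ),
    AuditRule(
        "PRIV-01",
        "Users can request deletion of their personal data.",
        lambda evidence: evidence.get("supports_data_deletion", False),
    ),
]

def audit(evidence: dict) -> dict[str, bool]:
    """Return a binary compliant/non-compliant verdict for each rule."""
    return {rule.rule_id: rule.check(evidence) for rule in RULES}

if __name__ == "__main__":
    findings = audit({
        "accountable_owner": "Chief AI Officer",
        "months_since_bias_test": 8,
        "supports_data_deletion": False,
    })
    for rule_id, compliant in findings.items():
        print(f"{rule_id}: {'COMPLIANT' if compliant else 'NON-COMPLIANT'}")
```

The point of the binary framing is that every verdict is yes or no, with no room for “mostly compliant.” That is what lets a third-party validator document and defend a finding.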

Examining the AI Bill of Rights & Other AI Regulation

In October 2022, the White House Office of Science and Technology Policy (OSTP) introduced its Blueprint for an AI Bill of Rights. That document provided guidelines to protect the rights of individuals and promote ethical practices in the AI era. While the principles it offers are laudable, the document lacks enforcement mechanisms, and thus accountability.

The AI Bill of Rights is also not an all-encompassing set of guidelines—no such thing exists. But it aligns with a growing number of regulatory frameworks addressing the scope of AI, data privacy, and consumer protections: the EU’s GDPR, the EU’s AI Act (first proposed in 2021), and California’s CCPA, for example. Increasing coordination between state and federal governments and between international governments will be essential. But the first step is making auditable rules that can be assessed and enforced.

“What sets apart what ForHumanity does is that we take those laws, guidelines, and regulation standards, and we craft them into auditable rules,” Carrier says. “It’s an art, not a science. But we embrace all sorts of good principles, best practices, and standards, and we try to get them to a place where they can have a binary level of assured compliance or non-compliance.”

The Future of AI and AI Regulation

There is room for citizens to advocate for the safe and ethical development of AI. This can range from building awareness and shifting consumer choices to volunteering time and effort and lobbying for stronger policy. But at a macro level, the sociological and economic impacts of AI mean that questions of how to mitigate its downside risks can quickly become political.

“The more you automate away human tasks, the more you accrue return and earnings to the holders of capital,” Carrier says. “I could build a company right now, and I could say I’m not hiring anybody, and I’m just going to invest in machines and tools. That’s a problem.”

The efficiency of AI, algorithmic, and autonomous systems is likely only to increase, and so is their potential for disruption. That will bring significant changes in the way people work, or whether they work at all. Even with structures in place to help displaced workers sustain themselves financially, there remain existential questions about what society looks like when automated systems can replicate practically every human task. The endgame can appear either utopian or frightening, depending on how the risks are managed.

“We are increasing the risks because we’re increasing the volatility in the range of outcomes by introducing these tools,” Carrier says. “I don’t think there’s anything we can do about it. I think it’s coming regardless. The question is: how do we handle it?”

Matt Zbrog
Writer

Matt Zbrog is a writer and freelancer who has been living abroad since 2016. His nonfiction has been published by Euromaidan Press, Cirrus Gallery, and Our Thursday. Both his writing and his experience abroad are shaped by seeking out alternative lifestyles and counterculture movements, especially in developing nations. You can follow his travels through Eastern Europe and Central Asia on Instagram at @weirdviewmirror. He’s recently finished his second novel, and is in no hurry to publish it.
