“The more you automate away human tasks, the more you accrue return and earnings to the holders of capital. I could build a company right now, and I could say I’m not hiring anybody, and I’m just going to invest in machines and tools. That’s a problem.”
Ryan Carrier, Founder of ForHumanity
The AI era is here. Transformative shifts in the way humans work are already underway, and more disruption is coming. Ensuring that AI-powered tools and technologies are developed safely and ethically is one of the 21st century’s greatest challenges.
Without proper regulation, AI’s development will be guided predominantly by market forces. That leaves the public unprotected from critical risks like algorithmic bias, data privacy violations, and massive job disruption. The more AI is integrated into our society, the stronger the need for guardrails and oversight. Governmental guidelines, like the AI Bill of Rights, are a step in the right direction but lack strong enforcement mechanisms.
Protecting people from AI means mitigating downside risks while also encouraging innovation. Getting it right is essential. To learn more, read on.
Ryan Carrier is the founder of ForHumanity, a non-profit organization. He serves as its Executive Director and Chairman of the Board of Directors. In these roles, he is responsible for the day-to-day operations of ForHumanity and for overseeing the Independent Audit process.
Prior to founding ForHumanity, Carrier owned and operated Nautical Capital, a quantitative hedge fund that employed artificial intelligence algorithms.
Carrier founded ForHumanity after a 25-year career in finance. His global business experience, risk management expertise, and unique perspective on managing risk led him to launch the non-profit. He focused on independent audit of AI systems as one means of mitigating the risks associated with artificial intelligence, and began building the business model for a first-of-its-kind process for auditing corporate AI systems, using a global, open-source, crowd-sourced process to determine best practices.
ForHumanity’s mission is to examine and analyze the downside risks associated with AI, algorithmic, and autonomous systems, and then to engage in the maximum amount of risk mitigation. But the organization isn’t prescriptive about what types of AI should or should not exist.
“We want to accept technology where it is,” Carrier says. “We don’t recommend prohibitions of any kind. But we do think there are systems that are not robust. And we do think there are systems that are very risky.”
The territory of risk, when it comes to AI, is vast. Carrier breaks the risks down into five categories: ethics, bias, privacy, trust, and cybersecurity. Any of these can result in detrimental impacts on humans. As AI and other automated systems become more integrated with society, the risks grow in both their frequency of occurrence and the size of their impact.
Everyone is at risk, but the impacts might be unevenly distributed. Those who are already experiencing income inequality, job displacement, and/or discrimination are also the most likely to suffer disproportionately from the downside risks of AI, algorithmic, and autonomous systems.
At the same time, AI has the potential to reduce burdens on some of those same disadvantaged communities. AI-powered tools can improve accessibility for people with visual or hearing impairments, promote safer transportation with autonomous vehicles, expand access to educational services, and even encourage new forms of creative expression.
So how do you design effective risk mitigation while also encouraging positive innovation?
“The primary mechanism for us supporting the reduction of risk is what we call independent audit of AI systems,” Carrier says. “We’re trying to create an infrastructure of trust where third parties validate that the providers and deployers of these tools are doing so in a trustworthy, safe, ethical manner. Increasingly, we have laws and regulations that describe what that means. And what we do is we codify those laws into auditable rules so that third-party independent validators can come in and basically document and prove that a particular provider of a tool has or has not complied with the rules.”
AI development has been allowed to proceed relatively unchecked for much of the 21st century. Many developers have thus been incentivized to ask forgiveness rather than permission. To wit, large language models (LLMs) like ChatGPT have been trained on large swathes of information that include copyrighted material; even a multi-billion-dollar fine for such a practice might be seen by behemoths like Microsoft as just the cost of doing business.
“A lot of people want to be ethical,” Carrier says. “They want to be responsible. They want to produce safe tools. They recognize the risks and want to do the right thing—until they get busy, until they get distracted, until market share becomes more important than being safe. We have all these tensions and trade-offs in how we operate as individuals.”
The tricky part, according to Carrier, is taking laws, guidelines, best practices, and standards, and translating them into binary rules. Those binary rules are what would allow an auditor—a third-party validator—to look at an AI system and determine whether it has complied or not. Making sure the lines are drawn distinctly is a challenge, and it’s precisely what ForHumanity has set out to do.
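To make the idea concrete, here is a minimal, purely hypothetical sketch (in Python) of what encoding a guideline as a binary, auditable rule might look like. The rule IDs, evidence fields, and threshold below are invented for illustration and do not represent ForHumanity’s actual audit criteria or methodology.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

# A hypothetical "auditable rule": a plain-language requirement paired with
# a binary check that an independent auditor could run against submitted evidence.
@dataclass
class AuditRule:
    rule_id: str
    requirement: str                          # plain-language statement of the obligation
    check: Callable[[Dict[str, Any]], bool]   # True = compliant, False = non-compliant

# Example rules with invented criteria, for illustration only.
RULES: List[AuditRule] = [
    AuditRule(
        rule_id="DOC-01",
        requirement="A data protection impact assessment is on file.",
        check=lambda evidence: bool(evidence.get("dpia_on_file")),
    ),
    AuditRule(
        rule_id="BIAS-01",
        requirement="Disparate impact ratio across protected groups is at least 0.8.",
        check=lambda evidence: evidence.get("disparate_impact_ratio", 0.0) >= 0.8,
    ),
]

def run_audit(evidence: Dict[str, Any]) -> Dict[str, bool]:
    """Return a binary compliant/non-compliant verdict for every rule."""
    return {rule.rule_id: rule.check(evidence) for rule in RULES}

if __name__ == "__main__":
    submitted_evidence = {"dpia_on_file": True, "disparate_impact_ratio": 0.72}
    print(run_audit(submitted_evidence))  # e.g. {'DOC-01': True, 'BIAS-01': False}
```

The point of the sketch is the shape of the output: each requirement resolves to a clear yes or no, which is what allows a third-party validator to document compliance rather than offer an opinion.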
In October 2022, the White House Office of Science and Technology Policy (OSTP) introduced its Blueprint for an AI Bill of Rights. The document provided guidelines to protect the rights of individuals and promote ethical practices in the AI era. While the principles it offers are laudable, they lack enforcement mechanisms, and thus accountability.
The AI Bill of Rights is also not an all-encompassing set of guidelines—no such thing exists. But it aligns with a growing number of regulatory frameworks for defining the scope of AI, data privacy, and the need for consumer protections: the EU’s GDPR, the EU’s AI Act (first proposed in 2021), and California’s CCPA, for example. Increasing coordination between state and federal governments, and between international governments, will be essential. But the first step is making auditable rules that can be assessed and enforced.
“What sets apart what ForHumanity does is that we take those laws, guidelines, and regulation standards, and we craft them into auditable rules,” Carrier says. “It’s an art, not a science. But we embrace all sorts of good principles, best practices, and standards, and we try to get them to a place where they can have a binary level of assured compliance or non-compliance.”
There is room for citizens to advocate for the safe and ethical development of AI. This can range from building awareness to shifting consumer choices to volunteering time and effort to lobbying for more progressive policy. But at a macro level, the sociological and economic impacts of AI mean that questions of how to mitigate its downside risk can quickly become political.
“The more you automate away human tasks, the more you accrue return and earnings to the holders of capital,” Carrier says. “I could build a company right now, and I could say I’m not hiring anybody, and I’m just going to invest in machines and tools. That’s a problem.”
The efficiency of AI, algorithmic, and autonomous systems is likely only to increase, and so will their potential for disruption. That will mean significant changes in the way people work—or don’t. Even with structures in place to help displaced workers sustain themselves financially, there are still existential questions about what society looks like when automated systems can replicate practically every human task. The endgame can appear either utopian or frightening, depending on how the risks are managed.
“We are increasing the risks because we’re increasing the volatility in the range of outcomes by introducing these tools,” Carrier says. “I don’t think there’s anything we can do about it. I think it’s coming regardless. The question is: how do we handle it?”