Urgent need for responsible innovation
AI race
Marcel O'Gorman
The recent frenzy over language processing tools such as ChatGPT has sent organisations scrambling to provide guidelines for responsible usage.
The online publishing platform Medium, for example, has released a statement on AI-generated writing that promotes “transparency” and “disclosure.”
My own institution has established an FAQ page about generative AI that calls on educators to make “wise and ethical use” of AI and chatbots.
These ethical measures seem quaint, given this week’s release of the more powerful GPT-4, which runs the risk of becoming a disinformation and propaganda machine. OpenAI claims GPT-4 passed a simulated bar exam with a score in the top 10% of test takers, whereas GPT-3.5 scored in the bottom 10%.
Unchecked innovation
ChatGPT is powered by a supercomputer and a powerful cloud computing platform, both of which were funded and built by Microsoft. This Microsoft-OpenAI partnership will accelerate the global spread of generative AI products through Microsoft’s Azure platform.
Perhaps coincidentally, GPT-4 was released less than two months after Microsoft laid off its ethics and society team. Frustrated team members said the decision was based on pressure from Microsoft’s C-suite, which stressed the need to move AI products “into customers’ hands at a very high speed.”
The once-reviled Silicon Valley motto of “move fast and break things” may be back in fashion.
For now, Microsoft still has its Office of Responsible AI. But it seems appropriate to ask what responsible innovation means as this high-speed, high-profit game of unchecked innovation rages on.
Responsible innovation
When I asked ChatGPT what responsible innovation is, it wrote: “The process of developing and implementing new technologies, processes, or products in a way that addresses ethical, social and environmental concerns. It involves taking into account the potential impacts and risks of innovation on various stakeholders, including customers, employees, communities, and the environment.”
ChatGPT’s definition is accurate, but bereft of context. Whose ideas are these and how are they being implemented? Put otherwise, who is responsible for responsible innovation?
Over the past decade, a number of companies, think tanks and institutions have developed responsible innovation initiatives to forecast and mitigate the negative consequences of tech development.
Google founded a responsible innovation team in 2018 to leverage “experts in ethics, human rights, user research, and racial justice.” The most notable output of this team has been Google’s responsible AI principles. But the company’s ethical profile beyond this is questionable.
Google’s work with the U.S. military and its poor treatment of two ethics-minded ex-employees, Timnit Gebru and Margaret Mitchell, raise concerns about Google’s capacity for self-policing.
These lingering issues, along with the recent antitrust lawsuit filed against the company, demonstrate that a focus on responsible AI is not enough to keep large tech companies from being “evil.”
In fact, Google’s greatest contribution to responsible innovation has come from the grassroots efforts of its own employees. This suggests responsible innovation may need to grow from the bottom up. But this is a tall order in an era of massive tech industry layoffs.
Ethics-washing
The Association for Computing Machinery’s Code of Ethics and Professional Conduct states that tech professionals have a responsibility to uphold the public good as they innovate.
But without support from their superiors, guidance from ethics experts and regulation from government agencies, what motivates tech professionals to be “good”? Can tech companies be trusted to self-audit?
Another issue related to self-auditing is ethics-washing, where companies only pay lip service to ethics. Meta’s responsible innovation efforts are a good case study of this.
In June 2021, Meta’s top product design executive praised the responsible innovation team she helped launch in 2018, touting Meta’s “commitment to making the most ethically responsible decisions possible, every day.” By September 2022, her team had been disbanded.
Today, responsible innovation is used as a marketing slogan in the Meta store. Meta’s Responsible AI team was also dissolved in 2021 and folded into Meta’s Social Impact group, which helps non-profits leverage Meta products.
This shift from responsible innovation to social innovation is an ethics-washing tactic that obfuscates unethical behaviour by changing the subject to philanthropy. For this reason, it’s essential to distinguish “tech for good” in the sense of responsibly designed technology from “tech for good” as a now-common philanthropic PR slogan.
Responsible innovation vs. profit
Unsurprisingly, the most sophisticated calls for responsible innovation have come from outside corporate culture.
The principles outlined in a white paper from the Information and Communications Technology Council (ICTC), a Canadian non-profit, speak to values such as self-awareness, fairness and justice — concepts more familiar to philosophers and ethicists than to CEOs and founders.
The ICTC’s principles call for tech developers to go beyond the mitigation of negative consequences and work to reverse social power imbalances.
One might ask how these principles apply to recent developments in generative AI. When OpenAI claims to be “developing technologies that empower everyone,” who is included in “everyone”? And in what context will this “power” be wielded?
These questions reflect the work of scholars such as Ruha Benjamin and Armond Towns, who are suspicious of the term “everyone” in these contexts and who question the very identity of the “human” in human-centred technology.
Such considerations would slow down the AI race, but that might not be such a terrible outcome.
Value tensions
There is a persistent tension between financial valuation and moral values in the tech industry. Responsible innovation initiatives were established to massage these tensions, but such efforts have recently been swept aside.
The tension is palpable in the response of conservative US pundits to the recent Silicon Valley Bank failure. Several Republican stalwarts, including Donald Trump, have wrongly blamed the turmoil on the bank’s “woke outlook” and its commitment to responsible investing and equity initiatives.
In the words of Home Depot co-founder Bernie Marcus, “these banks are badly run because everybody is focused on diversity and all of the woke issues,” rather than what Trump calls “common sense business practices.”
The future of responsible innovation may depend on how so-called “common sense business practices” can be influenced by so-called “woke” issues like ethical, social and environmental concerns. If ethics can be washed away by dismissing them as “woke,” the future of responsible innovation is about as promising as that of the CD-ROM.
* Marcel O'Gorman is a professor of English language and literature and the university research chair and founding director of Critical Media Lab at the University of Waterloo.