Open MIC in the News
Contact us at media@openmic.org with any press inquiries.
Investors, particularly those with stakes in the big tech hyperscalers or limited partnerships in venture funds exposed to generative AI, should be skeptical of claims that oversell the benefits of this new technology, financial or otherwise. They should encourage the deployment of this technology toward more productive ends, and any actual benefits should be weighed against the very real costs for which tech companies should be held accountable.
US-based tech firms’ close ties with the Trump administration are unlikely to discourage continued pushback and engagement from ESG-focused investors in the longer term.
“Aligning with political leaders may seem expedient in the short term, but tech company heads may regret going all in if these leaders’ popularity declines or their policies are seen to be capricious and harmful to people and markets.” - Jessica Dheere, Open MIC Advocacy Director
“While Trump will undoubtedly have an impact short-term, we think ESG principles will ultimately prevail because they represent a smart way of doing business.” - Michael Connor, Open MIC Executive Director
With AI-related risks being inherently global in nature, the hope and anticipation is that there will be a suitably robust regulatory environment wherever the technology is deployed, Michael Connor, Executive Director at Open MIC, tells ESG Investor.
“Many companies have global activities, so we need to ensure that if they are applying a higher standard in Europe due to EU regulations, this should be the case wherever they operate to foster greater corporate accountability in the deployment and use of digital technologies,” he says.
As the Interfaith Center on Corporate Responsibility (ICCR), a coalition of faith-based and values-based investors, and Open MIC, a nonprofit focused on the responsible use of digital technologies, explain in their new report, Dehumanization, Discrimination and Deskilling: The Impact of Digital Tech on Low-Wage Workers, a critical element of algorithmic management systems is the monitoring and surveillance of workers in violation of their human rights.
“Generative AI makes up information. It creates information and misinformation that doesn’t exist. So, yeah, a wide range of experts are concerned about it, and the shareholder proposal at Alphabet, and one that we’ve raised at other companies, reflects that concern. One survey shows that, because of these concerns about generative AI, public trust in technology generally is eroding, and that’s one of the things that concerns us about the public franchise that these companies have.”
“These AI resolutions are just the beginning – this is not something that we see as a one-off,” said Dheere. “AI is here to stay and it is in the interest of civil society, investors, companies, and government policymakers to maintain focus on how we integrate it constructively into society while protecting human rights – and, frankly, the companies that are creating it.”
Michael Connor of Open MIC: “…we've embarked over the years on a whole range of issues, ranging from the need for federal privacy laws writ large to artificial intelligence, facial recognition, all sorts of issues. And more recently working with both Arjuna and with my colleague Jessica at Open MIC on questions of artificial intelligence and what that means for misinformation and disinformation, as well as dealing with Ekō and Christina. So the three organizations have been involved in a big effort lately about artificial intelligence.”
Arjuna Capital, which specializes in sustainable investing and manages $319 million, partnered with the advocacy group Open MIC to submit proposals calling on Alphabet and Meta to produce in-depth reports on the dangers of generative AI’s deployment in misinformation campaigns and how the companies plan to address them.
“Alphabet and Meta need to assure billions of users and their shareholders that their management and boards are up to the task of responsibly managing [AI] technology,” Open MIC Executive Director Michael Connor said.
“Their desire to maximize the value of TikTok may conflict with their desire to promote the interests of the Jewish people and Israel,” said Michael Connor, the executive director of Open MIC, an advocacy organization focused on corporate accountability in the tech industry. “There may be that conflict and they may not want to wade into it. It’s a complicated situation.”
Without that trust, no system or institution that relies on the accurate communication and assimilation of fact—not democracy, not financial markets, not health or the environment, not small business, not policy advocacy, not human rights—will survive as we know it.
Before we reach the tipping point, which I and many of my colleagues think may well be next year’s elections, it would behoove us to remember that other axiom about moving fast: Speed kills.
Tech firms began reporting details about government interactions a decade ago, under pressure from investors. Takedown demands from private parties - as in the Harvard case - can put companies in the awkward position of choosing sides, said Michael Connor, executive director of Open MIC, which has successfully pressed companies for details about government requests. Connor said of Google that, like other tech companies, "They don't like being in the position of being content moderators, and they'd rather they didn't have to."
"We should not be turning the classroom into a combat zone or a surveillance zone," said Michael Connor, executive director of Open MIC, a nonprofit that is urging Axon's shareholders to vote to abandon the drone plan in schools and other public places at a shareholder meeting next month.
The lack of information is a problem for shareholders and users alike. Investors play a critical role in holding corporations accountable by ensuring that companies like Amazon are upholding their human rights commitments and avoiding risky behaviors. But without more transparency, investors are left in the dark.
“You've got one person who is the CEO and the chairman and the largest stockholder in the company, and he frankly is responsible for a number of bad business decisions,” Connor says. “The share price took an almighty hit, and the solution was something close to a gimmick to try and bring it back.”
Because IPG has now followed up by publicly stating that Acxiom does not collect personally identifiable information related to sensitive locations such as abortion clinics, nor does it collect granular purchase data, Open MIC has withdrawn its proposal.
“A big part of our concern really comes down to transparency,” Dana Floberg, Open MIC’s advocacy director, told MarketWatch on Thursday. It’s unclear exactly what data Acxiom collects, Floberg added, as well as “what kinds of measures they may or may not be taking to ensure they keep that data safe, and what they turn over to law enforcement.”
"All the shareholder proposal asks, Is that an independent third party examiner come in and look at the evidence in an organized manner and try to help the company understand what its products are doing to people," Michael Connor, one of the activists, said in an interview Tuesday with Reuters.
“Alphabet is one of the most influential companies on the planet that shapes people’s attitudes about all sorts of things through search, through YouTube,” said Michael Connor, executive director of Open MIC, a corporate responsibility group that also signed the letters. “The proposal … is not an extraordinary request.”
Record ESG proposals are being fueled by a more shareholder-friendly SEC regime and growing awareness of shareholder engagement tools, according to Michael Connor, the executive director of Open MIC, an organization that works with institutional investors to file social impact shareholder proposals targeting Big Tech. Companies usually try to block shareholder resolutions through legal challenges, but “right now, the folks who make those decisions at the SEC are more favorably inclined toward shareholder resolutions,” Connor told Protocol.
“We need to be realistic about the outcomes, especially at companies like Alphabet and Meta which have ‘dual class’ shares that give the company founders and other insiders powerful voting rights that make shareholder initiatives extremely challenging,” said Michael Connor, Executive Director at Open MIC, a nonprofit which campaigns for corporate accountability in media and tech.
The US military alone has more than 800 active AI-related projects and has requested almost US$2 billion in funding for AI in the 2024 budget.
“Any companies providing technologies to militaries that could be violating human rights or the laws of war run the risk of criminal indictments, sanctions, civil suits and considerable reputational damage,” said Audrey Mocle, Deputy Director of nonprofit Open MIC. “The risks for investors in these tech firms increase as they become more enmeshed with the [weapons] sector.”