Efforts to Rein In AI Tap Lesson From Social Media: Don’t Wait Until It’s Too Late | Kanebridge News

Activists and officials race to shape rules and public understanding of new artificial intelligence tools

By DEEPA SEETHARAMAN
Tue, Jul 18, 2023 8:38am | 4 min read

Social media was more than a decade old before efforts to curb its ill effects began in earnest. With artificial intelligence, lawmakers, activists and executives aren’t waiting that long.

Over the past several months, award-winning scientists, White House officials and tech CEOs have called for guardrails around generative AI tools such as ChatGPT—the chatbot launched last year by Microsoft-backed startup OpenAI. Among those at the table are many veterans of the continuing battle to make social media safer.

Those advocates view the AI debate as a fresh chance to influence how companies make and market their products and to shape public expectations of the technology. This time, they aim to move faster, applying lessons learned from the fight over social media.

“We missed the window on social media,” said Jim Steyer, chief executive of Common Sense Media, a child internet-safety organisation that has for years criticised social-media platforms over issues including privacy and harmful content. “It was late—very late—and the ground rules had already been set and industry just did whatever it wanted to do.”

Activists and executives alike are advancing a range of projects and proposals to shape public understanding and regulation, addressing issues including AI’s potential for manipulation, misinformation and bias.

Common Sense is developing an independent AI ratings and reviews system that will assess AI products such as ChatGPT on their handling of private data, suitability for children and other factors. The nonprofit plans to launch the system this fall and spend between $5 million and $10 million a year on top of its $25 million budget to fund the project.

Other internet advocacy groups including the Mozilla Foundation are also building their own open-source AI tools and investing in startups that say they are building responsible AI systems. Some firms initially focused on social media are now trying to sell services to AI companies to help their chatbots avoid churning out misinformation and other harmful content.

Tech companies are racing to influence regulation, discussing it with global governments that are both wary of AI and eager to capitalise on its opportunities. In early May, President Biden met with the chief executives of companies including OpenAI, Microsoft and Google at the White House. OpenAI CEO Sam Altman has spent weeks meeting with lawmakers and other leaders globally to discuss AI’s risks and his company’s idea of safe regulation.

Altman and Microsoft President Brad Smith have both argued for a new regulatory agency that would license large AI systems. Tesla CEO Elon Musk, who on Wednesday announced the official launch of his new AI startup, said in May that the government should convene an independent oversight committee, potentially including industry executives, to create rules that ensure AI is developed safely.

The Federal Trade Commission also is taking a hard look at AI. It is investigating whether OpenAI has “engaged in unfair or deceptive practices” stemming from false information published by ChatGPT, according to a civil subpoena made public this past week. Altman said OpenAI is confident that it follows the law and “of course we will work with the FTC.”

Looming large over all this activity is the growing feeling among many activists and lawmakers that years of efforts to regulate or otherwise change social-media companies including Facebook parent Meta Platforms, Twitter and TikTok were unsatisfactory. Facebook was founded in 2004 and Twitter in 2006, but widespread discussion of regulation didn’t take off until revelations of Russian interference and other problems in the 2016 U.S. election.

“Congress failed to meet the moment on social media,” Democratic Sen. Richard Blumenthal said during a congressional hearing on AI in May. “Now we have the obligation to do it on AI before the threats and the risks become real.”

Though social-media executives in recent years called for more regulation, no new U.S. federal laws have been enacted that require companies to protect users’ privacy and data or that update the nearly three-decade-old rules governing how platforms police content. That is partly because lawmakers disagree over whether companies should do more to moderate what is said on their platforms or whether they have already overstepped into stifling free speech.

Some of the activists who are veterans of those battles say two major lessons from this era are that the companies can’t be trusted to self-regulate and that the federal government is too gridlocked to pass meaningful legislation. “There’s a massive void,” Steyer of Common Sense Media said.

Yet he and others say they are encouraged by the willingness of AI companies to discuss major issues.

“We’re seeing some of the people from trust and safety teams from social media are now at AI companies,” said L. Gordon Crovitz, co-founder of NewsGuard, a company that tracks and rates news sites. Crovitz, former publisher of The Wall Street Journal, says these people seem much more empowered in their current roles. “The body language is ‘we’ve been freed.’”

Large language models such as GPT-4 are trained on anything that can be scraped from the internet, but the data contain large chunks of hate speech, misinformation and other harmful content. So these models are further refined after their initial training to weed out some of that bad content in a process called fine-tuning.

NewsGuard has been talking to AI companies about licensing its data—which Crovitz calls a “catalog of all the important false narratives that are out there”—for fine-tuning and to bolster AI models’ guardrails against producing just those types of misinformation and false narratives.
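In practical terms, a licensed catalog like that could serve as a crude screen on fine-tuning data. The sketch below is a hypothetical, simplified illustration of the idea only — the catalog entries, function names and data are invented, not NewsGuard’s product or any AI company’s actual pipeline:

```python
# Toy sketch: screening fine-tuning examples against a catalog of known
# false narratives before they reach the model. All names and data are
# invented for illustration.

FALSE_NARRATIVE_CATALOG = [
    "the moon landing was staged",
    "5g towers spread viruses",
]

def repeats_catalogued_narrative(text: str, catalog: list[str]) -> bool:
    """Return True if the text contains any catalogued false narrative."""
    lowered = text.lower()
    return any(claim in lowered for claim in catalog)

def filter_training_examples(examples: list[dict], catalog: list[str]) -> list[dict]:
    """Drop fine-tuning examples whose response repeats a catalogued narrative."""
    return [
        ex for ex in examples
        if not repeats_catalogued_narrative(ex["response"], catalog)
    ]

training_examples = [
    {"prompt": "Tell me about Apollo 11.",
     "response": "Apollo 11 landed astronauts on the moon in July 1969."},
    {"prompt": "What happened on the moon?",
     "response": "Some say the moon landing was staged in a film studio."},
    {"prompt": "Is 5G dangerous?",
     "response": "Health agencies have found no evidence that 5G harms people."},
]

clean = filter_training_examples(training_examples, FALSE_NARRATIVE_CATALOG)
# The second example repeats a catalogued narrative and is dropped.
```

Real systems are far more sophisticated — typically using trained classifiers and human review rather than substring matching — but the basic shape is the same: a curated list of known falsehoods becomes a filter or guardrail applied during fine-tuning.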

Ravi Iyer, a former product manager for Meta, is now at the University of Southern California’s Marshall School of Business and developing a poll that tracks how people experience AI systems. He hopes the poll will influence how AI companies design and deploy their products.

“We need to know that’s a choice platforms can make and reward them for not making the wrong choices,” Iyer said.

The Mozilla Foundation, a nonprofit that builds the Firefox internet browser, said it is building open-source models as alternatives to large private AI models. “We need to build alternatives and not just advocate for them,” Mark Surman, Mozilla’s president, said.

Steyer described the AI ratings system being built at Common Sense as the most ambitious in the nonprofit’s history. Tracy Pizzo Frey, a consultant who previously worked for Google and is helping craft the system, said there is no set way to evaluate the safety of AI tools.

So far, Common Sense is looking at seven factors, including how transparent companies are about what their systems can do and where they still have shortcomings. The nonprofit may factor in how much information companies provide about their training data, which companies including OpenAI view as competitive secrets.

Frey said Common Sense won’t ask for proprietary data but needs information that helps parents and educators make informed decisions about the use of AI. “There are no rules around what transparency looks like,” Frey said.




Qatar Experiences the Fastest Non-Energy Business Growth in Nearly Two Years

Employment grew for the 16th consecutive month as companies expanded.

Fri, Jul 5, 2024 2 min

According to a recent PMI report, Qatar experienced its fastest non-energy sector growth in almost two years in June, driven by surges in both existing and new business activities.

The Purchasing Managers’ Index (PMI) headline figure for Qatar reached 55.9 in June, up from 53.6 in May, with anything above 50.0 indicating growth in business activity. Employment also grew for the 16th month in a row, and the country’s 12-month outlook remained robust.

Inflationary pressures were muted, with input prices rising only slightly since May while prices charged for goods and services fell, according to the Qatar Financial Centre (QFC) report.

This headline figure marked the strongest improvement in business conditions in the non-energy private sector since July 2022 and was above the long-term trend.

The report noted that new incoming work expanded at the fastest rate in 13 months, with significant growth in manufacturing and construction and sharp growth in other sectors. Despite the rising demand for goods and services, companies managed to further reduce the volume of outstanding work in June.

Companies attributed positive forecasts to new branch openings, acquiring new customers, and marketing campaigns. Prices for goods and services fell for the sixth time in the past eight months as firms offered discounts to boost competitiveness and attract new customers.

Qatari financial services companies also recorded further strengthening in growth, with the Financial Services Business Activity and New Business Indexes reaching 13- and nine-month highs of 61.1 and 59.2, respectively. These levels were above the long-term trend since 2017.

Yousuf Mohamed Al-Jaida, QFC CEO, said the June PMI reading was higher than in any pre-pandemic month except October 2017, when it reached 56.3. “Growth has now accelerated five times in the first half of 2024 as the non-energy economy has rebounded from a moderation in the second half of 2023,” he said.

 
