Artificial intelligence desperately needs global oversight


Every time you post a photo, reply on social media, create a website, or even send an email, your data is scraped, stored, and used to train generative AI technology that can produce text, audio, video, and images from just a few words. This has real consequences: OpenAI researchers studying the labor market impact of their language models estimate that nearly 80 percent of the U.S. workforce could have at least 10 percent of their work tasks affected by the introduction of large language models (LLMs) such as ChatGPT, while about 19 percent of workers could see at least half of their tasks affected. We are seeing an immediate shift in the job market with image generation as well. In other words, the data you put up could end up costing you your job.

When a company bases its technology on a public resource – the Internet – it is reasonable to say that this technology should be accessible and open to all. But critics note that GPT-4 lacks any clear information or specification that would enable anyone outside the organization to replicate, test, or verify any aspect of the model. Some of these companies have received huge amounts of funding from other major corporations to create commercial products. For some in the AI community, this is a dangerous sign that these companies will seek profit above the public good.

Code transparency alone is unlikely to ensure that these generative AI models serve the public good. There is little conceivable immediate benefit for a journalist, policy analyst, or accountant (all professions with "high exposure" according to the OpenAI study) if the data underlying an LLM were made available. We increasingly have laws, such as the Digital Services Act, that require some of these companies to open their code and data to review by expert auditors. And open source code can sometimes enable malicious actors, allowing hackers to sabotage the safety precautions companies build. Transparency is a laudable goal, but by itself it will not guarantee that generative AI is used to improve society.

To truly achieve public good, we need accountability mechanisms. The world needs a global AI governance body to address these social, economic, and political upheavals, going beyond what any individual government, any academic or civil society group, or any company is willing or able to do. A precedent already exists for global collaboration among companies and countries to hold themselves accountable for technological outcomes. We have examples of well-funded, independent expert groups and organizations that can make decisions on behalf of the common good. An entity like this would be charged with thinking about the benefits to humanity. Let's build on these ideas to address the fundamental issues already emerging in generative AI.

In the era of nuclear proliferation after World War II, for example, there was a credible and significant fear that nuclear technologies would be turned to evil ends. The widespread belief that society must act collectively to avoid global catastrophe echoes many discussions today about generative AI models. In response, nations around the world, led by the United States and directed by the United Nations, came together to form the International Atomic Energy Agency (IAEA), an independent body free of government and corporate affiliations that would offer solutions to the far-reaching ramifications and seemingly infinite capabilities of nuclear technologies. It operates in three main areas: nuclear energy, nuclear safety and security, and safeguards. For example, after the Fukushima disaster in 2011, it provided critical resources, education, testing, and impact reports, and helped ensure continued nuclear safety. However, the agency is limited: it relies on member states to voluntarily comply with its standards and guidelines, and on their cooperation and assistance to carry out its mission.

In technology, the Facebook Oversight Board is one working attempt to strike a balance between transparency and accountability. The board members are a global, multidisciplinary group, and their rulings, such as overturning a decision by Facebook to remove a post depicting sexual harassment in India, are binding. This model isn't perfect either: there are accusations of capture by the company, the board is funded solely by Meta, it can only hear cases that Facebook itself refers, and it is limited to content removals rather than addressing more systemic issues such as algorithms or moderation policies.
