OpenAI offers $1 million in grants to shape ethical AI, combat misinformation

In a blog post, OpenAI has officially announced plans to award ten $100,000 grants ($1 million in total) for the best ideas on how AI should be regulated and controlled.
Christopher McFadden
The grants will go to the best ideas on how to regulate AI.


ChatGPT creator OpenAI has announced ten $100,000 grants for anyone with good ideas on how artificial intelligence (AI) can be governed to help address bias and other concerns. The grants will be awarded to the recipients who present the most compelling answers to some of the most pressing questions around AI, such as whether it should be allowed to express opinions about public figures.

This comes amid arguments over whether AI systems such as ChatGPT may carry a built-in prejudice because of the data they are trained on (not to mention the opinions of the human programmers behind the scenes). Reports have documented instances of discriminatory or biased results generated by AI technology. There is also growing apprehension that AI, when integrated with search engines like Google and Bing, might deliver misleading information with great conviction.

Despite receiving $10 billion in backing from Microsoft, OpenAI has been advocating for the regulation of AI for some time now. However, the organization has expressed concerns over proposed rules in the European Union and has threatened to withdraw from the region. "The current draft of the EU AI Act would be over-regulating, but we have heard it's going to get pulled back," OpenAI's chief executive Sam Altman told Reuters. "They are still talking about it," he added.

But, as Reuters pointed out, these grant amounts are modest by the standards of budding AI startups: most AI engineers earn more than $100,000 at going rates, and compensation for exceptional talent can exceed $300,000. Nevertheless, AI systems "should benefit all of humanity and be shaped to be as inclusive as possible," OpenAI said in the blog post. "We are launching this grant program to take a first step in this direction," the company added.

Altman, a prominent advocate for AI regulation, has continued to update ChatGPT and the image generator DALL-E. However, he recently expressed concerns about the potential risks of AI technology during an appearance before a U.S. Senate subcommittee, emphasizing that if something were to go wrong, the consequences could be significant.

Recently, Microsoft joined the call for comprehensive regulation of AI. However, the company remains committed to integrating the technology into its products and competing with other major players like OpenAI, Google, and various startups to deliver AI solutions to consumers and businesses.

AI's potential to enhance efficiency and reduce labor costs has piqued the interest of almost every sector. However, there are also concerns that AI might spread misinformation or factual inaccuracies, which industry experts call "hallucinations."

AI has also been involved in creating popular hoaxes. For example, a recent fake image of an explosion near the Pentagon briefly rattled the stock market. Despite numerous calls for stricter regulation, Congress has so far been unable to enact new laws that significantly limit the power of Big Tech.
