Launched in 2017, the Ethics and Governance of AI Initiative is a hybrid research effort and philanthropic fund that seeks to ensure that technologies of automation and machine learning are researched, developed, and deployed in a way that upholds social values of fairness, human autonomy, and justice.
The Initiative is a joint project of the MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University. It incubates a range of research, prototyping, and advocacy activities within these two anchor institutions and across the broader ecosystem of civil society.
At present, the Initiative supports work in three domains it believes to be among the most impactful near-term areas of automation and machine learning deployment. As part of this work, it has launched the Ethics and Governance of AI Initiative Challenge.
This open challenge, which will award up to $750,000 to a range of projects, is seeking fresh and experimental approaches to four specific problems at the intersection of AI and the news:
1. Ensuring that AI serves the public good requires that the public know how platforms deploy these technologies and how they shape the flow of information through the web today. As many others have pointed out, however, transparency and accountability around these decisions have been limited, and the Initiative is seeking ideas that help raise them. These might be new policies in the form of draft legislation, or technical tools that help keep an eye on the information ecosystem.

2. AI might be applied by a variety of actors to spread disinformation, from powering believable bots on social media to fabricating realistic video and audio, exacerbating a range of existing problems in news and information. The Initiative is seeking approaches to detect and counter this next generation of propaganda.

3. Journalists play a major role in shaping public understanding of AI, its impact on the information ecosystem, and what should be done to ensure the technology is used ethically. But it can be hard for them to keep up with the latest developments in technical research and communicate them effectively to society at large. The Initiative is seeking ideas that bolster this community in its important work and give journalists the tools they need to communicate effectively about AI and its impact.

4. It is easy to find things to critique about the influence that automation and AI have on the news and information space; it is more challenging to articulate plausible alternatives for how platforms should be designed and how they should deploy these technologies. The Initiative is interested in ideas that paint a picture of the future: how might platforms, from smartphones and social media sites to search engines and online news outlets, be redesigned in part or entirely to better serve the public good?
Grants ranging from $75,000 to $200,000 will be awarded to support year-long projects.
The challenge is open to anyone with a good approach to addressing these problems, whether journalists, designers, technologists, activists, entrepreneurs, artists, or lawyers, from a variety of communities around the world.
If you have an AI-focused solution for any of the categories above, apply now to the Ethics and Governance of AI Initiative Challenge 2018 with your idea.
To learn more about the organising body, you may visit the official website.