OpenAI gears up to safeguard 2024 elections from tech abuse
OpenAI has announced measures it is implementing ahead of the 2024 worldwide elections to prevent potential abuse of its technology, provide transparency around artificial intelligence (AI)-generated content, and promote access to accurate voting information. The company has emphasised its commitment to protecting the integrity of the democratic process by ensuring that its technology does not undermine it in any way.
The company is renowned for AI tools that people use to enhance their day-to-day lives and tackle complex problems. These tools have a wide range of applications, such as streamlining state services and simplifying medical paperwork. Like all novel technologies, AI tools present both advantages and unique challenges.
To prepare for this year's worldwide elections, OpenAI is focusing on elevating accurate voting information, implementing well-considered policies, and bolstering transparency. The company has also formed a cross-functional team composed of professionals from its safety systems, threat intelligence, legal, engineering, and policy divisions to investigate and combat potential abuse promptly.
The AI pioneer is executing several key initiatives in readiness for the approaching elections. These include the adoption of measures to prevent the misuse of its tools during elections, for example, the dissemination of misleading 'deepfakes', large-scale influence campaigns, or the creation of chatbots impersonating political candidates. Additionally, it has been modernising tools to improve factual correctness, reduce bias, and turn down certain requests, thereby establishing a robust foundation for election integrity.
OpenAI's ongoing refinements of its Usage Policies for ChatGPT and API are based on learnings about the use or potential abuse of the technology. A few significant policy points to note regarding elections include: the prohibition of the creation of chatbots that impersonate real individuals or institutions, a ban on applications that discourage participation in democratic processes, and the restriction of applications for political campaigning and lobbying until the firm has a deeper understanding of the effectiveness of its tools for personalised persuasion.
Better transparency around AI-generated content, and particularly image provenance, is another area the company is concentrating on. OpenAI is undertaking several initiatives, including the implementation of the Coalition for Content Provenance and Authenticity's (C2PA's) digital credentials for images generated by DALL·E 3. Furthermore, it is developing a provenance classifier to identify images produced by DALL·E. This tool has shown promising results in internal testing, even with images that have undergone common types of alterations.
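At its core, a content credential of this kind binds cryptographically signed provenance metadata to an image, so that later tampering can be detected. As a loose illustration of that idea only (not the actual C2PA specification, which embeds signed manifests backed by X.509 certificate chains), the following Python sketch signs an image's hash with an HMAC and verifies it later; the key and helper names are hypothetical:

```python
import hashlib
import hmac

# Hypothetical signing key for illustration; real C2PA credentials
# are signed with certificate-backed keys, not a shared secret.
SIGNING_KEY = b"example-provenance-key"

def attach_credential(image_bytes: bytes) -> dict:
    """Produce a simplified provenance record for the image bytes."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    signature = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}

def verify_credential(image_bytes: bytes, credential: dict) -> bool:
    """Check that the image still matches its signed provenance record."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == credential["sha256"] and hmac.compare_digest(
        expected, credential["signature"]
    )

original = b"\x89PNG...image data..."
cred = attach_credential(original)
print(verify_credential(original, cred))            # True: image untouched
print(verify_credential(original + b"edit", cred))  # False: content altered
```

The design point this illustrates is that the credential travels with the image and any alteration of the pixels breaks verification, which is what makes such metadata useful for flagging manipulated AI-generated content.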
In its ongoing quest for transparency, OpenAI is integrating ChatGPT with existing sources of information. This means that users will have real-time access to global news reporting, including attributions and links. Such visibility regarding the origin of information and balanced news sources will enable users to make more informed decisions.
To improve access to trustworthy voting information, OpenAI is partnering with the National Association of Secretaries of State (NASS) in the US. ChatGPT will direct users to authoritative US voting information when asked procedural election-related questions. Lessons learnt from this collaboration will guide the company's approach in other countries and regions.
OpenAI anticipates sharing more insights in the coming months and looks forward to further collaborations to preempt and prevent the potential misuse of its tools in the run-up to the 2024 global elections.