Artificial intelligence: Experts propose guidelines for safe systems

2023-07-20 10:53:20

A global group of tech experts and data scientists has released a new framework for the safe development of artificial intelligence (AI) products. The effort is led by the World Ethical Data Foundation, whose 25,000 members include professionals from major tech firms such as Meta, Google, and Samsung. The framework takes the form of a checklist of 84 questions for developers to work through at the start of an AI project, and the Foundation has also invited the public to submit their own questions, all of which will be considered at its next annual conference.

Presented as an open letter, a format common in the AI community, the framework has attracted hundreds of signatories. AI, at its core, allows computers to act and respond almost as if they were human: fed vast amounts of data, they can be trained to identify patterns, make predictions, solve complex problems, and even learn from their own mistakes. Alongside data, the process relies on algorithms, which are essentially lists of rules that must be followed in a precise order to accomplish a task.
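To make the "trained on data to spot patterns" idea concrete, here is a minimal, illustrative sketch (not part of the Foundation's framework) using the widely available scikit-learn library: a small model is fitted on labelled examples and then asked to predict labels for examples it has never seen. The dataset, model choice, and numbers are assumptions for illustration only.

```python
# Illustrative sketch only: train a simple model on labelled data,
# then make predictions on unseen data. Requires scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Synthetic "past examples": each row is a set of features, y holds the labels.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Hold back some examples so the model can be checked on data it never saw.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)          # "training": the algorithm finds patterns
predictions = model.predict(X_test)  # "inference": applying those patterns

print(f"accuracy on unseen data: {accuracy_score(y_test, predictions):.2f}")
```

The "learning from mistakes" described above corresponds to the training step, where the algorithm repeatedly adjusts the model to reduce its errors on the examples it has seen.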

The World Ethical Data Foundation was established in 2018 as a non-profit organization bringing together professionals from the tech and academic spheres to explore the development of new technologies. The questions it poses to developers cover a range of critical considerations, including how to prevent bias from being built into AI products and how to handle situations in which a tool's results would break the law. Relatedly, Yvette Cooper, the UK's shadow home secretary, recently said the Labour Party would criminalize those who deliberately use AI tools for terrorist purposes, underscoring the legal and ethical stakes surrounding the technology.

Recognizing the growing significance of AI, Prime Minister Rishi Sunak has appointed Ian Hogarth, a tech entrepreneur and AI investor, to lead an AI taskforce. Hogarth has said he wants to better understand the risks posed by cutting-edge AI systems and to hold the companies developing them accountable. The framework also considers how data protection laws vary between jurisdictions, whether users are made clearly aware when they are interacting with AI, and whether the human workers who input and tag the data used to train AI products are treated fairly. The full list of questions is divided into three chapters: questions for individual developers, questions for teams, and questions about product testing.

Sample questions include whether developers feel rushed or pressured to use data from questionable sources, whether the team selecting the training data includes people from a diverse range of backgrounds to help minimize bias, and what the intended use of the trained model will be. Vince Lynch, founder of IV.AI and an advisor to the World Ethical Data Foundation board, described the current state of AI as a "Wild West," in which experimentation is widespread but questions of intellectual property and human rights are now coming to the fore.
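The framework is a set of questions rather than a piece of software, but as a purely hypothetical sketch of how a team might track its own answers (nothing the Foundation prescribes; the structure, field names, and sign-off rule below are assumptions, with the three questions paraphrased from above):

```python
# Hypothetical sketch: recording a team's answers to checklist-style questions.
# The data structure and sign-off rule are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    chapter: str       # e.g. "individual", "team", or "product testing"
    question: str
    answer: str = ""
    flagged: bool = False  # mark items that need wider review before release

checklist = [
    ChecklistItem("individual", "Do I feel rushed or pressured to use data from questionable sources?"),
    ChecklistItem("team", "Does the group selecting the training data include people from diverse backgrounds, to help reduce bias?"),
    ChecklistItem("product testing", "What is the intended use of the model once it is trained?"),
]

# A simple review pass: any unanswered or flagged item blocks sign-off.
unresolved = [item for item in checklist if not item.answer or item.flagged]
print(f"{len(unresolved)} of {len(checklist)} items still need attention")
```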

Using copyrighted data to train an AI model can have serious consequences: if the model has to be retrained from scratch, the cost can run to hundreds of millions of dollars. To address such concerns, several voluntary frameworks for the safe development of AI have been proposed. Margrethe Vestager, the European Union's Competition Commissioner, is leading European efforts to establish a voluntary code of conduct with the United States government, under which companies using or developing AI would sign up to a set of non-binding standards.

One Glasgow-based recruitment platform, Willo, recently launched its own AI tool after three years of collecting data. Co-founder Andrew Wood emphasized that the company does not use AI to make decisions: the tool helps with tasks such as interview scheduling, while the decision on whether to hire a candidate is left entirely to the employer. Co-founder Euan Cameron stressed that transparency is central to the Foundation's framework, saying it must be made clear when AI has been used rather than passing off AI-generated content as the work of humans.

The release of the framework marks a significant step in the responsible development of AI products. By posing concrete questions and emphasizing transparency and ethics from the outset, the Foundation aims to help developers navigate largely uncharted territory, mitigate potential risks, and remain accountable for what they build. As AI continues to shape our world, a deliberate, questioning approach of this kind will be central to integrating it safely and ethically into our lives.