UNESCO calls for regulating the use of artificial intelligence in schools
The United Nations Educational, Scientific and Cultural Organisation (UNESCO) has recently called for strict regulations to govern the use of artificial intelligence (AI) in classrooms and thereby help protect children. It said the development of national and global strategies to ensure the responsible, safe and ethical use of AI is urgently needed.
In its latest guidance to governments, the organisation emphasised that regulators are still unprepared to address the issues arising from the deployment of artificial intelligence programmes in schools.
The agency said that using AI programmes in place of teachers could affect children's emotions and put them at risk of being easily manipulated. The UNESCO guide states that AI tools have the potential to help children as research assistants, but that these tools will only be safe and effective if governments regulate their use and involve teachers, students and researchers in the design process.
AI is changing the world and making everyday life easier, with AI applications significantly reducing human workloads across many fields.
However, warnings about the dangers of AI are mounting. The technology is said to be fuelling a new and complicated battle between cybersecurity forces and cybercriminals. Children benefit from the opportunities AI brings, but they are also the most vulnerable to the dangers of this disruptive technology.
As AI gains popularity worldwide, with investment surging in 2023, lawmakers around the world have moved urgently to consider how to reduce the risks this emerging technology poses to national security.
In November, the British Government will host a global summit on AI safety, which will focus on how to prevent AI from being used to spread disinformation during elections and from being deployed in warfare.
In May, leaders of the Group of Seven (G7) industrialised countries called for global standards to ensure that AI technology is developed safely and reliably.
The rapid development of AI poses a difficult problem for officials in many countries: how to balance encouraging innovation and creativity with controlling the technology's potential risks. Even technology companies that profit from AI products warn of its dangers if it is not placed under strict supervision.
Microsoft President Brad Smith recently stated that AI has the potential to become a useful tool but risks becoming a weapon against humanity if it slips beyond human control. He affirmed the need to encourage technology companies to "do the right thing", including creating new regulations and policies to ensure safety in all situations.
OpenAI CEO Sam Altman has also warned of the potential dangers of AI and stressed the need to reduce them. Four major technology companies, Anthropic, Google, Microsoft and OpenAI, have established a new group called the Frontier Model Forum to develop safety standards, targeting key goals such as promoting safe AI research to support development and reduce risks; helping the public understand the nature, capabilities, limitations and impacts of the technology; and cooperating with policymakers and academics to share knowledge about risk and safety.
The use of AI is an irreversible global trend. Developing strategies to ensure that AI truly serves human life is an essential step in preparing future generations to live safely with artificial intelligence.