Following EU, China releases AI Principles

  • 5 min read
  • 03 Jun 2019


Last week, the Beijing Academy of Artificial Intelligence (BAAI) released a 15-point set of principles, termed the Beijing AI Principles, calling for Artificial Intelligence to be beneficial and responsible. They are proposed as an initiative for the research, development, use, governance, and long-term planning of AI, and lay out guidelines for the research and development of AI, the use of AI, and the governance of AI.

The Beijing Academy of Artificial Intelligence (BAAI) is an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government. These principles have been developed in collaboration with Peking University, Tsinghua University, the Institute of Automation and Institute of Computing Technology within the Chinese Academy of Sciences, and China’s three big tech firms: Baidu, Alibaba, and Tencent.

Research and Development

  • Do Good


It states that AI should be developed to benefit all humankind and the environment, and to enhance the well-being of society and ecology.

  • For Humanity


AI should always serve humanity and conform to human values as well as the overall interests of humankind. It also specifies that AI should never be used against, exploit, or harm human beings.

  • Be Responsible


Researchers should be aware of the potential ethical, legal, and social impacts and risks of the AI they develop, and should be provided with concrete actions to reduce and avoid them.

  • Control Risks


AI systems should be developed in a way that ensures the security of data along with the safety and security of the AI system itself.

  • Be Ethical


AI systems should be trustworthy, such that they are traceable, auditable, and accountable.

  • Be Diverse and Inclusive


The development of AI should reflect diversity and inclusiveness, such that nobody is easily neglected or underrepresented in AI applications.

  • Open and Share


An open AI platform will help avoid data and platform monopolies and allow the benefits of AI development to be shared.

Use of AI

  • Use Wisely and Properly


The users of AI systems should have sufficient knowledge and ability to avoid possible misuse and abuse, so as to maximize its benefits and minimize the risks.

  • Informed Consent


AI systems should be developed such that, in unexpected circumstances, users' own rights and interests are not compromised.

  • Education and Training


Stakeholders of AI systems should be educated and trained to help them adapt to the psychological, emotional, and technical impacts of AI development.

Governance of AI

  • Optimizing Employment


Developers should have a cautious attitude towards the potential impact of AI on human employment. Explorations of human-AI coordination and new forms of work should be encouraged.

  • Harmony and Cooperation


Harmony and cooperation should be embedded in the AI governance ecosystem, so as to avoid a malicious AI race, to share AI governance experience, and to jointly cope with the impact of AI under the philosophy of "Optimizing Symbiosis".

  • Adaptation and Moderation


Revisions of AI principles, policies, and regulations should be actively considered so that they keep pace with the development of AI, which will prove beneficial to society and nature.

  • Subdivision and Implementation


Various fields and scenarios of AI applications should be actively researched, so that more specific and detailed guidelines can be formulated.

  • Long-term Planning


Constant research on the potential risks of Augmented Intelligence, Artificial General Intelligence (AGI), and Superintelligence should be encouraged, so that AI remains beneficial to society and nature in the future.

These AI principles are aimed at enabling the healthy development of AI so that it supports a human community with a shared future and proves beneficial to humankind and nature in general.

China releasing its own version of AI principles has come as a surprise to many, as the country has long been infamous for using AI to monitor its citizens. The move comes after the European High-Level Expert Group on AI released its 'Ethics guidelines for trustworthy AI' earlier this year. The Beijing AI Principles provided by BAAI are similar to the AI principles published by Google last year, which likewise laid out guidelines for keeping AI applications beneficial to humans.

By releasing its own version of AI principles, is China signalling to the world that it is ready to talk about AI ethics, especially after the U.S. blacklisted China's telecom giant Huawei as a threat to national security?

Some users are surprised by China's sudden show of concern for AI ethics.

https://twitter.com/sherrying/status/1133804303150305280

https://twitter.com/EBKania/status/1134246833100865536

Others, meanwhile, are impressed with the move.

https://twitter.com/t_gordon/status/1135491979276685312

https://twitter.com/mgmazarakis/status/1134127349392465920

Visit the BAAI website to read more details of the Beijing AI Principles.

Samsung AI lab researchers present a system that can animate heads with one-shot learning

What can Artificial Intelligence do for the Aviation industry

Packt and Humble Bundle partner for a new set of artificial intelligence eBooks and videos