
Microsoft Gets Serious on AI Safety, Publishes Governance Blueprint for Future Development

Microsoft has published “Governing AI: A Blueprint for the Future,” a report that rests on five principles for the development and governance of AI models.


Microsoft has released a blueprint for how it believes AI should be governed. The report, titled “Governing AI: A Blueprint for the Future,” outlines five key principles that Microsoft believes should guide the development and use of AI.

Microsoft argues that AI is a powerful technology that can bring many benefits to society, but also poses significant risks and challenges that require careful oversight and accountability. The blog post, written by Microsoft President Brad Smith and Chief Technology Officer Kevin Scott, proposes a five-step blueprint for public governance of AI that includes:

  • Implementing government-led frameworks at the inception level and identifying content moderation standards for online platforms.
  • Establishing a new federal agency dedicated to AI policy and that can coordinate with other agencies and international partners.
  • Creating a national AI strategy that sets clear goals and priorities for AI research, development, deployment, and education.
  • Promoting responsible AI practices and principles across the public and private sectors, such as fairness, transparency, reliability and safety, privacy and security, inclusiveness, and accountability.
  • Supporting AI innovation and competitiveness through increased funding, infrastructure, talent, and collaboration.

How Microsoft is Creating Responsible AI

There have been concerns that Microsoft is not taking its AI commitments seriously. Alarm bells went off when the company laid off its entire Ethics and Society team, even though other AI oversight teams remain at Microsoft.

In an interview in March, Google CEO Sundar Pichai said that regulation is essential to ensure safe AI development. In the US, lawmakers are seeking to regulate AI and to require that systems be certified before they launch.

The blog post also highlights some of the initiatives and tools that Microsoft has developed to implement responsible AI within its own organization and products, such as:

  • The Aether Committee, which advises Microsoft leadership on the ethical and social implications of AI technologies and makes recommendations on best practices and policies.
  • The Office of Responsible AI (ORA), which sets the company-wide rules for responsible AI and enables teams to comply with them through guidance, training, and impact assessments.
  • The Responsible AI Strategy in Engineering (RAISE), which defines and executes the tooling and system strategy for responsible AI across engineering teams and develops One Engineering System (1ES), a set of tools and systems built on Azure ML that help customers adopt responsible AI practices.
  • The Responsible AI principles from Microsoft, which are six core values that guide Microsoft's approach to creating and deploying AI systems: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

Microsoft also calls for more collaboration and dialogue among stakeholders from government, industry, academia, civil society, and the public to shape the future of AI in a way that reflects shared values and goals. The company says it is committed to contributing to this effort and advancing responsible AI for the benefit of everyone.

OpenAI Launches Grant Program for AI Governance

As well as Microsoft moving ahead with safer frameworks for AI, its main partner in AI, OpenAI, is doing the same. OpenAI has announced a grant program to fund experiments in democratic processes for governing AI systems.

The program will award 10 grants of $100,000 each to recipients who propose compelling frameworks for answering questions such as how AI should interact with public figures, what values AI should uphold, and who should be involved in shaping AI rules. The goal of the program is to explore how AI can benefit all of humanity and be as inclusive as possible, while avoiding bias, misinformation, and other harms. The results of the experiments may influence OpenAI's own views on AI governance but will not be binding.

Source: Microsoft
Luke Jones
Luke has been writing about all things tech for more than five years. He is following Microsoft closely to bring you the latest news about Windows, Office, Azure, Skype, HoloLens and all the rest of their products.