New Guide for Using AI in the Public Sector

Recently, Liberal Democrat leadership hopeful Jo Swinson urged parliament to ensure that any development of AI is done with ethics front and center. A new guide for the UK government has been created to help ensure that AI deployments in the public sector follow such a path.

The guidance was created by the Office for Artificial Intelligence (OAI) and the Government Digital Service (GDS), with contributions from the likes of The Alan Turing Institute.

It begins by highlighting the number of applications of AI in the public sector today, with a number of bodies already using it for things such as fraud detection. Despite these early use cases, the possibilities are far greater than are currently being explored, and as usage rises, so too do ethical and safety concerns.

Safe Use of AI

Of particular interest is the guidance on the ethical deployment of AI. Whilst the potential issues surrounding AI are well known, the report provides a number of operationalisable measures to counteract them. For instance, it argues that shared human purposes and values must be prioritized when developing new technologies so that a shared vision for a better future can materialize.

Of course, the report isn’t the first attempt to do this, with the House of Lords also issuing guidance on the ethical development of AI last year.

“The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take center stage in AI’s development and use,” says Lord Clement-Jones, chair of the Committee.

In light of this potential, the Committee set about exploring the ethical issues involved in the development of AI and how the UK can ensure the technology develops in the right way.

The Committee has developed five principles around which they urge the development of AI to revolve:

  1. Artificial intelligence should be developed for the common good and benefit of humanity
  2. Artificial intelligence should operate on principles of intelligibility and fairness
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence

These core principles should form the crux of an AI code, developed with stakeholders from across the industry, that can be adopted nationally at first and then potentially internationally.

Of course, the fact that the Turing Institute is still producing guidance a year after the Lords report was published perhaps indicates the slow pace of progress on the issue. When action does materialize, however, it will at least be underpinned by ample advice.

This UrIoTNews article is syndicated from Dzone.