The challenges of using AI programs
Artificial Intelligence (AI) is the use of computer programs to recognise patterns in data and to make predictions and judgements, much as humans do. Recently, large language models such as ChatGPT have shown how far AI has come in a short time: they can now produce coherent, reasoned documents that are hard to tell apart from human-produced content.
The rise of AI has been accompanied by growing concern over its potential impact on human rights. In response, the Australian Human Rights Commission and the Actuaries Institute have released a guide for the development and use of AI that promotes respect for human rights in the deployment of AI technologies.
The guide, Guidance Resource: Artificial Intelligence and Discrimination in Insurance Pricing and Underwriting, aims to help professionals and businesses comply with federal anti-discrimination legislation when using AI in insurance pricing and underwriting decisions. It follows on from the Australian Human Rights Commission's 2021 report on Human Rights and Technology.
The guide to AI
The purpose of the guide is to help organisations and individuals understand the human rights implications of AI. It provides practical tips on how to ensure that AI technologies respect human rights. The guide covers key human rights concerns, including privacy and non-discrimination.
Key human rights concerns
Privacy is a key human rights concern related to the use of AI. AI technologies often process large data sets, and there is a risk that this data may be used for purposes beyond what was originally intended, or may be shared with third parties without an individual’s consent. The 2021 report recommends that AI technologies should be developed and deployed in a way that respects the privacy rights of individuals, and that organisations need to be transparent about how they use personal data.
Another human rights concern related to AI is non-discrimination. AI technologies have the potential to perpetuate existing biases and discrimination, for example by making decisions based on incomplete or incorrect data, or by profiling individuals according to existing biases. The guide recommends that AI technologies should be designed to avoid discriminatory outcomes and that organisations need to have processes in place to identify and address any biases in their AI systems.
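To illustrate the kind of bias check such a process might include, the sketch below computes a disparate-impact ratio for a hypothetical underwriting model: the approval rate of one group of applicants divided by another's. The data, the group labels, and the 0.8 threshold (the common "four-fifths rule") are illustrative assumptions, not taken from the guide.

```python
# Hypothetical sketch: checking an AI underwriting model's decisions for
# disparate impact across two groups of applicants. All data here is invented.

def approval_rate(decisions):
    """Fraction of applications approved (True) in a list of decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact(group_a, group_b):
    """Ratio of group A's approval rate to group B's."""
    return approval_rate(group_a) / approval_rate(group_b)

# Invented example: model decisions (True = approved) for two groups.
group_a = [True, False, True, True, False]   # 60% approved
group_b = [True, True, True, True, False]    # 80% approved

ratio = disparate_impact(group_a, group_b)
if ratio < 0.8:  # "four-fifths rule" threshold, used here only as an example
    print(f"Possible disparate impact: ratio = {ratio:.2f} < 0.8")
```

A real review would use statistical tests and legal advice rather than a single ratio, but even a simple check like this can flag a model for closer human scrutiny.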
Tips for avoiding discrimination using AI
The guide suggests a number of tips for the development and deployment of AI technologies, including:
- Transparency: AI technologies need to be developed and deployed transparently, so that people know how their data is being used.
- Accountability: Organisations need to be accountable for the impacts of their AI technologies, and to have processes in place to identify and address any human rights impacts.
- Human Oversight: There should be independent oversight and review of AI technologies, to ensure that they respect human rights.
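One way an organisation might support these three principles in practice is a decision log that records, for each AI-assisted decision, which data fields the model used, which model version produced the outcome, and whether a human reviewed it. The sketch below is a minimal, hypothetical illustration; every field name and value is an assumption for the example, not drawn from the guide.

```python
# Hypothetical sketch: an audit record for each AI-assisted decision,
# supporting transparency (what data was used), accountability (which
# model version decided), and human oversight (who reviewed the outcome).
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional

@dataclass
class DecisionRecord:
    applicant_id: str
    model_version: str
    inputs_used: List[str]              # data fields the model saw
    outcome: str                        # e.g. "approved", "referred"
    reviewed_by: Optional[str] = None   # human reviewer, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log: List[DecisionRecord] = []

# Invented example: a decision referred to a human underwriter for review.
record = DecisionRecord(
    applicant_id="A-1001",
    model_version="pricing-model-2.3",
    inputs_used=["postcode", "vehicle_age", "claims_history"],
    outcome="referred",
    reviewed_by="underwriter-42",
)
audit_log.append(record)
```

A log like this makes it possible to tell an applicant what data informed a decision, and gives reviewers and regulators a trail to audit.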
The role of business, government and society in AI
The 2021 report also highlights the role of government, businesses and society in promoting human rights in AI:
- Governments have a responsibility to ensure that AI technologies respect human rights, and to establish legal and regulatory frameworks that promote this.
- Businesses have a responsibility to respect human rights in the development and deployment of AI technologies. They also need to be transparent about how they use personal data.
- Generally, society has a role to play in raising awareness of the human rights implications of AI, and community organisations and individuals need to advocate for the protection of human rights in the context of technology.
Finally, the guide emphasises the importance of ongoing monitoring and evaluation to ensure that AI technologies respect human rights.
As AI technologies continue to evolve, it is important that we regularly assess their impact on human rights. If necessary, we need to adjust the models used to ensure that they respect human rights.
In conclusion, the guide from the Australian Human Rights Commission and the Actuaries Institute gives an overview of the human rights concerns raised by AI and offers practical tips on ensuring that AI technologies respect human rights.
It highlights the need for transparency, accountability, and human oversight in the development and deployment of AI technologies. It also emphasises the importance of ongoing monitoring and evaluation to ensure that AI technologies respect human rights.
The guide is an important resource for anyone with an interest in the development and use of AI, and in the promotion of human rights, particularly those interested in how insurance companies and actuaries use AI in decision-making.