Artificial Intelligence & Ethics

Blogpost
March 12 2019

Artificial Intelligence is developing at a rapid pace and, if this pace is maintained, it is predicted that A.I. will soon become an integral part of modern human life. The increasing presence of A.I. raises some important questions on ethics and accountability. Although it is still a relatively fledgling technology, it is clear that A.I.'s implementation could make people's lives easier, healthier and safer, with broader-reaching benefits to society and the environment.

Steps are being taken to enhance the protection of human rights as well as to identify and address environmental concerns. For instance, as part of Microsoft's A.I. for Earth project, the A.I. developer Conservation Metrics uses remote sensors and drones to send data to Microsoft's Azure cloud, where machine learning is harnessed to make sense of the range of images and sounds and detect a target species. This helps researchers determine ecological patterns and improve conservation efforts more thoroughly than the traditional 'boots on the ground' approach, which can be expensive and inefficient. In a second example, machine learning developed by a Google subsidiary is being put to use to identify those at risk of losing their sight. Using an algorithm built from large amounts of background data, doctors can check whether a patient shows symptoms that would be impossible to detect without A.I.

Ethical Issues

There are risks associated with the implementation of A.I., for example the reflection and amplification of historical, social and cultural inequalities, or existing injustices. Manipulation of a machine learning system by a human programmer, or the use of historical data that contains biases, can instigate a bias in the machine. For example, in the US, two A.I. developers use historical crime rates to predict where future crime will occur, and claim their research has found the software to be twice as accurate as human analysts. However, this can lead to a feedback loop in which algorithms reflect historical attitudes towards supposedly "bad" neighbourhoods and create a circular bias in the machine: by increasing police presence in these areas, more crime is likely to be detected, which reinforces the data that caused the police presence in the first place.
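
To illustrate that feedback loop, below is a minimal simulation sketch (not any vendor's actual software; the crime rates, initial records and patrol numbers are invented). Two areas have identical underlying crime rates, but patrols are allocated according to previously detected crime, so an initial skew in the records is never corrected.

```python
# Minimal sketch of a predictive-policing feedback loop. All figures are
# invented for illustration; this is not any real product's algorithm.
import random

random.seed(0)

TRUE_CRIME_RATE = {"A": 0.10, "B": 0.10}   # both areas have the same underlying rate
detected = {"A": 60, "B": 40}              # historical records already skewed towards A
TOTAL_PATROLS = 100

for year in range(1, 6):
    total = sum(detected.values())
    # Patrols are allocated in proportion to previously *detected* crime
    patrols = {area: round(TOTAL_PATROLS * count / total) for area, count in detected.items()}
    for area, n in patrols.items():
        # More patrols in an area mean more incidents are observed there,
        # even though the true crime rates are identical
        observed = sum(random.random() < TRUE_CRIME_RATE[area] for _ in range(n * 10))
        detected[area] += observed
    print(f"Year {year}: patrols={patrols}, detections to date={detected}")

# The initial 60/40 skew persists year after year, because the allocation is
# driven by data that the allocation itself produced.
```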

A.I. can also be used to handle large data sets - automated tools to analyse CVs are an example of this, and are already used by some employers to filter out obviously unsuitable candidates, enabling large volumes of applications to be analysed in a short amount of time. However, this could lead to discrimination: for instance, a disabled applicant could be filtered out without due consideration of the disability and the context of their employment history. As noted above, the human biases possible in the underlying algorithms could lead to discrimination based on race, religion and so on (intentional or otherwise), which would be very hard to identify when buried within programming too complex for a layperson to unravel or question.
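
The sketch below shows how such discrimination can arise without any protected characteristic ever being mentioned. It is a hypothetical, simplified screening rule; the field names and thresholds are assumptions for illustration, not any real employer's system.

```python
# Hypothetical, naive CV-screening rule (illustration only).
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    years_experience: float
    employment_gap_months: int   # e.g. time out of work for health reasons

def passes_screen(c: Candidate) -> bool:
    # A seemingly neutral rule: reject anyone with a long employment gap.
    # It never mentions disability, yet it disproportionately filters out
    # candidates whose gaps are the result of illness or disability.
    return c.years_experience >= 2 and c.employment_gap_months <= 6

candidates = [
    Candidate("Applicant A", 5.0, 2),
    Candidate("Applicant B", 7.0, 18),  # gap caused by a period of ill health
]
for c in candidates:
    print(c.name, "passes" if passes_screen(c) else "rejected")
```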

Humans are not perfect, so it may be a fallacy to expect that the machines we have programmed can be. More positively, however, transparency in the algorithms could help to root out machine learning bias. IBM has released a new transparency tool to help identify where biases originate in the data and, it is hoped, allow data scientists to rework their algorithms into a fairer system for all.
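
As a rough illustration of the kind of check such transparency tools perform, the sketch below compares selection rates between two groups and computes a 'disparate impact' ratio. The data and group labels are invented, and this is not IBM's tool or API, just the underlying idea.

```python
# Minimal sketch of a fairness check: compare selection rates across groups.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# 1 = favourable outcome (e.g. application approved), 0 = unfavourable
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # privileged group
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # unprivileged group

rate_a = selection_rate(group_a)
rate_b = selection_rate(group_b)
disparate_impact = rate_b / rate_a

print(f"Selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"Disparate impact ratio: {disparate_impact:.2f}")
# A common rule of thumb flags ratios below 0.8 as evidence of potential bias.
```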

Machine learning that draws on social data can also manipulate human experience, instilling a bias or a change in the person. Netflix's recommendation system, for example, looks at nuanced themes within the content a user is watching and learns from it, creating the recommendations tab. It could be argued that Netflix is trying to give its customers a range of programmes they actually want to watch without actively looking for them. However, it could also be argued that this type of machine learning inhibits our freedom as it narrows our perceived choices, by only recommending similar themes and creating an 'echo chamber' of content.
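
A minimal sketch of content-based recommendation illustrates that narrowing effect. This is not Netflix's actual system; the titles and theme vectors are invented. Items are ranked by similarity to what the user has already watched, so the most similar content always rises to the top.

```python
# Minimal content-based recommendation sketch (hypothetical data).
import math

# Invented theme vectors: [crime, comedy, documentary]
catalogue = {
    "Crime Drama 1": [0.9, 0.1, 0.0],
    "Crime Drama 2": [0.8, 0.2, 0.1],
    "Sitcom":        [0.1, 0.9, 0.0],
    "Nature Doc":    [0.0, 0.1, 0.9],
}
watched = ["Crime Drama 1"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

profile = catalogue[watched[0]]
scores = {title: cosine(profile, vec) for title, vec in catalogue.items() if title not in watched}
for title, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{title}: {score:.2f}")

# The top recommendation is another crime drama; dissimilar content ranks last,
# which is how an 'echo chamber' of similar themes can form.
```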

This content-based manipulation is even worrying the companies that build these systems. OpenAI, an Elon Musk-backed tech company, has created an A.I. text generator so good at producing fake stories from a few words of input that the company itself has halted public release for fear of misuse. The A.I. could be used to create plausible-sounding fake stories that promote conspiracies or bigoted text, thereby manipulating a person's thoughts and validating opinions with false evidence.

Future Employment

As A.I. improves there is the possibility of human skills being supplanted in the not-too-distant future, with the potential to reduce employment rates, particularly in certain sectors. Jobs in transport, warehousing and logistics are predicted to have above-average susceptibility to automation, with an estimated 80% of roles in those industries at risk, according to a report by Citi and the Oxford Martin School. Other examples with a high chance of future automation include roles such as paralegals and legal assistants, and other customer-service-related jobs that can be handled by 'intelligent agents' or virtual assistants (think Alexa or Siri for simple home-based examples).

In our own sector, the application of 'big data' to ESG- and CSR-related research is ripe for interpretation by A.I.-type systems. The vast increase in online information can be analysed by machine learning systems, particularly to examine trends and potentially remove a level of human involvement. This too raises the possibility of bias, where potentially unsustainable companies can achieve a high positive rating 'score' if they are not viewed holistically.
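
To make that risk concrete, here is a deliberately naive, hypothetical keyword-based scoring sketch. The keywords and weights are invented and real research systems are far more sophisticated, but the same danger applies: positive-sounding disclosure can inflate a score unless the company is viewed holistically.

```python
# Hypothetical, deliberately naive keyword-based ESG 'score' (illustration only).
POSITIVE = {"renewable": 2, "diversity": 1, "community": 1, "sustainable": 2}
NEGATIVE = {"spill": -3, "fine": -2, "violation": -3}

def naive_esg_score(text: str) -> int:
    words = text.lower().split()
    return sum(weight
               for kw, weight in {**POSITIVE, **NEGATIVE}.items()
               for w in words if w.startswith(kw))

glossy_report = ("Our sustainable strategy invests in renewable energy, "
                 "diversity programmes and community projects.")
print(naive_esg_score(glossy_report))
# Scores highly on the wording alone, regardless of the company's actual
# operations or controversies.
```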

On the more positive side, roles where social interaction is key are less likely to be affected. Carers, mental health workers and social workers, for instance, have a very low chance of automation according to McKinsey & Company. It is also worth bearing in mind that past technological changes created new jobs as well as replacing them.

Governments and businesses need to work together to support those who are displaced as they adjust to new systems and new technologies, and an increase in STEM learning is critical to developing people alongside A.I.

Conclusions

Despite the negative issues highlighted, if an A.I. system's purpose is a genuine ethical good, it is in society's interest to find a way around these issues and deliver it to the public. Driverless cars, for example, create ethical and practical dilemmas which programmers of A.I. must take into account; the "trolley problem" is one example - whether it is morally permissible to actively run over one person to save the lives of five others. The practicalities, such as the infrastructure needed to support autonomous vehicles, are a separate but related obstacle. However, if the underlying purpose is to reduce road accidents below the level achievable with other initiatives, the development of driverless cars is a progression in A.I. that is arguably an ethical requirement.

The future of A.I. is uncertain and can seem ominous. Companies should be aware of the moral ramifications of A.I. and try to mitigate the impact of ethical issues such as bias. Transparency of algorithms and an explanation of the data are useful ways of building that trust. Additionally, through initiatives like PartnershipAI and AINow, companies can discuss, share and recommend best practices in research and development. A.I. should evolve for the public good, and when it does, it is likely to achieve marvellous things.
