Artificial Intelligence and ESG Research

February 15 2024 - Andrew Hicklin, Senior Researcher

Human-led Machine Learning vs. Artificial Intelligence

Artificial intelligence is one of the most discussed topics of recent years, prized for its potential to bring revolutionary innovation to the market. We have seen this in the automation of various industries and sectors over the last several decades and, in the previous few years, in the explosion of novel chat tools and generative technology. I intended for this blog to focus on the advantages and issues associated with artificial intelligence in the sphere of ESG, which I will come to. As I tried to write it, however, a more fundamental question arose: "What is AI?". Whilst it seems a simple question, do everyday lay people truly understand what AI is? A standard definition might read as follows:


AI refers to the capability of a machine to imitate intelligent human behaviour. It encompasses many technologies and applications, from simple algorithms performing specific tasks to complex machine learning and neural networks that can process and learn from large amounts of data. The ultimate goal of some AI research is to achieve artificial general intelligence (AGI), where machines can understand, learn, and apply knowledge in a way indistinguishable from human intelligence. (Generated by ChatGPT and refined by Grammarly, two machine learning systems)


As the machine displays humanlike responses (potentially even beating humans at chess, etc.), we are happy to treat it as having a level of experience and possibly even consciousness. Yet the word that stands out in all of this, and that we all take for granted, is "understand". We ascribe agency to the machine and anthropomorphise the device. Yet philosophers and scientists have spent hundreds of years struggling to understand what human consciousness is. Ultimately, we have been left with the old saying cogito, ergo sum ("I think, therefore I am"). For practical purposes, humanity has had to rely on a simple Turing-style test: individuals who look and behave as we do likely have internal conscious states as we do. Whilst this was a practical necessity, this common-sense approach is troublesome because it assumes that complexity is equivalent to understanding. Whilst the intricacies of this philosophical debate are beyond the scope of this blog, many of the discussions and arguments spill over into the world of AI. This is not surprising, given the nature of the technology and the promises of AGI.

In simplistic terms, intelligence is treated as a higher function of complex processes and language. Yet various arguments, such as Searle's Chinese Room thought experiment, highlight the difficulties in this assumption. In the experiment, Searle imagines a person in a room who follows English instructions for manipulating strings of Chinese characters such that, to outside observers, it appears as if the person in the room understands Chinese. However, Searle argues that despite appearing to understand Chinese, the person (or a computer, in the real-world analogy) is merely following syntactic rules without grasping the semantic meaning. We can see something similar in non-AI settings with colleagues, clients and individuals: do they truly know what they are discussing, or are they, to some extent, just articulating learnt phrases? We all suffer from imposter syndrome, so perhaps we all play this intellectual game, yet each of us retains an independent consciousness and agency.

This article has gone down a philosophical rabbit hole, and many may ask: what does this have to do with investments and ESG? Yet the assumptions around AI are central to the issue of how the machine is trained.

Risks and Challenges Associated with AI in ESG

Training the Machine

Whilst we might treat them as independent agents, AIs are machine learning tools trained using large datasets and, importantly, labelling. We are trying to reduce an element of existence to quantifiable labels. This is a significant issue because it leads to the "black box" problem: the challenge of understanding and interpreting how artificial intelligence systems, particularly those based on complex algorithms like deep learning, make decisions or arrive at their outputs (a real-world Chinese Room). The problem is most prominent in neural networks, which are inspired by the structure and function of the human brain and are used in many advanced AI applications. Here's why it's called a "black box" and why it's a significant issue:

1. Complex Internal Mechanics: In many deep learning models, the decision-making process involves a complex network of nodes and layers. Each node in these layers has its own 'decision-making' process, influenced by the training data. The sheer number of these nodes and their intricate interactions make it challenging to trace and understand how the model arrives at a specific decision.

2. Lack of Transparency: Unlike more traditional algorithms, where the steps to a conclusion are well-defined and understandable, neural networks process information in ways that are not always clear, even to the system's designers. This lack of transparency is a major concern, particularly in applications where understanding the decision-making process is crucial, such as healthcare, finance or law.

3. Implications for Trust and Accountability: The black-box nature of AI systems raises questions about trust and accountability. If we cannot understand how a system makes decisions, it becomes difficult to trust those decisions or hold the system (or its creators) accountable for mistakes. This is especially problematic in high-stakes scenarios like autonomous vehicles or criminal sentencing.

4. Bias and Fairness: Without a clear understanding of how decisions are made, biases in training data can lead AI systems to make unfair or discriminatory decisions, and the black box problem can obscure these biases. Studies have shown that facial recognition systems, including those developed by major tech companies, have had higher error rates for women and people with darker skin tones. Some argue that this is because the systems have been trained using Western, Educated, Industrialised, Rich and Democratic (WEIRD) labels.

5. Artificial Artificial Intelligence: A term coined by Jeff Bezos for systems that rely on human cognitive work, such as labelling datasets, behind an automated facade. This work often utilises the microwork model: a form of digital labour that involves breaking down large tasks into small, manageable parts, which are then outsourced to many people via online platforms or crowdsourcing marketplaces like Amazon Mechanical Turk, Clickworker or CrowdFlower. These platforms connect clients who need tasks completed with a large pool of workers willing to perform them for a fee. These business practices have faced criticism for low wages, poor worker protections, lack of feedback for workers, opaque task pricing, and unexplained deductions from workers' pay.

6. AI Systems Creating or Improving Other AI Systems: AI systems could be used to create or improve other AI systems, leading to a rapid advancement that could outpace human understanding and control. This scenario raises concerns about the loss of human oversight and the potential for unintended consequences, especially if the AI's goals are not perfectly aligned with human values.

7. AI Eating Itself: There is a danger that, as more content is generated by AI, models will use other generated materials as sources, and errors or misinformation can be reinforced. Distinguishing between reliable and unreliable AI-generated documents requires robust mechanisms and can be time- and resource-intensive.
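The "complex internal mechanics" point above can be made concrete with a short sketch. The layer sizes below are purely illustrative (not taken from any system mentioned in this article), but they show why even a tiny fully connected network has far more tunable parameters than a person could ever inspect by hand:

```python
def count_parameters(layer_sizes):
    """Count the weights and biases in a fully connected neural network.

    Each layer is connected to the next by one weight per pair of nodes,
    plus one bias per node in the receiving layer.
    """
    total = 0
    for inputs, outputs in zip(layer_sizes, layer_sizes[1:]):
        total += inputs * outputs  # one weight per connection
        total += outputs           # one bias per node
    return total

# Hypothetical example: 784 inputs (a 28x28 image), two hidden layers
# of 512 nodes each, and 10 output classes.
print(count_parameters([784, 512, 512, 10]))  # → 669706
```

Nearly 670,000 learned numbers in a "toy" model; modern generative systems have billions. No designer can point to one of those numbers and say what it means, which is the black box problem in miniature.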

In summary, the black box problem in AI highlights the challenges in understanding and interpreting complex AI systems, and overcoming those challenges is crucial for trust, accountability and fairness in AI applications. When we treat AI as some neutral agent or consciousness, we forget that its creators select these datasets and labels and often incorporate preexisting political, ethical and social choices. Some commentators are drawing attention to whose human intelligence is being replicated and whether it is primarily WEIRD. AI systems have the potential to perpetuate racial and other forms of inequality, which is particularly concerning in ESG research focused on identifying and mitigating such issues.

Environmental Impacts of Building the Machine

AI systems, especially those involving large datasets and complex computations, can have extraordinarily high energy and raw material demands. Various journals and articles now claim that AI hardware will represent a large proportion of global energy use by 2030, with figures from 5% to potentially 20% being quoted. This is before we consider the energy needed to procure and refine the raw materials for its development. As datasets grow larger in an effort to avoid mistakes and bias, more energy and materials are required, creating a cycle of ever-increasing resource demand.


  • Mining and Extraction: The extraction of raw materials, especially rare earth elements, is often environmentally destructive. It can lead to deforestation, soil erosion, and habitat destruction, impacting biodiversity. Mining operations can also consume vast amounts of water and produce significant greenhouse gas emissions.
  • Energy Consumption: The production of AI hardware is energy-intensive. The energy used in manufacturing processes contributes to carbon emissions, particularly if sourced from fossil fuels. Furthermore, the operation of data centres, crucial for AI and machine learning tasks, consumes significant amounts of electricity.
  • Water Use: The production of semiconductors is water-intensive, and water pollution is a concern in regions where electronics manufacturing is concentrated.
  • E-Waste: The rapid pace of technological advancement leads to a short lifespan for electronic devices, contributing to the growing problem of electronic waste (e-waste). E-waste disposal poses challenges due to the toxic substances in electronics, like lead, mercury, and cadmium, which can leach into the environment.
  • Chemical Pollution: Manufacturing electronic components involves hazardous chemicals, which, if not properly managed, can contaminate air, water, and soil.
  • Supply Chain and Transportation: The global supply chain for AI hardware components involves transportation across long distances, contributing to emissions from shipping and aviation.


The environmental footprint of AI is a growing concern, and addressing it requires concerted efforts across industry, government, and consumers to adopt more sustainable practices and technologies.

Mitigating AI Risks Through ESG Practices

While AI can enhance ESG research through improved data collection and analysis, it poses various risks and ethical concerns. These include perpetuating inequalities, affecting democratic processes, high carbon emissions, and the potential misuse of technology. Evaluating AI ethics in ESG investments and operations is crucial to maximising benefits while minimising risks. Ethical Screening is dedicated to aligning investment portfolios with moral values and ESG criteria, navigating the complex landscape of AI integration, and facing unique opportunities and challenges. Human-led intelligence, or "consciousness", as many lay people might call it, is at the forefront of our business model. Machine learning and automation can significantly improve efficiency, but we believe this merely augments our existing virtues instead of replacing them. We want to ensure our clients never face a black box scenario: our team sits at the heart of the research and analysis process, willing to think ahead of system developments and the limitations of manufactured datasets.


  • Human-Led Research: The company uses automation as a tool, not an end product. This blog was produced with the assistance of a generative AI but directed by a human consciousness and vision, as is all our research.
  • Supply Chain and Raw Materials: Monitoring and reporting on companies' performance and highlighting risks, including the sourcing of rare earth metals.
  • Recycling and Circular Economy: Reporting on efforts to recycle electronic components and to adopt a circular economy model, which can mitigate environmental impacts. Recycling can reduce the demand for new raw materials and lower the environmental burden of disposal.
  • Energy Efficiency: Reporting on the energy efficiency of data centres and AI hardware, as improvements here can reduce overall energy consumption and associated emissions.
  • Renewable Energy: Reporting on transitions of manufacturing and data centre operations to renewable energy sources, as these can significantly reduce carbon emissions.
  • Regulation and Best Practices: Implementing rules and industry best practices for responsible sourcing, manufacturing, and disposal can mitigate environmental impacts.
  • Automation KPIs: Reporting on sector-specific mitigation plans for the increased use of automation and what protections are in place for employees.


Our strength lies in providing detailed ESG scores and research narratives, which help mitigate our clients' risks.



Atlas of AI - Kate Crawford

PhilosophyTube - Here’s What ETHICAL AI Really Means - Abigail Thorn

Written with the assistance of ChatGPT and Grammarly
