When dealing with climate change mitigation or supply chain management, companies are expected to do a lot of manual work: digging through spreadsheets, chasing data, and comparing alternatives without knowing what to trust. This isn't just inefficient. It's unnecessary. Technology can do that part faster. Cheaper. And better.
Not to replace people, but to free them to focus on what people are actually good at: judgement, priorities, connections, action.
There was recently a big headline about misuse of AI in the Norwegian public sector, where a municipality had included AI-generated content in a very important report. Much of the data was wrong, and sources were incorrect or, in some cases, entirely fabricated. This is a perfect example of AI being confidently incorrect, and a good reminder to keep a healthy skepticism toward AI-generated content. AI is not perfect, and won't be in the near future, but when used properly, the value it delivers can outweigh the occasional mistake. We accept that people make mistakes; should we not also accept that AI will?
Fortunately, there are several effective ways to mitigate the risk of AI-generated errors.
Mitigating these risks has been a central focus in developing the recently launched version of our ESG Due Diligence solution, designed to support compliance with both the Norwegian Transparency Act (Åpenhetsloven) and the Modern Slavery Act. The solution makes extensive use of AI, from monitoring news related to your suppliers and identifying potential risks, to assessing those suppliers against key Environmental, Social, and Governance (ESG) risk factors.
But how do we ensure the AI makes reliable, responsible evaluations, especially in such a sensitive context?
To ensure accurate and scalable ESG risk evaluations, our system uses a network of specialized agents. Each agent has a focused task and builds upon the work of the previous one, enabling both speed and quality control.
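To make the idea of agents building on each other's work concrete, here is a minimal sketch in Python. All names and the shared-state design are assumptions for illustration, not the actual implementation: each agent is a callable that reads and enriches a shared assessment object before passing it on.

```python
from dataclasses import dataclass, field

# Hypothetical shared state that flows through the agent chain.
@dataclass
class Assessment:
    company: str
    notes: dict = field(default_factory=dict)

def run_pipeline(company, agents):
    """Run agents in order; each one builds on the previous one's output."""
    state = Assessment(company=company)
    for agent in agents:
        state = agent(state)
    return state

# Toy agents standing in for the real ones described below.
def feasibility(state):
    state.notes["feasible"] = True
    return state

def company_context(state):
    state.notes["context"] = f"profile of {state.company}"
    return state

result = run_pipeline("ExampleCo", [feasibility, company_context])
```

Sequencing the agents this way is what enables the quality control described below: each stage can refuse to proceed if the previous stage's output is inadequate.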
Before initiating any assessment, we start with the Feasibility Agent. Its role is to determine whether there is sufficient, reliable information available about a given company.
It asks questions like: is there enough reliable, publicly available information about this company to support an assessment? If the answer is no, the company is filtered out of the screening process, and we default to broader sector- or industry-level risk scores from trusted sources like ITUC and ILO.
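The gating logic can be sketched as follows. This is a simplified illustration, with made-up function names, thresholds, and baseline scores: when too little reliable information is found, the screening falls back to a sector-level score rather than attempting a company-level assessment.

```python
# Hypothetical sector-level baselines (e.g. derived from ITUC/ILO data).
SECTOR_BASELINE = {"textiles": 0.7, "software": 0.2}

def screen(company, sources, sector, min_sources=3):
    """Proceed with company-level screening only if enough sources exist."""
    if len(sources) < min_sources:
        # Not feasible: default to the broader industry-level risk score.
        return {"level": "sector", "score": SECTOR_BASELINE.get(sector, 0.5)}
    # Feasible: the score is filled in by the downstream risk agents.
    return {"level": "company", "score": None}
```

The key design point is that the fallback is explicit and traceable: a sector-level score is never silently presented as a company-level assessment.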
For companies deemed feasible, the Company Context Agent gathers detailed contextual information about the company.
This context acts as a foundation for the risk assessments to follow. It ensures the next agents operate with accurate, structured input.
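One simple way to guarantee that downstream agents receive structured input is to validate the gathered context against a fixed schema before handing it on. The field names below are assumptions for illustration only:

```python
# Hypothetical minimal schema for the company context.
REQUIRED_FIELDS = ("name", "industry", "countries")

def build_context(raw):
    """Validate and normalize gathered context; fail loudly if incomplete."""
    missing = [f for f in REQUIRED_FIELDS if f not in raw]
    if missing:
        raise ValueError(f"incomplete context: {missing}")
    return {
        "name": raw["name"].strip(),
        "industry": raw["industry"].lower(),
        "countries": sorted(set(raw["countries"])),  # deduplicate, stable order
    }
```

Failing loudly here is deliberate: it is cheaper to reject an incomplete context than to let risk agents reason over missing or malformed data.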
Once the company context is available, a set of Risk Agents is deployed — one for each ESG risk category (e.g., child labor, corruption, environmental violations).
Each risk agent evaluates the company against its single risk category, drawing on the structured context gathered in the previous step.
This targeted approach ensures that each type of risk is assessed independently, transparently, and with domain-specific nuance. By running these agents in parallel, the system delivers granular and scalable insights with a high degree of efficiency.
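The fan-out over risk categories can be sketched with standard-library concurrency. The category list and the placeholder scoring function are assumptions; in practice each agent would wrap a scoped model call:

```python
from concurrent.futures import ThreadPoolExecutor

# Example ESG risk categories; one independent agent per category.
CATEGORIES = ("child_labor", "corruption", "environmental_violations")

def assess_category(category, context):
    # Placeholder for a model call scoped to a single risk category.
    return {"category": category, "score": 0.0, "evidence": []}

def run_risk_agents(context):
    """Run one agent per category in parallel; collect independent findings."""
    with ThreadPoolExecutor(max_workers=len(CATEGORIES)) as pool:
        return list(pool.map(lambda c: assess_category(c, context), CATEGORIES))
```

Because each finding is produced independently, adding a new risk category means adding one more agent, not retraining or re-prompting a monolithic model.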
After all risk agents have completed their assessments, the Verification Agent performs a final quality check. Its job is to fact-check and validate the outputs across the entire network.
It reviews the assessments for factual accuracy and internal consistency. If any issues are detected, the agent flags them for review or reprocessing. This step ensures that the final ESG risk profile is coherent, reliable, and traceable, reducing the chance of errors or misjudgments slipping through.
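A verification pass of this kind can be illustrated with a couple of mechanical checks. The specific rules below (requiring supporting evidence, keeping scores within a fixed range) are assumptions chosen for the sketch:

```python
def verify(findings):
    """Flag findings that lack sources or carry an out-of-range score."""
    flagged = []
    for f in findings:
        if not f.get("evidence"):
            flagged.append((f["category"], "no supporting sources"))
        if not 0.0 <= f.get("score", -1.0) <= 1.0:
            flagged.append((f["category"], "score out of range"))
    return flagged  # non-empty result -> route for human review or reprocessing
```

Checks like these are cheap to run on every assessment, which is exactly why a dedicated verification stage scales better than ad-hoc spot checks.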
The combination of specialized agents working in tandem, along with a robust verification process, significantly reduces the risks associated with AI-generated errors. However, we must remain vigilant, continually refining both the technology and the processes that support it. By doing so, we can harness AI’s potential while ensuring that decisions are based on accurate, reliable, and ethical data, ultimately helping us navigate the complex landscape of sustainability with greater confidence.