The Third Principle of AI by Design: Transparency and Trust

Original source post by Krishna Sai

This is the third article in our Orange Matter series on AI by Design.

IT professionals must trust their tools. When it comes to artificial intelligence (AI) solutions, trust is built on several factors, such as the adoption of privacy and security practices and safeguards against algorithmic bias. In this article, we explore the third principle of our AI by Design framework and discuss how SolarWinds navigates transparency challenges to cultivate user trust in our AI systems.

Transparency in AI

SolarWinds has used artificial intelligence in its IT management solutions for over five years. Traditional AI/machine learning (ML) systems made building transparency into their responses relatively easy. These algorithms often use straightforward logic like correlation: “If this, then that.” For example, if A happened because B happened, which in turn happened because C happened, the user is prompted to investigate C to resolve issues with A and B. These systems made their underlying logic transparent by clearly laying out each step; visual cues represented the process, allowing users to trace it, analyze past data, and review previous results.
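To make this concrete, here is a minimal sketch of that kind of rule-based correlation; the rule names are hypothetical and invented for illustration, not SolarWinds code:

```python
# A minimal sketch of rule-based root-cause correlation. Each rule
# records "this symptom is caused by that underlying event"; the
# engine simply walks the chain to its root. All names are hypothetical.
CAUSAL_RULES = {
    "app_latency_high": "db_queries_slow",   # A happened because B happened
    "db_queries_slow": "disk_io_saturated",  # B happened because C happened
}

def find_root_cause(symptom: str) -> list[str]:
    """Follow the causal chain and return every step, root cause last."""
    chain, seen = [symptom], {symptom}
    while chain[-1] in CAUSAL_RULES:
        nxt = CAUSAL_RULES[chain[-1]]
        if nxt in seen:  # guard against accidental cycles in the rules
            break
        chain.append(nxt)
        seen.add(nxt)
    return chain

print(" -> ".join(find_root_cause("app_latency_high")))
# app_latency_high -> db_queries_slow -> disk_io_saturated
```

Because every hop in the chain is an explicit rule, the system can show the user the entire path from symptom to root cause, which is why transparency came almost for free in these systems.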

If the system provided false or inaccurate answers, users could easily recognize and disregard these flawed responses. However, as generative AI (GenAI) models seek to provide a more detailed and intuitive user experience, achieving transparency around decision-making processes becomes complicated.

Building a Black Box

Unlike more traditional AI/ML algorithms, generative AI systems are not inherently transparent. All GenAI systems start as foundational large language models (LLMs) built by training on massive online datasets. Developers train the AI to generate sensible, helpful, and unbiased answers. At SolarWinds, the foundational model is further trained with specific IT management datasets, such as past ticket data and knowledge base articles. The complexity and scale of the data and algorithms impede “explainability.” When deployed, these systems often operate as a “black box”: a system or model whose internal workings are not visible to the user. While users can see the input and output of the system, there is no way to demonstrate how the system processes the input to produce the output. The fundamental nature of LLMs, built on vast, diverse, and often opaque datasets, means that providing full transparency remains a complex, ongoing challenge.
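To make the training step concrete, here is a rough sketch of how past ticket data might be turned into fine-tuning examples. The chat-style JSONL layout and the ticket contents are illustrative assumptions, not a description of SolarWinds’ actual data or pipeline:

```python
# Illustrative sketch only: converting resolved help desk tickets into
# instruction-style fine-tuning examples. The record fields and the
# chat-message JSONL format are assumptions, not SolarWinds' pipeline.
import json

tickets = [  # hypothetical resolved tickets from a service desk archive
    {
        "subject": "VPN drops every 30 minutes",
        "resolution": "Updated the client to v5.2 and disabled split tunneling.",
    },
]

with open("finetune_data.jsonl", "w") as f:
    for t in tickets:
        example = {
            "messages": [
                {"role": "system", "content": "You are an IT service desk assistant."},
                {"role": "user", "content": t["subject"]},
                {"role": "assistant", "content": t["resolution"]},
            ]
        }
        f.write(json.dumps(example) + "\n")
```

What matters here is what the sketch cannot show: once examples like these are absorbed into billions of model weights, nothing in the resulting artifact explains why any particular answer comes back.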

Navigating the Partially Unknown

With comprehensive “explainability” still out of reach when it comes to generative AI, how can we foster trust in our AI-driven solutions? Working to ensure our systems consistently achieve their intended objectives is critical. At SolarWinds, we leverage a well-structured machine learning operations (MLOps) pipeline to refine our AI models over time. An MLOps pipeline provides a structured and documented process for developing, deploying, and maintaining machine learning models. It records data sources and preparation methods, tracks training algorithms and parameters, evaluates model performance with metrics, continuously monitors and updates the AI, and maintains detailed documentation. Minimizing hallucinations to ensure that our AI systems consistently perform their designated functions is central to our refinement process. As our models become ever more consistent, user trust deepens.
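As an illustration of this general practice (not SolarWinds’ actual tooling), the open-source MLflow tracking API shows what that kind of record-keeping can look like; the run name, datasets, and metrics below are hypothetical:

```python
# A minimal sketch of MLOps-style experiment tracking with MLflow.
# Everything logged here is hypothetical and purely illustrative.
import mlflow

with mlflow.start_run(run_name="ticket-summarizer-v3"):
    # Record data sources and preparation methods
    mlflow.log_param("training_data", "ticket_history_2023Q4.parquet")
    mlflow.log_param("preprocessing", "dedupe + PII redaction")

    # Track the training algorithm and its parameters
    mlflow.log_param("base_model", "example-llm-7b")
    mlflow.log_param("learning_rate", 2e-5)
    mlflow.log_param("epochs", 3)

    # Evaluate model performance with metrics
    mlflow.log_metric("answer_accuracy", 0.91)
    mlflow.log_metric("hallucination_rate", 0.03)

    # Tag the run so it can be audited later
    mlflow.set_tag("reviewed_by", "ml-ops-team")
```

Every run recorded this way can be compared against its predecessors, which is what turns refining models over time from an aspiration into an auditable process.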

Even after our AI solutions are deployed, our customer service experts continue to monitor and validate the responses they provide. Our Agent Assist feature with GenAI allows human agents to edit AI-generated responses before they are sent to the requester. This helps ensure that any AI-generated information is accurate and appropriate. Anomalies encountered by our customer service experts are recorded to help refine and improve the systems further. We have built-in feedback and validation mechanisms designed to ensure negative user experiences are properly recorded and addressed. Regular feedback on AI suggestions is collected and reviewed to improve model behavior over time, helping our AI remain relevant and useful.
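A hypothetical sketch of such a human-in-the-loop review gate, in the spirit of an Agent Assist flow (the names and feedback schema below are invented for illustration, not the actual implementation):

```python
# Hypothetical human-in-the-loop review gate: the AI drafts a reply,
# a human agent may edit it before it is sent, and the outcome is
# recorded as feedback. Not the actual SolarWinds implementation.
from dataclasses import dataclass

@dataclass
class ReviewedReply:
    ticket_id: str
    ai_draft: str
    final_text: str

    @property
    def was_edited(self) -> bool:
        # Frequent rewrites of drafts are a signal of model weakness.
        return self.ai_draft != self.final_text

def review(ticket_id: str, ai_draft: str, agent_edit: str | None) -> ReviewedReply:
    """The agent either approves the draft or replaces it with an edit."""
    final = agent_edit if agent_edit is not None else ai_draft
    return ReviewedReply(ticket_id, ai_draft, final)

reply = review("T-1042", "Restart the VPN client.",
               "Update the VPN client to v5.2, then restart it.")
print(reply.was_edited)  # True -> logged for model refinement
```

Recording whether and how each draft was changed gives the feedback loop concrete data to learn from.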

Cultivating Confidence in AI-Driven Tools

Software developers have yet to fully unravel the processes by which AI systems reach their conclusions. But with the right protocols in place, these unknowns needn’t keep us from leveraging the many benefits of this transformative technology. The key is to work with and around them. At SolarWinds, we create safeguards and parameters to carefully monitor our systems, implement feedback to continuously improve performance, and keep a human in the loop to ensure that the responses generated by our AI systems are effective, relevant, and useful. Working in tandem with the other components of AI by Design, our Transparency and Trust principle helps our customers view SolarWinds AI as a reliable aid in navigating the challenges of IT management.

Read the full SolarWinds AI by Design series
