The Second Principle of AI by Design: Accountability and Fairness

Original source post by Krishna Sai

SolarWinds has launched AI by Design, a set of guiding principles for integrating artificial intelligence (AI) into our IT management solutions. In the third article in our series exploring this dynamic framework, we’ll examine how SolarWinds places Accountability and Fairness at the center of our strategy for ethical AI. Let’s get started.

What Does Fairness Look Like in AI?

Algorithmic bias is a serious concern in the field of AI. It occurs when AI systems make unfair decisions due to biased input data or flawed algorithms. In healthcare, AI systems have shown lower accuracy for African American patients than for Caucasian patients because minority groups were underrepresented in the training data. In hiring, Amazon discontinued a recruiting algorithm after it was found to favor terms more common on men’s resumes while undervaluing resumes that included the word “women’s” (for example, “women’s chess club captain”). A Bloomberg study found that generative AI models amplify existing racial and gender biases, portraying high-paying jobs predominantly with lighter-skinned men, while low-paying jobs and criminal activities were associated with individuals with darker skin tones. These are just some of the troubling ways in which AI systems can reinforce biases present in the datasets they are trained on.

The Life of a Large Language Model

To understand why such outcomes occur and how they can be prevented, we need to look at the lifecycle of an AI system. Generative AI systems begin as foundational large language models (LLMs). Foundational models are created by aggregating and analyzing massive amounts of online data into a single, comprehensive dataset. More simply, it’s like putting “the internet in a box.” The data collected is hugely diverse and includes all kinds of content. Some of it is true, some is false, and some contains harmful, biased, or unfair perspectives. While developers train the AI to generate answers that make sense in response to prompts, they also work to identify and reduce any biases that may have been absorbed from the initial data. This helps ensure that the system produces results that are not just logical but also fair and respectful.

The foundational model then moves to the next stage: training. At SolarWinds, we train our IT management AI assistants on past ticket data, knowledge base articles, CMDB details, ITIL processes, user information, system logs, policy compliance data, FAQs, and more. We study the system output and use prompts to put parameters in place to help ensure that responses are clear, helpful, and free of bias. Then, it is ready to be delivered as part of a software solution to the IT professionals we serve.
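To make the idea of prompt-level parameters concrete, here is a minimal sketch of how instruction-style guardrails can be layered around a trained model before it answers a ticket. The prompt text, function name, and structure are illustrative assumptions, not SolarWinds’ actual implementation.

```python
# Hypothetical sketch: wrapping a service-desk query with guardrail
# instructions and retrieved context before it reaches the model.
GUARDRAIL_PROMPT = (
    "You are an IT service desk assistant. "
    "Answer only from the provided ticket history and knowledge base. "
    "Be clear and concise, and avoid assumptions about a user's identity, "
    "role, or background that are not stated in the ticket."
)

def build_prompt(ticket_text: str, context_snippets: list[str]) -> str:
    """Combine guardrail instructions, retrieved context, and the ticket
    into a single prompt string for the model."""
    context = "\n".join(f"- {snippet}" for snippet in context_snippets)
    return f"{GUARDRAIL_PROMPT}\n\nContext:\n{context}\n\nTicket:\n{ticket_text}"
```

In practice the guardrail text would be iterated on as the team reviews system output, which is the feedback loop the paragraph above describes.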

Human in the Loop

SolarWinds believes AI systems should aid, assist, and inform their human counterparts—not replace them entirely. Even after our solutions have been deployed, our customer service experts remain involved in monitoring and validating the responses provided by our AI systems. AI by Design’s Accountability and Fairness principle ensures that there is a human in the loop to review and regulate decisions made by AI. Our Agent Assist feature with Gen-AI allows the human agent to edit AI-generated responses before sending them to the requester. Any anomalies encountered by our customer service experts are recorded to fuel further refinements of the systems. Built-in feedback and validation mechanisms ensure that negative user experiences are properly recorded and addressed, and feedback on AI suggestions is regularly collected and reviewed to improve model behavior over time.
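The workflow above—an AI drafts a response, a human agent edits or approves it, and the outcome is logged for later review—can be sketched as a small review loop. The class and function names here are hypothetical illustrations of the pattern, not SolarWinds’ actual Agent Assist API.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReviewRecord:
    """Audit entry: what the AI drafted, what the human actually sent."""
    ai_draft: str
    final_response: str
    edited: bool

@dataclass
class AgentAssistLoop:
    """Minimal human-in-the-loop sketch: every AI draft passes through
    a human reviewer before it reaches the requester."""
    draft_fn: Callable[[str], str]    # produces the AI suggestion
    review_fn: Callable[[str], str]   # human edits or approves the draft
    audit_log: List[ReviewRecord] = field(default_factory=list)

    def respond(self, ticket: str) -> str:
        draft = self.draft_fn(ticket)
        final = self.review_fn(draft)  # human approval gate
        self.audit_log.append(ReviewRecord(
            ai_draft=draft,
            final_response=final,
            edited=(final != draft),
        ))
        return final
```

The audit log is the piece that enables the continuous-refinement feedback described above: every edit a human makes is a signal about where the model’s suggestions fell short.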

Enhancing Our Virtues

In many ways, artificial intelligence embodies the best human traits—efficiency, precision, problem-solving, and innovation. However, given the vast oceans of data required to develop AI systems, this revolutionary technology is also capable of perpetuating existing imbalances. At SolarWinds, human judgment and moral responsibility are built into every stage of the AI lifecycle to help ensure that all output adheres to our ethical standards. The Accountability and Fairness principle of our AI by Design framework places human oversight and continuous feedback at the core of our strategy to continuously refine our systems in the name of responsible AI. In the process, our AI systems become a key pillar of our mission to continue enriching the lives of IT professionals around the world.

In the coming weeks, Krishna Sai will continue to explore AI by Design principles on Orange Matter.

Read the full SolarWinds AI by Design series

The post The Second Principle of AI by Design: Accountability and Fairness appeared first on Orange Matter.
