
LLM Part 7 | Governance

Introduction

Large Language Models (LLMs) have ushered in a new era of technological advancement, with applications spanning from customer service to healthcare. Their potential is immense, but so too are the associated challenges. Ensuring the responsible and ethical development, deployment, and use of LLMs necessitates a robust governance framework.

Transparency is a cornerstone of LLM governance. To build trust and accountability, it is imperative to understand how these models function and make decisions. However, the complexity of LLMs presents unique challenges in achieving transparency. Designing effective transparency approaches requires a deep understanding of stakeholder needs, the specific applications of LLMs, and the lessons learned from human-computer interaction. While strategies like model reporting, continuous evaluation, and providing explanations are foundational, their optimal application to LLMs remains an open question.

This article explores the critical dimensions of LLM governance, examining the challenges, best practices, and future directions for this rapidly evolving field. By understanding the complexities involved, organizations can develop effective governance strategies to harness the power of LLMs while mitigating risks.

Data Governance

Data and analytics governance has become increasingly complex due to the dispersed nature of data across edge, on-premises, and cloud environments. The growing regulatory landscape further underscores the need for effective data governance strategies. While self-service data access is essential for data democratization, it must be balanced with appropriate governance controls.

The data governance domain is vast, with numerous methodologies and tools claiming to provide comprehensive solutions. However, matching specific organizational requirements to these offerings can be challenging.

Governance frameworks vary significantly depending on the system type. Operational transactional systems, data and analytics environments, and master data management (MDM) applications each demand distinct governance approaches. As data volumes surge and the need for speed and agility intensifies, effective governance becomes paramount to:

  1. Operationalize data pipelines, reporting, and machine learning models faster
  2. Meet data protection compliance requirements
  3. Enrich customer experience and speed up insights
  4. Better manage business processes
  5. Perform impact analysis
  6. Enhance data quality

The main steps are defined as:

  • Prework: Define use cases, roles, organizations, architectural patterns, and data strategy
  • Step 1: Data discovery and policy alignment
  • Step 2: Transform the data from multiple sources and identify data quality issues (see the sketch after this list)
  • Step 3: Curate and remediate enterprise data using a data catalog
  • Step 4: Harmonize data at the business unit (BU) level and perform analytics and ML
  • Step 5: Operationalize data governance

Source: Gartner, "7 Must-Have Foundations for Modern Data and Analytics Governance"
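
As a minimal illustration of the data quality checks in Step 2, the sketch below flags records that violate a simple policy before they move further down the pipeline. The column names, policy, and threshold are hypothetical; in practice they would come from your data catalog and governance policies.

import pandas as pd

# Hypothetical policy: customer records must have a plausible email and a non-negative balance.
REQUIRED_COLUMNS = ["customer_id", "email", "balance"]

def check_quality(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that violate the (illustrative) data quality policy."""
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"Missing required columns: {missing}")

    return df[
        df["email"].isna()
        | ~df["email"].astype(str).str.contains("@")
        | (df["balance"] < 0)
    ]

if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "customer_id": [1, 2, 3],
            "email": ["a@example.com", None, "not-an-email"],
            "balance": [100.0, 50.0, -10.0],
        }
    )
    print(check_quality(sample))  # the last two rows violate the policy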

I have provided a more in-depth architectural discussion of customer data strategy here.

AI Governance - The Three Lines Model

In June 2020, the IIA updated its guidance by releasing its position paper, The IIA's Three Lines Model. The document describes six key principles of governance, the key roles in the Three Lines Model, the relationships between those roles, and how to apply the model. It clearly articulates the responsibilities of management, the internal audit function, and the governing body (see figure below).

Three Lines Model

In addition to these roles within the organization, companies will also interact with external auditors, certifiers, or other assurance providers, as well as regulators. The diagram below outlines the different roles and the key responsibilities of each.

Three Lines Model for AI Governance (Source: PwC Analysis)

Emerging Architectures for LLM Applications

There are multiple avenues for building with LLMs, including training models from scratch, adapting pre-trained open-source models, or utilizing hosted API services. The architecture illustrated below leverages the in-context learning paradigm.

The core principle of in-context learning involves harnessing pre-trained LLMs without fine-tuning, controlling their output through carefully crafted prompts and incorporating relevant contextual data.
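
A rough sketch of this paradigm, using a naive keyword-based retriever and hypothetical documents rather than any particular framework, looks like this: the application retrieves relevant context and injects it into the prompt instead of fine-tuning the model.

from typing import List

def retrieve_context(question: str, documents: List[str], top_k: int = 2) -> List[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings and a vector store."""
    scored = sorted(
        documents,
        key=lambda doc: len(set(question.lower().split()) & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: List[str]) -> str:
    """Assemble an in-context-learning prompt: instructions + retrieved context + the user question."""
    context_block = "\n".join(f"- {c}" for c in context)
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context_block}\n"
        f"Question: {question}\nAnswer:"
    )

docs = ["Premium accounts require an AUM above $30,000.", "Support hours are 9am-5pm."]
question = "Which accounts are premium?"
print(build_prompt(question, retrieve_context(question, docs)))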


As the architecture above shows, one of the key components of emerging LLM application architectures is "Validation". Validation is the key step between the application code (user queries) and the LLM (the response returned to the user).
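
One way to picture this validation layer is as a thin wrapper around the LLM call that screens the user query on the way in and the model's response on the way out. The blocked-topic list, the redaction rule, and the call_llm placeholder below are illustrative stand-ins for whatever policy engine and model client you actually use.

import re

BLOCKED_TOPICS = {"politics", "medical advice"}  # illustrative policy, not an exhaustive list

def call_llm(prompt: str) -> str:
    """Placeholder for a real model client (hosted API or self-hosted model)."""
    return f"(model response to: {prompt})"

def validate_input(user_query: str) -> None:
    """Reject queries that touch a blocked topic before they reach the model."""
    if any(topic in user_query.lower() for topic in BLOCKED_TOPICS):
        raise ValueError("Query touches a blocked topic.")

def validate_output(response: str) -> str:
    """Redact anything that looks like an email address before returning the response."""
    return re.sub(r"\S+@\S+", "[REDACTED EMAIL]", response)

def answer(user_query: str) -> str:
    validate_input(user_query)
    return validate_output(call_llm(user_query))

print(answer("What is our refund policy?"))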

Compliance/Governance as Code for LLMs

The concept of Compliance as Code emerged during the early days of cloud adoption as a means to streamline software development. By transforming compliance policies into testable code, organizations can automate the verification of new code against regulatory and organizational standards. This approach not only captures policies as software and data but also ensures consistent compliance enforcement across the enterprise. Compliance as Code is a continuous process involving automated implementation, verification, remediation, monitoring, and reporting.
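
In practice, Compliance as Code often looks like ordinary automated tests run in CI: each policy becomes an assertion that passes or fails on every change. The bucket policy and resource records below are invented purely for illustration; run checks like these with a test runner such as pytest.

# Made-up policy: all storage buckets must be encrypted and must not be public.
resources = [
    {"name": "customer-data", "encrypted": True, "public": False},
    {"name": "marketing-assets", "encrypted": False, "public": True},  # violates both rules
]

def test_buckets_are_encrypted():
    assert all(r["encrypted"] for r in resources), "Unencrypted bucket found"

def test_buckets_are_private():
    assert not any(r["public"] for r in resources), "Public bucket found"

# In CI, the non-compliant "marketing-assets" bucket would fail these checks and block the change.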

Several open-source tools can fill the "Validation" role and serve as compliance/governance as code for LLMs; one of them is NeMo Guardrails.

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational applications. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more.

The NeMo Guardrails paper provides a technical overview of the system and its current evaluation. NeMo Guardrails enables developers building LLM-based applications to easily add programmable guardrails between the application code and the LLM.

Key benefits of adding programmable guardrails include:

  • Building Trustworthy, Safe, and Secure LLM-based Applications: you can define rails to guide and safeguard conversations; you can choose to define the behavior of your LLM-based application on specific topics and prevent it from engaging in discussions on unwanted topics.
  • Connecting models, chains, and other services securely: you can connect an LLM to other services (a.k.a. tools) seamlessly and securely.
  • Controllable dialog: you can steer the LLM to follow pre-defined conversational paths, allowing you to design the interaction following conversation design best practices and enforce standard operating procedures (e.g., authentication, support).

You can use programmable guardrails in different types of use cases:

  • Question Answering over a set of documents (a.k.a. Retrieval Augmented Generation): Enforce fact-checking and output moderation.
  • Domain-specific Assistants (a.k.a. chatbots): Ensure the assistant stays on topic and follows the designed conversational flows.
  • LLM Endpoints: Add guardrails to your custom LLM for safer customer interaction.
  • LangChain Chains: If you use LangChain for any use case, you can add a guardrails layer around your chains.

NeMo Guardrails supports five main types of guardrails:

  1. Input rails: applied to the input from the user; an input rail can reject the input, stopping any additional processing, or alter the input (e.g., to mask potentially sensitive data such as PII, or to rephrase it).
  2. Dialog rails: influence how the LLM is prompted; dialog rails operate on canonical form messages and determine if an action should be executed, if the LLM should be invoked to generate the next step or a response, if a predefined response should be used instead, etc.
  3. Retrieval rails: applied to the retrieved chunks in the case of a RAG (Retrieval Augmented Generation) scenario; a retrieval rail can reject a chunk, preventing it from being used to prompt the LLM, or alter the relevant chunks (e.g., to mask potentially sensitive data).
  4. Execution rails: applied to input/output of the custom actions (a.k.a. tools), that need to be called by the LLM.
  5. Output rails: applied to the output generated by the LLM; an output rail can reject the output, preventing it from being returned to the user, or alter it (e.g., removing sensitive data).

A guardrails configuration defines the LLM(s) to be used and one or more guardrails. A guardrails configuration can include any number of input/dialog/output/retrieval/execution rails. A configuration without any configured rails will essentially forward the requests to the LLM. Below is an example config.yml:

# config.yml
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct

rails:
  # Input rails are invoked when new input from the user is received.
  input:
    flows:
      - check jailbreak
      - mask sensitive data on input

  # Output rails are triggered after a bot message has been generated.
  output:
    flows:
      - self check facts
      - self check hallucination
      - activefence moderation
      - gotitai rag truthcheck

  config:
    # Configure the types of entities that should be masked on user input.
    sensitive_data_detection:
      input:
        entities:
          - PERSON
          - EMAIL_ADDRESS
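
With a configuration like this saved in a local config directory, the toolkit's Python API can load it and apply the rails around each generation call. The sketch below assumes a ./config folder containing the config.yml above and a valid OpenAI API key in the environment.

from nemoguardrails import LLMRails, RailsConfig

# Load the guardrails configuration (config.yml plus any Colang files) from disk.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# All input/dialog/output/retrieval/execution rails defined in the config
# are applied around this generation call.
response = rails.generate(messages=[{"role": "user", "content": "What can you help me with?"}])
print(response["content"])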

Currently, the guardrails library includes built-in rails for, among other things, jailbreak detection, self-checking of facts and hallucinations, sensitive data detection, and integrations with third-party services such as ActiveFence moderation and Got It AI's RAG TruthCheck, several of which appear in the example configuration above.

LLM Reliability

The adage "trust, but verify" has taken on new significance in the age of Large Language Models (LLMs). While these models offer unprecedented capabilities, their outputs cannot be taken at face value. However, this doesn't negate their immense potential.

LLMs are not infallible oracles, nor are they inherently destructive. With careful management and robust governance, they can be valuable assets. By transforming natural language queries into precise, structured formats, we can enhance accuracy, transparency, and control in AI-driven enterprise applications. Rather than rejecting LLMs outright, we should focus on understanding and optimizing their capabilities.

A Real-Life Use Case: Autonomous Agents for Customer Data

Imagine you are a Head of Product at a financial institution: querying a general-purpose LLM like ChatGPT with a question such as "Which customers are subject to our product policy?" is unproductive. ChatGPT lacks specific knowledge of our customer base, product policies, and underlying data structures. While training LLMs on proprietary data is possible, accuracy and reliability remain significant challenges.

A more effective approach involves an autonomous agent architecture. By structuring queries and connecting them to specific data sources within our organization, we can ensure accurate and actionable responses. This approach empowers us to harness the potential of AI while maintaining data privacy and control.

By querying an autonomous agent, you can ask a question like "Which customers qualify for the extra 0.2% interest rate?" The agent can access the knowledge graph to determine that this rate applies to customers with an AUM exceeding $30,000. Armed with this information, the agent can then translate the original question into a precise, structured query. The resulting data-driven response is accompanied by a clear audit trail, allowing users to verify the policy interpretation, query execution, and final output.
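
A stripped-down sketch of that flow might look like the following; the policy knowledge graph, customer records, and AUM threshold are invented for illustration, and a production agent would query a real graph store and database rather than in-memory structures.

# Invented policy knowledge: the extra 0.2% rate applies above a $30,000 AUM threshold.
policy_graph = {"extra 0.2% interest rate": {"applies_if": "aum_usd > 30000"}}

customers = [
    {"name": "Alice", "aum_usd": 45000},
    {"name": "Bob", "aum_usd": 12000},
]

def answer_policy_question(question: str) -> dict:
    """Translate a policy question into a structured query and return the result with an audit trail."""
    policy = "extra 0.2% interest rate"
    condition = policy_graph[policy]["applies_if"]                       # policy interpretation
    structured_query = f"SELECT name FROM customers WHERE {condition}"   # what would run against a real DB
    result = [c["name"] for c in customers if c["aum_usd"] > 30000]      # query execution (mirrors the condition)
    return {
        "question": question,
        "policy_interpretation": condition,
        "structured_query": structured_query,
        "result": result,
    }

print(answer_policy_question("Which customers qualify for the extra 0.2% interest rate?"))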

The autonomous agent approach combines the strengths of AI-driven question answering with the rigor of explainability and auditability. This empowers organizations to rely on these intelligent systems for decision-making, confident in the accuracy and trustworthiness of their AI-powered insights.

While LLMs excel at processing vast amounts of information and generating complex responses, their use must be carefully managed to maintain transparency and accountability. Ultimately, a truly reliable enterprise AI solution requires a synergistic combination of an autonomous agent, a comprehensive knowledge graph, and a robust governance framework.
