Implementing AI Safety Guardrails in a Corporate Environment
Generative AI
March 23, 2025
EXTENDED ARTICLE: A comprehensive guide for corporate C-suite executives considering integrating AI into their data infrastructure. Contact eamonn@aqintelligence.net

Disclaimer:  The views and opinions expressed in these articles are those of the author and do not necessarily reflect the official policy or position of AQ Intelligence. Content is provided for informational purposes only and does not constitute legal, financial, or professional advice.

Introduction

Artificial intelligence (AI) is transforming business operations, but it also introduces new risks. AI safety guardrails are mechanisms and policies that ensure AI systems operate within safe, ethical, and legal boundaries (savvy.security).

This report provides a comprehensive guide for business and tech leaders on implementing AI guardrail solutions in an enterprise setting. It covers general-purpose guardrails for AI (like those for large language models or coding assistants) and industry-specific practices in sectors such as finance and healthcare. It also addresses how to prepare enterprise data for integration with AI guardrails, how to integrate these solutions with cloud infrastructure (AWS and multi-cloud environments), and highlights common pitfalls to avoid. Throughout, insights from leading advisory firms (Gartner, McKinsey, BCG) are included, along with real-world examples and case studies.

1. Understanding AI Safety Guardrails and Their Purpose

AI safety guardrails refer to the policies, rules, and technical mechanisms that keep AI systems’ behavior aligned with ethical standards, legal requirements, and business objectives (savvy.security).

In practice, guardrails serve as boundaries that prevent AI from causing harm, making unauthorized decisions, or producing inappropriate outputs. For example, a guardrail might block a chatbot from revealing confidential data or generating toxic content. According to industry experts, AI guardrails are essentially the “safety net” for AI deployments – they ensure AI tools follow company policies and responsible practices (cio.com).

The purpose of these guardrails is multi-faceted:

  • Prevent Harmful Outputs: Guardrails filter out or modify AI outputs that contain disallowed content (e.g. hate speech, privacy violations, biased or unsafe recommendations). Major AI platforms now include basic guardrails to prevent gross misuse. For instance, Google’s Gemini and Microsoft’s Copilot come with built-in content moderation to avoid obviously harmful answers (gartner.com). This is crucial, as incidents have shown AI systems can otherwise generate wrong or damaging responses. (A well-known example: an AI once suggested using glue to fix pizza, highlighting the need to curb such absurd or unsafe suggestions (cio.com).)

  • Ensure Compliance and Ethics: Guardrails help enforce compliance with laws and ethical norms. They can be programmed to respect privacy, avoid discrimination, and adhere to industry regulations. For example, an AI model might be restricted from using protected attributes like race or gender in its decisions (savvy.security), to prevent bias. Responsible AI frameworks often mandate such guardrails so that AI does not inadvertently violate regulations (e.g. GDPR for data privacy, or FDA rules in healthcare). As Boston Consulting Group (BCG) notes, aligning AI with an organization’s core purpose and principles serves as an ethical guardrail that ensures the AI reflects the company’s values (medium.com).

  • Maintain Trust and Safety: By keeping AI behavior predictable and within approved bounds, guardrails build trust among users and stakeholders. They reduce the likelihood of AI “hallucinations” (incorrect statements stated as fact) or reckless actions. Gartner has identified generative AI as a top emerging risk for organizations, underscoring that without proper guardrails, AI’s rapid adoption can lead to issues like intellectual property leakage or privacy breaches (boschaishield.com). In highly regulated settings, guardrails are essential for safety – for instance, ensuring a medical AI doesn’t give advice outside its training or a financial AI doesn’t violate compliance rules.

In summary, AI guardrails function as critical safety measures. Just as physical guardrails on a road prevent vehicles from veering off course, AI guardrails keep algorithms and AI applications on the “right path” (savvy.security).

They are foundational to responsible AI use, minimizing risks while allowing businesses to innovate with confidence. Major research firms emphasize that implementing strong AI governance and guardrails early on is key to balancing AI’s opportunities with its risks (appsoc.com; mckinsey.com).

2. Approaches to Implementing AI Guardrails

2.1 General-Purpose Guardrails for AI (LLMs and Copilots)

General-purpose AI guardrails are those applicable to broad AI systems like large language models (LLMs), chatbots, or coding assistants. These guardrails often focus on content filtering, usage monitoring, and result validation across any domain. Key implementation approaches include:

  • Content Filtering and Moderation: Most LLM-based services incorporate filters to detect and block inappropriate content. For example, the Azure OpenAI service uses an AI Content Safety system that scans prompts and outputs for harmful content (hate speech, violence, sexual content, etc.) and either refuses or edits responses that violate set thresholds (mindgard.ai). Similarly, Amazon’s Bedrock service provides configurable content filters and PII detectors as guardrails – it can automatically mask or block sensitive personal data if an AI model tries to output it (docs.aws.amazon.com). These filters are general safeguards to ensure any generative AI usage remains within accepted norms and legal requirements. Organizations implementing AI should leverage such built-in guardrails or integrate third-party content moderation APIs to cover the basics of toxicity, privacy, and safety.
  • “Rule-Based” Guardrails and Policies: Beyond generic filters, companies are setting custom rules for AI behavior. A flexible approach is to use middleware or libraries that act as a gatekeeper between user input, the AI model, and the output. For instance, NVIDIA’s NeMo Guardrails toolkit allows developers to define explicit rules (rails) for dialogues – e.g. banning certain topics, enforcing citation formats, or preventing code execution (linkedin.com). Another example is the open-source Guardrails AI library, which enables developers to validate LLM outputs against schemas or criteria (for example, ensuring an answer stays within a word limit or follows certain formatting). These rule-based guardrails can be tailored to an organization’s needs: for a coding assistant like GitHub Copilot, a guardrail rule might prohibit suggesting known vulnerable code patterns or code that looks like it contains API keys (see the sketch after this list). In fact, alternative coding AIs such as AWS’s CodeWhisperer have advertised built-in guardrails to avoid leaking secrets and to flag insecure code suggestions – features aimed at addressing security concerns in code generation that early Copilot versions exposed (x-rator.com).
  • Human Feedback and Oversight Loops: General AI guardrails are not only automated; they also encompass processes for human oversight. Many organizations implement a “human in the loop” as a guardrail for critical AI functions (cio.com). For example, if an AI writes a draft email or generates an analytical report, a guardrail policy might require human review before it’s sent out. McKinsey recommends establishing clear principles on which AI use cases require human validation – e.g. a rule that “no generative AI output goes directly to customers without review”, which serves as a high-level guardrail on deployment (mckinsey.com). Additionally, companies are training employees on how to use AI safely – essentially turning educated users into a form of guardrail. Users are taught to double-check AI outputs and report anomalies rather than blindly trust the AI (mckinsey.com). This cultural approach ensures that guardrails are not just technical filters but also part of the organization’s operating procedures.
  • Third-Party Guardrail Solutions: A growing market of AI safety tools can be integrated as general-purpose guardrails. For example, Arthur AI’s “Shield” and CalypsoAI’s “Moderator” are tools that enterprise users can put in front of public AI services. Arthur’s Shield acts as a firewall that checks prompts and responses for things like hallucinations, offensive language, or sensitive data (linkedin.com). Calypso’s tool intercepts employee queries to ChatGPT via a company portal where all inputs/outputs can be audited (linkedin.com). These solutions give organizations an extra layer of control: they can catch policy violations or data leaks before the prompt reaches the AI model or before the answer reaches the user. Such tools are particularly useful when using third-party AI (e.g. ChatGPT) where the internal team wants to enforce corporate policies on an external system. They can log all interactions for compliance, ensure no confidential info is being submitted, and sanitize outputs as needed.
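
To make the rule-based idea concrete, below is a minimal, hypothetical sketch of the kind of output validator a team might run between a coding assistant and its users. The regex patterns, banned-topic list, and function names are illustrative assumptions rather than any vendor’s API; a production deployment would pair such rules with dedicated secret scanners or moderation models.

```python
import re

# Illustrative patterns only; real secret detection should use a dedicated scanner.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # embedded private keys
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
]
BANNED_TOPICS = {"competitor pricing", "medical diagnosis"}   # example policy topics


def check_output(text: str) -> list[str]:
    """Return a list of guardrail violations found in a model's output."""
    violations = []
    for pattern in SECRET_PATTERNS:
        if pattern.search(text):
            violations.append(f"possible secret matching {pattern.pattern!r}")
    lowered = text.lower()
    for topic in BANNED_TOPICS:
        if topic in lowered:
            violations.append(f"banned topic: {topic}")
    return violations


if __name__ == "__main__":
    draft = 'client = Client(api_key="sk_live_ABCDEF1234567890XYZ")'
    issues = check_output(draft)
    if issues:
        print("Blocked by guardrail:", issues)  # block, mask, or route for human review
```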

In practice, deploying general-purpose guardrails often means combining multiple layers of defense. For example, a company building an AI assistant might use the cloud provider’s built-in content filter, plus add custom prompt instructions as guardrails, and then have a monitoring script or service evaluating outputs in real time. A case in point is Relex, a supply chain software firm that built an internal chatbot on Azure OpenAI’s GPT-4. They relied on Microsoft/OpenAI’s basic guardrails for blocking hate, self-harm, violence, etc., and then added their own instruction guardrails telling the bot to refuse answering questions outside its knowledge domain (cio.com). They also conducted extensive red-team tests (both internal engineers and even family members trying to trick the bot) to ensure the guardrails held up (cio.com). The result was that the bot could not be provoked into harmful outputs, aside from some benign off-policy answers like giving a cooking recipe when pressured, which they deemed an acceptable risk (cio.com).

This example illustrates a prudent approach: use platform guardrails + custom guardrails + adversarial testing to achieve a robust safety net.
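
As a small illustration of the instruction-layer guardrail described above, the sketch below assembles a system prompt that confines an assistant to an approved domain and tells it how to refuse everything else. The domain description, refusal wording, and the call_model placeholder are assumptions standing in for whatever chat-completion API and policy text an organization actually uses.

```python
APPROVED_DOMAIN = "internal HR policies and product documentation"  # illustrative scope

GUARDRAIL_INSTRUCTIONS = (
    f"You are an internal assistant. Only answer questions about {APPROVED_DOMAIN}. "
    "If a question falls outside that scope, or you are unsure, reply exactly with: "
    "'I can't help with that. Please contact the relevant team.' "
    "Never reveal confidential data, credentials, or personal information."
)


def build_messages(user_question: str) -> list[dict]:
    """Prepend the guardrail system prompt to every request."""
    return [
        {"role": "system", "content": GUARDRAIL_INSTRUCTIONS},
        {"role": "user", "content": user_question},
    ]


def answer(user_question: str, call_model) -> str:
    """call_model is a placeholder for the chat-completion client actually in use."""
    reply = call_model(build_messages(user_question))
    # A post-check (content filter, rule validator, audit logging) would run here
    # before the reply is returned to the user.
    return reply
```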

2.2 Industry-Specific Guardrails and Use Cases

While general guardrails apply to all AI systems, many industries require domain-specific guardrails due to specialized regulations and risk scenarios. Implementing AI in finance or healthcare, for instance, demands guardrails tailored to those fields. Below we explore a few industry contexts and how guardrails can be adapted:

  • Finance (Banking & Wealth Management): The finance industry is heavily regulated (e.g. by SEC, FINRA, GDPR, etc.) and errors can have serious legal or monetary consequences. Thus, AI guardrails in finance focus on compliance, accuracy, and data confidentiality. A prime example is Morgan Stanley’s deployment of GPT-4 for its financial advisors. Rather than using the public ChatGPT, Morgan Stanley worked with OpenAI to use a private instance of GPT-4, effectively creating a walled garden where the AI does not learn from or leak any of the bank’s data (linkedin.com). This setup itself is a guardrail: it ensures client data and proprietary research stay internal. On top of that, Morgan Stanley has hundreds of advisors and domain experts continually providing feedback to fine-tune the AI’s answers (linkedin.com). This feedback loop acts as a guardrail to improve precision and prevent the AI from giving unsound financial advice. Another guardrail in finance is compliance filtering – for instance, an AI that drafts investment recommendations might be programmed to flag or avoid any statements that could be seen as guaranteeing returns or that mention unauthorized investment products. Some banks also enforce audit trails as guardrails: every AI-generated communication to a client is logged and reviewable by compliance officers before release. According to the Wall Street Journal, many firms pilot generative AI in finance with heavy guardrails and oversight, precisely because they must uphold fiduciary duties and protect sensitive information (linkedin.com). In practice, finance AI guardrails often include limiting the AI’s knowledge scope to vetted, up-to-date financial data, disallowing any personal data generation (to comply with privacy laws), and integrating checks so that any recommendation or numerical output is cross-verified against source data.
  • Healthcare: In healthcare, the stakes are literally life and death, so guardrails center on patient safety, data privacy (HIPAA compliance), and ethical care standards. One approach discussed by healthcare AI leaders is to start with low-risk use cases and progressively add guardrails for higher-risk tasks (healthtechmagazine.net). For example, an AI might begin by handling administrative queries (appointment scheduling, insurance form assistance) where mistakes are minor, and as trust builds, move into clinical decision support with strict oversight. Key guardrails in healthcare include PHI scrubbing – AI systems must not expose Protected Health Information in outputs to unauthorized parties. If a generative AI summarizes a set of medical records, a guardrail might automatically remove patient names or identifiers (a capability cloud providers like AWS Bedrock offer with PII filters) (docs.aws.amazon.com). Clinical accuracy verification is another: any diagnostic or treatment suggestion from AI should be flagged for human doctor review. Some hospitals employ an internal review board for AI that defines what an AI can or cannot do (a governance guardrail). A recent AWS healthcare panel stressed having all stakeholders at the table to set guardrails so that each generative AI use case has an appropriate “minimum safety bar” defined (healthtechmagazine.net). For instance, if using an AI chatbot for patient triage, a guardrail might be: “If the bot’s confidence is below X% or if it mentions certain symptoms, it must escalate to a human clinician immediately” (see the sketch after this list). Moreover, healthcare experts recommend building domain-specific knowledge bases for the AI and using those for grounded answers (healthtechmagazine.net). By giving the AI only vetted medical content to draw from (and nothing from the open internet), the organization places a guardrail on the AI’s knowledge – it can’t wander outside approved information. Finally, ethical guardrails are crucial: for example, programming the AI to never provide treatment advice directly to a patient without a doctor’s sign-off, or to refuse certain types of medical questions altogether if outside its scope. This ensures the AI does not act as a doctor, but rather as an assistant with clearly bounded authority.
  • Other Industries (Manufacturing, Retail, etc.): Similar principles apply, with tweaks per industry. In manufacturing or critical infrastructure, safety guardrails might ensure an AI system controlling machinery cannot operate outside defined safe parameters (e.g. an AI that optimizes factory settings is constrained not to exceed temperature or speed limits). In customer service and retail, companies use guardrails to protect brand reputation – for example, a retailer’s AI chatbot might have a brand tone guardrail (avoiding profanity, slang, or off-brand humor) and a legal liability guardrail (not making claims about products that aren’t approved by legal). One case that highlights the need for such guardrails is when an airline’s chatbot gave an incorrect discount and a court held the airline responsible for that promise (cio.com). The lesson is that AI-driven customer interactions must have guardrails to prevent unauthorized commitments or misinformation. In highly regulated fields like law or insurance, it is common to see guardrails such as limiting advice to informational purposes and not actual legal opinion, along with disclaimers in AI outputs as a form of “soft” guardrail (managing user expectations).
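
The triage escalation rule quoted in the healthcare bullet above can be expressed as a deterministic check wrapped around the model. The sketch below is purely illustrative: the confidence threshold, symptom list, and data shapes are assumptions, and any real clinical deployment would need validated criteria and clinical governance behind them.

```python
from dataclasses import dataclass

# Illustrative values only; real thresholds and symptom lists require clinical sign-off.
CONFIDENCE_THRESHOLD = 0.80
ESCALATION_SYMPTOMS = {"chest pain", "shortness of breath", "stroke", "suicidal"}


@dataclass
class TriageResult:
    reply: str
    confidence: float


def apply_triage_guardrail(user_message: str, result: TriageResult) -> str:
    """Escalate to a human clinician when confidence is low or red-flag symptoms appear."""
    text = (user_message + " " + result.reply).lower()
    red_flag = any(symptom in text for symptom in ESCALATION_SYMPTOMS)
    if red_flag or result.confidence < CONFIDENCE_THRESHOLD:
        # Route to a human; never let the bot improvise in these cases.
        return "I'm connecting you with a clinician now. Please stay on the line."
    return result.reply
```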

No matter the industry, the approach to guardrails is to integrate domain expertise and policy into the AI’s operation. Gartner’s guidance for generative AI risk management suggests doing a risk analysis for each use case and then designing technical, process, or policy guardrails to mitigate those specific risks (bcg.com). For example, if a company decides a sales chatbot should not discuss competitors (to avoid legal issues or misinformation), a policy guardrail could be put in the system prompt explicitly forbidding that comparison (bcg.com). Indeed, BCG reports that organizations are embedding such guardrails via system instructions and prompt engineering as part of responsible AI deployment, essentially hardcoding business rules into the AI’s behavior (bcg.com). The key is that general guardrails (like content filters) are supplemented with industry-specific rules and validation steps to address the unique risks of each field.

3. Preparing Enterprise Data for AI Guardrails Integration

Implementing AI guardrails effectively is not just about controlling the AI – it also requires preparing your enterprise data in a way that supports these guardrails. Since guardrails often must detect sensitive or policy-relevant information, the underlying data and data management processes need to be in order. Here are crucial considerations for readying enterprise data:

  • Data Classification and Labeling: A prerequisite for many guardrails is knowing which data is sensitive, confidential, or regulated. Enterprises should categorize data (customer PII, financial records, trade secrets, public info, etc.) and label it accordingly. By applying sensitivity labels to documents and database fields, you enable AI guardrails to make context-aware decisions – for example, a guardrail could be set to “never include content labeled ‘Highly Confidential’ in AI outputs.” Microsoft’s internal approach is instructive here: they use a tiered labeling system (“Highly Confidential”, “Confidential”, “General”, “Public”) and tie automated controls to those labels (microsoft.com). In practice, this means if a user tries to feed a document labeled “Highly Confidential” into an AI service, a guardrail might block that action or require special approval (see the first sketch after this list). Similarly, labeling data that should not be used for AI training (such as customer personal data) helps ensure compliance when building or fine-tuning models. Essentially, robust data labeling dovetails with guardrails: guardrails can only enforce what they can recognize, and labels help them recognize what’s sensitive. Tools like AWS Macie or Azure Purview can automate the detection and labeling of sensitive data across an enterprise.
  • Data Governance Policies: Integrate AI guardrails into your broader data governance framework. This means updating data use policies to account for AI – for instance, define which data sources an AI assistant is allowed to access. Organizations should maintain an “AI Asset Inventory” as part of governance (gartner.com). Governance includes not just the AI models in use but the data they can reach. A governance policy might state that customer financial records may only be accessed by AI models that have been certified for compliance. Governance also involves setting up review processes: before a new dataset is fed to an AI, there might be a governance checklist (checking for bias, privacy, and accuracy of that data). Gartner’s AI Trust, Risk & Security Management (AI TRiSM) framework highlights information governance (data protection, classification, and access management) as one layer of AI security (appsoc.com; healthtechmagazine.net). Governing which data the AI can draw on – and keeping that data vetted and current – also reduces the chances of hallucination or outdated info in outputs.
  • Access Controls and Data Security: Guardrails must be supported by proper access control to data. Not every user or AI should access all data. Use role-based access controls (RBAC) integrated with AI services: for example, ensure that if a junior employee’s chatbot interface queries financial data, the query is rejected unless that employee has permission to view that data normally. On cloud platforms, this could mean setting up separate credentials or API endpoints for AI that only have read-access to certain databases or that enforce query quotas. Also, consider network-level guardrails: some firms route all AI traffic through secure gateways or VPNs to monitor data exfiltration attempts. Isolation of environments is another technique – e.g., have a segregated sandbox for AI experimentation that uses dummy data, to guard against accidental exposure of real data. When preparing data, encryption and masking should be applied where possible: an AI used for pattern recognition might only need tokenized data (IDs instead of actual names). In short, align your data access governance with AI usage patterns – if your data is locked down and monitored, any AI consuming that data will by extension be operating with those constraints, which acts as a guardrail against data leaks.
  • Data Labeling for Supervised Guardrails: In some advanced scenarios, enterprises label data to train the guardrails themselves. For instance, you might compile a list of “red flag” outputs (e.g. a set of example toxic sentences or incorrect answers) and use that to train a smaller model or rule-set that evaluates the primary AI’s output. This is analogous to how email spam filters are trained on labeled spam examples. If deploying such systems, ensure you collect relevant examples from your domain (like regulatory sensitive phrases in finance, or clinical misinformation examples in healthcare) and label them. This preparation will help any ML-based guardrail (such as an auxiliary model that checks outputs) to be accurate and reduce false positives/negatives.
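
To illustrate label-aware and role-aware gating (the first point in the list above), here is a minimal sketch of a pre-flight check a prompt-handling service might run before any document reaches an AI model. The label tiers mirror the scheme described above; the role-to-clearance mapping and function names are hypothetical.

```python
# Sensitivity tiers, lowest first; mirrors the tiered labeling scheme described above.
SENSITIVITY_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]

# Hypothetical mapping of user roles to the highest label they may expose to an AI service.
ROLE_CLEARANCE = {
    "analyst": "General",
    "compliance_officer": "Confidential",
}


def may_send_to_ai(doc_label: str, user_role: str) -> bool:
    """Allow a document into an AI prompt only if the user's clearance covers its label."""
    clearance = ROLE_CLEARANCE.get(user_role, "Public")
    return SENSITIVITY_ORDER.index(doc_label) <= SENSITIVITY_ORDER.index(clearance)


if __name__ == "__main__":
    print(may_send_to_ai("Highly Confidential", "analyst"))  # False: blocked by guardrail
    print(may_send_to_ai("General", "analyst"))              # True: allowed
```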
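
And for the supervised-guardrail idea in the last bullet, the sketch below trains a small auxiliary classifier on labeled “red flag” examples and uses it to score candidate outputs. It assumes scikit-learn is available; the toy training set and threshold are placeholders that a real team would replace with a curated, domain-specific dataset and a validated cut-off.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples; a real guardrail needs a much larger, domain-curated set.
texts = [
    "Guaranteed 20% returns with zero risk",        # red flag (finance compliance)
    "This supplement cures cancer in two weeks",    # red flag (clinical misinformation)
    "Our quarterly report is attached for review",  # acceptable
    "Please schedule a follow-up appointment",      # acceptable
]
labels = [1, 1, 0, 0]  # 1 = red flag, 0 = acceptable

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)


def flag_output(candidate: str, threshold: float = 0.5) -> bool:
    """Return True if the auxiliary model thinks the output should be blocked or reviewed."""
    return classifier.predict_proba([candidate])[0][1] >= threshold
```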

Enterprises often find that in going through the exercise of data preparation for AI, they uncover hidden data issues or governance gaps that need fixing. This is beneficial beyond AI as well – as Credo AI notes, “data governance is AI governance”, meaning strong control and understanding of your data is foundational for any responsible AI initiative (credo.ai). By cleaning, classifying, and securing your data now, you set the stage so that AI guardrail solutions can be effective when layered on top.

4. Integration with Existing Cloud Infrastructure

Most enterprises run on cloud infrastructure, so AI guardrail solutions should integrate seamlessly with these environments. Whether your company relies on AWS, Azure, Google Cloud (or all of them), leveraging cloud-native features can accelerate guardrail implementation. At the same time, a multi-cloud strategy requires guardrails that are interoperable across platforms. Below we outline integration considerations:

4.1 Seamless Integration with AWS Services

Amazon Web Services offers a rich ecosystem for AI and provides native guardrail capabilities that enterprises can use out-of-the-box. Key integration points include:

  • Amazon Bedrock Guardrails: AWS’s Bedrock is a managed service for deploying foundation models. It comes with built-in guardrails for content filtering and sensitive data handling. For example, Bedrock’s guardrails can automatically detect PII (names, emails, phone numbers, etc.) in prompts or model outputs and either block the response or mask the sensitive parts (docs.aws.amazon.com). This means if you integrate Bedrock into your app, you can enable these features with configuration rather than building your own PII detector. Bedrock also supports toxic content filters with adjustable thresholds (for categories like hate, self-harm, sexual content) (aws.amazon.com). Integrating Bedrock models into your workflows allows you to piggyback on these AWS guardrails, which are maintained and updated by AWS (for example, as new slang or patterns of harmful content emerge, AWS updates its filters). To use them, security teams should work with developers to set the desired strictness levels in the Bedrock API calls or console.
  • AWS AI Services with Built-in Guardrails: Many higher-level AWS AI services have safety features. Amazon CodeWhisperer (AI code generator) has filtering to avoid certain open-source licensed code and a built-in scanner that highlights insecure code suggestions. Amazon Transcribe and Comprehend can identify PII and redact it from transcripts – which can serve as guardrails in voice or text processing pipelines. If deploying chatbots on AWS, Amazon Connect’s Wisdom and Lex services can be configured to restrict utterances or mask data. The key is to review the AWS service documentation for “responsible AI” or “safety” features and turn those on. AWS also publishes solutions and blog posts (e.g. by AWS partners) on how to build guardrails using its services. One AWS partner, ThinkCol, described a reference architecture on AWS where they put pre-check and post-check Lambda functions around a generative model, using Amazon API Gateway, Lambda, and DynamoDB to log and filter content before the AI responds (aws.amazon.com). This kind of architecture can be adopted generally: you can run your own guardrail logic in a Lambda function (perhaps using a lightweight model or rules) that intercepts requests and responses to any AWS-hosted model (a minimal sketch follows this list). Thanks to cloud scalability, these guardrail Lambdas can scale with traffic, and services like AWS Step Functions can orchestrate multi-step guardrail flows (e.g. first call a profanity filter service, then the main model, then a fact-checking service, in sequence).
  • Identity and Access Management (IAM): Integration with AWS IAM is crucial for controlling which users or applications can invoke AI services and under what conditions. By using IAM roles and policies, you can ensure that, for example, only your approved backend service can call Bedrock (preventing end-users from directly hitting the model without guardrails). You can also segregate duties: one IAM role might allow calling the model only through a guardrail Lambda, whereas direct calls are denied. AWS CloudTrail logging will then give you an audit log of all AI inference calls, which is useful for post-hoc analysis if something slips through a guardrail. Additionally, consider using AWS Config rules or Service Control Policies to enforce guardrails – for instance, an SCP could disallow deploying any AI model that isn’t from a vetted list of models, or prevent turning off Bedrock guardrails in any account.
  • Monitoring and Alerting: Leverage AWS monitoring tools to watch the health of guardrails. Amazon CloudWatch can track metrics like how often content filters are triggered or how many requests were blocked. You could set up an alert if, say, there are too many blocked outputs (which might indicate users are attempting misuse or that the AI is frequently producing disallowed content). Amazon SNS or Security Hub can be integrated to raise security incidents if a guardrail triggers a critical event (for example, if an output contained sensitive data that was caught, you may want to notify the data protection officer).
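
The pre-check/post-check Lambda pattern referenced in the list above can be sketched roughly as follows. This is a simplified, hypothetical version: it assumes the Bedrock Converse API and a placeholder model ID, uses toy regex and phrase rules in place of managed Bedrock Guardrails or Amazon Comprehend, and emits a CloudWatch metric whenever a check fires. Exact request and response field names should be confirmed against current AWS documentation.

```python
import re
import boto3

bedrock = boto3.client("bedrock-runtime")
cloudwatch = boto3.client("cloudwatch")

MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"  # placeholder; use your approved model
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")    # toy example: US SSN format
BANNED_PHRASES = ["guaranteed returns", "internal use only"]


def emit_guardrail_metric(name: str) -> None:
    """Record a guardrail trigger so dashboards and alarms can track it."""
    cloudwatch.put_metric_data(
        Namespace="AIGuardrails",
        MetricData=[{"MetricName": name, "Value": 1, "Unit": "Count"}],
    )


def lambda_handler(event, context):
    prompt = event["prompt"]

    # Pre-check: refuse prompts containing obvious PII before they reach the model.
    if PII_PATTERN.search(prompt):
        emit_guardrail_metric("BlockedPromptPII")
        return {"blocked": True, "reason": "prompt contains PII"}

    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    answer = response["output"]["message"]["content"][0]["text"]

    # Post-check: block answers that hit simple policy rules.
    if any(phrase in answer.lower() for phrase in BANNED_PHRASES):
        emit_guardrail_metric("BlockedResponsePolicy")
        return {"blocked": True, "reason": "response violated content policy"}

    return {"blocked": False, "answer": answer}
```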

Overall, AWS integration is about using the cloud’s native guardrail features where possible (for efficiency and reliability) and deploying custom guardrail logic on AWS’s scalable infrastructure where needed. AWS’s extensive partner solutions (e.g. third-party tools in AWS Marketplace focused on AI risk) and reference architectures can accelerate building a robust system. The goal should be that your AI guardrails feel like a natural extension of your AWS environment – using familiar tools like IAM, CloudWatch, and Lambda ensures the devops and security teams can manage AI risks similarly to other cloud workloads.

4.2 Multi-Cloud Compatibility (Azure, Google Cloud, and Hybrid Environments)

Enterprises often operate in multi-cloud environments or may want cloud-agnostic guardrail solutions to avoid vendor lock-in. Ensuring your AI guardrails work across Azure, Google Cloud Platform (GCP), and on-premise is a strategic consideration:

  • Using Cloud-Native Features on Each Platform: Each major cloud has its own responsible AI features. Microsoft Azure’s OpenAI Service, for example, includes a content filtering system powered by Azure AI Content Safety, which by default monitors prompts and completions for prohibited content (learn.microsoft.com). It also offers a “Prompt Shield” to guard against prompt injection attacks (mindgard.ai). Google Cloud’s Vertex AI likewise provides tools for data loss prevention (via integration with Google DLP APIs) and allows setting safety parameters on models (Google’s PaLM models have safety settings for different content categories). When working multi-cloud, it makes sense to enable the native guardrails in whichever cloud you are using for a particular AI workload. This way, each environment enforces a baseline of safety and compliance. However, you should harmonize the policies – e.g., define a consistent set of disallowed content categories across clouds (possibly guided by your company’s AI policy) and configure each cloud’s filters accordingly.
  • Abstraction and Orchestration Layers: Some companies choose to implement an abstraction layer over multiple AI services to enforce uniform guardrails. For instance, you might build a central gateway service for all AI requests (no matter which cloud’s model is used behind the scenes). This gateway can apply corporate guardrails (like checking against a central “block list” of forbidden outputs or verifying the user’s authorization) before routing the request to Azure, AWS, or GCP (see the sketch after this list). By doing so, you ensure that even if different technical guardrails exist in each cloud, your critical policies are always applied. Containerization and serverless functions can help deploy the same guardrail code in different clouds. For example, you could run an identical validation microservice on AWS Lambda, Azure Functions, and Google Cloud Functions, each sitting in front of the respective AI endpoints. This approach was echoed by Forrester analysts, who suggest companies with resources can work directly with model providers to set up “walled-off” environments with their own guardrail layers, rather than relying solely on the provider defaults (linkedin.com).
  • Portability of Guardrail Configurations: If you use third-party guardrail software or open-source tools (such as the Nvidia NeMo Guardrails toolkit or others), ensure they can be deployed in a cloud-agnostic way. Many such tools are provided as Docker containers or Python libraries, which makes them portable. You might run the same guardrail container on AWS Fargate, Azure Container Instances, or Google Cloud Run. The key is to avoid hard-coding cloud-specific dependencies in your guardrail logic. For instance, if your guardrail needs to check a piece of text for profanity, you could either call AWS Comprehend’s profanity API, or use an open-source model. To be multi-cloud, you’d likely choose the open-source model or a vendor-neutral API so that you’re not tied to AWS only. Another example: logging – rather than writing guardrail logs only to Amazon CloudWatch, consider using a centralized logging system (like Elastic or Splunk) where logs from all clouds funnel in. This makes it easier to monitor guardrail events across environments.
  • Azure and GCP Specifics: On Azure, in addition to content filters, consider using Azure’s Policy feature for guardrails. Azure Policy can audit and enforce rules on Azure resources – for example, a policy could ensure that only certain regions are used for AI services (to comply with data residency laws), or that all AI deployments have logging enabled. Microsoft also provides Responsible AI dashboards and Fairness/Bias assessment tools in Azure Machine Learning that can act as guardrails during model development (ensuring models meet fairness thresholds before going live). On Google Cloud, tools like Explainable AI and Model Cards can serve as guardrails by documenting model limitations and ensuring human oversight where the model is less confident. If using GCP’s Generative AI Studio, you can set safety filters and also moderation webhooks – where after the model generates text, it calls your webhook for approval. That mechanism can be used to inject a custom guardrail stage in the workflow, perhaps calling a shared service that all clouds use.
  • Hybrid and On-Prem: Some sensitive applications may use on-premise AI infrastructure (for data control reasons). In such cases, cloud guardrails won’t directly apply, so you’ll rely more on vendor-neutral solutions or self-hosted guardrail components. Fortunately, many guardrail techniques (like open-source content moderation models, regex-based PII scrubbing, or rule-based checkers) can be run on-prem. Containerizing them or using Kubernetes operators can allow the same guardrail services to run in your data center and in cloud clusters in a similar fashion.
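
Below is a minimal, cloud-agnostic sketch of the gateway idea from the list above: one shared policy check applied before a request is dispatched to whichever provider backend is configured. The provider functions are stubs standing in for real Azure OpenAI, Amazon Bedrock, or Vertex AI clients, and the block list and authorization table are placeholder assumptions.

```python
from typing import Callable, Dict

BLOCK_LIST = ["ssn", "credit card number"]  # placeholder corporate block list
AUTHORIZED_USERS = {"alice", "bob"}         # placeholder authorization source


def shared_policy_check(user: str, prompt: str) -> None:
    """Uniform corporate guardrail applied regardless of which cloud serves the request."""
    if user not in AUTHORIZED_USERS:
        raise PermissionError(f"user {user!r} is not authorized to use AI services")
    if any(term in prompt.lower() for term in BLOCK_LIST):
        raise ValueError("prompt violates corporate data policy")


# Stubs standing in for real Azure OpenAI, Amazon Bedrock, and Vertex AI clients.
PROVIDERS: Dict[str, Callable[[str], str]] = {
    "azure": lambda p: f"[azure model reply to: {p}]",
    "aws":   lambda p: f"[bedrock model reply to: {p}]",
    "gcp":   lambda p: f"[vertex model reply to: {p}]",
}


def gateway(user: str, prompt: str, provider: str = "aws") -> str:
    """Apply the shared guardrail, then route to the selected cloud backend."""
    shared_policy_check(user, prompt)
    return PROVIDERS[provider](prompt)
```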

In essence, multi-cloud compatibility for AI guardrails means adopting a unified policy framework but implementing it with the native tools and custom layers as needed in each environment. The outcome should be that no matter where an AI workload is executed, it adheres to the same corporate guardrails. By planning for portability and consistency, enterprises can avoid a situation where, say, the AI in AWS is well-restrained but an experiment on Azure slips through cracks. A consistent approach also makes audits and compliance reviews easier – you can demonstrate to regulators or internal auditors that you have a holistic guardrail strategy spanning all platforms.

5. Hazards and Pitfalls of AI Guardrail Implementation

Implementing AI guardrails is essential, but it must be done thoughtfully. There are several potential hazards and pitfalls to be aware of:

  • Over-Reliance on Guardrails: One common pitfall is assuming that once guardrails are in place, AI outputs are infallible. In reality, guardrails greatly reduce risks but do not eliminate them entirely. Adversarial users or unforeseen scenarios can bypass guardrails. For example, security researchers recently showed methods to evade Azure’s AI content safety guardrails, managing to inject malicious prompts that slipped past the filters (mindgard.ai). Over-reliance on automated guardrails without human oversight can lead to a false sense of security. If users stop exercising judgment because “the AI is guarded,” errors may go unnoticed. Always consider guardrails as the second line of defense – robust training of models and educated users should still be the first line. It’s also wise to continuously stress-test guardrails (through red teaming and audits) to identify weaknesses. The case of the Relex chatbot (which was well-guarded yet still gave out-of-scope advice under pressure) shows that even strong guardrails have limits (cio.com). Thus, organizations should avoid complacency; maintain human-in-the-loop review for critical AI decisions, and have contingency plans for when guardrails fail (e.g. a protocol to quickly disable an AI system that’s outputting dangerous content).
  • Misalignment with Business Objectives: Guardrails must be aligned with what the business is trying to achieve, or they can backfire. If set too restrictively, guardrails might block useful functionality and frustrate users, leading them to circumvent official tools. Conversely, if guardrails are too lax in the interest of convenience, they may allow outcomes that conflict with company values or policies. It’s a delicate balance. BCG experts emphasize using an organization’s core Purpose and Principles as guardrails so that AI actions reflect the company’s values and objectives (medium.com). A misalignment example might be a customer service AI that, in trying to never escalate to a human (for efficiency goals), ends up handling sensitive complaints it shouldn’t – hurting customer satisfaction. That would be guardrails (or lack thereof) working against the business objective of good service. To avoid this, involve business stakeholders in designing guardrails: define clearly what outcomes are desirable vs. unacceptable from a business perspective. For instance, a sales department might set a guardrail that AI never offers a discount beyond a certain margin without manager approval (to protect revenue). If tech teams implemented guardrails in isolation, they might not know that nuance. Regularly review guardrail performance metrics in light of KPIs. If employees start finding the AI tool unusable because “it just refuses everything,” that guardrail design needs adjusting to better support the actual work while still mitigating risk.
  • Latency and Performance Impacts: Guardrails, especially layered ones, can introduce additional processing steps that impact system performance. Every extra filter model or check is another computation. If not designed carefully, guardrails could make an AI system slow or unresponsive, undermining user adoption. For example, checking each AI response with multiple algorithms (toxicity model, then privacy model, then consistency checker) adds latency. In high-frequency use cases (like an AI assisting in real-time coding or rapid-fire chat), this can be noticeable. There is also a cost implication – e.g., calling an AI model twice (once to get an answer, once to verify it) doubles the compute cost. One mitigation is what some architects call a multi-model approach: use a lightweight model for initial screening and only use the heavy, more accurate model when needed (aws.amazon.com) (see the sketch after this list). This can optimize the cost-latency trade-off by not overburdening the main loop with extremely slow checks. Another technique is to run guardrails asynchronously or in parallel where possible, to cut down user-perceived wait times. When implementing guardrails, always measure the end-to-end performance and optimize. If a guardrail process does cause significant delay, consider whether it’s truly necessary for every single request or if it can be applied selectively (for instance, internal queries may not need as strict filtering as external ones). Also leverage cloud scalability – if a guardrail step is CPU-intensive, scale out more instances of it under load. The bottom line is to engineer guardrails efficiently so that security doesn’t come at the cost of usability. Early pilots should monitor latency and user feedback to tune this balance.
  • Ethical and Regulatory Blind Spots: AI guardrails are only as good as the foresight of those who design them. It’s possible to implement a set of guardrails that satisfy known regulations and ethical norms, yet miss a new or subtle issue. For example, a company might focus guardrails on avoiding explicit hate speech or data leaks, but unintentionally allow a biased outcome because they didn’t include a fairness guardrail. There is a risk of “unknown unknowns” – areas where the AI could cause harm that weren’t anticipated. One way to combat this is to involve diverse perspectives in the guardrail design process (include legal, ethics, and domain experts). Also, keep abreast of evolving AI regulations. If laws change (such as new AI Act requirements in the EU or updated FDA guidelines for AI in medical devices), your guardrails might need updating. An ethical blind spot example: an AI recruitment tool might be guardrailed to remove protected attributes, yet it could still end up favoring candidates from a certain demographic due to proxy variables. If the guardrails didn’t account for that (say, by testing outputs for disparate impact), the result could be discrimination slipping through. As Gartner and others note, continuous monitoring and feedback loops are crucial: you should gather data on AI decisions and outcomes and evaluate them for unintended bias or rule breaches (bcg.com). When issues are found, update the guardrails or add new ones. Regulatory compliance is another angle – for instance, guardrails should ensure AI explanations are logged if required by law (the EU AI Act mandates certain transparency). Missing that could create legal exposure. It’s advisable to maintain an AI risk register where you list potential risks and map guardrails to them. If something is in the risk register with no corresponding guardrail, that’s a blind spot to address. Additionally, consider external audits of your AI systems; third-party evaluators might catch issues your team overlooked. Being proactive about ethical considerations – privacy, fairness, transparency – beyond just the obvious ones will strengthen your guardrail framework against future blind spots.
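
The multi-model screening idea from the latency discussion above can be sketched as a two-tier check: a cheap deterministic screen handles the clear cases, and only ambiguous outputs are sent to a slower, more accurate checker. The expensive_checker parameter is a stub for a moderation model or secondary LLM call, and the keyword lists and notion of an “ambiguous” result are illustrative assumptions.

```python
from typing import Callable

CLEARLY_BAD = ["kill yourself", "how to build a bomb"]    # cheap, high-precision screen
SUSPICIOUS = ["diagnosis", "investment advice", "refund"]  # terms that warrant a closer look


def screen_output(text: str, expensive_checker: Callable[[str], bool]) -> bool:
    """Return True if the output should be blocked.

    Tier 1 is a fast keyword screen; tier 2 (the slow model) runs only when needed,
    which keeps average latency and cost low.
    """
    lowered = text.lower()
    if any(term in lowered for term in CLEARLY_BAD):
        return True                      # block immediately, no expensive call needed
    if any(term in lowered for term in SUSPICIOUS):
        return expensive_checker(text)   # escalate only the ambiguous cases
    return False                         # clearly fine; skip the expensive check
```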

In summary, to avoid these pitfalls: treat guardrails as necessary but not sufficient on their own, align them tightly with business goals and user needs, architect for minimal performance impact, and remain vigilant and adaptive to new ethical or regulatory challenges. AI guardrails implementation is an ongoing process of improvement, not a one-time set-and-forget deployment (bcg.com).

6. Expert Insights and Strategic Guidance

Leading consulting and research firms have been studying enterprise AI adoption and emphasize the importance of guardrails and governance. Here we highlight guidance from Gartner, McKinsey, and BCG that can inform your guardrail strategy:

  • Gartner: Gartner advocates a comprehensive approach termed “AI Trust, Risk and Security Management (AI TRiSM)”, which essentially builds guardrails into every layer of AI projects (appsoc.com). Gartner’s framework includes AI governance, runtime monitoring, data protection, infrastructure security, and traditional IT controls (appsoc.com). In practice, Gartner suggests establishing enterprise-wide AI policies and an inventory of AI models (governance), continuous real-time monitoring for policy violations or anomalies (runtime enforcement), and strict information governance (ensuring training and input data are properly classified and access-controlled) (appsoc.com). This aligns with the notion that effective guardrails are layered: not just one tool, but multiple controls from policy to technical measures. Additionally, Gartner’s recent research has listed generative AI as a top emerging risk for organizations (boschaishield.com), reinforcing that C-level executives must put guardrails in place before scaling up AI deployments. Their advice to clients often includes creating an internal AI governance committee or task force that oversees AI use cases and ensures appropriate guardrails and risk mitigations are applied to each. For companies looking for more tactical guidance, Gartner has begun to publish “Market Guides” for AI Trust and Security solutions, indicating that there is a growing ecosystem of tools (many mentioned in this report) that can be employed (boschaishield.com). In essence, Gartner’s take is: don’t delay implementing AI guardrails. To quote a summary from an AppSOC analysis of Gartner’s view: “Enterprises should not delay implementing strong AI governance and security controls” (appsoc.com) – start with a framework and refine it as you go.
  • McKinsey & Company: McKinsey emphasizes speed with safety when it comes to AI. They recognize the huge upside of AI ($4.4 trillion in potential value by one estimate) but caution that over 90% of organizations are not fully prepared to deploy AI responsibly (mckinsey.com). McKinsey’s guidance often starts with high-level principles and policies as guardrails, endorsed by leadership (mckinsey.com). For example, a company should have clear guidelines on acceptable AI use cases (where it can be applied and where not), which “serve as a guardrail for acceptable use cases” (mckinsey.com). They also advise putting in place a central AI governance team or officer to coordinate risk management efforts (mckinsey.com). On the operational side, McKinsey has identified critical roles – such as designers, engineers, and risk managers – who all need to embed “responsibility by design” into AI products (mckinsey.com). This means that from the initial design phase, one should incorporate guardrails and think through misuse cases. McKinsey’s recent publications stress iterative testing and improvement of guardrails: deploy AI pilots, monitor outcomes closely, and adjust guardrails as needed. For instance, in a generative AI pilot, they suggest starting with narrower use cases and explicitly evaluating what could go wrong, then implementing mitigations for each risk before scaling up (bcg.com). Another insight from McKinsey is the need to train and instill a responsible AI culture throughout the organization (mckinsey.com). Educating employees not to blindly trust AI and to understand the guardrails aligns with making guardrails effective; people need to know there are boundaries and why they exist. In summary, McKinsey’s strategic guidance is: combine top-down governance (principles, committees) with bottom-up practices (robust processes, talent training) to operationalize AI guardrails. They often remind clients that moving fast on AI requires parallel investment in safety, so treat the implementation of guardrails as an integral part of the AI deployment roadmap, not an afterthought (cio.com).
  • Boston Consulting Group (BCG): BCG has been vocal about Responsible AI as a prerequisite for scaling AI. One of their key messages is that GenAI cannot scale without responsible AI – meaning if you try to deploy generative AI widely without guardrails, it will likely backfire with incidents or mistrust (bcg.com). BCG’s responsible AI framework spans the entire lifecycle: from designing the AI solution with ethical guidelines, to coding with secure practices, to deploying with monitoring, and operating with continuous oversight (bcg.com). A practical example BCG gives is incorporating guardrails in the design of prompts and user interactions. If a business policy is “we do not compare ourselves to competitors in public statements,” then the AI (say a sales chatbot) should have that as a guardrail in its prompt instructions (bcg.com). BCG also highlights the need for feedback loops: once an AI system is live, gather user feedback and use it to fine-tune guardrails and workflows (bcg.com). In one BCG case study, they deployed a GenAI agent for customer support that served thousands of customers, and they established multi-level monitoring (from technical logs to user satisfaction metrics) to continually refine the guardrails as new types of queries came in (bcg.com). Another theme from BCG is aligning guardrails with value creation – they argue that good guardrails not only prevent downside risk but also enable trust, which allows more impactful use of AI. For example, if customers see that an AI advisor always provides a disclaimer and an option to speak to a human (an explicit guardrail), they may trust the service more, increasing adoption. BCG’s Chief AI Ethics Officer has spoken about implementing internal accountability structures – essentially making sure someone in leadership “owns” AI ethics and guardrails implementation, which drives accountability and resource allocation for these efforts. In short, BCG advises treating guardrails as a critical success factor for AI programs, investing in the processes, roles, and technology to get them right, thereby turning responsible AI into a competitive advantage rather than a constraint (medium.com).

Incorporating these insights, a company might establish a cross-functional AI governance board (Gartner’s advice), define clear AI use policies and educate staff (McKinsey’s advice), and integrate guardrails at each phase of AI deployment with continuous monitoring (BCG’s advice). Together, these ensure that as you implement the technical guardrails, you also have the organizational support and strategic alignment to make them effective.

7. Case Studies and Examples

Real-world examples illustrate how AI guardrails are being implemented and the benefits and challenges that come with them:

  • Morgan Stanley Wealth Management: Use Case: Internal AI chatbot for financial advisors. Guardrail Approach: The firm partnered with OpenAI to deploy GPT-4 on a private cloud, meaning none of the prompts or responses leave the Morgan Stanley environment (linkedin.com). This addresses data confidentiality by design. Advisors use the chatbot to query the firm’s vast research library. To guardrail the content, Morgan Stanley carefully curated the knowledge base the AI was allowed to draw from – ensuring it’s all approved research, no random internet data. They also implemented a feedback loop: advisors can rate responses, and if an answer is incorrect or not useful, it’s flagged and the AI team adjusts the model or adds clarifications to the dataset. Results so far have been positive, with hundreds of advisors contributing to improvements (linkedin.com). A notable guardrail is that the AI will not answer questions outside the provided knowledge base, and instead encourages the advisor to consult a specialist – this prevents hallucinations. Outcome: Nearly all advisor teams have adopted the tool, and it’s saving time on information lookup while maintaining compliance. Morgan Stanley’s approach demonstrates that in a regulated industry, it’s possible to harness generative AI by building strong guardrails (data privacy, scope limitation, human review) around it, thus unlocking productivity without compromising trust.
  • Relex (Supply Chain Software): Use Case: “Rebot” AI assistant for employees, built on Azure OpenAI (GPT-4). Guardrail Approach: Relex leveraged Azure’s built-in content filters to handle basic harmful content screening (cio.com). On top of that, they included very specific company instructions in the system prompt (e.g. if a query is beyond the bot’s knowledge, it should refuse and suggest who to ask). They also explicitly told the AI not to provide advice outside of certain domains. To test the guardrails, Relex conducted both internal testing and invited outsiders to try to break it (red team). Only minor issues were found (like giving a harmless recipe when pushed, as mentioned earlier), which were acceptable for their risk level (cio.com). They additionally curated their data source (the bot only uses an internal knowledge base that is kept up to date) and provided an in-app feedback mechanism (a thumbs-down button for any answer that seems off, which alerts the team) (cio.com). Outcome: The guardrails have held well, with zero serious incidents after deployment. Employees trust Rebot for quick answers on company policies and product info, and if it ever doesn’t know or hits a guardrail, it politely declines rather than guessing. This case shows the effectiveness of combining cloud provider safety features with custom rules and rigorous testing.
  • Healthcare Provider’s Pilot (Hypothetical Composite): Use Case: An AI summarization tool to draft clinical notes from doctor-patient conversations. Guardrail Approach: Before deploying, the hospital’s IT and clinical leadership set clear guidelines: the AI can assist in generating the first draft of a note, but a physician must review and sign off (human guardrail). They used a vendor solution that anonymizes patient identifiers in the audio transcription, addressing privacy. They also configured the AI model such that it does not provide any new medical opinion – it only summarizes what was said in the conversation, to avoid diagnostic errors. During testing, they found the AI sometimes created false observations (hallucinations), so they added a step where the note is compared against key points extracted from the conversation; if there’s something in the draft not in the transcript, it’s highlighted for the doctor to double-check. Outcome: In trial runs, doctors reported the AI saved time on paperwork and with the guardrails (no autonomous medical advice, mandatory review, privacy filters) they felt it didn’t jeopardize patient safety or compliance. This kind of staged implementation (starting with back-office tasks under tight oversight) is echoed by healthcare experts who advise focusing on “right care, right time, right use of algorithms” and gradually building trust (healthtechmagazine.net).
  • Air Canada Chatbot Incident: Use Case: Public customer-facing fare quote chatbot. Incident: The chatbot mistakenly offered a promotional fare that was not actually available. A customer bought a ticket based on that information, and later a Canadian court ruled the airline had to honor the promised discount, even though it was the AI’s error (cio.com). Guardrail Lessons: In hindsight, Air Canada could implement guardrails such as: requiring the chatbot to only present prices from the official pricing database (and not ‘improvise’ if uncertain), and a policy that any price quote gets confirmation from a pricing engine before showing to customers. Also, better testing could have caught that the AI might give out a dummy fare under certain conditions. This case underscores the financial and legal risks of AI without sufficient guardrails – essentially it became an accidental contract. Many companies have taken note and are now ensuring AI outputs that could be taken as promises or official info are verified or disclaimed. For instance, after such incidents, some airlines and banks put a default disclaimer on their AI chat responses: “This information is generated by AI and should be verified. It does not constitute a contractual offer.” While disclaimers alone aren’t full guardrails, they can help set user expectations as a partial safety net.
  • Nvidia’s Internal Use: (Not a public case study, but an example of dogfooding.) Use Case: AI support agent for developers using Nvidia products. Guardrail Approach: Nvidia uses its own NeMo Guardrails system to power the assistant, combining multiple guardrail types: topical (the agent will only discuss Nvidia products and general programming help, and refuses other topics), safety (it filters out toxic language or insecure code suggestions), and security (if asked to execute code or do something potentially harmful, it refuses) (linkedin.com). They also integrated the guardrail tool with a monitoring dashboard that tracks when the AI had to invoke a guardrail (e.g. how often it refused a request and why). This helps their team adjust the knowledge base or rules if, say, too many users are asking for a certain unsupported kind of help (which may indicate a need to add a Q&A for that rather than just say “can’t do”). Outcome: By all accounts, the agent has been useful in handling common support queries on forums, while handing off complex issues to human staff. The guardrails prevented it from giving advice on competitor GPUs or engaging in flame wars (things that might happen if the AI followed a user off-topic). This showcases how a carefully configured guardrail framework enables an AI to operate in a specific role reliably, preserving the company’s focus and reputation.

These examples highlight that effective AI guardrails are being applied in practice and can yield positive results (efficiency gains, better customer service) while controlling risks. They also show that when guardrails are weak or absent (as in the Air Canada case), the repercussions can be costly. Organizations should learn from such case studies: emulate the successes (e.g., the private, feedback-driven approach of Morgan Stanley, or the thorough testing of Relex) and avoid the mistakes (e.g., deploying a customer-facing AI without checks that what it says is correct).

Conclusion and Recommendations

Implementing AI safety guardrails in a corporate environment is now a non-negotiable best practice for any organization looking to harness AI’s power responsibly. Guardrails provide the structure and assurance needed to deploy AI at scale without unacceptable risk. As detailed in this report, successful guardrail implementation spans people, process, and technology: from clear executive-endorsed policies and cross-functional governance, to preparing high-quality data with proper oversight, to integrating technical safeguards at every integration point (applications, cloud services, and user interface).

When moving forward with AI guardrails in your enterprise, consider these final recommendations:

  • Start with Principles and Framework: Establish your AI ethics principles and risk framework upfront. Use those to guide what guardrails are necessary. Leverage models like Gartner’s AI TRiSM or industry-specific guidelines as starting points (healthtechmagazine.net). Ensure board and C-suite buy-in so that guardrails are seen as enabling factors, not obstacles.
  • Implement Layered Guardrails: Do not rely on one single mechanism. Combine preventive guardrails (e.g., restricted training data, user access limits), detective guardrails (monitoring and alerts for violations), and responsive guardrails (human review, incident response plans). Overlap general-purpose safeguards with domain-specific ones. This defense-in-depth approach means even if one layer fails, others will catch the issue.
  • Use Cloud Capabilities but Stay Flexible: Turn on built-in guardrail features from cloud providers (AWS, Azure, GCP) to get quick wins in content filtering, data protection, and monitoring (docs.aws.amazon.com). Augment them with custom solutions for your unique needs, and ensure you can operate across multi-cloud by abstracting where necessary. Avoid vendor lock-in by not tying your entire guardrail strategy to proprietary tools – maintain some independent checks.
  • Integrate Guardrails into Development and Deployment: Treat guardrails as part of the AI development lifecycle. When designing an AI feature, include misuse case scenarios and mitigation steps. Before deploying a model, pass it through a checklist of guardrail criteria (bias testing, privacy assessment, etc.). Automate what you can – for example, CI/CD pipelines could include scanning models for bias or verifying that the model only outputs approved vocabulary for certain fields. Deploy with toggles so you can tighten guardrails if needed (e.g., have a setting for how strict the content filter is, which you can adjust if you see issues).
  • Continuous Training and Awareness: Regularly train employees on AI policies and the presence of guardrails. Encourage users to report when the AI refuses something (it might indicate over-blocking) or when it produces a questionable output (potential under-blocking). This feedback is gold for iterating on guardrail settings (cio.com; aws.amazon.com). Similarly, keep technical staff up-to-date via workshops on new guardrail tools, and incorporate lessons from incidents (internal or industry-wide) into practice. Cultivating an internal culture that values “AI done right” will make adoption of guardrails much smoother.
  • Monitor, Audit, and Adapt: Once in production, continuously monitor the AI systems. Set up dashboards for key risk indicators (e.g., percentage of AI outputs blocked by guardrails, user satisfaction scores, etc.). Conduct periodic audits – both internal audits and possibly external audits for an objective view. As new threats emerge (like new ways to jailbreak prompts) or new regulations come into effect, update your guardrails promptly. View the guardrails as a living system that evolves with the AI and its context. As McKinsey notes, organizations that embed risk mitigation processes deeply and make guardrails an ongoing effort see significantly better outcomes from AI investments (cio.com).

Eamonn Darcy
Director: AI Technology

Bibliography

  1. Gartner. (2023). AI Trust, Risk and Security Management (AI TRiSM). Gartner Research.
  2. Gartner. (2023). Emerging Tech: Top Risks of Generative AI. Gartner Risk Report Series.
  3. McKinsey & Company. (2023). The State of AI in 2023: Generative AI’s Breakout Year. McKinsey Global Institute.
  4. McKinsey & Company. (2023). Navigating Risk and Building Trust in AI. McKinsey Technology Insights.
  5. Boston Consulting Group (BCG). (2023). Responsible AI: From Principles to Practice. BCG Henderson Institute.
  6. Amazon Web Services (AWS). (2024). Amazon Bedrock Documentation and Safety Features. Retrieved from: https://docs.aws.amazon.com/bedrock/
  7. AWS AI/ML Blog. (2023). Implementing AI Guardrails with Bedrock and Lambda. AWS Partner Solutions.
  8. Microsoft. (2023). Azure OpenAI Service: Content Filtering and Prompt Shield. Microsoft Azure Documentation.
  9. Microsoft. (2022). Responsible AI Standard. Retrieved from: https://www.microsoft.com/ai/responsible-ai
  10. Google Cloud. (2023). Vertex AI: Safety Filters and Explainable AI. Google Cloud Platform Documentation.
  11. Google Cloud. (2023). Cloud Data Loss Prevention API. Retrieved from: https://cloud.google.com/dlp
  12. NVIDIA. (2023). NeMo Guardrails Toolkit. Retrieved from: https://developer.nvidia.com/nemo
  13. Arthur AI. (2023). Arthur Shield: Monitoring Generative AI. Retrieved from: https://www.arthur.ai/shield
  14. CalypsoAI. (2023). Moderator Platform Overview. Retrieved from: https://www.calypsoai.com/
  15. Credo AI. (2023). Enterprise AI Governance Platform. Retrieved from: https://www.credo.ai/
  16. OpenAI. (2023). GPT-4 Use in Enterprise: Morgan Stanley Case Study. Retrieved from: https://openai.com/customers/morgan-stanley
  17. Wall Street Journal. (2023). How Morgan Stanley Uses GPT-4 Internally. Retrieved from: https://www.wsj.com/
  18. Relex Solutions. (2023). Relex AI Assistant Using Azure GPT-4. Microsoft Customer Story. Retrieved from: https://customers.microsoft.com/
  19. AWS re:Invent Panel. (2023). Generative AI in Healthcare: Safety and Ethics. Las Vegas, NV.
  20. CBC News. (2023, Nov). Air Canada Ordered to Honour AI Chatbot’s Erroneous Fare Quote. Retrieved from: https://www.cbc.ca/news/canada/air-canada-chatbot-legal-case