Generative AI - Technical Security Measures


This document applies to the Generative AI features used in our platform.

It builds on the technical and organizational security measures already in place and addresses specific questions relating to the use of Generative AI in our products:

Data Processors

DeepConverse has a process for vetting data processors to ensure that any customer data processed by an external entity is handled according to our data security and privacy standards. We make use of the following platforms to serve Generative AI models.

  • Azure OpenAI

  • AWS

  • Google Cloud (Future)

Azure OpenAI

  • Azure data centers are located in the US

  • GPT-3.5 & GPT-4 used dependent on use cases

  • Safeguards in place

    • Data is not used to train other models

    • Compliance in place

    • Retention

      • Microsoft may store data for up to 30 days to detect abuse. DeepConverse is in the process of adding an option to reduce this retention period.

  • Difference between OpenAI and Azure OpenAI

Azure OpenAI is a fully managed service offered by Microsoft Azure. It comes with SLAs, security, and reliability guarantees to support enterprise use cases.

|                         | ChatGPT WebApp      | Microsoft Azure OpenAI ChatGPT APIs |
| ----------------------- | ------------------- | ----------------------------------- |
| Access method           | WebApp / iPhone app | API only                            |
| Data retention          | Undefined           | 30 days, for moderation only        |
| Data used for training  | Yes                 | No                                  |
| Control over PII data   | None                | PII can be filtered out             |
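
As noted above, Azure OpenAI is accessed via API only. A minimal sketch of such a call, assuming the official `openai` Python SDK (v1+) and hypothetical endpoint, key, and deployment names, looks roughly like this:

```python
from openai import AzureOpenAI

# Hypothetical endpoint, key, and deployment name; replace with your own values.
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com/",
    api_key="YOUR-AZURE-OPENAI-KEY",
    api_version="2024-02-01",
)

# Prompts sent through this API are not used to train models and are retained
# by Microsoft for up to 30 days for abuse monitoring only (see above).
response = client.chat.completions.create(
    model="gpt-4",  # the name of your Azure deployment, not the public model id
    messages=[
        {"role": "system", "content": "You answer customer support questions."},
        {"role": "user", "content": "How do I reset my password?"},
    ],
)
print(response.choices[0].message.content)
```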

Data Flow diagram for Azure OpenAI

PII and PHI Handling

DeepConverse does not make use of PII to provide answers and capabilities in the platform. PII and PHI are removed from the information being sent to Generative AI models.
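
As a rough illustration of this kind of redaction (the patterns, placeholder tokens, and helper below are hypothetical, not DeepConverse's actual implementation), PII such as email addresses, order numbers, and zip codes can be masked before a message reaches a generative model:

```python
import re

# Hypothetical patterns; a production system would rely on a dedicated
# PII/PHI detection step rather than a handful of regular expressions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "order_number": re.compile(r"(?:\bORD-?|#)\d{6,}\b", re.IGNORECASE),
    "zip_code": re.compile(r"\b\d{5}(?:-\d{4})?\b"),
}

def redact_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text is sent out."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label.upper()}>", text)
    return text

# The generative model only ever sees the redacted message:
message = "Order #1234567 hasn't arrived. Email me at jane@example.com (zip 94107)."
print(redact_pii(message))
# Order <ORDER_NUMBER> hasn't arrived. Email me at <EMAIL> (zip <ZIP_CODE>).
```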

  • PII is retained only for as long as it is needed for the conversation, for:

    • Case creation

    • API lookups that need PII

      • Configurable duration

      • Minimum 15 days

  • Post Conversation

    • PII is removed

    • Non-PII data is retained for analytics and DeepConverse model improvement

  • PII Processed for Flow Execution

    • User messages

    • Problem specification

  • PII (mostly during case creation)

    • Name

    • Email

    • Zip code, order number, etc.

Hallucinations

Due to their generative nature, Generative AI models can hallucinate, i.e. produce textual output that is simply the most probable continuation of the input provided. The models generate content word by word based on what is most likely to come next, not on whether it is factually correct.

  • DeepConverse makes use of Retrieval Augmented Question Answering to provide ground-truth data to the models and reason out the answer. This approach reduces the risk of hallucination because we do not allow the model to answer from its own memory (a sketch follows this list).

  • DeepConverse also checks for the sources of the information being generated. If we cannot determine a source, we treat the content as having a higher likelihood of being a hallucination.

  • We make use of the reasoning capabilities of LLMs to reduce hallucinations.

  • All Generative AI actions are logged and available for our team to review, so that potential hallucinations can be identified and additional safeguards put in place as we iterate on improving the models.
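
The sketch below shows the general shape of this retrieval-augmented approach combined with the source check and logging described above; the `retrieve_passages` and `llm` callables, the prompt wording, and the logging format are illustrative assumptions rather than DeepConverse's production code:

```python
import logging

logger = logging.getLogger("genai_audit")

def answer_with_sources(question: str, llm, retrieve_passages):
    """Retrieval-augmented answering with a source check (illustrative only)."""
    # 1. Ground the model: fetch passages from the knowledge base instead of
    #    letting the model answer from its own memory.
    passages = retrieve_passages(question, top_k=3)
    if not passages:
        return None  # nothing to ground an answer on, so do not generate one

    context = "\n\n".join(f"[{p['id']}] {p['text']}" for p in passages)
    prompt = (
        "Answer the question using ONLY the sources below. Cite the id of the "
        "source you used, or reply UNKNOWN if the sources do not contain the "
        f"answer.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    answer = llm(prompt)

    # 2. Source check: if no known source id appears in the answer, treat the
    #    output as a likely hallucination and suppress it.
    if "UNKNOWN" in answer or not any(str(p["id"]) in answer for p in passages):
        logger.info("No verifiable source for answer; falling back")
        return None

    # 3. Log the generative action so the team can review it later.
    logger.info("question=%r sources=%s", question, [p["id"] for p in passages])
    return answer
```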

Azure Trust Center