<p>Large Language Models (LLMs) have put AI on the radar of every customer service leader. They have also introduced an entirely new set of words and phrases to the CX vernacular.</p>
<p>Here’s a glossary of terms you need to know to understand, evaluate and deploy LLMs in your contact center:</p>
<h2>Key LLM Terms</h2>
<p><b>Large Language Models (LLMs)</b>: A type of AI model that can understand and generate human-like language. LLMs are trained on vast amounts of text from the internet, which they analyze and process far faster than humans can in order to produce natural responses.</p>
<p><b>Natural Language Processing (NLP)</b>: A field of computer science focused on making interactions between computers and human language more natural and intuitive.</p>
<p><b>Natural Language Understanding (NLU)</b>: A subfield of NLP focused on enabling computers to understand human language in a way that is similar to how humans understand it.</p>
<p><b>Natural Language Generation (NLG)</b>: A subfield of NLP focused on what to say back. NLG generates a free-form response to a free-form question.</p>
<p><b>Generative AI</b>: A type of AI that uses machine learning models to generate new content, such as text, images, music or videos, that is similar to the examples it was trained on. LLMs are a subset of Generative AI.</p>
<p><b>Pre-training</b>: The process of training an LLM on large amounts of text data before fine-tuning it for a specific task.</p>
<p><b>Training Set</b>: A set of examples used to train an LLM, typically consisting of input-output pairs that are used to adjust the model’s parameters and optimize its performance on a specific task.</p>
<p><b>Fine-tuning</b>: The process of adapting an LLM to a specific task by training it on a smaller dataset that is specific to that task.</p>
<p><b>Transformer</b>: A neural network architecture used in many LLMs, including GPT-3, that enables efficient and effective language processing.</p>
<p><b>GPT-3, GPT-4</b>: Generative Pre-trained Transformer models developed by OpenAI. GPT-3 is a widely known and powerful LLM, with the fine-tuned models GPT-3.5 and GPT-4 following.</p>
<p><b>ChatGPT</b>: OpenAI’s accessible app for GPT-3 through GPT-4. The app puts LLMs into consumer hands, allowing dialogue-based interaction to create new text-based content.</p>
<p><b>Prompt</b>: A user’s text input that initiates and guides language generation by an LLM.</p>
<h2>Key CX Terms</h2>
<p><b>Entity</b>: A piece of information (e.g., text, item, number) that a machine needs to extract from a sentence to inform decisions and resolve requests. Collecting a set of entities is referred to as slot filling.</p>
<p><b>Intent</b>: The nature of a customer’s request that must be extracted from their natural utterance (e.g., “I want to send an order back” is categorized as a Return Request).</p>
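To make the Entity and Intent definitions concrete, here is a minimal, purely illustrative sketch of intent classification and slot filling. A production system would delegate this to an LLM; the rule-based patterns, intent labels and function names below are invented stand-ins for this sketch.

```python
import re

# Invented intent labels and trigger patterns, standing in for an LLM classifier.
INTENT_PATTERNS = {
    "Return Request": [r"\bsend (it|an order|my order) back\b", r"\breturn\b"],
    "Update Payment": [r"\bupdate my card\b", r"\bnew credit card\b"],
}

def classify_intent(utterance: str) -> str:
    """Map a free-form utterance to an intent label (rule-based stand-in for an LLM)."""
    text = utterance.lower()
    for intent, patterns in INTENT_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return intent
    return "Unknown"

def extract_entities(utterance: str) -> dict:
    """Slot filling: pull structured entities (here, an order number) out of the sentence."""
    slots = {}
    order = re.search(r"\border\s+#?(\d+)", utterance.lower())
    if order:
        slots["order_number"] = order.group(1)
    return slots

print(classify_intent("I want to send an order back"))       # Return Request
print(extract_entities("I want to return order #12345"))     # {'order_number': '12345'}
```

The point of the sketch is the separation of concerns: the intent decides which workflow to run, while the extracted entities fill the slots that workflow needs.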
<p>The intent informs the machine’s next steps.</p>
<p><b>Multi-intent Recognition</b>: LLMs enable Replicant’s Thinking Machine to recognize multiple requests from a single utterance (e.g., “I need to update my card on file and change my address”), which significantly decreases Average Handle Time.</p>
<p><b>Zero-Shot Learning</b>: A technique used in LLMs that allows the model to generate outputs for tasks it has not been explicitly trained on.</p>
<p><b>Human-in-the-Loop</b>: A design approach that incorporates human feedback and oversight into AI systems to improve their effectiveness. Without it, unsupervised learning can cause conversations and predictions to go wrong.</p>
<p><b>Hallucination</b>: A problem inherent to LLMs in which the machine makes up words or actions when it doesn’t know what to do from the information in the knowledge base.</p>
<p><b>Toxic Reply</b>: An inappropriate response generated by an LLM, often the number one reason why LLMs can’t be connected directly to customers.</p>
<p><b>Prompt Engineering</b>: The process of guiding LLMs to generate accurate and relevant outputs by creating high-quality prompts.</p>
<p><b>Adversarial Examples</b>: Inputs that have been intentionally designed to mislead an LLM or other AI system.</p>
<p><b>Bias</b>: In the context of LLMs, systematic errors or inaccuracies in language generation or understanding that result from the model’s training data.</p>
<p><b>Explainability</b>: The ability to understand how an LLM arrived at a particular output or decision.</p>
<h2>Key Terms for LLM-Powered Automation</h2>
<p><b>Contact Center Automation</b>: A hybrid approach that uses AI and LLMs to create a customer-centric contact center that efficiently serves customers at scale while elevating agents to focus on the most complex and nuanced issues.</p>
<p><b>Thinking Machine</b>: Replicant’s Contact Center Automation brain, which serves millions of customers across every channel and allows them to speak naturally and fully resolve issues with no wait.</p>
<p><b>Application Programming Interface (API)</b>: A set of protocols that specify how software components should interact with each other. APIs can connect LLMs with existing platforms like Contact Center Automation to enhance their performance.</p>
<p><b>1-Turn Problem Capture</b>: The Thinking Machine’s ability to accurately capture several issues in a single turn of the conversation to increase completion rates and speed to resolution.</p>
<p><b>Contextual Disambiguation</b>: The Thinking Machine’s ability to be fully aware of nuanced and complex differences in callers’ varied answers and determine their meaning (e.g., “You got it” means “Yes”).</p>
<p><b>Dynamic Conversation Repair</b>: The Thinking Machine’s ability to seamlessly repair conversations when callers change their mind or need to correct previously provided information.</p>
<p><b>Intelligent Reconnect</b>: The Thinking Machine’s ability to automatically call, chat or SMS customers back and pick up where they left off when a conversation drops for any reason.</p>
<p><b>Few-Shot Learning</b>: The Thinking Machine’s ability to decipher intents from thousands of unique phrases using only a few training examples, significantly decreasing how long it takes to deploy Contact Center Automation.</p>
<p><b>Dialogue Policy Control</b>: Replicant’s coded set of rules that ensures accuracy and control over the scripts, workflows and actions the LLM follows, and prevents hallucinations and toxic responses.</p>
<p><b>Enterprise-grade</b>: Replicant’s advanced expertise, gained from automating 10M+ conversations over the past 6 years, which has produced prompt engineering best practices and methods to rigorously evaluate and leverage LLMs.</p>
<p><b>Prompt Injection Prevention</b>: A prescribed set of workflows and scripts that filter and safeguard against sharing sensitive information with a third-party LLM.</p>
<p><b>LLM-Agnostic</b>: The Thinking Machine’s ability to leverage any LLM based on security and performance, ensuring contact centers receive the best LLM available.</p>
<p><b>Guardrails</b>: A set of rules and preventative measures that allows LLMs to improve the performance of Contact Center Automation without connecting customers directly to third-party platforms.</p>
The ChatGPT Glossary for CX Leaders - Replicant