
This document provides a comprehensive overview of using Ema's AI Employee Evaluation System to assess customer interactions for quality assurance, compliance, and personalized agent coaching. Leveraging advanced AI technologies, this system automates end-to-end evaluation, ensuring 100% coverage without manual sampling, and allows for effortless reconfiguration via natural language instructions.
Welcome to Ema's Agent QA, an AI-powered employee assistant designed to intelligently evaluate every customer interaction from start to finish. This AI solution ensures process compliance across CRM, knowledge management systems, and logistics applications, delivering evidence-backed quality assurance and generating personalized coaching for every agent. Behind the scenes, Ema Fusion orchestrates multiple large language models (LLMs) to reason, retrieve, and score interactions within one unified, explainable system.

The outcome is 100% interaction coverage with no need for manual sampling, automated scalable coaching across teams, and instant reconfiguration using natural language instructions. Let's see how it functions in practice by starting with our sample inputs.

We are using two types of inputs here. The first is the complete call transcript, covering the entire conversation, along with any case details that exist in external applications.

In this demonstration, those case details are validated against ServiceNow.

The second input is the set of SOP process flow documents.

It's important to highlight that the AI employee assistant can analyze not only textual documents but also complex process flows with nested structures, using them to validate customer-agent interactions for compliance.
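To make the shape of these inputs concrete, here is a minimal sketch of how a call transcript and a nested SOP process flow might be represented; the class and field names are illustrative assumptions, not Ema's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TranscriptTurn:
    speaker: str                 # "agent" or "customer"
    text: str

@dataclass
class CallTranscript:
    ticket_id: str               # links the call to a case in ServiceNow or another external system
    turns: List[TranscriptTurn]

@dataclass
class SOPStep:
    name: str
    instruction: str
    substeps: List["SOPStep"] = field(default_factory=list)   # nested process-flow structure

@dataclass
class SOPDocument:
    title: str
    steps: List[SOPStep]
```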

The system assesses whether the agent followed the process flow. Let’s examine the sample output now.

Once the agentic mesh execution is completed for the call transcript, a comprehensive validation summary and report on all the QA scorecard elements become available.

On the screen, you can view the complete validation report, which includes a summary and highlights key QA elements. It assesses whether the agent adhered to data protection laws and applied the correct processes.

All the key QA elements you want to examine are outlined as natural language instructions.

You will receive detailed information indicating whether the outcome is a pass or fail, including the rule rationale and whether the data was sufficient for decision-making.

Additionally, there is a "show work" section that provides insights into the origin of this information.

The system incorporates a responsible AI framework, ensuring transparency in the reasoning and decision-making process.
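As an illustration, a single scorecard element in the report might carry fields along these lines; the rule text, excerpts, and field names are assumptions for the sketch, not Ema's exact output format.

```python
# One scorecard element in the validation report (illustrative fields and values).
qa_element_result = {
    "rule": "Verify that a courier delivery attempt is logged before the agent promises a re-delivery date.",
    "verdict": "fail",
    "rationale": "The agent promised next-day home delivery, but no courier delivery attempt "
                 "is logged for this shipment.",
    "data_sufficient": True,   # whether the available transcript and ticket data supported a decision
    "show_work": [             # evidence trail showing where the supporting information came from
        {"source": "call transcript", "excerpt": "I'll make sure it reaches you tomorrow."},
        {"source": "ServiceNow ticket", "excerpt": "No courier delivery attempt logged."},
    ],
}
```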

Feedback can be provided to help fine-tune the agents, especially as they encounter more variable datasets and call transcripts.

Alongside the evaluation report, an agent training document is generated. This document highlights key observations, mistakes made, skill gaps, and areas for improvement.

The report also identifies knowledge base deviations, scores each QA element, and provides recommendations for the agent. For this demo, ticket details are fetched from ServiceNow and validated against it.
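A rough sketch of what such a coaching document could contain is shown below; the agent name, SOP reference, and field names are hypothetical placeholders.

```python
# Illustrative structure of the generated coaching document (all names are assumptions).
training_document = {
    "agent": "A. Rivera",   # hypothetical agent name
    "strengths": ["Empathetic tone", "Accurate case logging"],
    "skill_gaps": ["Checking courier logs before committing to a delivery date"],
    "recommendations": ["Confirm shipment status in the logistics system before making promises"],
    "refresher_topics": [{"topic": "Re-delivery vs. pickup redirection", "sop_reference": "SOP-Delivery-4.2"}],
    "knowledge_base_deviations": ["Promised next-day delivery without a logged courier attempt"],
}
```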

Next, we upload a realistic customer transcript concerning a missing parcel. The transcript details how the customer was assured delivery and then redirected to pick up the parcel. This transcript and the corresponding ticket ID details are uploaded to the QA system.

The system automatically identifies key elements within the scenario, such as the agent’s promise of re-delivery despite no courier attempt being logged. Ema verifies this by fetching shipment data and SOP clauses, recognizing it as a process deviation. For demonstration purposes, the transcript will be uploaded manually.
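A heavily simplified, rule-based stand-in for this kind of deviation check is sketched below; in practice the reasoning is done by LLMs over the transcript, shipment data, and SOP clauses, so treat the keyword matching here purely as an illustration.

```python
def detect_redelivery_deviation(transcript_turns, shipment_events, sop_requires_logged_attempt=True):
    """Flag a process deviation when the agent promises re-delivery even though
    no courier delivery attempt appears in the shipment data (illustrative logic)."""
    promised_redelivery = any(
        turn["speaker"] == "agent" and "deliver" in turn["text"].lower()
        for turn in transcript_turns
    )
    attempt_logged = any(event["type"] == "delivery_attempt" for event in shipment_events)

    if sop_requires_logged_attempt and promised_redelivery and not attempt_logged:
        return {
            "verdict": "fail",
            "rationale": "Agent promised re-delivery, but no courier delivery attempt is logged.",
        }
    return {"verdict": "pass", "rationale": "No conflicting re-delivery promise found."}
```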

This system can integrate with external applications to receive transcripts. Once received, it triggers the agentic mesh of this AI employee for necessary actions. Meanwhile, we will explore the configuration tab of the AI employee.

At the core of this AI employee is Ema Fusion, an orchestration layer leveraging multiple LLMs across different platforms. It operates like a mixture of experts, retrieving responses from multiple LLMs and fusing them for the most accurate output.
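Conceptually, this fusion step can be sketched as below; `llm_clients` and `fuser` are hypothetical wrappers with an `answer` method, standing in for whatever model interfaces the orchestration layer actually uses.

```python
def fuse_verdicts(question, llm_clients, fuser):
    """Mixture-of-experts style orchestration sketch: ask several LLMs the same
    QA question, then have a fusing model reconcile their answers."""
    candidate_answers = [client.answer(question) for client in llm_clients]

    # Build a reconciliation prompt from the candidate answers and ask the fusing model
    # to return the single most accurate, evidence-backed verdict.
    fusion_prompt = (
        "You are reconciling QA verdicts produced by several models.\n"
        f"Question: {question}\n"
        + "\n".join(f"Candidate {i + 1}: {a}" for i, a in enumerate(candidate_answers))
        + "\nReturn the single most accurate, evidence-backed verdict."
    )
    return fuser.answer(fusion_prompt)
```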

Agents can be reconfigured using natural language instructions. Simply click on the edit configuration button to make modifications.

Once the upload is complete, the workflow is executed. Ema's AI agentic mesh orchestrates the agents to extract key insights validated against predefined rules, providing a QA score for the conversation.

While the workflow is in progress, let's examine the roles of the various agents involved.

In the background, agents continue validating the details against external applications.

A key agent is connected to the ServiceNow application, retrieving ticket details and comments. Similar connections can be made to CRMs or external applications for data retrieval and validation.
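For context, retrieving a ticket and its comments through ServiceNow's REST Table API might look roughly like this when done directly; Ema's own connector abstracts this away, so the sketch below only illustrates the underlying data access.

```python
import requests

def fetch_ticket_with_comments(instance, user, password, ticket_number):
    """Retrieve an incident and its journal comments from ServiceNow's Table API
    (simplified sketch; real deployments would use the platform's connector and secure auth)."""
    base = f"https://{instance}.service-now.com/api/now/table"
    auth = (user, password)
    headers = {"Accept": "application/json"}

    # Look up the incident record by its ticket number.
    incident = requests.get(
        f"{base}/incident",
        params={"sysparm_query": f"number={ticket_number}", "sysparm_limit": 1},
        auth=auth, headers=headers, timeout=30,
    ).json()["result"][0]

    # Journal entries (customer-visible comments) live in the sys_journal_field table.
    comments = requests.get(
        f"{base}/sys_journal_field",
        params={"sysparm_query": f"element_id={incident['sys_id']}^element=comments"},
        auth=auth, headers=headers, timeout=30,
    ).json()["result"]

    return incident, [c["value"] for c in comments]
```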

External tools connect to these applications and fetch the required details, allowing the agents to analyze conversations and process flows and to pull data from external sources.

Personalized training documents are generated for each agent based on identified skill gaps. Additionally, structured queries are created from unstructured data.

For instance, converting call transcripts and process flows into structured queries eliminates the need for manual preprocessing.

This automated process allows agents to utilize structured queries within the workflow, streamlining operations.
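One way to picture this transcript-to-structured-query step is the sketch below, where `llm.complete` is a placeholder for whatever model call performs the extraction and the field list is an assumption.

```python
import json

EXTRACTION_PROMPT = """Extract the following fields from the call transcript and return them as JSON:
ticket_id, customer_issue, promises_made_by_agent, resolution_offered, follow_up_required.
Transcript:
{transcript}
"""

def transcript_to_structured_query(transcript_text, llm):
    """Turn a raw call transcript into a structured record that downstream agents
    can query, replacing manual preprocessing (illustrative only)."""
    response = llm.complete(EXTRACTION_PROMPT.format(transcript=transcript_text))
    return json.loads(response)  # assumes the model returns valid JSON
```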

The missing-parcel transcript has now finished processing.

The overall result indicates a failure due to contradictory delivery information provided by the agent.

A reasoning-based validation summary is available.

Let's examine specific examples from the report.

The process was not followed correctly in the conversation.

The agent initially promised next-day home delivery, which was misaligned with SOP guidance.

A later update on the delivery contradicted that promise and was accompanied by an urgent request to prioritize the delivery date.

The system is capable of analyzing scattered data across applications to provide a rationale for decisions.

It highlights whether outcomes are a pass or fail and provides reasoning.

The text passages from the various systems that contributed to each decision are identified, showing how the response was formed.

A complete evaluation report on the conversation has been compiled.

The agent's training document outlines strengths, areas for improvement, recommendations, and refresher topics pointing to SOP documents.

It highlights key knowledge base deviations and scores QA elements based on evaluations from multiple systems.

If relevant tickets exist in ServiceNow, they will be retrieved, facilitating end-to-end automation of the QA process with Ema's multi-agent system.

Ema's multi-agent QA AI employee assistant analyzes interactions for process adherence and compliance, personalizes coaching for each agent, evolves QA flows, and connects seamlessly to multiple applications for comprehensive QA automation.

The process facilitates complete end-to-end agent QA evaluation.
