Understanding the differences between Traditional AI and Gen AI

Part II of our series on Generative AI (“GenAI”) expands on Part I: Top 7 Generative AI Misconceptions. In the last year, GenAI has become a focal point of both fascination and skepticism. To fully appreciate its impact, it’s crucial to understand how it diverges from Traditional AI. This post delves into those distinctions, addressing seven common inquiries from clients implementing conversational AI—ranging from chatbots to sophisticated digital assistants.

For those new to AI, here are a few pivotal terms:
  • LLM (Large Language Model): An extensive model trained on diverse public data sources, featuring billions of parameters. It requires months of training and significant financial resources to operate. The aim is to simulate a broad linguistic comprehension. However, akin to a precocious teenager, an LLM grasps fundamentals but can occasionally fabricate responses. GenAI leverages the vast datasets inherent in LLMs.
  • Domain: This refers to a specific field of expertise. IT support (like password resets or hardware servicing), HR inquiries (such as leave balances or timesheet approvals), and financial tasks (reviewing invoices or processing expense reports) represent different domains.
  • Prompt: The set of instructions conveyed to an LLM in natural language. A prompt guides the bot not just on the content of the response, but the manner of its delivery, encompassing all relevant rules and parameters.
  • RAG (Retrieval-Augmented Generation): A technique where domain-specific knowledge is retrieved and added to the LLM prompt, so the model generates a refined, contextually informed response (a minimal sketch follows this list).
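
To make the Prompt and RAG definitions concrete, here is a minimal Python sketch of the retrieve-augment-generate flow. The knowledge base and the retrieve(), build_prompt(), and call_llm() functions are illustrative stand-ins of our own, not any vendor's API; a real deployment would swap call_llm() for the LLM provider's client.

```python
# Minimal RAG sketch: look up domain knowledge, add it to the prompt,
# then ask an LLM to generate a grounded answer.

KNOWLEDGE_BASE = {
    "password": "Employees reset passwords at the IT self-service portal.",
    "leave balance": "Leave balances appear on the HR dashboard under 'My Time'.",
}

def retrieve(question: str) -> str:
    """Naive retrieval: return snippets whose key appears in the question."""
    hits = [text for key, text in KNOWLEDGE_BASE.items() if key in question.lower()]
    return "\n".join(hits) or "No relevant policy found."

def build_prompt(question: str) -> str:
    """Augment the prompt with retrieved domain data plus response rules."""
    return (
        "Answer using ONLY the context below. If the context does not cover "
        "the question, say you don't know.\n\n"
        f"Context:\n{retrieve(question)}\n\nQuestion: {question}\nAnswer:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for the vendor's LLM call (e.g., a request to a hosted model)."""
    return f"[LLM response generated from a {len(prompt)}-character prompt]"

print(call_llm(build_prompt("How do I reset my password?")))
```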

Armed with these definitions, let’s explore how GenAI stands in contrast to its predecessor, Traditional AI, through the lens of practical applications and client-driven concerns.

Traditional AI vs. Gen AI

Traditional AI is a model trained for a specific task or domain, which is why it is sometimes called “Narrow AI.” Every output is known and approved, so the AI will not answer questions outside its domain. Generative AI (GenAI) is different: it is not trained by the customer but by a vendor on a public data set, and it is meant to answer a much broader range of questions. While Traditional AI has curated outcomes, GenAI uses word-sequence probabilities to generate answers. “Generative” is a bit misleading in that it doesn’t create completely new outputs; it recombines patterns drawn from previous examples.


There is great curiosity over which AI to use in which situation. Since our readers work on enterprise use cases, we will focus on those, specifically user-support chatbots in enterprises, higher education institutions, and public sector organizations. This post compares Traditional AI and Generative AI using the key questions our clients are asking us.

#1 – What Data does it need?

Understanding the data requirements for AI systems is key to implementing effective chatbots and digital assistants. Here’s a breakdown of the types of data needed for Traditional AI versus Gen AI:

Traditional AI

Training Data
Traditional AI operates on a foundation of examples, where inputs (like user questions) are directly linked to their intents. For enterprise-level chatbots, this means preparing a dataset with a mixture of synthetic data, crafted by data scientists, and authentic interactions from users. These inputs are then mapped to predefined, curated answers, with the AI learning to recognize and respond to variations of queries within the trained domain.
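
For illustration only, a tiny slice of such a training set might look like the snippet below; the intent names, utterances, and curated answers are hypothetical, and a real enterprise data set carries many more variations per intent.

```python
# Hypothetical Traditional AI training data: user utterances mapped to intents,
# with each intent tied to a single curated, pre-approved answer.
TRAINING_DATA = [
    {"utterance": "How do I reset my password?", "intent": "it.password_reset"},
    {"utterance": "I forgot my login password", "intent": "it.password_reset"},
    {"utterance": "How many vacation days do I have left?", "intent": "hr.leave_balance"},
    {"utterance": "Show my remaining PTO", "intent": "hr.leave_balance"},
]

CURATED_ANSWERS = {
    "it.password_reset": "Use the IT self-service portal to reset your password.",
    "hr.leave_balance": "Your leave balance is on the HR dashboard under 'My Time'.",
}

# After training, the classifier maps a new phrasing to an intent,
# and the bot returns only the curated answer for that intent.
print(CURATED_ANSWERS["it.password_reset"])
```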

Domain Data
The responses in Traditional AI must be crafted and validated by humans. This guarantees that the chatbot responds correctly, following explicit guidelines. For example, in an HR service scenario, every potential query about hiring practices is paired with a corresponding, verified response. 

This doesn’t mean answers cannot be automated, but the AI will have clearly defined instructions as to how it provides any given answer.

Gen AI

Training Data
Gen AI starts with an extensive understanding of language, courtesy of the LLM’s training on vast and varied public datasets. This eliminates the need for organizations to create training data. Fine-tuning with organization-specific data is an option, but its trade-offs are significant cost and the risk of exposing sensitive data. Instead, Gen AI tends to rely on enriched prompts, utilizing the RAG method to generate responses.

Domain Data
To tailor Gen AI to a particular domain, your unique organizational data is overlaid at the time of the prompt. Unlike Traditional AI, Gen AI doesn’t need an array of examples, but it does require high-quality, consistent, and current policy information and/or enterprise data to inform its responses. This means your effort goes into curating policies that are timely, accurate, and high quality, with no contradictions in the data set.

#2 – How is the AI Implemented?

Implementing AI for conversational interfaces varies significantly between Traditional AI and Generative AI (Gen AI), each with its own set of procedures and challenges.

Traditional AI

The process begins with identifying the scope of queries the bot should answer, often derived from FAQs, customer support tickets, and data analytics. The next step involves creating training data for each identified question. This data synthesizes real and synthetic examples, which the AI uses to learn. Finally, answers are crafted and curated. Each of these steps can require domain experts and/or data scientists to ensure accuracy and relevance.

Gen AI

Gen AI deployment in the enterprise is centered on Retrieval-Augmented Generation (RAG): retrieve domain-specific knowledge, augment the prompt with it, and generate the answer. It starts with gathering that knowledge and ensuring the information is accurate, up to date, and of high quality. This curated knowledge is then used to enrich the prompts that guide the Gen AI’s responses. Unlike the discrete answer curation in Traditional AI, Gen AI relies on an extensive base of documents and policies to draw from, highlighting the importance of the adage “garbage in equals garbage out.” As with Traditional AI, domain experts are needed to manage this knowledge.
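
As one way to picture the “retrieve” step, the sketch below ranks a few hypothetical policy documents against a user question using TF-IDF similarity from scikit-learn. Production systems more commonly use embedding-based vector search, but the shape of the step is the same.

```python
# Rank curated policy documents for a user question (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Password reset policy: employees reset passwords via the IT self-service portal.",
    "Expense policy: expense reports must be submitted within 30 days of purchase.",
    "Leave policy: vacation balances are visible on the HR dashboard.",
]
question = "How do I reset my password?"

vectors = TfidfVectorizer().fit_transform(documents + [question])
scores = cosine_similarity(vectors[-1], vectors[:-1]).ravel()

# The best-matching document is what gets pasted into the prompt ("augment")
# before the LLM is asked to generate the answer.
print(documents[scores.argmax()])
```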

#3 – How does it handle security?

The security of AI-driven responses is a critical aspect, especially when sensitive information is involved.

Traditional AI

In Traditional AI, security is implemented through predefined rules that determine the sourcing and sharing of answers. Because each response is based on explicit instructions, there’s a high level of confidence in the system’s security. For instance, in environments with varying information access, like different labor unions or educational levels, Traditional AI can provide persona-specific answers. Administrators have the advantage of complete visibility and control, ensuring responses are securely tailored and compliant with organizational protocols.

Gen AI

Security in Gen AI is more complex due to the dynamic nature of response generation. While it can incorporate persona-specific policies, there’s less certainty about the precise output, as it’s generated in real-time. Despite attempts to tag policies for specific personas, Gen AI might not always adhere to these when generating responses. This has raised valid concerns regarding the use of Gen AI in contexts where strict security filtering is crucial. For example, tests have shown that Gen AI may not consistently reference the intended policy documents, potentially leading to breaches in confidentiality or incorrect information dissemination.
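
One common mitigation, sketched below under our own assumptions rather than any specific product’s design, is to filter knowledge by persona before it ever reaches the prompt, instead of relying on the LLM to respect a tag embedded in the policy text.

```python
# Illustrative defense: only persona-appropriate documents are eligible for
# retrieval, so the LLM never sees content the user is not entitled to.
documents = [
    {"text": "Union A overtime rules ...", "personas": {"union_a"}},
    {"text": "Union B overtime rules ...", "personas": {"union_b"}},
    {"text": "Campus holiday calendar ...", "personas": {"union_a", "union_b", "staff"}},
]

def retrievable_for(user_personas):
    """Drop every document the current user is not allowed to see."""
    return [d["text"] for d in documents if d["personas"] & user_personas]

# A Union A employee's question can only be augmented with Union A or general content.
print(retrievable_for({"union_a"}))
```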

#4 – How does it handle out-of-domain questions?

Addressing questions outside the trained expertise is a common challenge for AI systems, handled quite differently by Traditional AI versus Gen AI.

Traditional AI

Traditional AI operates within a defined set of parameters. When faced with out-of-domain queries, it typically lacks the confidence to provide an accurate answer and is programmed to offer a polite deflection, such as an apology or a redirection to human assistance. This conservative approach minimizes the risk of providing incorrect information.
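
A minimal sketch of that deflection behavior, using a hypothetical intent classifier output and a confidence threshold we chose for illustration:

```python
# Illustrative out-of-domain handling in Traditional AI: if the classifier's
# confidence is below a threshold, deflect rather than risk a wrong answer.
CONFIDENCE_THRESHOLD = 0.7  # tuning this value is a deployment decision

def respond(predicted_intent, confidence, answers):
    if confidence < CONFIDENCE_THRESHOLD or predicted_intent not in answers:
        return "I'm not able to help with that. Let me connect you with a person."
    return answers[predicted_intent]

answers = {"it.password_reset": "Use the IT self-service portal to reset your password."}
print(respond("it.password_reset", 0.92, answers))  # curated answer
print(respond("cafeteria.menu", 0.31, answers))     # polite deflection
```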

Gen AI

In contrast, Gen AI will venture a response based on the LLM’s expansive understanding of language, regardless of the domain. This can be beneficial for general inquiries but poses a risk when the question falls outside the organization’s policies or the AI’s trained expertise. In such cases, Gen AI might produce responses that are distracting, potentially inaccurate, or even in violation of workplace policies.

NOTE: Some LLMs allow you to instruct the bot not to attempt out-of-domain questions and to behave more like Traditional AI. In our early testing at the time of this post, that setting only sometimes works.
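
For illustration, such an instruction is typically just extra prompt text; the wording below is our own example, and, as noted, models only sometimes honor it.

```python
# Hypothetical guardrail prepended to a RAG prompt; adherence varies by model.
GUARDRAIL = (
    "Only answer questions covered by the provided context. "
    "If the question falls outside that context, reply exactly: "
    "'I can only help with IT, HR, and Finance questions.'"
)

prompt_template = GUARDRAIL + "\n\nContext:\n{retrieved_policies}\n\nQuestion: {user_question}"
```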

#5 – How does the AI learn?

For your AI to learn and improve over time, you need to understand the different learning mechanisms.

Traditional AI

Traditional AI learns from new training data. This data can improve understanding or even teach the AI new knowledge, such as a policy. The training data comes by way of humans or autonomous software decisions. The AI doesn’t get better unless its data and corpus of outcomes get better.

In the case of our own Ida product, most clients allow the software to autonomously supplement training data while the team is actively filling knowledge gaps. Our clients will train the model every 2-4 weeks so that the AI is constantly getting better.

Gen AI

Gen AI learns in a few ways, the first being retraining of the LLM that sits beneath the Gen AI. This is very expensive, but the cost is borne by the LLM provider, and the model is usually retrained only a few times a year. Because training happens infrequently, the LLM’s understanding is based on older information.

While not traditional machine learning, the RAG method can improve over time as the knowledge it sources improves. When inconsistencies and inaccuracies are reduced in the source data or content, the RAG element will produce better prompts and therefore better answers, even though that improvement does not come from AI learning.

#6 – What analytics should I expect?

Monitoring the usage and effectiveness of the AI is important for continual improvement.

Traditional AI

Imagine a well-organized filing cabinet. Each outcome from Traditional AI is cataloged, allowing a clear understanding of how the AI reached its conclusion. This transparency allows for in-depth analysis of user queries, AI confidence levels, AI reasoning, and the decision trail.
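
As an illustration of what that decision trail can look like, here is a hypothetical per-interaction analytics record; the field names are ours, not a specific product’s schema.

```python
# Illustrative analytics record for one Traditional AI interaction.
# Every field is known at answer time, so the decision trail is fully auditable.
import json
from datetime import datetime, timezone

interaction = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "channel": "web_chat",
    "user_query": "How do I reset my password?",
    "predicted_intent": "it.password_reset",
    "confidence": 0.92,
    "answer_id": "ANS-1042",   # the curated answer that was served
    "deflected": False,        # True when confidence fell below the threshold
}
print(json.dumps(interaction, indent=2))
```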

Analytics extend beyond individual interactions to include insights into data such as:

  • User Behavior: Time of day, location, frequency, channel usage and other trends around user behavior.
  • Knowledge Trends: Popular topics, questions and seasonal usage of different types of information.
  • AI Performance: Knowledge areas where the AI is very confident or less confident, identifying where knowledge gaps exist or where AI performance needs to be examined.

Gen AI

In contrast, generative AI operates more like a magician’s hat. While it delivers impressive results, the inner workings remain largely shrouded in mystery. We see the user’s query and the AI’s response, but the process of generating the response remains a black box.

This lack of transparency presents several challenges for analytics:

  • Limited Insight: We have minimal understanding of the factors influencing the AI’s output.
  • Difficult Optimization: Without insight into the decision process, it’s challenging to improve the AI’s performance.
  • Uncertain Trust: The lack of transparency can lead to concerns about bias and fairness in the AI’s output.

#7 – How is the AI Maintained? 

The maintenance of AI systems is critical to their effectiveness and accuracy, with practices differing significantly between Traditional AI and Generative AI (Gen AI).

Traditional AI

The upkeep of Traditional AI involves regular analysis of user interactions to identify and address knowledge or comprehension gaps. This means continually feeding the system new data, which includes questions, answers, and enriched training data to improve its understanding and response accuracy. Retraining with this updated information helps the AI expand its capabilities and adapt to changes in user behavior and language. Additionally, it’s essential to periodically review and update existing answers to reflect any changes in policies or information.

Gen AI

Since the training of Gen AI’s LLMs is not typically handled by individual users or organizations (unless fine-tuning is applied), maintenance focuses on enhancing the prompts used in the RAG method. To address new topics or refine understanding, the knowledge base used for prompt enrichment must be updated. Maintenance for Gen AI also includes ensuring the information used for responses remains current and precise, necessitating regular content reviews and updates.


Conclusion

Traditional AI offers a controlled environment with predictable outcomes and robust security, making it suitable for scenarios where precision and reliability are paramount. Its maintenance and learning processes, though requiring more effort, provide a level of transparency and control that can be critical for sensitive or highly regulated domains.

Generative AI, on the other hand, offers versatility and the ability to handle a broader range of queries. Its ability to generate responses from a vast corpus of language data can provide advantages in terms of answering more questions with less effort. However, the challenges associated with its black-box nature, potential security concerns, and the need for high-quality domain data for accurate responses must be carefully considered.

Below, we summarize the major features of any AI and whether Traditional or Generative AI has the advantage.

[Table: Gideon Taylor’s Scorecard for Traditional AI vs. Generative AI, indicating which approach has the advantage in each area: Implementation Effort, Lower Software Cost, Analytics, Control, Breadth & Scale, Ongoing Effort, Predictability, Security, and Run-time Speed.]

Organizations must weigh these factors against their specific use cases, resources, and risk profiles. Ultimately, a hybrid approach may be the most pragmatic path forward, combining the reliability of Traditional AI for core, sensitive tasks with the adaptive prowess of Generative AI for more general inquiries. This is exactly the path Ida is taking. Such a strategy can harness the strengths of both systems while mitigating their individual weaknesses, leading to a robust, dynamic, and efficient AI implementation. If you have any questions or need help in your AI project, don’t hesitate to contact us.