Calling All Sales Teams!
Custom Generative AI solutions require an understanding of neural networks and Large Language Models (LLMs), as well as awareness of the hardware, networking, and infrastructure-management services needed to deliver and deploy an LLM solution. This focused workshop is designed to empower sales engineering teams with the knowledge they need to confidently pitch and present Generative AI solutions!
You’ll learn the technical aspects and requirements of AI solution design: model selection, model fine-tuning, retrieval-augmented generation (RAG), and solution inference.
- Gain a working knowledge of LLM solutions and solution architecture.
- Learn how to demo AI solutions using the latest tools and services.
- Develop the ability to show and explain AI concepts using apps you can run locally.
- Describe the infrastructure required to run LLM solutions for customers.
- Lead requirements workshops and design AI solutions.
High-Level Agenda
| Agenda | Key Topics & Services |
| --- | --- |
| Introduction to Generative AI & NVIDIA AI Services | Overview of Generative AI and LLM applications; introduction to the NeMo Framework; overview of NVIDIA TensorRT, RAPIDS, and cuDF; NVIDIA AI Enterprise Suite architecture |
| LLM Fundamentals & Prompt Engineering | LLM fundamentals and architecture; model selection (e.g., GPT, LLaMA); prompt engineering principles and techniques; customizing and fine-tuning models with NeMo |
| Practical LLM Development & Experimentation | Implementing LLM solutions with NVIDIA tools; experimentation with RAPIDS, cuML, and Triton Inference Server; data analysis and model training workflows |
| LLM Use Cases: RAG, Chatbots, Summarizers | Building Retrieval-Augmented Generation (RAG) solutions; deploying chatbots and summarizers; integrating NVIDIA tools for production LLM solutions |
| Summary and Review | Review of key topics, design patterns, and demos |
By the end of the Generative AI – LLM Pre Sales Associate workshop, participants will be able to:
- Understand the Fundamentals of Generative AI:
- Define and explain key concepts of Generative AI and large language models (LLMs), including Transformer-based models and autoregressive models.
- Identify appropriate use cases for LLM applications, such as chatbots, summarizers, and Retrieval-Augmented Generation (RAG) systems.
- Master NVIDIA NeMo Framework:
- Demonstrate proficiency in using the NVIDIA NeMo framework to build, customize, and deploy LLM models.
- Fine-tune pre-trained LLMs for specific tasks, including text classification and summarization, using NeMo.
- Use NeMo to create industry-specific LLM applications that align with customer needs.
- Leverage NVIDIA AI Enterprise Tools:
- Recognize when to use TensorRT, cuDF, and Triton Inference Server to optimize LLM inference and deployment for real-time applications.
- Explain AI pipelines using the NVIDIA AI Enterprise Suite, with a focus on efficient model deployment and scaling.
- Perform Prompt Engineering:
- Apply best practices in prompt engineering to refine model outputs, ensuring alignment with customer requirements.
- Create prompts to enhance the performance of LLMs in various business scenarios, including customer interaction and product recommendations.
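As a flavor of the prompt engineering covered above, the sketch below assembles a few-shot classification prompt in plain Python. The task, wording, and examples are hypothetical and framework-agnostic; they are not taken from NeMo or any NVIDIA tool.

```python
# Illustrative few-shot prompt template (hypothetical task and examples).
def build_prompt(examples, query):
    """Assemble a few-shot prompt: an instruction, worked examples, then the query."""
    lines = ["Classify the sentiment of each review as Positive or Negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line separates examples
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

examples = [
    ("Great battery life.", "Positive"),
    ("Screen died in a week.", "Negative"),
]
prompt = build_prompt(examples, "Fast shipping and works perfectly.")
print(prompt)
```

The same template pattern (instruction, examples, query) carries over regardless of which model or serving stack ultimately receives the prompt.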
- Develop and Deploy LLM-Based Solutions:
- Design and implement LLM solutions such as Retrieval-Augmented Generation (RAG), chatbots, and summarizers.
- Integrate LLM solutions into enterprise environments using NVIDIA tools like RAPIDS and cuML for large-scale data processing.
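To make the RAG pattern concrete, here is a minimal sketch of its retrieve-then-prompt loop. Word-overlap scoring stands in for a real vector store, and the documents and function names are illustrative assumptions, not part of any NVIDIA API.

```python
def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (toy stand-in for a vector store)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(query, docs):
    """Ground the prompt in retrieved context, then ask the question."""
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Triton Inference Server serves models over HTTP and gRPC.",
    "RAPIDS cuDF accelerates dataframe operations on GPUs.",
    "NeMo supports fine-tuning of large language models.",
]
print(build_rag_prompt("How does Triton serve models?", docs))
```

A production system would swap the overlap scorer for embedding similarity, but the shape of the pipeline (retrieve, assemble context, generate) is the same.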
- Analyze and Optimize LLM Models:
- Conduct data analysis and model evaluation using GPU-accelerated tools, such as cuDF and Dask, to clean, transform, and visualize datasets.
- Optimize LLM performance through hands-on experimentation, including inference optimization using NVIDIA Triton Inference Server.
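The data-cleaning step mentioned above can be sketched in pure Python. This toy example (invented latency data, hypothetical threshold) drops missing values and outliers; cuDF and Dask apply the same kind of logic to much larger datasets through a pandas-like API on GPUs.

```python
import statistics

# Toy cleaning step (pure Python for illustration; not cuDF/Dask code).
def clean(latencies_ms):
    """Drop missing values, then values more than 2 standard deviations from the mean."""
    vals = [v for v in latencies_ms if v is not None]
    mean = statistics.mean(vals)
    stdev = statistics.stdev(vals)
    return [v for v in vals if abs(v - mean) <= 2 * stdev]

raw = [12.1, 11.8, None, 12.4, 250.0, 12.0, 11.9]
print(clean(raw))
```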
- Present and Pitch a Generative AI Solution:
- Demonstrate an understanding of key LLM design and deployment concepts and considerations.
- Understand, discuss, and present design solutions for real-world LLM challenges.
- Lead technical discussions and Proof of Concepts (POCs) with customers, confidently addressing LLM solution architecture, fine-tuning, and deployment.
These learning outcomes ensure that participants gain practical pre-sales skills in building, customizing, and deploying NVIDIA-based AI solutions.