Developing Your Own GPT Model with Python
| Preference | Dates | Timing | Location | Registration Fees | 
|---|---|---|---|---|
| Weekdays program (in-person and live online) | February 17 - 21, 2025 | 9:00 AM - 4:00 PM (GMT+4) | Dubai Knowledge Park | 3675 USD | 
Course Description
This course is designed to guide participants through the entire process of working with Large Language Models (LLMs) like GPT-2, LLaMA, and Falcon, from fine-tuning to deployment. By the end of this course, participants will have the skills to fine-tune open-source LLMs with their own data, deploy these models on a Google Cloud VM, and create a user interface using Django to interact with the models via prompts. This hands-on, project-based course will equip participants with the knowledge to build and deploy a fully functional GPT-like chatbot.
Upon successful completion of this program, participants will earn a certificate accredited by the Dubai Government.
Course Outline
Module 1 – Introduction to LLMs & Setup
- Overview of LLMs and today’s open-source landscape (Mistral, LLaMA, Falcon)
- Installing Python, PyTorch, and the Hugging Face libraries
- Running your first chatbot on an NVIDIA GPU
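
A first chatbot can be sketched in a few lines. This is a minimal sketch assuming the Hugging Face `transformers` and `torch` packages are installed; `gpt2` stands in for whichever small open model you pull down.

```python
# Minimal first-chatbot sketch. Assumes `pip install torch transformers`;
# "gpt2" is a stand-in for any small open model (Mistral, LLaMA, etc.).

def format_turns(history, user_msg):
    """Join prior (user, bot) turns and the new message into one prompt."""
    lines = [f"User: {u}\nBot: {b}" for u, b in history]
    lines.append(f"User: {user_msg}\nBot:")
    return "\n".join(lines)

if __name__ == "__main__":
    import torch
    from transformers import pipeline

    device = 0 if torch.cuda.is_available() else -1  # 0 = first NVIDIA GPU
    chat = pipeline("text-generation", model="gpt2", device=device)
    prompt = format_turns([], "Tell me about LLMs.")
    print(chat(prompt, max_new_tokens=40)[0]["generated_text"])
```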
 
Module 2 – Prompt Engineering & Customization
- Understanding effective prompts for domain-specific assistants
- Prompt tuning vs. fine-tuning (when and why)
- Hands-on: Experimenting with different prompting strategies
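
A domain-specific assistant usually starts with a structured prompt template rather than fine-tuning. One possible sketch of such a template builder:

```python
def build_prompt(role, context, question):
    """Assemble a domain-specific prompt: a role instruction, optional
    grounding context, and the user's question."""
    parts = [f"You are {role}. Answer only from the given context."]
    if context:
        parts.append(f"Context:\n{context}")
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)
```

Swapping the `role` string or the context source lets you repurpose the same chatbot for HR, support, or analytics assistants without touching model weights.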
 
Module 3 – Data Collection & Preprocessing
- Collecting and cleaning organizational data (PDF, CSV, TXT)
- Chunking text for use in LLM pipelines
- Basics of tokenization and embeddings
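
Chunking is typically done with a sliding window: fixed-size pieces that overlap slightly so sentences straddling a boundary appear in both neighbors. A character-level sketch (production pipelines often chunk by tokens instead):

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character chunks for an LLM pipeline.
    The overlap keeps boundary-straddling sentences in both chunks."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, step = [], chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks
```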
 
Module 4 – Working with Embeddings & Vector Databases
- Introduction to embeddings and vector search
- Storing data in FAISS (vector DB)
- Querying your private data with similarity search
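
The core operation a vector database performs is nearest-neighbor search over embeddings. A brute-force NumPy version makes the idea concrete; FAISS does the same search with optimized indexes at scale (the FAISS call in the `__main__` block assumes `faiss-cpu` is installed):

```python
import numpy as np

def top_k_similar(query_vec, doc_vecs, k=3):
    """Brute-force cosine-similarity search over stored embeddings;
    returns the indices of the k closest documents."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q
    return np.argsort(-sims)[:k].tolist()

if __name__ == "__main__":
    # FAISS equivalent (pip install faiss-cpu): inner product on
    # L2-normalized vectors equals cosine similarity.
    import faiss
    docs = np.random.rand(100, 64).astype("float32")
    faiss.normalize_L2(docs)
    index = faiss.IndexFlatIP(64)
    index.add(docs)
    scores, ids = index.search(docs[:1].copy(), 3)
    print(ids)
```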
 
Module 5 – Building a RAG (Retrieval-Augmented Generation) Pipeline
- Combining LLM + FAISS for contextual answers
- Passing retrieved context into prompts
- Testing your first RAG-powered chatbot
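
The whole RAG pipeline reduces to three steps: retrieve the most relevant chunks, splice them into the prompt, and generate. A minimal sketch where `retrieve` and `generate` are stand-ins for the vector search and model inference built in the earlier modules:

```python
def rag_answer(question, retrieve, generate, k=3):
    """Retrieval-Augmented Generation: fetch the k most relevant chunks,
    place them in the prompt as context, and let the LLM answer.
    `retrieve(question, k)` -> list of text chunks; `generate(prompt)` -> str."""
    context = "\n---\n".join(retrieve(question, k))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)
```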
 
Module 6 – From Chatbot to AI Agent
- What makes an AI agent different from a chatbot
- Building tools and APIs for agents
- Using LangChain to enable real-world actions (e.g., schedule queries)
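
What separates an agent from a chatbot is the decide-then-act loop: the model chooses a tool and the program executes it. LangChain wraps this pattern with multi-step reasoning; a framework-free sketch with a hypothetical schedule tool shows the core idea:

```python
def run_agent(user_msg, tools, decide):
    """Single-step agent loop: `decide` (normally the LLM) either returns
    a final answer or names a registered tool plus its input."""
    action = decide(user_msg)
    if action["tool"] is None:
        return action["answer"]
    result = tools[action["tool"]](action["input"])
    return f"Tool result: {result}"

def lookup_schedule(day):
    # Hypothetical tool: a real agent might call a calendar API here.
    schedule = {"monday": "Team sync at 10:00", "friday": "No meetings"}
    return schedule.get(day.lower(), "Nothing scheduled")
```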
 
Module 7 – Deployment with Django
- Building a simple web interface for your chatbot/agent
- Connecting backend inference to the UI
- Running locally and testing with users
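
Wiring the backend to the UI boils down to one endpoint that accepts a prompt and returns the model's reply. A framework-agnostic sketch of that core; in Django it would be called from a view with `request.body` and its return value wrapped in a `JsonResponse`:

```python
import json

def chat_endpoint(body, generate):
    """Parse a JSON request body, run inference, and return a
    JSON-serializable reply. `generate` is the model-inference callable
    built in the earlier modules."""
    data = json.loads(body)
    if "prompt" not in data:
        return {"error": "missing 'prompt'"}
    return {"reply": generate(data["prompt"])}
```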
 
Module 8 – Scaling & Production Readiness
- Deploying with Docker & Kubernetes
- Monitoring and securing AI deployments
- Future directions: multimodal models (text, images, and audio)
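
Containerizing the app is the usual first step toward Kubernetes. An illustrative Dockerfile for the Django service; the file and module names are placeholders for your own project layout:

```dockerfile
# Illustrative Dockerfile; "myproject" is a placeholder project name.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "myproject.wsgi:application"]
```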
 
Target Audience
- Software Developers: Build and deploy AI-powered chatbots and agents.
- Data Scientists: Fine-tune and integrate LLMs into real-world workflows.
- Data Analysts: Enhance analysis and insights with AI-driven tools.
- AI Enthusiasts: Gain hands-on experience creating chatbots and agents.
- Professionals Curious About Generative AI: Learn to apply LLMs and agents in practice.
 
Prerequisites
- Comfortable with Python programming, including writing scripts and managing Python packages.
- Experience with Python libraries commonly used in data science (e.g., NumPy, Pandas) is advantageous.
- Foundational understanding of AI and data science concepts, such as machine learning basics, data preprocessing, and model training and evaluation.
 
Learning Objectives
- Run and customize state-of-the-art open-source LLMs such as Mistral and LLaMA.
- Integrate private data into chatbots using Retrieval-Augmented Generation (RAG).
- Develop AI agents capable of performing real-world tasks with LangChain.
- Build and deploy user-facing applications with Django.
- Scale and secure AI systems using Docker and Kubernetes.
- Understand future directions in Generative AI, including multimodal models.