Overview of the Course
This cutting-edge program offers an exhaustive exploration of Generative AI, Large Language Models (LLMs), and the underlying Transformer architecture. Participants will master the mechanics of Foundation Models, learning to leverage Natural Language Processing (NLP), Prompt Engineering, and Fine-tuning techniques. By integrating tools like LangChain, Vector Databases, and Retrieval-Augmented Generation (RAG), this course equips professionals to build and deploy intelligent AI Agents and Generative Pre-trained Transformers for complex enterprise solutions.
The curriculum spans from the basics of generative modeling to advanced deployment strategies. Attendees will explore the differences between proprietary models like GPT-4 and open-source alternatives like Llama, while diving deep into multimodal AI, ethics, and the technical workflows required to customize AI for specific industrial domains.
Who should attend the training
- AI Engineers and Data Scientists
- Product Managers in the Tech sector
- Software Architects and Developers
- Digital Transformation Leads
- Content Strategists and Creative Professionals
- Innovation Managers
Objectives of the training
- To understand the core architecture and mathematical principles of Transformers.
- To master advanced prompt engineering techniques for maximizing model output quality.
- To implement Retrieval-Augmented Generation (RAG) for grounding AI in private data.
- To explore fine-tuning methodologies for specialized domain tasks.
- To evaluate and mitigate risks associated with hallucinations, bias, and security in LLMs.
Personal benefits
- Position yourself at the forefront of the most significant technological shift in decades.
- Develop the ability to automate complex creative and analytical workflows.
- Gain hands-on experience with the latest AI frameworks and API integrations.
- Enhance your problem-solving toolkit with generative design patterns.
Organizational benefits
- Drastically increase employee productivity through AI-augmented workflows.
- Reduce operational costs by automating content generation and customer support.
- Unlock new product capabilities by integrating generative features into existing software.
- Establish a robust framework for ethical and secure AI adoption within the enterprise.
Training methodology
- Technical lectures and architectural deep-dives
- Interactive coding labs using cloud-based GPU environments
- Prompt engineering "sandboxes" for iterative testing
- Collaborative design of RAG-based systems
- Real-world case study evaluations and ethics workshops
Trainer Experience
Our lead instructors are AI researchers and engineers who have actively contributed to the development of generative pipelines for global tech firms. They possess deep expertise in PyTorch, Hugging Face ecosystems, and the deployment of production-grade LLM applications.
Quality Statement
We maintain the highest standards of technical instruction by ensuring our course content is updated in real-time to match the weekly advancements in the Generative AI field. Every participant receives rigorous, evidence-based training designed for immediate workplace application.
Tailor-made courses
We provide bespoke training modules that focus on your organization's specific data privacy requirements and industry-specific use cases. Whether you need to focus on local model hosting or specific multimodal applications, we can adapt the syllabus to your strategic goals.
Course duration: 5 days
Training fee: USD 1500
Module 1: Introduction to the Generative AI Landscape
- Defining Generative AI vs. Discriminative AI and their unique use cases
- Evolution of Language Models: From N-grams to GPT-4
- Overview of the Foundation Model paradigm and scaling laws
- Exploring the capabilities and limitations of current state-of-the-art models
- High-level workflow of pre-training, supervised fine-tuning, and RLHF
- Practical session: Navigating the Hugging Face ecosystem to explore and test pre-trained models
Module 2: The Transformer Architecture Deep Dive
- Understanding the Encoder-Decoder framework and its variants
- Detailed mechanics of the Self-Attention mechanism and Query-Key-Value vectors
- Positional Encoding and its role in understanding sequence order
- Layer Normalization and Feed-Forward networks within the Transformer block
- Tokenization strategies: Byte Pair Encoding (BPE) and WordPiece
- Practical session: Visualizing attention weights in a live Transformer model to see how "context" is built
Module 3: Masterclass in Prompt Engineering
- Core principles of Few-Shot, Zero-Shot, and One-Shot prompting
- Implementing Chain-of-Thought (CoT) and Tree-of-Thought reasoning patterns
- Using system prompts to define persona, constraints, and output formats
- Advanced techniques: Prompt Chaining and Iterative Refinement
- Techniques for reducing hallucinations through structured output requests
- Practical session: Designing a complex multi-step prompt to automate a technical reporting task
Module 4: Working with LLM APIs and Open Source Models
- Comparing API-based models (OpenAI, Anthropic) vs. Self-hosted models (Llama, Mistral)
- Managing API parameters: Temperature, Top-P, and Frequency Penalty
- Understanding token limits, context windows, and cost optimization
- Setting up local environments for running open-source LLMs using Ollama
- Strategies for selecting the right model size for specific latency requirements
- Practical session: Building a Python script to interact with various LLM APIs and compare their outputs
Module 5: Retrieval-Augmented Generation (RAG) Systems
- The architecture of RAG: Why context injection beats model memory
- Introduction to Vector Databases: Pinecone, Milvus, and Weaviate
- Embedding models and the process of converting text into high-dimensional vectors
- Semantic search vs. keyword search: Implementation of retrieval strategies
- Managing long-form documents: Chunking, Overlapping, and Metadata filtering
- Practical session: Building a "Chat with your PDF" application using LangChain and a vector store
Module 6: Fine-Tuning and Model Adaptation
- Distinguishing between RAG, Fine-Tuning, and Continued Pre-training
- Introduction to Parameter-Efficient Fine-Tuning (PEFT) and LoRA
- Preparing high-quality datasets for Supervised Fine-Tuning (SFT)
- Hardware requirements and optimization techniques for training on limited GPUs
- Evaluating fine-tuned models using benchmarks and human-in-the-loop
- Practical session: Fine-tuning a small-scale LLM on a specific niche dataset using LoRA
Module 7: Multimodal Generative AI
- Architecture of Diffusion Models for high-fidelity image generation
- Understanding CLIP: How AI bridges the gap between text and images
- Exploration of Video Generation (Sora-style) and Audio Synthesis models
- Use cases for Vision-Language Models (VLM) in analyzing visual data
- Integrating multimodal outputs into automated marketing and design workflows
- Practical session: Using Stable Diffusion and DALL-E 3 to create consistent brand assets via API
Module 8: Building Autonomous AI Agents
- Definition of AI Agents: Autonomy, Planning, and Tool Use
- The ReAct framework: Combining Reasoning and Acting in LLMs
- Giving LLMs "Hands": Enabling models to execute Python code or SQL queries
- Multi-agent orchestration: Having different AI roles collaborate on a single goal
- Memory management for agents: Long-term vs. Short-term state retention
- Practical session: Developing an autonomous research agent that browses the web and writes a summary
Module 9: LLMOPs and Production Deployment
- Strategies for deploying LLMs: Serverless functions vs. dedicated GPU clusters
- Monitoring LLM performance: Latency, throughput, and "Drift" in generation
- Guardrails and Filtering: Implementing content moderation at the input and output levels
- Model versioning and A/B testing generative features in live products
- Cost management: Caching common responses and token usage tracking
- Practical session: Deploying a RAG application using a modern cloud framework (e.g., Streamlit or Vercel)
Module 10: Ethics, Governance, and AI Security
- Identifying and mitigating social bias and toxicity in generated content
- Security threats: Prompt Injection, Jailbreaking, and Data Leakage
- Intellectual Property and Copyright issues in generative training data
- Implementing "Human-in-the-loop" for high-stakes AI decision making
- Compliance with emerging international AI regulations and corporate policies
- Practical session: Performing a "Red Teaming" exercise to identify vulnerabilities in an AI chatbot
Requirements:
- Participants should be reasonably proficient in English.
- Applicants must live up to Armstrong Global Institute admission criteria.
Terms and Conditions
1. Discounts: Organizations sponsoring Four Participants will have the 5th attend Free
2. What is catered for by the Course Fees: Fees cater for all requirements for the training – Learning materials, Lunches, Teas, Snacks and Certification. All participants will additionally cater for their travel and accommodation expenses, visa application, insurance, and other personal expenses.
3. Certificate Awarded: Participants are awarded Certificates of Participation at the end of the training.
4. The program content shown here is for guidance purposes only. Our continuous course improvement process may lead to changes in topics and course structure.
5. Approval of Course: Our Programs are NITA Approved. Participating organizations can therefore claim reimbursement on fees paid in accordance with NITA Rules.
Booking for Training
Simply send an email to the Training Officer on training@armstrongglobalinstitute.com and we will send you a registration form. We advise you to book early to avoid missing a seat to this training.
Or call us on +254720272325 / +254725012095 / +254724452588
Payment Options
We provide 3 payment options, choose one for your convenience, and kindly make payments at least 5 days before the Training start date to reserve your seat:
1. Groups of 5 People and Above – Cheque Payments to: Armstrong Global Training & Development Center Limited should be paid in advance, 5 days to the training.
2. Invoice: We can send a bill directly to you or your company.
3. Deposit directly into Bank Account (Account details provided upon request)
Cancellation Policy
1. Payment for all courses includes a registration fee, which is non-refundable, and equals 15% of the total sum of the course fee.
2. Participants may cancel attendance 14 days or more prior to the training commencement date.
3. No refunds will be made 14 days or less before the training commencement date. However, participants who are unable to attend may opt to attend a similar training course at a later date or send a substitute participant provided the participation criteria have been met.
Tailor Made Courses
This training course can also be customized for your institution upon request for a minimum of 5 participants. You can have it conducted at our Training Centre or at a convenient location. For further inquiries, please contact us on Tel: +254720272325 / +254725012095 / +254724452588 or Email training@armstrongglobalinstitute.com
Accommodation and Airport Transfer
Accommodation and Airport Transfer is arranged upon request and at extra cost. For reservations contact the Training Officer on Email: training@armstrongglobalinstitute.com or on Tel: +254720272325 / +254725012095 / +254724452588