Securing Generative AI (Video Course)
Securing Generative AI (Video Course)
English | MP4 | AVC 1280×720 | AAC 44KHz 2ch | 3h 31m | 845 MB
Securing Generative AI (Video Course): Get the strategies, methodologies, tools, and best practices for AI security.
- Explore security for deploying and developing AI applications, RAG, agents, and other AI implementations
- Learn hands-on with practical skills of real-life AI and machine learning cases
- Incorporate security at every stage of AI development, deployment, and operation
This Securing Generative AI (Video Course) offers a comprehensive exploration into the crucial security measures necessary for the deployment and development of various AI implementations, including large language models (LLMs) and Retrieval-Augmented Generation (RAG). It addresses critical considerations and mitigations to reduce the overall risk in organizational AI system development processes. Experienced author and trainer Omar Santos emphasizes secure by design principles, focusing on security outcomes, radical transparency, and building organizational structures that prioritize security. You will be introduced to AI threats, LLM security, prompt injection, insecure output handling, and Red Team AI models. The course concludes by teaching you how to protect RAG implementations. You learn about orchestration libraries such as LangChain, LlamaIndex, and others, as well as securing vector databases, selecting embedding models, and more.