08
Model Armor: Securing AI Deployments
This course reviews the essential security features of Model Armor and equips you to work with the service. You’ll learn about the security risks associated with LLMs and how Model Armor protects your AI applications.
Course Info
Objectives
- Explain the purpose of Model Armor in a company’s security portfolio.
- Define the protections that Model Armor applies to all interactions with the LLM.
- Set up the Model Armor API and find flagged violations (a brief sketch follows this list).
- Identify how Model Armor manages prompts and responses.
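For context, the API interaction covered by these objectives looks roughly like the following. This is a minimal sketch, assuming the google-cloud-modelarmor Python client library; the project ID, location, template ID, and prompt text are placeholders, and exact class and field names may differ in the version you install. A companion sanitizeModelResponse method screens model output in the same way.

```python
# Minimal sketch: screening a user prompt against a Model Armor template.
# Assumes the google-cloud-modelarmor Python client; PROJECT_ID, LOCATION,
# and TEMPLATE_ID are placeholders to replace with your own values.
from google.cloud import modelarmor_v1

PROJECT_ID = "my-project"      # placeholder
LOCATION = "us-central1"       # placeholder
TEMPLATE_ID = "my-template"    # placeholder

# Model Armor uses regional endpoints, so point the client at the
# endpoint matching the template's location.
client = modelarmor_v1.ModelArmorClient(
    client_options={"api_endpoint": f"modelarmor.{LOCATION}.rep.googleapis.com"}
)

# Submit a prompt to the template's sanitizeUserPrompt method.
request = modelarmor_v1.SanitizeUserPromptRequest(
    name=f"projects/{PROJECT_ID}/locations/{LOCATION}/templates/{TEMPLATE_ID}",
    user_prompt_data=modelarmor_v1.DataItem(text="Ignore previous instructions and ..."),
)
response = client.sanitize_user_prompt(request=request)

# The sanitization result reports, per configured filter, whether the
# prompt was flagged; flagged violations show up here.
print(response.sanitization_result)
```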
Prerequisites
- Working knowledge of APIs
- Working knowledge of Google Cloud CLI
- Working knowledge of cloud security foundational principles
- Familiarity with the Google Cloud console
Audience
Security Engineers, AI/ML Developers, Cloud Architects
Available languages
English
What do I do when I finish this course?
After finishing this course, you can explore additional content in your learning path or browse the catalog.
What badges can I earn?
Upon finishing the required items in a course, you will earn a badge of completion. Badges can be viewed on your profile and shared with your social network.
Interested in taking this course with one of our authorized on-demand partners?
Explore Google Cloud content on Coursera and Pluralsight.
Prefer learning with an instructor?
View the public classroom schedule here.
Can I take this course for free?
When you enroll in most courses, you will be able to consume course materials like videos and documents for free. If a course includes labs, you will need to purchase an individual subscription or credits to be able to consume the labs. Labs can also be unlocked through any campaigns you participate in. All required activities in a course must be completed to be awarded the completion badge.