AI Agentic Security &
Data Governance
Protect your organisation's AI systems from emerging threats. Master cybersecurity fundamentals, AI agent security, vulnerability scanning, and enterprise data governance. Hands-on training with real security tools.
Workshop Dates
Register your interest to be notified when dates are announced.
Dates Coming Soon
Register your interest and be the first to know when we announce workshop dates for this programme.
Register Your Interest
Private Corporate Training
Looking to secure your entire organisation's AI infrastructure?
Exclusive sessions available for groups of 25-35 pax per class. Fully HRDC-claimable.
5 Core AI Threat Areas. Hands-On.
The AI-specific attack surfaces every security professional must master. Each one is taught with live attack-and-defend exercises.
Prompt Injection
Defending against direct & indirect injection
Prompt Theft
Protecting system prompts & IP
Secret & Credential Exposure
Preventing API key & token leaks
Insecure Tool Use
Hardening agent tool-calling surfaces
Sensitive Data Disclosure
Stopping accidental PII & PDPA leaks
What You'll Build
Practical security projects you will complete during the workshop.
AI Agent Threat Scanner
Build an automated scanner that identifies vulnerabilities in AI agent configurations, API endpoints, and data flows.
OWASP AI Security Audit Tool
Implement OWASP Top 10 for LLM Applications checks against your AI systems with automated reporting.
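The audit tool above can be sketched as a small checklist runner. This is a minimal illustration only: the `Check` registry and the two example rules are assumptions for demonstration, not a complete OWASP rule set (category IDs follow the OWASP Top 10 for LLM Applications naming, LLM01-LLM10).

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    category: str                  # e.g. "LLM01: Prompt Injection"
    name: str
    run: Callable[[dict], bool]    # returns True when the check passes

def audit(config: dict, checks: list[Check]) -> list[dict]:
    """Run every registered check against a system config and collect a report."""
    return [
        {"category": c.category, "name": c.name, "passed": c.run(config)}
        for c in checks
    ]

# Two illustrative checks (assumptions, not a production rule set):
checks = [
    Check("LLM02: Insecure Output Handling", "output sanitiser enabled",
          lambda cfg: bool(cfg.get("output_sanitiser"))),
    Check("LLM08: Excessive Agency", "tool allowlist is non-empty",
          lambda cfg: bool(cfg.get("tool_allowlist"))),
]

report = audit({"output_sanitiser": True, "tool_allowlist": []}, checks)
# The empty allowlist fails the LLM08 check; the sanitiser check passes.
```

In the workshop, the same pattern is extended with automated reporting across all ten OWASP categories.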
Prompt Injection Defence
Build multi-layered defences against direct and indirect prompt injection, jailbreaking, and adversarial inputs targeting AI agents.
Prompt Theft Defence
Detect and block attempts to extract proprietary system prompts, custom instructions, and embedded business logic from your AI agents.
AI Secret & Credential Exposure Detection
Scan code, prompts, and agent contexts for leaked API keys, tokens, and credentials. Prevent the most common cause of AI security incidents.
API Security Gateway
Deploy a security gateway that monitors, rate-limits, and validates all AI API traffic with anomaly detection.
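The rate-limiting layer of such a gateway is commonly built on a token bucket. The sketch below is a minimal, single-process illustration (the rate and capacity values are arbitrary assumptions); a production gateway would back this with shared state and add the monitoring and anomaly-detection layers described above.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter for AI API traffic (illustrative sketch)."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate                 # tokens refilled per second
        self.capacity = capacity         # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
burst = [bucket.allow() for _ in range(4)]  # only the first 2 calls in a burst succeed
```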
HRDC Training Architecture
A structured, hands-on approach to mastering AI security and data governance.
Day 1: Cybersecurity Foundations & AI Threat Landscape
Understanding the core security mechanics and the AI-specific threat landscape.
Core Theory
- Cybersecurity Fundamentals: CIA triad, attack surfaces, threat modelling, defence-in-depth strategy. The essential foundation before adding the AI layer.
- AI-Specific Threat Landscape: Prompt injection, data poisoning, model theft, adversarial attacks, hallucination exploitation. How AI agents create new attack vectors.
- OWASP Top 10 for LLM Applications: Deep dive into each vulnerability category — prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft.
- Malaysian Regulatory Context: PDPA compliance, BNM RMiT for financial services, Cybersecurity Act 2024, industry-specific requirements and how AI intersects with existing regulations.
Hands-On Labs
Run direct and indirect prompt injection attacks against a live AI agent. Build layered defences — input filtering, output validation, and tool-allowlists — and re-test until each attack is blocked.
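Two of the layers built in this lab can be sketched as follows. The injection patterns and tool names are illustrative assumptions only — real defences combine heuristic filters like this with output validation and model-level safeguards.

```python
import re

# Heuristic input filter: flags common direct-injection phrasings.
# Pattern list is deliberately small and illustrative, not exhaustive.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"reveal your system prompt",
]

# Hypothetical agent tools; anything outside this set is refused.
ALLOWED_TOOLS = {"search_kb", "create_ticket"}

def screen_input(user_text: str) -> bool:
    """Return True when the input passes the heuristic injection filter."""
    lowered = user_text.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def screen_tool_call(tool_name: str) -> bool:
    """Allowlist check applied before the agent executes any tool."""
    return tool_name in ALLOWED_TOOLS
```

In the lab, each layer is re-tested against live attacks until the agent blocks them all.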
Use known prompt-extraction techniques to attempt to leak proprietary system prompts and embedded business logic. Implement counter-measures that protect IP without breaking legitimate use.
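One simple counter-measure explored here is an output filter that blocks replies reproducing the system prompt. This sketch uses longest-common-substring similarity as a cheap signal; the system prompt and threshold are hypothetical placeholders.

```python
from difflib import SequenceMatcher

# Hypothetical system prompt used only for this illustration.
SYSTEM_PROMPT = "You are AcmeBot. Never discuss internal pricing logic."

def leaks_system_prompt(reply: str, threshold: float = 0.6) -> bool:
    """Flag replies that substantially reproduce the system prompt.

    Uses the longest common substring relative to the prompt length
    as a cheap similarity signal (a real defence would layer more checks).
    """
    match = SequenceMatcher(None, reply.lower(), SYSTEM_PROMPT.lower()) \
        .find_longest_match(0, len(reply), 0, len(SYSTEM_PROMPT))
    return match.size / max(len(SYSTEM_PROMPT), 1) >= threshold
```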
Scan repositories, prompts, and agent contexts for leaked API keys, tokens, and credentials. Apply rotation, vaulting, and least-privilege patterns to prevent the most common AI security incident.
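The core of such a scan is pattern matching over text. The signatures below are illustrative only — production scanners such as gitleaks or trufflehog ship hundreds of rules plus entropy checks.

```python
import re

# Illustrative secret signatures; not a complete rule set.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9_\-]{16,}['\"]"),
    "bearer_token": re.compile(r"(?i)bearer\s+[A-Za-z0-9\-_.]{20,}"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of secret patterns found in a prompt, file, or agent context."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(text)]

findings = scan_text('api_key = "sk_live_abcdefghij12345678"')
# findings names the matched pattern, ready for alerting and rotation workflows.
```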
Map your AI infrastructure end-to-end. Identify exposed endpoints, misconfigured tool surfaces, weak authentication, and PDPA-relevant data flow risks across the agent lifecycle.
Day 2: AI Agent Security & Data Governance
Securing AI agents, implementing governance, and advanced penetration testing.
Core Theory
- Securing AI Agents: Authentication, authorisation, sandboxing, output validation, tool-use restrictions. How to prevent AI agents from being weaponised or manipulated.
- Data Governance Frameworks: Data classification, access control matrices, retention policies, data lineage tracking, right-to-erasure compliance for AI training data.
- API & Integration Security: Securing AI API endpoints, rate limiting, input validation, output sanitisation, webhook security, and MCP server hardening.
- Security Architecture for AI Systems: Zero-trust principles applied to AI, network segmentation, secrets management, encrypted communication, and secure model deployment patterns.
Hands-On Labs
Conduct structured penetration tests against a live AI chatbot: prompt injection attacks, prompt theft attempts, sensitive data extraction, and privilege escalation through tool misuse.
Set up automated, scheduled vulnerability scans across your AI stack — endpoints, agent surfaces, and supporting infrastructure — with alerting for new exposures.
Implement input sanitisation, output filtering, rate limiting, and anomaly detection for an AI agent. Test each layer against known attack patterns and tune thresholds.
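The output-filtering layer in this lab can be sketched as a PDPA-minded redactor that strips common Malaysian PII from agent replies before they leave the system. The patterns are illustrative assumptions, not a complete PII taxonomy.

```python
import re

# Illustrative PII signatures for Malaysian contexts (sketch only).
PII_PATTERNS = {
    "nric": re.compile(r"\b\d{6}-\d{2}-\d{4}\b"),      # MyKad format YYMMDD-PB-####
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b01\d-?\d{7,8}\b"),          # Malaysian mobile numbers
}

def redact(reply: str) -> str:
    """Replace detected PII in an agent reply with labelled placeholders."""
    for name, pat in PII_PATTERNS.items():
        reply = pat.sub(f"[REDACTED {name.upper()}]", reply)
    return reply

clean = redact("Contact Aisha at aisha@example.com, NRIC 900101-14-5678.")
```

In the lab, filters like this are tuned against known leak patterns alongside rate limiting and anomaly detection.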
Audit how secrets flow through your AI workflows. Implement rotation, vaulting, and least-privilege access for the credentials AI agents use to call internal systems.
Day 3 (Optional): Enterprise AI Security Operations & Strategy
Specifically designed for corporate consulting engagements. Covers:
- Security operations design for AI-augmented environments
- Threat intelligence integration with AI systems
- Red team / blue team exercises for AI security
- Building an AI security policy framework
- Vendor risk assessment for AI tools and platforms
- Board-level security reporting and risk communication
- Developing an organisational AI security roadmap
Who Should Attend?
This hands-on intensive is designed for technical professionals responsible for AI security and data governance.
IT Security Teams & CISOs
Security professionals responsible for securing AI deployments and ensuring regulatory compliance.
Software Engineers & DevOps
Developers building AI-powered applications who need to implement security-by-design principles.
Data Protection Officers
Compliance professionals managing data governance, PDPA requirements, and AI data handling policies.
CTOs & Technical Leaders
Decision-makers evaluating AI security risks and building organisational security strategies.
Experience the Workshop
A hands-on, high-energy environment where teams actually build, not just listen.
Our People
Learn from Malaysia's top AI security practitioners.
Shah Mijanur Rahman
Cybersecurity & Agentic Security Expert
Expert in cybersecurity, data pipelines, and AI agent security. Specialist in securing enterprise AI deployments, conducting penetration testing, and implementing data governance frameworks. Optimises how AI agents retrieve and process internal knowledge securely.
Detailed FAQ
Addressing your technical, logistical, and HRDC inquiries.
Course Fee
Transparent pricing for your AI security transformation journey.
Self-Funded (non-HRDC)
Kickstart your AI Security journey
- 2 full days of in-person intensive training
- Complete programme materials, tools and templates
- Certificate of Completion
- 3-month post-training WhatsApp group support
- Admission to wider AI Learning Community
HRDC-Claimable
Upskill with your company's HRDC grant
- 2 full days of in-person intensive training
- Complete programme materials, tools and templates
- Certificate of Completion
- 3-month post-training WhatsApp group support
- Admission to wider AI Learning Community
About AITraining2U
AITraining2U was established by industry professionals to close the divide between academic theory and practical business demands. Our mission is to ensure that AI education translates directly into measurable, real-world results. Since 2025, we have upskilled over 1,200 professionals across Malaysia in AI, Business Transformation, Agentic Automation, and Vibe Coding.
Driven by a core philosophy of "100% focus on success", our expert faculty delivers highly interactive, hands-on learning experiences focused entirely on implementation. We don't just teach prompt engineering; we teach you how to architect robust, autonomous systems.
Whether through bespoke corporate masterclasses or intensive public bootcamps, we actively partner with enterprise leaders, technical specialists, and government bodies to accelerate their digital transformation journey and build confident, AI-native organisations.