Course curriculum

    1. Case Study: Air Canada’s Reputation Risk

    2. 3 AI Security Tiers: Public vs. Enterprise vs. On‑Premise

    3. Tier 1: Public LLMs

      FREE PREVIEW
    4. Tier 2: Enterprise, Your Go-To For Work

    5. Tier 3: Maximum Security, On-Premise

    6. Exercise: Which Security Tier?

    7. AI Safety Quiz 2

    1. Case Study: Replika's Data Privacy Failure

    2. Data Privacy and Sensitive Information in AI

      FREE PREVIEW
    3. Data Privacy Essentials: Protecting Confidential Company Information

    4. Writing Safe, Effective Prompts and Avoiding AI Hallucinations

      FREE PREVIEW
    5. Demo: Using Official Sources and Knowledge Bases with AI

      FREE PREVIEW
    6. Exercise: 4 LLMs Compared, Hallucination Hunt

    7. Prompt Engineering: Write Good Prompts and Get Good Results

      FREE PREVIEW
    8. Prompt Engineering Special Tips

    9. Quick Demo: Verifying Different Stats on Fast Cars

    10. Exercise & Demo of Good Prompting and Researching AI Safety Updates

    11. Quiz 3

    1. Case Study: Bias and Discrimination in Amazon Hiring Model

    2. AI Ethics and Bias: Understanding and Preventing Stereotypes

    3. Exercise: Testing 4 LLMs for Bias

    4. Human Responsibility and Oversight in AI Use

    5. Human-in-the-loop Safeguard

    6. Frameworks for Ethical and Responsible AI Use

    7. Copyright, Ownership, and Plagiarism in AI

    8. Exercise: Guess The Ownership

    9. Demo: Plagiarism on Harry Potter?

    10. Quiz 4

    1. Regulatory Frameworks and Industry Limits on AI Use

    2. EU AI Act - What Is It?

    3. EU AI Act - 4 Levels of AI Risk

      FREE PREVIEW
    4. EU AI Act High-Risk AI Compliance Obligations

    5. EU AI Act Limited-Risk AI Compliance Obligations

    6. EU AI Act Minimal-Risk AI Compliance Obligations

    7. What are Other Countries Doing...

    8. AI Safety Regulation Quiz (Quiz 5)

    1. Verifying AI Outputs: Accuracy, Bias, and Compliance

About this course

  • $19.99
  • 54 lessons
  • 3.5 hours of video content

The Silent Threat Sitting in Your Employee’s Browser

Right now, you or your employees are using AI on the job.

Whether it is drafting a client email, debugging code, or summarizing a confidential meeting strategy, Generative AI has become the invisible co-worker in your organization.

But here is the problem: Nearly 50% of employees admit to using AI tools without their employer's knowledge.

This is called "Shadow AI," and it is currently the single biggest cybersecurity and legal blind spot facing modern businesses.

When a well-meaning employee pastes a client’s sensitive financial data, your proprietary source code, or a draft of a confidential press release into a public Large Language Model (LLM) like the free version of ChatGPT, that data leaves your control. In many cases, it is used to train the model, meaning your trade secrets could effectively become public knowledge.

It happened to Samsung. Engineers accidentally leaked proprietary code by pasting it into a public chatbot to check for errors. It happened to Air Canada. A chatbot promised a refund policy that didn't exist, and the courts ruled the company was liable for the AI's "hallucination."

Is your team next?

You cannot afford to ban AI; the competitive advantage is too great. But you cannot afford to let your staff use it blindly. You need to bridge the gap between "Don't use it" and "Use it safely."

The Solution: Practical, Standardized AI Safety Training

This course is the solution to the Shadow AI problem. It is designed for employees and anyone who wants to use AI safely, and for business owners, HR directors, and training managers who need a plug-and-play solution to upskill their workforce on the risks and responsibilities of using LLMs.

We move beyond vague warnings and provide a concrete operational framework that employees can apply immediately to their daily workflows.

What Your Team Will Learn

This course breaks down complex cybersecurity and legal concepts into digestible, actionable lessons.

  • The "3-Tier" Framework: A simple, traffic-light system I have developed to help employees instantly decide which AI tool is safe for which type of data (Public vs. Enterprise vs. Secure).

  • How to Stop Data Leakage: We teach the art of "Data Sanitization"—how to strip PII (Personally Identifiable Information) from prompts so employees can use AI's power without exposing client secrets.

  • Avoiding Legal Liability: Using the Air Canada case study, we demonstrate why "The AI said so" is not a legal defense, and how to keep a "Human-in-the-Loop" to protect the company.

  • The Hallucination Trap: How to spot when an AI is lying, fabricating facts, or citing non-existent court cases.

  • Copyright & IP Dangers: Understanding who owns the output, and why using AI to generate code or content carries hidden plagiarism risks.

  • Bias & Ethics: How to recognize when an AI is reinforcing harmful stereotypes in hiring or customer service.

Who This AI Safety Course Is For

  • Business Owners who are terrified of a data breach but don't want to lose the productivity gains of AI.

  • HR & L&D Managers looking for a standardized "onboarding" course for AI usage policy.

  • IT Managers struggling to combat Shadow AI and needing a way to educate non-technical staff.

  • Team Leaders who want to encourage innovation but ensure compliance.

Why This AI Safety Course?

Most AI courses focus on "How to write better prompts" or "How to make money with AI."

This is the missing manual on SAFETY.

We don't just talk theory. We provide exercises on data sorting, anonymization challenges, and hallucination hunting. By the end of this course, your employees won't just be using AI faster—they will be using it smarter.


Key Topics in this AI Safety Course:

  • AI safety & governance

  • Responsible AI usage

  • AI compliance basics

  • Shadow AI & Workplace Risk

  • Workplace AI policy

  • Generative AI & LLM Risks

  • ChatGPT security risks

  • Microsoft Copilot safety

  • Claude AI security

  • Data Protection & Privacy

  • AI data leakage prevention

  • Data sanitization techniques

  • Prompt anonymization

  • AI legal liability

  • AI hallucination risks

  • AI copyright & IP risks

  • Plagiarism risks with AI

  • AI bias detection

  • Ethical AI practices

  • Responsible AI decision-making

Your Data is Your Most Valuable Asset. Don't let it leak into a public chatbot.

Enroll your team today. Turn your workforce from your biggest security risk into your strongest line of defense.

Use AI at Work Safely and Protect Yourself and Your Company