Module 1 — Responsible AI with GitHub Copilot

Exam weight: 15–20% · Study time: 45–60 minutes · Lessons: Risks & limitations · Ethical use · Practice

Exam tactic. Roughly one in five GH-300 questions tests responsible AI. The concepts are straightforward, and careful study here secures easy points on exam day.

L01 — Risks and limitations of generative AI

Why is responsible AI on the certification exam?

GitHub Copilot is a generative AI tool. It is powerful, but it is not infallible. The GH-300 exam tests whether you understand both the benefits and the risks of Copilot — and whether you can act responsibly. Responsible AI is not just ethics in theory; it is the practical skill of recognizing when AI output is trustworthy and when it requires critical review.

Risks of generative AI

Hallucinations. Generative AI can produce information that looks plausible but is wrong. GitHub Copilot can suggest code that looks correct but contains logic errors, calls non-existent APIs, or imports libraries that do not exist.
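A hypothetical illustration (the endpoint and the wrong call are invented for this sketch): a hallucinated suggestion often mixes a real library with an API that does not exist.

```python
import requests

# Hypothetical endpoint, for illustration only.
URL = "https://api.example.com/items"

# A hallucinated suggestion can look plausible, yet the requests
# library has no get_json() function:
#   data = requests.get_json(URL)

# The real API takes two steps: perform the request, then decode the body.
response = requests.get(URL, timeout=10)
response.raise_for_status()
data = response.json()
```

The wrong call would fail only at runtime with an AttributeError, which is exactly why careful reading and testing matter.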

Bias. Copilot is trained on a large body of public code. That training data can contain biased, outdated, or low-quality code, and Copilot can reproduce those patterns in its suggestions.

Security risks. Copilot can suggest code containing security vulnerabilities such as SQL injection, XSS, or insufficient input validation.
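The classic example is SQL injection. A minimal sketch in Python (table and function names are invented for illustration), contrasting the vulnerable pattern Copilot might suggest with the safe, parameterized form:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, name: str) -> list:
    # Vulnerable: user input is concatenated straight into the SQL string,
    # so an input like "x' OR '1'='1" returns every row.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, name: str) -> list:
    # Safe: a parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```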

Copyright risks. Copilot can produce code that resembles or is identical to code in its training data.

Limitations of generative AI

Limitations are different from risks: they are technical characteristics of how the AI works, not bugs. Typical examples include a finite context window, a training-data cutoff date, and no genuine understanding of your project's intent.

Harms and mitigation strategies

| Harm type | Description | Mitigation |
| --- | --- | --- |
| Code defects | Incorrect or non-working code | Testing, validation, code review |
| Security vulnerabilities | SQL injection, XSS, missing validation | Security warnings, SAST tools |
| Licensing issues | Code resembles licensed source | Duplication Detection, license review |
| Privacy risks | Sensitive data ends up in Copilot's context | Content Exclusions, privacy settings |
| Biased or low-quality code | Training data problems repeat in output | Code review policy, coding standards |

L02 — Ethical AI use and the six responsible AI principles

Ethics in AI tooling is not a philosophical question — it is a professional skill. As a developer, you are accountable for the code you ship, regardless of who or what wrote it. Microsoft and GitHub commit to six principles of responsible AI that the GH-300 exam expects you to recognize.

Fairness

AI systems must not discriminate against people or groups. Training data can contain historical bias that is then reflected in AI output.

Reliability & safety

AI systems must work consistently and safely, even in unexpected situations. The official material distinguishes the two ideas in the name: reliability means the system performs consistently, while safety means it avoids causing harm.

Copilot suggestions can be unreliable, so the developer carries the responsibility of validation.

Privacy & security

AI systems must not compromise user privacy or organizational security. When using Copilot, take care that sensitive data does not enter the context.
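One concrete habit, sketched below with a hypothetical variable name: keep secrets out of source files entirely, so they can never appear in the context Copilot reads.

```python
import os

# Hardcoding a credential places it in a file that Copilot may read
# as context (and that ends up in version control):
#   API_KEY = "sk-live-abc123"   # never do this

# Reading it from the environment keeps it out of the source tree.
API_KEY = os.environ["PAYMENT_API_KEY"]  # hypothetical variable name
```

Content Exclusions (see the harms table above) offer the complementary, organization-level control.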

Inclusiveness

AI systems must be accessible and fair to everyone, including people with disabilities and users across languages, regions, and skill levels.

Transparency

AI behavior should be understandable. The Copilot user must know how it works, what data it uses, and what its limitations are.

Accountability

People are accountable for AI output. "Copilot wrote it" does not remove the developer's responsibility for the code that ships.

Ethical use in practice

When NOT to use Copilot:

  - When organizational or legal policy prohibits AI-generated code, for example in regulated or classified codebases.
  - When you cannot validate the output yourself, since accountability stays with you.

When Copilot use requires extra caution:

  - Security-critical code such as authentication, cryptography, and input handling.
  - Work where licensing constraints apply or where privacy-sensitive data could enter the context.

Validating AI output

Validation is the heart of responsible AI use. The exam asks why and how you validate AI-generated code:

  1. Read the code carefully; do not accept suggestions automatically.
  2. Test it: write or generate tests for every component the AI produces (see the sketch after this list).
  3. Code review: use a colleague or Copilot Code Review. Tools like SonarQube add additional confidence in code quality.
  4. Security check: run SAST tools such as CodeQL.
  5. Validate the logic: confirm the code does what it should do.
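A minimal sketch of step 2, assuming a hypothetical AI-generated helper slugify() that we want to pin down with tests before accepting it:

```python
import re

def slugify(title: str) -> str:
    # Hypothetical AI-generated helper under validation.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

# pytest-style tests encode the behavior we actually expect.
def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("a  --  b") == "a-b"

def test_slugify_empty_input():
    assert slugify("") == ""
```

If a generated suggestion fails tests like these, it goes back for revision, not into the codebase.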

Exam-ready checklist (M01)

  - Name the four risk areas: hallucinations, bias, security vulnerabilities, copyright.
  - Distinguish risks (what can go wrong) from limitations (technical characteristics of the tool).
  - List the six responsible AI principles: fairness, reliability & safety, privacy & security, inclusiveness, transparency, accountability.
  - Explain why the developer, not Copilot, remains accountable for shipped code.
  - Recall the five steps for validating AI-generated output.

Common mistakes on the exam

Official source documents