Module 4 — Prompt Engineering and Context Management
Exam tactic. Prompt engineering is hard to learn without practice. The exam usually frames it as recognition: "Which of the following is a good example of few-shot prompting?" So use Copilot actively while studying.
L01 — Effective prompt structure and context
The four parts of an effective prompt
- Role. Tell Copilot what role to take. "Act as an experienced Java developer specializing in security..."
- Task. Describe clearly what you want. "Write unit tests for this method..."
- Context. Provide background. "This service runs on Spring Boot 3.2 and Java 21..."
- Format. Specify the output shape. "Reply in JSON format..."
How Copilot collects context
Automatic context: active file, code around the cursor, open files (limited), imports, and adjacent comments — especially the comment directly above the cursor.
Manual context (# references): #file:src/api/UserController.java, #selection, #codebase.
Context optimization
Add context:
- Open relevant files in the IDE before starting.
- Add references with # (e.g. #file, #selection, #codebase).
- Write clear comments right before the code you want generated.
- Name variables and functions descriptively — they are context too.
Restrict context (privacy):
- Close sensitive files when not needed.
- Use Content Exclusions at org or repo level.
- Check what is open before pasting sensitive data into Chat.
Anatomy in one example
ROLE + TASK + CONTEXT + FORMAT = effective prompt
Weak: "Write a test"
Better: "Write JUnit 5 unit tests for UserService.createUser().
Test successful creation, duplicate email, and missing required field.
Use Mockito to mock UserRepository."
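The payoff of the "better" prompt is that its three named scenarios map directly to three tests. Below is a self-contained sketch of the kind of code involved; UserService, UserRepository, and the field names are hypothetical, and a hand-rolled in-memory fake stands in for Mockito so the sketch runs standalone.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the code the "better" prompt targets.
// A hand-rolled fake replaces the Mockito mock so this compiles on its own.
public class UserServiceSketch {

    interface UserRepository {
        boolean existsByEmail(String email);
        void save(String email, String name);
    }

    static class UserService {
        private final UserRepository repo;
        UserService(UserRepository repo) { this.repo = repo; }

        // The three prompt scenarios: success, duplicate email, missing field.
        void createUser(String email, String name) {
            if (email == null || name == null) {
                throw new IllegalArgumentException("missing required field");
            }
            if (repo.existsByEmail(email)) {
                throw new IllegalStateException("duplicate email: " + email);
            }
            repo.save(email, name);
        }
    }

    // In-memory fake standing in for the mocked repository.
    static class InMemoryRepo implements UserRepository {
        final Map<String, String> users = new HashMap<>();
        public boolean existsByEmail(String email) { return users.containsKey(email); }
        public void save(String email, String name) { users.put(email, name); }
    }
}
```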
L02 — Zero-shot, few-shot, and advanced prompting
Zero-shot prompting
Instructions without examples. Use it for simple, well-described tasks.
// Zero-shot: instruction only, no examples
// Generate an email validation method using RFC 5322.
// Return true if valid, otherwise false.
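A zero-shot prompt like this might yield something along the lines of the sketch below. The class name is an assumption, and the regex is a simplified approximation rather than a full RFC 5322 implementation.

```java
import java.util.regex.Pattern;

// Sketch of what the zero-shot prompt above might produce.
// The pattern is a simplified approximation, not full RFC 5322.
public class EmailValidator {

    private static final Pattern EMAIL =
            Pattern.compile("^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$");

    // Returns true if the address matches the simplified pattern, otherwise false.
    public static boolean isValidEmail(String email) {
        return email != null && EMAIL.matcher(email).matches();
    }
}
```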
Few-shot prompting
Instructions with examples that guide style and format. Use when style consistency matters.
// Few-shot: examples guide the style
// Examples of our existing validators:
// validateEmail(email) - RFC 5322, throws ValidationException
// validatePhone(phone) - international format, throws ValidationException
//
// Now generate in the same style:
// validatePostalCode(postalCode) - Finnish postcode (5 digits)
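Given those examples, Copilot would be expected to match the existing style: same naming, same exception on failure. A sketch of such a result, where ValidationException is a hypothetical stand-in for the project's own exception type:

```java
// Sketch of a result matching the few-shot style above.
// ValidationException is a hypothetical stand-in for a project-specific exception.
public class PostalCodeValidator {

    public static class ValidationException extends RuntimeException {
        public ValidationException(String message) { super(message); }
    }

    // validatePostalCode(postalCode) - Finnish postcode (5 digits), throws ValidationException
    public static void validatePostalCode(String postalCode) {
        if (postalCode == null || !postalCode.matches("\\d{5}")) {
            throw new ValidationException("Invalid Finnish postal code: " + postalCode);
        }
    }
}
```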
Chain-of-thought and role prompting
Chain-of-thought: ask Copilot to think step by step.
// Analyze this algorithm step by step:
// 1. Explain what it does
// 2. Identify performance issues
// 3. Suggest improvements
Role prompting: assign a domain role.
// Act as a security auditor. Review this code
// and identify every OWASP Top 10 vulnerability. Be critical.
Technique selection
| Technique | When to use | Typical use case |
|---|---|---|
| Zero-shot | Simple, clear tasks | Function generation, explanation |
| Few-shot | Style or format matters | Consistent code patterns |
| Chain-of-thought | Complex problems | Algorithm analysis, debugging |
| Role prompting | Need expert framing | Security review, code review |
L03 — Principles and process flow
Principles
- Specificity. Vague prompt → vague result. Bad: "improve this code." Good: "refactor to stream API and reduce cyclomatic complexity below 5."
- Context. Copilot does not know your business rules — state them.
- Iteration. The first answer is rarely the best. Refine.
- Clarity. Break complex requests into smaller pieces.
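The specificity principle in code: the pair below shows the same hypothetical method before and after the specific prompt "refactor to stream API". "Improve this code" gives Copilot nothing to aim at; naming the target shape does.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical before/after pair illustrating the specificity principle.
public class ActiveNames {

    public static class User {
        final String name;
        final boolean active;
        public User(String name, boolean active) { this.name = name; this.active = active; }
    }

    // Before: imperative loop with nested branching.
    public static List<String> activeNamesLoop(List<User> users) {
        List<String> names = new ArrayList<>();
        for (User u : users) {
            if (u.active) {
                names.add(u.name.toUpperCase());
            }
        }
        return names;
    }

    // After "refactor to stream API": the same logic as one pipeline.
    public static List<String> activeNamesStream(List<User> users) {
        return users.stream()
                .filter(u -> u.active)
                .map(u -> u.name.toUpperCase())
                .collect(Collectors.toList());
    }
}
```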
Prompt processing flow
User input
↓
Context assembly (open files, cursor, history)
↓
Prompt prioritization (most relevant context first)
↓
Tokenization (fills the context window)
↓
LLM inference
↓
Response
- When the context window fills, older chat history is dropped.
- Code closest to the cursor gets the most weight.
- Clear comments right before the code are extremely effective.
When to clear chat history
- Starting a completely new task.
- The previous task is complete and the new one is unrelated.
- Copilot starts mixing contexts from different tasks.
In VS Code: open a new chat or run "Clear Chat".
Exam-ready checklist (M04)
- Good prompt = Role + Task + Context + Format.
- Zero-shot: no examples; few-shot: examples for style.
- Chain-of-thought: think step by step; role prompting: assign expertise.
- Context is collected automatically and steered with # references.
- Iterate, but reset chat at the start of a new task.
