VSCode Agents 1.2: Custom Prompts

December 20, 2025

💡 Templates & Examples: Looking for ready-to-use prompt templates? Check out the gh-copilot-templates repository:

  • Generalized templates to copy and customize
  • Personalized examples showing them in action

If you've been using GitHub Copilot for more than a week, your engineer brain has probably already kicked in. You've noticed the patterns. You're typing the same context, the same constraints, the same "don't use console.log" over and over. And if you're anything like me, that repetition is physically painful. You want to optimize it out.

Enter Prompt Files.

The Customization Stack

Prompt files are markdown files that allow you to reuse your prompts. They are currently one of four ways to customize your Copilot workflow in VS Code:

  • Prompt Files
  • Instruction Files
  • Agent Files
  • Skills

I'll be doing deep dives on my experience with each of these, so stay tuned.

Setup

You have two places to store these:

  1. Repo-level: In a .github/prompts/ folder. Great for sharing workflows with your team.
  2. User-level: In your global profile. Use the Command Palette (Ctrl+Shift+P) -> "Chat: Configure Prompt Files..." -> "+ New prompt file..." -> "user data".
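For the repo-level option, the result looks something like this (prompt files use the .prompt.md suffix; the file below is the one I'll show in a moment):

```
your-repo/
└── .github/
    └── prompts/
        └── capture-conventions.prompt.md
```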

A Practical Example: Capturing Conventions

To give you a concrete example, here is a prompt file I use religiously. It's designed to capture conventions from a chat session before I wipe the slate clean.

(And yes, start new chats frequently; models still suffer from context pollution that degrades performance over time.) This prompt saves me from manually updating my instruction files every time the agent deviates from my preferences.

Here is the file:

---
agent: "agent"
description: "Capture conventions from chat corrections into instruction files"
---
 
# Capture Conventions
 
Review this chat for user corrections that should become conventions.
 
## Process
 
1. Scan for places user corrected your approach or clarified preferences
2. For each correction, ask:
   - Repeatable convention or one-off?
   - Would it help future models?
   - Specific enough to be actionable?
3. Skip if: factual error, too context-specific, or already documented
4. For qualifying conventions: add to appropriate `.github/instructions/*.md` file
 
## Instruction Files
 
- `copilot-instructions.md` - global conventions
- `dal.instructions.md` - Data Access Layer 
- `docs.instructions.md` - documentation
- `logging.instructions.md` - logging
- `tests.instructions.md` - tests
- `zod.instructions.md` - Zod schemas
- `git.instructions.md` - git/commits
 
## Output
 
If conventions found:
 
1. Edit instruction file(s)
2. Commit: `docs(instructions): capture conventions from chat`
3. Summarize additions
 
If none qualify: say "No new conventions identified"
 
## Writing Style
 
- One bullet per convention, 1-2 sentences max
- Actionable (what to DO)
- No fluff, no emojis, plain markdown
- **Plain markdown**: No emojis, minimal formatting, H2/H3 headers only

To use this, I simply type #file:capture-conventions or /capture-conventions in the chat. VS Code recognizes it and autocompletes it.

My Experience (The "Meh" Factor)

I'll be real with you: I haven't been using prompt files as much as I probably should.

Why? Because I rely heavily on Instruction Files. Instruction files are automatically provided to the agent based on the files it's accessing. That automatic context injection saves me from the repetition that prompt files are designed to solve, without me even having to type a reference.
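That routing is driven by the instruction file's own frontmatter. As a rough sketch (the glob and the body here are illustrative; I'll cover the mechanics properly in the Instruction Files log), a tests.instructions.md might start like this:

```markdown
---
applyTo: "**/*.test.ts"
description: "Testing conventions"
---

# Tests

- One behavior per test; name each test after the behavior it verifies.
```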

However, Instruction Files have shortcomings. They aren't always served when you need them, particularly for operations that aren't tied to a specific file.

Take git commits. Agents love to git add -A by default. If you're someone who likes to work on multiple features at once and wants atomic commits (or just doesn't want to accidentally commit a .env), this is annoying.

You can write a .prompt.md file for your commit workflow and refer to it using #file:commit-convention or the more common /commit-convention in the chat. While you can refer to instruction files the same way, the main benefit of prompt files here is specificity. You can restrict which model runs the prompt and what tools it has access to.
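As a sketch of what that might look like (the frontmatter values and the body are illustrative; in particular, check your own tools picker for the exact tool identifiers), a commit-convention.prompt.md could be:

```markdown
---
description: "Stage and commit only the files for the current change"
agent: "agent"
model: GPT-5  # placeholder: pin whichever high-reasoning model you prefer
tools: ["runCommands"]  # assumed name; use the identifiers your setup exposes
---

# Commit Convention

1. Run `git status` and list the changed files.
2. Stage only the files related to the change I describe; never `git add -A`.
3. Never stage `.env` or other secrets.
4. Propose a conventional commit message and wait for my approval before committing.
```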

Here are the configuration options available in the frontmatter (Source: VS Code Docs):

| Field | Description |
| --- | --- |
| description | A short description of the prompt. |
| name | The name of the prompt, used after typing / in chat. If not specified, the file name is used. |
| argument-hint | Optional hint text shown in the chat input field to guide users on how to interact with the prompt. |
| agent | The agent used for running the prompt: copilot, edit, or a custom agent. |
| model | The language model used when running the prompt. If not specified, the currently selected model is used. |
| tools | A list of tool or tool set names that are available for this prompt. |

As the industry invents new prompting techniques, some end up cannibalizing each other. We have Instruction Files for "always-on" context and Prompt Files for "on-demand" context, but since you can also reference Instruction Files on-demand, the lines get blurry. It's a classic case of feature overlap where new tools partially eat into the use cases of existing ones.

They can work together, though. You can have a code review prompt file that explicitly tells the agent to consult your commit-conventions prompt file.
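For example, a hypothetical code-review.prompt.md can end by handing off to the commit prompt:

```markdown
---
description: "Review the current changes, then hand off to the commit workflow"
agent: "agent"
---

# Code Review

1. Review the diff for correctness, naming, and missing tests.
2. List anything that should block the commit.
3. Once I approve, read `.github/prompts/commit-convention.prompt.md` and follow it.
```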

Automating the Automation

I eventually got lazy and wanted to automate invoking this capture prompt. I appended the following section to the end of my git.instructions.md file (right after the standard commit steps) to trigger it automatically after a commit:

## Post-Commit: Convention Capture
 
After successful commit, before closing chat:
 
1. Read `.github/prompts/capture-conventions.prompt.md`
2. Follow its instructions to review chat for user corrections

The result? It's the type of agency you love to see. I simply ask the agent to "commit changes" (without explicitly referencing the git.instructions.md file) and it all works automatically. The agent commits my code, then immediately says "No new conventions identified" or actually updates my docs without me asking. (I'll get into the details of how this automatic context injection works in my Instruction Files log).

Unfortunately, it wasn't 100% reliable. The agent didn't always read the prompt file after reading the git instructions. Adherence varied wildly depending on the model. Even the top coding models sometimes ignore implicit instructions like this.

This is actually a win for Prompt Files. Instruction files don't allow you to specify the model (they just use whatever you have selected in the chat). Prompt files do. If I really need that capture step to happen reliably, running it as a specific prompt with a high-reasoning model is safer than hoping the auto-context chain works.

Verdict

I think that's all I have to say about prompt files for now.

Again, I mostly use instruction files at this point, but prompt files remain a solid way to dip your toes into workflow customization. They don't require much overthinking. It's literally just taking the prompts you notice yourself using most often, spending two minutes refining them in a file, and remembering to refer to them instead of typing them out from muscle memory.


Transparency Note: This article was written with the help of Gemini 3 Pro (Preview). I'm experimenting with using different models for each log I write to better understand their distinct writing styles and capabilities. This is part of my research for an upcoming deep dive into the Language Models available in GitHub Copilot.