Zero to Agent: Mastering GitHub Copilot with Default Settings

December 18, 2025

GitHub Copilot has become more than just another VS Code plugin; it's a major change in how we interact with our development environment. While it doesn't come pre-installed in the traditional sense, it is now so baked into the VS Code ecosystem that it's often the first recommendation you'll see upon opening a new workspace.

I've been using GitHub Copilot for the past two years, and I've watched it evolve from what felt like a simple ChatGPT wrapper inside my editor to a full-on agentic workflow that rivals dedicated AI IDEs like Cursor. Today, I'm going to share how I use GitHub Copilot on its default settings. No complicated workflows or custom configurations required.

Note: This isn't a step-by-step tutorial. GitHub already has a great Quickstart Guide for that. This is my personal review and breakdown of how I actually use the service.

Try It Yourself

If you want to follow along or test these features as you read, I highly recommend setting up the environment yourself.

To get started, you'll just need:

  • Visual Studio Code: (I'm using v1.107.0)
  • GitHub Copilot Extension: (I'm using v1.388.0)
  • A GitHub Account: To sign in and activate the service.

A Note on Privacy & The Free Tier

GitHub recently introduced a Free tier for Copilot, which is great for experimentation. However, if you're working on sensitive or proprietary code, be sure to check your settings. While GitHub generally states they don't use your private code to train models for paid tiers, the Free tier's data usage policies can be broader. Always review the GitHub Copilot Trust Center to ensure you're comfortable with how your data is handled before you start handing your life's work to black boxes.

The Subscription Factor

To get the most out of these features, especially the advanced Agent Mode, you'll eventually want the Copilot Pro subscription. The free tier is a fantastic starting point, but it currently lacks the full agentic capabilities that make this workflow so powerful.

For students, there's a huge perk: you can get it for free through the GitHub Global Campus. Many new grads (myself included) retain their access for a while, so use it while it lasts. It is arguably the cheapest way to access top-of-the-line LLM models in an agentic environment.

If you find yourself hitting your monthly quota early (I often burn through mine in the first five days), you don't necessarily need to jump to the $39/month Pro+ tier. You can simply add your payment information and "pay as you go" for additional premium requests. For my use case, this is often more cost-effective than a full tier upgrade. I might even write a "Poor Man's Guide" to optimizing these costs later on.
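To see why pay-as-you-go often wins, here is a quick back-of-the-envelope sketch. It assumes Copilot Pro at $10/month (a price not stated in this post) alongside the $39/month Pro+ and $0.04/request overage figures quoted above; all of these can change.

```python
# Rough break-even between pay-as-you-go overage and the Pro+ upgrade.
# Assumes Pro at $10/month (not stated in this post); the $39 Pro+
# price and $0.04 overage rate are the figures quoted above.
PRO, PRO_PLUS = 10, 39   # USD per month
OVERAGE = 0.04           # USD per extra premium request

# Upgrading costs (PRO_PLUS - PRO) dollars more each month; below this
# many extra requests, paying per request is the cheaper option.
break_even = (PRO_PLUS - PRO) / OVERAGE
print(f"Upgrade only pays off past ~{break_even:.0f} extra requests/month")
```

In other words, unless you routinely need several hundred extra premium requests a month, topping up per request is the cheaper path.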

One final word of advice: take advantage of these prices while they last. In my opinion, we are currently living in a VC-subsidized golden age of AI. As these tools reach mass adoption, or if the economics of running these massive models catch up, prices will likely only go up.

The Three Modes of My Workflow

I categorize the features I use into three distinct modes. The main difference between them is how much access they have to Tools: the specific capabilities that allow the AI to interact with your files, your terminal, and even the web.

1. Autocomplete (Ghost Text)

This is the classic "ghost text" that appears as you type.

  • How it Works: Autocomplete is a passive, tool-less system. Unlike other modes, you don't prompt it directly. Instead, the Copilot engine automatically prompts a specialized, high-speed model (like GPT-4.1-Copilot) using your "Context": the surrounding code, comments, and open tabs.
  • The Behavior: Because it's fine-tuned specifically for inline suggestions, it can generate anything from a single variable name to an entire file's worth of code in milliseconds. This is why you'll see a massive accuracy boost when you start a file with a detailed docstring; sometimes it will fill out the entire implementation before you've written a single line of logic.
  • The Good: It drastically increases code output by predicting patterns and finishing your thoughts in real-time.
  • The Bad: It can be annoying when you're trying to learn syntax or implement a custom solution the model wasn't trained on.
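To illustrate the docstring effect, here is a hypothetical example of the kind of file I mean. The function name, docstring, and body below are all my own invention for illustration: in practice, you write the signature and docstring, and the autocomplete model typically ghost-texts a body very much like the one shown.

```python
import re

def slugify(title: str) -> str:
    """Convert a blog post title into a URL slug.

    Lowercase the string, replace each run of non-alphanumeric
    characters with a single hyphen, and strip leading/trailing
    hyphens. Example: "Zero to Agent!" -> "zero-to-agent".
    """
    # Given only the signature and docstring above, the completion
    # model will usually suggest a body much like this one:
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")
```

The more precise the docstring (inputs, outputs, edge cases), the more often the first suggestion is the one you actually want.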

2. Chat (The "Ask" Mode)

This is the standard chat window where the LLM answers your questions.

  • Behavior: In this mode, the LLM is restricted from modifying your workspace. It can't create or edit files, but it does have access to read-only tools. For example, it can use search/codebase to find relevant code or web/fetch to look up documentation. If you ask for code, it will dump it into the chat window for you to copy-paste.
  • Best Use: Researching the codebase, asking "how-to" questions, or sanity-checking an approach before implementation.

3. Agent Mode

Found in the "Agent" dropdown within the chat window, this is where it gets interesting.

  • Behavior: This is the "unlocked" version of Chat. It has the same conversational interface but with the added ability to use tools that create and modify files directly in your workspace.
  • The Power of Agency: Unlike the standard chat, the agent can plan multi-step tasks, verify its own work, and fix errors it encounters along the way without you having to copy-paste a single line.

The Built-in Toolset

Since I'm focusing on "default settings," it's important to talk about the tools that come pre-configured. You don't need to install anything extra to use these; they are the "built-in" tools that Copilot uses to actually do work in your editor.

Workspace Tools

These are the core tools of the agentic workflow:

  • search/codebase: Allows the agent to find files or code snippets across your entire project.
  • read/readFile: Lets the agent open and understand the contents of specific files.
  • edit/editFiles: This allows the agent to actually write code into your files.

Terminal & Web Tools

  • execute/runInTerminal: The agent can run tests, install packages, or start your dev server to verify its changes.
  • execute/getTerminalOutput: After running commands, the agent can read the terminal output to see if there were errors or if everything worked as expected.
  • web/fetch: As mentioned earlier, this allows the agent to pull in real-time data or documentation from the internet.

Note: This is just a small sample. For a full list of what's currently built-in, check the official VS Code Chat Tools documentation. We'll save custom MCP tools for a later article, as those do require extra setup.

The Economics of GitHub Copilot

One thing you'll need to keep an eye on is how GitHub bills your interactions. As of late 2025, the service has moved toward a "consumptive" model for its most powerful features.

Base vs. Premium Requests

  • Autocomplete: This remains unlimited for Pro and Pro+ subscriptions. If you're on the Free tier, you're typically capped at around 2,000 inline suggestions per month.
  • Base Requests: These are "free" (included in your subscription) and usually apply to standard models like GPT-4.1 or GPT-5 mini.
  • Premium Requests: These are the "currency" of agentic coding. Most models that make the best use of the tools available in Agent Mode (like reading files, searching the web, or running terminal commands) require premium requests. Every time you use these advanced features or a high-end model like GPT-5.2 or Gemini 3 Pro, you consume credits from your monthly allowance.

The Math of Allocations

Your subscription gives you a monthly "bucket" of premium requests (e.g., 300 for Pro, 1,500 for Pro+). However, not all models are created equal. GitHub uses a multiplier system:

  • 1x Models: Most modern workhorses like Claude Sonnet 4.5 or Gemini 3 Pro cost exactly 1 premium request.
  • 0.33x Models: Efficient models like Claude Haiku 4.5 or Gemini 3 Flash (the model I'm using to help write this log) are "cheaper." You can make three requests for the price of one premium credit.
  • 3x Models: The heavy hitters like Claude 4.5 Opus cost 3 premium requests per interaction.

Pricing has stabilized significantly; while Opus used to carry a much higher multiplier, 3x is now the standard rate for "God-tier" models.

Note: If you run out of your allowance, you can usually set a budget to continue on the "pay as you go" plan at roughly $0.04 per request.


How things have changed since 2023

When I started using Copilot in 2023, we only had Autocomplete and Chat. Agent Mode didn't exist yet because the models simply weren't competent enough to handle tool-use reliably. Back then, we were also stuck in the OpenAI ecosystem of LLMs, but it didn't matter because GPT-4 was the gold standard.

Replacing the Search Engine

Initially, I used Chat as a replacement for Google and Stack Overflow. For a student learning established languages, the model's parametric knowledge (the "baked-in" data from training) was more than enough.

Today, models have access to the web/fetch tool, allowing them to gather real-time information. However, there's a current limitation I call the "Curse of Expertise": models are often reluctant to use these tools unless explicitly told to. They prefer to rely on their training data. Thankfully, newer models (like the Gemini 3 series) have knowledge cutoffs as recent as January 2025, making them highly relevant even without constant web searching.

The Junior Dev's Secret Weapon

As a junior developer, these tools are great for:

  • Understanding the syntax of new languages.
  • Explaining why certain patterns are used.
  • Teaching best practices through code review.

I've built entire projects from scratch in languages I barely knew just by using the Chat interface with GPT-4. It’s a powerful mentor if you know how to ask the right questions.

The Shift to Agents

The release of Agent Mode in early 2025 was the big change. Before this, I was the bottleneck. I had to write specific comments to trigger autocomplete or copy-paste massive chunks of code and error logs into the chat box.

With Agent Mode, the agent can use its toolset to:

  1. Read relevant files using the read toolset.
  2. See IDE-detected problems automatically with the read/problems tool.
  3. Understand the context without me manually providing it with the search toolset.

One Warning: Even with this agency, I still recommend explicitly mentioning the files you want the agent to investigate. Sometimes it will look through every file in your project except the one you actually need it to fix.


Security & Safety

Giving an AI the keys to your terminal and workspace is a big step. While GitHub Copilot has built-in protections, you are still the final line of defense.

Indirect Prompt Injection

Research from security firms like Trail of Bits has shown that even the most advanced models are vulnerable to Indirect Prompt Injection. This happens when a model visits a malicious website or reads a malicious file that contains hidden instructions designed to hijack the agent's behavior.

My Advice: Never turn on "Auto-approve" for web browsing. When the agent asks to visit a site, take a second to verify the URL. It’s a small friction that prevents a massive security headache.

The Approval System

By default, Copilot will prompt you for approval before performing critical operations, such as:

  • Running commands in the Terminal.
  • Browsing the Web.
  • Modifying sensitive files like .env.

You can set up auto-approval for specific commands or tools, but I generally recommend against it for anything that touches the internet or your system configuration. Treat the agent like a highly capable intern: give them the tools they need, but review their work before it goes live.

The Undo Button: Checkpointing

One quirk of agentic coding is that the model might occasionally ask to delete a file so it can "rewrite it from the top down." In the early days, this was terrifying. Today, it's a non-issue thanks to Checkpointing.

In the chat interface, you'll see a "Restore Checkpoint" button next to previous prompts. This allows you to instantly revert your entire workspace to the state it was in before that specific interaction. Combined with proper Git tracking, you should never be afraid of an agent "breaking" your project. If it does, you're just one click away from a full restore.


Conclusion: The Default Advantage

You don't need a complex setup to be productive with GitHub Copilot. By understanding the three modes (Autocomplete, Chat, and Agent Mode) and keeping an eye on the economics of your premium requests, you can build a workflow that is both powerful and cost-effective.

What's Next?

This is just the beginning of the VSCode Agents Workflow series. If you're just joining us, check out the Series Overview (Article 1.0) for the full roadmap.

In the next few logs, we'll dive deeper into customization:

  • Article 1.2: Custom Prompts Introduction – How to start nudging the model in the right direction using custom prompt files.
  • Article 1.3: The Power of Instruction Files – Establishing your own "Rules of Engagement" for the agent to follow.

A note on how this was written: This log was written with the help of Gemini 3 Flash. I find the Gemini 3 series currently outputs the best writing, with the Pro model leading the pack. However, I chose Flash for this task because of its 0.33x pricing in GitHub Copilot. While GitHub prices it at 0.33x, the actual blended API prices for Flash are often closer to 0.25x that of Pro, a small pricing quirk to keep in mind.