December 18, 2025
GitHub Copilot has become more than just another VS Code plugin; it's a major change in how we interact with our development environment. While it doesn't come pre-installed in the traditional sense, it is now so baked into the VS Code ecosystem that it's often the first recommendation you'll see upon opening a new workspace.
I've been using GitHub Copilot for the past two years, and I've watched it evolve from what felt like a simple ChatGPT wrapper inside my editor to a full-on agentic workflow that rivals dedicated AI IDEs like Cursor. Today, I'm going to share how I use GitHub Copilot on its default settings. No complicated workflows or custom configurations required.
Note: This isn't a step-by-step tutorial. GitHub already has a great Quickstart Guide for that. This is my personal review and breakdown of how I actually use the service.
If you want to follow along or test these features as you read, I highly recommend setting up the environment yourself.
To get started, you'll just need:
v1.107.0)v1.388.0)A Note on Privacy & The Free Tier
GitHub recently introduced a Free tier for Copilot, which is great for experimentation. However, if you're working on sensitive or proprietary code, be sure to check your settings. While GitHub generally states they don't use your private code to train models for paid tiers, the Free tier's data usage policies can be broader. Always review the GitHub Copilot Trust Center to ensure you're comfortable with how your data is handled before you start handing your life's work to black boxes.
To get the most out of these features, especially the advanced Agent Mode, you'll eventually want the Copilot Pro subscription. The free tier is a fantastic starting point, but it currently lacks the full agentic capabilities that make this workflow so powerful.
For students, there's a huge perk: you can get it for free through the GitHub Global Campus. Many new grads (myself included) retain their access for a while, so use it while it lasts. It is arguably the cheapest way to access top-of-the-line LLM models in an agentic environment.
If you find yourself hitting your monthly quota early (I often burn through mine in the first five days), you don't necessarily need to jump to the $39/month Pro+ tier. You can simply add your payment information and "pay as you go" for additional premium requests. For my use case, this is often more cost-effective than a full tier upgrade. I might even write a "Poor Man's Guide" to optimizing these costs later on.
One final word of advice: take advantage of these prices while they last. In my opinion, we are currently living in a VC-subsidized golden age of AI. As these tools reach mass adoption, or if the economics of running these massive models catch up, prices will likely only go up.
I categorize the features I use into three distinct modes. The main difference between them is how much access they have to Tools: the specific capabilities that allow the AI to interact with your files, your terminal, and even the web.
This is the classic "ghost text" that appears as you type.
This is the standard chat window where the LLM answers your questions.
search/codebase to find relevant code or web/fetch to look up documentation. If you ask for code, it will dump it into the chat window for you to copy-paste.Found in the "Agent" dropdown within the chat window, this is where it gets interesting.
Chat. It has the same conversational interface but with the added ability to use tools that create and modify files directly in your workspace.Since I'm focusing on "default settings," it's important to talk about the tools that come pre-configured. You don't need to install anything extra to use these; they are the "built-in" tools that Copilot uses to actually do work in your editor.
These are the core tools of the agentic workflow:
search/codebase: Allows the agent to find files or code snippets across your entire project.read/readFile: Lets the agent open and understand the contents of specific files.edit/editFiles: This allows the agent to actually write code into your files.execute/runInTerminal: The agent can run tests, install packages, or start your dev server to verify its changes.execute/getTerminalOutput: After running commands, the agent can read the terminal output to see if there were errors or if everything worked as expected.web/fetch: As mentioned earlier, this allows the agent to pull in real-time data or documentation from the internet.Note: This is just a small sample. For a full list of what's currently built-in, check the official VS Code Chat Tools documentation. We'll save custom MCP tools for a later article, as those do require extra setup.
One thing you'll need to keep an eye on is how GitHub bills your interactions. As of late 2025, the service has moved toward a "consumptive" model for its most powerful features.
Autocomplete: This remains unlimited for Pro and Pro+ subscriptions. If you're on the Free tier, you're typically capped at around 2,000 inline suggestions per month.Agent Mode (like reading files, searching the web, or running terminal commands) require premium requests. Every time you use these advanced features or a high-end model like like GPT-5.2 or Gemini 3 Pro, you consume credits from your monthly allowance.Your subscription gives you a monthly "bucket" of premium requests (e.g., 300 for Pro, 1,500 for Pro+). However, not all models are created equal. GitHub uses a multiplier system:
Prices have stabilized significantly; while Opus used to be much more expensive, the 3x multiplier is now the standard for "God-tier" model.
Note: If you run out of your allowance, you can usually set a budget to continue on the "pay as you go" plan at roughly $0.04 per request.
When I started using Copilot in 2023, we only had and Autocomplete. Chat didn't exist yet because the models simply weren't competent enough to handle tool-use reliably. Back then, we were also stuck in the OpenAI ecosystem of LLMs, but it didn't matter because GPT-4 was the gold standard.Agent Mode
Initially, I used as a replacement for Google and Stack Overflow. For a student learning established languages, the model's parametric knowledge (the "baked-in" data from training) was more than enough.Chat
Today, models have access to the web/fetch tool, allowing them to gather real-time information. However, there's a current limitation I call the "Curse of Expertise": models are often reluctant to use these tools unless explicitly told to. They prefer to rely on their training data. Thankfully, newer models (like the Gemini 3 series) have knowledge cutoffs as recent as January 2025, making them highly relevant even without constant web searching.
As a junior developer, these tools are great for:
I've built entire projects from scratch in languages I barely knew just by using the interface with GPT-4. It’s a powerful mentor if you know how to ask the right questions.Chat
The release of in early 2025 was the big change. Before this, I was the bottleneck. I had to write specific comments to trigger autocomplete or copy-paste massive chunks of code and error logs into the chat box.Agent Mode
With , the agent can use its toolset to:Agent Mode
read toolset.read/problems tool.search toolset.One Warning: Even with this agency, I still recommend explicitly mentioning the files you want the agent to investigate. Sometimes they can look through every file in your project except the one you actually need them to fix.
Giving an AI the keys to your terminal and workspace is a big step. While GitHub Copilot has built-in protections, you are still the final line of defense.
Research from security firms like Trail of Bits has shown that even the most advanced models are vulnerable to Indirect Prompt Injection. This happens when a model visits a malicious website or reads a malicious file that contains hidden instructions designed to hijack the agent's behavior.
My Advice: Never turn on "Auto-approve" for web browsing. When the agent asks to visit a site, take a second to verify the URL. It’s a small friction that prevents a massive security headache.
By default, Copilot will prompt you for approval before performing critical operations, such as:
.env.You can set up auto-approval for specific commands or tools, but I generally recommend against it for anything that touches the internet or your system configuration. Treat the agent like a highly capable intern: give them the tools they need, but review their work before it goes live.
One quirk of agentic coding is that the model might occasionally ask to delete a file so it can "rewrite it from the top down." In the early days, this was terrifying. Today, it's a non-issue thanks to Checkpointing.
In the chat interface, you'll see a "Restore Checkpoint" button next to previous prompts. This allows you to instantly revert your entire workspace to the state it was in before that specific interaction. Combined with proper Git tracking, you should never be afraid of an agent "breaking" your project. If it does, you're just one click away from a full restore.
You don't need a complex setup to be productive with GitHub Copilot. By understanding the three modes: , Autocomplete, and Chat, and keeping an eye on the economics of your premium requests, you can build a workflow that is both powerful and cost-effective.Agent Mode
This is just the beginning of the VSCode Agents Workflow series. If you're just joining us, check out the Series Overview (Article 1.0) for the full roadmap.
In the next few logs, we'll dive deeper into customization:
A note on how this was written: This log was written with the help of Gemini 3 Flash. I find the Gemini 3 series currently outputs the best writing, with the Pro model leading the pack. However, I chose Flash for this task because of its 0.33x pricing in GitHub Copilot. While GitHub prices it at 0.33x, the actual blended API prices for Flash are often closer to 0.25x that of Pro, a small pricing quirk to keep in mind.