I was thinking about some of the points from the Polyglot Conf list of predictions for Gen AI, titled “Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver“. One thing that stands out to me, and I’m sure many of you have read about this scenario, is the problem of misplaced keys, tokens, passwords, usernames, or whatever other security collateral gets left in a repo. It’s been such an issue that orgs like AWS have set up triggers so that when they find keys on the internet, they trace them back and try to alert their users (i.e. when a customer has stuck account keys in a repo). It’s wild how big of a problem this is.
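Detection tooling like this is simpler than it sounds. Here is a minimal sketch of the kind of pattern matching secret scanners rely on; the rule set is purely illustrative (real scanners like gitleaks or AWS's own scanning use far larger rule sets plus entropy checks), and the AWS access key shown in the usage is AWS's own documented example value:

```python
import re

# Hypothetical, deliberately tiny rule set for illustration only.
SECRET_PATTERNS = {
    # AWS access key IDs follow a well-known AKIA-prefixed shape.
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # A loose catch-all for "key = value" style assignments of secrets.
    "generic_api_key": re.compile(
        r"(?i)\b(?:api[_-]?key|token|secret)\s*[:=]\s*['\"]?([A-Za-z0-9/+=_-]{16,})"
    ),
}

def scan_text(text):
    """Return a list of (rule_name, matched_text) findings in a blob of text."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Running `scan_text('aws_key = "AKIAIOSFODNN7EXAMPLE"')` flags the AWS-style key; a clean string returns no findings. The real value of this class of tooling comes from running it in CI, before a commit ever reaches a public repo.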
Once you’ve spent any serious amount of time inside corporate IT, you eventually come to a slightly uncomfortable realization. Exponentially so if you focus on InfoSec or other security-related work. Security, broadly speaking, is not in a particularly great state.
That might sound dramatic, but it’s not really. It is the standard modus operandi of corporate IT. The cost of really good security is too high for most corporations to focus where they should, and when corporations do focus on security, they often miss the forest for the trees. There are absolutely teams doing excellent security work, so don’t get the idea I’m saying there aren’t solid people doing the work to secure systems and environments. There are organizations that invest heavily in it, and there are people in security roles who take the mission extremely seriously and do very good engineering.
A lot of what passes for security is really just a mixture of documentation, policy, and a little bit of obscurity. Systems are complicated enough that people assume things are protected. Access is restricted mostly because people don’t know where to look. Credentials are hidden in configuration files or environment variables that nobody outside the team sees.
And that becomes the de facto security posture.
Not deliberate protection.
Just… quiet obscurity.
I’ve lost count of the number of times I’ve been pulled into a system review, or some troubleshooting session, where a secret shows up in a place it absolutely shouldn’t be. An API key sitting in a script. A database password in a config file. An environment file committed to a repository six months ago that nobody noticed.
That sort of thing happens constantly. Not out of malice. Out of convenience. But now we’ve introduced something new into the environment.
Generative AI.
More importantly though, the agentic tooling built around it. Tooling that literally takes actions on your behalf. Tools that can read entire repositories, analyze logs, scan infrastructure configuration, generate code, and help debug systems in seconds. Tools that engineers increasingly rely on as a kind of external thinking partner while they work through problems.
All that benefit is coming with AI tools. However, AI doesn’t care about the secret; it’s just processing text. But the act of pasting it there matters. Because the moment that secret leaves your controlled environment, you no longer know exactly where it goes, how it’s stored, or how long it persists on the LLM provider’s side.
The mental model a lot of people are using right now is wrong. They treat AI like a scratch pad or an extension of their own thoughts.
It isn’t.
The more accurate model is this: an AI tool is another resource participating in your workflow. Another staff member, effectively.
Except instead of being a person sitting at the desk next to you, it’s a system operated by someone else, running on infrastructure you don’t control, processing information you send to it. Including keys and secrets.
Once you start looking at it that way, a few things become obvious. You wouldn’t casually hand a contractor your production API keys while asking them to help debug something. You wouldn’t drop a full .env file containing service credentials into a conversation with someone who doesn’t actually need those values.
Yet that is exactly the pattern that is quietly emerging with generative AI tools. Especially among new users of said tools! Developers paste configuration files, snippets of infrastructure code, environment variables, connection strings, and logs directly into prompts because it’s the fastest way to get an answer.
It feels harmless. But secrets have a way of spreading through systems once they start moving.
The real issue here is that generative AI doesn’t create security problems. It amplifies the ones that already exist. Problems that the industry has failed (miserably, might I add) to solve. If an organization already has sloppy credential management, AI just gives those credentials another place to leak. If engineers already pass secrets around informally to get work done, AI becomes another convenient channel for that behavior.
And because AI tools accelerate everything, they accelerate the consequences too. What used to take hours of searching through documentation can now happen instantly. A repository full of configuration files can be analyzed in seconds. Systems that were once opaque are now far easier to reason about.
The Takeaway (Including secrets!)
The practical takeaway here isn’t that people should stop using AI tools. That’s not realistic and frankly a career limiting maneuver at this point. The tools are genuinely useful and they’re going to become a permanent part of how software gets built.
What needs to change – desperately – is operational discipline.
Secrets should never be treated casually, and that includes interactions with generative systems. API keys, tokens, passwords, certificates, environment files, connection strings—none of those belong in prompts or screenshots or debugging sessions with external tools.
If you need to ask an AI for help, scrub the sensitive pieces first. Replace real values with placeholders. Remove anything that grants access to a system. Set up ignore rules for your env files and don’t let production env values (or vault values, whatever you’re using) leak into your generative AI systems.
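A minimal sketch of what that scrubbing step could look like, assuming hypothetical placeholder names and a small illustrative pattern list (a real redaction layer should be allow-list driven and far more thorough than three regexes):

```python
import re

# Illustrative only: placeholder names and patterns are my own, not a standard.
REDACTIONS = [
    # AWS-style access key IDs.
    (re.compile(r"\bAKIA[0-9A-Z]{16}\b"), "<AWS_ACCESS_KEY_ID>"),
    # "password = hunter2" style assignments; keep the key, drop the value.
    (re.compile(r"(?i)(password|passwd|secret|token|api[_-]?key)(\s*[:=]\s*)\S+"),
     r"\1\2<REDACTED>"),
    # Database connection strings.
    (re.compile(r"postgres(?:ql)?://[^\s\"']+"), "<DATABASE_URL>"),
]

def scrub(prompt: str) -> str:
    """Replace likely secrets with placeholders before a prompt leaves your machine."""
    for pattern, replacement in REDACTIONS:
        prompt = pattern.sub(replacement, prompt)
    return prompt
```

So `scrub("password = hunter2")` comes back as `password = <REDACTED>`, and a pasted connection string collapses to `<DATABASE_URL>` before it ever reaches a hosted model.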
Treat every AI interaction the same way you would treat a conversation with another engineer outside your organization, or better yet outside the company (or Government, etc) altogether.
But not someone you would hand the keys to the kingdom. Don’t hand them to your AI tooling either.
For years we talked a big game about documentation being “a product” (which I just wrote about yesterday right here) but let’s be honest, most of the industry never treated it that way. Docs were usually the afterthought stapled onto the release cycle, the box to tick for PMs, the chore no one wanted but everyone relied on. Then generative AI rolled in and quietly exposed just how brittle most documentation is. Suddenly the docs that were just barely acceptable for humans became completely useless for LLMs. That gap is now forcing organizations to rethink how docs get written, structured, published, and maintained.
The shift is subtle but fundamental. We’re no longer writing solely for people and search engines. We’re writing for people, search engines, and AI models that read differently than humans but still need clarity, structure, and semantic meaning to deliver accurate results. This new audience doesn’t replace human readers, it simply demands higher quality and tighter consistency. In the process, it pushes documentation to finally become the product we always claimed it was.
Why AI Is Changing How We Write Docs
AI assistants (tooling/agents/whatever) like ChatGPT and Claude don’t “browse” docs. They parse them. They consume them through embeddings or retrieval systems. They chunk them. They analyze the relationships between sentences, headings, bullets, and examples. When a user asks a question of an LLM, the model leans heavily on how well that documentation was written, how well it was structured, and how easily it can be transformed into a correct semantic representation.
When the docs are good, AI becomes the ultimate just-in-time guide. When the docs are sparse, meandering, inconsistent, or buried in PDFs, AI either hallucinates its way forward or simply fails. The AI lens exposes what humans have tolerated for years.
That is why companies are starting to optimize docs not only for readers and SEO crawlers, but for vector databases, RAG pipelines, and automated summarizers. The end result benefits everyone. Better structured content helps AI perform better and human readers navigate faster. AI becomes a multiplier for great doc systems and a harsh critic for bad ones.
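To make the parse-chunk-embed step concrete, here is a minimal sketch of heading-based chunking, the kind of preprocessing a RAG pipeline does before embedding docs. It is a simplification of my own; production chunkers also have to handle code fences, token limits, and overlap between chunks:

```python
import re

def chunk_markdown(doc: str):
    """Split a markdown document into heading-anchored chunks.

    Each chunk keeps its heading so the resulting embedding retains
    context about where the content sits in the document hierarchy.
    """
    chunks = []
    current = []
    for line in doc.splitlines():
        # Start a new chunk at every markdown heading (#, ##, ... ######).
        if re.match(r"^#{1,6}\s", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return [c for c in chunks if c]
```

Notice what this rewards: clear, consistent headings and small semantically complete sections. Docs that ramble across topics under one heading produce one giant low-quality chunk, which is exactly why badly structured docs make AI answers worse.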
What Makes Great Modern Documentation Now
Modern documentation can’t just be readable. It has to be machine digestible, SEO friendly, and human friendly at the same time. After picking through dozens of doc systems and tearing apart patterns in both good and terrible documentation, here is what consistently shows up in the good stuff.
The Criteria
Clear, hierarchical structure using consistent headings
Small, semantically meaningful chunks that can be indexed cleanly
Schema, config, API, and CLI references that are complete, not partial
Contextual explanations right next to code samples
Versioning that doesn’t break links every release
Upgrade guides that don’t pretend breaking changes are rare
A single authoritative source of truth instead of fractured side systems
Accessible to LLMs: consistent formatting, predictable patterns, clean text, no wild markdown gymnastics
Nothing magical here. Most teams already know these rules. AI just stops letting you ignore them.
Five Examples Of Documentation That Nails It
Below are five strong documentation ecosystems. Each one does something particularly well and gives AI models enough structure to be genuinely useful when parsing or answering questions. I’ll break down why each works and how it maps to the criteria above.
1. Stripe

Stripe has been the gold standard for a while. Even after dozens of competitors tried to clone the style, Stripe still leads because they iterate constantly and keep everything ruthlessly consistent.

Why it’s great
• Every endpoint is its own semantic block. LLMs love that.
• Request and response examples are always complete, never partial.
• Navigation is predictable and deep linking is stable.
• They pair conceptual docs, quickstarts, and reference material without overlap.
• All examples are real world and cross language.

How it maps to the criteria
• Structured headings and deep linking check 1, 6, and 12.
• Chunking and semantic units check 2 and 15.
• Real examples and direct language check 3 and 5.
• Pathfinding is excellent, which checks 4.
• Copy-pasteable working examples check 7.
2. MDN Web Docs

MDN has decades of content, but it’s shockingly consistent, well-maintained, and semantically structured. It’s one of the best corpora for training and grounding AI models in web fundamentals.

Why it’s great
• Long history yet content stays current.
• Clear separation of reference vs guides vs tutorials.
• Canonical examples for everything the web platform offers.
• Clean, predictable markdown structure across thousands of pages.

How it maps
• Nearly perfect hierarchy and predictable formatting check 1 and 15.
• Chunked explanations with immediately adjacent examples check 2 and 11.
• Stable URLs for almost everything check 6 and 12.
• Strong pathfinding checks 4.
3. Terraform

Terraform’s documentation is extremely structured, which makes it exceptionally machine readable.

Why it’s great
• Providers, resources, and data sources follow identical templates.
• Every argument and attribute is listed with exact behavior.
• Examples aren’t fluff; they reflect real infrastructure patterns.
• Cross linking between providers and core Terraform concepts is tight.

How it maps
• The template system hits 1, 2, 6, 10, 11, and 15.
• Cross linking and clear navigation cover 8.
• Complete reference material covers 10.
• Realistic examples check 3 and 7.
4. Kubernetes

Kubernetes docs are huge, maybe too huge, but they’re structured well enough that LLMs and humans can still navigate them without losing their minds.

Why it’s great
• Strong concept guides and operator manuals.
• Structured task pages with prerequisites and step-by-step clarity.
• Reference pages built from source-of-truth schemas.
• Thoughtful linking between concepts and tasks.

How it maps
• Strong hierarchy and navigation hit 1 and 6.
• Machine readable chunks via consistent template patterns hit 2 and 15.
• Clear examples and commands check 3 and 7.
• Having both reference and conceptual breakdowns checks 4, 10, and 11.
5. Supabase

Supabase’s docs are modern, developer-focused, and written with obvious attention to how AI and search engines consume content. They basically optimized for RAG without ever claiming they did.

Why it’s great
• APIs, client libraries, schema definitions, and guides all interlink tightly.
• Clear quickstarts that become progressively more advanced.
• Rich examples spanning REST, RPC, SQL, and client SDKs.
• Consistent layouts across different product surfaces.

How it maps
• Strong pathfinding and multi-surface linking check 4 and 8.
• Full reference material checks 10.
• Predictable structure and formatting check 1 and 15.
• Example-rich guides check 3, 7, and 11.
Documentation Is Finally Being Treated As A Real Product
The interesting thing is that AI didn’t magically fix documentation. It simply raised expectations. Companies now need their documentation to be clean, complete, structured, predictable, link-friendly, example-rich, and semantically coherent because that is the only way AI can navigate it and support users in meaningful ways. This pressure is good. It forces consistency. It rewards clarity. It makes the entire documentation discipline more rigorous.
The companies that embrace this will have far better support funnels, drastically fewer user frustrations, higher product adoption, and an ecosystem that AI can actually help with instead of stumbling through. The ones that don’t will keep wondering why users stay confused and why their AI chatbots give terrible answers.
Documentation has always been a product. AI is just the first thing that has held us accountable to that truth.
My initial review of Copilot and getting started is available here.
(What it does well – and why it’s not magic, but almost!)
Technique 1: Chat-First Specs

Let me be clear: the Copilot Chat feature in VS Code can feel like a miracle until it’s not. When it’s working, you fire off a multi-line prompt defining what you want: “Build a function for X, validate Y, return Z…” and boom, VS Code’s inline chat generates a draft that is scary good.
What actually wins: It interprets your specification in context – your open file, project imports, naming conventions – and spits out runnable sample code. That’s not trivial; reputable models often lose context threading. Here, the chat lives in your editor, not detached, and that nuance matters. It’s like sketching the spec in natural language, then having VS Code autocomplete not just code but entire behavior.
What you still have to do: Take a breath, a le sigh, and read it. Always. Control flow, edge cases, off-by-one errors: Copilot doesn’t care. Security? Data leakage? All on you. Copilot doesn’t own the logic; it just stitches together patterns it’s seen. You own the correctness.
Trick that matters: Iterate. Ask follow-ups: “Okay, now handle invalid inputs by throwing InvalidArgumentException,” or “Refactor this to async/await.” Having a chat continuum in the editor is powerful, but don’t forget it’s your spec, not the AI’s.
Technique 2: Skeleton Prompts

The auto-generated code fits into your skeleton, adhering to your naming and your data model.
You retain control over boundaries, types, and structure even before Copilot chimes in.
Downside? If your skeleton is misleading or incomplete, Copilot will “fill in” confidently, in code that compiles but does the wrong thing. Again, your code review has to rule.
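To illustrate the skeleton idea with a hypothetical example: the signature, type hints, and docstring are the skeleton you write up front; the body below the marker is the kind of draft an assistant might propose, shown here as a sketch to review, not a guaranteed output:

```python
from typing import Optional

def normalize_username(raw: Optional[str]) -> Optional[str]:
    """Trim whitespace, lowercase, and reject empty usernames."""
    # Everything below is the sort of body Copilot might fill in,
    # constrained by the types and docstring above. Review before accepting.
    if raw is None:
        return None
    cleaned = raw.strip().lower()
    return cleaned or None
```

Because the skeleton pins down the boundary behavior (Optional in, Optional out), a bad completion is easy to spot in review, which is the whole point of the technique.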
Technique 3: In-Context Refactoring Conversations (AKA “Let me fix your mess, Copilot”)
Ever accepted a Copilot suggestion, then hated it? Instead of discarding, turn on Copilot Chat:
Ask it: “Refactor this to reduce nesting and improve readability,” or “Convert this to use .reduce() instead of .forEach().”
Watch it rewrite within the same context, not tangential code thrown at you.
That’s one of its massive values – context-aware surgical refactoring – not blanket “clean this up” that ends in a different variable naming scheme or method order from your repo.
The catch: refactor prompts depend on Copilot’s parsing of your style. If your code is sloppy, it’s going to be sloppily refactored. So yes, you still have to keep code clean, comment clearly, and limit complexity. Copilot is the editor version of duct tape, not a refactor wizard.
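As a rough illustration in Python (the `.forEach()` to `.reduce()` ask earlier is JavaScript, but the shape of the request is the same): a nested loop and the kind of context-aware rewrite you might get back, keeping your names and data model intact:

```python
# Before: the kind of loop you might ask Copilot Chat to flatten.
def total_line_items_loop(orders):
    total = 0
    for order in orders:
        for item in order["items"]:
            total += item["price"] * item["qty"]
    return total

# After: one plausible rewrite reducing nesting, a Python comprehension
# analogue of the .forEach() -> .reduce() request.
def total_line_items(orders):
    return sum(
        item["price"] * item["qty"]
        for order in orders
        for item in order["items"]
    )
```

The win is that both versions use your `orders`/`items` shape; a blanket "clean this up" tool would be just as likely to rename everything and break the match with the rest of your repo.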
The Brutal Truth
VS Code + Copilot isn’t a magical co-developer. It’s a smart auto-completer with chat, living in your IDE, context-aware but utterly obedient to your prompts.
The trick is not the AI; it’s how you lead it. The better your spec, skeleton, or prompt, the better your code.
A skeptical, questioning, pragmatic style fits perfectly. You don’t let it ride; you interrogate. And that’s exactly how it should be.
TL;DR Summary
Technique: Chat-first spec
What Works: Detailed natural-language spec → meaningful code
What Fails Without You: No spec clarity → garbage logic

Technique: Skeleton prompts
What Works: Provides structure, types, expectations
What Fails Without You: Bad skeleton = bad code, fast

Technique: In-editor refactoring chat
What Works: Context-preserving improvements
What Fails Without You: Messy code → messy refactor
If you want more details on how to integrate Copilot into CI, or want my personal prompt templates, drop the demand in a comment below and I’ll tackle it head-on next time.
Note: I’ve decided to start writing up the multitude of AI tools/tooling and this is the first of many posts on this topic. This post is effectively a baseline of what one should be familiar with to get rolling with Github Copilot. As I add posts, I’ll add them at the bottom of this post to reference the different tools, as well as back reference them to this post, etc, so that they’re all easily findable. With that, let’s roll…
Intro
GitHub Copilot is thoroughly changing how developers write code, serving as a kind of industry standard – almost – for AI-powered code completion and generation. As someone who’s been in software development for over two decades, I’ve seen many tools come and go, but the modern variant of Copilot represents a fundamental shift in how we approach coding – it’s not just a tool, it’s a new paradigm for human-AI collaboration in software development.
In this comprehensive guide, I’ll walk you through everything you need to know to get started with GitHub Copilot, from basic setup to advanced features that will transform your development workflow.
What is GitHub Copilot?
GitHub Copilot is an AI-powered code completion tool that acts as your virtual pair programmer. Originally built on OpenAI’s Codex model and trained on billions of lines of public code, it is incredibly adept at understanding context, suggesting completions, and even generating entire functions based on your comments and existing code.
Key Capabilities
Real-time code suggestions as you type
Comment-to-code generation from natural language descriptions
Multi-language support across 50+ programming languages
Context-aware completions that understand your project structure
IDE integration with VS Code, Visual Studio, Neovim, and JetBrains IDEs
1. Real-Time Code Suggestions

Copilot provides real-time code suggestions as you type. These appear as gray text that you can accept by pressing Tab or Enter.

```python
# Type this comment and Copilot will suggest the function
def calculate_compound_interest(principal, rate, time, compounds_per_year):
    # Copilot will suggest the complete implementation
    ...
```
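For illustration, here is one plausible completion for that comment-prompted function, using the standard compound-interest formula; treat any generated math as something to verify, not trust:

```python
def calculate_compound_interest(principal, rate, time, compounds_per_year):
    """Return the final amount after compound interest.

    rate is the annual rate as a decimal (0.05 for 5%), time is in years.
    Uses the standard formula A = P * (1 + r/n) ** (n * t).
    """
    return principal * (1 + rate / compounds_per_year) ** (compounds_per_year * time)
```

A quick sanity check on a suggestion like this takes seconds: $1000 at 5% compounded annually for one year should come out to $1050.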
2. Comment-to-Code Generation
One of Copilot’s most powerful features is generating code from natural language comments.
```javascript
// Create a function that validates email addresses using regex
// Copilot will generate the complete function with proper validation
```
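Sketched in Python for consistency with the other examples in this post, this is roughly the shape such a comment-to-code prompt might yield; the regex is a deliberately simplified pattern, nowhere near full RFC 5322 validation, which is exactly the kind of gap you need to catch in review:

```python
import re

# Simplified for illustration; real email validation is far messier.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address matches a basic user@domain.tld shape."""
    return bool(EMAIL_RE.match(address))
```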
3. Function Completion
Start typing a function and let Copilot complete it based on context:
```python
def process_user_data(user_input):
    # Start typing and Copilot will suggest the next lines
    if not user_input:
        return None
    # Continue with the implementation
```
4. Test Generation
Copilot can generate test cases for your functions:
```python
def add_numbers(a, b):
    return a + b

# Type "test" or "def test_" and Copilot will suggest test functions
```
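For example, these are the kind of tests a `def test_` prompt might produce for `add_numbers`; generated tests still need review to confirm they actually cover the cases you care about rather than just the happy path:

```python
def add_numbers(a, b):
    return a + b

# Plausible Copilot-style generated tests; review for meaningful coverage.
def test_add_numbers_positive():
    assert add_numbers(2, 3) == 5

def test_add_numbers_negative():
    assert add_numbers(-1, -1) == -2

def test_add_numbers_zero():
    assert add_numbers(0, 0) == 0
```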
Advanced Features and Techniques
1. Multi-line Completions
Press Tab to accept a full suggestion, or accept it a word at a time with Ctrl + Right Arrow (Cmd + Right Arrow on Mac).
2. Alternative Suggestions
When Copilot suggests code, press Alt + [ or Alt + ] to cycle through alternative suggestions.
3. Inline Chat (Copilot Chat)
The newer Copilot Chat feature allows you to have conversations about your code:
Press Ctrl + I (or Cmd + I on Mac) to open inline chat
Ask questions about your code
Request refactoring suggestions
Get explanations of complex code sections
4. Custom Prompts
Learn to write effective prompts for better code generation:
Good prompts:

```python
# Create a REST API endpoint that accepts POST requests with JSON data,
# validates the input, and returns a success response with status code 201
```

Less effective prompts:

```python
# Make an API endpoint
```
Best Practices for Effective Copilot Usage
1. Write Clear Comments
The quality of Copilot’s suggestions directly correlates with the clarity of your comments and context.
```python
# Good: Clear, specific description
def parse_csv_file(file_path, delimiter=',', skip_header=True):
    """
    Parse a CSV file and return a list of dictionaries.

    Args:
        file_path (str): Path to the CSV file
        delimiter (str): Character used to separate fields
        skip_header (bool): Whether to skip the first row as header

    Returns:
        list: List of dictionaries where keys are column names
    """
```
2. Provide Context
Help Copilot understand your project structure and coding style:
```python
# This function follows the project's error handling pattern
# and uses the standard logging configuration
def process_payment(payment_data):
    ...
```
3. Review Generated Code
Always review and test code generated by Copilot:
Check for security vulnerabilities
Ensure it follows your project’s coding standards
Verify the logic matches your requirements
Run tests to confirm functionality
4. Iterative Refinement
Use Copilot as a starting point, then refine the code:
Accept the initial suggestion
Modify it to match your specific needs
Ask Copilot to improve specific aspects
Iterate until you have the desired result
Language-Specific Tips
Python
Copilot excels at Python due to its extensive training data
Great for data science, web development, and automation scripts
Excellent at generating docstrings and type hints
JavaScript/TypeScript
Strong support for modern ES6+ features
Good at React, Node.js, and frontend development patterns
Effective at generating test files and API clients
Java
Good support for Spring Boot and enterprise patterns
Effective at generating boilerplate code and tests
Strong understanding of Java conventions
Go
Growing support with good understanding of Go idioms
Effective at generating HTTP handlers and data structures
Good at following Go best practices
Troubleshooting Common Issues
1. Suggestions Not Appearing
Verify your Copilot subscription is active
Check that you’re signed into the correct GitHub account
Restart your IDE after authentication
Ensure the extension is properly installed and enabled
2. Poor Quality Suggestions
Improve your comments and context
Check that your file has the correct language extension
Provide more context about your project structure
Use more specific prompts
3. Performance Issues
Disable other AI coding extensions that might conflict
Check your internet connection (Copilot requires online access)
Restart your IDE if suggestions become slow
Update to the latest version of the extension
4. Security Concerns
Never paste sensitive data or credentials into Copilot
Review generated code for security vulnerabilities
Use Copilot in private repositories when possible
Be cautious with code that handles user input or authentication
Integration with Development Workflows
1. Pair Programming
Copilot can act as a third member of your pair programming session:
Generate alternative implementations for discussion
Create test cases to explore edge cases
Suggest refactoring opportunities
Help with debugging by generating test scenarios
2. Code Review
Use Copilot to enhance your code review process:
Generate additional test cases
Suggest alternative implementations
Identify potential improvements
Create documentation for complex functions
3. Learning and Exploration
Copilot is excellent for learning new technologies:
Generate examples of new language features
Create sample projects to explore frameworks
Build reference implementations
Practice with different coding patterns
Enterprise and Team Features
1. GitHub Copilot Business
Cost: $19/user/month
Features: Advanced security, compliance, and team management
Use Cases: Enterprise development teams, compliance requirements
2. GitHub Copilot Enterprise
Cost: Custom pricing
Features: Advanced security, custom models, dedicated support
Use Cases: Large enterprises, government, highly regulated industries
2. Context-Aware Prompts

Learn to write prompts that leverage your project’s context:
```python
# This function should follow the same pattern as the other API functions
# in this file, using the shared error handling and response formatting
def get_user_profile(user_id):
    ...
```
3. Testing Strategies
Use Copilot to generate comprehensive test suites:
```python
# Generate tests that cover edge cases, error conditions, and normal operation
# Use the same testing patterns as the existing test files in this project
def test_user_authentication():
    ...
```
4. Documentation Generation
Let Copilot help with documentation:
```python
# Generate comprehensive docstring following Google style
# Include examples, parameter descriptions, and return value details
def process_payment(payment_data, user_id, options=None):
    ...
```
Security and Privacy Considerations
1. Data Privacy
Copilot processes your code to provide suggestions
Avoid pasting sensitive information, credentials, or proprietary code
Use private repositories when working with confidential code
Review GitHub’s privacy policy and data handling practices
2. Code Security
Generated code may contain security vulnerabilities
Always review and test generated code
Use security scanning tools to identify potential issues
Follow security best practices for your specific domain
3. Compliance Requirements
Consider compliance requirements for your industry
Evaluate whether Copilot meets your security standards
Will the data flowing out to the service and back be acceptable to the org?
Do additional SLAs or other requirements need to be put in place?
Consult with your security team before adoption
That round trip, given Copilot is a hosted service, could pose a significant number of risks for any org.
Consider enterprise deployment for better performance
Monitor network usage and optimize as needed
3. Resource Management
Disable other AI coding extensions
Monitor memory and CPU usage
Restart IDE periodically if performance degrades (?? I’ve seen this suggestion multiple places and it bothers me immensely)
Update extensions and IDE regularly
Conclusion
GitHub Copilot represents a fundamental shift in software development, moving us from manual coding to AI-assisted development. While it’s not a replacement for understanding programming fundamentals, it’s a powerful tool that can significantly enhance your productivity and code quality.
The key to success with Copilot is learning to work with it effectively: writing clear prompts, providing good context, and always reviewing generated code. Start with the basics, practice regularly, and gradually incorporate more advanced features into your workflow.
As we move forward in this AI-augmented development era, developers who can effectively collaborate with AI tools like Copilot will have a significant advantage. The future of programming isn’t about replacing developers – albeit a whole lot of that might be happening right now – it’s more about augmenting their capabilities and enabling them to focus on higher-level problem solving and innovation.
Next Steps
Set up your GitHub Copilot subscription and install the extension
Practice with simple projects to get comfortable with the workflow
Experiment with different prompting techniques to improve suggestion quality
Integrate Copilot into your daily development routine
Share your experiences and learn from the community
Remember, mastery of AI programming tools like GitHub Copilot is a journey, not a destination. Start today, practice consistently, and you’ll be amazed at how quickly it transforms your development experience.
Next up, more on getting started with the various tools and the baseline knowledge you should have around each.
Follow me on LinkedIn, Mastodon, or Bluesky for more insights on AI programming and software development.
A Principal Engineer is a senior software engineer responsible for the design and implementation of the company’s software architecture. They are also responsible for the technical direction of the company or the team(s) they work with, and for the development of the company’s (or teams’) software engineers.
Context: What is the Agentic Era?
The Agentic Era is a new era of software development where software is built by agents. Agents are software that can learn, reason, and act (to a degree). They are able to perform tasks autonomously (theoretically) and are able to learn from their environment.
Where I Am
Over the course of the last few years, we have seen the rise of AI agents. These agents are able to perform tasks autonomously and are able to learn from their environment. They are able to perform tasks that are typically performed by humans, such as coding, design, and problem solving. This of course, has dramatically changed the way we build software already.
What I’ve written here so far is an observation of the reality we live in. I’m not trying to make a judgement call or say agentic tooling is good or bad, just merely setting the baseline of where we are. Whether you love AI Tooling or hate it or are indifferent to it, it’s here. No matter how much we discover it makes you stupid and lazy over time or other horrid things, the reality is that it is here and it is causing significant changes.
My Observations & Experience as a Principal Engineer
My experience so far, as a principal engineer – or one who does the work of a principal engineer regardless of role – is that I’ve started doing more debugging, troubleshooting, and problem solving than actual coding. Not to say I am not coding, I’m doing a ton of that, but just as much I need to bring my experience and knowledge into play to ensure the debugging, troubleshooting, and problem solving gets answers in a timely way. I also have agentic systems build things for me that previously I’d have hired junior or mid-level engineers to do. But the core of what a principal engineer does is almost the same as it was 5 or 10 years ago; it just involves agentic systems taking care of probably 50% of the code I’d have hired juniors or mid-level engineers to knock out. That work is gone.
What does this change mean overall? My personal experience lately comes down to two specific things.
We are now able to build software faster and cheaper than before, cheaper also meaning with fewer staff for longer stretches of time.
We are now able to build software that is more complex and more intelligent at a rate we couldn’t before.
Does it just help with these? No. Agentic systems can help us in many other ways too, this is just the specific two things I’ve seen occur. Let me dig into this more deeply.
One Scenario
In one scenario, working on some project work, I found the team used the tooling to effectively identify, debug, and resolve issues at a dramatically faster pace than those issues could have been dealt with before. The solutions were also more robust because of the skill and knowledge of the developers using the AI tooling. With less experienced developers, this could have created catastrophic development debt that the team couldn’t have recovered from.
Which leads presciently into the next scenario.
Another Scenario
In another scenario I found myself in as an observer (it wasn’t the particular project I was working on), I watched a team start to build a greenfield project. If you’re familiar with agentic coding, you would think this is the perfect scenario for it. However, this team lacked experience with the stack and the domain. They found themselves building out a prototype, then trying to take it and continue with it as a deployed production system. Not an entirely odd or shocking scenario.
But with the use of the agentic systems, skipping over key learning moments and not knowing the system they had built put the team in an unprepared position when the first issues came up. Within weeks of deployment they realized their lack of familiarity with what they built had effectively made them unable to troubleshoot problems.
It was literally the opposite of the first scenario. This scenario quickly became catastrophic and the project got abandoned, somewhat unceremoniously, and the team didn’t particularly learn good lessons from the experience. Sadly, even though the overall issue should have been obvious, the blame mostly fell on the agentic tooling. The fact is, the team should have realized they needed to spend substantial time reading the generated docs and the generated code, and understanding what they’d built. Instead the assumption was that the agentic system would be able to keep up with all those aspects.
In the end, it failed.
In Closing, Observations at This Point in Time
First observation among everything is that agentic tooling when used effectively is a massive game changer. A Principal Engineer, setting precedent and direction, with 1-2 teams can easily take on what 2-4 teams could do previously. But the key to it is effective use and more experienced engineers (i.e. Principal and a few seniors sprinkled in) that can ensure bugs don’t become roadblocks, and that the agentic tooling is being wielded properly.
The second observation is that if a team isn’t going to use agentic tooling effectively, it’s going to be a massive detriment to the team. It’s going to slow the team down, it’s going to create a lot of technical debt, and very likely it could derail the project to the point of failure.
For now, that’s just a few of my many observations. More to come and maybe some paired agentic code slinging! In the meantime, happy thrashing code.