AI Is Forcing Docs To Finally Grow Up

For years we talked a big game about documentation being “a product” (which I just wrote about yesterday right here), but let’s be honest: most of the industry never treated it that way. Docs were usually the afterthought stapled onto the release cycle, the box to tick for PMs, the chore no one wanted but everyone relied on. Then generative AI rolled in and quietly exposed just how brittle most documentation is. Suddenly the docs that were barely acceptable for humans became completely useless for LLMs. That gap is now forcing organizations to rethink how docs get written, structured, published, and maintained.

The shift is subtle but fundamental. We’re no longer writing solely for people and search engines. We’re writing for people, search engines, and AI models that read differently than humans but still need clarity, structure, and semantic meaning to deliver accurate results. This new audience doesn’t replace human readers; it simply demands higher quality and tighter consistency. In the process, it pushes documentation to finally become the product we always claimed it was.

Why AI Is Changing How We Write Docs

AI assistants (tooling/agents/whatever) like ChatGPT and Claude don’t “browse” docs. They parse them. They consume them through embeddings or retrieval systems. They chunk them. They analyze the relationships between sentences, headings, bullets, and examples. When a user asks an LLM a question, the model leans heavily on how well that documentation was written, how well it was structured, and how easily it can be transformed into a correct semantic representation.

When the docs are good, AI becomes the ultimate just-in-time guide. When the docs are sparse, meandering, inconsistent, or buried in PDFs, AI either hallucinates its way forward or simply fails. The AI lens exposes what humans have tolerated for years.

That is why companies are starting to optimize docs not only for readers and SEO crawlers, but for vector databases, RAG pipelines, and automated summarizers. The end result benefits everyone. Better structured content helps AI perform better and human readers navigate faster. AI becomes a multiplier for great doc systems and a harsh critic for bad ones.
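To make that concrete, here is a minimal sketch of the heading-based chunking a documentation RAG pipeline might do before embedding content. The function and the 1,500-character cap are illustrative assumptions, not any specific product’s behavior.

import re

def chunk_markdown_by_heading(markdown_text, max_chars=1500):
    """Split markdown into heading-scoped chunks suitable for embedding."""
    # Split at every markdown heading so each chunk keeps its own context
    sections = re.split(r"(?m)^(?=#{1,6} )", markdown_text)
    chunks = []
    for section in sections:
        section = section.strip()
        # Oversized sections get split again at paragraph boundaries
        while len(section) > max_chars:
            cut = section.rfind("\n\n", 0, max_chars)
            cut = cut if cut > 0 else max_chars
            chunks.append(section[:cut].strip())
            section = section[cut:].strip()
        if section:
            chunks.append(section)
    return chunks

Notice that the quality of those chunks depends entirely on the structure the docs already have, which is exactly what the criteria below are about.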

What Makes Great Modern Documentation Now

Modern documentation can’t just be readable. It has to be machine-digestible, SEO-friendly, and human-friendly at the same time. After picking through dozens of doc systems and tearing apart patterns in both good and terrible documentation, here is what consistently shows up in the good stuff.

The Criteria

  1. Clear, hierarchical structure using consistent headings
  2. Small, semantically meaningful chunks that can be indexed cleanly
  3. Realistic examples, not toy snippets
  4. Explicit pathfinding: quickstart, deeper guides, reference, troubleshooting
  5. Direct language without fluff
  6. Predictable URLs and logical navigation trees
  7. Copy-pastable examples that actually work
  8. Strong inbound and outbound linking
  9. No PDF dumping ground
  10. Schema, config, API, and CLI references that are complete, not partial
  11. Contextual explanations right next to code samples
  12. Versioning that doesn’t break links every release
  13. Upgrade guides that don’t pretend breaking changes are rare
  14. A single authoritative source of truth instead of fractured side systems
  15. Accessible to LLMs: consistent formatting, predictable patterns, clean text, no wild markdown gymnastics

Nothing magical here. Most teams already know these rules. AI just stops letting you ignore them.

Five Examples Of Documentation That Nails It

Below are five strong documentation ecosystems. Each one does something particularly well and gives AI models enough structure to be genuinely useful when parsing or answering questions. I’ll break down why each works and how it maps to the criteria above.

1. Stripe API Docs

https://stripe.com/docs/api

Stripe has been the gold standard for a while. Even after dozens of competitors tried to clone the style, Stripe still leads because they iterate constantly and keep everything ruthlessly consistent.

Why it’s great
• Every endpoint is its own semantic block. LLMs love that.
• Request and response examples are always complete, never partial.
• Navigation is predictable and deep linking is stable.
• They pair conceptual docs, quickstarts, and reference material without overlap.
• All examples are real world and cross language.

How it maps to the criteria
• Structured headings and deep linking check 1, 6, and 12.
• Chunking and semantic units check 2 and 15.
• Real examples and direct language check 3 and 5.
• Pathfinding is excellent which checks 4.
• Copy-pasteable working examples check 7.

2. MDN Web Docs

https://developer.mozilla.org

MDN has decades of content, but it’s shockingly consistent, well-maintained, and semantically structured. It’s one of the best corpora for training and grounding AI models in web fundamentals.

Why it’s great
• Long history yet content stays current.
• Clear separation of reference vs guides vs tutorials.
• Canonical examples for everything the web platform offers.
• Clean, predictable markdown structure across thousands of pages.

How it maps
• Nearly perfect hierarchy and predictable formatting check 1 and 15.
• Chunked explanations with immediately adjacent examples check 2 and 11.
• Stable URLs for almost everything check 6 and 12.
• Strong pathfinding check 4.

3. HashiCorp Terraform Docs

https://developer.hashicorp.com/terraform/docs

Terraform’s documentation is extremely structured, which makes it exceptionally machine readable.

Why it’s great
• Providers, resources, and data sources follow identical templates.
• Every argument and attribute is listed with exact behavior.
• Examples aren’t fluff, they reflect real infrastructure patterns.
• Cross linking between providers and core Terraform concepts is tight.

How it maps
• The template system hits 1, 2, 6, 10, 11, and 15.
• Cross linking and clear navigation cover 8.
• Complete reference material covers 10.
• Realistic examples check 3 and 7.

4. Kubernetes Documentation

https://kubernetes.io/docs/home

Kubernetes docs are huge, maybe too huge, but they’re structured well enough that LLMs and humans can still navigate them without losing their minds.

Why it’s great
• Strong concept guides and operator manuals.
• Structured task pages with prerequisites and step-by-step clarity.
• Reference pages built from source-of-truth schemas.
• Thoughtful linking between concepts and tasks.

How it maps
• Strong hierarchy and navigation hit 1 and 6.
• Machine readable chunks via consistent template patterns hit 2 and 15.
• Clear examples and commands check 3 and 7.
• Having both reference and conceptual breakdowns checks 4, 10, and 11.

5. Supabase Docs

https://supabase.com/docs

Supabase’s docs are modern, developer-focused, and written with obvious attention to how AI and search engines consume content. They basically optimized for RAG without ever claiming they did.

Why it’s great
• APIs, client libraries, schema definitions, and guides all interlink tightly.
• Clear quickstarts that become progressively more advanced.
• Rich examples spanning REST, RPC, SQL, and client SDKs.
• Consistent layouts across different product surfaces.

How it maps
• Strong pathfinding and multi-surface linking check 4 and 8.
• Full reference material checks 10.
• Predictable structure and formatting check 1 and 15.
• Example-rich guides check 3, 7, and 11.

Documentation Is Finally Being Treated As A Real Product

The interesting thing is that AI didn’t magically fix documentation. It simply raised expectations. Companies now need their documentation to be clean, complete, structured, predictable, link-friendly, example-rich, and semantically coherent because that is the only way AI can navigate it and support users in meaningful ways. This pressure is good. It forces consistency. It rewards clarity. It makes the entire documentation discipline more rigorous.

The companies that embrace this will have far better support funnels, drastically fewer user frustrations, higher product adoption, and an ecosystem that AI can actually help with instead of stumbling through. The ones that don’t will keep wondering why users stay confused and why their AI chatbots give terrible answers.

Documentation has always been a product. AI is just the first thing that has held us accountable to that truth.

VS Code & Copilot: The Chat-First Spec Definition Method

My initial review of Copilot and getting started is available here.

(What it does well – and why it’s not magic, but almost!)

Let me be clear: the Copilot Chat feature in VS Code can feel like a miracle until it’s not. When it’s working, you fire off a multi-line prompt defining what you want: “Build a function for X, validate Y, return Z…” and boom, VS Code’s inline chat generates a draft that is scary good.

  • What actually wins: It interprets your specification in context – your open file, project imports, naming conventions – and spits out runnable sample code. That’s not trivial; even reputable models often lose that contextual thread. Here, the chat lives in your editor, not detached, and that nuance matters.
    It’s like sketching the spec in natural language, then having VS Code autocomplete not just code but entire behavior.
  • What you still have to do: Take a breath, le sigh, and read it. Always. Control flow, edge cases, off-by-one errors: Copilot doesn’t care. Security? Data leakage? All on you. Copilot doesn’t own the logic; it just stitches together patterns it’s seen. You own the correctness.
  • Trick that matters: Iterate. Ask follow-ups: “Okay, now handle invalid inputs by throwing InvalidArgumentException,” or “Refactor this to async/await.” Having a chat continuum in the editor is powerful, but don’t forget it’s your spec, not the AI’s.

Technique 2: Prompt With Skeleton First

Don’t just describe the behavior in prose. Instead, scaffold it:

// Function: validateUserInput
// Takes { name: string, age: number }
// Returns { valid: boolean, errors: string[] }
// Edge cases: missing name, non-numeric age

function validateUserInput(input) {
  // ...
}

Then let Copilot fill in the body. Why this rocks:

  • You’re giving structure: types, return shapes, edge conditions.
  • The auto-generated code fits into your skeleton, adhering to your naming and your data model.
  • You retain control over boundaries, types, and structure even before Copilot chimes in.

Downside? If your skeleton is misleading or incomplete, Copilot will “fill in” confidently with code that compiles but does the wrong thing. Again, your code review has to rule.
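The skeleton-first pattern is language agnostic. Here is a rough Python equivalent of that skeleton, along with the kind of body Copilot tends to fill in. This is a sketch of typical output, not a guaranteed result.

# Function: validate_user_input
# Takes {"name": str, "age": int}
# Returns {"valid": bool, "errors": list[str]}
# Edge cases: missing name, non-numeric age

def validate_user_input(user_input):
    errors = []
    name = user_input.get("name")
    age = user_input.get("age")
    if not name or not isinstance(name, str):
        errors.append("name is required and must be a string")
    # bool is a subclass of int, so exclude it explicitly
    if not isinstance(age, (int, float)) or isinstance(age, bool):
        errors.append("age must be a number")
    return {"valid": not errors, "errors": errors}

The review rule still applies: the skeleton constrains the shape, but the edge-case handling is only as good as the comments that described it.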

Technique 3: In-Context Refactoring Conversations (AKA “Let me fix your mess, Copilot”)

Ever accepted a Copilot suggestion, then hated it? Instead of discarding it, open Copilot Chat:

  • Ask it: “Refactor this to reduce nesting and improve readability,” or “Convert this to use .reduce() instead of .forEach().”
  • Watch it rewrite within the same context, rather than throwing tangential code at you.

That’s one of its massive values – context-aware surgical refactoring – not a blanket “clean this up” that ends with a variable naming scheme or method order different from your repo’s.
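To make that concrete, here is the kind of before/after a “reduce nesting and improve readability” request typically produces. This is my own Python illustration, not literal Copilot output.

# Before: three levels of nesting to collect active admin emails
def active_admin_emails(users):
    result = []
    for user in users:
        if user.get("active"):
            if user.get("role") == "admin":
                if user.get("email"):
                    result.append(user["email"])
    return result

# After: one comprehension, same behavior, far less nesting
def active_admin_emails(users):
    return [
        user["email"]
        for user in users
        if user.get("active") and user.get("role") == "admin" and user.get("email")
    ]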

The catch: refactor prompts depend on Copilot’s parsing of your style. If your code is sloppy, it’s going to be sloppily refactored. So yes, you still have to keep code clean, comment clearly, and limit complexity. Copilot is the editor version of duct tape, not a refactor wizard.

The Brutal Truth

  • VS Code + Copilot isn’t a magical co-developer. It’s a smart auto-completer with chat, living in your IDE, context-aware but utterly obedient to your prompts.
  • The trick is not the AI; it’s how you lead it. The better your spec, skeleton, or prompt, the better your code.
  • Your skeptical, questioning, pragmatic style fits perfectly. You don’t let it ride; you interrogate. And that’s exactly how it should be.

TL;DR Summary

| Technique | What Works | What Fails Without You |
| --- | --- | --- |
| Chat-first spec | Detailed natural-language spec → meaningful code | No spec clarity → garbage logic |
| Skeleton prompts | Provides structure, types, expectations | Bad skeleton = bad code, fast |
| In-editor refactoring chat | Context-preserving improvements | Messy code → messy refactor |

If you want more details on how to integrate Copilot into CI, or want my personal prompt templates, drop a comment below and I’ll tackle it head-on next time.

GitHub Copilot: A Getting Started Guide

Note: I’ve decided to start writing up the multitude of AI tools and tooling out there, and this is the first of many posts on the topic. This post is effectively a baseline of what one should be familiar with to get rolling with GitHub Copilot. As I add posts, I’ll link them at the bottom of this post so the different tools are easy to find, and back-reference them to this post as well. With that, let’s roll…

Intro

GitHub Copilot is thoroughly changing how developers write code, serving as a kind of industry standard – almost – for AI-powered code completion and generation. As someone who’s been in software development for over two decades, I’ve seen many tools come and go, but the modern variant of Copilot represents a fundamental shift in how we approach coding – it’s not just a tool; it’s a new paradigm for human-AI collaboration in software development.

In this comprehensive guide, I’ll walk you through everything you need to know to get started with GitHub Copilot, from basic setup to advanced features that will transform your development workflow.

What is GitHub Copilot?

GitHub Copilot is an AI-powered code completion tool that acts as your virtual pair programmer. Originally built on OpenAI’s Codex model and since moved to newer models, it is trained on billions of lines of public code, making it incredibly adept at understanding context, suggesting completions, and even generating entire functions based on your comments and existing code.

Key Capabilities

  • Real-time code suggestions as you type
  • Comment-to-code generation from natural language descriptions
  • Multi-language support across 50+ programming languages
  • Context-aware completions that understand your project structure
  • IDE integration with VS Code, Visual Studio, Neovim, and JetBrains IDEs

Getting Started: Setup and Installation

Prerequisites

  • A GitHub account
  • An active GitHub Copilot subscription or trial
  • A supported IDE: VS Code, Visual Studio, a JetBrains IDE, or Neovim

Installation Steps

1. Subscribe to GitHub Copilot

  • Enable a Copilot plan or trial for your GitHub account at github.com/features/copilot

2. Install the Extension

  • VS Code: Search for “GitHub Copilot” in the Extensions marketplace
  • Visual Studio: Install from Visual Studio Marketplace
  • JetBrains IDEs: Install from JetBrains Marketplace
  • Neovim: Use copilot.vim or copilot.lua

3. Authenticate

  • Sign in to your GitHub account when prompted
  • Authorize the extension to access your account
  • Verify your Copilot subscription is active
[Screenshot: the Visual Studio Code welcome interface, showing options for opening chat features, managing code completions, and accessing recent projects.]

Core Features and How to Use Them

1. Inline Suggestions

Copilot provides real-time code suggestions as you type. These appear as gray “ghost text” that you can accept by pressing Tab.

# Start typing the signature below and Copilot will suggest the body
def calculate_compound_interest(principal, rate, time, compounds_per_year):
    # Copilot will suggest the complete implementation
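For reference, the completion Copilot typically offers here is a direct application of the compound interest formula. The exact output varies; this is a representative sketch.

def calculate_compound_interest(principal, rate, time, compounds_per_year):
    # A = P * (1 + r/n) ** (n * t), returned as the final amount
    return principal * (1 + rate / compounds_per_year) ** (compounds_per_year * time)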

2. Comment-to-Code Generation

One of Copilot’s most powerful features is generating code from natural language comments.

// Create a function that validates email addresses using regex
// Copilot will generate the complete function with proper validation
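The result usually looks something like the sketch below, shown in Python to match the rest of this guide’s examples; a JavaScript prompt would get the equivalent with a RegExp. Treat the pattern as a starting point, not a full RFC 5322 validator.

import re

EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(address):
    """Return True if the address has a basic user@domain.tld shape."""
    return bool(EMAIL_PATTERN.match(address))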

3. Function Completion

Start typing a function and let Copilot complete it based on context:

def process_user_data(user_input):
    # Start typing and Copilot will suggest the next lines
    if not user_input:
        return None
    
    # Continue with the implementation

4. Test Generation

Copilot can generate test cases for your functions:

def add_numbers(a, b):
    return a + b

# Type "test" or "def test_" and Copilot will suggest test functions
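A typical suggestion looks like the following pytest-style sketch. The exact cases vary, which is precisely why you should read them before accepting.

def test_add_numbers():
    assert add_numbers(2, 3) == 5
    assert add_numbers(-1, 1) == 0
    assert add_numbers(0, 0) == 0
    assert add_numbers(2.5, 0.5) == 3.0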

Advanced Features and Techniques

1. Multi-line Completions

Copilot often proposes several lines at once. Press Tab to accept the entire suggestion, or press Esc to dismiss it.

2. Alternative Suggestions

When Copilot suggests code, press Alt + [ or Alt + ] to cycle through alternative suggestions.

3. Inline Chat (Copilot Chat)

The newer Copilot Chat feature allows you to have conversations about your code:

  • Press Ctrl + I (or Cmd + I on Mac) to open inline chat
  • Ask questions about your code
  • Request refactoring suggestions
  • Get explanations of complex code sections

4. Custom Prompts

Learn to write effective prompts for better code generation:

Good prompts:

# Create a REST API endpoint that accepts POST requests with JSON data,
# validates the input, and returns a success response with status code 201

Less effective prompts:

# Make an API endpoint
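The difference shows up immediately in the output. The detailed prompt tends to yield something close to a complete, runnable endpoint, while the vague one leaves Copilot guessing at the framework, the validation, and the status code. The sketch below assumes Flask purely for illustration; Copilot will follow whatever framework your file already imports.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/items", methods=["POST"])
def create_item():
    # Validate the JSON body before doing anything with it
    data = request.get_json(silent=True)
    if not data or "name" not in data:
        return jsonify({"success": False, "error": "JSON body with a 'name' field is required"}), 400
    # A real implementation would persist the item here
    return jsonify({"success": True, "data": data}), 201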

Best Practices for Effective Copilot Usage

1. Write Clear Comments

The quality of Copilot’s suggestions directly correlates with the clarity of your comments and context.

# Good: Clear, specific description
def parse_csv_file(file_path, delimiter=',', skip_header=True):
    """
    Parse a CSV file and return a list of dictionaries.
    
    Args:
        file_path (str): Path to the CSV file
        delimiter (str): Character used to separate fields
        skip_header (bool): Whether to skip the first row as header
    
    Returns:
        list: List of dictionaries where keys are column names
    """

2. Provide Context

Help Copilot understand your project structure and coding style:

# This function follows the project's error handling pattern
# and uses the standard logging configuration
def process_payment(payment_data):

3. Review Generated Code

Always review and test code generated by Copilot:

  • Check for security vulnerabilities
  • Ensure it follows your project’s coding standards
  • Verify the logic matches your requirements
  • Run tests to confirm functionality

4. Iterative Refinement

Use Copilot as a starting point, then refine the code:

  • Accept the initial suggestion
  • Modify it to match your specific needs
  • Ask Copilot to improve specific aspects
  • Iterate until you have the desired result

Language-Specific Tips

Python

  • Copilot excels at Python due to its extensive training data
  • Great for data science, web development, and automation scripts
  • Excellent at generating docstrings and type hints

JavaScript/TypeScript

  • Strong support for modern ES6+ features
  • Good at React, Node.js, and frontend development patterns
  • Effective at generating test files and API clients

Java

  • Good support for Spring Boot and enterprise patterns
  • Effective at generating boilerplate code and tests
  • Strong understanding of Java conventions

Go

  • Growing support with good understanding of Go idioms
  • Effective at generating HTTP handlers and data structures
  • Good at following Go best practices

Troubleshooting Common Issues

1. Suggestions Not Appearing

  • Verify your Copilot subscription is active
  • Check that you’re signed into the correct GitHub account
  • Restart your IDE after authentication
  • Ensure the extension is properly installed and enabled

2. Poor Quality Suggestions

  • Improve your comments and context
  • Check that your file has the correct language extension
  • Provide more context about your project structure
  • Use more specific prompts

3. Performance Issues

  • Disable other AI coding extensions that might conflict
  • Check your internet connection (Copilot requires online access)
  • Restart your IDE if suggestions become slow
  • Update to the latest version of the extension

4. Security Concerns

  • Never paste sensitive data or credentials into Copilot
  • Review generated code for security vulnerabilities
  • Use Copilot in private repositories when possible
  • Be cautious with code that handles user input or authentication

Integration with Development Workflows

1. Pair Programming

Copilot can act as a third member of your pair programming session:

  • Generate alternative implementations for discussion
  • Create test cases to explore edge cases
  • Suggest refactoring opportunities
  • Help with debugging by generating test scenarios

2. Code Review

Use Copilot to enhance your code review process:

  • Generate additional test cases
  • Suggest alternative implementations
  • Identify potential improvements
  • Create documentation for complex functions

3. Learning and Exploration

Copilot is excellent for learning new technologies:

  • Generate examples of new language features
  • Create sample projects to explore frameworks
  • Build reference implementations
  • Practice with different coding patterns

Enterprise and Team Features

1. GitHub Copilot Business

  • Cost: $19/user/month
  • Features: Advanced security, compliance, and team management
  • Use Cases: Enterprise development teams, compliance requirements

2. GitHub Copilot Enterprise

  • Cost: Custom pricing
  • Features: Advanced security, custom models, dedicated support
  • Use Cases: Large enterprises, government, highly regulated industries

3. Team Management

  • Centralized billing and user management
  • Usage analytics and reporting
  • Security and compliance features
  • Integration with enterprise identity providers

Resources and Further Learning

Official Resources

Third-Party Tutorials and Guides

Advanced Techniques and Pro Tips

1. Custom Snippets and Templates

Create custom snippets that work well with Copilot:

// VS Code snippets.json
{
  "API Endpoint": {
    "prefix": "api-endpoint",
    "body": [
      "app.post('/${1:endpoint}', async (req, res) => {",
      "  try {",
      "    const { ${2:params} } = req.body;",
      "    ${3:// Copilot will suggest validation and processing logic}",
      "    res.status(201).json({ success: true, data: result });",
      "  } catch (error) {",
      "    res.status(500).json({ success: false, error: error.message });",
      "  }",
      "});"
    ]
  }
}

2. Context-Aware Prompts

Learn to write prompts that leverage your project’s context:

# This function should follow the same pattern as the other API functions
# in this file, using the shared error handling and response formatting
def get_user_profile(user_id):

3. Testing Strategies

Use Copilot to generate comprehensive test suites:

# Generate tests that cover edge cases, error conditions, and normal operation
# Use the same testing patterns as the existing test files in this project
def test_user_authentication():

4. Documentation Generation

Let Copilot help with documentation:

# Generate comprehensive docstring following Google style
# Include examples, parameter descriptions, and return value details
def process_payment(payment_data, user_id, options=None):
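With a prompt like that, the docstring Copilot produces usually looks something like the sketch below. The field names and examples are my assumptions; yours will reflect the actual payment code in your project.

def process_payment(payment_data, user_id, options=None):
    """Process a payment on behalf of a user.

    Args:
        payment_data (dict): Payment details such as amount, currency, and method.
        user_id (str): Identifier of the paying user.
        options (dict, optional): Extra flags, for example {"retry": True}. Defaults to None.

    Returns:
        dict: A result payload, for example {"status": "succeeded", "transaction_id": "tx_123"}.

    Raises:
        ValueError: If required payment fields are missing.
    """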

Security and Privacy Considerations

1. Data Privacy

  • Copilot processes your code to provide suggestions
  • Avoid pasting sensitive information, credentials, or proprietary code
  • Use private repositories when working with confidential code
  • Review GitHub’s privacy policy and data handling practices

2. Code Security

  • Generated code may contain security vulnerabilities
  • Always review and test generated code
  • Use security scanning tools to identify potential issues
  • Follow security best practices for your specific domain

3. Compliance Requirements

  • Consider compliance requirements for your industry
  • Evaluate whether Copilot meets your security standards
    • Is the organization comfortable with the data that goes out to, and comes back from, the service?
    • Do additional SLAs or other requirements need to be put in place?
  • Consult with your security team before adoption
    • Because code leaves your environment and comes back from a hosted service, that round trip can pose significant risks for any org.
  • Document usage policies and guidelines

Performance Optimization

1. IDE Configuration

Optimize your IDE for better Copilot performance:

// VS Code settings.json
{
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
    "markdown": false,
    "scminput": false
  },
  "github.copilot.suggestions": {
    "enable": true,
    "showInlineSuggestions": true
  }
}

2. Network Optimization

  • Ensure stable internet connection
  • Use VPN if required by your organization
  • Consider enterprise deployment for better performance
  • Monitor network usage and optimize as needed

3. Resource Management

  • Disable other AI coding extensions
  • Monitor memory and CPU usage
  • Restart your IDE periodically if performance degrades (?? I’ve seen this suggestion in multiple places and it bothers me immensely)
  • Update extensions and IDE regularly

Conclusion

GitHub Copilot represents a fundamental shift in software development, moving us from manual coding to AI-assisted development. While it’s not a replacement for understanding programming fundamentals, it’s a powerful tool that can significantly enhance your productivity and code quality.

The key to success with Copilot is learning to work with it effectively: writing clear prompts, providing good context, and always reviewing generated code. Start with the basics, practice regularly, and gradually incorporate more advanced features into your workflow.

As we move forward in this AI-augmented development era, developers who can effectively collaborate with AI tools like Copilot will have a significant advantage. The future of programming isn’t about replacing developers – although a whole lot of that might be happening right now – it’s about augmenting their capabilities and enabling them to focus on higher-level problem solving and innovation.

Next Steps

  1. Set up your GitHub Copilot subscription and install the extension
  2. Practice with simple projects to get comfortable with the workflow
  3. Experiment with different prompting techniques to improve suggestion quality
  4. Integrate Copilot into your daily development routine
  5. Share your experiences and learn from the community

Remember, mastery of AI programming tools like GitHub Copilot is a journey, not a destination. Start today, practice consistently, and you’ll be amazed at how quickly it transforms your development experience.

Next up, more on getting started with the various tools and the baseline knowledge you should have around each.

Follow me on LinkedIn, Mastodon, or Bluesky for more insights on AI programming and software development.

Staying in Software Dev? Best be able to just do things!

I sat down recently and started reading through some articles. Of the 20 or so I was reading, the one that stood out from the bunch was something Geoffrey Huntley wrote. In “The future belongs to people who can just do things” he brings up points that I – and I think a LOT of people out there like Geoffrey and me – have been thinking about over the preceding months. Let’s delve into a few of those thoughts, paraphrased and elaborated on.

  • First and foremost, for those coders that have been making a living writing those artisanal, bespoke, hand crafted, single lines of thought out code – your time is nigh.
  • Second, if you’re one of those coders that churns out code, but you don’t care or don’t think about the bigger picture of the product you’re working on, you’re also in for a rude awakening.
  • Third, if you have your environment or your stack that you build with, and don’t explore much beyond that stack (i.e. you’re specialized in .NET or Java or iOS) and rarely touch anything outside of that singular stack, you’re running straight at a brick wall.
  • Fourth, if you’re pedantic about every line of code, every single feature, in a way that doesn’t further progress product but you love to climb up on that hill to die arguing about the state of the code base, you’re going to be left up on the hill to starve to death.

Beyond the developers: if you’re a technical product manager who can’t implement, or a coder who can’t think product, and you don’t understand the overlap, I’d bet you’ll run into some very serious issues in the coming years. If you’re a manager, get ready to mitigate all of the above, and above all get ready to deal with a lot of upheaval. If you keep hiring the narrowly focused, inflexible engineers described above, you’re likely to be the one falling on that sword for your team. Needless to say, there are very rough waters ahead.

That paints the picture of where the industry is right now and who is at greatest risk of being cast out, unable to realign and move forward with the industry – nay, the world – in the coming weeks, months, and years. I’m not even going to get into why, what for, or how we got here. That’s a whole other article. I’m just going to focus on the now and the future.

Continue reading “Staying in Software Dev? Best be able to just do things!”

AI Prompt Engineering: Mastering Language Constructs

In the spirit of expanding upon the ideas laid out in Precision in Words, Precision in Code: The Power of Writing in Modern Development, I delve further into the precision (where it is precise) of English, and by extension into the nuances of other language constructs, which serve as powerful tools when crafting prompts for AI systems. My exploration here, drawn from a few things I’ve discovered through deduction and some trial and error, underscores the importance of choosing words with care. It also illuminates how language patterns can trigger distinct model behaviors.

Continue reading “AI Prompt Engineering: Mastering Language Constructs”