AI Is Forcing Docs To Finally Grow Up

For years we talked a big game about documentation being “a product” (which I just wrote about yesterday right here) but let’s be honest, most of the industry never treated it that way. Docs were usually the afterthought stapled onto the release cycle, the box to tick for PMs, the chore no one wanted but everyone relied on. Then generative AI rolled in and quietly exposed just how brittle most documentation is. Suddenly the docs that were just barely acceptable for humans became completely useless for LLMs. That gap is now forcing organizations to rethink how docs get written, structured, published, and maintained.

The shift is subtle but fundamental. We’re no longer writing solely for people and search engines. We’re writing for people, search engines, and AI models that read differently than humans but still need clarity, structure, and semantic meaning to deliver accurate results. This new audience doesn’t replace human readers, it simply demands higher quality and tighter consistency. In the process, it pushes documentation to finally become the product we always claimed it was.

Why AI Is Changing How We Write Docs

AI assistants (tooling/agents/whatever) like ChatGPT and Claude don’t “browse” docs. They parse them. They consume them through embeddings or retrieval systems. They chunk them. They analyze the relationships between sentences, headings, bullets, and examples. When a user asks a question to an LLM, the model is leaning heavily on how well that documentation was written, how well it was structured, and how easily it can be transformed into a correct semantic representation.

When the docs are good, AI becomes the ultimate just-in-time guide. When the docs are sparse, meandering, inconsistent, or buried in PDFs, AI either hallucinates its way forward or simply fails. The AI lens exposes what humans have tolerated for years.

That is why companies are starting to optimize docs not only for readers and SEO crawlers, but for vector databases, RAG pipelines, and automated summarizers. The end result benefits everyone. Better structured content helps AI perform better and human readers navigate faster. AI becomes a multiplier for great doc systems and a harsh critic for bad ones.
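To make that concrete, here’s a minimal sketch of the chunking step a typical RAG pipeline runs before embedding your docs. This is illustrative Python under simple assumptions (split on markdown headings, cap chunk size), not any specific vendor’s pipeline:

import re

def chunk_markdown(text, max_chars=1200):
    # Cut at H1-H3 headings so each chunk is a self-contained semantic unit.
    sections = re.split(r"(?m)^(?=#{1,3} )", text)
    chunks = []
    for section in sections:
        section = section.strip()
        if not section:
            continue
        # Oversized sections get split at paragraph boundaries.
        while len(section) > max_chars:
            cut = section.rfind("\n\n", 0, max_chars)
            cut = cut if cut > 0 else max_chars
            chunks.append(section[:cut].strip())
            section = section[cut:].strip()
        if section:
            chunks.append(section)
    return chunks

Docs with clear, consistent headings come out of this as clean, self-contained chunks. Wall-of-text pages come out as arbitrary cuts that embed poorly, and the model’s answers degrade accordingly.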

What Makes Great Modern Documentation Now

Modern documentation can’t just be readable. It has to be machine digestible, SEO friendly, and human friendly at the same time. After picking through dozens of doc systems and tearing apart patterns in both good and terrible documentation, here is what consistently shows up in the good stuff.

The Criteria

  1. Clear, hierarchical structure using consistent headings
  2. Small, semantically meaningful chunks that can be indexed cleanly
  3. Realistic examples, not toy snippets
  4. Explicit pathfinding: quickstart, deeper guides, reference, troubleshooting
  5. Direct language without fluff
  6. Predictable URLs and logical navigation trees
  7. Copy-pastable examples that actually work
  8. Strong inbound and outbound linking
  9. No PDF dumping ground
  10. Schema, config, API, and CLI references that are complete, not partial
  11. Contextual explanations right next to code samples
  12. Versioning that doesn’t break links every release
  13. Upgrade guides that don’t pretend breaking changes are rare
  14. A single authoritative source of truth instead of fractured side systems
  15. Accessible to LLMs: consistent formatting, predictable patterns, clean text, no wild markdown gymnastics

Nothing magical here. Most teams already know these rules. AI just stops letting you ignore them.

Five Examples Of Documentation That Nails It

Below are five strong documentation ecosystems. Each one does something particularly well and gives AI models enough structure to be genuinely useful when parsing or answering questions. I’ll break down why each works and how it maps to the criteria above.

1. Stripe API Docs

https://stripe.com/docs/api

Stripe has been the gold standard for a while. Even after dozens of competitors tried to clone the style, Stripe still leads because they iterate constantly and keep everything ruthlessly consistent.

Why it’s great
• Every endpoint is its own semantic block. LLMs love that.
• Request and response examples are always complete, never partial.
• Navigation is predictable and deep linking is stable.
• They pair conceptual docs, quickstarts, and reference material without overlap.
• All examples are real world and cross language.

How it maps to the criteria
• Structured headings and deep linking check 1, 6, and 12.
• Chunking and semantic units check 2 and 15.
• Real examples and direct language check 3 and 5.
• Pathfinding is excellent which checks 4.
• Copy-pasteable working examples check 7.

2. MDN Web Docs

https://developer.mozilla.org

MDN has decades of content, but it’s shockingly consistent, well-maintained, and semantically structured. It’s one of the best corpora for training and grounding AI models in web fundamentals.

Why it’s great
• Long history yet content stays current.
• Clear separation of reference vs guides vs tutorials.
• Canonical examples for everything the web platform offers.
• Clean, predictable markdown structure across thousands of pages.

How it maps
• Nearly perfect hierarchy and predictable formatting check 1 and 15.
• Chunked explanations with immediately adjacent examples check 2 and 11.
• Stable URLs for almost everything check 6 and 12.
• Strong pathfinding check 4.

3. HashiCorp Terraform Docs

https://developer.hashicorp.com/terraform/docs

Terraform’s documentation is extremely structured, which makes it exceptionally machine readable.

Why it’s great
• Providers, resources, and data sources follow identical templates.
• Every argument and attribute is listed with exact behavior.
• Examples aren’t fluff, they reflect real infrastructure patterns.
• Cross linking between providers and core Terraform concepts is tight.

How it maps
• The template system hits 1, 2, 6, 10, 11, and 15.
• Cross linking and clear navigation cover 8.
• Complete reference material covers 10.
• Realistic examples check 3 and 7.

4. Kubernetes Documentation

https://kubernetes.io/docs/home

Kubernetes docs are huge, maybe too huge, but they’re structured well enough that LLMs and humans can still navigate them without losing their minds.

Why it’s great
• Strong concept guides and operator manuals.
• Structured task pages with prerequisites and step-by-step clarity.
• Reference pages built from source-of-truth schemas.
• Thoughtful linking between concepts and tasks.

How it maps
• Strong hierarchy and navigation hit 1 and 6.
• Machine readable chunks via consistent template patterns hit 2 and 15.
• Clear examples and commands check 3 and 7.
• Having both reference and conceptual breakdowns checks 4, 10, and 11.

5. Supabase Docs

https://supabase.com/docs

Supabase’s docs are modern, developer-focused, and written with obvious attention to how AI and search engines consume content. They basically optimized for RAG without ever claiming they did.

Why it’s great
• APIs, client libraries, schema definitions, and guides all interlink tightly.
• Clear quickstarts that become progressively more advanced.
• Rich examples spanning REST, RPC, SQL, and client SDKs.
• Consistent layouts across different product surfaces.

How it maps
• Strong pathfinding and multi-surface linking check 4 and 8.
• Full reference material checks 10.
• Predictable structure and formatting check 1 and 15.
• Example-rich guides check 3, 7, and 11.

Documentation Is Finally Being Treated As A Real Product

The interesting thing is that AI didn’t magically fix documentation. It simply raised expectations. Companies now need their documentation to be clean, complete, structured, predictable, link-friendly, example-rich, and semantically coherent because that is the only way AI can navigate it and support users in meaningful ways. This pressure is good. It forces consistency. It rewards clarity. It makes the entire documentation discipline more rigorous.

The companies that embrace this will have far better support funnels, drastically fewer user frustrations, higher product adoption, and an ecosystem that AI can actually help with instead of stumbling through. The ones that don’t will keep wondering why users stay confused and why their AI chatbots give terrible answers.

Documentation has always been a product. AI is just the first thing that has held us accountable to that truth.

Polyglot Conference Vancouver 2024: Real Talk About AI, Industry Hubris, and the Art of Unconferencing

Just got back from another incredible Polyglot Conference in Vancouver, and I’m still processing everything that went down. There’s something magical about this event – it’s not your typical conference with polished presentations and vendor booth nonsense. It’s an unconference, which means the real magic happens in the conversations, the debates, and the genuine human connections that form when you put a room full of smart, opinionated developers together and let them talk about what actually matters.

The People Make the Conference

It was excellent to meet so many new people and catch up with friends I’ve not gotten to see in some time! This is what makes Polyglot special – it’s not just about the content, it’s about the community. I found myself in conversations with developers from startups to enterprise, from different countries and backgrounds, all bringing their unique perspectives to the table.

There’s something refreshing about being in a room where everyone is there because they genuinely want to be there, not because their company sent them or because they’re trying to sell something. The conversations flow naturally, the questions are real, and the debates are substantive. No one’s trying to impress anyone with buzzwords or corporate speak (although we’ll often laugh our asses off at the nonsense of corp speak and marketecture).

I caught up with folks I hadn’t seen since before the pandemic, met new faces who are doing interesting work, and had those serendipitous hallway conversations that often lead to the most valuable insights. The kind of conversations where you’re still talking an hour later, completely forgetting that there’s a scheduled session happening somewhere else.

The Unconference Format: Getting to the Heart of Things

The sessions were, as always with an unconference, jam-packed with content, and when we dove in we got to the heart of the topics real quick. This is the beauty of the unconference format – there’s no time for fluff or corporate posturing. People show up with real problems, real experiences, and real opinions, and we get straight to the point.

Unlike traditional conferences where you sit through 45-minute presentations that could have been 15-minute talks, unconference sessions are dynamic and responsive. Someone brings up a topic, the group decides if it’s worth exploring, and then we dive deep. If the conversation isn’t going anywhere, we pivot. If it’s getting interesting, we keep going. The format respects everyone’s time and intelligence.

The sessions I participated in covered everything from microservices architecture to team dynamics in the face of agentic AI tooling, from introspecting databases with AI tooling to the future of programming languages in the face of AI tooling. But the most compelling discussions were around AI – not the hype, not the marketing, but the real-world implications of what we’re building and how it’s changing our industry – for better or worse – and there’s a lot of expectation that it’s bringing a lot of the latter.

Coping with AI: The Real Talk

Some of the talks included coping with AI, and just the general insanity that surrounds the technology and the hubris of the industry right now. This is where things got really interesting, because we weren’t talking about AI in the abstract or as some distant future possibility. We were talking about it as a present reality that’s already reshaping how we work, think, and build software.

The “coping with AI” discussion was particularly revealing. We’re not talking about how to use AI tools effectively – that’s the easy part. We’re talking about how to maintain our sanity and professional integrity in an industry that’s gone completely off the rails with AI hype and magical thinking.

The Insanity of AI Hype

The insanity surrounding AI right now is breathtaking. Every company is trying to cram AI into every product, whether it makes sense or not. We’re seeing AI-powered toasters and AI-enhanced paper clips – things with a single boolean operation that burn through tokens to make a yes-or-no decision. Utter madness on that front: that’s like half a tree burned up, a windmill rotation, or a chunk or two of coal just to flip a light switch! The technology has become a solution in search of problems, and the industry is happy to oblige with increasingly absurd use cases.

But the real insanity isn’t the over-application of the technology – it’s the way we’re talking about it. AI is being positioned as the solution to every problem, the answer to every question, the future of everything. It’s not just a tool, it’s become a religion. And like any religion, it’s creating true believers who can’t see the limitations, the risks, or the unintended consequences. Maybe “cult” should be added to the “religion” moniker?

The conversations at Polyglot were refreshing because they cut through this hype. We talked about the real limitations of AI, the actual problems it creates (holy bananas there are a lot of them), and the genuine challenges of working with these systems in production. No one was trying to sell anyone on the latest AI miracle – we were trying to understand what’s actually happening and how to deal with it. Simply put, what’s our day to day action plan to mitigate these problems and what are we doing when the hubris and house of cards comes crumbling down? After all, much of the world’s economy is hinged on AI becoming all the things! Nuts!

The Hubris of the Industry

The hubris of the industry right now is staggering. We’re building systems that we don’t fully understand, deploying them at scale, and then acting surprised when they don’t work as expected. The confidence with which people make claims about AI capabilities is matched only by the lack of evidence supporting those claims.

I heard stories from developers who are being asked to implement AI solutions that don’t make technical sense, from managers who think AI can replace human judgment, and from executives who believe that throwing more AI at a problem will automatically make it better. The disconnect between what AI can actually do and what people think it can do is enormous.

The hubris extends beyond just the technology to the way we’re thinking about the future. There’s this assumption that AI will solve all our problems, that it will make us more productive, that it will create a better world. But we’re not asking the hard questions about what we’re actually building, who it serves, and what the long-term consequences might be.

The Real Challenges

The real challenges of working with AI aren’t technical – they’re human. How do you maintain code quality when your team is generating code they don’t fully understand? How do you make architectural decisions when the tools can generate solutions faster than you can evaluate them? How do you maintain professional standards when the industry is racing to the bottom in terms of quality and sustainability?

These are the questions that kept coming up in our discussions. Not “how do I use ChatGPT to write better code” but “how do I maintain my professional integrity in an environment where AI is being used to cut corners and avoid hard thinking?”

The conversations were honest and sometimes uncomfortable. People shared stories of being pressured to use AI in ways that didn’t make sense, of watching their colleagues become dependent on tools they didn’t understand, and of struggling to maintain quality standards in an environment that prioritizes speed over everything else.

The Path Forward

The most valuable part of these discussions wasn’t just identifying the problems – it was exploring potential solutions. How do we maintain our professional standards while embracing the benefits of AI? How do we educate our teams and our organizations about the real capabilities and limitations of these tools? How do we build systems that are both powerful and maintainable?

The consensus seemed to be that we need to be more thoughtful about how we integrate AI into our work. Not as a replacement for human judgment, but as a tool that augments our capabilities. Not as a way to avoid hard problems, but as a way to tackle them more effectively.

We also need to be more honest about the limitations and risks. The industry’s tendency to oversell AI capabilities is creating unrealistic expectations and dangerous dependencies. We need to have more conversations about what AI can’t do, what it shouldn’t do, and what the consequences might be when it’s used inappropriately.

The Value of Real Conversation

What struck me most about these discussions was how different they were from the typical AI conversations you hear at other conferences. There was no posturing, no trying to impress anyone with the latest buzzwords, no corporate speak about “digital transformation” or “AI-first strategies”.

Instead, we had real conversations about real problems with real people who are dealing with these issues every day. People shared their failures as well as their successes, their concerns as well as their optimism, their questions as well as their answers.

This is the value of the unconference format and the Polyglot community. It creates a space where people can be honest about what’s actually happening, where they can ask the hard questions, and where they can explore ideas without the pressure to conform to industry narratives or corporate agendas.

Looking Ahead

As I reflect on the conference, I’m struck by how much the industry has changed since the last time I was at Polyglot. AI has gone from being a niche topic to dominating every conversation. The questions we’re asking have shifted from “what is AI?” to “how do we live with AI?” and “how do we maintain our humanity in an AI-driven world?”

The conversations at Polyglot give me hope that we can navigate this transition thoughtfully. Not by rejecting AI or embracing it uncritically, but by engaging with it honestly and maintaining our professional standards and human values.

The industry needs more spaces like this – places where people can have real conversations about real problems without the hype, the marketing, or the corporate agenda getting in the way. Places where we can explore the hard questions and work together to find better answers.

The Takeaway

The biggest takeaway from Polyglot this year is that we’re at a critical juncture. The AI revolution isn’t coming – it’s here. And the choices we make now about how we integrate these tools into our work, our teams, and our industry will shape the future of software development for decades to come.

We can either let the hype and hubris drive us toward a future where software becomes disposable, quality becomes optional, and human judgment becomes obsolete. Or we can choose a different path – one where AI augments our capabilities without replacing our humanity, where we maintain our professional standards while embracing new tools, and where we build systems that are both powerful and sustainable.

The conversations at Polyglot suggest that there are people in the industry who are choosing the latter path. People who are thinking critically about AI, asking the hard questions, and working to build a future that serves human needs rather than corporate interests.

That gives me hope. And it makes me even more committed to being part of these conversations, to asking the hard questions, and to working with others who are trying to build a better future for our industry.

The Polyglot (Un)Conference and (un)conference-like events continue to be some of the most valuable events in the software development community. If you’re looking for real conversations about real problems with real people, I can’t recommend it highly enough.

The conference was such a good time with such great topics, introductions, and interactions that I’ve already bought a ticket for next year. If you’re interested in joining the conversation, check out polyglotsoftware.com and grab your tickets at Eventbrite.

The Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver

I just wrapped up attending an absolutely fascinating session at the Polyglot Conference here in Vancouver, BC. The talk was titled “Second Order Effects of AI Acceleration” and it sparked thought-provoking discussions – surface level and a little into the meat – that get the brain gears turning. A room full of developers, architects, product people, thinkers, and tech leaders debating predictions about where this AI acceleration is actually taking us.

The format was brilliant: predictions followed by arguments for and against each one. No hand-waving, no corporate speak, just real people with real experience hashing out what they think is coming down the pipe. I took notes on all 22 predictions and my immediate gut reactions to each one.

1. Far More Vibe Coded Outages

My Reaction: Simple. I concur with the prediction.

This one hits close to home. We’re already seeing the early signs of this phenomenon. “Vibe coding” – that delightful term for AI-assisted development where developers rely heavily on LLM suggestions without fully understanding the underlying logic – is becoming the norm in many shops. The problem isn’t the AI assistance itself, but the lack of deep understanding that comes with it.

When you’re building on top of code you don’t fully comprehend, you’re essentially creating a house of cards. One small change, one edge case, one unexpected input, and the whole thing comes crashing down. The outages won’t be dramatic server failures necessarily, but subtle bugs that cascade through systems built on shaky foundations.

The real issue here is that debugging vibe-coded systems requires a level of understanding that the original developers may not possess. You can’t effectively troubleshoot what you don’t understand, and that’s going to lead to longer resolution times and more frequent failures.

2. Companies Will More Often Develop Their Own Custom Tools

My Reaction: 100% agreed; I’ve already seen it happening, anecdotally, in places I’m working. However, I’ve also seen evidence of fairly extensive glue code being put together via “vibe” coding everywhere from Microsoft to Amazon and beyond. All for better or worse.

This is already happening at scale, and I’ve witnessed it firsthand. The traditional model of buying off-the-shelf solutions and adapting them is being replaced by rapid prototyping and custom development. Why? Because AI makes it faster and cheaper to build something tailored to your specific needs than to integrate and customize existing solutions.

But here’s the catch – we’re seeing a lot of “glue code” being generated. Not the elegant, well-architected solutions we’d hope for, but rather quick-and-dirty integrations that work for now but create technical debt for later. I’ve seen this pattern at Microsoft, Amazon, and other major tech companies where teams are rapidly prototyping solutions that work in the short term but lack the architectural rigor of traditional enterprise software.

The upside is innovation and speed. The downside is maintenance nightmares and the potential for significant refactoring down the road.

3. We’re in the VC Subsidized Phase of AI; Will Get More Expensive Like Uber + En-shittification

My Reaction: I concur with this point too, though it’s kind of odd that there’s an agree-or-disagree here, since it’s just the reality of the matter at the current time. Eventually the cost, even with reductions from efficiencies and such, will go up just from the magnitude of what is being done. Efficiencies will only get us so far. The cost will eventually have to go up, just as the time consumed by organizing and coordinating said tooling usage will go up, even as what can be done grows exponentially. This entire prediction topic could be, and needs to be, extensively expanded on.

This is the elephant in the room that everyone’s trying to ignore. Right now, we’re in the honeymoon phase where AI services are heavily subsidized by venture capital, similar to how Uber operated in its early days. The prices are artificially low to drive adoption and build market share.

But here’s the reality: the computational costs of running these models at scale are enormous. The energy consumption alone is staggering. As usage grows exponentially, the costs will have to follow. We’re already seeing early signs of this with API rate limits and pricing adjustments from major providers.

The “enshittification” aspect is particularly concerning. As these services become essential infrastructure, providers will have increasing leverage to extract more value. We’ll see feature degradation, increased lock-in, and pricing that reflects the true cost of the service rather than the subsidized rate.

This deserves its own deep dive post – the economics of AI infrastructure are going to fundamentally reshape how we think about software costs.

4. Junior Developers Will Become Senior Developers More Rapidly

My Reaction: Disagree and agree. The ramifications in the software engineering industry around this specific space are extensive. So much so that I’ll write an entirely new post just on this topic. It’s in the cooker, it’ll be ready soon!

This is a nuanced prediction that I have mixed feelings about. On one hand, AI tools are democratizing access to complex programming concepts. A junior developer can now generate sophisticated code patterns, implement complex algorithms, and work with technologies they might not have encountered before.

But here’s the critical distinction: there’s a difference between being able to generate code and being able to architect systems, debug complex issues, and make sound technical decisions under pressure. The latter requires experience, pattern recognition, and deep understanding that can’t be accelerated by AI alone.

I’m seeing a concerning trend where junior developers are being promoted based on their ability to produce working code quickly, but they lack the foundational knowledge to handle the inevitable problems that arise. This creates a dangerous gap in our industry.

The real question is: are we creating a generation of developers who can build but can’t maintain, debug, or evolve systems? This topic is so complex and important that it deserves its own dedicated post.

5. Existing Programming Languages Will Form a Hegemony

My Reaction: I mostly agree with this point. There may be some new languages that come up, and languages that more specifically allow agents to communicate across paths without the need for human-oriented languages and their respective error-prone inefficiencies.

The programming language landscape is consolidating around a few dominant players. Python, JavaScript, Java, and C# are becoming the de facto standards for most development work. This consolidation is driven by several factors: AI training data is heavily weighted toward these languages, tooling and ecosystem maturity, and the practical reality that most developers need to work with existing codebases.

However, I think we’ll see some interesting developments in agent-to-agent communication languages. As AI systems become more sophisticated, they may develop their own protocols and languages optimized for machine-to-machine communication rather than human readability. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The hegemony isn’t necessarily bad – it reduces fragmentation and makes it easier to find talent and resources. But it also risks stifling innovation and creating monocultures that are vulnerable to specific types of problems.

6. Value of Contrarian People Will Be Higher Than Yes Men

My Reaction: Agreed. Of course, those with a healthy dose of questions have always found a more useful path in society over time than the “yes men” type of cowards. Politics in the US of course being an exception right now.

This prediction resonates deeply with me. In an environment where AI can generate code, documentation, and even architectural decisions at the click of a button, the ability to question, challenge, and think critically becomes exponentially more valuable.

The “yes men” who simply implement whatever is suggested without critical analysis are becoming obsolete. AI can do that job better and faster. What AI can’t do is ask the hard questions: “Is this the right approach?” “What are the long-term implications?” “How does this fit with our broader strategy?”

Contrarian thinking becomes a competitive advantage because it’s the one thing that AI can’t replicate – genuine skepticism and independent thought. The people who can look at AI-generated solutions and say “wait, this doesn’t make sense” or “we’re missing something important here” will become increasingly valuable.

This is especially true in technical leadership roles where the ability to make nuanced decisions and see around corners becomes critical.

7. Shein-ification of Software (Software Will Become More Like Fast Fashion)

My Reaction: I agree that this will start to happen, as US-led capitalism tends toward a race to the bottom, sadly, and with all the negatives of fast fashion (there are a lot), this will happen with AI-led software development too. Everything from enshittification to the environmental negatives – this is going to happen, and many in the industry can only do their best to mitigate the harm.

This is perhaps the most concerning prediction on the list. The “Shein-ification” of software refers to the trend toward disposable, quickly-produced software that follows the fast fashion model: cheap, trendy, and designed to be replaced rather than maintained.

We’re already seeing signs of this. AI makes it incredibly easy to generate new applications, features, and even entire systems. The barrier to entry is lower than ever, which means more software is being produced with less thought given to long-term sustainability.

The environmental impact is particularly troubling. The computational resources required to train and run AI models are enormous, and if we’re producing more disposable software, we’re essentially burning through resources for short-term gains.

The challenge for the industry is to resist this trend and maintain focus on building software that’s designed to last, evolve, and provide long-term value rather than quick wins.

8. Relative Value of Fostering Talent to Name Things Well Will Be More Important

My Reaction: Under-reported prediction, and a real NEED among skillsets. Using the right words at the right times for the right things in the right way is going to grow exponentially as a skillset need.

This is a subtle but profound prediction that I think is being overlooked. In an AI-driven development environment, the ability to name things well becomes critical because it directly impacts how effectively AI can understand and work with your code.

Good naming conventions, clear abstractions, and well-defined interfaces become the difference between AI that can effectively assist and AI that generates confusing, unmaintainable code. The people who can create clear, semantic naming schemes and architectural patterns will become incredibly valuable.

This extends beyond just variable names and function names. It includes the ability to create clear APIs, well-defined data models, and intuitive system architectures that both humans and AI can understand and work with effectively.

The irony is that as AI becomes more capable, the human skills around communication, clarity, and semantic design become more important, not less.

9. Optimize LLM for Specific Use Cases

My Reaction: Not sure this is a prediction, it’s already happening.

This is already well underway. We’re seeing specialized models for coding (GitHub Copilot, Cursor), for specific domains (legal, medical, financial), and for particular tasks (code review, documentation generation, testing).

The trend toward specialization makes sense from both a performance and cost perspective. A general-purpose model trying to be good at everything will inevitably be mediocre at most things. Specialized models can be optimized for specific use cases, leading to better results and more efficient resource usage.

We’re also seeing the emergence of model composition, where different specialized models work together to handle complex tasks. This is likely to continue and accelerate as the technology matures.

10. Companies Will Die Faster Because We Can Replicate Functionality Faster

My Reaction: Agreed.

This is a natural consequence of lowered barriers to entry. If AI makes it easier and faster to build software, then it also makes it easier and faster to replicate existing functionality. This creates a more competitive landscape where companies need to move faster and innovate more aggressively to maintain their competitive advantage.

The traditional moats around software companies – technical complexity, development time, specialized knowledge – are being eroded by AI. What used to take months or years to build can now be prototyped in days or weeks.

This isn’t necessarily bad for consumers, who will benefit from more competition and faster innovation. But it’s challenging for companies that rely on technical barriers to entry as their primary competitive advantage.

The companies that survive will be those that can move fastest, adapt most quickly, and find new ways to create value beyond just technical implementation.

11. Existence of Non-Technical Managers Will Decrease

My Reaction: I’m doubtful of this. If anything, the use of AI will lower overall technical ability and cause significant troubleshooting issues, because those using the tooling to gloss over deep knowledge will lack the necessary depth.

I’m skeptical of this prediction. While AI might make it easier for non-technical people to generate code, it doesn’t necessarily make them better at managing technical teams or making technical decisions.

In fact, I think we might see the opposite trend. As AI tools become more accessible, we might see more people in management roles who can generate code but lack the deep technical understanding needed to make sound architectural decisions or troubleshoot complex issues.

The real challenge will be ensuring that technical managers have both the AI-assisted productivity tools and the foundational knowledge needed to make good decisions. Simply being able to generate code doesn’t make someone a good technical leader.

12. Vibe Code Will Cause a Return to Small Teams with Microservices

My Reaction: Agree and disagree, in that order.

I agree that vibe coding will drive architectural changes, but I’m not sure microservices is the inevitable result. The challenge with vibe-coded systems is that they’re often built without a clear understanding of the underlying architecture, which can lead to tightly coupled, monolithic systems that are hard to maintain.

However, the trend toward microservices might be driven more by the need to isolate failures and limit the blast radius of bugs in vibe-coded systems. If you can’t trust the code quality, you need to architect around that uncertainty.

The disagreement comes from the fact that microservices also require significant architectural discipline and understanding, which might be at odds with the vibe coding approach. We might see a different architectural pattern emerge that’s better suited to AI-assisted development.

13. Software Will Become a Living Conversation, Not a Static Thing

My Reaction: Agree. More dynamic, conversational development – driven by the speed increase – will occur in many projects for some products and some services.

This is already happening in many development environments. The traditional model of writing code, testing it, and deploying it is being replaced by a more iterative, conversational approach where developers work with AI to continuously refine and improve their systems.

The speed of iteration is increasing dramatically. What used to take days or weeks can now happen in hours or minutes. This allows for more experimentation, faster feedback loops, and more responsive development processes.

However, this also creates challenges around version control, testing, and deployment. If software is constantly evolving, how do you ensure stability and reliability? How do you manage the complexity of systems that are always changing?

14. Website Search Will No Longer Be Relevant in ~3 Years

My Reaction: Is it now?

This prediction seems to assume that website search is currently highly relevant, which I’m not sure is the case. Traditional web search has been declining in relevance for years as content has moved to social media, apps, and other platforms.

The rise of AI-powered search and information retrieval might accelerate this trend, but I think the real question is whether website search was ever as relevant as we thought it was. The future of information discovery is likely to be more conversational and contextual, driven by AI rather than traditional keyword-based search.

15. LLMs Will Be Software and Replace Stacks

My Reaction: I’m not sure this isn’t already the way it is. The context, case, and specificity aren’t really clear here.

This prediction is a bit vague, but I think it’s referring to the idea that LLMs might become the primary interface for interacting with software systems, potentially replacing traditional APIs and user interfaces.

We’re already seeing early signs of this with AI-powered interfaces that can understand natural language and translate it into system actions. The question is whether this will extend to the point where traditional software stacks become obsolete.

I’m skeptical that this will happen completely, but I do think we’ll see more AI-native interfaces and interactions that make traditional software feel more conversational and intuitive.

16. (Software) Libraries Will Become Less Relevant

My Reaction: Agreed.

As AI becomes more capable of generating code from scratch, the need for pre-built libraries and frameworks may decrease. Why use a library when you can have AI generate exactly what you need, tailored to your specific use case?

This trend is already visible in some areas where developers are using AI to generate custom implementations rather than pulling in external dependencies. The benefits include reduced dependency management, smaller bundle sizes, and more control over the implementation.

However, this also means losing the benefits of community-maintained, battle-tested code. The challenge will be finding the right balance between custom generation and proven libraries.

17. In the Future Ads Will Become Even More Precise; LLMs Will Have More Info for Targeting

My Reaction: Agreed. I hate this.

This is perhaps the most dystopian prediction on the list. As LLMs become more sophisticated and have access to more personal data, they’ll be able to create incredibly targeted and persuasive advertising that’s tailored to individual users’ psychology, preferences, and vulnerabilities.

The privacy implications are enormous. We’re already seeing early signs of this with AI-powered ad targeting that can analyze user behavior and create personalized content. As the technology improves, this will become even more sophisticated and invasive.

This is a trend that I find deeply concerning from both a privacy and societal perspective. The ability to manipulate individuals through highly targeted, AI-generated content represents a significant threat to autonomy and informed decision-making.

18. There Will Be a Standardization of Information Architecture Which Will Allow Faster Iteration of Tooling

My Reaction: Doubtful. If humanity and industry hasn’t done this already I see no reason we’ll do it now.

This prediction assumes that we’ll finally achieve the standardization that we’ve been trying to implement for decades. While AI might make it easier to work with standardized formats and protocols, I’m skeptical that it will drive the kind of widespread adoption needed for true standardization.

The history of technology is full of failed standardization attempts. Even when standards exist, they’re often ignored or implemented inconsistently. AI might make it easier to work with existing standards, but it won’t necessarily create the political and economic incentives needed for widespread adoption.

19. LLMs Will Cause a Dearth of New Innovation

My Reaction: I fear that this could happen, in some ways. But in other ways I think humanity can and will be forced – by the culmination of AI, the toxic immolation of democracies, and other horrors facing the world right now – to innovate and change, hopefully for the better, in ways we don’t even grasp yet, with these massive triggers affecting us.

This is a complex prediction that touches on fundamental questions about human creativity and innovation. On one hand, if AI can generate solutions to most problems, there might be less incentive for humans to engage in the kind of deep, creative thinking that leads to breakthrough innovations.

On the other hand, the challenges we’re facing as a society – climate change, political instability, economic inequality – are so profound that they might force innovation in ways we can’t currently imagine. The combination of AI capabilities and existential threats might actually accelerate innovation rather than stifle it.

The key question is whether AI will augment human creativity or replace it. I’m optimistic that it will be the former, but it’s not guaranteed.

20. AIs Will Invent Own Programming Language

My Reaction: Agreed. I believe to some degree they already have.

This is already happening in subtle ways. AI systems are developing their own internal representations and communication protocols that are optimized for machine-to-machine interaction rather than human readability.

As AI systems become more sophisticated and need to work together, they’ll likely develop more formal languages and protocols for communication. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The interesting question is whether these AI-invented languages will be more efficient or expressive than human-designed languages, and whether humans will eventually adopt them for certain types of programming tasks.

21. Some Countries Will Make AI Access a Universal Right

My Reaction: Agreed. While others will block it, ban it, control it, and shape it to further attack and rewrite narratives the world accepts as positives.

This prediction reflects the growing recognition that AI access is becoming a fundamental requirement for participation in modern society. Just as internet access has become essential for education, employment, and civic participation, AI access is following the same trajectory.

Some countries will embrace this and provide universal access to AI tools and services, recognizing it as a public good. Others will restrict access, either for political reasons or to maintain control over information and communication.

The geopolitical implications are significant. Countries that provide universal AI access will have a competitive advantage in education, innovation, and economic development. Those that restrict it will fall behind.

22. Languages Used for Strengths More Than as a Panacea

My Reaction: “Languages used for strengths more than as a panacea.” I added this one. It’s more a hope than a prediction. For example, I hope Go is used for its strengths, Rust for its strengths, Java, C#, etc. Instead of trying to make C# or Rust the panacea across all platforms and all needs. One can hope!

This is my addition to the list, and it’s more of a hope than a prediction. In an AI-driven development environment, there’s a risk that we’ll default to whatever language the AI is most comfortable with, rather than choosing the right tool for the job.

I hope that AI will actually help empower us to make better language choices by understanding the strengths and weaknesses of different languages and recommending the most appropriate one for each use case. Go for its concurrency and simplicity, Rust for its safety and performance, Java for its enterprise ecosystem, C# for its Microsoft integration, and so on.

The goal should be to use each language for what it does best, rather than trying to make one language solve every problem. AI could actually help us achieve this by providing better guidance on language selection and architecture decisions.

The Big Picture

These 22 predictions, conjured up by all of us participants, paint a picture of an industry in rapid transformation. Some trends are already visible, others are still emerging. The common thread is that AI is fundamentally changing how we think about software development, from the tools we use to the way we organize teams and make decisions – all for better or worse.

The challenge for the industry is to navigate these changes thoughtfully, preserving what’s valuable about traditional software development while embracing the opportunities that AI presents. The predictions that concern me most are those that suggest a race to the bottom in terms of quality, sustainability, and long-term thinking.

The predictions that excite me most are those that suggest AI will augment human capabilities rather than replace them, enabling us to be more creative and experimental in our solutions while preserving the critical thinking and problem-solving skills that make good developers valuable.

As we move forward, the key will be maintaining our focus on building software that’s not just functional, but sustainable, maintainable, and valuable in the long term. AI can help us build faster, but it can’t replace the judgment and wisdom needed to build well.


What are your thoughts on these predictions? Which ones resonate with your experience, and which ones seem off-base? I’d love to hear your perspective on where you think AI is taking our industry.

I’m on Mastodon https://metalhead.club/@adron, Threads https://www.threads.com/@adron, and Bluesky https://bsky.app/profile/adron.bsky.social – hit me up with your thoughts!

VS Code & Copilot: The Chat-First Spec Definition Method

My initial review of Copilot and getting started is available here.

Technique 1: Chat-First Spec (what it does well – and why it’s not magic, but almost!)

Let me be clear: the Copilot Chat feature in VS Code can feel like a miracle until it’s not. When it’s working, you fire off a multi-line prompt defining what you want: “Build a function for X, validate Y, return Z…” and boom, VS Code’s inline chat generates a draft that is scary good.

  • What actually wins: It interprets your specification in context – your open file, project imports, naming conventions – and spits out runnable sample code. That’s not trivial; reputable models often lose context threading. Here, the chat lives in your editor, not detached, and that nuance matters.
    It’s like sketching the spec in natural language, then having VS Code autocomplete not just code but entire behavior.
  • What you still have to do: Take a breath, a le sigh, and read it. Always. Control flow, edge cases, off-by-one errors – Copilot doesn’t care. Security? Data leakage? All on you. Copilot doesn’t own the logic; it just stitches together patterns it’s seen. You own the correctness.
  • Trick that matters: Iterate. Ask follow-ups: “Okay, now handle invalid inputs by throwing InvalidArgumentException,” or “Refactor this to async/await.” Having a chat continuum in the editor is powerful, but don’t forget it’s your spec, not the AI’s. A concrete sketch of the flow follows below.
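
To ground Technique 1, here’s a hypothetical chat-first exchange. The prompt wording and the slugify function are made up for illustration; the point is the shape of spec-in, draft-out:

// Prompt pasted into Copilot's inline chat (natural language, not code):
//   "Write a slugify(title) function: lowercase, trim, collapse runs of
//    non-alphanumerics into single hyphens, strip leading/trailing hyphens."
//
// The kind of draft it typically hands back:
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

console.log(slugify("  Hello, World! ")); // "hello-world"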

Technique 2: Prompt With Skeleton First

Skip blindly describing behavior. Instead, scaffold it:

// Function: validateUserInput
// Takes { name: string, age: number }
// Returns { valid: boolean, errors: string[] }
// Edge cases: missing name, non-numeric age

function validateUserInput(input) {
  // ...
}

Then let Copilot fill in the body. Why this rocks:

  • You’re giving structure; types, return shapes, edge conditions.
  • The code auto-generated fits into your skeleton, adhering to your naming, your data model.
  • You retain control over boundaries, types, and structure even before Copilot chimes in.

Downside? If your skeleton is misleading or incomplete, Copilot will “fill in” confidently, with code that compiles but does the wrong thing. Again, your code review has to rule.
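
For reference, here’s one plausible body Copilot might produce for that skeleton – illustrative only, and exactly the kind of output you still have to read line by line:

function validateUserInput(input) {
  const errors = [];
  if (!input) {
    return { valid: false, errors: ["input is required"] };
  }
  if (typeof input.name !== "string" || input.name.trim() === "") {
    errors.push("missing or empty name");
  }
  if (typeof input.age !== "number" || Number.isNaN(input.age)) {
    errors.push("age must be a number");
  }
  return { valid: errors.length === 0, errors };
}

Note it handles exactly the edge cases the skeleton named and nothing more – if you forgot one, so did Copilot.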

Technique 3: In-Context Refactoring Conversations (AKA “Let me fix your mess, Copilot”)

Ever accepted a Copilot suggestion, then hated it? Instead of discarding, turn on Copilot Chat:

  • Ask it: “Refactor this to reduce nesting and improve readability,” or “Convert this to use .reduce() instead of .forEach().”
  • Watch it rewrite within the same context, not tangential code thrown at you.

That’s one of its massive values – context-aware surgical refactoring – not blanket “clean this up” that ends in a different variable naming scheme or method order from your repo.

The catch: refactor prompts depend on Copilot’s parsing of your style. If your code is sloppy, it’s going to be sloppily refactored. So yes, you still have to keep code clean, comment clearly, and limit complexity. Copilot is the editor version of duct tape, not a refactor wizard.
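
As a concrete example of that kind of ask, with hypothetical data and an illustrative result:

// Before: accumulating with .forEach() and a mutable variable
const orders = [{ amount: 40 }, { amount: 25 }, { amount: 35 }];
let total = 0;
orders.forEach((order) => {
  total += order.amount;
});

// After asking "Convert this to use .reduce() instead of .forEach()":
const totalReduced = orders.reduce((sum, order) => sum + order.amount, 0);

console.log(total === totalReduced); // true, both are 100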

The Brutal Truth

  • VS Code + Copilot isn’t a magical co-developer. It’s a smart auto-completer with chat, living in your IDE, context-aware but utterly obedient to your prompts.
  • The trick is not the AI, it’s how you lead it. The better your spec, skeleton, or prompt, the better your code.
  • Your style – skeptical, questioning, pragmatic – fits perfectly. You don’t let it ride; you interrogate. And that’s exactly how it should be.

TL;DR Summary

Technique | What Works | What Fails Without You
Chat-first spec | Detailed natural-language spec → meaningful code | No spec clarity → garbage logic
Skeleton prompts | Provides structure, types, expectations | Bad skeleton = bad code, fast
In-editor refactoring chat | Context-preserving improvements | Messy code → messy refactor

If you want more details on how to integrate Copilot into CI, or you want my personal prompt templates, drop the demand below and I’ll tackle it head-on next time.

GitHub Copilot: A Getting Started Guide to GitHub Copilot

Note: I’ve decided to start writing up the multitude of AI tools/tooling and this is the first of many posts on this topic. This post is effectively a baseline of what one should be familiar with to get rolling with GitHub Copilot. As I add posts, I’ll add them at the bottom of this post to reference the different tools, as well as back-reference them to this post, etc., so that they’re all easily findable. With that, let’s roll…

Intro

GitHub Copilot is thoroughly changing how developers write code, serving as a kind of industry standard – almost – for AI-powered code completion and generation. As someone who’s been in software development for over two decades, I’ve seen many tools come and go, but the modern variant of Copilot represents a fundamental shift in how we approach coding – it’s not just a tool, it’s a new paradigm for human-AI collaboration in software development.

In this comprehensive guide, I’ll walk you through everything you need to know to get started with GitHub Copilot, from basic setup to advanced features that will transform your development workflow.

What is GitHub Copilot?

GitHub Copilot is an AI-powered code completion tool that acts as your virtual pair programmer. Originally built on OpenAI’s Codex model (current versions run on newer, more capable models) and trained on billions of lines of public code, it’s incredibly adept at understanding context, suggesting completions, and even generating entire functions based on your comments and existing code.

Key Capabilities

  • Real-time code suggestions as you type
  • Comment-to-code generation from natural language descriptions
  • Multi-language support across 50+ programming languages
  • Context-aware completions that understand your project structure
  • IDE integration with VS Code, Visual Studio, Neovim, and JetBrains IDEs

Getting Started: Setup and Installation

Prerequisites

  • A GitHub account
  • An active GitHub Copilot plan (free tier or paid)
  • A supported editor: VS Code, Visual Studio, a JetBrains IDE, or Neovim
  • An internet connection (Copilot requires online access)

Installation Steps

1. Subscribe to GitHub Copilot

  • Sign up and pick a plan at https://github.com/features/copilot

2. Install the Extension

  • VS Code: Search for “GitHub Copilot” in the Extensions marketplace
  • Visual Studio: Install from Visual Studio Marketplace
  • JetBrains IDEs: Install from JetBrains Marketplace
  • Neovim: Use copilot.vim or copilot.lua

3. Authenticate

  • Sign in to your GitHub account when prompted
  • Authorize the extension to access your account
  • Verify your Copilot subscription is active

(Screenshot: the Visual Studio Code welcome interface, showing options for opening chat features, managing code completions, and accessing recent projects.)

Core Features and How to Use Them

1. Inline Suggestions

Copilot provides real-time code suggestions as you type. These appear as gray “ghost text” that you can accept by pressing Tab.

# Type this comment and Copilot will suggest the function
def calculate_compound_interest(principal, rate, time, compounds_per_year):
    # Copilot will suggest the complete implementation
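
For comparison, one plausible completion, sketched with the standard compound interest formula – verify whether you want the interest earned or the final balance before accepting:

def calculate_compound_interest(principal, rate, time, compounds_per_year):
    # A = P * (1 + r/n) ** (n * t); rate is a decimal, e.g. 0.05 for 5%
    amount = principal * (1 + rate / compounds_per_year) ** (compounds_per_year * time)
    return amount - principal  # interest earned, not the final balance

print(calculate_compound_interest(1000, 0.05, 10, 12))  # ~647.01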

2. Comment-to-Code Generation

One of Copilot’s most powerful features is generating code from natural language comments.

// Create a function that validates email addresses using regex
// Copilot will generate the complete function with proper validation
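
From a comment like that, a plausible suggestion looks something like this – a sketch, and the regex is a pragmatic pattern, not full RFC 5322 validation:

// Create a function that validates email addresses using regex
function isValidEmail(email) {
  // Pragmatic check: non-space local part, an "@", and a dotted domain
  const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  return typeof email === "string" && emailRegex.test(email);
}

console.log(isValidEmail("dev@example.com")); // true
console.log(isValidEmail("not-an-email"));    // false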

3. Function Completion

Start typing a function and let Copilot complete it based on context:

def process_user_data(user_input):
    # Start typing and Copilot will suggest the next lines
    if not user_input:
        return None
    
    # Continue with the implementation
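
A plausible continuation here, purely illustrative since “process” could mean anything – which is exactly why you spell it out in the comment:

def process_user_data(user_input):
    if not user_input:
        return None
    # The kind of continuation Copilot offers from the context above:
    cleaned = user_input.strip().lower()
    return {"raw": user_input, "cleaned": cleaned, "length": len(cleaned)}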

4. Test Generation

Copilot can generate test cases for your functions:

def add_numbers(a, b):
    return a + b

# Type "test" or "def test_" and Copilot will suggest test functions

Advanced Features and Techniques

1. Multi-line Completions

Press Tab to accept the current suggestion in full. In VS Code you can also accept a suggestion one word at a time with Ctrl + Right (the inline suggestion “accept next word” action).

2. Alternative Suggestions

When Copilot suggests code, press Alt + [ or Alt + ] to cycle through alternative suggestions.

3. Inline Chat (Copilot Chat)

The newer Copilot Chat feature allows you to have conversations about your code:

  • Press Ctrl + I (or Cmd + I on Mac) to open inline chat
  • Ask questions about your code
  • Request refactoring suggestions
  • Get explanations of complex code sections

4. Custom Prompts

Learn to write effective prompts for better code generation:

Good prompts:

# Create a REST API endpoint that accepts POST requests with JSON data,
# validates the input, and returns a success response with status code 201

Less effective prompts:

# Make an API endpoint
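
As a sketch of what the “good” prompt above tends to yield – assuming Flask here; the framework, route, and field names are all illustrative:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/items", methods=["POST"])
def create_item():
    data = request.get_json(silent=True)
    # Validate the input before doing anything with it
    if not data or "name" not in data:
        return jsonify({"error": "JSON body with a 'name' field is required"}), 400
    # ... persistence would go here ...
    return jsonify({"success": True, "item": data}), 201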

Best Practices for Effective Copilot Usage

1. Write Clear Comments

The quality of Copilot’s suggestions directly correlates with the clarity of your comments and context.

# Good: Clear, specific description
def parse_csv_file(file_path, delimiter=',', skip_header=True):
    """
    Parse a CSV file and return a list of dictionaries.
    
    Args:
        file_path (str): Path to the CSV file
        delimiter (str): Character used to separate fields
        skip_header (bool): Whether to skip the first row as header
    
    Returns:
        list: List of dictionaries where keys are column names
    """

2. Provide Context

Help Copilot understand your project structure and coding style:

# This function follows the project's error handling pattern
# and uses the standard logging configuration
def process_payment(payment_data):

3. Review Generated Code

Always review and test code generated by Copilot:

  • Check for security vulnerabilities
  • Ensure it follows your project’s coding standards
  • Verify the logic matches your requirements
  • Run tests to confirm functionality

4. Iterative Refinement

Use Copilot as a starting point, then refine the code:

  • Accept the initial suggestion
  • Modify it to match your specific needs
  • Ask Copilot to improve specific aspects
  • Iterate until you have the desired result

Language-Specific Tips

Python

  • Copilot excels at Python due to its extensive training data
  • Great for data science, web development, and automation scripts
  • Excellent at generating docstrings and type hints

JavaScript/TypeScript

  • Strong support for modern ES6+ features
  • Good at React, Node.js, and frontend development patterns
  • Effective at generating test files and API clients

Java

  • Good support for Spring Boot and enterprise patterns
  • Effective at generating boilerplate code and tests
  • Strong understanding of Java conventions

Go

  • Growing support with good understanding of Go idioms
  • Effective at generating HTTP handlers and data structures
  • Good at following Go best practices

Troubleshooting Common Issues

1. Suggestions Not Appearing

  • Verify your Copilot subscription is active
  • Check that you’re signed into the correct GitHub account
  • Restart your IDE after authentication
  • Ensure the extension is properly installed and enabled

2. Poor Quality Suggestions

  • Improve your comments and context
  • Check that your file has the correct file extension so the language mode is detected properly
  • Provide more context about your project structure
  • Use more specific prompts

3. Performance Issues

  • Disable other AI coding extensions that might conflict
  • Check your internet connection (Copilot requires online access)
  • Restart your IDE if suggestions become slow
  • Update to the latest version of the extension

4. Security Concerns

  • Never paste sensitive data or credentials into Copilot
  • Review generated code for security vulnerabilities
  • Use Copilot in private repositories when possible
  • Be cautious with code that handles user input or authentication

Integration with Development Workflows

1. Pair Programming

Copilot can act as a third member of your pair programming session:

  • Generate alternative implementations for discussion
  • Create test cases to explore edge cases
  • Suggest refactoring opportunities
  • Help with debugging by generating test scenarios

2. Code Review

Use Copilot to enhance your code review process:

  • Generate additional test cases
  • Suggest alternative implementations
  • Identify potential improvements
  • Create documentation for complex functions

3. Learning and Exploration

Copilot is excellent for learning new technologies:

  • Generate examples of new language features
  • Create sample projects to explore frameworks
  • Build reference implementations
  • Practice with different coding patterns

Enterprise and Team Features

1. GitHub Copilot Business

  • Cost: $19/user/month
  • Features: Advanced security, compliance, and team management
  • Use Cases: Enterprise development teams, compliance requirements

2. GitHub Copilot Enterprise

  • Cost: $39/user/month
  • Features: Advanced security, custom models, dedicated support
  • Use Cases: Large enterprises, government, highly regulated industries

3. Team Management

  • Centralized billing and user management
  • Usage analytics and reporting
  • Security and compliance features
  • Integration with enterprise identity providers

Resources and Further Learning

Official Resources

  • GitHub Copilot documentation: https://docs.github.com/copilot
  • GitHub Copilot product page: https://github.com/features/copilot

Advanced Techniques and Pro Tips

1. Custom Snippets and Templates

Create custom snippets that work well with Copilot:

// VS Code snippets.json
{
  "API Endpoint": {
    "prefix": "api-endpoint",
    "body": [
      "app.post('/${1:endpoint}', async (req, res) => {",
      "  try {",
      "    const { ${2:params} } = req.body;",
      "    ${3:// Copilot will suggest validation and processing logic}",
      "    res.status(201).json({ success: true, data: result });",
      "  } catch (error) {",
      "    res.status(500).json({ success: false, error: error.message });",
      "  }",
      "});"
    ]
  }
}
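The numbered placeholders (${1:endpoint} and friends) do double duty here: they drive the snippet's tab stops, and their descriptive defaults give Copilot concrete anchors to complete around.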

2. Context-Aware Prompts

Learn to write prompts that leverage your project’s context:

# This function should follow the same pattern as the other API functions
# in this file, using the shared error handling and response formatting
def get_user_profile(user_id):

3. Testing Strategies

Use Copilot to generate comprehensive test suites:

# Generate tests that cover edge cases, error conditions, and normal operation
# Use the same testing patterns as the existing test files in this project
def test_user_authentication():
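From a prompt like that, Copilot might sketch a suite along these lines (authenticate() is a hypothetical stand-in for the function under test):

import pytest

# authenticate() is assumed to exist in the module under test
def test_authentication_success():
    assert authenticate("alice", "correct-password") is True

def test_authentication_wrong_password():
    assert authenticate("alice", "wrong-password") is False

def test_authentication_rejects_empty_username():
    with pytest.raises(ValueError):
        authenticate("", "any-password")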

4. Documentation Generation

Let Copilot help with documentation:

# Generate comprehensive docstring following Google style
# Include examples, parameter descriptions, and return value details
def process_payment(payment_data, user_id, options=None):
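The generated docstring usually lands close to this (a hypothetical sketch; the field names are assumptions):

def process_payment(payment_data, user_id, options=None):
    """Process a payment on behalf of a user.

    Args:
        payment_data (dict): Payment details such as amount and currency.
        user_id (str): Identifier of the paying user.
        options (dict, optional): Extra processing flags. Defaults to None.

    Returns:
        dict: Transaction record including status and a transaction id.

    Example:
        >>> process_payment({"amount": 10.0, "currency": "USD"}, "user-123")
    """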

Security and Privacy Considerations

1. Data Privacy

  • Copilot processes your code to provide suggestions
  • Avoid pasting sensitive information, credentials, or proprietary code
  • Use private repositories when working with confidential code
  • Review GitHub’s privacy policy and data handling practices

2. Code Security

  • Generated code may contain security vulnerabilities
  • Always review and test generated code
  • Use security scanning tools to identify potential issues
  • Follow security best practices for your specific domain

3. Compliance Requirements

  • Consider the compliance requirements for your industry
  • Evaluate whether Copilot meets your security standards
    • Is the organization comfortable with code going out to the service and suggestions coming back?
    • Do additional SLAs or other contractual requirements need to be put in place?
  • Consult with your security team before adoption
    • Because Copilot is a hosted service, that round trip of code and suggestions can pose a significant number of risks for some organizations.
  • Document usage policies and guidelines

Performance Optimization

1. IDE Configuration

Optimize your IDE for better Copilot performance:

// VS Code settings.json
{
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
    "markdown": false,
    "scminput": false
  },
  "github.copilot.suggestions": {
    "enable": true,
    "showInlineSuggestions": true
  }
}
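Disabling Copilot for plaintext, markdown, and the source-control input box keeps suggestions out of places where they're mostly noise, and editor.inlineSuggest.enabled ensures inline completions render at all (it defaults to true, but it's worth confirming).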

2. Network Optimization

  • Ensure stable internet connection
  • Use VPN if required by your organization
  • Consider enterprise deployment for better performance
  • Monitor network usage and optimize as needed

3. Resource Management

  • Disable other AI coding extensions
  • Monitor memory and CPU usage
  • Restart your IDE periodically if performance degrades (a suggestion I've seen in multiple places and one that bothers me immensely; it treats the symptom, not the cause)
  • Update extensions and IDE regularly

Conclusion

GitHub Copilot represents a fundamental shift in software development, moving us from manual coding to AI-assisted development. While it’s not a replacement for understanding programming fundamentals, it’s a powerful tool that can significantly enhance your productivity and code quality.

The key to success with Copilot is learning to work with it effectively: writing clear prompts, providing good context, and always reviewing generated code. Start with the basics, practice regularly, and gradually incorporate more advanced features into your workflow.

As we move forward in this AI-augmented development era, developers who can effectively collaborate with AI tools like Copilot will have a significant advantage. The future of programming isn't about replacing developers (although, admittedly, a whole lot of that might be happening right now); it's about augmenting their capabilities and letting them focus on higher-level problem solving and innovation.

Next Steps

  1. Set up your GitHub Copilot subscription and install the extension
  2. Practice with simple projects to get comfortable with the workflow
  3. Experiment with different prompting techniques to improve suggestion quality
  4. Integrate Copilot into your daily development routine
  5. Share your experiences and learn from the community

Remember, mastery of AI programming tools like GitHub Copilot is a journey, not a destination. Start today, practice consistently, and you’ll be amazed at how quickly it transforms your development experience.

Next up, more on getting started with the various tools and the baseline knowledge you should have around each.

Follow me on LinkedIn, Mastodon, or Bluesky for more insights on AI programming and software development.