12 Days into 2026 – Status & update of my projects.

As mentioned in my end of 2025 post, there were several projects I intended to start, continue, or finish up in 2026. This is a quick status and an additional few things I’ve started working on.

https://interlinedlist.com – This is live. It isn’t very built out yet, but the start is live, and you can even sign up for an account and play around with it. A caveat though: this is pre-alpha, not very feature rich at all, and I don’t have the programmer-oriented tasks integrated yet. It’s just the micro-blogging. But hey, some progress is better than no progress!

https://www.datadiluvium.com – This is still live and I’ve got some changes to push, but they’re breaking changes, so as soon as I dig up a bit of free coder time those will get resolved and I’ll get the latest tweaks pushed. I also need to write some basic documentation and post something here on the blog about what you can do, or would want to do, with it.

dashingarrivals – First commit isn’t done yet. Working on it; it’ll be coming soon per the previous post.

collectorstunetracker – No current update. Although I do need to do something about my collection, cuz it has only grown and not shrunk! 🤘🏻

Writing – This is one of many posts. I wrote this one too, not some AI nonsense.

New News About New Projects

I discussed a while back on one of the social medias (I’m on Threads, Mastodon, and Blue Sky – join me there) the idea of doing some videos on algorithms and data structures. The intent is to put together some videos similar to the “Coding with AI: A Comparative Analysis“ ones. It could be good, and I’d tack some additional lagniappe onto the algorithms and data structures via the concurrency patterns I previously wrote about. If interested, subscribe to the blog here or subscribe to my YouTube at https://www.youtube.com/adronhall.

Until then, keep thrashing the code! 🤘🏻

A Reflection on SOLID: Decades of Code, Principles, and a Changing Future

After decades of building, breaking, refactoring, and rebuilding systems, from scrappy startups to enterprise labyrinths, I’ve seen a lot of patterns come and go.
But few have stuck around as stubbornly as the SOLID principles.

They’re like the veteran developers of software philosophy: reliable, experienced, and still showing up to standups long after everyone else has moved on to the latest framework or architecture fad. For years, teams I’ve been on, and many I’ve led, have leaned heavily on SOLID as the bedrock of maintainable software design. For the most part, it’s served us well.

Still, as the craft of development shifts into an AI-augmented era, I can’t help but wonder: is SOLID still as solid as it once was?

The Foundation We Built On

Let’s run through the familiar set – but not as definitions you could pull from Wikipedia. Let’s talk about what they actually meant in practice.

Single Responsibility Principle
This was the sanity rule. Keep your classes (or other similar code elements) from doing too much. If it has more than one reason to change, it’ll eventually collapse under its own weight. I learned this one early, often the hard way, cleaning up classes that had taken on the personality of their developer, a little of everything, in one chaotic pile.
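
To make that concrete, here’s a minimal Python sketch – names invented, purely illustrative – of the before/after that SRP pushes you toward: one class with several reasons to change gets split so each piece has exactly one.

# Before: one class with three reasons to change (parsing, validation, storage).
class UserImporter:
    def import_user(self, raw: str) -> None:
        name, email = raw.split(",")          # parsing
        if "@" not in email:                  # validation
            raise ValueError("invalid email")
        with open("users.csv", "a") as f:     # persistence
            f.write(f"{name.strip()},{email.strip()}\n")

# After: each class has a single responsibility and can change independently.
class UserParser:
    def parse(self, raw: str) -> dict:
        name, email = raw.split(",")
        return {"name": name.strip(), "email": email.strip()}

class UserValidator:
    def validate(self, user: dict) -> None:
        if "@" not in user["email"]:
            raise ValueError("invalid email")

class UserRepository:
    def save(self, user: dict) -> None:
        with open("users.csv", "a") as f:
            f.write(f"{user['name']},{user['email']}\n")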

Open/Closed Principle
The guiding star of extensibility: extend without modifying. In theory, that meant safety from regression, and done well it genuinely helped with regression risk and ongoing feature development. In reality, it also meant endless debates about whether you were “violating OCP” every time you opened a file to fix something.
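
A minimal sketch of what that looks like when it works, with invented names: new behavior arrives as a new class plugged into an existing extension point, not as an edit to working dispatch code.

from abc import ABC, abstractmethod

class DiscountRule(ABC):
    @abstractmethod
    def apply(self, total: float) -> float: ...

class HolidayDiscount(DiscountRule):
    def apply(self, total: float) -> float:
        return total * 0.90  # 10% off

class LoyaltyDiscount(DiscountRule):
    def apply(self, total: float) -> float:
        return max(total - 5.0, 0.0)  # flat $5 off

def checkout(total: float, rules: list[DiscountRule]) -> float:
    # Adding a new discount means adding a new DiscountRule subclass;
    # this function stays closed for modification.
    for rule in rules:
        total = rule.apply(total)
    return total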

Liskov Substitution Principle
The quiet workhorse. Inheritance should make sense. If your subclass breaks expectations, you’re not using polymorphism, you’re just lying to your codebase.
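
The canonical illustration (not one of my own war stories, just the textbook case) is the Square-that-extends-Rectangle trap: the subclass type-checks, but it quietly breaks the expectations callers have of the base class.

class Rectangle:
    def __init__(self, width: float, height: float):
        self.width, self.height = width, height

    def set_width(self, w: float) -> None:
        self.width = w

    def area(self) -> float:
        return self.width * self.height

class Square(Rectangle):
    # LSP violation: setting the width silently changes the height too,
    # so Square can't stand in for Rectangle without surprising callers.
    def set_width(self, w: float) -> None:
        self.width = self.height = w

def stretch(shape: Rectangle) -> float:
    shape.set_width(10)      # callers assume height is untouched
    return shape.area()      # a Square passed in here "lies" about that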

Interface Segregation Principle
This was the rebellion against “god interfaces.” I’ve lost count of the times I’ve seen an IThingManager with thirty methods, half of which every implementation ignores. ISP was the call to split those monsters into sane, digestible contracts.
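
Here’s a hedged Python sketch of that split, using typing.Protocol to stand in for interfaces (the method names are invented):

from typing import Protocol

# The "god interface": every implementation is forced to care about everything.
class ThingManager(Protocol):
    def create(self, thing: dict) -> None: ...
    def delete(self, thing_id: str) -> None: ...
    def export_csv(self) -> str: ...
    def send_notifications(self) -> None: ...
    # ...plus the other twenty-odd methods most implementations ignore

# ISP: small, focused contracts that clients can depend on individually.
class ThingWriter(Protocol):
    def create(self, thing: dict) -> None: ...
    def delete(self, thing_id: str) -> None: ...

class ThingExporter(Protocol):
    def export_csv(self) -> str: ...

class ThingNotifier(Protocol):
    def send_notifications(self) -> None: ...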

Dependency Inversion Principle
The rise of abstractions over implementations. This principle was both a blessing and a curse, the birthplace of dependency injection frameworks and inversion-of-control containers that we alternately loved and cursed. It gave structure but also a new layer of complexity.
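
Stripped of the frameworks, the core idea fits in a few lines – a minimal sketch with invented names, where the high-level policy depends on an abstraction and the concrete implementation is injected from outside:

from typing import Protocol

class PaymentGateway(Protocol):
    def charge(self, amount_cents: int) -> bool: ...

class StripeGateway:
    def charge(self, amount_cents: int) -> bool:
        print(f"charging {amount_cents} cents via the real gateway")
        return True

class FakeGateway:
    def charge(self, amount_cents: int) -> bool:
        return True  # handy in tests, no network required

class BillingService:
    # Depends on the PaymentGateway abstraction, not a concrete class.
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def bill(self, amount_cents: int) -> bool:
        return self.gateway.charge(amount_cents)

BillingService(StripeGateway()).bill(1999)   # production wiring
BillingService(FakeGateway()).bill(1999)     # test wiring, no container needed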

When SOLID Worked

Over the years, I’ve seen SOLID absolutely save teams, including mine, from coding disasters. For example, when a project scaled from three developers to thirty, SOLID acted as the stabilizing force. It created a shared language around what “good code” meant.

It kept monoliths from turning to spaghetti. It made refactoring survivable. It let teams ship features faster without worrying that a small change in the billing module would trigger a cascade failure in authentication.

In essence, SOLID made code cooperative. It taught us to think in modules, contracts, and boundaries. Those were lessons worth keeping.

When SOLID Became a Problem

But for every clean, modular, testable success story, there’s an over-engineered mess hiding behind the same banner.
I’ve walked into codebases where every noun in the business domain had an interface, an abstract class, and two decorators, none of which did anything meaningful. All in the name of being “SOLID.”

Here’s where it tends to go wrong:

  • Too Many Abstractions. “Open for extension” became “never touch anything again.” Layers of indirection were added to prevent change, not to enable it.
  • Framework Fetishism. Dependency injection containers got treated like religion. If you weren’t injecting it, mocking it, or wrapping it in a factory, were you even a developer?
  • Premature Architecture. Entire hierarchies were built for features that never came. Extensibility for the sake of hypotheticals.
  • Cognitive Overhead. Code so “modular” that new developers spent a week just tracing through interfaces before they found the line that actually did something.

SOLID was meant to reduce complexity, but taken too far, it became its own source of it.

The Shift: SOLID in the Age of AI-Generated Code

Now we’re standing at a strange crossroads.
AI-assisted development has changed the cost equation of software. Code is no longer expensive to write, it’s expensive to understand.

That changes everything.

When an AI can rewrite, refactor, or regenerate entire systems on command, the reasons we relied on SOLID begin to shift. SOLID was about human maintainability, protecting code from the chaos of change over time. But if AI can refactor in seconds what once took hours, do we still need all that protective architecture?

Maybe not. Or maybe, it needs reinterpretation.

AI doesn’t care if your class violates the Single Responsibility Principle; it can just regenerate it into smaller, purpose-built components when needed. But for humans reading, debugging, or reasoning about the system, SOLID still matters. It’s how we think about structure, even if we don’t handcraft it anymore.

The next evolution might be AI-native SOLID, principles guiding how AI systems generate and organize code for clarity, composability, and self-repair.

What Remains Solid?

In the end, SOLID wasn’t just about code. It was about discipline. About having a mental model for systems that grow beyond a single developer’s head.

But as AI tooling takes over more of the mechanical parts of design, the real question becomes:

Are we still designing for humans to understand the system or for systems to understand themselves?

That’s the next frontier.
And maybe, just maybe, it’s time to redefine what “SOLID” software design means in this new era.

The Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver

I just wrapped up attending an absolutely fascinating session at the Polyglot Conference here in Vancouver, BC. The talk was titled “Second Order Effects of AI Acceleration” and it was the kind of thought-provoking discussion – surface level, with a little digging into the meat – that gets the brain gears turning. A room full of developers, architects, product people, thinkers, and tech leaders debating predictions about where this AI acceleration is actually taking us.

The format was brilliant: predictions followed by arguments for and against each one. No hand-waving, no corporate speak, just real people with real experience hashing out what they think is coming down the pipe. I took notes on all 22 predictions and my immediate gut reactions to each one.

1. Far More Vibe Coded Outages

My Reaction: Simple. I concur with the prediction.

This one hits close to home. We’re already seeing the early signs of this phenomenon. “Vibe coding” – that delightful term for AI-assisted development where developers rely heavily on LLM suggestions without fully understanding the underlying logic – is becoming the norm in many shops. The problem isn’t the AI assistance itself, but the lack of deep understanding that comes with it.

When you’re building on top of code you don’t fully comprehend, you’re essentially creating a house of cards. One small change, one edge case, one unexpected input, and the whole thing comes crashing down. The outages won’t be dramatic server failures necessarily, but subtle bugs that cascade through systems built on shaky foundations.

The real issue here is that debugging vibe-coded systems requires a level of understanding that the original developers may not possess. You can’t effectively troubleshoot what you don’t understand, and that’s going to lead to longer resolution times and more frequent failures.

2. Companies Will More Often Develop Their Own Custom Tools

My Reaction: 100% agreed, as I’ve already seen it happening – anecdotally, at least – in places I’m working. However, I’ve also seen evidence of fairly extensive glue code being put together via “vibe” coding elsewhere, from Microsoft to Amazon to other places. All for better or worse.

This is already happening at scale, and I’ve witnessed it firsthand. The traditional model of buying off-the-shelf solutions and adapting them is being replaced by rapid prototyping and custom development. Why? Because AI makes it faster and cheaper to build something tailored to your specific needs than to integrate and customize existing solutions.

But here’s the catch – we’re seeing a lot of “glue code” being generated. Not the elegant, well-architected solutions we’d hope for, but rather quick-and-dirty integrations that work for now but create technical debt for later. I’ve seen this pattern at Microsoft, Amazon, and other major tech companies where teams are rapidly prototyping solutions that work in the short term but lack the architectural rigor of traditional enterprise software.

The upside is innovation and speed. The downside is maintenance nightmares and the potential for significant refactoring down the road.

3. We’re in the VC Subsidized Phase of AI; Will Get More Expensive Like Uber + En-shittification

My Reaction: I concur with this point too, though it’s kind of odd that there’s an agree-or-disagree option here, since it’s just the reality of the matter at the current time. Eventually the cost, even with reductions from efficiencies and the like, will go up just from the magnitude of what is being done. Efficiencies will only get us so far. The cost will eventually have to go up, just as the time spent organizing and coordinating the use of this tooling will go up, even though what can be done will grow exponentially. This entire prediction topic could, and needs to, be extensively expanded on.

This is the elephant in the room that everyone’s trying to ignore. Right now, we’re in the honeymoon phase where AI services are heavily subsidized by venture capital, similar to how Uber operated in its early days. The prices are artificially low to drive adoption and build market share.

But here’s the reality: the computational costs of running these models at scale are enormous. The energy consumption alone is staggering. As usage grows exponentially, the costs will have to follow. We’re already seeing early signs of this with API rate limits and pricing adjustments from major providers.

The “enshittification” aspect is particularly concerning. As these services become essential infrastructure, providers will have increasing leverage to extract more value. We’ll see feature degradation, increased lock-in, and pricing that reflects the true cost of the service rather than the subsidized rate.

This deserves its own deep dive post – the economics of AI infrastructure are going to fundamentally reshape how we think about software costs.

4. Junior Developers Will Become Senior Developers More Rapidly

My Reaction: Disagree and agree. The ramifications in the software engineering industry around this specific space are extensive. So much so that I’ll write an entirely new post just on this topic. It’s in the cooker; it’ll be ready soon!

This is a nuanced prediction that I have mixed feelings about. On one hand, AI tools are democratizing access to complex programming concepts. A junior developer can now generate sophisticated code patterns, implement complex algorithms, and work with technologies they might not have encountered before.

But here’s the critical distinction: there’s a difference between being able to generate code and being able to architect systems, debug complex issues, and make sound technical decisions under pressure. The latter requires experience, pattern recognition, and deep understanding that can’t be accelerated by AI alone.

I’m seeing a concerning trend where junior developers are being promoted based on their ability to produce working code quickly, but they lack the foundational knowledge to handle the inevitable problems that arise. This creates a dangerous gap in our industry.

The real question is: are we creating a generation of developers who can build but can’t maintain, debug, or evolve systems? This topic is so complex and important that it deserves its own dedicated post.

5. Existing Programming Languages Will Form a Hegemony

My Reaction: I mostly agree with this point. There may be some new languages that come up, and languages that more specifically allow agents to communicate across paths without the need for human-based languages and their respective error-prone inefficiencies.

The programming language landscape is consolidating around a few dominant players. Python, JavaScript, Java, and C# are becoming the de facto standards for most development work. This consolidation is driven by several factors: AI training data is heavily weighted toward these languages, tooling and ecosystem maturity, and the practical reality that most developers need to work with existing codebases.

However, I think we’ll see some interesting developments in agent-to-agent communication languages. As AI systems become more sophisticated, they may develop their own protocols and languages optimized for machine-to-machine communication rather than human readability. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The hegemony isn’t necessarily bad – it reduces fragmentation and makes it easier to find talent and resources. But it also risks stifling innovation and creating monocultures that are vulnerable to specific types of problems.

6. Value of Contrarian People Will Be Higher Than Yes Men

My Reaction: Agreed. Then of course those with a healthy dose of questions have always found a more useful path in society over time than the “yes men” type cowards. Politics in the US of course being an exception right now.

This prediction resonates deeply with me. In an environment where AI can generate code, documentation, and even architectural decisions at the click of a button, the ability to question, challenge, and think critically becomes exponentially more valuable.

The “yes men” who simply implement whatever is suggested without critical analysis are becoming obsolete. AI can do that job better and faster. What AI can’t do is ask the hard questions: “Is this the right approach?” “What are the long-term implications?” “How does this fit with our broader strategy?”

Contrarian thinking becomes a competitive advantage because it’s the one thing that AI can’t replicate – genuine skepticism and independent thought. The people who can look at AI-generated solutions and say “wait, this doesn’t make sense” or “we’re missing something important here” will become increasingly valuable.

This is especially true in technical leadership roles where the ability to make nuanced decisions and see around corners becomes critical.

7. Shein-ification of Software (Software Will Become More Like Fast Fashion)

My Reaction: I agree that this will start to happen, as US-led capitalism sadly tends toward a race to the bottom, and with all the negatives of fast fashion (there are a lot), this will happen with AI-led software development too. Everything from enshittification to the environmental harms – it’s going to happen, and many in the industry can only do their best to mitigate the negatives.

This is perhaps the most concerning prediction on the list. The “Shein-ification” of software refers to the trend toward disposable, quickly produced software that follows the fast fashion model: cheap, trendy, and designed to be replaced rather than maintained.

We’re already seeing signs of this. AI makes it incredibly easy to generate new applications, features, and even entire systems. The barrier to entry is lower than ever, which means more software is being produced with less thought given to long-term sustainability.

The environmental impact is particularly troubling. The computational resources required to train and run AI models are enormous, and if we’re producing more disposable software, we’re essentially burning through resources for short-term gains.

The challenge for the industry is to resist this trend and maintain focus on building software that’s designed to last, evolve, and provide long-term value rather than quick wins.

8. Relative Value of Fostering Talent to Name Things Well Will Be More Important

My Reaction: An under-reported prediction and an under-reported NEED among skillsets. Using the right words at the right times for the right things in the right way is going to grow exponentially as a skillset need.

This is a subtle but profound prediction that I think is being overlooked. In an AI-driven development environment, the ability to name things well becomes critical because it directly impacts how effectively AI can understand and work with your code.

Good naming conventions, clear abstractions, and well-defined interfaces become the difference between AI that can effectively assist and AI that generates confusing, unmaintainable code. The people who can create clear, semantic naming schemes and architectural patterns will become incredibly valuable.
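
A tiny, invented illustration of the difference (the names and the discount rule are hypothetical): the second version gives a human reviewer and a code-generating model far more to work with.

# Hard on humans and models alike: what is d, what unit, where does 0.2 come from?
def proc(d):
    return d * 0.2 if d > 100 else 0

# Semantic names carry the intent, so generated changes are less likely to drift.
BULK_THRESHOLD_USD = 100
BULK_DISCOUNT_RATE = 0.2

def calculate_bulk_discount(order_total_usd: float) -> float:
    if order_total_usd > BULK_THRESHOLD_USD:
        return order_total_usd * BULK_DISCOUNT_RATE
    return 0.0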

This extends beyond just variable names and function names. It includes the ability to create clear APIs, well-defined data models, and intuitive system architectures that both humans and AI can understand and work with effectively.

The irony is that as AI becomes more capable, the human skills around communication, clarity, and semantic design become more important, not less.

9. Optimize LLM for Specific Use Cases

My Reaction: Not sure this is a prediction; it’s already happening.

This is already well underway. We’re seeing specialized models for coding (GitHub Copilot, Cursor), for specific domains (legal, medical, financial), and for particular tasks (code review, documentation generation, testing).

The trend toward specialization makes sense from both a performance and cost perspective. A general-purpose model trying to be good at everything will inevitably be mediocre at most things. Specialized models can be optimized for specific use cases, leading to better results and more efficient resource usage.

We’re also seeing the emergence of model composition, where different specialized models work together to handle complex tasks. This is likely to continue and accelerate as the technology matures.

10. Companies Will Die Faster Because We Can Replicate Functionality Faster

My Reaction: Agreed.

This is a natural consequence of lowered barriers to entry. If AI makes it easier and faster to build software, then it also makes it easier and faster to replicate existing functionality. This creates a more competitive landscape where companies need to move faster and innovate more aggressively to maintain their competitive advantage.

The traditional moats around software companies – technical complexity, development time, specialized knowledge – are being eroded by AI. What used to take months or years to build can now be prototyped in days or weeks.

This isn’t necessarily bad for consumers, who will benefit from more competition and faster innovation. But it’s challenging for companies that rely on technical barriers to entry as their primary competitive advantage.

The companies that survive will be those that can move fastest, adapt most quickly, and find new ways to create value beyond just technical implementation.

11. Existence of Non-Technical Managers Will Decrease

My Reaction: I’m doubtful of this. If anything, the use of AI will lower overall technical ability and cause significant issues around troubleshooting, as those using the tooling gloss over deep knowledge and end up lacking depth.

I’m skeptical of this prediction. While AI might make it easier for non-technical people to generate code, it doesn’t necessarily make them better at managing technical teams or making technical decisions.

In fact, I think we might see the opposite trend. As AI tools become more accessible, we might see more people in management roles who can generate code but lack the deep technical understanding needed to make sound architectural decisions or troubleshoot complex issues.

The real challenge will be ensuring that technical managers have both the AI-assisted productivity tools and the foundational knowledge needed to make good decisions. Simply being able to generate code doesn’t make someone a good technical leader.

12. Vibe Code Will Cause a Return to Small Teams with Microservices

My Reaction: Agree and disagree, in that order.

I agree that vibe coding will drive architectural changes, but I’m not sure microservices is the inevitable result. The challenge with vibe-coded systems is that they’re often built without a clear understanding of the underlying architecture, which can lead to tightly coupled, monolithic systems that are hard to maintain.

However, the trend toward microservices might be driven more by the need to isolate failures and limit the blast radius of bugs in vibe-coded systems. If you can’t trust the code quality, you need to architect around that uncertainty.

The disagreement comes from the fact that microservices also require significant architectural discipline and understanding, which might be at odds with the vibe coding approach. We might see a different architectural pattern emerge that’s better suited to AI-assisted development.

13. Software Will Become a Living Conversation, Not a Static Thing

My Reaction: Agree; more dynamic, conversational development driven by the speed increase will occur in many projects, for some products and some services.

This is already happening in many development environments. The traditional model of writing code, testing it, and deploying it is being replaced by a more iterative, conversational approach where developers work with AI to continuously refine and improve their systems.

The speed of iteration is increasing dramatically. What used to take days or weeks can now happen in hours or minutes. This allows for more experimentation, faster feedback loops, and more responsive development processes.

However, this also creates challenges around version control, testing, and deployment. If software is constantly evolving, how do you ensure stability and reliability? How do you manage the complexity of systems that are always changing?

14. Website Search Will No Longer Be Relevant in ~3 Years

My Reaction: Is it now?

This prediction seems to assume that website search is currently highly relevant, which I’m not sure is the case. Traditional web search has been declining in relevance for years as content has moved to social media, apps, and other platforms.

The rise of AI-powered search and information retrieval might accelerate this trend, but I think the real question is whether website search was ever as relevant as we thought it was. The future of information discovery is likely to be more conversational and contextual, driven by AI rather than traditional keyword-based search.

15. LLMs Will Be Software and Replace Stacks

My Reaction: I’m not sure this isn’t already the way things are. The context, the case, and the specificity aren’t really clear here.

This prediction is a bit vague, but I think it’s referring to the idea that LLMs might become the primary interface for interacting with software systems, potentially replacing traditional APIs and user interfaces.

We’re already seeing early signs of this with AI-powered interfaces that can understand natural language and translate it into system actions. The question is whether this will extend to the point where traditional software stacks become obsolete.

I’m skeptical that this will happen completely, but I do think we’ll see more AI-native interfaces and interactions that make traditional software feel more conversational and intuitive.

16. (Software) Libraries Will Become Less Relevant

My Reaction: Agreed.

As AI becomes more capable of generating code from scratch, the need for pre-built libraries and frameworks may decrease. Why use a library when you can have AI generate exactly what you need, tailored to your specific use case?

This trend is already visible in some areas where developers are using AI to generate custom implementations rather than pulling in external dependencies. The benefits include reduced dependency management, smaller bundle sizes, and more control over the implementation.

However, this also means losing the benefits of community-maintained, battle-tested code. The challenge will be finding the right balance between custom generation and proven libraries.

17. In the Future Ads Will Become Even More Precise; LLMs Will Have More Info for Targeting

My Reaction: Agreed. I hate this.

This is perhaps the most dystopian prediction on the list. As LLMs become more sophisticated and have access to more personal data, they’ll be able to create incredibly targeted and persuasive advertising that’s tailored to individual users’ psychology, preferences, and vulnerabilities.

The privacy implications are enormous. We’re already seeing early signs of this with AI-powered ad targeting that can analyze user behavior and create personalized content. As the technology improves, this will become even more sophisticated and invasive.

This is a trend that I find deeply concerning from both a privacy and societal perspective. The ability to manipulate individuals through highly targeted, AI-generated content represents a significant threat to autonomy and informed decision-making.

18. There Will Be a Standardization of Information Architecture Which Will Allow Faster Iteration of Tooling

My Reaction: Doubtful. If humanity and industry haven’t done this already, I see no reason we’ll do it now.

This prediction assumes that we’ll finally achieve the standardization that we’ve been trying to implement for decades. While AI might make it easier to work with standardized formats and protocols, I’m skeptical that it will drive the kind of widespread adoption needed for true standardization.

The history of technology is full of failed standardization attempts. Even when standards exist, they’re often ignored or implemented inconsistently. AI might make it easier to work with existing standards, but it won’t necessarily create the political and economic incentives needed for widespread adoption.

19. LLMs Will Cause a Dearth of New Innovation

My Reaction: I fear that this could happen, in some ways. But in other ways I think humanity can, and will be forced to – with the culmination of AI, the toxic immolation of democracies, and the other horrors facing the world right now – innovate and change, hopefully for the better, in ways we don’t even grasp yet, with these massive triggers affecting us.

This is a complex prediction that touches on fundamental questions about human creativity and innovation. On one hand, if AI can generate solutions to most problems, there might be less incentive for humans to engage in the kind of deep, creative thinking that leads to breakthrough innovations.

On the other hand, the challenges we’re facing as a society – climate change, political instability, economic inequality – are so profound that they might force innovation in ways we can’t currently imagine. The combination of AI capabilities and existential threats might actually accelerate innovation rather than stifle it.

The key question is whether AI will augment human creativity or replace it. I’m optimistic that it will be the former, but it’s not guaranteed.

20. AIs Will Invent Own Programming Language

My Reaction: Agreed. I believe to some degree they already have.

This is already happening in subtle ways. AI systems are developing their own internal representations and communication protocols that are optimized for machine-to-machine interaction rather than human readability.

As AI systems become more sophisticated and need to work together, they’ll likely develop more formal languages and protocols for communication. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The interesting question is whether these AI-invented languages will be more efficient or expressive than human-designed languages, and whether humans will eventually adopt them for certain types of programming tasks.

21. Some Countries Will Make AI Access a Universal Right

My Reaction: Agreed. While others will block it, ban it, control it, and shape it to further attack and rewrite known narratives the world accepts as positives.

This prediction reflects the growing recognition that AI access is becoming a fundamental requirement for participation in modern society. Just as internet access has become essential for education, employment, and civic participation, AI access is following the same trajectory.

Some countries will embrace this and provide universal access to AI tools and services, recognizing it as a public good. Others will restrict access, either for political reasons or to maintain control over information and communication.

The geopolitical implications are significant. Countries that provide universal AI access will have a competitive advantage in education, innovation, and economic development. Those that restrict it will fall behind.

22. Languages Used for Strengths More Than as a Panacea

My Reaction: “Languages used for strengths more than as a panacea.” I added this one. It’s more a hope than a prediction. For example, I hope Go is used for its strengths, Rust for its strengths, Java, C#, etc. Instead of trying to make C# or Rust the panacea across all platforms and all needs. One can hope!

This is my addition to the list, and it’s more of a hope than a prediction. In an AI-driven development environment, there’s a risk that we’ll default to whatever language the AI is most comfortable with, rather than choosing the right tool for the job.

I hope that AI will actually help empower us to make better language choices by understanding the strengths and weaknesses of different languages and recommending the most appropriate one for each use case. Go for its concurrency and simplicity, Rust for its safety and performance, Java for its enterprise ecosystem, C# for its Microsoft integration, and so on.

The goal should be to use each language for what it does best, rather than trying to make one language solve every problem. AI could actually help us achieve this by providing better guidance on language selection and architecture decisions.

The Big Picture

These 22 predictions, which all of us participants conjured up, paint a picture of an industry in rapid transformation. Some trends are already visible, others are still emerging. The common thread is that AI is fundamentally changing how we think about software development, from the tools we use to the way we organize teams and make decisions – all for better or worse.

The challenge for the industry is to navigate these changes thoughtfully, preserving what’s valuable about traditional software development while embracing the opportunities that AI presents. The predictions that concern me most are those that suggest a race to the bottom in terms of quality, sustainability, and long-term thinking.

The predictions that excite me most are those that suggest AI will augment human capabilities rather than replace them, enabling us to be more creative and experimental in our solutions while preserving the critical thinking and problem-solving skills that make good developers valuable.

As we move forward, the key will be maintaining our focus on building software that’s not just functional, but sustainable, maintainable, and valuable in the long term. AI can help us build faster, but it can’t replace the judgment and wisdom needed to build well.


What are your thoughts on these predictions? Which ones resonate with your experience, and which ones seem off-base? I’d love to hear your perspective on where you think AI is taking our industry.

I’m on Mastodon at https://metalhead.club/@adron, Threads at https://www.threads.com/@adron, and Blue Sky at https://bsky.app/profile/adron.bsky.social – hit me up with your thoughts!

VS Code & Copilot: The Chat-First Spec Definition Method

My initial review of Copilot and getting started is available here.

Technique 1: The Chat-First Spec (What it does well – and why it’s not magic, but almost!)

Let me be clear: the Copilot Chat feature in VS Code can feel like a miracle until it’s not. When it’s working, you fire off a multi-line prompt defining what you want – “Build a function for X, validate Y, return Z…” – and boom, VS Code’s inline chat generates a draft that is scary good.

  • What actually wins: It interprets your specification in context – your open file, project imports, naming conventions – and spits out runnable sample code. That’s not trivial; reputable models often lose context threading. Here, the chat lives in your editor, not detached, and that nuance matters.
    It’s like sketching the spec in natural language, then having VS Code autocomplete not just code but entire behavior.
  • What you still have to do: Take a breath, a le sigh, and read it. Always. Control flow, edge cases, off-by-one errors – Copilot doesn’t care. Security? Data leakage? All on you. Copilot doesn’t own the logic; it just stitches together patterns it’s seen. You own the correctness.
  • Trick that matters: Iterate. Ask follow-ups: “Okay, now handle invalid inputs by throwing InvalidArgumentException,” or “Refactor this to async/await.” Having a chat continuum in the editor is powerful, but don’t forget it’s your spec, not the AI’s. There’s a small sketch of what this flow looks like just below.
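
To make that flow concrete, here’s the shape of it in Python – the prompt and the resulting function are invented for illustration, not a captured Copilot transcript:

# Prompt typed into Copilot Chat (inline, with the file open for context):
#   "Write a function parse_price that takes a string like '$1,234.56',
#    validates it, returns the value in integer cents, and raises
#    ValueError on malformed or negative input."

def parse_price(raw: str) -> int:
    """Parse a price string like '$1,234.56' into integer cents."""
    cleaned = raw.strip().lstrip("$").replace(",", "")
    try:
        dollars = float(cleaned)
    except ValueError:
        raise ValueError(f"malformed price: {raw!r}")
    if dollars < 0:
        raise ValueError("price cannot be negative")
    return round(dollars * 100)

# Follow-ups in the same chat keep the context:
#   "Okay, now accept an optional currency symbol parameter."
#   "Add unit tests covering empty strings and negative values."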

Technique 2: Prompt With Skeleton First

Skip blindly describing behavior. Instead, scaffold it:

// Function: validateUserInput
// Takes { name: string, age: number }
// Returns { valid: boolean, errors: string[] }
// Edge cases: missing name, non-numeric age

function validateUserInput(input) {
  // ...
}

Then let Copilot fill in the body. Why this rocks:

  • You’re giving structure: types, return shapes, edge conditions.
  • The auto-generated code fits into your skeleton, adhering to your naming and your data model.
  • You retain control over boundaries, types, and structure even before Copilot chimes in.

Downside? If your skeleton is misleading or incomplete, Copilot will “fill in” confidently, with code that compiles but does the wrong thing. Again, your code review has to rule.

Technique 3: In-Context Refactoring Conversations (AKA “Let me fix your mess, Copilot”)

Ever accepted a Copilot suggestion, then hated it? Instead of discarding, turn on Copilot Chat:

  • Ask it: “Refactor this to reduce nesting and improve readability,” or “Convert this to use .reduce() instead of .forEach().”
  • Watch it rewrite within the same context, not tangential code thrown at you.

That’s one of its massive values – context-aware surgical refactoring – not blanket “clean this up” that ends in a different variable naming scheme or method order from your repo.
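
As a hedged illustration of that kind of surgical ask – the snippet and the “after” are invented for this post, not actual Copilot output – a prompt like “reduce the nesting and return early” over a selected block tends to come back as an in-place rewrite rather than a restyling of your whole file:

# Before: works, but nests three levels deep.
def find_active_admin(users):
    result = None
    for user in users:
        if user.get("active"):
            if user.get("role") == "admin":
                if result is None:
                    result = user
    return result

# After asking Copilot Chat to "reduce nesting and return early":
def find_active_admin(users):
    for user in users:
        if user.get("active") and user.get("role") == "admin":
            return user
    return None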

The catch: refactor prompts depend on Copilot’s parsing of your style. If your code is sloppy, it’s going to be sloppily refactored. So yes, you still have to keep code clean, comment clearly, and limit complexity. Copilot is the editor version of duct tape, not a refactor wizard.

The Brutal Truth

  • VS Code + Copilot isn’t a magical co-developer. It’s a smart auto-completer with chat, living in your IDE, context-aware but utterly obedient to your prompts.
  • The trick is not the AI; it’s how you lead it. The better your spec, skeleton, or prompt, the better your code.
  • Your style – skeptical, questioning, pragmatic – fits perfectly. You don’t let it ride; you interrogate. And that’s exactly how it should be.

TL;DR Summary

  • Chat-first spec – What works: a detailed natural-language spec → meaningful code. What fails without you: no spec clarity → garbage logic.
  • Skeleton prompts – What works: provides structure, types, expectations. What fails without you: a bad skeleton = bad code, fast.
  • In-editor refactoring chat – What works: context-preserving improvements. What fails without you: messy code → messy refactor.

If you want more details on how to integrate Copilot into CI, or want my personal prompt templates, drop the demand below and I’ll tackle it head-on next time.

GitHub Copilot: A Getting Started Guide to GitHub Copilot

Note: I’ve decided to start writing up the multitude of AI tools/tooling, and this is the first of many posts on the topic. This post is effectively a baseline of what one should be familiar with to get rolling with GitHub Copilot. As I add posts, I’ll link them at the bottom of this post to reference the different tools, as well as back-reference them to this post, so that they’re all easily findable. With that, let’s roll…

Intro

GitHub Copilot is thoroughly changing how developers write code, serving as a kind of industry standard – almost – for AI-powered code completion and generation. As someone who’s been in software development for over two decades, I’ve seen many tools come and go, but the modern variant of Copilot represents a fundamental shift in how we approach coding – it’s not just a tool, it’s a new paradigm for human-AI collaboration in software development.

In this comprehensive guide, I’ll walk you through everything you need to know to get started with GitHub Copilot, from basic setup to advanced features that will transform your development workflow.

What is GitHub Copilot?

GitHub Copilot is an AI-powered code completion tool that acts as your virtual pair programmer. It was originally built on OpenAI’s Codex model (newer versions run on more recent OpenAI models) and trained on billions of lines of public code, making it incredibly adept at understanding context, suggesting completions, and even generating entire functions based on your comments and existing code.

Key Capabilities

  • Real-time code suggestions as you type
  • Comment-to-code generation from natural language descriptions
  • Multi-language support across 50+ programming languages
  • Context-aware completions that understand your project structure
  • IDE integration with VS Code, Visual Studio, Neovim, and JetBrains IDEs

Getting Started: Setup and Installation

Prerequisites

  • A GitHub account
  • An active GitHub Copilot plan (free, individual, business, or enterprise)
  • A supported IDE: VS Code, Visual Studio, a JetBrains IDE, or Neovim

Installation Steps

1. Subscribe to GitHub Copilot

2. Install the Extension

  • VS Code: Search for “GitHub Copilot” in the Extensions marketplace
  • Visual Studio: Install from Visual Studio Marketplace
  • JetBrains IDEs: Install from JetBrains Marketplace
  • Neovim: Use copilot.vim or copilot.lua

3. Authenticate

  • Sign in to your GitHub account when prompted
  • Authorize the extension to access your account
  • Verify your Copilot subscription is active
[Screenshot: the Visual Studio Code welcome interface, with options for opening chat features, managing code completions, and accessing recent projects.]

Core Features and How to Use Them

1. Inline Suggestions

Copilot provides real-time code suggestions as you type. These appear as gray text that you can accept by pressing Tab or Enter.

# Type this comment and Copilot will suggest the function
def calculate_compound_interest(principal, rate, time, compounds_per_year):
    # Copilot will suggest the complete implementation
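
For reference, a plausible completion – hypothetical, not captured Copilot output – using the standard compound interest formula A = P · (1 + r/n)^(n·t):

def calculate_compound_interest(principal, rate, time, compounds_per_year):
    amount = principal * (1 + rate / compounds_per_year) ** (compounds_per_year * time)
    return amount - principal  # interest earned, excluding the principal

print(calculate_compound_interest(1000, 0.05, 10, 12))  # ≈ 647.01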

2. Comment-to-Code Generation

One of Copilot’s most powerful features is generating code from natural language comments.

// Create a function that validates email addresses using regex
// Copilot will generate the complete function with proper validation
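
In a Python file, the equivalent comment might yield something along these lines – an invented example of the kind of function Copilot tends to produce, so treat the regex as a starting point rather than gospel:

import re

# Create a function that validates email addresses using regex
EMAIL_PATTERN = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a syntactically valid email."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("adron@example.com"))  # True
print(is_valid_email("not-an-email"))       # False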

3. Function Completion

Start typing a function and let Copilot complete it based on context:

def process_user_data(user_input):
    # Start typing and Copilot will suggest the next lines
    if not user_input:
        return None
    
    # Continue with the implementation

4. Test Generation

Copilot can generate test cases for your functions:

def add_numbers(a, b):
    return a + b

# Type "test" or "def test_" and Copilot will suggest test functions

Advanced Features and Techniques

1. Multi-line Completions

Press Tab to accept the current suggestion in full – multi-line completions are accepted as a whole – or open the Copilot suggestions panel (Ctrl + Enter in VS Code) to review and pick from several longer alternatives.

2. Alternative Suggestions

When Copilot suggests code, press Alt + [ or Alt + ] to cycle through alternative suggestions.

3. Inline Chat (Copilot Chat)

The newer Copilot Chat feature allows you to have conversations about your code:

  • Press Ctrl + I (or Cmd + I on Mac) to open inline chat
  • Ask questions about your code
  • Request refactoring suggestions
  • Get explanations of complex code sections

4. Custom Prompts

Learn to write effective prompts for better code generation:

Good prompts:

# Create a REST API endpoint that accepts POST requests with JSON data,
# validates the input, and returns a success response with status code 201

Less effective prompts:

# Make an API endpoint

Best Practices for Effective Copilot Usage

1. Write Clear Comments

The quality of Copilot’s suggestions directly correlates with the clarity of your comments and context.

# Good: Clear, specific description
def parse_csv_file(file_path, delimiter=',', skip_header=True):
    """
    Parse a CSV file and return a list of dictionaries.
    
    Args:
        file_path (str): Path to the CSV file
        delimiter (str): Character used to separate fields
        skip_header (bool): Whether to skip the first row as header
    
    Returns:
        list: List of dictionaries where keys are column names
    """

2. Provide Context

Help Copilot understand your project structure and coding style:

# This function follows the project's error handling pattern
# and uses the standard logging configuration
def process_payment(payment_data):

3. Review Generated Code

Always review and test code generated by Copilot:

  • Check for security vulnerabilities
  • Ensure it follows your project’s coding standards
  • Verify the logic matches your requirements
  • Run tests to confirm functionality

4. Iterative Refinement

Use Copilot as a starting point, then refine the code:

  • Accept the initial suggestion
  • Modify it to match your specific needs
  • Ask Copilot to improve specific aspects
  • Iterate until you have the desired result

Language-Specific Tips

Python

  • Copilot excels at Python due to its extensive training data
  • Great for data science, web development, and automation scripts
  • Excellent at generating docstrings and type hints

JavaScript/TypeScript

  • Strong support for modern ES6+ features
  • Good at React, Node.js, and frontend development patterns
  • Effective at generating test files and API clients

Java

  • Good support for Spring Boot and enterprise patterns
  • Effective at generating boilerplate code and tests
  • Strong understanding of Java conventions

Go

  • Growing support with good understanding of Go idioms
  • Effective at generating HTTP handlers and data structures
  • Good at following Go best practices

Troubleshooting Common Issues

1. Suggestions Not Appearing

  • Verify your Copilot subscription is active
  • Check that you’re signed into the correct GitHub account
  • Restart your IDE after authentication
  • Ensure the extension is properly installed and enabled

2. Poor Quality Suggestions

  • Improve your comments and context
  • Check that your file has the correct language extension
  • Provide more context about your project structure
  • Use more specific prompts

3. Performance Issues

  • Disable other AI coding extensions that might conflict
  • Check your internet connection (Copilot requires online access)
  • Restart your IDE if suggestions become slow
  • Update to the latest version of the extension

4. Security Concerns

  • Never paste sensitive data or credentials into Copilot
  • Review generated code for security vulnerabilities
  • Use Copilot in private repositories when possible
  • Be cautious with code that handles user input or authentication

Integration with Development Workflows

1. Pair Programming

Copilot can act as a third member of your pair programming session:

  • Generate alternative implementations for discussion
  • Create test cases to explore edge cases
  • Suggest refactoring opportunities
  • Help with debugging by generating test scenarios

2. Code Review

Use Copilot to enhance your code review process:

  • Generate additional test cases
  • Suggest alternative implementations
  • Identify potential improvements
  • Create documentation for complex functions

3. Learning and Exploration

Copilot is excellent for learning new technologies:

  • Generate examples of new language features
  • Create sample projects to explore frameworks
  • Build reference implementations
  • Practice with different coding patterns

Enterprise and Team Features

1. GitHub Copilot Business

  • Cost: $19/user/month
  • Features: Advanced security, compliance, and team management
  • Use Cases: Enterprise development teams, compliance requirements

2. GitHub Copilot Enterprise

  • Cost: Custom pricing
  • Features: Advanced security, custom models, dedicated support
  • Use Cases: Large enterprises, government, highly regulated industries

3. Team Management

  • Centralized billing and user management
  • Usage analytics and reporting
  • Security and compliance features
  • Integration with enterprise identity providers

Resources and Further Learning

Official Resources

Third-Party Tutorials and Guides

Advanced Techniques and Pro Tips

1. Custom Snippets and Templates

Create custom snippets that work well with Copilot:

// VS Code snippets.json
{
  "API Endpoint": {
    "prefix": "api-endpoint",
    "body": [
      "app.post('/${1:endpoint}', async (req, res) => {",
      "  try {",
      "    const { ${2:params} } = req.body;",
      "    ${3:// Copilot will suggest validation and processing logic}",
      "    res.status(201).json({ success: true, data: result });",
      "  } catch (error) {",
      "    res.status(500).json({ success: false, error: error.message });",
      "  }",
      "});"
    ]
  }
}

2. Context-Aware Prompts

Learn to write prompts that leverage your project’s context:

# This function should follow the same pattern as the other API functions
# in this file, using the shared error handling and response formatting
def get_user_profile(user_id):

3. Testing Strategies

Use Copilot to generate comprehensive test suites:

# Generate tests that cover edge cases, error conditions, and normal operation
# Use the same testing patterns as the existing test files in this project
def test_user_authentication():

4. Documentation Generation

Let Copilot help with documentation:

# Generate comprehensive docstring following Google style
# Include examples, parameter descriptions, and return value details
def process_payment(payment_data, user_id, options=None):

Security and Privacy Considerations

1. Data Privacy

  • Copilot processes your code to provide suggestions
  • Avoid pasting sensitive information, credentials, or proprietary code
  • Use private repositories when working with confidential code
  • Review GitHub’s privacy policy and data handling practices

2. Code Security

  • Generated code may contain security vulnerabilities
  • Always review and test generated code
  • Use security scanning tools to identify potential issues
  • Follow security best practices for your specific domain

3. Compliance Requirements

  • Consider compliance requirements for your industry
  • Evaluate whether Copilot meets your security standards
    • Will the data going out and back be ok with the org?
    • Do additional SLAs or other requirements need to be put in place?
  • Consult with your security team before adoption
    • That data flowing in and out, given this is a hosted service, could pose a significant number of risks for any org.
  • Document usage policies and guidelines

Performance Optimization

1. IDE Configuration

Optimize your IDE for better Copilot performance:

// VS Code settings.json
{
  "github.copilot.enable": {
    "*": true,
    "plaintext": false,
    "markdown": false,
    "scminput": false
  },
  "github.copilot.suggestions": {
    "enable": true,
    "showInlineSuggestions": true
  }
}

2. Network Optimization

  • Ensure stable internet connection
  • Use VPN if required by your organization
  • Consider enterprise deployment for better performance
  • Monitor network usage and optimize as needed

3. Resource Management

  • Disable other AI coding extensions
  • Monitor memory and CPU usage
  • Restart IDE periodically if performance degrades (?? I’ve seen this suggestion multiple places and it bothers me immensely)
  • Update extensions and IDE regularly

Conclusion

GitHub Copilot represents a fundamental shift in software development, moving us from manual coding to AI-assisted development. While it’s not a replacement for understanding programming fundamentals, it’s a powerful tool that can significantly enhance your productivity and code quality.

The key to success with Copilot is learning to work with it effectively: writing clear prompts, providing good context, and always reviewing generated code. Start with the basics, practice regularly, and gradually incorporate more advanced features into your workflow.

As we move forward in this AI-augmented development era, developers who can effectively collaborate with AI tools like Copilot will have a significant advantage. The future of programming isn’t about replacing developers – albeit a whole lot of that might be happening right now – it’s more about augmenting their capabilities and enabling them to focus on higher-level problem solving and innovation.

Next Steps

  1. Set up your GitHub Copilot subscription and install the extension
  2. Practice with simple projects to get comfortable with the workflow
  3. Experiment with different prompting techniques to improve suggestion quality
  4. Integrate Copilot into your daily development routine
  5. Share your experiences and learn from the community

Remember, mastery of AI programming tools like GitHub Copilot is a journey, not a destination. Start today, practice consistently, and you’ll be amazed at how quickly it transforms your development experience.

Next up, more on getting started with the various tools and the baseline knowledge you should have around each.

Follow me on LinkedIn, Mastodon, or Blue Sky for more insights on AI programming and software development.