A Reflection on SOLID: Decades of Code, Principles, and a Changing Future

After decades of building, breaking, refactoring, and rebuilding systems, from scrappy startups to enterprise labyrinths, I’ve seen a lot of patterns come and go.
But few have stuck around as stubbornly as the SOLID principles.

They’re like the veteran developers of software philosophy: reliable, experienced, and still showing up to standups long after everyone else has moved on to the latest framework or architecture fad. For years, teams I’ve been on, and many I’ve led, have leaned heavily on SOLID as the bedrock of maintainable software design. For the most part, it’s served us well.

Still, as the craft of development shifts into an AI-augmented era, I can’t help but wonder: is SOLID still as solid as it once was?

The Foundation We Built On

Let’s run through the familiar set – but not as definitions you could pull from Wikipedia. Let’s talk about what they actually meant in practice.

Single Responsibility Principle
This was the sanity rule. Keep your classes (or other similar code elements) from doing too much. If a class has more than one reason to change, it’ll eventually collapse under its own weight. I learned this one early, often the hard way, cleaning up classes that had taken on the personality of their developer: a little of everything in one chaotic pile.
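
A minimal sketch in TypeScript, with hypothetical names, of what that cleanup usually looks like – one do-everything class pulled apart so each piece has exactly one reason to change:

```typescript
// Before: one class with several reasons to change (fetching,
// formatting, and delivery all live in the same pile).
class ReportService {
  fetchData(): string { return "raw data"; }
  formatAsHtml(data: string): string { return `<p>${data}</p>`; }
  email(html: string): void { console.log(`sending: ${html}`); }
}

// After: each class has a single responsibility and a single
// reason to change.
class ReportFetcher {
  fetch(): string { return "raw data"; }
}

class HtmlFormatter {
  format(data: string): string { return `<p>${data}</p>`; }
}

class ReportMailer {
  send(html: string): void { console.log(`sending: ${html}`); }
}
```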

Open/Closed Principle
The guiding star of extensibility: extend without modifying. In theory, that meant safety from regressions, and when done well it genuinely did reduce regressions and smooth ongoing feature development. In reality, it also meant endless debates about whether you were “violating OCP” every time you opened a file to fix something.
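
A hedged illustration of the “extend without modifying” idea, with invented names: new behavior arrives as a new class implementing a shared contract, while the code consuming that contract never gets edited.

```typescript
// New behavior arrives as new classes; existing code stays closed.
interface DiscountRule {
  applies(total: number): boolean;
  discount(total: number): number;
}

class BulkDiscount implements DiscountRule {
  applies(total: number): boolean { return total > 1000; }
  discount(total: number): number { return total * 0.1; }
}

// Adding this rule later requires no edits to anything below.
class HolidayDiscount implements DiscountRule {
  applies(_total: number): boolean { return new Date().getMonth() === 11; }
  discount(total: number): number { return total * 0.05; }
}

// This function never needs to change when a new rule shows up.
function applyDiscounts(total: number, rules: DiscountRule[]): number {
  return rules.reduce(
    (t, rule) => (rule.applies(t) ? t - rule.discount(t) : t),
    total,
  );
}
```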

Liskov Substitution Principle
The quiet workhorse. Inheritance should make sense. If your subclass breaks expectations, you’re not using polymorphism, you’re just lying to your codebase.
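
The classic illustration, sketched in TypeScript: a Square subclassing Rectangle compiles fine but quietly breaks the expectations of any code written against the base class.

```typescript
class Rectangle {
  constructor(public width: number, public height: number) {}
  setWidth(w: number): void { this.width = w; }
  setHeight(h: number): void { this.height = h; }
  area(): number { return this.width * this.height; }
}

// A Square "is a" Rectangle in math, but not in behavior:
// setting the width silently changes the height too.
class Square extends Rectangle {
  setWidth(w: number): void { this.width = w; this.height = w; }
  setHeight(h: number): void { this.width = h; this.height = h; }
}

// Any code written against Rectangle now gets surprising results.
function stretch(r: Rectangle): number {
  r.setWidth(4);
  r.setHeight(5);
  return r.area(); // expected 20; a Square returns 25
}
```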

Interface Segregation Principle
This was the rebellion against “god interfaces.” I’ve lost count of the times I’ve seen an IThingManager with thirty methods, half of which every implementation ignores. ISP was the call to split those monsters into sane, digestible contracts.
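
A trimmed-down sketch of the split – the real IThingManager monsters have thirty methods, but this hypothetical one gets the idea across with four:

```typescript
// The "god interface" ISP rebels against (trimmed to four methods).
interface IThingManager {
  create(name: string): void;
  archive(id: string): void;
  exportCsv(): string;
  sendNotifications(): void;
}

// Split into contracts clients actually need, one concern each.
interface ThingWriter {
  create(name: string): void;
  archive(id: string): void;
}

interface ThingExporter {
  exportCsv(): string;
}

// An implementation opts into only what it can honestly support.
class CsvExporter implements ThingExporter {
  exportCsv(): string { return "id,name\n"; }
}
```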

Dependency Inversion Principle
The rise of abstractions over implementations. This principle was both a blessing and a curse, the birthplace of dependency injection frameworks and inversion-of-control containers that we alternately loved and cursed. It gave structure but also a new layer of complexity.
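
A minimal sketch of the principle, and of the constructor injection that the DI frameworks grew up to automate, using invented names:

```typescript
// High-level policy depends on an abstraction, not a concrete store.
interface OrderStore {
  save(order: { id: string; total: number }): void;
}

class SqlOrderStore implements OrderStore {
  save(order: { id: string; total: number }): void {
    console.log(`INSERT order ${order.id}`); // stand-in for real SQL
  }
}

class CheckoutService {
  // The dependency is injected; swapping stores needs no edits here.
  constructor(private store: OrderStore) {}
  placeOrder(id: string, total: number): void {
    this.store.save({ id, total });
  }
}

// Wiring happens at the edge of the app (what IoC containers automate).
const checkout = new CheckoutService(new SqlOrderStore());
checkout.placeOrder("ord-1", 99.5);
```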

When SOLID Worked

Over the years, I’ve seen SOLID absolutely save teams, including mine, from coding disasters. For example, when a project scaled from three developers to thirty, SOLID acted as the stabilizing force. It created a shared language around what “good code” meant.

It kept monoliths from turning to spaghetti. It made refactoring survivable. It let teams ship features faster without worrying that a small change in the billing module would trigger a cascade failure in authentication.

In essence, SOLID made code cooperative. It taught us to think in modules, contracts, and boundaries. Those were lessons worth keeping.

When SOLID Became a Problem

But for every clean, modular, testable success story, there’s an over-engineered mess hiding behind the same banner.
I’ve walked into codebases where every noun in the business domain had an interface, an abstract class, and two decorators, none of which did anything meaningful. All in the name of being “SOLID.”

Here’s where it tends to go wrong:

  • Too Many Abstractions. “Open for extension” became “never touch anything again.” Layers of indirection were added to prevent change, not to enable it.
  • Framework Fetishism. Dependency injection containers got treated like religion. If you weren’t injecting it, mocking it, or wrapping it in a factory, were you even a developer?
  • Premature Architecture. Entire hierarchies were built for features that never came. Extensibility for the sake of hypotheticals.
  • Cognitive Overhead. Code so “modular” that new developers spent a week just tracing through interfaces before they found the line that actually did something (a sketch of that flavor follows this list).
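
To make that indirection problem concrete, here’s a minimal, entirely hypothetical sketch – an interface, an abstract base, and a decorator that add nothing over the single line doing the work (all names invented for illustration):

```typescript
// Four layers of "SOLID" ceremony around one meaningful line.
interface IGreeter {
  greet(name: string): string;
}

abstract class GreeterBase implements IGreeter {
  abstract greet(name: string): string;
}

class Greeter extends GreeterBase {
  // The only line in the pile that actually does something.
  greet(name: string): string { return `Hello, ${name}`; }
}

// A decorator that decorates with... nothing.
class GreeterDecorator implements IGreeter {
  constructor(private inner: IGreeter) {}
  greet(name: string): string { return this.inner.greet(name); }
}

const greeter: IGreeter = new GreeterDecorator(new Greeter());
console.log(greeter.greet("world"));
```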

SOLID was meant to reduce complexity, but taken too far, it became its own source of complexity.

The Shift: SOLID in the Age of AI-Generated Code

Now we’re standing at a strange crossroads.
AI-assisted development has changed the cost equation of software. Code is no longer expensive to write, it’s expensive to understand.

That changes everything.

When an AI can rewrite, refactor, or regenerate entire systems on command, the reasons we relied on SOLID begin to shift. SOLID was about human maintainability, protecting code from the chaos of change over time. But if AI can refactor in seconds what once took hours, do we still need all that protective architecture?

Maybe not. Or maybe, it needs reinterpretation.

AI doesn’t care if your class violates the Single Responsibility Principle; it can just regenerate it into smaller, purpose-built components when needed. But for humans reading, debugging, or reasoning about the system, SOLID still matters. It’s how we think about structure, even if we don’t handcraft it anymore.

The next evolution might be AI-native SOLID, principles guiding how AI systems generate and organize code for clarity, composability, and self-repair.

What Remains Solid?

In the end, SOLID wasn’t just about code. It was about discipline. About having a mental model for systems that grow beyond a single developer’s head.

But as AI tooling takes over more of the mechanical parts of design, the real question becomes:

Are we still designing for humans to understand the system or for systems to understand themselves?

That’s the next frontier.
And maybe, just maybe, it’s time to redefine what “SOLID” software design means in this new era.

Polyglot Conference Vancouver 2024: Real Talk About AI, Industry Hubris, and the Art of Unconferencing

Just got back from another incredible Polyglot Conference in Vancouver, and I’m still processing everything that went down. There’s something magical about this event – it’s not your typical conference with polished presentations and vendor booth nonsense. It’s an unconference, which means the real magic happens in the conversations, the debates, and the genuine human connections that form when you put a room full of smart, opinionated developers together and let them talk about what actually matters.

The People Make the Conference

It was excellent to meet so many new people and catch up with friends I’ve not gotten to see in some time! This is what makes Polyglot special – it’s not just about the content, it’s about the community. I found myself in conversations with developers from startups to enterprise, from different countries and backgrounds, all bringing their unique perspectives to the table.

There’s something refreshing about being in a room where everyone is there because they genuinely want to be there, not because their company sent them or because they’re trying to sell something. The conversations flow naturally, the questions are real, and the debates are substantive. No one’s trying to impress anyone with buzzwords or corporate speak (though we’ll often laugh our asses off at the nonsense of corporate speak and marketecture).

I caught up with folks I hadn’t seen since before the pandemic, met new faces who are doing interesting work, and had those serendipitous hallway conversations that often lead to the most valuable insights. The kind of conversations where you’re still talking an hour later, completely forgetting that there’s a scheduled session happening somewhere else.

The Unconference Format: Getting to the Heart of Things

The sessions were, as always with an unconference, jam-packed with content, and when we dove in we got to the heart of the topics real quick. This is the beauty of the unconference format – there’s no time for fluff or corporate posturing. People show up with real problems, real experiences, and real opinions, and we get straight to the point.

Unlike traditional conferences where you sit through 45-minute presentations that could have been 15-minute talks, unconference sessions are dynamic and responsive. Someone brings up a topic, the group decides if it’s worth exploring, and then we dive deep. If the conversation isn’t going anywhere, we pivot. If it’s getting interesting, we keep going. The format respects everyone’s time and intelligence.

The sessions I participated in covered everything from microservices architecture to team dynamics in the face of agentic AI tooling, from introspecting databases with AI tooling to the future of programming languages in the face of AI tooling. But the most compelling discussions were around AI – not the hype, not the marketing, but the real-world implications of what we’re building and how it’s changing our industry, for better or worse – and there’s a lot of expectation it’ll bring a lot of the latter.

Coping with AI: The Real Talk

Some of the talks included coping with AI, and just the general insanity that surrounds the technology and the hubris of the industry right now. This is where things got really interesting, because we weren’t talking about AI in the abstract or as some distant future possibility. We were talking about it as a present reality that’s already reshaping how we work, think, and build software.

The “coping with AI” discussion was particularly revealing. We’re not talking about how to use AI tools effectively – that’s the easy part. We’re talking about how to maintain our sanity and professional integrity in an industry that’s gone completely off the rails with AI hype and magical thinking.

The Insanity of AI Hype

The insanity surrounding AI right now is breathtaking. Every company is trying to cram AI into every product, whether it makes sense or not. We’re seeing AI-powered toasters and AI-enhanced paper clips – things with a single boolean operation where they’re burning through tokens to make a yes-or-no decision. Utter madness on that front: that’s like half a tree burned up, a windmill rotation, or a chunk or two of coal just to flip a light switch! The technology has become a solution in search of problems, and the industry is happy to oblige with increasingly absurd use cases.

But the real insanity isn’t the over-application of the technology – it’s the way we’re talking about it. AI is being positioned as the solution to every problem, the answer to every question, the future of everything. It’s not just a tool, it’s become a religion. And like any religion, it’s creating true believers who can’t see the limitations, the risks, or the unintended consequences. Maybe “cult” should be added to the “religion” moniker?

The conversations at Polyglot were refreshing because they cut through this hype. We talked about the real limitations of AI, the actual problems it creates (holy bananas there are a lot of them), and the genuine challenges of working with these systems in production. No one was trying to sell anyone on the latest AI miracle – we were trying to understand what’s actually happening and how to deal with it. Simply put, what’s our day-to-day action plan to mitigate these problems, and what are we doing when the hubris and house of cards come crumbling down? After all, much of the world’s economy is hinged on AI becoming all the things! Nuts!

The Hubris of the Industry

The hubris of the industry right now is staggering. We’re building systems that we don’t fully understand, deploying them at scale, and then acting surprised when they don’t work as expected. The confidence with which people make claims about AI capabilities is matched only by the lack of evidence supporting those claims.

I heard stories from developers who are being asked to implement AI solutions that don’t make technical sense, from managers who think AI can replace human judgment, and from executives who believe that throwing more AI at a problem will automatically make it better. The disconnect between what AI can actually do and what people think it can do is enormous.

The hubris extends beyond just the technology to the way we’re thinking about the future. There’s this assumption that AI will solve all our problems, that it will make us more productive, that it will create a better world. But we’re not asking the hard questions about what we’re actually building, who it serves, and what the long-term consequences might be.

The Real Challenges

The real challenges of working with AI aren’t technical – they’re human. How do you maintain code quality when your team is generating code they don’t fully understand? How do you make architectural decisions when the tools can generate solutions faster than you can evaluate them? How do you maintain professional standards when the industry is racing to the bottom in terms of quality and sustainability?

These are the questions that kept coming up in our discussions. Not “how do I use ChatGPT to write better code” but “how do I maintain my professional integrity in an environment where AI is being used to cut corners and avoid hard thinking?”

The conversations were honest and sometimes uncomfortable. People shared stories of being pressured to use AI in ways that didn’t make sense, of watching their colleagues become dependent on tools they didn’t understand, and of struggling to maintain quality standards in an environment that prioritizes speed over everything else.

The Path Forward

The most valuable part of these discussions wasn’t just identifying the problems – it was exploring potential solutions. How do we maintain our professional standards while embracing the benefits of AI? How do we educate our teams and our organizations about the real capabilities and limitations of these tools? How do we build systems that are both powerful and maintainable?

The consensus seemed to be that we need to be more thoughtful about how we integrate AI into our work. Not as a replacement for human judgment, but as a tool that augments our capabilities. Not as a way to avoid hard problems, but as a way to tackle them more effectively.

We also need to be more honest about the limitations and risks. The industry’s tendency to oversell AI capabilities is creating unrealistic expectations and dangerous dependencies. We need to have more conversations about what AI can’t do, what it shouldn’t do, and what the consequences might be when it’s used inappropriately.

The Value of Real Conversation

What struck me most about these discussions was how different they were from the typical AI conversations you hear at other conferences. There was no posturing, no trying to impress anyone with the latest buzzwords, no corporate speak about “digital transformation” or “AI-first strategies.”

Instead, we had real conversations about real problems with real people who are dealing with these issues every day. People shared their failures as well as their successes, their concerns as well as their optimism, their questions as well as their answers.

This is the value of the unconference format and the Polyglot community. It creates a space where people can be honest about what’s actually happening, where they can ask the hard questions, and where they can explore ideas without the pressure to conform to industry narratives or corporate agendas.

Looking Ahead

As I reflect on the conference, I’m struck by how much the industry has changed since the last time I was at Polyglot. AI has gone from being a niche topic to dominating every conversation. The questions we’re asking have shifted from “what is AI?” to “how do we live with AI?” and “how do we maintain our humanity in an AI-driven world?”

The conversations at Polyglot give me hope that we can navigate this transition thoughtfully. Not by rejecting AI or embracing it uncritically, but by engaging with it honestly and maintaining our professional standards and human values.

The industry needs more spaces like this – places where people can have real conversations about real problems without the hype, the marketing, or the corporate agenda getting in the way. Places where we can explore the hard questions and work together to find better answers.

The Takeaway

The biggest takeaway from Polyglot this year is that we’re at a critical juncture. The AI revolution isn’t coming – it’s here. And the choices we make now about how we integrate these tools into our work, our teams, and our industry will shape the future of software development for decades to come.

We can either let the hype and hubris drive us toward a future where software becomes disposable, quality becomes optional, and human judgment becomes obsolete. Or we can choose a different path – one where AI augments our capabilities without replacing our humanity, where we maintain our professional standards while embracing new tools, and where we build systems that are both powerful and sustainable.

The conversations at Polyglot suggest that there are people in the industry who are choosing the latter path. People who are thinking critically about AI, asking the hard questions, and working to build a future that serves human needs rather than corporate interests.

That gives me hope. And it makes me even more committed to being part of these conversations, to asking the hard questions, and to working with others who are trying to build a better future for our industry.

The Polyglot (Un)Conference and other (un)conference-like events continue to be among the most valuable events in the software development community. If you’re looking for real conversations about real problems with real people, I can’t recommend it highly enough.

The conference was such a good time with such great topics, introductions, and interactions that I’ve already bought a ticket for next year. If you’re interested in joining the conversation, check out polyglotsoftware.com and grab your tickets at Eventbrite.

The Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver

I just wrapped up attending an absolutely fascinating session at the Polyglot Conference here in Vancouver, BC. The talk was titled “Second Order Effects of AI Acceleration” and it sparked the kind of thought-provoking discussion – surface level and a little into the meat – that gets the brain gears turning. A room full of developers, architects, product thinkers, and tech leaders debating predictions about where this AI acceleration is actually taking us.

The format was brilliant: predictions followed by arguments for and against each one. No hand-waving, no corporate speak, just real people with real experience hashing out what they think is coming down the pipe. I took notes on all 22 predictions and my immediate gut reactions to each one.

1. Far More Vibe Coded Outages

My Reaction: Simple. I concur with the prediction.

This one hits close to home. We’re already seeing the early signs of this phenomenon. “Vibe coding” – that delightful term for AI-assisted development where developers rely heavily on LLM suggestions without fully understanding the underlying logic – is becoming the norm in many shops. The problem isn’t the AI assistance itself, but the lack of deep understanding that comes with it.

When you’re building on top of code you don’t fully comprehend, you’re essentially creating a house of cards. One small change, one edge case, one unexpected input, and the whole thing comes crashing down. The outages won’t be dramatic server failures necessarily, but subtle bugs that cascade through systems built on shaky foundations.

The real issue here is that debugging vibe-coded systems requires a level of understanding that the original developers may not possess. You can’t effectively troubleshoot what you don’t understand, and that’s going to lead to longer resolution times and more frequent failures.

2. Companies Will More Often Develop Their Own Custom Tools

My Reaction: 100% agreed, as I’ve already seen it happening, at least anecdotally, in places I’m working. However, I’ve also seen evidence of fairly extensive glue code being put together via “vibe” coding everywhere from Microsoft to Amazon and beyond. All for better or worse.

This is already happening at scale, and I’ve witnessed it firsthand. The traditional model of buying off-the-shelf solutions and adapting them is being replaced by rapid prototyping and custom development. Why? Because AI makes it faster and cheaper to build something tailored to your specific needs than to integrate and customize existing solutions.

But here’s the catch – we’re seeing a lot of “glue code” being generated. Not the elegant, well-architected solutions we’d hope for, but rather quick-and-dirty integrations that work for now but create technical debt for later. I’ve seen this pattern at Microsoft, Amazon, and other major tech companies where teams are rapidly prototyping solutions that work in the short term but lack the architectural rigor of traditional enterprise software.

The upside is innovation and speed. The downside is maintenance nightmares and the potential for significant refactoring down the road.

3. We’re in the VC Subsidized Phase of AI; Will Get More Expensive Like Uber + En-shittification

My Reaction: I concur with this point too, though it’s kind of odd there’s an agree-or-disagree here, since it’s just the reality of the matter at the current time. Eventually the cost, even with reductions from efficiencies and such, will go up just from the magnitude of what is being done. Efficiencies will only get us so far. The cost will eventually have to go up, just as the time spent organizing and coordinating the use of this tooling will go up, even as what can be done grows exponentially. This specific prediction could and needs to be extensively expanded on.

This is the elephant in the room that everyone’s trying to ignore. Right now, we’re in the honeymoon phase where AI services are heavily subsidized by venture capital, similar to how Uber operated in its early days. The prices are artificially low to drive adoption and build market share.

But here’s the reality: the computational costs of running these models at scale are enormous. The energy consumption alone is staggering. As usage grows exponentially, the costs will have to follow. We’re already seeing early signs of this with API rate limits and pricing adjustments from major providers.

The “enshittification” aspect is particularly concerning. As these services become essential infrastructure, providers will have increasing leverage to extract more value. We’ll see feature degradation, increased lock-in, and pricing that reflects the true cost of the service rather than the subsidized rate.

This deserves its own deep dive post – the economics of AI infrastructure are going to fundamentally reshape how we think about software costs.

4. Junior Developers Will Become Senior Developers More Rapidly

My Reaction: Disagree and agree. The ramifications for the software engineering industry around this specific space are extensive. So much so, I’ll write an entirely new post just on this topic. It’s in the cooker; it’ll be ready soon!

This is a nuanced prediction that I have mixed feelings about. On one hand, AI tools are democratizing access to complex programming concepts. A junior developer can now generate sophisticated code patterns, implement complex algorithms, and work with technologies they might not have encountered before.

But here’s the critical distinction: there’s a difference between being able to generate code and being able to architect systems, debug complex issues, and make sound technical decisions under pressure. The latter requires experience, pattern recognition, and deep understanding that can’t be accelerated by AI alone.

I’m seeing a concerning trend where junior developers are being promoted based on their ability to produce working code quickly, but they lack the foundational knowledge to handle the inevitable problems that arise. This creates a dangerous gap in our industry.

The real question is: are we creating a generation of developers who can build but can’t maintain, debug, or evolve systems? This topic is so complex and important that it deserves its own dedicated post.

5. Existing Programming Languages Will Form a Hegemony

My Reaction: I mostly agree with this point. There may be some new languages that come up, and languages that more specifically allow agents to communicate across paths without the need for human-oriented languages and their error-prone inefficiencies.

The programming language landscape is consolidating around a few dominant players. Python, JavaScript, Java, and C# are becoming the de facto standards for most development work. This consolidation is driven by several factors: AI training data is heavily weighted toward these languages, tooling and ecosystem maturity, and the practical reality that most developers need to work with existing codebases.

However, I think we’ll see some interesting developments in agent-to-agent communication languages. As AI systems become more sophisticated, they may develop their own protocols and languages optimized for machine-to-machine communication rather than human readability. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The hegemony isn’t necessarily bad – it reduces fragmentation and makes it easier to find talent and resources. But it also risks stifling innovation and creating monocultures that are vulnerable to specific types of problems.

6. Value of Contrarian People Will Be Higher Than Yes Men

My Reaction: Agreed. Then again, those with a healthy dose of questions have always found a more useful path in society over time than the “yes men” type of cowards. Politics in the US, of course, being an exception right now.

This prediction resonates deeply with me. In an environment where AI can generate code, documentation, and even architectural decisions at the click of a button, the ability to question, challenge, and think critically becomes exponentially more valuable.

The “yes men” who simply implement whatever is suggested without critical analysis are becoming obsolete. AI can do that job better and faster. What AI can’t do is ask the hard questions: “Is this the right approach?” “What are the long-term implications?” “How does this fit with our broader strategy?”

Contrarian thinking becomes a competitive advantage because it’s the one thing that AI can’t replicate – genuine skepticism and independent thought. The people who can look at AI-generated solutions and say “wait, this doesn’t make sense” or “we’re missing something important here” will become increasingly valuable.

This is especially true in technical leadership roles where the ability to make nuanced decisions and see around corners becomes critical.

7. Sheinification of Software (Software Will Become More Like Fast Fashion)

My Reaction: I agree that this will start to happen, as US-led capitalism tends toward a race to the bottom, sadly. With all the negatives of fast fashion (there are a lot), this will happen with AI-led software development too. Everything from enshittification to the environmental negatives – this is going to happen, and many in the industry can only do their best to mitigate the downsides.

This is perhaps the most concerning prediction on the list. The “Sheinification” of software refers to the trend toward disposable, quickly-produced software that follows the fast fashion model: cheap, trendy, and designed to be replaced rather than maintained.

We’re already seeing signs of this. AI makes it incredibly easy to generate new applications, features, and even entire systems. The barrier to entry is lower than ever, which means more software is being produced with less thought given to long-term sustainability.

The environmental impact is particularly troubling. The computational resources required to train and run AI models are enormous, and if we’re producing more disposable software, we’re essentially burning through resources for short-term gains.

The challenge for the industry is to resist this trend and maintain focus on building software that’s designed to last, evolve, and provide long-term value rather than quick wins.

8. Relative Value of Fostering Talent to Name Things Well Will Be More Important

My Reaction: An under-reported prediction, and a real NEED among skillsets. Using the right words at the right times for the right things in the right way is going to grow exponentially as a skillset need.

This is a subtle but profound prediction that I think is being overlooked. In an AI-driven development environment, the ability to name things well becomes critical because it directly impacts how effectively AI can understand and work with your code.

Good naming conventions, clear abstractions, and well-defined interfaces become the difference between AI that can effectively assist and AI that generates confusing, unmaintainable code. The people who can create clear, semantic naming schemes and architectural patterns will become incredibly valuable.

This extends beyond just variable names and function names. It includes the ability to create clear APIs, well-defined data models, and intuitive system architectures that both humans and AI can understand and work with effectively.

The irony is that as AI becomes more capable, the human skills around communication, clarity, and semantic design become more important, not less.

9. Optimize LLM for Specific Use Cases

My Reaction: Not sure this is a prediction, it’s already happening.

This is already well underway. We’re seeing specialized models for coding (GitHub Copilot, Cursor), for specific domains (legal, medical, financial), and for particular tasks (code review, documentation generation, testing).

The trend toward specialization makes sense from both a performance and cost perspective. A general-purpose model trying to be good at everything will inevitably be mediocre at most things. Specialized models can be optimized for specific use cases, leading to better results and more efficient resource usage.

We’re also seeing the emergence of model composition, where different specialized models work together to handle complex tasks. This is likely to continue and accelerate as the technology matures.

10. Companies Will Die Faster Because We Can Replicate Functionality Faster

My Reaction: Agreed.

This is a natural consequence of lowered barriers to entry. If AI makes it easier and faster to build software, then it also makes it easier and faster to replicate existing functionality. This creates a more competitive landscape where companies need to move faster and innovate more aggressively to maintain their competitive advantage.

The traditional moats around software companies – technical complexity, development time, specialized knowledge – are being eroded by AI. What used to take months or years to build can now be prototyped in days or weeks.

This isn’t necessarily bad for consumers, who will benefit from more competition and faster innovation. But it’s challenging for companies that rely on technical barriers to entry as their primary competitive advantage.

The companies that survive will be those that can move fastest, adapt most quickly, and find new ways to create value beyond just technical implementation.

11. Existence of Non-Technical Managers Will Decrease

My Reaction: I’m doubtful of this. If anything, the use of AI will lower overall technical ability and cause significant issues around troubleshooting, as those using the tooling gloss over deep knowledge and lack depth.

I’m skeptical of this prediction. While AI might make it easier for non-technical people to generate code, it doesn’t necessarily make them better at managing technical teams or making technical decisions.

In fact, I think we might see the opposite trend. As AI tools become more accessible, we might see more people in management roles who can generate code but lack the deep technical understanding needed to make sound architectural decisions or troubleshoot complex issues.

The real challenge will be ensuring that technical managers have both the AI-assisted productivity tools and the foundational knowledge needed to make good decisions. Simply being able to generate code doesn’t make someone a good technical leader.

12. Vibe Code Will Cause a Return to Small Teams with Microservices

My Reaction: Agree and disagree, in that order.

I agree that vibe coding will drive architectural changes, but I’m not sure microservices is the inevitable result. The challenge with vibe-coded systems is that they’re often built without a clear understanding of the underlying architecture, which can lead to tightly coupled, monolithic systems that are hard to maintain.

However, the trend toward microservices might be driven more by the need to isolate failures and limit the blast radius of bugs in vibe-coded systems. If you can’t trust the code quality, you need to architect around that uncertainty.

The disagreement comes from the fact that microservices also require significant architectural discipline and understanding, which might be at odds with the vibe coding approach. We might see a different architectural pattern emerge that’s better suited to AI-assisted development.

13. Software Will Become a Living Conversation, Not a Static Thing

My Reaction: Agree; more dynamic conversations, driven by the speed increase, will occur in many projects for some products and some services.

This is already happening in many development environments. The traditional model of writing code, testing it, and deploying it is being replaced by a more iterative, conversational approach where developers work with AI to continuously refine and improve their systems.

The speed of iteration is increasing dramatically. What used to take days or weeks can now happen in hours or minutes. This allows for more experimentation, faster feedback loops, and more responsive development processes.

However, this also creates challenges around version control, testing, and deployment. If software is constantly evolving, how do you ensure stability and reliability? How do you manage the complexity of systems that are always changing?

14. Website Search Will No Longer Be Relevant in ~3 Years

My Reaction: Is it now?

This prediction seems to assume that website search is currently highly relevant, which I’m not sure is the case. Traditional web search has been declining in relevance for years as content has moved to social media, apps, and other platforms.

The rise of AI-powered search and information retrieval might accelerate this trend, but I think the real question is whether website search was ever as relevant as we thought it was. The future of information discovery is likely to be more conversational and contextual, driven by AI rather than traditional keyword-based search.

15. LLMs Will Be Software and Replace Stacks

My Reaction: I’m not sure this isn’t already the way things are. The context, use case, and specificity aren’t really clear here.

This prediction is a bit vague, but I think it’s referring to the idea that LLMs might become the primary interface for interacting with software systems, potentially replacing traditional APIs and user interfaces.

We’re already seeing early signs of this with AI-powered interfaces that can understand natural language and translate it into system actions. The question is whether this will extend to the point where traditional software stacks become obsolete.

I’m skeptical that this will happen completely, but I do think we’ll see more AI-native interfaces and interactions that make traditional software feel more conversational and intuitive.

16. (Software) Libraries Will Become Less Relevant

My Reaction: Agreed.

As AI becomes more capable of generating code from scratch, the need for pre-built libraries and frameworks may decrease. Why use a library when you can have AI generate exactly what you need, tailored to your specific use case?

This trend is already visible in some areas where developers are using AI to generate custom implementations rather than pulling in external dependencies. The benefits include reduced dependency management, smaller bundle sizes, and more control over the implementation.

However, this also means losing the benefits of community-maintained, battle-tested code. The challenge will be finding the right balance between custom generation and proven libraries.

17. In the Future Ads Will Become Even More Precise; LLMs Will Have More Info for Targeting

My Reaction: Agreed. I hate this.

This is perhaps the most dystopian prediction on the list. As LLMs become more sophisticated and have access to more personal data, they’ll be able to create incredibly targeted and persuasive advertising that’s tailored to individual users’ psychology, preferences, and vulnerabilities.

The privacy implications are enormous. We’re already seeing early signs of this with AI-powered ad targeting that can analyze user behavior and create personalized content. As the technology improves, this will become even more sophisticated and invasive.

This is a trend that I find deeply concerning from both a privacy and societal perspective. The ability to manipulate individuals through highly targeted, AI-generated content represents a significant threat to autonomy and informed decision-making.

18. There Will Be a Standardization of Information Architecture Which Will Allow Faster Iteration of Tooling

My Reaction: Doubtful. If humanity and industry haven’t done this already, I see no reason we’ll do it now.

This prediction assumes that we’ll finally achieve the standardization that we’ve been trying to implement for decades. While AI might make it easier to work with standardized formats and protocols, I’m skeptical that it will drive the kind of widespread adoption needed for true standardization.

The history of technology is full of failed standardization attempts. Even when standards exist, they’re often ignored or implemented inconsistently. AI might make it easier to work with existing standards, but it won’t necessarily create the political and economic incentives needed for widespread adoption.

19. LLMs Will Cause a Dearth of New Innovation

My Reaction: I fear that this could happen, in some ways. But in other ways I think humanity can and will be forced – by the culmination of AI, the toxic immolation of democracies, and the other horrors facing the world right now – to innovate and change, hopefully for the better, in ways we don’t even grasp yet, with these massive triggers affecting us.

This is a complex prediction that touches on fundamental questions about human creativity and innovation. On one hand, if AI can generate solutions to most problems, there might be less incentive for humans to engage in the kind of deep, creative thinking that leads to breakthrough innovations.

On the other hand, the challenges we’re facing as a society – climate change, political instability, economic inequality – are so profound that they might force innovation in ways we can’t currently imagine. The combination of AI capabilities and existential threats might actually accelerate innovation rather than stifle it.

The key question is whether AI will augment human creativity or replace it. I’m optimistic that it will be the former, but it’s not guaranteed.

20. AIs Will Invent Own Programming Language

My Reaction: Agreed. I believe to some degree they already have.

This is already happening in subtle ways. AI systems are developing their own internal representations and communication protocols that are optimized for machine-to-machine interaction rather than human readability.

As AI systems become more sophisticated and need to work together, they’ll likely develop more formal languages and protocols for communication. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The interesting question is whether these AI-invented languages will be more efficient or expressive than human-designed languages, and whether humans will eventually adopt them for certain types of programming tasks.

21. Some Countries Will Make AI Access a Universal Right

My Reaction: Agreed. While others will block it, ban it, and control and shape it to further attack and rewrite narratives the world accepts as positives.

This prediction reflects the growing recognition that AI access is becoming a fundamental requirement for participation in modern society. Just as internet access has become essential for education, employment, and civic participation, AI access is following the same trajectory.

Some countries will embrace this and provide universal access to AI tools and services, recognizing it as a public good. Others will restrict access, either for political reasons or to maintain control over information and communication.

The geopolitical implications are significant. Countries that provide universal AI access will have a competitive advantage in education, innovation, and economic development. Those that restrict it will fall behind.

22. Languages Used for Strengths More Than as a Panacea

My Reaction: “Languages used for strengths more than as a panacea.” I added this one. It’s more a hope than a prediction. For example, I hope Go is used for its strengths, Rust for its strengths, Java, C#, etc. Instead of trying to make C# or Rust the panacea across all platforms and all needs. One can hope!

This is my addition to the list, and it’s more of a hope than a prediction. In an AI-driven development environment, there’s a risk that we’ll default to whatever language the AI is most comfortable with, rather than choosing the right tool for the job.

I hope that AI will actually help empower us to make better language choices by understanding the strengths and weaknesses of different languages and recommending the most appropriate one for each use case. Go for its concurrency and simplicity, Rust for its safety and performance, Java for its enterprise ecosystem, C# for its Microsoft integration, and so on.

The goal should be to use each language for what it does best, rather than trying to make one language solve every problem. AI could actually help us achieve this by providing better guidance on language selection and architecture decisions.

The Big Picture

These 22 predictions, which all of us participants conjured up, paint a picture of an industry in rapid transformation. Some trends are already visible, others are still emerging. The common thread is that AI is fundamentally changing how we think about software development, from the tools we use to the way we organize teams and make decisions – all for better or worse.

The challenge for the industry is to navigate these changes thoughtfully, preserving what’s valuable about traditional software development while embracing the opportunities that AI presents. The predictions that concern me most are those that suggest a race to the bottom in terms of quality, sustainability, and long-term thinking.

The predictions that excite me most are those that suggest AI will augment human capabilities rather than replace them, enabling us to be more creative and experimental in our solutions while preserving the critical thinking and problem-solving skills that make good developers valuable.

As we move forward, the key will be maintaining our focus on building software that’s not just functional, but sustainable, maintainable, and valuable in the long term. AI can help us build faster, but it can’t replace the judgment and wisdom needed to build well.


What are your thoughts on these predictions? Which ones resonate with your experience, and which ones seem off-base? I’d love to hear your perspective on where you think AI is taking our industry.

I’m on Mastodon https://metalhead.club/@adron, Threads https://www.threads.com/@adron, and Bluesky https://bsky.app/profile/adron.bsky.social – hit me up with your thoughts!