Architects Evolving: How AI Is Reshaping the Role

In my last post, I broke down the many kinds of Software Architects: the ivory tower variety, the hands-on technical ones, and the practice architects who design how design happens. But here’s the thing: the role is shifting. The same forces that transformed how we build, deploy, and ship software are transforming what it even means to design software (or “architect” it, as many now say, verbing the noun).

We’re now entering an era where architects and principal engineers aren’t just bridging teams and systems. They’re orchestrating collaboration between humans and AI. The tools we use now think with us, and that changes everything about the craft.

The Changing Landscape

Ten years ago, a lot of architecture work was about managing complexity by building abstractions: frameworks, templates, deployment patterns. We spent weeks designing scaffolding so teams could build faster and more consistently. But that kind of work is getting automated. AI systems can generate scaffolding, boilerplate, and full service templates in minutes. The problem we used to solve, how to get started, isn’t really a problem anymore.

When the thing you used to design can now be generated instantly, the focus has to move. Architects aren’t defining structure anymore; they’re defining intelligence. You’re not asking “What should this system look like?” You’re asking “How do we make sure the AI knows why it should look that way?” It’s about shaping the inputs, context, and data so that what’s generated aligns with intent. The job shifts from building code foundations to engineering the thinking process that builds them.

The Rise of the AI-Literate Architect

We’re watching a new type of architect emerge: the AI-literate one. They still understand the core principles: separation of concerns, scalability, event-driven architecture, fault tolerance, all of it. But they also understand how generative systems influence those principles in real time.

An AI-literate architect knows how to embed architectural context into the ecosystem. They define how teams use AI-assisted tools safely and effectively. They make sure the AI understands the system’s constraints and style. They design for AI participation, not just human interaction.

This requires a mental shift. You stop thinking in one-way delivery lines and start thinking in loops. Human input feeds AI generation. AI output is validated and tuned by humans. That feedback updates the system and documentation automatically. It’s no longer a linear workflow, it’s a learning cycle. If you’re the architect, you design that cycle.

From Gatekeeper to Curator

In the old model, architects were the gatekeepers: reviewing, approving, enforcing standards through checklists and review boards. The problem is, AI doesn’t wait for review meetings. It just keeps generating.

That means the architect’s role evolves from enforcing architecture to curating it. You’re building adaptive systems that can evolve in real time while staying within safe boundaries. Architecture becomes an active process, not a static deliverable.

You start embedding your architectural intelligence directly into the system. Instead of writing 40 pages of guidelines that nobody reads, you teach the AI to enforce them. You build architectural context into templates, pipelines, and code generation rules. You define the patterns and anti-patterns that the tooling itself understands. It’s a shift from “humans enforcing rules” to “systems embedding rules.”
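As a concrete (and deliberately simplified) sketch of what “systems embedding rules” can look like, here’s a hypothetical layering check of the kind you might wire into a pipeline or a code-generation step. The layer names and rules are invented for illustration, not taken from any real tool:

```javascript
// Hypothetical layering rules: each layer may only import from the
// layers it lists. These names are invented for illustration.
const rules = {
  ui: ["services"],
  services: ["data"],
  data: [],
};

// Return the imports that violate the layering rules for a given layer.
function checkImports(layer, imports) {
  const allowed = new Set(rules[layer] || []);
  return imports.filter((dep) => !allowed.has(dep));
}

// A UI module reaching straight into the data layer gets flagged.
console.log(checkImports("ui", ["services", "data"])); // [ 'data' ]
```

The point isn’t the twenty lines of code; it’s that the rule now lives in the toolchain, where generated and human-written code alike get checked on every commit instead of in a quarterly review meeting.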

Practice Architects, in particular, are going to be the ones who make this leap first. They’ll design how AI participates in engineering: defining prompt libraries, training models on architectural standards, and creating governance systems that are continuous instead of ceremonial. Architecture review won’t be a meeting anymore; it’ll be a real-time feedback process that’s baked into every commit.

Bridging in the Age of Intelligent Systems

Architects and principal engineers have always been bridges between silos. That part doesn’t change; it just gets more complex. Now you’re bridging not just Product, Engineering, and Operations, but also human and machine reasoning.

You’re designing how information moves between people, systems, and AI tools. You have to ensure that AI-driven design decisions stay aligned with business intent, because left unchecked, they’ll drift fast. AI is great at generating something that looks right but is completely wrong. So now part of the architect’s responsibility is to make sure the system learns properly and doesn’t hallucinate structure or logic that doesn’t exist.

This is where the role scales. Architects are no longer managing systems, they’re managing sociotechnical ecosystems. People, tools, code, feedback loops, all moving parts in one continuous adaptive system. That’s the new frontier.

The New Shape of Senior Engineering

Principal Engineers, Staff Engineers, and Architects are converging. The boundaries are blurring because the core responsibility is shifting toward orchestration of people, systems, and now AI agents. The job isn’t just designing systems; it’s designing the processes that generate and sustain them.

The most effective senior engineers in this new landscape are systems thinkers. They understand feedback loops, both technical and human. They can teach teams how to reason with AI and validate its output. They can embed governance and ethics into automation. And most importantly, they know how to keep humans in the loop without killing speed or creativity.

The point isn’t to resist AI. It’s to shape it, to make sure it becomes an extension of good engineering judgment, not a replacement for it.

The Path Forward

Software architecture isn’t fading away; it’s evolving into something more dynamic. The diagrams and frameworks are still there, but now they’re part of a system that can reason about itself. The architect’s job is to make sure that system is learning the right lessons.

The best architects in this new era will stop treating architecture as documentation and start treating it as infrastructure. They’ll design processes that teach systems how to design themselves safely and intelligently. They’ll move from drawing boxes to defining the feedback loops between them. They’ll stop defending their ivory towers and start engineering adaptive ecosystems that learn and evolve continuously.

The next generation of architects won’t just design software. They’ll design intelligence into how software gets made. And that’s a far more interesting challenge.

Architects Among Us: The Many Shapes of Software Architecture

There’s a certain mystique that hangs around the title Software Architect. Some folks imagine a mythical being who levitates above the codebase, drawing circles and boxes on whiteboards, rarely touching a keyboard. Others see a burned-out senior developer who couldn’t let go of control, so they were promoted sideways. Both caricatures exist. Neither tells the whole story.

After years of designing systems, writing code, and watching teams rise and collapse under architectural decisions, I’ve realized that software architecture is less about diagrams and more about bridging people, systems, and time. It’s about building something that won’t rot under its own weight while still shipping what the business needs today.

The Job, at Its Core

A software architect’s real job is to balance competing forces. You’re thinking about performance, maintainability, developer experience, delivery velocity, and of course, cost. You deal in tradeoffs, not absolutes.

At its simplest, the role centers on three things:

  1. Defining the structure and intent of the system: what we’re building and how it fits together.
  2. Guarding the integrity of that structure as it evolves.
  3. Communicating decisions clearly across the organization, from executives to developers.

It sounds straightforward until you’re six months into a project, two refactors deep, and someone in leadership just promised a feature that blows your architecture wide open.

The Many Flavors of Architect

There isn’t just one “Software Architect.” There are variations, each shaped by how close they stay to the code and how they interact with people and process.

The Ivory Tower Architect

This type rarely writes code and speaks in abstractions so detached from reality that teams can’t implement them without rewriting half the stack. The intentions are usually good (high-level vision, strong conceptual models), but the execution falls apart because they’re disconnected from how things actually get built and deployed. You’ll find them making PowerPoints, not pull requests.

The Technical Architect

This one’s hands-on. They know the quirks of the stack, the CI/CD pipeline, the caching edge cases, and which query in the database keeps locking under load. They’re the first to prototype, the last to stop tweaking configurations. They live closer to engineering and sometimes lose sight of business priorities, but when they’re balanced, they become the backbone of any effective engineering effort.

The Practice Architect

This is where the role expands beyond a single project. A practice architect doesn’t just design systems, they design how systems get designed. They establish frameworks, decision records, and architectural practices that teams can use without waiting for top-down approval. Their deliverables aren’t just diagrams, but processes: decision logs, architecture review boards, documentation standards, and communication channels that make architectural intent stick beyond a single sprint.

They drive consistency across multiple teams, creating the connective tissue that enables autonomy without chaos. Common patterns, shared libraries, CI/CD foundations, and standard observability practices are all part of their toolkit. The practice architect creates the scaffolding that keeps an engineering organization from splintering apart.

The Work: Business, Technical, and Solution Specific

A solid architect has to be fluent in three languages: business, technical, and solution. Each speaks a different truth.

Business Work

Here’s where the architect translates intent into implications. Understanding the revenue model, customer expectations, regulatory constraints, and delivery pressure is part of the job. You turn “real-time analytics” into “streaming data ingestion with 99.9% uptime and a sub-500ms query response.” That translation is the work.
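That translation can even be made executable. A toy sketch of pinning the phrase down as numbers a system can be held to (the targets come from the example above; the field names are made up):

```javascript
// The business phrase "real-time analytics" pinned down as numbers.
// Field names here are made up for the sketch.
const slo = {
  uptimePercent: 99.9, // "99.9% uptime"
  queryResponseMs: 500, // "sub-500ms query response"
};

// Check a set of measurements against those targets.
function meetsSlo(measured) {
  return (
    measured.uptimePercent >= slo.uptimePercent &&
    measured.p99LatencyMs < slo.queryResponseMs
  );
}

console.log(meetsSlo({ uptimePercent: 99.95, p99LatencyMs: 320 })); // true
console.log(meetsSlo({ uptimePercent: 99.8, p99LatencyMs: 320 })); // false
```

Once the requirement exists in this form, it stops being a slide bullet and starts being something dashboards, alerts, and load tests can disagree with.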

Technical Work

This is where you live in the code and infrastructure. Choosing frameworks, defining service boundaries, selecting protocols, and ensuring scalability and resiliency are the bread and butter. You make sure the system can evolve. This is where non-functional requirements stop being buzzwords and start being design constraints you engineer around.

Solution Work

This is synthesis. You take the business intent and technical constraints, and build something that delivers both. It’s part art, part engineering, and entirely iterative. It’s also where you wrestle with the mess: the legacy code, the politics, and the reality that greenfield never stays green for long. Good solution architecture isn’t about the cleanest design; it’s about the right design for right now.

Working Across the Organizational Web

The real power of an architect isn’t in drawing boxes, it’s in connecting people who live in different ones. Architecture is as much a social system as it is a technical one.

A capable architect spends just as much time aligning teams as writing code or reviewing it. They operate in the seams of the organization: the handoff between Product and Engineering, where DevOps meets Security, where Marketing’s promises collide with technical reality, and where Support feels the fallout from design shortcuts. This bridging work is fundamental. It’s not optional, and it’s not “soft skills.” It’s the backbone of how complex systems actually get delivered.

Every senior technical role shares this responsibility. Staff, Principal, or Architect: it doesn’t matter what your title says; your success depends on your ability to bridge silos. As a Principal Engineer, you’re often spanning horizontally across product lines and technical domains, ensuring cohesion between efforts. As an Architect, you’re bridging vertically and horizontally while translating business strategy into technical execution and back again. Both require deep context, credibility, and communication.

You talk to Product about tradeoffs and timelines. To Technical Product Owners about system boundaries and risk mitigation. To Developer Relations about usability and external developer experience. To Marketing about aligning technical capabilities with how the product is positioned. To Support and Operations about observability, resilience, and making sure the system doesn’t implode at 2AM. You’re the connective tissue between worlds that don’t naturally interact.

That’s what makes this level of engineering different. It’s not about writing more code. It’s about broadening your influence, about seeing how design decisions ripple through people, processes, and systems. You learn to speak the language of each group without losing your technical grounding. Product won’t listen if you don’t understand deadlines. Engineers won’t respect you if you can’t code. Executives won’t care if you can’t tie architecture to revenue or risk. You have to earn trust in every direction.

The Reality Check

A software architect is part diplomat, part engineer, part historian. You carry institutional memory, technical rationale, and enough humility to admit when a “perfect design” isn’t the right one today. You have to stay close enough to the implementation to feel its pain, but far enough from the weeds to see the horizon. The balance is strategy informed by code, and code informed by strategy.

To be truly effective as an architect you don’t live in an ivory tower. You live in the intersection where technical ambition meets human limitation. And if you’re doing it right, you build systems that last just long enough for someone else to rewrite them better. 🤙🏻

Creating Distributed Database Application Starter Kits

I’ve boarded a bus, and when I board a bus I almost always code. Unless of course there are people I’m hanging out with, in which case I chit chat, but right now this is the 212 and I don’t know anybody on this chariot anyway. So into the code I go.

I’ve been re-reviewing the Docker and related collateral we offer at DataStax. In that review it seemed worth having some starter kit applications to go along with these “default” Docker options. I’ve created this post to introduce the first language and tech stack of the several starter kits I’m going to create.

Starter Kit – The Todo List Template

This first set of starter kits will be based upon a todo list application. It’s really simple, minimal in features, and offers a complete top-to-bottom implementation of a service, and an application on top of that service, all built on Apache Cassandra. In some places, and I’ll clearly mark these places, I might add a few DataStax Enterprise features around search, analytics, or graph.

The Todo List

Features: The following details the features, from the user’s perspective, that this application will provide. Each implementation will provide all of these features.

  • A user wants to create a user account to create todo lists with.
  • A user wants to be able to store a username, full name, email, and some simple notes with their account.
  • A user wants to be able to create a todo list that is identified by a user-defined name (e.g. “Grocery List”, “Guitar List”, or “Stuff to do List”).
  • A user wants to be able to log out and return, then retrieve a todo list from the list of their lists.
  • A user wants to be able to delete a todo list.
  • A user wants to be able to update a todo list name.
  • A user wants to be able to add items to a todo list.
  • A user wants to be able to update items in the todo list.
  • A user wants to be able to delete items in a todo list.
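To get a sense of how those stories might map onto Cassandra, here’s a rough CQL sketch of one possible data model. The table names, columns, and partitioning choices are placeholders I’m using to think out loud, not the final design:

```sql
-- Hypothetical data-model sketch for the todo list starter kit.
-- Partition keys chosen so each user's lists and items stay together.
CREATE TABLE IF NOT EXISTS users (
  username text PRIMARY KEY,
  full_name text,
  email text,
  notes text
);

CREATE TABLE IF NOT EXISTS todo_lists (
  username text,
  list_name text,        -- the user-defined name, e.g. "Grocery List"
  created_at timestamp,
  PRIMARY KEY ((username), list_name)
);

CREATE TABLE IF NOT EXISTS todo_items (
  username text,
  list_name text,
  item_id timeuuid,      -- gives items a stable id and creation order
  description text,
  completed boolean,
  PRIMARY KEY ((username, list_name), item_id)
);
```

The create, rename, and delete stories above then become straightforward inserts, updates, and deletes against these tables, with each list’s items living in a single partition.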

Architecture: The following is the architecture of the todo list starter kit application.

  • Database: Apache Cassandra.
  • Service: A small service to manage the data tier of the application.
  • User Interface: A web interface using React or Vue.js (still undecided).

As you can see, some of the items are incomplete, but I’ll decide on them soon. My next review is to check out what I really want to use for the user interface, and also to get a user account system figured out. I don’t really want to build the entire account system myself, but would instead like to use something like Auth0 or Okta.

May I Ask?

There are numerous things I’d love help with. Are there any user stories you think are missing? Should I add something? What would make these helpful to you? Leave a comment, or tweet at me @Adron. I’d be happy to get some feedback and others’ thoughts on the matter so that I can ensure that these are simple, to the point, usable, and helpful to people. Cheers!

Let’s Really Discuss Lock In

For too long, lock-in has carried an almost entirely negative connotation, even though it shows up in both positive and negative situations. The fact is that there’s a much more nuanced and balanced range of benefits and disadvantages to lock-in. Often this may even be referred to as this or that dependency, but either way a dependency is often just another form of lock-in. Weighing those and finding the right balance for your projects can actually make lock-in a positive game changer, or at least something that provides a basis in which to work and operate. Sometimes lock-in will even provide a way to remove lock-in, by opening up choices to other things that in turn may introduce another variance of lock-in.

Concrete Lock-in Examples

The JavaScript Lock-In

Take the language we choose to build an application in. JavaScript is a great example. It has become the singular language of the web, at least on the client side. This was, long ago, a form of lock-in that browser makers (and standards bodies) chose, and it dictated how and in which direction the web – at least web pages – would progress.

JavaScript has now become a prominent language on the server side too, thanks to Node.js. It has even moved in as a first-class language in serverless technology like AWS’s Lambda. JavaScript is a perfect example of a language that was initially a source of specific lock-in, required for the client, but eventually expanded to allow programming in a number of other environments – reducing JavaScript’s lock-in while displacing lock-in through abstractions into other spaces such as the server side and serverless functions.

The .NET Windows SQL Server Lock In

JavaScript is merely one example, and a relatively positive one that expands one’s options more than it limits one’s efforts. But let’s say the decision is made to build a high speed trading platform and choose SQL Server, .NET C#, and Windows Server. This is a technology combination that has notoriously illustrated in the past* how lock-in can be extremely dangerous.

Say the application was built out with this set of technology platforms: stored procedures in SQL Server, locking the application into that specific database; proprietary Windows-specific libraries in the .NET C# code; and IIS-specific features on Windows to make the application faster. When it was first built it seemed plenty fast and scaled just right according to the demand at the time.

Fast forward to today. The application now has a sharded database, sharded when it hit a mere 8 terabytes, loaded on two super pumped up – at least for today – servers that have many cores, many CPUs, GPUs, and all that jazz. They came in around $240k each! The application is tightly coupled to a middle tier, which is in turn tightly coupled to those famous stored procedures, and of course the application depends on the turbo capability of those IIS servers.

But today it’s slow. Looking at benchmarks and query times, the database is having a hard time dealing with things as is, and the application has outages on a routine basis for a whole variety of reasons. Sometimes tracing and debugging solves the problems quickly; other times the servers just oversubscribe resources and sit thrashing.

Where does this application go? How does one resolve the database loading issues? They’ve already sunk a half million on servers that are pegged out already; horizontally scaling isn’t an option; they’re tightly coupled to Windows Servers running IIS, removing the possibility of effectively scaling out the application servers via container technologies; and other issues besides. Without recourse, this is the type of lock-in that will kill the company unless something is changed in a massive way very soon.

To add, this is the description of an actual company that is now defunct. I phrased it as existing today only to make the point. The hard reality is the company went under almost entirely because of the costs of maintaining an unsustainable architecture, one that caused exorbitant lock-in to very specific tools – largely because the company drank the Kool-Aid and used the tools as suggested. They developed the product into a corner. That mistake was so expensive that it decimated the finances of the company. Not a good scenario, not a happy outcome, and something to be avoided in every way! This is truly the epitome of negative lock-in.

Of course there’s this distinctive lock-in we have to steer clear of, but there’s also the lock-in associated with languages and other technology capabilities that will help your company move forward faster, easier, and with increasing capabilities. Those are the choices, the ties to technology and capabilities, that decision makers can really leverage with fewer negative consequences.

The “Lock In” That Enables

One common statement is “the right tool for the job.” This is of course for the ideal world, where ideal decisions can be made all the time. That world doesn’t exist, so we have to strive for balance between decisions that will wreck the ship and decisions that will give us clear waters ahead.

For databases, we need to choose the right databases for where we want to go versus where we are today. Not to gold-plate the solution, but to have intent and a clear focus on what we want our future technology to hold for us. If we intend to expand our data and want to maintain the ability to effectively query it – let’s take the massive SQL Server example – what could we have done to prevent it from becoming a debilitating decision?

A solution that could have effectively come into play would have been not to shard the relational database, but instead to export or split the data in a more horizontal way and put it into a distributed database store, then build the application so that this system could be used instead of being limited by the relational database. As queries are built out and the tight coupling to SQL Server is removed, the new distributed database could easily add nodes to compensate for the ever growing size of the data stored. The options are numerous, and all are a form of lock-in, but not the kind that eventually killed this company after it had limited and detrimentally locked itself into use of a relational database.
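One concrete way to keep that kind of switch possible is to put a thin repository layer between the application and the store, so queries aren’t scattered through the code and tied to one database. A minimal sketch, with a made-up in-memory backend standing in for whichever store you’d actually use:

```javascript
// A hypothetical repository interface: application code depends on these
// methods, never on SQL Server, Cassandra, or any particular store.
class OrderRepository {
  async save(order) { throw new Error("not implemented"); }
  async findById(id) { throw new Error("not implemented"); }
}

// One backend among many; a distributed-database-backed class would
// expose the exact same methods, so swapping stores doesn't ripple
// through the application.
class InMemoryOrderRepository extends OrderRepository {
  constructor() {
    super();
    this.store = new Map();
  }
  async save(order) {
    this.store.set(order.id, order);
    return order;
  }
  async findById(id) {
    return this.store.get(id) || null;
  }
}

// Application code stays identical regardless of the backend in play.
async function demo(repo) {
  await repo.save({ id: "a1", total: 42 });
  return repo.findById("a1");
}

demo(new InMemoryOrderRepository()).then((o) => console.log(o.total)); // 42
```

The abstraction itself is a form of lock-in too (you’re committed to its shape), but it’s the enabling kind: it buys the option to change the store underneath without rewriting the product.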

At the application tier, another option would have been to remove the ties to IIS and start figuring out a way to containerize the application. One way, years ago, would have been to move away from .NET, but let’s say that wasn’t really an option for other reasons. Containerization could have been mimicked by shifting to a self-contained web server on Windows that would allow the .NET application to run under a single service, and then having those services spin off the application as needed. This would decouple from IIS and enable spreading the load more quickly across a set number of machines, and eventually, when .NET Core was released, offer the ability to actually containerize and shift entirely off of Windows Server to a more cost-efficient solution under Linux.

These are just some ideas. The solutions of course would vary and obviously provide different results. Above all there are pathways away from negative lock in and a direction toward positive lock in that enables. Realize there’s the balance, and find those that leverage lock in positively.

Nuanced Pedantic Notes:

  • Note I didn’t say all examples, but just that this combo has left more than a few companies out on a limb over the years. There are of course other technologies that have put companies (people actually) in awkward situations too. I’m just using this combo here as an example. For instance, probably some of the most notorious lock in comes from the legal ramifications of using Oracle products and being tied into their sales agreements. On the opposite end of the spectrum, Stack Overflow is a great example of how choosing .NET and scaling with it, SQL Server, and related technologies can work just fine.

Riak Developer Guidance

The “Client Round Robin Anti-Pattern”

One of the features that is often available in Riak client software (including the CorrugatedIron .NET client, the riak-js client, and others) is the ability to send requests to the Riak cluster through a round-robin style approach. What this means is that each IP of each node within the Riak cluster is entered into a config file for the client. The client then goes through that list to send off requests to read, write, or delete data in the database.

The client being responsible and knowledgeable about the data tier of the application in an architecture is an immediate red flag! The concept around SoC (Separation of Concerns) dictates that

“SoC is a principle for separating a computer program into distinct sections, such that each section addresses a separate concern.”

Having the client provide a network-tier layer to round-robin communication with the database leaves us in a scenario that should be separated into individual concerns. Below is some basic guidance on eliminating this SoC issue.

  • Client ONLY sends and receives communication: The client, especially in the situation with a distributed system like Riak should only be dealing with sending and receiving information from the cluster or a facade that provides an interface for that cluster.
  • Another layer should deal with the network communication and division of nodes and node communication. Ideally, in the case of Riak, and for most distributed systems, this should be dealt with at the network device layer (router).
  • The network device (router) layer would ideally be able to have (through software likely) a way to automate the failure, inclusion or exclusion of nodes with the cluster system. If a node goes down, the network device should handle the immediate cessation of communication with that node from all clients, routing the communication accordingly to an active node.
  • The node itself needs to maintain a continual information state available to the network. Ideally the network state would identify any addition or removal of a node and if possible the immediate failure of a node. Of course it isn’t always possible to be informed of a failure, but the first line of defense should start within the cluster itself among the nodes.

The Anti-Pattern

Having the client handle all of these parts of the functional architecture leads to a number of problems, not merely that the guidance of the SoC concept is broken. Attempting to track and be aware of the individual nodes in the cluster saddles the client with a huge responsibility.

Take for instance the riak-js client. If a node goes down, the client will need to be aware of which node has gone down. For a few seconds (yes, you have to wait entire seconds at this level) the node will be gone and the client won’t know it is down. The client would just have to reasonably wait. When the communication times out, the client then has the responsibility of marking that particular node as down. At this point the client must track which node it is in some type of data repository local to the client. The client must also set a timer or some other way to identify when the node comes back up. Several questions start to come up, such as:

  • Does the client do an arbitrary test to determine when the node comes back up?
  • When the node comes back up is it considered alive or damaged?
  • How would the client manage the IP (or identifier) of the node that has gone down?
  • How long would the client store that the node is down?
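To make that burden concrete, here is a sketch of the bookkeeping a client ends up doing once it takes on round-robin duties. This is illustrative JavaScript I wrote for the post, not the API of riak-js or any real client:

```javascript
// Illustrative only: the state a round-robin client is forced to track.
class NaiveRoundRobinClient {
  constructor(nodes, retryAfterMs = 5000) {
    this.nodes = nodes;          // every node IP, hard-coded in config
    this.downUntil = new Map();  // node -> timestamp when it may be retried
    this.next = 0;
    this.retryAfterMs = retryAfterMs;
  }

  // Pick the next node considered alive; the client now owns failure state.
  pickNode(now = Date.now()) {
    for (let i = 0; i < this.nodes.length; i++) {
      const node = this.nodes[(this.next + i) % this.nodes.length];
      const until = this.downUntil.get(node) || 0;
      if (now >= until) {
        this.next = (this.next + i + 1) % this.nodes.length;
        return node;
      }
    }
    return null; // every node marked down: the client has no answer
  }

  // On timeout the client must record the failure and guess when to retry.
  markDown(node, now = Date.now()) {
    this.downUntil.set(node, now + this.retryAfterMs);
  }
}
```

Every line of that state management – the down list, the arbitrary retry window, the “is it alive or damaged?” guess – is a concern that properly belongs in the network tier, not in every single client talking to the cluster.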

The list of questions can get long pretty quickly; thus the bad karma of not following good practice around separating your concerns appropriately! One has to be careful; a god class might be right around the corner otherwise! That’s it for this quick journey into some distributed database usage guidelines. Until next time, happy data sciencing.  😉