The Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver

I just wrapped up attending an absolutely fascinating session at the Polyglot Conference here in Vancouver, BC. The talk was titled “Second Order Effects of AI Acceleration” and it was a thought-provoking discussion – surface level with a little dive into the meat – the kind that gets the brain gears turning. A room full of developers, architects, product folks, thinkers, and tech leaders debating predictions about where this AI acceleration is actually taking us.

The format was brilliant: predictions followed by arguments for and against each one. No hand-waving, no corporate speak, just real people with real experience hashing out what they think is coming down the pipe. I took notes on all 22 predictions and my immediate gut reactions to each one.

1. Far More Vibe Coded Outages

My Reaction: Simple. I concur with the prediction.

This one hits close to home. We’re already seeing the early signs of this phenomenon. “Vibe coding” – that delightful term for AI-assisted development where developers rely heavily on LLM suggestions without fully understanding the underlying logic – is becoming the norm in many shops. The problem isn’t the AI assistance itself, but the lack of deep understanding that comes with it.

When you’re building on top of code you don’t fully comprehend, you’re essentially creating a house of cards. One small change, one edge case, one unexpected input, and the whole thing comes crashing down. The outages won’t be dramatic server failures necessarily, but subtle bugs that cascade through systems built on shaky foundations.

The real issue here is that debugging vibe-coded systems requires a level of understanding that the original developers may not possess. You can’t effectively troubleshoot what you don’t understand, and that’s going to lead to longer resolution times and more frequent failures.

2. Companies Will More Often Develop Their Own Custom Tools

My Reaction: 100% agreed. I’ve already seen it happening, anecdotally, in places I’ve worked. However, I’ve also seen fairly extensive glue code being put together via “vibe” coding everywhere from Microsoft to Amazon and beyond. All for better or worse.

This is already happening at scale, and I’ve witnessed it firsthand. The traditional model of buying off-the-shelf solutions and adapting them is being replaced by rapid prototyping and custom development. Why? Because AI makes it faster and cheaper to build something tailored to your specific needs than to integrate and customize existing solutions.

But here’s the catch – we’re seeing a lot of “glue code” being generated. Not the elegant, well-architected solutions we’d hope for, but rather quick-and-dirty integrations that work for now but create technical debt for later. I’ve seen this pattern at Microsoft, Amazon, and other major tech companies where teams are rapidly prototyping solutions that work in the short term but lack the architectural rigor of traditional enterprise software.

The upside is innovation and speed. The downside is maintenance nightmares and the potential for significant refactoring down the road.

3. We’re in the VC Subsidized Phase of AI; Will Get More Expensive Like Uber + En-shittification

My Reaction: I concur with this point too, though it’s kind of odd that there’s an agree-or-disagree here at all, since it’s just the reality of the matter at the current time. Eventually the cost, even with reductions from efficiencies and such, will go up simply from the magnitude of what is being done. Efficiencies will only get us so far. The cost will eventually have to go up, just as the time spent on organization and coordination of said tooling usage will go up, even as what can be done grows exponentially. This entire prediction topic could, and needs to, be extensively expanded on.

This is the elephant in the room that everyone’s trying to ignore. Right now, we’re in the honeymoon phase where AI services are heavily subsidized by venture capital, similar to how Uber operated in its early days. The prices are artificially low to drive adoption and build market share.

But here’s the reality: the computational costs of running these models at scale are enormous. The energy consumption alone is staggering. As usage grows exponentially, the costs will have to follow. We’re already seeing early signs of this with API rate limits and pricing adjustments from major providers.

The “enshittification” aspect is particularly concerning. As these services become essential infrastructure, providers will have increasing leverage to extract more value. We’ll see feature degradation, increased lock-in, and pricing that reflects the true cost of the service rather than the subsidized rate.

This deserves its own deep dive post – the economics of AI infrastructure are going to fundamentally reshape how we think about software costs.

4. Junior Developers Will Become Senior Developers More Rapidly

My Reaction: Disagree and agree. The ramifications in the software engineering industry around this specific space are extensive. So much so that I’ll write an entirely new post just on this topic. It’s in the cooker, it’ll be ready soon!

This is a nuanced prediction that I have mixed feelings about. On one hand, AI tools are democratizing access to complex programming concepts. A junior developer can now generate sophisticated code patterns, implement complex algorithms, and work with technologies they might not have encountered before.

But here’s the critical distinction: there’s a difference between being able to generate code and being able to architect systems, debug complex issues, and make sound technical decisions under pressure. The latter requires experience, pattern recognition, and deep understanding that can’t be accelerated by AI alone.

I’m seeing a concerning trend where junior developers are being promoted based on their ability to produce working code quickly, but they lack the foundational knowledge to handle the inevitable problems that arise. This creates a dangerous gap in our industry.

The real question is: are we creating a generation of developers who can build but can’t maintain, debug, or evolve systems? This topic is so complex and important that it deserves its own dedicated post.

5. Existing Programming Languages Will Form a Hegemony

My Reaction: I mostly agree with this point. There may be some new languages that come up, along with languages that specifically allow agents to communicate with each other without the need for human-oriented languages and their error-prone inefficiencies.

The programming language landscape is consolidating around a few dominant players. Python, JavaScript, Java, and C# are becoming the de facto standards for most development work. This consolidation is driven by several factors: AI training data is heavily weighted toward these languages, tooling and ecosystem maturity, and the practical reality that most developers need to work with existing codebases.

However, I think we’ll see some interesting developments in agent-to-agent communication languages. As AI systems become more sophisticated, they may develop their own protocols and languages optimized for machine-to-machine communication rather than human readability. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The hegemony isn’t necessarily bad – it reduces fragmentation and makes it easier to find talent and resources. But it also risks stifling innovation and creating monocultures that are vulnerable to specific types of problems.

6. Value of Contrarian People Will Be Higher Than Yes Men

My Reaction: Agreed. Of course, those with a healthy dose of questions have always found a more useful path in society over time than the “yes men” type cowards. Politics in the US, of course, being an exception right now.

This prediction resonates deeply with me. In an environment where AI can generate code, documentation, and even architectural decisions at the click of a button, the ability to question, challenge, and think critically becomes exponentially more valuable.

The “yes men” who simply implement whatever is suggested without critical analysis are becoming obsolete. AI can do that job better and faster. What AI can’t do is ask the hard questions: “Is this the right approach?” “What are the long-term implications?” “How does this fit with our broader strategy?”

Contrarian thinking becomes a competitive advantage because it’s the one thing that AI can’t replicate – genuine skepticism and independent thought. The people who can look at AI-generated solutions and say “wait, this doesn’t make sense” or “we’re missing something important here” will become increasingly valuable.

This is especially true in technical leadership roles where the ability to make nuanced decisions and see around corners becomes critical.

7. Sheinification of Software (Software Will Become More Like Fast Fashion)

My Reaction: I agree that this will start to happen, as US-led capitalism tends toward a race to the bottom, sadly. With all the negatives of fast fashion (there are a lot), this will happen with AI-led software development too. Everything from enshittification to the environmental negatives is going to happen, and many in the industry can only do their best to mitigate the downsides.

This is perhaps the most concerning prediction on the list. The “Sheinification” of software refers to the trend toward disposable, quickly-produced software that follows the fast fashion model: cheap, trendy, and designed to be replaced rather than maintained.

We’re already seeing signs of this. AI makes it incredibly easy to generate new applications, features, and even entire systems. The barrier to entry is lower than ever, which means more software is being produced with less thought given to long-term sustainability.

The environmental impact is particularly troubling. The computational resources required to train and run AI models are enormous, and if we’re producing more disposable software, we’re essentially burning through resources for short-term gains.

The challenge for the industry is to resist this trend and maintain focus on building software that’s designed to last, evolve, and provide long-term value rather than quick wins.

8. Relative Value of Fostering Talent to Name Things Well Will Be More Important

My Reaction: An under-reported prediction, and a real NEED among skillsets. Using the right words at the right times for the right things in the right way is going to grow exponentially as a skillset need.

This is a subtle but profound prediction that I think is being overlooked. In an AI-driven development environment, the ability to name things well becomes critical because it directly impacts how effectively AI can understand and work with your code.

Good naming conventions, clear abstractions, and well-defined interfaces become the difference between AI that can effectively assist and AI that generates confusing, unmaintainable code. The people who can create clear, semantic naming schemes and architectural patterns will become incredibly valuable.

This extends beyond just variable names and function names. It includes the ability to create clear APIs, well-defined data models, and intuitive system architectures that both humans and AI can understand and work with effectively.

The irony is that as AI becomes more capable, the human skills around communication, clarity, and semantic design become more important, not less.
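To make the point concrete, here’s a small, hypothetical before-and-after in Java. The method names and the pricing logic are invented purely for illustration; the logic in both versions is identical, but only the second gives a human reviewer (or an AI assistant) the semantic hooks to build on it without guessing:

```java
// A hypothetical before-and-after: identical logic, very different clarity.
public class Naming {
    // Before: opaque names force the reader (and the model) to
    // reverse-engineer the intent from the arithmetic.
    static double calc(double a, double b, double c) {
        return a * b * (1 - c);
    }

    // After: the names carry the domain meaning, so generated code,
    // reviews, and future changes can build on it without guessing.
    static double discountedLineTotal(double unitPrice, double quantity, double discountRate) {
        return unitPrice * quantity * (1 - discountRate);
    }

    public static void main(String[] args) {
        // 10.0 * 3.0 * (1 - 0.1) = 27.0
        System.out.println(discountedLineTotal(10.0, 3.0, 0.1)); // prints 27.0
    }
}
```

Feed either version to an LLM and ask it to “add tax handling,” and the well-named one is far more likely to come back with something sensible.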

9. Optimize LLM for Specific Use Cases

My Reaction: Not sure this is a prediction, it’s already happening.

This is already well underway. We’re seeing specialized models for coding (GitHub Copilot, Cursor), for specific domains (legal, medical, financial), and for particular tasks (code review, documentation generation, testing).

The trend toward specialization makes sense from both a performance and cost perspective. A general-purpose model trying to be good at everything will inevitably be mediocre at most things. Specialized models can be optimized for specific use cases, leading to better results and more efficient resource usage.

We’re also seeing the emergence of model composition, where different specialized models work together to handle complex tasks. This is likely to continue and accelerate as the technology matures.

10. Companies Will Die Faster Because We Can Replicate Functionality Faster

My Reaction: Agreed.

This is a natural consequence of lowered barriers to entry. If AI makes it easier and faster to build software, then it also makes it easier and faster to replicate existing functionality. This creates a more competitive landscape where companies need to move faster and innovate more aggressively to maintain their competitive advantage.

The traditional moats around software companies – technical complexity, development time, specialized knowledge – are being eroded by AI. What used to take months or years to build can now be prototyped in days or weeks.

This isn’t necessarily bad for consumers, who will benefit from more competition and faster innovation. But it’s challenging for companies that rely on technical barriers to entry as their primary competitive advantage.

The companies that survive will be those that can move fastest, adapt most quickly, and find new ways to create value beyond just technical implementation.

11. Existence of Non-Technical Managers Will Decrease

My Reaction: I’m doubtful of this. If anything, the use of AI will lower overall technical ability, and will cause significant issues around troubleshooting, as those using the tooling to gloss over deep knowledge will lack the necessary depth.

I’m skeptical of this prediction. While AI might make it easier for non-technical people to generate code, it doesn’t necessarily make them better at managing technical teams or making technical decisions.

In fact, I think we might see the opposite trend. As AI tools become more accessible, we might see more people in management roles who can generate code but lack the deep technical understanding needed to make sound architectural decisions or troubleshoot complex issues.

The real challenge will be ensuring that technical managers have both the AI-assisted productivity tools and the foundational knowledge needed to make good decisions. Simply being able to generate code doesn’t make someone a good technical leader.

12. Vibe Code Will Cause a Return to Small Teams with Microservices

My Reaction: Agree and disagree, in that order.

I agree that vibe coding will drive architectural changes, but I’m not sure microservices is the inevitable result. The challenge with vibe-coded systems is that they’re often built without a clear understanding of the underlying architecture, which can lead to tightly coupled, monolithic systems that are hard to maintain.

However, the trend toward microservices might be driven more by the need to isolate failures and limit the blast radius of bugs in vibe-coded systems. If you can’t trust the code quality, you need to architect around that uncertainty.

The disagreement comes from the fact that microservices also require significant architectural discipline and understanding, which might be at odds with the vibe coding approach. We might see a different architectural pattern emerge that’s better suited to AI-assisted development.

13. Software Will Become a Living Conversation, Not a Static Thing

My Reaction: Agree. The speed increase will make development a more dynamic conversation in many projects, for some products and some services.

This is already happening in many development environments. The traditional model of writing code, testing it, and deploying it is being replaced by a more iterative, conversational approach where developers work with AI to continuously refine and improve their systems.

The speed of iteration is increasing dramatically. What used to take days or weeks can now happen in hours or minutes. This allows for more experimentation, faster feedback loops, and more responsive development processes.

However, this also creates challenges around version control, testing, and deployment. If software is constantly evolving, how do you ensure stability and reliability? How do you manage the complexity of systems that are always changing?

14. Website Search Will No Longer Be Relevant in ~3 Years

My Reaction: Is it now?

This prediction seems to assume that website search is currently highly relevant, which I’m not sure is the case. Traditional web search has been declining in relevance for years as content has moved to social media, apps, and other platforms.

The rise of AI-powered search and information retrieval might accelerate this trend, but I think the real question is whether website search was ever as relevant as we thought it was. The future of information discovery is likely to be more conversational and contextual, driven by AI rather than traditional keyword-based search.

15. LLMs Will Be Software and Replace Stacks

My Reaction: I’m not sure this isn’t the way it is already. The context, case, and specificity aren’t really clear here.

This prediction is a bit vague, but I think it’s referring to the idea that LLMs might become the primary interface for interacting with software systems, potentially replacing traditional APIs and user interfaces.

We’re already seeing early signs of this with AI-powered interfaces that can understand natural language and translate it into system actions. The question is whether this will extend to the point where traditional software stacks become obsolete.

I’m skeptical that this will happen completely, but I do think we’ll see more AI-native interfaces and interactions that make traditional software feel more conversational and intuitive.

16. (Software) Libraries Will Become Less Relevant

My Reaction: Agreed.

As AI becomes more capable of generating code from scratch, the need for pre-built libraries and frameworks may decrease. Why use a library when you can have AI generate exactly what you need, tailored to your specific use case?

This trend is already visible in some areas where developers are using AI to generate custom implementations rather than pulling in external dependencies. The benefits include reduced dependency management, smaller bundle sizes, and more control over the implementation.

However, this also means losing the benefits of community-maintained, battle-tested code. The challenge will be finding the right balance between custom generation and proven libraries.

17. In the Future Ads Will Become Even More Precise; LLMs Will Have More Info for Targeting

My Reaction: Agreed. I hate this.

This is perhaps the most dystopian prediction on the list. As LLMs become more sophisticated and have access to more personal data, they’ll be able to create incredibly targeted and persuasive advertising that’s tailored to individual users’ psychology, preferences, and vulnerabilities.

The privacy implications are enormous. We’re already seeing early signs of this with AI-powered ad targeting that can analyze user behavior and create personalized content. As the technology improves, this will become even more sophisticated and invasive.

This is a trend that I find deeply concerning from both a privacy and societal perspective. The ability to manipulate individuals through highly targeted, AI-generated content represents a significant threat to autonomy and informed decision-making.

18. There Will Be a Standardization of Information Architecture Which Will Allow Faster Iteration of Tooling

My Reaction: Doubtful. If humanity and industry hasn’t done this already I see no reason we’ll do it now.

This prediction assumes that we’ll finally achieve the standardization that we’ve been trying to implement for decades. While AI might make it easier to work with standardized formats and protocols, I’m skeptical that it will drive the kind of widespread adoption needed for true standardization.

The history of technology is full of failed standardization attempts. Even when standards exist, they’re often ignored or implemented inconsistently. AI might make it easier to work with existing standards, but it won’t necessarily create the political and economic incentives needed for widespread adoption.

19. LLMs Will Cause a Dearth of New Innovation

My Reaction: I fear that this could happen, in some ways. But in other ways I think humanity can, and will be forced to – with the culmination of AI, the toxic immolation of democracies, and the other horrors facing the world right now – innovate and change, hopefully for the better, in ways we don’t even grasp yet, with these massive triggers affecting us.

This is a complex prediction that touches on fundamental questions about human creativity and innovation. On one hand, if AI can generate solutions to most problems, there might be less incentive for humans to engage in the kind of deep, creative thinking that leads to breakthrough innovations.

On the other hand, the challenges we’re facing as a society – climate change, political instability, economic inequality – are so profound that they might force innovation in ways we can’t currently imagine. The combination of AI capabilities and existential threats might actually accelerate innovation rather than stifle it.

The key question is whether AI will augment human creativity or replace it. I’m optimistic that it will be the former, but it’s not guaranteed.

20. AIs Will Invent Own Programming Language

My Reaction: Agreed. I believe to some degree they already have.

This is already happening in subtle ways. AI systems are developing their own internal representations and communication protocols that are optimized for machine-to-machine interaction rather than human readability.

As AI systems become more sophisticated and need to work together, they’ll likely develop more formal languages and protocols for communication. These won’t replace human programming languages, but they’ll exist alongside them for specific use cases.

The interesting question is whether these AI-invented languages will be more efficient or expressive than human-designed languages, and whether humans will eventually adopt them for certain types of programming tasks.

21. Some Countries Will Make AI Access a Universal Right

My Reaction: Agreed. While others will block it, ban it, or control and shape it to further attack and rewrite narratives the world accepts as positives.

This prediction reflects the growing recognition that AI access is becoming a fundamental requirement for participation in modern society. Just as internet access has become essential for education, employment, and civic participation, AI access is following the same trajectory.

Some countries will embrace this and provide universal access to AI tools and services, recognizing it as a public good. Others will restrict access, either for political reasons or to maintain control over information and communication.

The geopolitical implications are significant. Countries that provide universal AI access will have a competitive advantage in education, innovation, and economic development. Those that restrict it will fall behind.

22. Languages Used for Strengths More Than as a Panacea

My Reaction: “Languages used for strengths more than as a panacea.” I added this one. It’s more a hope than a prediction. For example, I hope Go is used for its strengths, Rust for its strengths, Java, C#, etc. Instead of trying to make C# or Rust the panacea across all platforms and all needs. One can hope!

This is my addition to the list, and it’s more of a hope than a prediction. In an AI-driven development environment, there’s a risk that we’ll default to whatever language the AI is most comfortable with, rather than choosing the right tool for the job.

I hope that AI will actually help empower us to make better language choices by understanding the strengths and weaknesses of different languages and recommending the most appropriate one for each use case. Go for its concurrency and simplicity, Rust for its safety and performance, Java for its enterprise ecosystem, C# for its Microsoft integration, and so on.

The goal should be to use each language for what it does best, rather than trying to make one language solve every problem. AI could actually help us achieve this by providing better guidance on language selection and architecture decisions.

The Big Picture

These 22 predictions, which all of us participants conjured up, paint a picture of an industry in rapid transformation. Some trends are already visible, others are still emerging. The common thread is that AI is fundamentally changing how we think about software development, from the tools we use to the way we organize teams and make decisions – all for better or worse.

The challenge for the industry is to navigate these changes thoughtfully, preserving what’s valuable about traditional software development while embracing the opportunities that AI presents. The predictions that concern me most are those that suggest a race to the bottom in terms of quality, sustainability, and long-term thinking.

The predictions that excite me most are those that suggest AI will augment human capabilities rather than replace them, enabling us to be more creative and experimental in our solutions while preserving the critical thinking and problem-solving skills that make good developers valuable.

As we move forward, the key will be maintaining our focus on building software that’s not just functional, but sustainable, maintainable, and valuable in the long term. AI can help us build faster, but it can’t replace the judgment and wisdom needed to build well.


What are your thoughts on these predictions? Which ones resonate with your experience, and which ones seem off-base? I’d love to hear your perspective on where you think AI is taking our industry.

I’m on Mastodon https://metalhead.club/@adron, Threads https://www.threads.com/@adron, and Bluesky https://bsky.app/profile/adron.bsky.social – hit me up with your thoughts!

An Approach to Learning Java Fast for New & Experienced Polyglot Coders

If you’re just starting out learning Java, here are the top 5 things you can do to get started effectively:

  1. Set Up Your Development Environment:
    • Install the Java Development Kit (JDK) on your computer. You can download it from the official Oracle website or use an open-source distribution like OpenJDK. It can be confusing at first because there are a bunch of versions you *could* start with, depending on a million different variables; just pick the latest though and get going.
    • Choose a code editor or Integrated Development Environment (IDE) like Eclipse, IntelliJ IDEA, or Visual Studio Code to write and run your Java code. IDEs offer features like code completion and debugging, which can be very helpful for beginners.
  2. Learn the Basics of Java:
    • Start with the fundamental concepts of Java, such as variables, data types, operators, and control structures (if statements, loops).
    • Understand the object-oriented programming (OOP) principles that Java is based on, including classes, objects, inheritance, encapsulation, and polymorphism. Even if you don’t use all of these and you’re taking a different approach (functional, top-down, etc.), it’s really important to at least learn and understand the OOP concepts and capabilities of Java. At some point you will see these, and to understand what’s going on, you’ll need to understand this part of Java.
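The basics above are quick to see in actual code. Here’s a minimal, self-contained sketch touching variables, control structures, and the core OOP ideas; the class names are just illustrative:

```java
// A minimal tour of Java basics: variables, control structures, and the
// core OOP ideas (classes, encapsulation, inheritance, polymorphism).
public class Basics {
    static class Animal {
        private final String name;           // encapsulated state
        Animal(String name) { this.name = name; }
        String speak() { return name + " makes a sound"; }
    }

    static class Dog extends Animal {        // inheritance
        Dog(String name) { super(name); }
        @Override
        String speak() { return "Woof"; }    // polymorphism via override
    }

    public static void main(String[] args) {
        int total = 0;                       // variable with a primitive data type
        for (int i = 1; i <= 5; i++) {       // loop
            if (i % 2 == 0) {                // if statement
                total += i;
            }
        }
        System.out.println("even sum: " + total);  // prints "even sum: 6"

        Animal pet = new Dog("Rex");         // a Dog through an Animal reference
        System.out.println(pet.speak());     // prints "Woof"
    }
}
```

Save it as Basics.java, compile with `javac Basics.java`, and run with `java Basics` – that compile-then-run loop is itself one of the first Java habits to build.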
Continue reading “An Approach to Learning Java Fast for New & Experienced Polyglot Coders”

In-memory Orchestrate Local Development Database

I was talking with Tory Adams @BEZEI2K about working with Orchestrate‘s Services. We’re totally sold on what they offer and are looking forward to a lot of the technology that is in the works. The day to day building against Orchestrate is super easy, and setting up collections for dev or test or whatever is so easy nothing has stood in our way. Except one thing…

Every once in a while we have to work disconnected. Whatever the reason might be: Comcast cable goes out, we decide to jump on a train, or one of us ends up on one of those Q400 puddle jumpers that doesn’t have wifi! But regardless of being disconnected from wifi, cable, or internet connectivity, we still want to be able to code and test!

In-Memory Orchestrate Wrapper

Enter the idea of creating an in-memory Orchestrate database wrapper. Using something like convict.js, one could easily redirect all the connections as necessary when developing locally. That way development continues right along, and when the application is pushed live, it’s redirected to the appropriate Orchestrate connections and keys!

This in-memory “fake” or “mock” would need to have the key-value, events, and graph store set up just like Orchestrate. With this in memory, one could also easily write tests against a real fake and be able to test connected or disconnected without mocking. Not to say that’s a good or bad idea, but one more tool in the tool chest doesn’t hurt!
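To sketch the idea, here’s a tiny Java stand-in for the key-value portion. The interface and names are hypothetical, not Orchestrate’s actual client API; the point is that a convict.js-style config switch could pick this implementation when offline and the real client when live:

```java
import java.util.HashMap;
import java.util.Map;

// A rough sketch of an in-memory key-value store mimicking the shape of a
// collection-based store. Hypothetical interface, not Orchestrate's real API.
public class InMemoryStore {
    interface KeyValueStore {
        void put(String collection, String key, String value);
        String get(String collection, String key);
    }

    static class InMemoryKeyValueStore implements KeyValueStore {
        // collection name -> (key -> value)
        private final Map<String, Map<String, String>> data = new HashMap<>();

        public void put(String collection, String key, String value) {
            data.computeIfAbsent(collection, c -> new HashMap<>()).put(key, value);
        }

        public String get(String collection, String key) {
            Map<String, String> coll = data.get(collection);
            return coll == null ? null : coll.get(key);
        }
    }

    public static void main(String[] args) {
        KeyValueStore store = new InMemoryKeyValueStore();
        store.put("users", "adron", "{\"city\":\"Vancouver\"}");
        System.out.println(store.get("users", "adron"));
    }
}
```

Events and the graph store would need similar stand-ins, but even just the key-value piece would unblock a lot of disconnected development.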

If something like this doesn’t pop up in the next week or three, I might just have to kick off this project myself! If anybody is interested please reach out to me and let’s discuss! I’m open to writing it in JavaScript, C#, Java or whatever poison pill you’d prefer. (I’m polyglot, so as to not limit my options!!)

Other Ideas, Development Shop Swap

Another idea that I’ve been pondering is setting up a development shop swap. I’ll leave the reader to determine what that means!  😉  Feel free to throw down ideas that this might bring up and I’ll incorporate that into the soon to be implementation. I’ll have more information about that idea right here once the project gets rolling. In the meantime, happy coding!

Architectural PaaS Cracks or Crack PaaS

Over the last couple of years, two prominent open source PaaS solutions have come onto the market: Cloud Foundry & OpenShift. There’s been a lot of talk about these plays and the talk has slowly but steadily turned into traction. Large enterprises are picking these up and giving their developers and operations staff a real chance to make changes. Sometimes disruptive in a very good way.

However, with all the grandeur I’m going to hit on the negatives. These are the missing parts, the serious pain points beyond just some little deployment nuisance. Then a last note on why, even amidst the pain points, you still need to make real movement with PaaS tooling and technologies.

Negative: The Data Story is Lacking

Both Cloud Foundry and OpenShift have a way to plug into databases easily.

Cloud Foundry provides ways to build a Cloud Foundry Service that becomes the bound and hooked-in SQL Server, MySQL, PostgreSQL, Redis, or whatever data storage service you need. For more details on building a service, check out the echo example on the vcap sample github project.

OpenShift has what are called Cartridges which provide the ability to add databases and other services into the system. For more information about the cartridges check out Red Hat’s OpenShift Documentation and also the forums.

Cloud Foundry and OpenShift, however, have distinctive weak spots when it comes to services that go beyond a mere single-instance database. In the case of a true distributed database such as Cassandra, HBase, or Riak, it is inordinately difficult to integrate in a way that any PaaS interoperates with well. In some cases it’s irrelevant to even try.

The key problem is that both of the PaaS systems assume the mantle of master while subjugating the distributed database to a lower tier of coordination. The way to resolve this at the moment is to do an autonomous installation of Riak, Cassandra, Neo4j, or another database that may be distributed, hot-swappable, or otherwise spread across multiple machines or instances. Then create a bound connection between it and the PaaS application that is hosted. This is the big negative in PaaS systems and tooling right now: the data story just doesn’t expand well to the latest in data and database technologies. I’ll elaborate more about this below.
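As a rough sketch of how that bound connection gets wired up: Cloud Foundry exposes bound services to an application through the VCAP_SERVICES environment variable, and for an autonomously installed database you fall back to your own configuration. The DB_HOST variable name and the placeholder parsing below are assumptions for illustration, not a definitive implementation:

```java
// A sketch of resolving a database host: prefer a PaaS-bound service
// (Cloud Foundry's VCAP_SERVICES env var), then an app-specific env var
// for an autonomously installed database, then a local default.
// DB_HOST and the placeholder return value are illustrative assumptions.
public class ServiceBinding {
    static String resolveDatabaseHost(String vcapServices, String dbHostVar, String fallback) {
        if (vcapServices != null && !vcapServices.isEmpty()) {
            // In a real app you'd parse this JSON and pull out the
            // credentials block for the bound service.
            return "(parse host from VCAP_SERVICES)";
        }
        return dbHostVar != null ? dbHostVar : fallback;
    }

    public static void main(String[] args) {
        String host = resolveDatabaseHost(
                System.getenv("VCAP_SERVICES"),   // set by the PaaS when a service is bound
                System.getenv("DB_HOST"),         // your own var for an autonomous install
                "localhost");                     // local development default
        System.out.println("database host: " + host);
    }
}
```

The point of the layering is that the application code stays identical whether the database is a PaaS-managed service or an autonomous Riak/Cassandra cluster sitting next to it.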

Negative: Deployment is Sometimes Easy, Maintenance is Sometimes Hard

Cloud Foundry is extremely rough to deploy unless you use Bosh to deploy to either VMware virtualized instances or AWS. Now, you could, if resources were available, get Bosh to deploy your Cloud Foundry environment anywhere you wanted. However, that's not easy to do. Bosh is still a bit of a black box. I, along with others in the community, am working to document Bosh, but it is slow going.

OpenShift is dramatically easier to deploy, but once deployed it is missing a few key pieces that add some operational overhead. One of those is that OpenShift requires more networking management to handle routing between the various parts of the PaaS ecosystem.

Overall, this boils down to what you need between the two PaaS tool chains. If you want Cloud Foundry's automatic routing and management between nodes, it's a viable route; but if your team wants to manage the networking tier more autonomously from the PaaS environment, then maybe OpenShift is the way to go. In the end, it's bumpy territory to determine which you may or may not want based on that.

Negative: Full Spectrum Polyglot, Missing Some

Cloud Foundry has a wider selection of languages and frameworks, with community involvement from groups like Iron Foundry. OpenShift, I'm sure, will be getting to parity in the coming months. I have no doubt that both of these PaaS ecosystems will expand to new languages and frameworks over time. Being polyglot, after all, is a no-brainer these days!

Why PaaS Is, IMHO, Still Vitally Important

First, toss out the idea that huge, web-scale Facebooks and Googles need to be built. Think about what the majority of developers out there in the world work on: tons and tons of legacy or greenfield enterprise applications. Sometimes the developer is lucky enough to work on a full vertical mix of things for a small business, but generally, the standard developer in the world is working on an enterprise app.

PaaS tooling takes the vast majority of that enterprise app maintenance from an operational side and tosses it out. Instead of managing a bunch of servers with a bunch of different apps the operations team manages an ecosystem that has a bunch of apps. This, for the enterprises that have enough foresight and have managed their IT assets well enough to be able to implement and use PaaS tooling, is HUGE!

For companies working to stay relevant in the enterprise, for companies looking to make inroads into the enterprise and especially for enterprises that are looking to maintain, grow or struggling to keep ahead of the curve – PaaS tooling is something that is a must have.

Just ask a dev: do they want to spend a few hours configuring and testing a server, or do they want to deploy their application and focus on building more value into it?

…having spent a few years being the developer, I'll hedge on the side of adding value.

What’s Next?

So what’s next? Two major things in my opinion.

1. Fill the data gap. Most of the PaaS tooling needs to bridge the gap with the data story. I'm doing my part with testing, development, and efforts to get real options built into these environments, but this often leads back to the weak data story of PaaS. What's the solution here? Talks and planning sessions are ongoing, and we'll eventually get a solid solution around the data side.

2. Fix deployments & deployment management. Bosh isn't straightforward or obvious in what it does, and Cloud Foundry is easily the hardest thing to deploy, with many dependencies. OpenShift is easier to deploy, but neither of them actually has a solid long-term management story. Bosh does some impressive updates of Cloud Foundry, and OpenShift has some upgrade methods, but over time and during day-to-day operations there haven't been any clear-cut wins for viewing, monitoring, and managing nodes and data within these environments.

Write the Docs, Proper Portland Brew, Hack n’ Bike and Polyglot Conference 2013

I just wrapped up a long weekend of staycation. Monday kicked off Write the Docs this week and today, Tuesday, I’m getting back into the saddle.

Write the Docs

The Write the Docs Conference this week, a two day affair, has kicked off an expanding community around document creation. This conference is about what documentation is, how we create documentation as technical writers, writers, coders and others in the field.

Not only is it about those things, it is also about how people interact and why documentation is needed in projects. This is one of the things I find interesting: it seems obvious, but is entirely not obvious because of the battle between good documentation, bad documentation, or a complete lack of documentation. The latter being the worst situation.

The Bloody War of Documentation!

At this conference it has been identified that the ideal documentation scenario is one where building the documentation starts before any software is even built. I do and don't agree with this, because I know we must avoid BDUF (Big Design Up Front). But we must take this idea of documentation first in the appropriate context of how we're speaking about documentation at the conference. Just as tests & behaviors identified up front, before the creation of the actual implementation, are vital to solid, reliable, consistent, testable & high quality production software, good documentation is absolutely necessary.

There are some situations, the exceptions, such as with agencies that create software in which the software is throwaway. I'm not, and I don't think much of the conference is, talking about those types of systems. What we've been speaking about at the conference is the systems, or ecosystems, in which software is built, maintained, and used for many years. We're talking about the APIs that are built and then used by dozens, hundreds, or thousands of people. Think of Facebook, Github, and Twitter. All of these have APIs that thousands upon thousands use every day. They're successful in large part, extremely so, because of stellar documentation. In the case of Facebook, there's some love and hate to go around, because they've gone between good documentation and bad documentation. However, whenever it has been reliable, developers have moved forward with these APIs and built billion dollar empires that employ hundreds of people and benefit thousands of people beyond that.

As the developers who have been speaking at the conference, the developers in the audience, and this developer too all tend to agree: build that README file before you build a single other thing within the project. Keep that README updated, keep it marked up and easy to read, and make sure people know what your intent is as best you can. Simply put, document!

You might also snarkily ask: does Write the Docs have docs? Why yes, it does:

http://docs.writethedocs.org/ <- Give 'em a read, they're solid docs.

Portland Proper Brew

Today, while using my iPhone to catch up on news & events from my staycation, I took a photo. On that photo I used Stitch to put together some arrows. Kind of a Portland Proper Brew (PPB) with documentation (see what I did there!). It exemplifies a great way to start the day.

Every day I bike (or ride the train or bus) in to downtown Portland, anywhere from 5-9 kilometers, and swing into Barista on 3rd. Barista is one of the finest coffee shops in Portland & the world. If you don't believe me, drag your butt up here and check it out. Absolutely stellar baristas, the best coffee (Coava, Ritual, Sightglass, Stumptown & others), and pretty sweet digs to get going in the morning.

I'll have more information soon on a new project I've kicked off. Right now it's called Bike n' Hack, which will be a scavenger-style code hacking & bicycle riding urban awesome game. If you're interested in hearing more about the project, the game & how everything will work, be sure to contact me via Twitter @adron or jump into the bike n' hack github organization, and the team will be adding more information about who, what, where, when, and why this project is going to be a blast!

Polyglot Conference & the Zombie Apocalypse

I’ll be teaching a tutorial, “Introduction to Distributed Databases”, at Polyglot Conference in Vancouver in May! So it has begun & I’m here for you! Come and check out how to get a Riak deployment running in your survival bunker’s data center. Whether zombies or just your pointy-haired boss is your apocalypse scenario, we’ll discuss how consistent hashing, hinted handoff, and gossiping can help your systems survive infestations! Here’s a basic outline of what I’ll cover…

Introducing Riak, a database designed to survive the Zombie Plague. Riak Architecture & 5 Minute History of Riak & Zombies.

Architecture deep dive:

  • Consistent Hashing, managing to track changes when your kill zone is littered with Zombies.
  • Intelligent Replication, managing your data against each of your bunkers.
  • Data Re-distribution, sometimes they overtake a bunker, how your data is re-distributed.
  • Short Erlang Introduction, a language fit for managing post-civil society.
  • Getting Erlang
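The consistent hashing bullet above can be sketched in a few lines of Python. This is a toy ring for illustration, not Riak's actual implementation (Riak uses a fixed-size 160-bit ring divided into partitions), but the core idea of hashing both nodes and keys onto the same ring is the same:

```python
import bisect
import hashlib

class HashRing:
    """A toy consistent hash ring: node tokens and keys map onto the
    same ring, and each key belongs to the next token clockwise."""
    def __init__(self, nodes, vnodes=8):
        self.ring = []  # sorted list of (position, node)
        for node in nodes:
            for i in range(vnodes):
                pos = self._hash(f"{node}-{i}")
                self.ring.append((pos, node))
        self.ring.sort()

    @staticmethod
    def _hash(key):
        # SHA-1 gives a stable, well-spread position on the ring.
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        pos = self._hash(key)
        # Find the first token at or after the key's position,
        # wrapping around to the start of the ring if needed.
        idx = bisect.bisect(self.ring, (pos, "")) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["bunker-a", "bunker-b", "bunker-c"])
print(ring.node_for("survivor:42"))
```

Because each node owns only the ring segments ending at its tokens, losing a node (a bunker overrun by zombies) only remaps the keys that node owned; everything else stays put.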

Installing Riak on…

  • Ubuntu, RHEL & the Linux variety.
  • OS-X, the only user-centered computers to survive the apocalypse.
  • From source, maintained and modernized for humanity’s survival.
  • Upgrading Riak, when a bunker is retaken from the zombies, it’s time to update your Riak.
  • Setting up

Devrel – A developer’s machine w/ Riak – how to manage without zombie bunkers.

  • 5 nodes, a basic cluster
  • Operating Riak
  • Starting, stopping, and restarting
  • Scaling up & out
  • Managing uptime & data integrity
  • Accessing & writing data

Polyglot client libraries

  • JavaScript/Node.js & Erlang for the zombie curing mad scientists.
  • C#/.NET & Java for the zombie creating corporations.
  • Others, for those trying to just survive the zombie apocalypse.

If you haven’t registered for the Polyglot Conference yet, get registered ASAP as it could sell out!

Some of the other tutorials that are happening, that I wish I could clone myself for…

That’s it for updates right now, more code & news later. Cheers!