Polyglot Conference Vancouver 2024: Real Talk About AI, Industry Hubris, and the Art of Unconferencing

Just got back from another incredible Polyglot Conference in Vancouver, and I’m still processing everything that went down. There’s something magical about this event – it’s not your typical conference with polished presentations and vendor booth nonsense. It’s an unconference, which means the real magic happens in the conversations, the debates, and the genuine human connections that form when you put a room full of smart, opinionated developers together and let them talk about what actually matters.

The People Make the Conference

It was excellent to meet so many new people and catch up with friends I’ve not gotten to see in some time! This is what makes Polyglot special – it’s not just about the content, it’s about the community. I found myself in conversations with developers from startups to enterprises, from different countries and backgrounds, all bringing their unique perspectives to the table.

There’s something refreshing about being in a room where everyone is there because they genuinely want to be there, not because their company sent them or because they’re trying to sell something. The conversations flow naturally, the questions are real, and the debates are substantive. No one’s trying to impress anyone with buzzwords or corporate speak (albeit we’ll often laugh our asses off at the nonsense of corp speak and marketecture).

I caught up with folks I hadn’t seen since before the pandemic, met new faces who are doing interesting work, and had those serendipitous hallway conversations that often lead to the most valuable insights. The kind of conversations where you’re still talking an hour later, completely forgetting that there’s a scheduled session happening somewhere else.

The Unconference Format: Getting to the Heart of Things

The sessions were, as always with an unconference, jam-packed with content, and when we dove in we got to the heart of the topics real quick. This is the beauty of the unconference format – there’s no time for fluff or corporate posturing. People show up with real problems, real experiences, and real opinions, and we get straight to the point.

Unlike traditional conferences where you sit through 45-minute presentations that could have been 15-minute talks, unconference sessions are dynamic and responsive. Someone brings up a topic, the group decides if it’s worth exploring, and then we dive deep. If the conversation isn’t going anywhere, we pivot. If it’s getting interesting, we keep going. The format respects everyone’s time and intelligence.

The sessions I participated in covered everything from microservices architecture to team dynamics in the face of agentic AI tooling, from introspecting databases with AI to the future of programming languages in an AI-driven world. But the most compelling discussions were around AI – not the hype, not the marketing, but the real-world implications of what we’re building and how it’s changing our industry – for better or worse – and there’s a lot of expectation it’s bringing plenty of the latter.

Coping with AI: The Real Talk

Some of the talks included coping with AI, and just the general insanity that surrounds the technology and the hubris of the industry right now. This is where things got really interesting, because we weren’t talking about AI in the abstract or as some distant future possibility. We were talking about it as a present reality that’s already reshaping how we work, think, and build software.

The “coping with AI” discussion was particularly revealing. We’re not talking about how to use AI tools effectively – that’s the easy part. We’re talking about how to maintain our sanity and professional integrity in an industry that’s gone completely off the rails with AI hype and magical thinking.

The Insanity of AI Hype

The insanity surrounding AI right now is breathtaking. Every company is trying to cram AI into every product, whether it makes sense or not. We’re seeing AI-powered toasters and AI-enhanced paper clips – products with a simple boolean operation that burn through tokens to make a yes-or-no decision. Utter madness on that front: that’s like half a tree burned up, a windmill rotation, or a chunk or two of coal just to flip a light switch! The technology has become a solution in search of problems, and the industry is happy to oblige with increasingly absurd use cases.

But the real insanity isn’t the over-application of the technology – it’s the way we’re talking about it. AI is being positioned as the solution to every problem, the answer to every question, the future of everything. It’s not just a tool, it’s become a religion. And like any religion, it’s creating true believers who can’t see the limitations, the risks, or the unintended consequences. Maybe “cult” should be added to the “religion” moniker?

The conversations at Polyglot were refreshing because they cut through this hype. We talked about the real limitations of AI, the actual problems it creates (holy bananas there are a lot of them), and the genuine challenges of working with these systems in production. No one was trying to sell anyone on the latest AI miracle – we were trying to understand what’s actually happening and how to deal with it. Simply put: what’s our day-to-day action plan to mitigate these problems, and what do we do when the hubris and house of cards comes crumbling down? After all, much of the world’s economy hinges on AI becoming all the things! Nuts!

The Hubris of the Industry

The hubris of the industry right now is staggering. We’re building systems that we don’t fully understand, deploying them at scale, and then acting surprised when they don’t work as expected. The confidence with which people make claims about AI capabilities is matched only by the lack of evidence supporting those claims.

I heard stories from developers who are being asked to implement AI solutions that don’t make technical sense, from managers who think AI can replace human judgment, and from executives who believe that throwing more AI at a problem will automatically make it better. The disconnect between what AI can actually do and what people think it can do is enormous.

The hubris extends beyond just the technology to the way we’re thinking about the future. There’s this assumption that AI will solve all our problems, that it will make us more productive, that it will create a better world. But we’re not asking the hard questions about what we’re actually building, who it serves, and what the long-term consequences might be.

The Real Challenges

The real challenges of working with AI aren’t technical – they’re human. How do you maintain code quality when your team is generating code they don’t fully understand? How do you make architectural decisions when the tools can generate solutions faster than you can evaluate them? How do you maintain professional standards when the industry is racing to the bottom in terms of quality and sustainability?

These are the questions that kept coming up in our discussions. Not “how do I use ChatGPT to write better code” but “how do I maintain my professional integrity in an environment where AI is being used to cut corners and avoid hard thinking?”

The conversations were honest and sometimes uncomfortable. People shared stories of being pressured to use AI in ways that didn’t make sense, of watching their colleagues become dependent on tools they didn’t understand, and of struggling to maintain quality standards in an environment that prioritizes speed over everything else.

The Path Forward

The most valuable part of these discussions wasn’t just identifying the problems – it was exploring potential solutions. How do we maintain our professional standards while embracing the benefits of AI? How do we educate our teams and our organizations about the real capabilities and limitations of these tools? How do we build systems that are both powerful and maintainable?

The consensus seemed to be that we need to be more thoughtful about how we integrate AI into our work. Not as a replacement for human judgment, but as a tool that augments our capabilities. Not as a way to avoid hard problems, but as a way to tackle them more effectively.

We also need to be more honest about the limitations and risks. The industry’s tendency to oversell AI capabilities is creating unrealistic expectations and dangerous dependencies. We need to have more conversations about what AI can’t do, what it shouldn’t do, and what the consequences might be when it’s used inappropriately.

The Value of Real Conversation

What struck me most about these discussions was how different they were from the typical AI conversations you hear at other conferences. There was no posturing, no trying to impress anyone with the latest buzzwords, no corporate speak about “digital transformation” or “AI-first strategies”.

Instead, we had real conversations about real problems with real people who are dealing with these issues every day. People shared their failures as well as their successes, their concerns as well as their optimism, their questions as well as their answers.

This is the value of the unconference format and the Polyglot community. It creates a space where people can be honest about what’s actually happening, where they can ask the hard questions, and where they can explore ideas without the pressure to conform to industry narratives or corporate agendas.

Looking Ahead

As I reflect on the conference, I’m struck by how much the industry has changed since the last time I was at Polyglot. AI has gone from being a niche topic to dominating every conversation. The questions we’re asking have shifted from “what is AI?” to “how do we live with AI?” and “how do we maintain our humanity in an AI-driven world?”

The conversations at Polyglot give me hope that we can navigate this transition thoughtfully. Not by rejecting AI or embracing it uncritically, but by engaging with it honestly and maintaining our professional standards and human values.

The industry needs more spaces like this – places where people can have real conversations about real problems without the hype, the marketing, or the corporate agenda getting in the way. Places where we can explore the hard questions and work together to find better answers.

The Takeaway

The biggest takeaway from Polyglot this year is that we’re at a critical juncture. The AI revolution isn’t coming – it’s here. And the choices we make now about how we integrate these tools into our work, our teams, and our industry will shape the future of software development for decades to come.

We can either let the hype and hubris drive us toward a future where software becomes disposable, quality becomes optional, and human judgment becomes obsolete. Or we can choose a different path – one where AI augments our capabilities without replacing our humanity, where we maintain our professional standards while embracing new tools, and where we build systems that are both powerful and sustainable.

The conversations at Polyglot suggest that there are people in the industry who are choosing the latter path. People who are thinking critically about AI, asking the hard questions, and working to build a future that serves human needs rather than corporate interests.

That gives me hope. And it makes me even more committed to being part of these conversations, to asking the hard questions, and to working with others who are trying to build a better future for our industry.

The Polyglot (Un)Conference, and (un)conference-like events in general, continue to be among the most valuable events in the software development community. If you’re looking for real conversations about real problems with real people, I can’t recommend it highly enough.

The conference was such a good time with such great topics, introductions, and interactions that I’ve already bought a ticket for next year. If you’re interested in joining the conversation, check out polyglotsoftware.com and grab your tickets at Eventbrite.