InterlinedList Is Live: Lists, Posts, and Markdown Finally in One Place

It’s the 73rd day of 2026! In my previous post I promised updates, and here they are!

There’s a problem I’ve run into repeatedly over the years. Actually, it’s more like a pattern of problems.

I’ve got:

  • notes scattered across markdown files
  • lists living in some task app
  • social media posts written in drafts somewhere else
  • and half-finished ideas bouncing between GitHub issues, notebooks, and random documents.

Individually, each of these tools is “fine,” but the fragmentation means ideas, messaging, and lists keep leaking away into the nebulous.

That’s exactly the mess that led me to build InterlinedList.

And now it’s live: 👉 https://interlinedlist.com

What InterlinedList Actually Is

At its core, InterlinedList is a platform that ties together three things that are usually awkwardly separated:

  1. Lists
  2. Social media posting (to your other accounts too, not just on InterlinedList)
  3. Markdown documents

Each of these solves a different part of the “organize your thinking and output” problem. But the real value shows up when they’re connected.

InterlinedList brings them together into a single system.

Not another note app.
Not another scheduling tool.
Not another task manager.

Instead, it’s a workflow** platform for ideas that turn into posts, lists, and documents.

Lists That Connect to What You Do

Everyone has lists, and they tend to be all over the place. With InterlinedList you can create your own lists, with whatever column schema you want.

Ideas lists.
Research lists.
Feature lists.
Writing queues.
Project breakdowns.

The problem is most list tools treat lists like dead data. You write them down, check things off, and that’s about it. InterlinedList treats lists more like launch points. A list item can become:

  • a social media post
  • a markdown document
  • a reference entry
  • a trackable idea

Instead of bouncing between five tools, the list becomes the center of gravity. Which is how most people actually work. Over time, my intent is to make these features even more seamlessly connected. Eventually, there will even be options to bring in the LLMs you prefer, extending the capabilities of each of these pieces of your workflow.

Social Media Posting Without the Chaos

Posting to social platforms today usually looks like this:

  • Write something somewhere
  • Copy it into another platform
  • Schedule it somewhere else
  • Lose track of what you’ve already posted

InterlinedList brings posting directly into the workflow.

You can:

  • draft posts
  • schedule posts
  • organize posts into lists
  • connect posts to notes or markdown docs
  • refer to your cross-posted posts from InterlinedList (for example, see the image!)

[Screenshot: a cross-posted social media post by Adron Hall about Ba Bar in University Village, with images of the restaurant and links to the Mastodon and Bluesky copies.]

The goal is simple: make posting part of your idea workflow instead of a disconnected chore.

The first integrations include platforms like:

  • Mastodon
  • Bluesky

And the idea is to keep expanding that ecosystem. More to come, and I’m open to ideas!

Markdown Documents That Fit the Workflow

If you’re like me, markdown is where the real thinking happens.

Articles. Notes. Research. Drafts. Documentation.

But markdown tools often exist in their own isolated worlds.

InterlinedList allows you to maintain markdown documents directly alongside your lists and posts, making it possible to move naturally between: writing, organizing, and publishing.

Why These Three Things Belong Together

This was the key realization. Lists, posts, and markdown aren’t separate activities. They’re three phases of the same process:

  1. Capture the idea → lists
  2. Develop the idea → markdown
  3. Share the idea → social posts

Most tools treat these as unrelated workflows. InterlinedList treats them as one continuous pipeline. Which means less context switching, less tool juggling, and far fewer lost ideas.

Early Access Offer

To kick things off, I’m running a simple early access offer. If you’re interested in organizing ideas, posts, and documents in one place, now’s a great time to jump in.

The first 10 users who sign up will receive a full-featured subscription account for free.

No trial. No feature restrictions. Just the full platform.

Built Because I Wanted It

Like a lot of the things I’ve built, InterlinedList started as something I wanted for myself.

I needed a place where:

  • research lists
  • post drafts
  • markdown articles
  • and publishing

could actually live in the same ecosystem.

After building it and using it, the obvious next step was to open it up so others could use it too. Let me know what you think!

With that, stay tuned, the team has a lot more coming!

** I’d add that this is absolutely a work in progress; the team will keep building out the workflow concept and features to bridge this set of tooling together even more seamlessly.

Security Was Already a Mess. Generative AI Is About to Prove It.

I was thinking about some of the points from the Polyglot Conf list of predictions for Gen AI, titled “Second Order Effects of AI Acceleration: 22 Predictions from Polyglot Conference Vancouver“. One thing that stands out to me, and I’m sure many of you have read about the scenario, is misplaced keys, tokens, passwords, usernames, or whatever other security collateral gets left in a repo. It’s been such an issue that orgs like AWS have set up triggers: when they find keys exposed on the internet, they trace them back and try to alert their users (i.e., if a customer has committed account keys to a repo). It’s wild how big of a problem this is.
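The kind of scanning those triggers do can be sketched very loosely. This is a toy example in Python; the function name and the single AWS-style key pattern are my own illustrative assumptions, while real scanners (gitleaks, trufflehog, AWS’s own tooling) match hundreds of credential formats:

```python
import re
from pathlib import Path

# Illustrative only: one AWS-style access key ID pattern. Real secret
# scanners match many credential formats, not just this one.
AWS_KEY = re.compile(r"AKIA[0-9A-Z]{16}")

def find_leaked_keys(root: str) -> list[tuple[str, int]]:
    """Walk a directory tree and report (file, line number) pairs
    where something shaped like an AWS access key ID appears."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if AWS_KEY.search(line):
                hits.append((str(path), lineno))
    return hits
```

Run something like this against a checkout before you push; anything it flags is worth rotating, since deleting the commit doesn’t un-leak the key.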

Once you’ve spent any serious amount of time inside corporate IT, you eventually come to a slightly uncomfortable realization. Even more so if you focus on InfoSec or other security-related work. Security, broadly speaking, is not in a particularly great state.

That might sound dramatic, but it’s not really. It is the standard modus operandi of corporate IT. The cost of really good security is too high for most corporations to focus where they should, and when some do focus on security, they often miss the forest for the trees. There are absolutely teams doing excellent security work, so don’t get the idea I’m saying there aren’t solid people doing the work to secure systems and environments. There are organizations that invest heavily in it. There are people in security roles who take the mission extremely seriously and do very good engineering.

A lot of what passes for security is really just a mixture of documentation, policy, and a little bit of obscurity. Systems are complicated enough that people assume things are protected. Access is restricted mostly because people don’t know where to look. Credentials are hidden in configuration files or environment variables that nobody outside the team sees.

And that becomes the de facto security posture.

Not deliberate protection.

Just… quiet obscurity.

I’ve lost count of the number of times I’ve been pulled into a system review, or some troubleshooting session, where a secret shows up in a place it absolutely shouldn’t be. An API key sitting in a script. A database password in a config file. An environment file committed to a repository six months ago that nobody noticed.

That sort of thing happens constantly. Not out of malice. Out of convenience. But now we’ve introduced something new into the environment.

Generative AI.

More importantly though, the agentic tooling built around it. Tooling that literally takes actions on your behalf. Tools that can read entire repositories, analyze logs, scan infrastructure configuration, generate code, and help debug systems in seconds. Tools that engineers increasingly rely on as a kind of external thinking partner while they work through problems.

All of that benefit comes with AI tools. But the AI doesn’t care about the secret; it’s just processing text. The act of pasting it there is what matters. Because the moment that secret leaves your controlled environment, you no longer know exactly where it goes, how it’s stored, or how long it persists in the LLM provider’s systems.

The mental model a lot of people are using right now is wrong. They treat AI like a scratch pad or an extension of their own thoughts.

It isn’t.

The more accurate model is this: an AI tool is another resource participating in your workflow. Another staff member, effectively.

Except instead of being a person sitting at the desk next to you, it’s a system operated by someone else, running on infrastructure you don’t control, processing information you send to it. Including keys and secrets.

Once you start looking at it that way, a few things become obvious. You wouldn’t casually hand a contractor your production API keys while asking them to help debug something. You wouldn’t drop a full .env file containing service credentials into a conversation with someone who doesn’t actually need those values.

Yet that is exactly the pattern that is quietly emerging with generative AI tools. Especially among new users of said tools! Developers paste configuration files, snippets of infrastructure code, environment variables, connection strings, and logs directly into prompts because it’s the fastest way to get an answer.

It feels harmless. But secrets have a way of spreading through systems once they start moving.

The real issue here is that generative AI doesn’t create security problems. It amplifies the ones that already exist. Problems that the industry has failed (miserably might I add) at solving. If an organization already has sloppy credential management, AI just gives those credentials another place to leak. If engineers already pass secrets around informally to get work done, AI becomes another convenient channel for that behavior.

And because AI tools accelerate everything, they accelerate the consequences too. What used to take hours of searching through documentation can now happen instantly. A repository full of configuration files can be analyzed in seconds. Systems that were once opaque are now far easier to reason about.

The Takeaway (Including secrets!)

The practical takeaway here isn’t that people should stop using AI tools. That’s not realistic and frankly a career limiting maneuver at this point. The tools are genuinely useful and they’re going to become a permanent part of how software gets built.

What needs to change – desperately – is operational discipline.

Secrets should never be treated casually, and that includes interactions with generative systems. API keys, tokens, passwords, certificates, environment files, connection strings—none of those belong in prompts or screenshots or debugging sessions with external tools.

If you need to ask an AI for help, scrub the sensitive pieces first. Replace real values with placeholders. Remove anything that grants access to a system. Set up ignore rules for your env files, and don’t let production env values (or vault values, whatever you’re using) leak into your generative AI systems.
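That scrubbing step can even be automated. Here’s a minimal sketch in Python; the pattern list and placeholder names are illustrative assumptions, not a complete rule set, and a real redaction tool would cover far more formats:

```python
import re

# Toy patterns and placeholders -- illustrative, not exhaustive.
# Real redaction tooling matches many more credential formats.
SECRET_PATTERNS = [
    (re.compile(r"AKIA[0-9A-Z]{16}"), "<AWS_ACCESS_KEY_ID>"),
    (re.compile(r"(?i)(password|passwd|secret|token)\s*[=:]\s*\S+"),
     r"\1=<REDACTED>"),
]

def scrub(text: str) -> str:
    """Swap likely secrets for placeholders before pasting text
    into a prompt, ticket, or chat."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

print(scrub("password=hunter2"))  # password=<REDACTED>
```

Pipe your config or log snippet through something like this before it ever touches a prompt; the answer you get back is just as useful with placeholders in place.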

Treat every AI interaction the same way you would treat a conversation with another engineer outside your organization, or better yet outside the company (or government, etc.) altogether.

Helpful, sure. But not someone you hand the keys to the kingdom. Don’t hand them to your AI tooling either.

Additive vs. Mutative vs. Destructive Code Changes (and Why AI Agents Love the Wrong One at 2:13AM)

There’s a particular kind of pain that only software developers know.

Not the “production is down” kind of pain.

Not the “we deployed on Friday” kind of pain.

No, I mean the slow creeping dread of pulling the latest changes, running the tests, and realizing the codebase has been “improved” in a way that feels like someone rearranged your entire kitchen… but left all the knives on the floor.

And lately, this pain has been supercharged by AI tooling. Cursor, Claude, Copilot, Gemini, ChatGPT-driven agents, whatever your poison is this week, all share a similar behavioral pattern:

They can produce a stunning amount of output at an impressive speed… while quietly reshaping your system into something that looks correct but behaves like an alien artifact from a parallel universe.

The reason is simple: AI agents don’t “change code” the way humans do. They don’t naturally respect boundaries unless you explicitly enforce them. They operate like an overly enthusiastic intern with root access and no fear of consequences. To understand why this happens, we need to talk about the different types of code changes, and how AI tooling tends to drift toward the most dangerous ones.

So let’s name the beasts.

The Four (Actually Six) Types of Code Changes

Continue reading “Additive vs. Mutative vs. Destructive Code Changes (and Why AI Agents Love the Wrong One at 2:13AM)”

12 Days into 2026 – Status & update of my projects.

As mentioned in my end of 2025 post, there were several projects I intended to start, continue, or finish up in 2026. This is a quick status and an additional few things I’ve started working on.

https://interlinedlist.com – This is live. Albeit not very built out, the start is live. You can even sign up for an account and play around with it. A caveat though: this is pre-alpha, not very feature-rich at all, and I don’t have the programmer-oriented tasks integrated yet. It’s just the micro-blogging. But hey, some progress is better than no progress!

https://www.datadiluvium.com – This is still live and I’ve got some changes to push, but they’re breaking changes, so as soon as I dig up a bit of free coding time I’ll resolve them and push the latest tweaks. I also need to get some basic documentation together, plus a post here on the blog about what you can do, or would want to do, with it.

dashingarrivals – First commit isn’t done. Working on it; it’ll be coming soon per the previous post.

collectorstunetracker – No current update. Though I do need to do something about my collection, cuz it has only grown, not shrunk! 🤘🏻

Writing – This is one of many posts. I wrote this one too, not some AI nonsense.

New News About New Projects

A while back on one of the social medias (I’m on Threads, Mastodon, and Bluesky – join me there) I discussed the idea of doing some videos on algorithms and data structures. The intent is to put together some videos similar to these: “Coding with AI: A Comparative Analysis“. It could be good, and I’d tack onto the algorithms and data structures some additional lagniappe via the concurrency patterns I previously wrote about. If interested, subscribe to the blog here or subscribe to my YouTube at https://www.youtube.com/adronhall.

Until then, keep thrashing the code! 🤘🏻

The End

Today is, for the most part, the end of my professional year of 2025. What have I done, what am I looking forward to in 2026?

2025 Year in Review

This year I’ve had the chance to work on some really cool tech. You can read more about it here, where I put together a quick QR code prototype. I also did a ton of documentation writing – not just writing it, but shipping it as a product and making sure it stayed updated and kept pace with the products and services it shipped with: How Principal Engineers Shape Documentation as a Product + Punch List Lagniappe. Beyond that I’ve shipped tons of AI processing pipeline code that changed about 4,923,123** times because the tools provided weren’t up to snuff, so reinvention and filling the gaps in the tools it was. Lots of fun challenges, great tech shipped, and users of said software up and running.

On a personal front I got a reasonable amount of my expectations met for 2025, which, mind you, wasn’t a lot considering the changing of the guard and the current state of the tech industry. To which: let’s talk brass tacks, not beat around the bush, dispense with formalities, and get to the pending chaos of 2026.

Let’s Talk, No Filter

Continue reading “The End”