Alexander Parker

Full-stack Technology Expert

Perth, Western Australia

Hello!

I build software solutions and developer tools, with a focus on human-centric design concepts.

Pet Projects

Developer Tools

Instruction Template Specification (ITS)

A JSON specification that standardises how we create instruction-based content templates, making AI integration more predictable and maintainable.

ITS Compiler (Python)

The reference implementation that transforms ITS templates into AI-ready prompts, bridging the gap between structured content and conversational AI.

ITS Compiler CLI

A command-line tool that brings ITS compilation into your development workflow, making template processing seamless and scriptable.

Blobify

A CLI tool with matching VS Code extension that generates AI-friendly digests of entire codebases, designed for context engineering and large-scale code analysis.

Git Cam (Commit AI Messages)

A git extension that provides AI-powered code peer-review and commit message generation, streamlining development workflows.

Experimental

Clacks

A peer-to-peer network messaging system with no permanent data storage—messages exist only in temporary memory queues and network transmission.

zyn.js

A ~2KB procedural synthesizer that generates audio entirely in JavaScript. Try the demo to hear what's possible in such a tiny package.

Just for Fun

Space Strafer

A 5-star rated mobile game on Google Play that combines classic arcade gameplay with modern polish. Featured in a GameCloud interview.

Let's Connect

Working on something interesting? I'd love to hear about it.

Thought Cabinet

AI Agents Burning Money on Incomplete Context

In my observation, AI agents "work" by repeatedly making and then fixing mistakes, largely because of context window limitations. They suggest elaborate refactors and spend hours down rabbit holes on problems that already have simple solutions a couple of directories over. The core issue is that agents can't yet hold the full codebase in context at once.

To get around this, most agentic tools use RAG or similar approaches that retrieve relevant code snippets. That works reasonably well for Q&A, but it has obvious limitations for code generation, where every line written needs to take the entire codebase into consideration.

Working Around the Limitations

After experiencing these issues firsthand, I started focusing on what I can actually control: how to present as much codebase context as possible to an LLM.

What I found helps is giving agents as much code from a project as possible within their context limits: the complete codebase is best, but for larger projects or dependencies, a smaller filtered document containing the API / method signatures works very well. The goal is to give the model enough context to fully understand the architecture and structure of the codebase as it relates to the task at hand.
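
To make the idea concrete, here is a minimal sketch of how a signature-only view of a Python project could be generated with the standard library. It is purely illustrative: the helper names and the `src` root are made up, and this is not how blobify works internally.

```python
# Minimal sketch of the "API / method signatures only" idea for Python code.
# Invented for illustration; not blobify's implementation.
import ast
from pathlib import Path

def signatures(source: str) -> list[str]:
    """Collect class and function signatures, dropping implementation bodies."""
    lines = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}:")
        elif isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"def {node.name}({args}): ...")
    return lines

def build_digest(root: str) -> str:
    """One document for the whole project: path headers, then that file's signatures."""
    parts = []
    for path in sorted(Path(root).rglob("*.py")):
        parts.append(f"## {path}")
        parts.extend(signatures(path.read_text(encoding="utf-8")))
    return "\n".join(parts)

print(build_digest("src"))
```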

Building Blobify

I built blobify to solve this problem for myself. It started off as a simple script that just blobbed together all the files in a folder, but I added new features as I needed them.

The filtering is configurable through `.blobify` files that define different views or "contexts" for different tasks. This lets a developer quickly generate the required LLM context from any codebase with a single CLI command, which, even run manually, I find much faster and less headache-inducing than explaining the codebase to an agent and undoing the mistakes it can spend hours making.
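
As a rough illustration of the concept (not the actual `.blobify` syntax, which I'm not reproducing here), a named context can be thought of as a set of include patterns that decides which files make it into the digest:

```python
# Conceptual sketch of named "contexts": each context is a set of include
# patterns selecting which files end up in the digest. The context names,
# globs, and code are invented for illustration only.
from pathlib import Path

CONTEXTS = {
    "full": ["**/*.py", "**/*.md"],
    "api-docs": ["docs/**/*.md", "src/**/*.pyi"],
}

def blob(root: str, context: str) -> str:
    root_path = Path(root)
    parts = []
    for pattern in CONTEXTS[context]:
        for path in sorted(root_path.glob(pattern)):
            if path.is_file():
                parts.append(f"### {path.relative_to(root_path)}")
                parts.append(path.read_text(encoding="utf-8"))
    return "\n\n".join(parts)

print(blob(".", "full"))
```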

Results

The difference is noticeable. Agents with a more complete architectural context make fewer obvious mistakes than those that only see the output of individual RAG queries. They reuse existing patterns more often, suggest more appropriate abstractions, and hold onto the codebase's design patterns introduced earlier in the conversation.

I've been using this approach for months on both personal projects and at work. Development conversations with AI chatbots become productive far sooner than waiting for an AI agent to eventually stumble onto something that works.

Context Engineering: The LLM Productivity Cycle

Every AI coding session follows the same arc: 10 minutes explaining project structure, 1-2 hours of productive work, then context degradation as the conversation gets too long. You end up starting fresh sessions just when the AI finally understands your codebase. It's inefficient and frustrating to have to go back to the start and explain everything all over again, several times per day.

The issue is that many users treat AI interactions like casual conversations rather than structured information exchange. We wouldn't write production code without proper interfaces and documentation, but we'll spend ages explaining our codebase to an AI through natural language. Given the context limitations we're working with, this approach wastes a lot of tokens.

Context is King

I see context preparation as a first-class engineering activity. Instead of explaining as I go, I give the model comprehensive, targeted context upfront. This means providing the complete API surface: function signatures, class hierarchies, module relationships, and type definitions. That is everything needed for architectural decision-making, without the implementation clutter.
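
As a made-up example, the kind of context document I mean reads like a stub file: every name and type is visible, nothing else.

```python
# Hypothetical excerpt of an API-surface context document: signatures and types
# only, no bodies. The class and method names are invented for illustration.
class OrderRepository:
    def get(self, order_id: str) -> "Order": ...
    def save(self, order: "Order") -> None: ...

class PricingService:
    def __init__(self, repo: OrderRepository, tax_rate: float) -> None: ...
    def quote(self, order_id: str, currency: str = "AUD") -> float: ...
```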

Tools like blobify automate this process. What used to take 20 minutes of manual explanation now happens in seconds with a standardised format the AI can immediately parse and understand.

Measurable Improvements

With systematic context engineering, I reach peak productivity from the first prompt. The productive plateau extends dramatically because the AI makes fewer context-dependent errors.

On small-to-medium sized projects, sessions that used to deliver 1-2 hours of useful work now deliver 2-3 hours. On larger projects that would typically exhaust context limits immediately, I can engineer more refined contexts to squeeze more productivity out of a limited budget of LLM input tokens.

This structured approach to context engineering has become as fundamental to my workflow as version control. It's the difference between using AI as an expensive autocomplete and using it as a knowledgeable pair programming partner.

Enterprise AI Needs Infrastructure, Not More Chatbots

The AI industry is obsessed with building better chatbots while enterprises struggle with basic deployment problems. Compliance teams watch thousands of employees copy-paste potentially sensitive data into uncontrolled prompts. Regulated industries need audit trails and approval workflows, not more conversational interfaces.

The "chat with an AI" paradigm doesn't map to real-world business requirements. You can't audit a conversation you don't know exists. You can't version control a prompt someone on the team wrote in notepad. You can't enforce compliance standards on ad-hoc natural language interactions.

Building Missing Standards

The fundamental problem is lack of standardisation. Every team reinvents prompt engineering from scratch, leading to inconsistent results and no reusable patterns. We needed infrastructure that treats AI prompts as first-class engineering artifacts.

I'm building the Instruction Template Specification (ITS) to solve this systematically. It's a JSON schema that makes prompts versionable, testable, and reusable. Instead of ad-hoc prompt engineering, you get standardised templates with proper variable substitution, conditional logic, and validation.
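
To show the flavour of what that buys you, here is a toy Python sketch of variable substitution and validation. The field names are invented for this example and are not the actual ITS schema.

```python
# Toy sketch of two of the ideas a template spec standardises: variable
# substitution and validation. "id", "variables", and "body" are made-up field
# names, not the ITS schema.
from string import Template

template = {
    "id": "code-review/v1",
    "variables": ["language", "diff"],
    "body": (
        "You are reviewing a $language change.\n"
        "Flag security issues and missing tests.\n\n"
        "$diff"
    ),
}

def render(tpl: dict, values: dict) -> str:
    # Validation step: refuse to render if any declared variable is missing.
    missing = [name for name in tpl["variables"] if name not in values]
    if missing:
        raise ValueError(f"missing template variables: {missing}")
    return Template(tpl["body"]).substitute(values)

print(render(template, {"language": "Python", "diff": "<unified diff here>"}))
```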

The complete toolchain exists: specification, reference compiler (Python), and CLI tool (all on PyPI). This creates the foundation for controlled AI deployment that enterprises actually need rather than what the AI industry thinks they want.

Real Enterprise Requirements

Enterprises need expert-in-the-loop workflows, audit trails, and controlled data integration. They need to version prompts like code and test them like APIs. They need compliance controls and approval processes that work with existing governance frameworks.

ITS enables all of this through standardised template formats and proper tooling. It's the infrastructure layer that makes AI practical for businesses that can't just "chat with a bot" and call it enterprise-ready.

While others build more sophisticated chatbots, this work focuses on the boring infrastructure that actually enables enterprise AI deployment: templates, standards, tooling, and processes that work with existing business requirements rather than ignoring them.