AI Usage
It’s important to set personal limits on how we approach and use AI tools as they become more prevalent throughout our society. I have created this permanent page, inspired by Derek Sivers and Damola Morenikeji. I will keep it updated with how I personally use AI to add value to my life in a sustainable way.
Writing
I almost never use AI to help me write. Writing is challenging, but the challenge is what makes it worth it. Think of writing like exercise for the brain. Your brain, like other parts of your body, needs exercise to stay healthy. I enjoy flexing those writing muscles - and you should too!
And I write quite a bit. The nature of working in a mostly-remote company means that a lot of our work is done asynchronously via written communication. I can see how it’s tempting to use AI to help you write, but I think it’s overall more harmful than helpful. Formulating thoughts to the point where you can write them down and communicate them clearly is an incredibly valuable process, and it elevates the quality of your work. Writing well makes you really think about what you want to say and forces you to string together sometimes disjointed thoughts into something concrete.
The one exception to this rule is autocompleting text messages or emails - the ones that are transactional in nature and don’t require me to flex those creative writing muscles.
Research
I often use AI (primarily Gemini) for general Q&A and research. This is an area modern LLMs excel at. Combine that with web search and multi-modal capabilities, and these tools are, quite frankly, incredible.
My Pixel 9 Pro, which I purchased in December 2024, included a free one-year subscription to Google’s Gemini Advanced, and it’s been a really great conversational research “partner”. The conversational-style voice synthesis is, honestly, scary good.
You do need to be cautious when using AI tools for research. After all, LLMs cannot actually reason; they simply use mathematical probability to generate one token at a time. LLMs are notorious for “hallucinating” facts precisely because of this. LLMs with better training data will be better at reproducing accurate information, but the generated text can only be as accurate as the training data. And guess what? There’s a lot of inaccurate information out there on the internet.
I like to treat text generated by LLMs like I do other untrusted information on the internet. Trust but verify. Use it as a jumping-off point for future, more in-depth research. I wouldn’t use it to answer a question that I need to 100% verify is correct, but I think it can be an incredibly useful tool to help generate ideas.
Software Development
Around the turn of the new year in 2026, I started experimenting with coding agents - first Claude Code, and later OpenCode - and I have to say, they are incredibly powerful.
At work (Zendesk) I use Claude Code almost daily for writing code. I do not blindly accept whatever code Claude wants to give me (vibe coding), but I do use it in a few ways:
- A “first-pass” at writing a feature.
- A starting point for something I’m thinking about.
- Writing tests.
- Writing lots of boilerplate.
- Writing documentation.
- Simple questions (syntax, regexes, etc.)
I almost never accept the first pass at what it has written. I will usually tweak it, expand it, or (most often) remove complexity. But it’s undeniable how much more productive a coding agent can make you if you already know what you’re doing and clearly know what you’re looking for.
For personal coding projects, I use OpenCode in the same capacity. I use OpenCode because I can hop around to different models (and take advantage of discounts). I also get to support an excellent open source product.
I use Gemini for lots of quick question answering and even generating small bits of code.
In short: Claude Code + ChatGPT at work, and OpenCode + Gemini for personal use.
I also have in-line AI code completion turned on in my preferred editor (Zed) and it’s become invaluable.
Agents
Everyone these days is talking about OpenClaw.
It’s a very cool and surprisingly simple tool. From a hacker’s perspective, I love it. The potential is amazing. But there are (currently) far too many risks (security, alignment, etc.) for me to blindly give an agent like this access to things I care about. I think everyone should spend time playing with OpenClaw (or other open agent implementations) in a sandboxed environment to see what they can do.
CLIs
CLIs are making a major comeback in 2026 because it turns out they’re the perfect interface for generic agents to interact with your application/service data. If you have a powerful API (and a good CLI that exposes it), an agent can do pretty much anything.
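To make that concrete, here’s a minimal sketch in Python of what such a CLI shape looks like. The tool name (“ticketctl”), its subcommands, and the in-memory data are all made up for illustration - the point is just that when every API operation is a subcommand with machine-readable JSON output, an agent can call and chain these commands trivially.

```python
import argparse
import json

# Hypothetical in-memory "backend" standing in for a real API.
FAKE_DB = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]

def build_parser():
    # One subcommand per API operation - the pattern agents handle well.
    parser = argparse.ArgumentParser(prog="ticketctl")
    subcommands = parser.add_subparsers(dest="command", required=True)
    list_cmd = subcommands.add_parser("list", help="list tickets")
    list_cmd.add_argument("--status", choices=["open", "closed"])
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    if args.command == "list":
        rows = [t for t in FAKE_DB
                if not args.status or t["status"] == args.status]
        # JSON output: trivial for an agent (or jq) to parse and chain.
        return json.dumps(rows)

print(run(["list", "--status", "open"]))
```

An agent (or a human in a pipeline) can then compose calls like `ticketctl list --status open | jq '.[].id'` without anyone writing bespoke integration code.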