To name something many feel: there is immense pressure to use AI for as much as possible at work. Company valuations and fundraising sometimes depend on it, as does career growth — at least in pure tech. The other unspoken truth is that many in tech are betting on a future where AI is better at our jobs than we are, in every respect. But what if they're wrong? And how do we handle the transition if they're right?
This has become a frequent topic when discussing work with peers: how do you use AI, what do you use, and where is this all headed? Admittedly, the idea for this blog post came partially from a thought-provoking LinkedIn post Noah Finberg, Director of Applied AI at FinQore, wrote a little while ago and a recent offline discussion with James Aronson, Hardware Product Manager at Whoop.
So is there an ironclad, future-proof value to human-in-the-loop processes and human touch? My two cents: absolutely. In this post, I'll define the high-level premium for each — human and AI — and share a framework for when I use AI and when I explicitly don't.
Human input is valuable in many ways, but my focus here is on how humans assign value and communicate. We are social creatures with a hardwired need to connect. We have feelings and emotions that shape even rational choices. We make decisions within the context of lived experience, over long stretches of time, incorporating feedback from our physical environment. Without wading into the science of consciousness (I recommend any of Peter Godfrey-Smith's books for that), these qualities make communication distinctly bespoke to each interaction. A handwritten note has always landed better than a typed email — unless your handwriting is as illegible as mine. One reason: time and effort carry value in their own right, as do warmth and human-to-human connection. I don't want my therapist to be an AI, even if it can solve the problem faster.
On the AI side, I'll focus on breadth of recall, repetitive workflows, and pattern matching. As capable as the human brain is, we don't have access to the entire internet. As a whimsical example: I recently went back and forth with an AI about the physics of water flow through a coffee bed for two different drippers, determining how much coffee I needed for equal bed depth and similar drawdown speed. That's far more dynamic than traditional search, and the speed of iteration was mind-blowing. If I need to translate a thought into code or equations, it takes me a while; a good AI gives you a proof of concept in seconds. I expect that capability to only improve.
Imagine teaching someone a choreography once and having them repeat it perfectly forever. That's what we're talking about: drop an AI into your Slack channel and it can create tickets from conversations, write the code, test it, push it, and relay information to the right stakeholders without you lifting a finger. If you do something on repeat, you can probably teach an AI to do it and remove yourself from the loop entirely. As for pattern matching: we're still early, but it's worth discussing. Similar to breadth of recall, an AI can review your entire data system, plus documentation and dbt projects, in an instant, holding all of it in context to surface anomalies and trends that would otherwise require enormous human effort. With proper context and narrowly defined goals, AI can reliably lift insights that are tough to reach without significant engineering power.
Now that you have my take on each, here's how I decide between them. It generally comes down to four questions:

1. Is the work relational?
2. Is it a one-off, hyper-nuanced task?
3. Do I need to learn from it, or explain it later?
4. How expensive is this task?
For anything relational, I almost always communicate myself. The one exception: meeting summaries or action items, where I'll let an AI notetaker draft and I'll review for accuracy. There is always a human in the loop on communication.
For one-off, hyper-nuanced work, I usually do it myself — too many gotchas, and bespoke tasks often fall under the relational bucket anyway, as a means of building trust. There's a reason people say the trades have a moat: every house is wired differently, and the work demands both dexterity and judgment. I could see a future where cookie-cutter buildings use robots and AI to triage electrical issues, but that 1870s home never will.
For the third question: if I need to learn something or be able to explain what's happening under the hood, I do the task myself. If I just need an answer or a quick block of SQL, I let the AI write it. It comes down to whether upskilling is involved and whether the problem requires human institutional knowledge.
The fourth point is the most nuanced. "How expensive is this task" applies in several ways: token cost to the company, time cost to me, and environmental cost.
On budget: most companies are flexible right now, since AI usage is so tied to valuation. I expect that to change as AI companies need to become profitable and subsidies shrink — a caveat worth flagging so this post doesn't go stale.
On time: this is about those moments when you spend more time writing a script than it would've taken to just do the thing. Sometimes it's faster to do it yourself, even if AI feels easier. Worth considering upfront.
On environment: AI requires enormous power. Data centers sometimes need their own grid, and the water pollution coming out of some of these facilities is significant. That puts a real cost on the grid and the planet, and it factors into the tradeoffs I personally make. Right now, that means I use AI for work where it makes sense and not outside of it. Personal decision, but naming it as a cost matters.
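For fun, the four questions above can be collapsed into a toy decision helper. This is purely a sketch in Python to make the framework concrete; every field name here is my own invention, not any real API:

```python
def should_use_ai(task: dict) -> str:
    """Toy helper mirroring the post's four questions. `task` holds
    illustrative boolean flags; defaults assume "no" for each question."""
    # 1. Relational work: a human always communicates, though an AI
    #    notetaker may draft summaries for human review.
    if task.get("relational", False):
        return "human (AI may draft, human reviews)"
    # 2. One-off, hyper-nuanced work: too many gotchas to automate.
    if task.get("one_off_nuanced", False):
        return "human"
    # 3. Upskilling or institutional knowledge needed: do it yourself.
    if task.get("need_to_learn", False):
        return "human"
    # 4. Cost check: token, time, and environmental cost must be justified.
    if task.get("cost_justified", False):
        return "ai"
    return "human"

print(should_use_ai({"need_to_learn": True}))   # prints "human"
print(should_use_ai({"cost_justified": True}))  # prints "ai"
```

The ordering matters: the human-premium questions gate the decision before cost ever enters, which matches how I actually apply the framework.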
Humans and AI will coexist in the workplace from here forward, but they serve different purposes. The best teams will figure out how to deploy each for the right tasks using whatever framework fits — the questions above are one option.
And in a world saturated with AI, the rarest thing you can offer might just be yourself.