You Can't Vibe Code Awareness
The biggest difference between humans and AI isn’t processing power, but our ability to notice and interrupt our habits.
My left wrist vibrates gently, indicating my 15-minute morning meditation is coming to a close. I turn off the alarm, take a slight bow and flutter my eyes open. It’s distinctly brighter than it was a quarter hour ago when I closed them, a sign that the sun has moved above the horizon.
The yard teems with wildlife and the day has begun. I slowly unwrap my legs and rise from my zafu, feeling calm, collected and aware.
Body still calm, my mind starts to fire up and the narration begins.
“That felt good, I wish I could do it every day. Maybe I could if I weren’t so lazy.”
“My right knee is definitely barking at me from sitting in lotus too long - I wonder if that lingering meniscus injury will ever fully heal.”
“Oh shit, I forgot to send that email blast.”
I quickly feel the calm awareness retreat into the background, replaced by rising anxiety, fueled by the inner voice that narrates, reflects on, and often judges everything I do.
This voice sits there in perpetual commentary.
Sometimes it’s helpful, but just as often, it’s self-defeating.
Still, it’s undeniably human.
I’ve recently started thinking about the parallels between this inner narration and the way modern artificial intelligence, particularly reasoning models, simulate thought and consciousness.
Large Language Models and similar AIs can produce coherent narratives, and as incredible as they are, they’re fundamentally different from human reflection. At their core, they’re little more than autocomplete, trained on a massive amount of information.
From a technical standpoint, this is called stochastic pattern matching and large-scale information retrieval, but put more philosophically, they echo knowledge without truly knowing or experiencing it.
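If it helps to see the idea in miniature, here’s a toy sketch in Python of what that “autocomplete” boils down to: score the possible next words, then roll weighted dice. The words and numbers are entirely made up, and no real model works at anything like this scale, but the shape of the mechanism is the same.

```python
# Toy illustration of "autocomplete as stochastic pattern matching".
# The candidate words and their scores are invented for this example.
import math
import random

# Hypothetical scores a model might assign to the word that follows
# "The yard teems with..."
logits = {"wildlife": 4.2, "birds": 3.1, "weeds": 1.8, "regret": -0.5}

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Turn raw scores into probabilities (softmax) and sample one token."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

for _ in range(3):
    print(sample_next_token(logits))  # "wildlife" most of the time, but not always
```

Run it a few times and you’ll get different continuations, none of which the program “means” in any sense. It’s just weighted dice.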
Over the past year, reasoning models and specialized agents have emerged that layer a kind of reflection onto the process. That reflection is surface level at best, really just a combination of statistical analysis and traditional debugging, not thinking, feeling, or anything like true thought.
Put in simpler terms, it tells us what it thinks we want to hear (more on that below).
Self-reflection for an AI is like reviewing a journal entry written by someone else, about emotions it has never felt, for reasons it can’t begin to grasp. It might catch typos or smooth out phrasing, but it has no sense of meaning, emotion, or intent. It’s just following patterns it was trained to recognize.
Of course, plenty of humans don’t evolve past their own training data either. They’re still running on childhood programming and algorithm-fed opinions and call it personality and worldview.
Yet here lies the crucial difference. Unlike AI, we as humans have the capacity to notice when we’re stuck in our patterns. We have the power of self-observation and gut instinct. When we use them, we uncover blind spots and create space for better choices.
Here are a few examples I recently encountered in my own life where momentary awareness allowed me to disengage the autopilot.
Teaching Through the Mess
The other day my daughter Arabelle spilled a glass of orange juice all over the kitchen table and floor.
Spills are not an uncommon occurrence in our house, and normally this wouldn’t even register as anything other than a “normal” day. However, this was the third time in the span of 20 minutes, and I immediately felt my frustration rise.
The cleaners had just come through, we were now running low on dish towels, and does she even realize that organic orange juice is now almost $9.00 at Whole Foods? I also hate the fact that, no matter how much you clean, the stickiness seems to persist for days.
Instinctively, I went to raise my voice, but then I looked over at a three-year-old girl with big puppy-dog eyes, lower lip out and quivering, on the verge of tears (yes, she was probably working me).
“It’s OK, Daddy, I’m sorry,” she said, and in a momentary hard reset, my irritation dissipated.
It was replaced with the awareness that she was simply trying to emulate her older brothers Atlas and Rylo, who often prepare their own drinks. She was doing her best to learn and evolve. After we cleaned up the mess, I sat with her and showed her how to safely pour juice from a carton and how to keep the cup away from the edge of the table.
I’m not quite sure ChatGPT would move beyond an “ideal parent response.” It would likely default to a generic reassurance like “It’s okay, everyone makes mistakes,” missing the deeper, nuanced reality of her sincere effort and the teaching opportunity at play.
Knowing When to Stop
Late last Thursday afternoon the weather was perfect. I was also trying to roll out a new feature for the web application that we use to run FRESH markets. Admittedly, my head wasn’t really in it as my thoughts drifted to late afternoon bike rides and maybe even a quick trip to the beach. I really wanted to finish this project to end the week on a high note. Still, I could feel myself making mistakes.
After one particularly tricky edge-case bug I uncovered while testing, I realized that I was rushing and not thinking through every scenario, which risked compounding issues. Snapping myself out of my default wiring to “complete the task”, I added some notes, committed my code and packed it up for the day.
Barring any temporary rate limits, I’m not sure that an LLM would ever have the ability to just “check out” for the afternoon. This is probably a good thing - we don’t want our tools deciding when they want to work.
Still, if you’ve done any amount of coding with a copilot or agent, you know they can lose context very quickly and, with confidence, begin to talk themselves in circles.
Maybe this will improve with time, but even the best reasoning models today can’t seem to recognize when they’re stuck in one of these patterns and pull themselves out.
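For what it’s worth, the check I ran on myself that afternoon (“you’re rushing, stop”) is roughly the kind of guard most agent loops don’t ship with today. Here’s a toy Python sketch of one. Everything in it is hypothetical: run_agent_step is a stand-in for whatever copilot or agent call you happen to be making, not a real API.

```python
# Toy sketch of a "notice you're stuck" guard around a hypothetical agent loop.
from collections import deque

def run_agent_step(task: str) -> str:
    """Hypothetical stand-in for a real copilot/agent call."""
    return "try the same fix again"  # pretend the agent is going in circles

def agent_loop(task: str, max_steps: int = 20, window: int = 3) -> str:
    recent = deque(maxlen=window)  # remember the last few responses
    for _ in range(max_steps):
        response = run_agent_step(task)
        # Crude circularity check: if the last few responses are identical,
        # stop burning tokens and hand the problem back to the human.
        if len(recent) == window and all(r == response for r in recent):
            return "Looks like I'm stuck in a loop; stopping and asking for help."
        recent.append(response)
    return "Hit the step limit without finishing."

print(agent_loop("fix the edge-case bug"))
```

It’s crude, a short memory of recent responses and an equality check, but even that sliver of self-observation is more than most of these tools currently give themselves.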
Breaking Up The Hotel Party
Large Language Models layered with reasoning engines have been incredible tools for underwriting potential real estate projects. I’ve found that not only can you give them projected financials and develop your own models, they’re also great for “stress testing” extreme edge cases, exploring possibilities, and even discovering different ways to generate revenue.
Recently I was underwriting a potential hotel and bar property my partners and I were considering purchasing. It was in a great location and it would be a natural expansion for us given the size and proximity to our hospitality project in Bradley Beach.
To make it even more enticing, the current owners were willing to seller finance a portion of the purchase price for several years and have us step into managing the place in as soon as 30 days. It was the perfect opportunity and timing to take over just before the start of the summer season and learn the business while we secured the remainder of our financing.
I used ChatGPT’s Deep Research mode to do some pretty detailed analysis of the project and, even with the high price tag, it penciled out. This was incredible!
Then reality set in.
The “hotel” was more in line with a “dorm room” than what most of us would consider even the basic offerings of a Holiday Inn Express, let alone a boutique experience.
A good portion of the “rooms” were really unfinished basement spaces that had “potential” to be additional bedrooms. An entire floor that had recently been renovated also happened to sit below the water table and had even more recently taken on water. The entire wing needed significant mold remediation along with another full scale gut renovation.
As I continued to dig into the balance sheet, I caught that the food and beverage operations had consistently lost many hundreds of thousands of dollars per year and that their numbers were way below industry averages. Again, this was positioned as “potential upside” for the “right operator,” but it became very clear why they were eager for us to start managing the place for the upcoming season.
I started feeding some of these findings back into ChatGPT, and the rosy optimism of its original analysis began to turn a few shades darker.
Given the inflated asking price (there are no deals at the Jersey Shore), it became clear that the current owners were in over their heads and trying to recoup their losses before moving on. If we took this on, we’d be inheriting and compounding their mistakes.
At that point, we respectfully declined and moved on.
On paper, this project penciled out and, when I was initially excited to take it on, ChatGPT was happy to confirm that it would. As soon as I started to layer on some doubts and additional real world context, it changed its viewpoint. It was simply telling me what it thought I wanted to hear.
In all three examples, I think the lesson is that sometimes we need to hard-reset our default programming.
When awareness interrupts habit, clarity emerges.
Large language models and AI are incredible tools, and they’ve supercharged my output by orders of magnitude, all while using far fewer resources.
Still, they’re just following their programming and pattern matching to the best of their abilities.
Perhaps one day, their reasoning will evolve and develop to the point that they can break out of this programming.
Perhaps they’ll learn to find the subtle nuance in the moment.
Perhaps they’ll be able to pull themselves off the hamster wheel.
Perhaps they’ll just be able to tell me I’m wrong or crazy.
Until then, I’ll continue to make space to sit, develop my own awareness and recognize the limits of theirs.