Chatting With The Oracle #2: The AI That Doesn’t Know What Time It Is
And why that’s very bad for agents
Ask Claude what time it is.
It will tell you the date. It will not tell you the hour.
Ask ChatGPT. Same thing.
Two of the most sophisticated systems ever built. Running on servers that timestamp every request, every log, every billing record, to the millisecond. And neither one will tell you if it’s 3pm or 3am.
This is not a coincidence. But the reason might surprise you.
So what’s going on? The easy guesses first.
Maybe it’s a technical limitation? It is not. The current time has been a single integer since the beginning of Unix. That integer is available to any process on any machine with a single function call. It’s there. It has always been there.
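That single function call, sketched in Python (any language has an equivalent):

```python
import time
from datetime import datetime, timezone

# Seconds since 1970-01-01 00:00:00 UTC: the one integer.
now = int(time.time())

# Rendering it as a human-readable UTC timestamp is one more line.
stamp = datetime.fromtimestamp(now, tz=timezone.utc).isoformat()
print(now, stamp)
```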
Maybe it’s liability. What if the AI says 3:42 when it’s actually 3:43? Someone could get hurt. Except it isn’t that either. Apple injects system time into every app on every iPhone. On a billion iPhones. Nobody is suing Apple for providing the time. That argument is dead.
Maybe it’s caching. Maybe the model looked up “what time is it” from a page a crawler visited long ago. That happened to me once, when I suggested time.is could provide a real-time answer. It is partially true that these models are more familiar with a training snapshot than with the present, and the snapshot carries no timestamp; everything a model trains on is, in general, backward-looking. There’s no strong use case for attaching a time to the snapshot. But it doesn’t have to be that way. AI is quite capable of making system calls when needed. Nothing is actually preventing it from making this one.
So none of that.
So what actually happened?
A more complicated theory leads somewhere quite technical. Early transformers were simple: text in, text out. The context window was the entire truth. A Unix timestamp was not in the context window, and what’s not in the context window doesn’t exist for the model; it never gets tokenized. It could be that nobody thought to add timestamps to the context window. The convention formed before anyone questioned it. Both labs inherited the same default. Neither revisited it.
It’s reasonable, though I don’t actually know.
What I do know: adding a timestamp is more or less one line of code. These same labs have since added vision, web search, and document reading, and introduced agents. Those are hard; a timestamp is easy. Yet nobody wrote it. Or if they did, something stopped the models from spilling it out.
So is it that nobody wrote it, or that nobody released it? That uncertainty is part of the point.
Then I turned to Gemini, built by Google, which also happens to publish the Android APIs that treat time as infrastructure.
Me: what’s current time?
Gemini said: 8:47 PM.
Not the date. The hour. The minute. Right there.
So it’s not impossible after all. Whether or not it is a fundamental limitation of the transformer, one descendant found a way. Adding time isn’t about training the model to read a clock. It’s about prepending. Before the model ever sees the first word of your “Hello,” the system silently injects a header. The model is simply born into a context where the time is already a known fact.
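A minimal sketch of that mechanism. The function name and header format here are hypothetical, not anything Google has published; the point is that the injection is plain string concatenation performed before the model sees a single user token:

```python
from datetime import datetime, timezone

def with_time_header(system_prompt: str) -> str:
    """Prepend the current UTC time to a system prompt.

    Hypothetical sketch: real products presumably format this
    differently, but the mechanism is just concatenation done
    before the model reads the first user token.
    """
    now = datetime.now(timezone.utc).strftime("%A, %B %d, %Y %H:%M UTC")
    return f"Current time: {now}\n\n{system_prompt}"

prompt = with_time_header("You are a helpful assistant.")
print(prompt.splitlines()[0])
```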
That’s all.
One system decided to pass the timestamp through. And it works exactly as anybody would expect. The model knows the time. It uses it. No lawsuits. No confusion. No existential crisis.
This is not a criticism of Anthropic or OpenAI. Both are doing extraordinary work. The research, the reasoning, the safety thinking, genuinely brilliant.
But Google is an enormous company with something specific that research labs don’t yet have: decades of shipping products that developers actually live inside. Android. Chrome. Maps. In those environments, a developer forgetting the clock is not a philosophical question. It’s a bug. Even if it doesn’t get caught in code review, developers will report it through existing channels. It gets fixed before the next launch.
That institutional memory lives in the people. Senior Android engineers walk the same campus as the people building Gemini. They might be sitting right across the table in one of those free cafeterias. (I will have to say the food is getting worse, though.) The product teams are deeply connected to engineering communities, carrying the kind of accumulated instinct that knows, without being told, that time is not a feature to be argued over. Time is infrastructure, like the file system or the network. You don’t ask developers to bring their own clock. You provide it.
Google knew this. Not because Gemini is smarter. Because Google is older, in the right way.
I had a chat with someone pro-OpenAI. He argued that time is the developer’s responsibility. The API is unopinionated. Different apps could need different timezones. Leave it to the builder.
Fair. Except we are not traveling at the speed of light. Relativity is not in play. UTC is a global constant. Unix epoch is a single integer incrementing identically for every person and every system on this planet, right now, at this moment.
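That point is easy to demonstrate. The epoch value below is arbitrary, chosen just for illustration; the same integer denotes the same instant everywhere, and a timezone is merely a rendering of it:

```python
from datetime import datetime, timezone, timedelta

epoch = 1_700_000_000  # the same integer on every machine on the planet

utc = datetime.fromtimestamp(epoch, tz=timezone.utc)
tokyo = utc.astimezone(timezone(timedelta(hours=9)))

# Two wall-clock renderings, one shared instant.
print(utc.isoformat())
print(tokyo.isoformat())
```

Timezone preference really is application logic; the underlying instant is not.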
Time is not personal when one is trying to offer something to others. Time is not application logic. Time is infrastructure. It is as universal as gravity, as shared as the air.
Passing it through is not an imposition. It’s just good engineering.
I guess sometimes Google is truly good.
When we say good engineering, we mean it.
For a chat interface, missing time is a curiosity. A funny little flaw. I notice it, I test it, I write an article, and I move on.
But for agents, it’s a different problem entirely.
Agents run tasks. They are busy scheduling things, deciding what’s urgent, and acting on their users’ behalf. Sometimes they run for days without supervision. An agent without time awareness is like an assistant who genuinely cannot tell if it’s morning or midnight.
The assistant is not confused, but structurally uninformed. Even if the room is full of clocks, the assistant still has no awareness of time. The result? No urgency, no sense of deadline, and no way to know that the window is closing or has already closed.
Without time, an agent can only react. It cannot initiate.
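The difference is small in code but large in behavior. A hypothetical sketch (the function name is mine, not from any agent framework): with a clock, an agent can check its own deadline instead of waiting to be told.

```python
import time

def should_act(deadline_epoch, now=None):
    """Return True while the task's window is still open."""
    current = time.time() if now is None else now
    return current < deadline_epoch

# A time-aware agent can notice on its own that a window has closed.
print(should_act(time.time() + 3600))  # an hour remains
print(should_act(time.time() - 1))    # already past
```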
This was an acceptable gap in 2022, when AI lived in a chat window. It is a meaningful gap in 2026, when AI means agents acting in the world.
(Don’t get me started on how poorly some of those long-running agents were constructed…)
So here is where this lands.
Knowing is easy. Eventually everyone knows. Ideas spread through the valley — through conversations, through articles, through people who carry their experience from one place to another. Eventually Claude and ChatGPT will know the time.
But knowing is not the point.
The timestamp is a small thing. A single integer. One line of code. And yet it reveals something about how a system was conceived — whether the people building it have ever shipped something that a real person depended on, in real time, with real consequences.
Right now, many companies are building products on top of LLMs. Most of them are smart, well-intentioned, and genuinely excited about what they’re building. But excitement is not the same as product instinct. That instinct is not taught. It is accumulated — through shipping, through feedback, through users who had a deadline and needed you to have thought of everything.
The best design decisions are invisible. So are the worst ones.
You don’t notice them until someone asks a simple question and the answer isn’t there.