🤖 AI Agents – A Full Circle?

While building an agent to answer some complex queries about my data, I noticed something interesting in its first few drafts. The screenshot below shows the agent’s detailed reasoning and thinking process right before it finally responded to my test query.

It’s incredible to see how a machine can mimic the human thought process before responding, almost precisely* like a human – like a real support agent!

| The word “precise” is the “precise” reason that led me to write this article, thus the highlight. You’ll see why in a later section 😉

The Evolution of Work

While it is fascinating to think how our world is evolving with the revolutionary technology now in our hands, it is also amusing to consider that, in many ways, we might be coming full circle back to our “human” roots.

Decades ago, we worked with humans. Then came machines, and we started working with humans and machines. And now, we are at a point where the line between human and machine is at its thinnest. Will we ever work solely with machines? Probably not, but as machines evolve, we might find ourselves working with machines that think like humans. (Ironically, we say someone “works like a machine” when they work hard; soon we might say a “machine works like a human” when it thinks deeply!) This brings us full circle back to the decades when people worked only with humans – think about booking a ticket or buying stocks in the 1970s.

The Ironic Reversal

What amuses me is how my agent’s thinking and reasoning resembled typical IT/customer support interactions. Having dealt with IT support for over a decade, I was amazed to see the agent’s reasoning process mirror conversations I’ve had with actual humans in the past.

This is how the interaction went:

  • I knew what I wanted (just some filtered data to test things out)
  • My instructions clearly stated what I wanted and where the agent should look for it
  • I shared my request (essentially saying: “I want something with this one filter for now”)
  • The agent assumed I hadn’t provided enough information (“No actual query is provided.”). In other words, it saw me as an idiot.
  • The agent then crafted a polite, professional response that essentially told me I hadn’t asked properly (it planned to acknowledge its role, explain how to proceed, and ask for a “specific” query)
  • Finally, it delivered this seemingly helpful response: “Since no specific query has been provided yet, I’m unable to retrieve or display any matches at this time. Let me know what you’re looking for!”

The “Let me know what you’re looking for!” irritated me the most. It sounds helpful and eager, but I had seen its actual thought process – it assumed I was asking a stupid question by not providing enough information. The polite request masked what it was really thinking: “What EXACTLY do you want, man?”

What This Means for Our Future

It’s both fascinating and somewhat unsettling to observe this interaction – seeing how a machine can mimic human-like thinking, not just in its responses but in its underlying reasoning process!

As we begin to realize the true potential of human-like machines, it is intriguing to imagine a future where humans and machines genuinely work together – not just with machines as tools or assistants, but as employees, agents, or perhaps even partners and colleagues. Could these intelligent machines someday earn the same respect we reserve for our peers, especially if intelligence is the trait we value most? It prompts us to ask: will there come a time when machines are so advanced that we move beyond merely using them, and instead respect their judgments and decisions not just for us as individuals, but for the benefit of humanity as a whole?

Perhaps we already treat machines as more than mere tools – whether it’s collaborating with them through “vibe coding,” relying on them as writing assistants, asking them to generate images, or even seeking their input on decisions like buying or selling stocks, among countless other tasks we once handled ourselves. Yet, this is only the beginning, and it raises the question of how the very nature of human-machine interaction will continue to evolve in the future.

…so, what was the problem with the agent?

Although reflecting on the broader philosophies surrounding AI and humanity is always tempting, I have to stay within the boundaries of reality and the present.

What I encountered wasn’t really a technical obstacle, but a semantic misunderstanding in my own instructions. When I assigned the AI agent its role, I asked it to provide “precise” data – thinking, as I had learned in school, that this meant something “almost exact” or “really close to” what I wanted. In reality, what I needed were responses that were simply “close enough.” Swapping that single word in my instructions – “precise” for “close to” – resolved the issue entirely 🤦🏻
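To make the fix concrete, here is a minimal sketch of what that one-word change looks like when an agent’s role instructions are plain prompt strings. Everything below is illustrative – the names `BEFORE`, `AFTER`, and `build_system_prompt` are hypothetical, not the actual agent framework I used.

```python
# Hypothetical reconstruction of the instruction change described above.
# The agent's role is defined by a plain string; the only difference
# between the broken and working versions is the wording that describes
# how closely results must match the user's query.

BEFORE = (
    "You are a data assistant. When the user asks for data, "
    "return precise matches for their query."
)

AFTER = (
    "You are a data assistant. When the user asks for data, "
    "return matches that are close to their query."
)

def build_system_prompt(instructions: str) -> str:
    """Wrap the role instructions into a system prompt (illustrative only)."""
    return f"System: {instructions}"

if __name__ == "__main__":
    print(build_system_prompt(BEFORE))
    print(build_system_prompt(AFTER))
```

With the first wording, a vague request like “just one filter for now” reads as underspecified, so the agent asks for clarification; with the second, the same request is acceptable as-is. Small wording choices in role instructions can shift an agent’s behavior this much.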

Ironically, I ended up spending more time writing and polishing this article, my very first on LinkedIn, than I did actually creating, troubleshooting, and fixing the agent itself!

But do let me know what you think the future will hold for humans and machines working together!


AI helped in writing this article.
