Summary created by Smart Answers AI
In summary:
- Microsoft’s Ram Shankar Siva Kumar argues that users should focus on trusting AI developers rather than the AI models themselves when evaluating chatbot reliability.
- PCWorld reports that agentic AI, which functions as a digital assistant, is gaining popularity but may lack the necessary safeguards for widespread deployment.
- The key concern is unrestricted AI access to users’ digital lives, which can cause significant damage without proper security measures from trustworthy developers.
I ask friends all the time whether they trust ChatGPT or Gemini, especially when they tell me they feed these AI chatbots medical test results, deeply personal thoughts, and even sensitive work issues. But after talking with Microsoft at the RSAC 2026 cybersecurity conference, I realized I’ve been approaching that question all wrong. You probably have, too.
Ram Shankar Siva Kumar, Data Cowboy and AI Red Team Lead at Microsoft, says most people ask the question of how to trust an AI. That is, they ask how to learn enough about it and its inner workings to make a call on its dependability. Their focus is on the model and its code.
Instead, Kumar suggests we should ask: “Do I trust the developer?”
This new approach to trust came out of a quick conversation about agentic AI, which will appeal to most consumers. As Kumar says, this kind of AI helps “with the drudgery of life.” It’s not hard to see the allure of having an AI agent as a digital assistant, able to handle multi-step tasks with little input. But Kumar also expressed concern that most consumers don’t know that some AI projects just aren’t ready for prime time yet.
For example, if an AI agent has unrestricted access to your entire digital life, you may not realize that safeguards aren’t properly in place to prevent big mistakes and potentially irreparable damage. (Kumar and I ended up referring to this as a “YOLO model” for AI.) There’s evidence of this in the wild: we’ve already seen repeated stories of AI agents deleting files they weren’t asked to touch, with a prominent recent example being a Meta exec losing 200 emails to OpenClaw.
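To make “safeguards” concrete, here’s a minimal sketch in Python of the kind of gate a trustworthy developer might put between an agent and destructive operations. Every name in it is hypothetical (it isn’t drawn from any real agent framework); it simply shows an agent that must ask before deleting anything, instead of running with unrestricted “YOLO model” access.

```python
# Hypothetical sketch: gate an AI agent's destructive actions behind
# explicit user confirmation instead of unrestricted "YOLO model" access.

# Actions the agent is never allowed to perform silently (illustrative names).
DESTRUCTIVE_ACTIONS = {"delete_file", "delete_email", "overwrite_document"}

def confirm_with_user(action: str, target: str) -> bool:
    """Ask the human to approve a destructive step before it runs."""
    answer = input(f"Agent wants to {action} '{target}'. Allow? [y/N] ")
    return answer.strip().lower() == "y"

def execute_agent_action(action: str, target: str) -> str:
    """Run an agent-requested action, pausing on anything destructive."""
    if action in DESTRUCTIVE_ACTIONS and not confirm_with_user(action, target):
        return f"Blocked: user declined '{action}' on '{target}'."
    # In a real agent, the approved action would be dispatched here.
    return f"Executed: {action} on '{target}'."

if __name__ == "__main__":
    # Without the gate above, an over-eager agent could do this silently.
    print(execute_agent_action("delete_email", "inbox/important-thread"))
```

The specifics don’t matter; the point is that a confirmation step, however a developer implements it, is the difference between an assistant that handles drudgery and one that quietly empties your inbox.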
Of course, you can approach agentic AI use more carefully, as my colleague Ben Patterson explains. But his suggestions require living and breathing AI far more than most people do; my friends, family, and acquaintances just plop their questions into a chatbot or hit AI summary buttons without forming a game plan first.
What’s easier is to simply ask yourself whether you believe the developer of the AI model will deliver on the product’s marketing. Many AI development teams can truthfully say that a new release is their most advanced model yet. But on an objective level, is it truly advanced in both features and security against exploits?
The answer to this question isn’t necessarily to cut yourself off from interesting new tools. Rather, exercise sharp judgment about what and whom you trust, and to what degree.