The Information reported that an AI agent inside Meta took an unauthorized action that led to an employee causing a security breach at the social media company last week. According to the publication, an employee used an in-house agentic AI to analyze a question posted by a second employee on an internal forum. The AI agent posted a response with advice to the second employee even though the first person didn't direct it to do so.
The second employee followed the agent's recommended action, sparking a domino effect that gave some engineers access to Meta systems they shouldn't have had permission to see. A company representative confirmed the incident to The Information and said that "no user data was mishandled." Meta's internal report indicated that there were unspecified additional issues that contributed to the breach. A source said there was no evidence that anyone took advantage of the unexpected access, or that the data was made public, during the two hours the security breach was active. However, that may owe more to dumb luck than anything else.
Many tech leaders and companies have touted the benefits of artificial intelligence, but this is just the latest incident in which human employees have lost control of an AI agent. Amazon Web Services experienced a 13-hour outage earlier this year that also (apparently coincidentally) involved its Kiro agentic AI coding tool. Moltbook, the social network for AI agents recently acquired by Meta, had a security flaw that exposed user information due to an oversight in the vibe-coded platform.

