• Tame your AI gremlins before the chaos becomes permanent

    From TechnologyDaily@1337:1/100 to All on Thu Apr 9 12:00:30 2026
    Tame your AI gremlins before the chaos becomes permanent

    Date:
    Thu, 09 Apr 2026 10:49:34 +0000

    Description:
    AI agents are moving fast, but without clear identity and control, they
    become chaos machines.

    FULL STORY ======================================================================

    There's a word that sums up where the software industry is right now:
    chaos. I was going to write "is heading," but that would have been
    accurate six months ago. It's here already.

    AI coding has made it cheap to change any software you want, so everyone
    has started changing everything at the same time: infrastructure, internal
    tools, APIs, security models, CI pipelines, even entire product surfaces.
    The cost of producing code is falling fast, but the cost of understanding
    what that code does has not. That mismatch is where your AI gremlins live.

    Avery Pennarun

    CEO and Co-founder of Tailscale

    For the past couple of years, the loudest AI security conversation has
    been about employees pasting sensitive data into chatbots. That's a real
    concern, and it deserves attention, but it's not the problem that will
    define the next wave of incidents, because the real shift isn't AI that
    talks. It's AI that acts.

    Coding assistants now open pull requests, and agents merge branches, file tickets, trigger CI jobs, query databases, and call internal APIs. In a growing number of organizations, these systems are no longer experiments.
    They are part of how work gets done.

    That changes the risk category: shadow AI stops being a policy issue and starts being a privileged access issue.

    Once an agent can take actions, the question isn't "Did someone paste the
    wrong thing into a prompt?" It's "Who did what, using which credentials,
    and under what authority?" Most organizations still can't answer that
    cleanly.

    The real problem isn't speed. It's bypass

    A common framing is that security teams are lagging AI adoption. The more
    accurate version is that they're being bypassed. AI adoption moves at
    product speed, while security review moves at organizational speed. When
    the two collide, the industry's default behavior has been predictable:
    ship first, govern later.

    "Later" usually arrives during incident response, when you discover your
    logs can tell you that the bot did something but can't reliably tell you
    who initiated it, what policy was evaluated, or what scope limitation was
    supposed to apply.

    We are building workflows that can take powerful actions, and then acting
    surprised when we can't explain those actions afterward.

    This is the part that should worry leaders, not because AI is mystical,
    but because it makes old mistakes easier to repeat at scale. We've all
    spent years trying to eliminate shared credentials and unclear ownership.
    Agentic workflows have a talent for resurrecting both.

    A familiar pattern: the demo works, then security sees it

    Here's a pattern I've seen more than once, and it never shows up in a
    strategy deck.

    A team prototypes an agent to speed up engineering. It starts innocently:
    read tickets, propose code changes, open PRs. Someone adds the ability to
    call internal tools because it's just one more step, and suddenly the agent
    can touch GitHub, CI, and deployment.

    The credentials are whatever is easiest: a shared token, a service account,
    an API key sitting in a secrets store.

    It ships. Everyone's happy. Work moves faster.

    Then someone from security takes a closer look and has the same reaction
    every experienced security person has when they find a powerful automation
    running on broad, long-lived credentials: "What are you thinking?"

    That moment matters. It's not that security hates AI tools. It's that
    security understands a basic rule that everyone else is temporarily trying
    to ignore: actions require accountability. If you cannot say who
    authorized an action, you cannot convincingly claim you control it.

    In the best case, the team pauses, routes the agent through a proper
    access path, scopes its permissions, and adds real attribution. In the
    worst case, the agent stays wired in "temporarily," which is a word that
    can mean anything from one day to the heat death of the universe.

    Cheap code amplifies sloppy identity

    We've seen this movie before. When virtual machines got easy, we got
    server sprawl. When cloud storage got cheap, we got public buckets. When
    CI became self-serve, we got pipelines nobody fully understood. Now code
    is cheap, so integration sprawl is next.

    Agents are being wired into GitHub, CI, ticketing, databases, and internal
    APIs using whatever credential is closest at hand. Often that means
    long-lived tokens stored in environment variables, configuration files, or
    endpoints.

    Sometimes those tokens belong to a human. Sometimes they're shared service
    accounts. Sometimes they're temporary keys that have survived three
    reorganizations.

    It works until it doesn't. A continuously running agent with a broadly
    scoped credential is effectively a privileged insider operating at machine
    speed. It will do exactly what its permissions allow, and it will do it
    more consistently than a tired human at 2 a.m. If your access model is
    sloppy, AI won't fix it. It will scale it.

    Right now the industry is in a hurry. Roadmaps are being rewritten around
    "AI-first," and teams are rebuilding workflows because models make it
    possible, not because it's necessarily wise. Hurry creates activity, but
    it doesn't create coherence.

    In a hurry, teams grant broad permissions to get a demo working, drop
    provider keys onto endpoints because it's convenient, and defer identity
    design because it feels like plumbing.

    But plumbing is what keeps the building standing.

    One principle that makes the rest survivable

    There's a simple principle that should anchor AI governance going forward:
    if an AI system can take actions, it needs an identity of its own.

    Not a shared service account, not a copied human API key, and not a static
    token living in a configuration file. A real, governed identity. That
    identity should use short-lived credentials, have tightly scoped
    permissions, and be evaluated against policy at the moment of each tool
    call.

    Every action should also be attributable back to a known user or workload intent, so you can apply controls at decision time rather than reconstruct intent in a postmortem.
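    As a rough illustration of that principle, here is a minimal sketch in
    Python. All of the names here (AgentIdentity, authorize_tool_call, the
    scope strings) are hypothetical, not any particular product's API; the
    point is the shape: the agent has its own identity, the credential
    expires, permissions are an explicit scope set, and policy is evaluated
    and logged at the moment of each tool call.

```python
import time
import uuid
from dataclasses import dataclass

# Hypothetical sketch: an agent identity with a short-lived credential,
# tightly scoped permissions, and a policy check run per tool call.

@dataclass
class AgentIdentity:
    agent_id: str          # the agent's own identity, not a borrowed human key
    on_behalf_of: str      # the known user or workload intent behind the agent
    scopes: frozenset      # explicit, tightly scoped permissions
    expires_at: float      # short-lived credential: a hard expiry, not "forever"

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

def authorize_tool_call(identity: AgentIdentity, tool: str, audit_log: list) -> bool:
    """Evaluate policy at decision time and record attribution for later."""
    allowed = identity.is_valid() and tool in identity.scopes
    audit_log.append({
        "event_id": str(uuid.uuid4()),
        "agent": identity.agent_id,
        "on_behalf_of": identity.on_behalf_of,
        "tool": tool,
        "allowed": allowed,
        "ts": time.time(),
    })
    return allowed

# Usage: a credential minted for one task, expiring in five minutes.
audit_log = []
ident = AgentIdentity(
    agent_id="agent/pr-bot-42",
    on_behalf_of="user:avery",
    scopes=frozenset({"github.open_pr", "ci.trigger"}),
    expires_at=time.time() + 300,
)
print(authorize_tool_call(ident, "github.open_pr", audit_log))  # True: in scope
print(authorize_tool_call(ident, "db.drop_table", audit_log))   # False: denied
```

    The design choice worth noting is that attribution is written at the same
    moment the decision is made, so "who did what, under what authority" never
    has to be reconstructed after the fact.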

    This pushes you toward standardizing how agents reach your systems.
    Centralize access through one approved path rather than letting ten ad hoc
    integrations bloom in parallel. Keep provider keys off endpoints as much
    as possible. Treat tool calls like production changes, because in practice
    that's what they are.

    The control plane doesn't go away just because the interface got chatty.

    A gut check for the 3 a.m. page

    If you remember only one thing, make it this: AI didn't invent a new
    security problem. It made an old security problem run faster.

    The old problem is unaccountable power. The habit of scattering
    credentials across endpoints. The belief that you can clean up later.
    We've been trying to stamp that out for twenty years, and it keeps coming
    back whenever a new wave of tooling makes shortcuts feel harmless again.

    So here's the test you can run the next time someone proposes wiring an
    agent into production-adjacent systems. Imagine the 3 a.m. page. Something
    happened. The logs say an agent did it. The business is asking what went
    wrong.

    Can you answer, plainly and confidently, who authorized that action and why the system allowed it?
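    The 3 a.m. test can be sketched as a query. This is an illustrative
    fragment, not any real logging system: the field names and the
    who_authorized helper are made up. The point is that if every agent
    action was logged with the agent's identity, the user it acted for, and
    the policy that allowed it, the answer is a lookup rather than a
    forensic reconstruction.

```python
# Hypothetical sketch: answering the 3 a.m. question from a structured
# audit log, where each entry carries identity, intent, and policy.

def who_authorized(audit_log: list, event_id: str) -> str:
    """Answer 'who authorized this action and why was it allowed?'"""
    for entry in audit_log:
        if entry["event_id"] == event_id:
            return (f"{entry['agent']} acting for {entry['on_behalf_of']}, "
                    f"allowed by policy '{entry['policy']}'")
    return "unknown"  # the chaos-generator outcome

log = [{
    "event_id": "evt-901",
    "agent": "agent/deploy-bot",
    "on_behalf_of": "user:oncall-lead",
    "policy": "deploys-require-approved-ticket",
}]
print(who_authorized(log, "evt-901"))  # one-line answer: agent, user, policy
print(who_authorized(log, "evt-999"))  # "unknown"
```

    If the honest answer for your current integrations is "unknown," that is
    the gap the rest of this article is about.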

    If you can't, you don't have an AI program. You have a chaos generator
    with a polite user interface.

    Tame the gremlins now, while the integrations are young and the habits are
    still forming. Retrofitting governance later is possible. It's just the
    expensive kind of possible.

    This article was produced as part of TechRadarPro's Expert Insights
    channel, where we feature the best and brightest minds in the technology
    industry today. The views expressed here are those of the author and are
    not necessarily those of TechRadarPro or Future plc. If you are interested
    in contributing, find out more here:
    https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/tame-your-ai-gremlins-before-the-chaos-becomes-permanent


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)