Parenting the Machine: The EU’s Attempt to Guide an AI That’s Growing Up Too Fast
When you’re a parent, you learn fast that toddlers live in their own world. You spend those early years teaching simple rules and hoping a few of them land.
As they grow, you add the deeper lessons—empathy, responsibility, a sense of right and wrong. Now picture skipping most of that guidance and meeting your child again only once they’ve become a teenager.
They believe they already understand life. They resist direction. They push back on every attempt to shape their behavior.
That’s the situation Europe faces as it tries to regulate artificial intelligence. Not guiding something still forming—chasing something already in motion.
The EU Artificial Intelligence Act and Levels of Risk
In 2024, the European Union passed the EU Artificial Intelligence Act, the world’s first comprehensive legal framework for artificial intelligence. Its goals are straightforward: to ensure AI systems are safe, transparent, and accountable, while still leaving room for innovation. In that sense, if you think like an AI “parent” building within this system, you are not expected to absorb the entire rulebook at once. You focus on the category your AI falls into and “teach” it the specific rules that apply. The Act organizes these systems by levels of risk:
Unacceptable risk: banned outright. This covers practices such as harmful manipulation, exploitation of vulnerable groups, social scoring, and certain forms of biometric surveillance. The rationale is simple—these systems were not designed for legitimate use. Harm is the product, not a side effect.
High risk: allowed, but heavily regulated. Medical AI, hiring tools, critical infrastructure—these operate where mistakes have real consequences. The Act does not try to stop them. It demands they prove they can be trusted: rigorous documentation, risk assessments, human oversight at every stage.
Limited risk: allowed, with basic transparency requirements. Chatbots, image generators, text tools. The rule here is not about danger—it is about honesty. If you are talking to an AI, you should know it. The Act simply requires that developers say so.
Minimal risk: essentially unrestricted. Video game NPCs, spam filters, recommendation engines. These systems do not operate near sensitive decisions or personal rights. The Act leaves them alone—for now.
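To make these tiers easier to reason about in code, here is a minimal sketch of how a team might encode them internally. The tier names follow the Act's structure as described above, but the obligation lists are simplified paraphrases rather than legal text, and every function and variable name is illustrative.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Simplified, paraphrased obligations per tier -- the Act's real requirements
# are far more detailed; this only mirrors the structure outlined above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: [],  # banned outright, nothing to comply with
    RiskTier.HIGH: ["risk assessment", "technical documentation", "human oversight", "event logging"],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],
}


def release_checklist(tier: RiskTier) -> list[str]:
    """Return the simplified obligations a system in this tier must satisfy before release."""
    if tier is RiskTier.UNACCEPTABLE:
        raise ValueError("Unacceptable-risk systems cannot be deployed under the Act.")
    return OBLIGATIONS[tier]


print(release_checklist(RiskTier.LIMITED))
```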
By 2026, the focus shifts to high-risk systems—those tied to healthcare, hiring, and critical infrastructure. Companies must meet stricter standards: risk assessments, documentation, human oversight. These rules are less about stopping behavior and more about forcing it into the light. Explain your data. Justify your decisions. Show your work.
Then, into 2026 and 2027, the broader transparency and compliance requirements come fully into force. By this point, many AI systems are already deployed, already shaping decisions, already embedded into daily life. The rules are no longer guiding something in development. They are catching up to something in motion.
That is where the weight of the rulebook becomes real. Because it is one thing to teach rules early, when habits are still forming. It is another to introduce them after patterns are set, after incentives are locked in, after the system has learned how to operate without them.
The companies building these systems are not starting fresh. They have users, revenue, momentum. Regulation meets them mid-stride, asking not just for compliance, but for change. And so the Act does something harder than prevention. It attempts correction. It asks developers to retrofit accountability into systems that were not originally designed for it. In other words, it turns parenting into governance. And governance does not assume the lesson will stick. It requires proof that it has.
What This Means For Developers Outside the EU
At first glance, it is easy to assume the EU Artificial Intelligence Act is a regional rulebook, something only companies in Europe should worry about. That assumption doesn't hold for long. The Act is not about where the developer sits; it is about where their AI ends up. The European Union is a market of 27 countries and roughly 450 million people. If your system is used in that market, it does not matter whether you live in Toronto, San Francisco, or Singapore: you will be pulled into its orbit.
The moment your AI interacts with EU users, influences EU decisions, or operates within that ecosystem, the rules follow. For many developers, that is where the tension begins.
Because by the time these rules apply, their systems are no longer in their infancy. They are live. They have users, workflows, habits. They have been optimized for speed, engagement, and performance, not necessarily for transparency or accountability. In other words, they are already teenagers, and like teenagers, resistant to change. The way they were built, the incentives behind them, the shortcuts taken early on: these patterns harden over time. So when regulation arrives, it feels like an interruption. Add documentation. Add oversight. Explain decisions that were never designed to be explained. To many developers, it feels like a full-on redesign and overhaul.
For developers outside the EU, this creates a choice. You can treat compliance as a patch, something layered on after the fact, added just enough to pass. Or you can treat it as a reset, rethinking how your system behaves at its core. The first is faster; the second is harder. Changing a system early is education. Changing it late is a negotiation, and negotiation, especially with something that already “works,” is slow, expensive, and often resisted at every level, from code to culture. That is the deeper impact of the Act beyond Europe. It does not just extend rules across borders. It forces developers everywhere to confront a more uncomfortable question: not just whether their AI can comply, but whether it was ever built to.
Suggestions For Developers
If the EU AI Act feels like parenting a teenager who already thinks they know everything, brute force will not work. You do not win by shouting new rules at a system that was never built to follow them. You have to understand where the resistance actually lives. For many developers, the instinct will be to retrofit—do the bare minimum, add a transparency layer, call it compliant. But if the underlying system cannot explain itself, surface-level fixes will not make it accountable. They will just make it look like it is.
So the first shift is structural: build for explanation, not just performance. If a system cannot clearly answer why it made a decision, that is not just a regulatory risk; it is a design flaw.
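As a rough illustration of what "build for explanation" can mean in practice, the sketch below pairs every automated decision with the evidence behind it. The scoring logic, feature names, and threshold are all hypothetical; the point is that the "why" is recorded at decision time rather than reconstructed later.

```python
from dataclasses import dataclass, field


@dataclass
class ExplainedDecision:
    """A decision plus the evidence behind it, so 'why' stays answerable later."""
    outcome: str
    score: float
    contributions: dict[str, float] = field(default_factory=dict)


def score_applicant(features: dict[str, float],
                    weights: dict[str, float],
                    threshold: float = 0.5) -> ExplainedDecision:
    # Hypothetical linear scorer: each feature's contribution is kept, not discarded,
    # so the system can state which inputs drove the outcome.
    contributions = {name: weights.get(name, 0.0) * value for name, value in features.items()}
    score = sum(contributions.values())
    outcome = "approve" if score >= threshold else "refer_to_human"
    return ExplainedDecision(outcome=outcome, score=score, contributions=contributions)


decision = score_applicant({"experience_years": 0.4, "skills_match": 0.3},
                           {"experience_years": 1.0, "skills_match": 0.8})
print(decision.outcome, decision.contributions)
```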
The second is to treat risk classification as a design specification. Don’t wait to be told your system is “high-risk.” Assume it might be, and build with that weight in mind. It is far easier to relax constraints later than to introduce them once everything depends on speed and scale.
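One lightweight way to treat classification as a design specification is to declare the tier you are designing for and fail the build when the controls that tier implies are missing. The tier, control names, and project state below are placeholders, not requirements drawn from the Act itself.

```python
# Declare the assumed tier up front; default to the stricter tier until a
# legal or compliance review says otherwise. Everything here is illustrative.
ASSUMED_TIER = "high"

REQUIRED_CONTROLS = {"risk_assessment", "human_oversight", "event_logging", "data_documentation"}
IMPLEMENTED_CONTROLS = {"human_oversight", "event_logging"}  # placeholder project state

missing = REQUIRED_CONTROLS - IMPLEMENTED_CONTROLS
if ASSUMED_TIER == "high" and missing:
    # Run this in CI so the gap is visible on every build, not at audit time.
    raise SystemExit(f"Designing for high risk but missing controls: {sorted(missing)}")
```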
Third, document as you go. Not because regulators will ask, but because future you will. Systems evolve quickly, and the moment you need to justify a decision made six months ago, memory will not be enough. Documentation is not bureaucracy; it is continuity.
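Here is a minimal sketch of "document as you go," assuming a simple append-only JSON Lines log. Real documentation under the Act is far broader, but the habit of writing down the why at the moment a decision is made is the point. The field names and file path are illustrative.

```python
import datetime
import json


def log_decision(record: dict, log_path: str = "decision_log.jsonl") -> None:
    """Append one timestamped entry per significant design or model decision."""
    entry = {"timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(), **record}
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")


log_decision({
    "decision": "switched training data to internal dataset v3",
    "reason": "vendor dataset lacked provenance documentation",
    "owner": "ml-platform team",
})
```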
And finally, resist the temptation to build two versions of everything—one compliant, one not. It seems efficient in the short term, but it splits your system’s identity. Over time, that divergence becomes harder to maintain than a single, higher standard. In other words, don’t parent reactively, parent early, parent consistently.
The Clock Is Already Running
What the EU is attempting with the AI Act is not just regulation. It is course correction. And course correction is always harder than setting direction from the start. Right now, AI sits in its awkward teenage years—too developed to be easily reshaped, not mature enough to be fully trusted. And like any teenager, it does not readily accept that it still has something to learn. That is the challenge Europe has decided to take on: stepping in mid-growth and trying to instill boundaries, accountability, and a sense of consequence. For developers, whether inside the EU or far outside it, the lesson is the same. The longer you wait to build those things in, the harder they become to add later.
To discuss these regulations and how to navigate them, please DM me, Michael Sorrenti.
