The internet is becoming a checkpoint
The deeper risk is not one bad law. It is the slow normalisation of permanent ID checks online.
9 min read · Apr 17, 2026

This started as a child safety debate
The public case for age verification is easy to understand. Children are online early, social platforms are powerful, harmful content spreads fast, and governments do not want to be seen doing nothing. That is why age assurance rules have moved so quickly from the margins into the centre of internet policy. In Australia, age restricted social media rules took effect on 10 December 2025, requiring platforms to take reasonable steps to stop under 16s from creating or keeping accounts. In the UK, strong age checks for pornography have been required since 25 July 2025, and Ofcom said in March 2026 that age checks were spreading more broadly across social media, dating, gaming, and messaging. Back in Australia, search engine age assurance requirements are due by 27 June 2026, with app distribution services following by 9 September 2026.

What this really means is that the debate has already moved past a few niche corners of the web. The internet is no longer being treated as one open environment. It is increasingly being treated as a space where access must be filtered, segmented, and proven before it is granted.
It is important to be precise here, because precision matters when a debate is emotional. Australia has not made government accredited Digital ID mandatory for social media age checks. The official Digital ID System site says clearly that people will not be forced to use a government accredited Digital ID to prove they are at least 16, and that platforms must offer multiple ways to confirm age. The same official material also says Australia’s Digital ID system is voluntary and was set up to provide secure and convenient identity verification for online transactions with government and businesses. So the claim that the government has already forced everyone to use digital ID to log into the internet would be wrong. But the deeper concern does not vanish because the current law stops short of that. The infrastructure is still being built. A voluntary digital identity framework now exists, it can be reused across services, and it can be offered as one path inside a larger age assurance system. That may sound modest today, but infrastructure has a habit of becoming normal once it is available, technically workable, and politically useful.
The checkpoint logic is still spreading
This is where the bigger story begins. Even if one jurisdiction says a government ID system is optional, the broader pattern still points in one direction. Australia now has social media age restrictions in force, search engine age assurance deadlines coming in June 2026, and app distribution age assurance deadlines coming in September 2026. The UK has already normalised strong age checks for pornography and says age assurance is spreading across additional categories of online service. The European Commission says its age verification solution is technically ready as of 15 April 2026 and will be available soon as an app, with several member states already preparing customised versions. California’s Digital Age Assurance Act is due to become operative on 1 January 2027 and would require operating system providers to collect age data at setup and provide age bracket signals to apps. Seen one by one, these moves can sound separate and sensible. Seen together, they describe a real architectural shift. The internet is slowly being reorganised around proof of status. Not always proof of full identity yet, but proof of enough identity attributes to decide whether you may proceed. That is what a checkpoint system looks like in digital form.
Once the device becomes the gatekeeper everything changes
The most important line in this debate may be the one between service level checks and device level checks. A website that asks for proof of age is one thing. An operating system or app store that classifies the user and passes age bracket signals to software is something much bigger. California’s law points directly in that direction by requiring age information at account setup and a real time signal for whether a user is under 13, between 13 and 16, between 16 and 18, or 18 and over. Supporters will say this is cleaner because it avoids every app asking the same question over and over. That is partly true. But centralising the checkpoint does not remove the checkpoint. It makes it more foundational. Once the device itself becomes part of the permission layer, access control stops feeling like an exception and starts feeling like the default. The phone, tablet, or computer becomes a gatekeeper. That changes the culture of digital life. It also puts enormous pressure on smaller developers, open systems, and alternative platforms that were never designed to serve as identity routing infrastructure. This is where things change from a policy debate about children into a structural debate about who controls access to the web.
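To make the device level shift concrete, here is a minimal sketch of what an operating system age bracket signal might look like, using the four brackets the article describes. Every name and the API shape are invented for illustration; California's law specifies outcomes, not this interface.

```python
from datetime import date
from enum import Enum


class AgeBracket(Enum):
    """Coarse brackets of the kind a device-level signal might expose."""
    UNDER_13 = "under_13"
    FROM_13_TO_15 = "13_to_15"
    FROM_16_TO_17 = "16_to_17"
    ADULT = "18_plus"


def bracket_for(birth_date: date, today: date) -> AgeBracket:
    # Compute completed years of age, then map to the coarse bracket the
    # OS would hand to apps instead of a birthdate or identity document.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return AgeBracket.UNDER_13
    if age < 16:
        return AgeBracket.FROM_13_TO_15
    if age < 18:
        return AgeBracket.FROM_16_TO_17
    return AgeBracket.ADULT
```

The design point is that apps would receive only the bracket, never the birthdate. That is less revealing than a document upload, but it also means every app on the device is wired into a single permission layer, which is exactly the centralisation the paragraph above describes.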
Privacy problems do not disappear because the goal sounds noble
The strongest argument for these systems is also the one that can blind people to their risks. Child safety is a real concern, so many people assume the technical details can be sorted out later. That is a mistake. Australia’s privacy regulator warned in guidance published on 17 March 2026 that organisations should escalate to more intrusive personal information handling only as necessary, should not seek to reveal identity when validating age, should minimise sensitive information, and should destroy biometric or identity document inputs once the purpose is met. That is careful language, and it tells you something important. The privacy risks are not theoretical. They are already serious enough that the regulator is spelling out how to avoid overreach. Even eSafety’s own public material on social media age restrictions makes clear that platforms may use signals such as language style, interaction patterns, school schedule patterns, visual content analysis, and audio analysis to review accounts that appear underage. So even when a government says it is not forcing one single digital ID method, the overall ecosystem can still drift toward more behavioural monitoring, more biometric analysis, and more sensitive data handling than ordinary users ever expected.
The internet can become more watched even when you are not handing over your passport
This is the part people often miss. Many imagine the danger only as a blunt requirement to upload a passport or driver licence before they can use a service. That is one risk, but it is not the only one. Age assurance can also work through inference, estimation, and layered escalation. That means systems may watch behaviour, analyse photos, analyse voice, score risk, and decide whether someone should be challenged for more evidence. In other words, the price of access can become more observation rather than more paperwork. The OAIC guidance makes that tension explicit by describing age assurance as a broad umbrella that includes verifying, estimating, and inferring age or age range, then urging entities to choose reasonably necessary and proportionate methods. Europe’s privacy preserving work also shows why this matters. France’s CNIL has long argued for third party privacy preserving models where the relying service learns only the necessary proof and not the user’s identity, while Google has published zero knowledge proof libraries aimed at letting someone prove they are over 18 without revealing anything else. Those ideas exist because the ordinary path is risky. If the common age check were already safe, private, and narrow, nobody would need to build better cryptographic alternatives.
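The third party model CNIL argues for can be sketched as follows: an age provider verifies the user privately, then issues a token asserting only one fact, so the relying service learns "over 18" and nothing else. This is a toy stand-in, not CNIL's design or Google's library. Real systems would use public key signatures or zero knowledge proofs; HMAC with a shared verification key is used here only because it is in the Python standard library, and every name is invented.

```python
import hashlib
import hmac
import json
import secrets

# Shared issuer/verifier key, purely for illustration. A real deployment
# would use an asymmetric scheme so the verifier cannot mint tokens.
VERIFY_KEY = secrets.token_bytes(32)


def issue_age_token(over_18: bool) -> dict:
    # The issuer has checked identity documents privately. The token it
    # emits carries no name, birthdate, or account identifier.
    claim = {"over_18": over_18, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(VERIFY_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "sig": sig}


def verify_age_token(token: dict) -> bool:
    # The relying service checks the signature and learns one boolean.
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(VERIFY_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["sig"]) and token["claim"]["over_18"]
```

Even this toy version shows the separation of knowledge: the issuer knows who you are but not where you go, and the site knows you qualify but not who you are. The cryptographic work CNIL and Google describe exists to make that separation actually hold against collusion and tracking, which a shared-key sketch like this does not.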
Courts are already warning that broad age gates hit speech and access
There is another problem here that often gets pushed aside in public debate. Age gates are not only about safety and privacy. They are also about speech, lawful access, and the shape of civic life online. Courts in the United States have already shown discomfort with broad online age verification mandates. Reuters reported in December 2025 that a federal judge blocked Texas from enforcing a new law requiring app stores and developers to verify users’ ages, saying it likely violated First Amendment protections. Reuters also reported in February 2026 that a federal judge blocked Virginia’s law restricting social media use for children, finding likely constitutional problems because it burdened speech rights too broadly. The Electronic Frontier Foundation has been blunt in its criticism, arguing that age gating laws create unnecessary barriers to access protected speech, hurt small and open source developers, and expose everyone to privacy and security harms. People do not need to agree with every advocacy group to see the underlying point. Once identity style checks spread beyond a narrow set of truly age restricted services, they start changing who can speak, who can read, and who can participate without friction, delay, or fear. That is not a side effect. It is part of the system.
Better technology exists but it is not yet the default
There is no need to pretend the only options are total openness or full blown surveillance. Privacy preserving systems do exist in principle, and that matters. The European Commission’s current age verification blueprint includes zero knowledge proof technology and says the app is technically ready. Google’s published zero knowledge libraries aim to let developers build age assurance that proves a narrow fact without revealing other personal data. CNIL’s demonstrator was built around the same basic idea, with a trusted third party and proofs that do not directly reveal the user’s identity or even the site requesting the age check. That is a far better direction than a web built on constant document uploads, biometric storage, and ad hoc behavioural profiling. But the problem is that these cleaner systems are still not the norm across the internet. The politics have outrun the privacy engineering. Governments are mandating outcomes first and leaving the hard implementation details to platforms, vendors, app stores, and regulators after the fact. That is why public trust remains fragile. People are being told the checkpoint is necessary before they have been shown a checkpoint system that is truly minimal, secure, fair, and easy to contest when it gets something wrong.
The deeper cultural shift is the normalisation of permission based internet access
What this really means is that the biggest change may not be technical at all. It may be cultural. The old internet carried a messy but powerful assumption that access came first. Unless something was clearly illegal or tightly restricted, you could arrive, read, browse, and participate with relatively little friction. The new model is different. It assumes more categories, more gates, more proof, more status checks, and more systems deciding what kind of user you are before you get through the door. Sometimes that will be sold as age assurance. Sometimes it will be sold as safety by design. Sometimes it will be presented as a harmless signal rather than a full identity check. But the lived effect can still be the same. The web becomes a place where permission is continuously verified. Australia’s official position on Digital ID shows that governments know people are nervous about this, which is why the current line is that accredited Digital ID is voluntary and privacy protections are built in. That is welcome. But it is also why the next few years matter so much. Once the infrastructure of checkpoints becomes normal, the argument is no longer about whether to build it. It becomes about what else it should be used for.
What changes next
The next stage will be shaped by three fights happening at once. One is a policy fight over how far governments push age assurance into search, app stores, operating systems, and other layers of digital life. Another is a technology fight over whether privacy preserving proofs become the standard or whether crude inference, facial analysis, and repeated identity requests remain the everyday norm. The third is a legal and cultural fight over whether the public accepts a more permission based internet as the new price of safety. My own view is that this should worry people even if they support stronger child protection. A safer internet for children should not quietly become an internet of permanent checkpoints for everyone else. Australia has not crossed all the way into mandatory government digital ID for general login, and it would be wrong to say it has. But the pattern is clear enough to justify concern. Social media restrictions are already in force, search engine age assurance is coming in June 2026, app distribution requirements follow in September 2026, Europe now has a technically ready age verification app, and California is trying to move age bracketing into the device layer from 2027. The deeper risk is not one dramatic switch. It is the steady construction of a web where proving yourself becomes ordinary.
