Competitive Topologies & The Monopoly Problem: Why Democratic Institutions Need a Theory of Competition
A Response to Eric Schmidt and Andrew Sorota's "This Is No Way to Rule a Country"
Raeez Lorgat
Eric Schmidt and Andrew Sorota have diagnosed a real crisis. Democratic institutions are losing legitimacy at precisely the moment machine decision systems become powerful enough to automate consequential decisions. Their worry is structural: whoever controls the model, the data, and the compute will in effect govern whomever the model decides things for, and if that substrate is supplied by a platform monopolist or an authoritarian, democratic self-government becomes a veneer over infrastructural capture. Their prescription (deliberation-augmentation in the manner of Taiwan's vTaiwan, robust identity verification, content authenticity) is a response to a concentration risk they take seriously and describe accurately. It deserves a serious reply rather than a dismissal.
We agree with more of their argument than may be obvious. The concentration worry is real. Any proposal to re-engineer the substrate of governance inherits it, including the one sketched below. We also grant that a verification layer, one that establishes who is who and which artifacts are genuine, is a precondition for any digital-first polity worth having. What we dispute is the implicit theory of the disease. Schmidt and Sorota locate the failure of democratic institutions on the voice margin: the channels by which citizens speak, deliberate, and aggregate preferences have been degraded, and better tooling can restore them. We locate the deeper failure on the exit margin, in Albert Hirschman's sense (Exit, Voice, and Loyalty, Harvard, 1970). Democratic governments are territorial monopolies. Between elections, citizens have voice but almost no exit, and it is monopoly without exit, rather than monopoly with imperfect voice, that bends institutions toward decay.
By competitive topology we mean the graph of governance relations: which authorities can be chosen, unchosen, or bypassed, by whom, at what cost, and over what domain. By higher-dimensional governance we mean the idea, elaborated by Bruno Frey and Reiner Eichenberger as functional, overlapping, competing jurisdictions (FOCJ, International Review of Law and Economics, 1996), that governance need not partition territory; multiple jurisdictions can coexist over the same geography along different functional axes. By exit rights we mean, following Hirschman, the capacity to withdraw patronage from an institution at tolerable cost, whether by leaving, by shifting compliance, or by choosing a competing provider. The argument is that digital infrastructure is now lowering the cost of exit across several dimensions faster than it is lowering the cost of voice, and that this shift, properly channeled and bounded by floor standards, is a candidate structural remedy Schmidt and Sorota gesture at but decline to name.
We are aware, and Hirschman warned, that this argument can cut the other way. If exit becomes too easy for the most quality-sensitive members of an institution, they leave first, the voice pool deteriorates, and the "lazy monopoly" slides into steeper decline rather than reform. East Germans who could reach West Berlin did not generally stay to reform the GDR. Our claim is not that exit always dominates voice. It is that in the contemporary democratic case, exit has been so thoroughly suppressed, by the costs of physical relocation, the indivisibility of citizenship, and the territorial framing of almost every public service, that restoring it, under conditions spelled out below, is corrective rather than corrosive. Whether that restoration in fact complements voice, or cannibalizes it, is the empirical question this essay stakes out. We do not claim to settle it.
The Decay Function of Monopolies Without Exit
Hirschman's insight was that organizations, firms and states alike, recover from slack through the joint operation of exit and voice; remove one and the other bears strain it was not designed to carry. A monopoly whose customers cannot leave is not disciplined by their complaints alone. Managers become selectively deaf to voice precisely because voice cannot be backed by the threat of departure. Institutions in that position optimize for internal convenience, tolerate capture by insiders, and maintain outdated processes because no mechanism makes the cost of doing so legible. This is not a theory about bad intentions. It is a theory about feedback.
Democratic institutions are, in their territorial dimension, monopolies of this kind. The well-documented symptoms are consistent with a monopoly whose customers cannot easily leave: declining trust (Pew Research Center, Americans' Views of Government, 2024; Edelman Trust Barometer, 2024); chronic procurement failures; the U.S. Department of Defense's inability to earn a clean audit opinion since its first full agency-wide audit in 2018, across successive reports covering trillions of dollars in reported assets (DoD Office of Inspector General, annual financial statement audits); legacy software running critical systems (Government Accountability Office, Federal Agencies Need to Address Aging Legacy Systems, GAO-16-468). These are not proof of lazy citizens or of incompetent elites. They are what Hirschman would have predicted once territorial exclusivity foreclosed exit and the electoral cycle stretched voice across four-year intervals.
Polarization and the sense that contemporary politics has become negative-sum are documented by serious political-science work (McCarty, Poole, and Rosenthal, Polarized America, 2006; Mason, Uncivil Agreement, 2018; Levitsky and Ziblatt, How Democracies Die, 2018), and the underlying shift in relative fortunes shows up in concrete measures of global output shares. At market exchange rates, the G7's share of world GDP has fallen from two-thirds in the early 1990s to under half today; at purchasing-power parity the shift is sharper, with the G7 share below a third and a BRICS bloc whose PPP output now exceeds it (World Bank World Development Indicators; IMF World Economic Outlook, October 2024). This is a real shift, and it raises the stakes of internal political conflict. The causal direction runs both ways. The clean claim is narrower: when relative fortunes tighten and exit is foreclosed, incumbents have fewer reasons to invest in substantive reform.
This is the condition Schmidt and Sorota correctly identify. Populations that cannot leave and whose voice cycles are long can, when institutions underperform, turn to whatever alternative promises effectiveness: authoritarian strongmen, technocratic programs, or, increasingly, algorithmic ones. Their response is to strengthen voice through better tooling. Ours is that voice-tooling alone cannot discipline an institution whose customers cannot walk out. The two responses are not mutually exclusive. They are aimed at different margins of the same system.
The Prior Literature on Jurisdictional Competition
The claim that jurisdictions can be disciplined by the mobility of their citizens is neither new nor naïvely optimistic. Charles Tiebout formalized it (Journal of Political Economy, 64:5, 1956) as a pure theory of local public goods: if households can sort across jurisdictions costlessly and with full information, and if the jurisdictions capture no externalities, then local governments will be disciplined to provide the bundles their residents want. Tiebout's seven assumptions have been the centerpiece of a decades-long debate. They are demonstrably unrealistic for ordinary citizens. Truman Bewley ("A Critique of Tiebout's Theory of Local Public Expenditures," Econometrica, 1981) cataloged the ways the frictionless-sorting premise fails under more realistic conditions, which has been the main reason economists have treated the Tiebout model as suggestive rather than operational for human beings.
The debate inside corporate law is sharper and more useful here. Roberta Romano ("Law as a Product: Some Pieces of the Incorporation Puzzle," Journal of Law, Economics, and Organization, 1:2, 1985) argued that jurisdictional competition among U.S. states for corporate charters produced a race to the top: Delaware's dominance reflects specialized courts, a developed body of precedent, and efficient rule-production. William Cary had earlier framed the opposite charge ("Federalism and Corporate Law: Reflections upon Delaware," Yale Law Journal, 83, 1974): the same competition rewards states that cater to managers at the expense of shareholders and outsiders, producing a race to the bottom on dimensions that the choosing party does not fully internalize. Lucian Bebchuk ("Federalism and the Corporation," Harvard Law Review, 105, 1992) and later work with Ferrell refined the critique: where the gains from chartering accrue to insiders while costs are externalized onto outsiders, competition systematically misaligns. Frey and Eichenberger's FOCJ proposal (1996) tried to thread the needle by disaggregating sovereignty into functional, overlapping jurisdictions, letting competition operate along one axis at a time rather than bundling every public good under a single territorial banner. Elinor Ostrom's Governing the Commons (Cambridge, 1990) showed, from a different direction, that polycentric orders can sustain common-pool resources without collapsing into either the tragedy of the commons or the rigidity of centralized planning, provided certain design principles are met.
Two observations follow. First, the received wisdom is neither "competition good" nor "competition bad." It is that jurisdictional competition races to the top on the dimensions for which the chooser internalizes costs and benefits (dispute resolution speed, clarity of chartering rules, quality of court opinions), and races to the bottom on dimensions where the chooser does not (third-party externalities, distributional consequences, systemic risk). Any honest proposal for expanding exit must therefore say which dimensions it expects to improve and which it requires floor standards to protect. Second, the canonical examples adduced for or against the competition thesis are themselves contested. Delaware's primacy has generated a large body of work on both sides. China's Special Economic Zones are cited sometimes as a triumph of regional experimentation (Naughton, The Chinese Economy, 2007) and sometimes as evidence that the most dynamic growth occurred outside the zoned structures, driven by township-and-village enterprises and rural entrepreneurship the center did not plan (Huang, Capitalism with Chinese Characteristics, 2008). The SEZ case also sits uneasily with the competition thesis in a more basic sense: economic decentralization was permitted within a framework of unchallenged political centralization. The example shows that local experimentation can generate growth. It does not show that exit disciplines a central monopolist who can, at any moment, withdraw the experiment.
The useful move is to ask what the digital layer changes. Our claim is narrower than Tiebout's. It is not that human citizens can costlessly sort across jurisdictions. It is that software-operated entities, legal entities whose compliance behavior is executed by software rather than by humans, can approximate the Tiebout assumptions on the dimensions where the assumptions were most implausible for humans: physical relocation cost, information asymmetry about jurisdictional rules, and social ties that bind human decisions to incumbent regimes. Where Tiebout's mobile citizen was an idealization, the mobile entity can be built. This is a real argument and we will make it below, but it is not a general claim that exit solves everything, and it does not purport to escape the race-to-the-bottom objection. It localizes the question: on which dimensions can mobility produce better outcomes, and on which must it be disciplined by shared floor standards?
Network Topology as a Frame
By topology we mean, here, two things at once: the graph of who can address whom within a governance system (information and authority flow), and the graph of which services a citizen can choose, drop, or compose without relocating (substitutability). Both are structural; both matter. The former decides how signals move; the latter decides how institutions are disciplined.
Modern democracies operate with an authority graph that is largely hub-and-spoke and with a substitutability graph that is largely trivial. National governments sit at a hub; citizens sit at the leaves; decisions flow downward through bureaucracy; voice flows upward through intermittent elections. For most public services, the citizen's substitute set is the empty set: a single provider schools one's children, insures one's risks, and resolves one's disputes. This topology was adequate when coordination across distance was costly and when citizens' public-service choices were in practice tied to a single geography. It is now showing strain of a kind that deliberation tools, however good, cannot relieve: a voice channel with no exit channel at its other end is the defining shape of monopoly.
FOCJ-style proposals (Frey and Eichenberger, 1996; Vaubel's later extensions) imagine an alternative: overlapping jurisdictions whose boundaries are functional rather than territorial, so that citizens can be members of one jurisdiction for schooling, another for dispute resolution, another for environmental protection, each of which competes to retain its domain. This is the serious academic ancestor of what looser writing calls "higher-dimensional governance," and we use the term in that sense here. The substantive test for any such proposal is whether the functional jurisdictions can be honestly separated, so that exit on one axis does not collapse provision on another, and whether floor standards can be held across them.
vTaiwan is worth naming precisely. It is a Pol.is-based deliberation platform developed with Audrey Tang and collaborators, designed to surface cross-cleavage consensus on specific policy questions within the Taiwanese state (see Small et al., "Polis: Scaling Deliberation by Mapping High-Dimensional Opinion Spaces," Recerca, 2021). It is an effective deliberation tool. It is not a failed exit tool, because it was never meant to be one. Schmidt and Sorota's reliance on it is appropriate for the diagnosis they are making. Our disagreement is with the premise that voice-augmentation alone addresses monopoly pathology, and nothing in vTaiwan's design requires one to hold that premise.
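The mechanism can be made concrete. Pol.is, as described above, reduces a matrix of agree/disagree votes to a low-dimensional opinion map and clusters participants within it. The sketch below is a simplified reconstruction of that pipeline (PCA via SVD plus a crude two-group split along the first principal axis), not Pol.is's actual code; the vote matrix is invented.

```python
# Simplified Pol.is-style pipeline: participants' agree/disagree votes
# form a matrix, which is projected to two dimensions and split into
# opinion groups. Illustrative reconstruction only.

import numpy as np

# Rows: participants; columns: statements; +1 agree, -1 disagree, 0 pass.
votes = np.array([
    [ 1,  1, -1, -1],
    [ 1,  1, -1,  0],
    [-1, -1,  1,  1],
    [-1,  0,  1,  1],
    [ 1,  1,  1,  1],   # cross-cleavage participant
])

# PCA via SVD: project the centered vote matrix onto the top two
# principal components to get a 2-D "opinion map".
centered = votes - votes.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T        # shape: (participants, 2)

# Crude two-group clustering on the first principal axis; real Pol.is
# uses proper clustering over the projected space.
groups = (coords[:, 0] > 0).astype(int)
print(coords.round(2))
print(groups)
```

The cross-cleavage participant (the all-agree row) lands near the middle of the map, which is exactly the signal the consensus-surfacing step looks for.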
Exit, Voice, and the Lazy-Monopoly Warning
Hirschman's 1970 argument is more specific than is sometimes remembered. Exit and voice are not pure complements; they are partial substitutes mediated by loyalty. When exit becomes easy, the first to leave are precisely the members whose complaints carry diagnostic information. Their departure can deteriorate the voice pool, the "lazy monopoly" outcome, leaving the institution with the apathetic and the loyal, and accelerating its decline. This is the strongest objection to any essay of the sort we are writing, and it must be answered on its own terms.
Two things would have to be true for cheap exit to produce a lazy-monopoly outcome in the democratic case. First, the exit-capable population would have to be disproportionately drawn from the voice-active. Second, the voice channel would have to depend on the presence of that population to produce signal. Both are plausible conditions, and neither is automatically defeated by digital infrastructure. We therefore make the argument conditionally. Our claim is not that lowering exit costs is costless. It is that, in current conditions, the marginal product of another unit of voice-tooling is low, because the voice channel is already saturated with well-designed mechanisms that institutions ignore, while the marginal product of a unit of credible exit capacity is high, because the absence of exit is what makes the voice channel ignorable in the first place. If we are wrong about that relative elasticity, Hirschman's warning wins.
The objection cuts harder on a second front. The early movers in any newly available exit channel are typically the capital-rich and voice-capable: those with the resources, legal advice, and optionality to leave. Their departure can leave a rump population bearing the fixed costs of the jurisdiction they left. We take this seriously, and it is the basis for the floor-standards point we develop below. An honest defense of expanded exit must specify which externalities a departing entity carries with it (carbon, financial-systemic risk, money-laundering exposure, distributive burden) and which public-good contributions it must continue to make even once it has exited administratively. Without that specification, the argument collapses into a libertarian arbitrage story and inherits every objection to such stories.
What we can argue positively is narrower, but real. Most of the damage a modern state does to its least mobile citizens does not derive from its inability to hear them; it derives from its inability to lose them. Even a perfectly deliberative legislature will ignore preferences it is under no structural pressure to honor. Mechanisms that reduce the cost of exit along specific axes, among them business registration, dispute resolution, capital formation, professional licensing, and research funding, put structural pressure on the territorial monopolist without requiring any citizen to emigrate. That pressure operates continuously, between elections rather than on them. It is, in effect, a market for the competencies states claim to provide. Jurisdictions that provide those competencies badly begin to lose the activity that fills their treasuries, and the bureaucracies that sit atop those treasuries become more legible to the legislature that authorizes them. This is a Tiebout-style argument with two changes: the mobile unit is a software-operated legal entity rather than a human household, and the externality-heavy axes are explicitly excluded from the competition.
Infrastructure as a Boundary Condition, Not a Solution
Representative democracy emerged under binding logistical constraints. You could not gather a million people into a room; you could not aggregate a million ballots in a day; you could not transmit preferences faithfully across a continent. Much of what we now call "democratic deliberation" is the institutional residue of those constraints. As the constraints soften, the institutions that were shaped by them should be subject to scrutiny rather than preservation for its own sake. This is the narrow, sharable point in both Schmidt and Sorota's argument and ours.
Two families of mechanism have become newly operable. The first, delegated or "liquid" democracy, lets citizens delegate voting power issue by issue and revoke the delegation at will (Ford, "Delegative Democracy," 2002; Green-Armytage, 2015). The empirical record is genuinely mixed. LiquidFeedback inside the German Pirate Party and related experiments consolidated voting weight into a small number of super-delegates, producing a new concentration pattern rather than dissolving the old one (Kling et al., "Voting Behaviour and Power in Online Democracy," Policy & Internet, 2015). This is not a refutation of delegated mechanisms; it is an open problem about sybil-resistance, delegate accountability, and default settings. Calling it a solved design would be premature.
The second family comprises preference-intensity aggregation rules, most prominently quadratic voting (Lalley and Weyl, "Quadratic Voting: How Mechanism Design Can Radicalize Democracy," AEA Papers and Proceedings, 108, 2018). Quadratic voting has been piloted, most notably by the Colorado House Democratic caucus for internal budget prioritization, and in field experiments by the RadicalxChange (RxC) community. These pilots are informative, not dispositive. Taiwan's civic-tech stack is often cited in this connection; the Taiwan mechanism of record is Pol.is, which produces a two-dimensional map of opinion space via clustering. Pol.is and quadratic voting are distinct mechanisms, and the conflation is worth avoiding. Both Pol.is and quadratic voting carry promise. Both face unresolved questions about collusion resistance, quasi-linear utility assumptions, and base-rate effects in small populations.
We mention these because serious proposals to rethink governance infrastructure must engage the specifics of what has been tried and what has not been tried. Calling an experiment a proof is a category error that turns argument into advertisement. The honest statement is that these mechanisms are research-grade, not industrial-grade; the task is to develop them, not to declare them finished.
Parallel Experimentation and Its Limits
Part of what makes territorial monopoly brittle is that it cannot fail in pieces. A policy adopted at the national level stands or falls for everyone; the cost of a wrong guess is borne by the entire population, and incumbents internalize that risk as a bias toward conservative incrementalism. Decentralized orders, by contrast, can sustain partial failure: the successful variant spreads, the failed variant is contained. This is the core attraction of Ostrom's polycentric analysis (Governing the Commons, Cambridge, 1990) and of federalist "laboratories of democracy" arguments going back to Brandeis. Both rest on the premise that the knowledge required to govern well is distributed, local, and revised by experience.
The American federal system exhibits elements of this logic. States differ on healthcare financing, cannabis policy, occupational licensing, and criminal justice; outcomes are observable; diffusion happens, if slowly. The literature on policy diffusion (Walker 1969; Berry and Berry 1990; Shipan and Volden 2012) documents both the mechanism and its frictions: imitation is common but slow, and is often mediated by party networks rather than by impartial performance comparison. Interstate competition has produced clear gains on some axes (dispute resolution in Delaware courts, unemployment-insurance design) and clear losses on others (corporate-tax base erosion, regulatory arbitrage in consumer finance). The pattern is the one the prior section anticipated: gains where the chooser internalizes costs and benefits, losses where the chooser does not.
We noted the ambiguity of the Chinese case earlier. It bears repeating here because the SEZ story is invoked too readily as a parable of competitive governance. Huang (2008) and Rozelle and Hell (Invisible China, 2020) argue that much of China's growth happened outside the zones and that SEZ policy concentrated investment at the cost of regional inequality. Even to the extent the zones functioned as competitive laboratories, they did so inside a political system in which the central authority could and did override their terms at will. A framework that permits experimentation at the pleasure of a monopolist is no counterexample to the monopoly problem. It is a special case of it.
What digital infrastructure changes is narrower than a general "software-speed governance." Research in the legal-informatics literature, notably Merigoux, Chataing, and Protzenko's Catala (Proceedings of the ACM on Programming Languages, ICFP 2021) and the L4 project at Singapore's Centre for Computational Law, is developing ways to write rules in a form that is both human-readable and machine-executable. When that becomes feasible, three frictions fall together: the cost of comparing one jurisdiction's rules against another's, the cost of operating across jurisdictions, and the cost of observing outcomes. None of these is "software speed" in a serious sense. A policy is still implemented in flesh and institutions, and its effects still take years to reveal. What changes is the cost of comparison and of movement, and those are the costs that discipline a territorial monopolist.
The disanalogy with software deployment is worth stating plainly. Lessig's Code and Other Laws of Cyberspace (1999; revised Code v2, 2006) and Winner's The Whale and the Reactor (1986) argued, and subsequent work has developed, that code-as-law differs from law-as-law on dimensions the CI/CD metaphor flattens: reversibility, legitimacy, consent, exception-handling, and the treatment of lives already altered by a rule. A software deploy can be rolled back; a policy whose consequences have been lived through cannot be. In domains where these differences bind hard, among them fundamental rights, criminal justice, and civil liberties, no reasonable reading of digital infrastructure authorizes faster iteration as a norm. In narrower domains such as business registration, tax compliance, dispute resolution, and professional licensing, the disanalogy softens and rapid adjustment is compatible with the substantive stakes. The claim we defend is narrow: some governance can be rendered in software, and where it can, the cost of comparing and exiting falls.
Programmable Organizations and the Nominal-Effective Gap
One strand of Schmidt and Sorota's worry is that automated governance, by displacing discretionary human judgment, leaves nobody to hold accountable. This concern is sound. It is also why we should be careful about a common move in the adjacent literature, in which the mere fact that an organization is implemented in code, or that its governance runs through on-chain votes, is taken as evidence that it is decentralized. The distinction between nominal and effective decentralization is the central finding of a now-substantial body of work, and it applies to our own argument with full force.
Angela Walch's "Deconstructing 'Decentralization': Exploring the Core Claim of Crypto Systems" (in Brummer, ed., Cryptoassets: Legal, Regulatory, and Monetary Perspectives, Oxford, 2019) catalogues the gap between the rhetoric of decentralization and its operational reality: core developer concentration, validator concentration, governance-token concentration, dependency on oracles and off-chain inputs, and the reliance of nominally trustless systems on trusted parties at their edges. De Filippi and Wright, Blockchain and the Law: The Rule of Code (Harvard, 2018), extend the point into legal theory: "code is law" replaces one opacity with another, shifting the interpretive burden from the lawyer to the developer and leaving most non-technical users with less capacity to scrutinize the rules that bind them, not more. Empirical studies of governance-token distribution (Barbereau et al., "DeFi, Not So Decentralized: The Measured Distribution of Voting Rights," HICSS, 2023) typically find majority voting power concentrated in a handful of addresses. The historical cases are instructive and unflattering: The DAO (2016), the Parity wallet freeze, the Tornado Cash OFAC sanction, the Uniswap fee-switch debate. A serious engagement with decentralized organizations as a governance form must treat these as the data rather than as exceptions to it.
It follows that we should distinguish between two claims that are often run together. The first is that rules written in code, where appropriate, can be more legible, more comparable, and more portable than the same rules written in unstructured natural language. This is a narrow, testable, and largely correct claim, and it is the claim the research programs we have already cited, Catala, L4, and the broader literature on formal compliance, are working on. The second claim is stronger: that on-chain organizations are, as a consequence of being on-chain, governed by their stated rules rather than by whoever controls their privileged operational levers. This second claim is false as a generalization and dangerous as a premise. Any proposal that relies on programmable organizations inherits the burden of specifying who operates the substrate, how upgrade authority is distributed, how key-loss and capture are handled, and what accountability mechanism exists when effective control departs from nominal control. That burden applies to the proposal sketched in this essay.
A few jurisdictions have experimented with legal wrappers for such organizations. Wyoming's DAO LLC statute (2021, amended 2022) and the Republic of the Marshall Islands' DAO Act (2022) recognize these entities for limited purposes; both are best read as regulatory-arbitrage offerings for crypto-adjacent activity rather than as serious experiments in sovereign governance. Estonia's e-residency program is narrower than it is sometimes described: it enables non-residents to access Estonian digital services, register businesses, and use digital signatures under Estonian law, without conferring voting rights, physical residency, or EU citizenship (Kotka, Alvarez del Castillo, and Korjus, "Estonian e-Residency: Redefining the Nation-State in the Digital Era," 2015). These programs serve as existence proofs that digital legal participation is technically and legally tractable. They are not demonstrations that digital sovereignty has been built, and we are careful not to claim otherwise.
Prediction Markets and Their Actual Record
A further set of mechanisms that becomes cheaper under digital infrastructure is prediction markets, and the literature on them is older and more measured than their popular reputation suggests. Wolfers and Zitzewitz's survey ("Prediction Markets," Journal of Economic Perspectives, 18:2, 2004) and Arrow et al.'s Science piece "The Promise of Prediction Markets" (320, 2008) report that well-designed markets aggregate information from dispersed participants with calibration that typically meets or exceeds expert forecasts, especially on questions where base rates are stable and participation is broad. They also document the regime in which this claim has most force (high liquidity, well-specified questions, absence of thin-market manipulation) and the regime in which it has least force (sparse markets, whale concentration, and questions entangled with the market participants' own behavior).
Polymarket is the most visible current venue and a useful case. It priced the 2024 U.S. presidential outcome closer to the ultimate result than most public pollsters did ahead of election day, and it misfired on several smaller contests in the same cycle where volume was thinner. Academic assessments treat the platform as informative where liquid and unreliable where thin, which is what theory predicts. The honest statement is that prediction markets, where designed and populated well, carry information that survives comparison with expert forecasts. They are not oracles, and the conditions under which they function well are narrower than press accounts imply.
The relevance to governance is oblique. Robin Hanson's "futarchy" proposal ("Shall We Vote on Values But Bet on Beliefs?" Journal of Political Philosophy, 21:2, 2013) argues that citizens vote on the metric by which policies are judged and markets forecast which policies will best advance that metric. The proposal has known objections, among them goodharting of the chosen metric, manipulation by well-capitalized actors, and the distributional question of who can participate, and it has not been implemented at any governance scale that would settle them. We mention it because it illustrates a useful feature of the design space: decision mechanisms that separate the normative question (what do we value?) from the empirical one (which policy advances it?), each handled by an institution suited to it. Whether any such design survives contact with the manipulation and participation problems is an open question on which this essay does not take a position.
Verification Without Surveillance
Schmidt and Sorota are right that digital-first democratic participation requires a verification substrate, and right that any such substrate, done badly, becomes surveillance infrastructure that outlasts the legitimacy that authorized it. The cryptographic tools that mitigate this, zero-knowledge proofs and the broader family of privacy-preserving verifiable computation, are mature in parts and immature in others, and the distinction is worth keeping in view.
The underlying primitives are well established. zk-SNARKs and zk-STARKs now run in production for rollup systems processing significant transaction volume; the cryptography is no longer speculative. What has not been solved, and what Vitalik Buterin himself treats as unresolved ("What do I think about biometric proof of personhood?", 2023), is the identity layer on which such proofs depend. Proof-of-personhood systems, including Worldcoin, Sismo, Anon Aadhaar, and the BrightID family, each confront the same set of tradeoffs: sybil resistance versus biometric data concentration; key-loss catastrophe versus custodial recovery; state endorsement versus independence; regulatory fit versus technical purity. None of these tradeoffs is resolved in a way that would permit the substrate Schmidt and Sorota describe to be built today at population scale. The question "who is a legitimate participant?" is upstream of every voting or petitioning mechanism, and it is not answered by the availability of zero-knowledge proofs downstream of it.
What can be said is narrower. In bounded settings, such as member verification inside a chartered organization, eligibility for a specific benefit, or accreditation for a licensed role, the mathematics and the institutional arrangements are now adequate to verify a property without disclosing the underlying credential. That is a useful capability. Generalizing it to democratic participation at national scale is a program of institutional design, not a technical exercise, and it is the program in which Schmidt and Sorota's verification concerns and this essay's exit-rights concerns meet. Neither position is served by presenting the verification layer as a solved problem.
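One way to see mechanically what "verify a property without disclosing the underlying credential" means, stripped of the zero-knowledge machinery production systems use, is a Merkle membership proof: a registry publishes a single hash commitment to its roster of hashed credentials, and a member proves inclusion by revealing only a logarithmic-length path of sibling hashes, never the roster itself. This is a toy, not a zero-knowledge proof (real proof-of-personhood designs wrap this pattern in zk-SNARK circuits to hide even the leaf), and the function names are ours.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Build a Merkle tree bottom-up; returns the levels, leaves first."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        level = levels[-1]
        if len(level) % 2:                      # duplicate last node if odd
            level = level + [level[-1]]
        levels.append([h(level[i] + level[i + 1])
                       for i in range(0, len(level), 2)])
    return levels

def prove(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((level[index ^ 1], index % 2))  # (sibling, is-right-child)
        index //= 2
    return path

def verify(root, leaf, path):
    """Recompute the root from a leaf and its sibling path."""
    node = leaf
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root
```

The verifier learns only that some committed credential hashes to a leaf under the published root, which is the bounded-setting capability, member verification inside a chartered organization, that the paragraph above describes as already adequate.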
Functional Jurisdictions and Their Limits
The political theory of the modern state holds that governance is geographically contiguous and jurisdictionally comprehensive: a single authority controls a defined territory and regulates all activities within it, and citizenship is inherited or conferred by residence rather than chosen. Frey and Eichenberger's FOCJ proposal was, in effect, a re-examination of that premise. They argued that some public goods are naturally bounded by function rather than by territory (school districts, water districts, and emergency-service districts already display this structure at sub-national scale), and that their governance could be organized around the functional constituency rather than the territorial one. Exit, in such a scheme, is exit from a functional jurisdiction, not from a country.
This is the serious academic ancestor of the phrase "higher-dimensional governance," and we use it in that sense. The substantive question it raises is whether such jurisdictions, which already exist in narrow forms, can be extended without running into four failure modes that the FOCJ literature itself has documented. First, cherry-picking: functional jurisdictions that attract low-cost-to-serve constituents while shedding high-cost-to-serve ones produce distributional pathologies identical to those the race-to-the-bottom critique identified in corporate chartering (Bebchuk 1992). Second, coordination: some public goods cannot be disaggregated without collapsing the public good itself; national defense and macroeconomic policy are the clearest cases, but the boundary is broader than it first appears. Third, legitimacy: a citizen who has opted into multiple functional jurisdictions and out of others is subject to rules she did not consent to only in the domains where she has not exited, and the coherence of a political community under such conditions is the topic of a real debate (Pettit's Republicanism, 1997, and Shapiro's Politics Against Domination, 2016, bear on this directly). Fourth, enforcement: functional jurisdictions still need coercive power to enforce their rules, and in the absence of a general territorial state that power either concentrates in a secondary monopoly or disappears.
We take these to be real constraints on what the proposal can accomplish, not technicalities to be waved aside. A functional-jurisdictional order that works must specify which domains are plausibly disaggregable, which require shared floor standards across jurisdictions, and which are irreducibly territorial. Carbon regulation, financial-systemic stability, anti-money-laundering coordination, and civil-rights floors are the clearest candidates for the last category, with externalities too diffuse or stakes too high to leave to local exit. We do not say that no domain should be opened to competition. We say the specification matters and cannot be replaced by a general enthusiasm for "higher dimensions."
What digital infrastructure offers on this front is concrete rather than sweeping: a reduction in the transaction cost of operating across functional jurisdictions, a reduction in the information cost of comparing them, and a reduction in the switching cost of moving between them. In domains where the cherry-picking, coordination, legitimacy, and enforcement failures do not dominate, including business registration, many forms of civil dispute resolution, certain kinds of professional credentialing, and some research-funding structures, these reductions are meaningful. In domains where those failures do dominate, no amount of reduced friction cures them.
Bundling and the Design of Choice
One feature of contemporary democratic systems is that they bundle preferences at coarse granularity. A vote for a party is a vote for a platform, and the platform will cover fiscal policy, social policy, foreign policy, environmental regulation, and criminal justice in a single package that few individual voters agree with in all its parts. Arrow's impossibility theorem and the subsequent social-choice literature (Sen, Collective Choice and Social Welfare, 1970) tell us that no aggregation rule recovers a coherent social preference from diverse individual preferences without some loss; the bundling pattern of party politics is one particular way to handle that loss, and its costs are well-studied.
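The loss Arrow's theorem formalizes can be seen in miniature in the classic Condorcet cycle, which predates the theorem and is the standard illustration rather than anything specific to this essay's argument:

```python
# Three voters, three alternatives; each ballot ranks the
# alternatives from most to least preferred.
ballots = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of ballots ranks x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# A beats B, B beats C, and C beats A, each by 2 to 1: pairwise
# majority rule yields no coherent social ranking. Arrow's theorem
# generalizes this failure to every aggregation rule meeting a few
# minimal conditions; party-platform bundling is one way to absorb it.
```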
There are two honest points to make about digital alternatives. The first is that unbundling is possible in principle: citizens can, in a technical sense, express preferences on multiple dimensions independently. Several mechanisms have been proposed for this, including direct democracy on specific questions, liquid democracy in Ford's sense (2002) and Green-Armytage's refinement (2015), and quadratic voting as Lalley and Weyl formalize it (2018). Pilots exist: LiquidFeedback inside the German Pirate Party; quadratic-voting experiments in the Colorado House Democratic caucus; Pol.is clustering in Taiwan's civic-tech stack, which we noted earlier is a different mechanism from quadratic voting and should not be conflated with it. These pilots are informative.
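The quadratic rule as Lalley and Weyl formalize it is simple to state: casting v votes on a single issue costs v² credits, so a fixed budget forces voters to reveal intensity across issues rather than pile everything on one. A minimal sketch of the tallying step, with ballot format and function names of our own choosing for illustration:

```python
def qv_cost(votes: int) -> int:
    """Under quadratic voting, casting v votes on one issue costs v**2 credits."""
    return votes ** 2

def tally(ballots):
    """Sum signed votes per issue, rejecting any ballot over its credit budget.

    Each ballot is (budget, {issue: signed_votes})."""
    totals = {}
    for budget, allocation in ballots:
        spent = sum(qv_cost(v) for v in allocation.values())
        if spent > budget:
            raise ValueError(f"ballot spends {spent} of {budget} credits")
        for issue, v in allocation.items():
            totals[issue] = totals.get(issue, 0) + v
    return totals
```

A voter with 100 credits can cast 10 votes on one issue, or split 6 and 8 votes across two (36 + 64 = 100); the convexity of the cost is what prices intensity. Note that the collusion problem discussed below is invisible at this level: two colluding voters buying 5 votes each pay 50 credits for the same 10 votes that cost a single voter 100.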
The second point is that the pilots also reveal consistent failure modes. The Pirate Party's LiquidFeedback experience produced strong concentration of voting weight in a small number of "super-delegates" (Kling et al., 2015), reproducing in a new form the concentration pattern the experiment was meant to dissolve. Quadratic-voting pilots have been small enough and short enough that the collusion-resistance, quasi-linear-utility, and bribery-vulnerability concerns discussed in the theoretical literature (Weyl, "The Robustness of Quadratic Voting," Public Choice, 172, 2017) have not been fully stressed. The honest position is that alternative aggregation rules are research-grade, promising in their specific settings, and not ready to carry the weight of national elections. The useful question is whether a richer set of mechanisms can be layered underneath bundled partisan voting for domains where the bundling cost is highest and the collusion risk is manageable, including local budgeting, intra-party priority-setting, and participatory allocation of specific funds, where the pilots have most evidence.
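The concentration pattern Kling et al. document falls directly out of the mechanism: liquid democracy resolves each citizen's delegation chain transitively to a direct voter, so a handful of well-known delegates can accumulate most of the weight. A minimal sketch of that resolution step; the names and the cycle-handling policy are illustrative, not LiquidFeedback's actual algorithm:

```python
def resolve_weights(delegations, direct_voters):
    """Follow each citizen's delegation chain until it reaches a direct voter.

    delegations: {citizen: delegate}. Chains that loop, or that end at
    someone who neither votes nor delegates, are discarded here (one of
    several policies a real system must choose among)."""
    weights = {v: 0 for v in direct_voters}
    for citizen in set(delegations) | set(direct_voters):
        seen, node = set(), citizen
        while node not in direct_voters:
            if node in seen or node not in delegations:
                node = None            # cycle or dangling chain: vote is lost
                break
            seen.add(node)
            node = delegations[node]
        if node is not None:
            weights[node] += 1
    return weights

# Example: four delegators chain into a single direct voter.
# resolve_weights({"a": "b", "b": "c", "d": "c", "e": "a"}, {"c", "f"})
# gives c a weight of 5 (itself plus a, b, d, e) and f a weight of 1.
```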
The reduction of polarization is often invoked as a benefit of unbundling. The evidence on what causes polarization, surveyed by McCarty, Poole, and Rosenthal (2006), Mason (2018), and the subsequent literature on sorting and negative partisanship, does not support the simple claim that better voting mechanisms would resolve it. Polarization is partly identity, partly elite cueing, partly media structure, and partly sorting. Aggregation rules are a smaller lever than this essay would like them to be, and we are not entitled to claim otherwise.
The Speed Question, Carefully
The most common version of the argument for digital-first governance rests on the claim that institutions must adapt faster to keep pace with technological and social change. The claim is half-right and half a category error, and the halves deserve separation. It is half-right that many regulatory frameworks were built for stable industrial economies and do not gracefully handle distributed digital services, gig labor, or cross-border data flows; the gap is real and its consequences are observable. It is a category error to infer from this that governance should generally run at software speed.
There is a reason why democratic institutions move slowly. Stability is itself a public good. The legal certainty that induces investment and long-horizon decisions depends on rules that change at an observable and predictable pace. Rights protections matter in part because they cannot be altered on the monthly cadence of a software deploy. James Scott's Seeing Like a State (1998) and Andy Stirling's work on directionality in innovation argue that rapid feedback loops can produce oscillation, overshooting, and manipulation of the feedback signal rather than convergence on good outcomes. Markets can race to bad equilibria at the same speed they race to good ones. Reform remains necessary. The right argument distinguishes domains where cycle time is the binding constraint from domains where the binding constraint is legitimacy, rights-protection, or shared expectation of stability.
Where the speed argument does work is narrower. Registration, licensing, compliance filings, dispute-resolution for specific high-frequency classes of dispute, and the administrative architecture of tax and subsidy delivery are domains in which users experience cycle time as a binding constraint, rights-protection stakes are moderate, and the consequences of iteration are reversible. These are also the domains in which machine-readable rule systems of the Catala and L4 type are most tractable. This is a much shorter claim than "governance should operate at software speed," and it is one the literature supports. The larger claim we explicitly do not make.
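What "machine-readable rule systems" buys in these administrative domains is that a rule becomes a pure function of an applicant's record, so eligibility can be checked, and jurisdictions compared, mechanically. The sketch below is a generic illustration in Python, not Catala or L4 syntax, and every name in it is hypothetical:

```python
# A licensing rule encoded as data rather than prose: each (hypothetical)
# jurisdiction publishes its thresholds in a common schema.
RULES = {
    "jurisdiction_a": {"min_capital": 10_000, "audited_filings": True},
    "jurisdiction_b": {"min_capital": 50_000, "audited_filings": False},
}

def eligible(record, rule):
    """Eligibility as a pure function of one applicant record and one rule."""
    return (record["capital"] >= rule["min_capital"]
            and (record["has_audit"] or not rule["audited_filings"]))

def compare(record):
    """The comparison step the essay describes: which jurisdictions admit
    this applicant, computed mechanically from published rules."""
    return sorted(j for j, r in RULES.items() if eligible(record, r))
```

This is the sense in which machine-readable rules reduce the information cost of comparing jurisdictions and the transaction cost of complying with them; nothing in the sketch touches the rights-protection domains the paragraph above fences off.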
What This Argument Establishes and What It Does Not
Schmidt and Sorota diagnose a real crisis and propose a serious response. The verification substrate they argue for, identity that can be trusted without surveilling, content whose provenance can be checked, deliberation that scales beyond small assemblies, is a prerequisite for any digital-first democratic order worth building. We agree with that part of their argument, and nothing in this essay disputes it.
Our disagreement is with the implicit claim that voice-augmentation will be sufficient. Hirschman's argument, now fifty-five years old and still unrefuted in its essentials, is that institutions without credible exit decay regardless of the quality of their voice mechanisms. The contemporary pattern of declining institutional trust, chronic administrative failure, and the turn toward authoritarian or technocratic alternatives is consistent with the monopoly-without-exit decay pattern Hirschman described. We hold, conditionally and within specific domains, that digital infrastructure lowers the cost of exit in ways that can restore an accountability mechanism voice alone often cannot provide.
The argument we have advanced rests on four claims and admits four open problems, and we want to name both sets explicitly rather than let the reader infer them. The claims are these. Tiebout's mobile-citizen model was implausible for human households because the underlying frictions were high; those frictions are substantially lower for software-operated legal entities, whose compliance behavior can be executed programmatically. Where rules can be written in machine-readable form, the cost of comparing jurisdictions and moving between them falls. Where functional jurisdictions can be honestly disaggregated, competition along one axis is compatible with coordination along another. Where programmable organizations are operated with honest attention to the nominal-effective decentralization gap, they can provide substrate for pluralistic governance rather than a new center of concentration.
The open problems are these. The race-to-the-bottom critique of Cary and Bebchuk applies to any regime of expanded exit and is not answered by the fact that exit is cheaper; it requires floor standards on externality-heavy dimensions, among them carbon, financial-systemic risk, anti-money-laundering, and civil-rights floors, and the specification of those standards is an open policy problem rather than a solved one. Hirschman's lazy-monopoly warning remains: if exit selectively pulls the voice-active out of the voice pool, the institutions they leave decay faster, and we do not yet have a crisp answer to the selection-effect question. The infrastructure that makes exit cheap is itself a point of potential concentration; Schmidt and Sorota's concentration worry applies in full force, and the argument survives only if the substrate is genuinely pluralistic. And the democratic-theory question, whether voluntary association over functional domains is compatible with the shared political identity that sustains republican freedom in Pettit's sense, interacts with traditions we have only gestured at and which would require extended engagement to do justice to.
We close on what we take to be the useful point of convergence with Schmidt and Sorota. The choice does not lie between voice-augmentation and exit-expansion. Voice mechanisms that are not backed by the threat of exit can become ignorable; exit mechanisms unaccompanied by serious voice can produce the lazy-monopoly outcome Hirschman warned about. Democratic institutions in their current form have too little of either, and the project, on both their view and ours, is to rebuild the substrate on which both can operate honestly. We would add, against their framing, that a decisive constraint is structural rather than deliberative: a territorial monopoly with no credible outside option is unlikely to be brought to account by better talk alone. Talk is necessary; it is not sufficient. The judgment we part from them on is what suffices, and we have tried to make the argument in a form they could engage rather than in one they could dismiss.