
When fifty-five delegates gathered in Philadelphia during the sweltering summer of 1787, they faced a challenge that would haunt American politics for the next eight decades. The question wasn’t whether slavery was morally right—many delegates privately acknowledged its evil—but whether a unified nation could exist with slavery as a part of it. That summer, the institution of slavery nearly killed the Constitution before it was born.
The Battle Lines
The convention revealed a stark divide. On one side stood delegates who spoke forcefully against slavery, though they represented a minority voice. Gouverneur Morris of Pennsylvania delivered some of the most scathing condemnations, calling slavery a “nefarious institution” and “the curse of heaven on the states where it prevailed.” According to James Madison’s notes, Morris argued passionately that counting enslaved people for representation would mean that someone “who goes to the Coast of Africa, and in defiance of the most sacred laws of humanity tears away his fellow creatures from their dearest connections & damns them to the most cruel bondages, shall have more votes in a Government instituted for protection of the rights of mankind.”
Luther Martin of Maryland, himself a slaveholder, joined Morris in opposition, declaring the slave trade “inconsistent with the principles of the revolution and dishonorable to the American character.” Even George Mason of Virginia, who owned over 200 enslaved people, denounced slavery at the convention, warning that “every master of slaves is born a petty tyrant” and that it would bring “the judgment of heaven on a country.”
The Southern Coalition
Facing these critics stood delegates from the Deep South—primarily South Carolina and Georgia—who made it abundantly clear that protecting slavery was non-negotiable. The South Carolina delegation was particularly unified and aggressive in defending the institution. All four of their delegates—John Rutledge, Charles Pinckney, Charles Cotesworth Pinckney, and Pierce Butler—owned slaves, and they spoke with one voice.
Charles Cotesworth Pinckney stated bluntly: “South Carolina and Georgia cannot do without slaves.” John Rutledge framed it even more starkly: “The true question at present is, whether the Southern States shall or shall not be parties to the Union.” The message was unmistakable—attempt to restrict slavery, and there would be no Constitution and perhaps no United States.
The Southern states didn’t just defend slavery; they threatened to walk out repeatedly. When debates over the slave trade heated up on August 22, delegates from North Carolina, South Carolina, and Georgia stated they would “never be such fools as to give up” their right to import enslaved Africans. These weren’t idle threats—they were credible enough to force compromise.
The Three-Fifths Compromise
The central flashpoint came over representation in Congress. The new Constitution would base representation on population, but should enslaved people count? Southern states wanted every enslaved person counted fully, which would dramatically increase their congressional power. Northern states argued that enslaved people—who had no rights and couldn’t vote—shouldn’t count at all.
The three-fifths ratio had actually been debated before. Back in 1783, Congress had considered using it to calculate state tax obligations under the Articles of Confederation, though that proposal failed. James Wilson of Pennsylvania resurrected the idea at the Constitutional Convention, suggesting that representation be based on the free population plus three-fifths of “all other persons”—the euphemism they used to avoid writing the word “slave” in the Constitution.
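To make the formula concrete, consider a hypothetical state (the numbers here are invented for illustration, not drawn from any actual census):

\[
\text{apportionment basis} = \text{free population} + \tfrac{3}{5} \times \text{enslaved population}
\]

A state with 400,000 free inhabitants and 300,000 enslaved people would thus be credited with \(400{,}000 + \tfrac{3}{5} \times 300{,}000 = 580{,}000\) persons when House seats, and by extension electoral votes, were apportioned.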
The compromise passed eight states to two. New Jersey and Delaware are generally identified as the states voting against it; New Hampshire is not listed as taking part in the vote. Rhode Island never sent a delegation to the convention, and by the time of the vote New York no longer had a functioning one.
Though the South ultimately accepted the compromise, it wasn’t what they wanted. Southern delegates had pushed to count enslaved people equally with free persons for representation, even as those same people were denied every other human right. The three-fifths ratio was a reduction from their demands—a limitation on slave-state power, though it still gave them a substantial advantage. With about 93% of the nation’s enslaved population concentrated in just five southern states, the compromise increased the South’s congressional delegation by 42%.
James Madison himself recognized the significance of this divide: “It seems now to be pretty well understood that the real difference of interests lies not between the large and small but between the northern and southern states. The institution of slavery and its consequences form the line of discrimination.”
Could the Constitution Have Happened Without It?
Here’s where I need to speculate, but I’m fairly confident in this assessment: no, the Constitution would not have been ratified without the three-fifths compromise and related concessions on slavery.
The evidence is overwhelming. South Carolina and Georgia delegates stated explicitly and repeatedly that they would not join any union that restricted slavery. Alexander Hamilton himself later acknowledged that “no union could possibly have been formed” without the three-fifths compromise. Even delegates who despised slavery, like Roger Sherman of Connecticut, argued it was “better to let the Southern States import slaves than to part with them.”
The convention negotiated three major slavery compromises, all linked. Beyond the three-fifths clause, they agreed Congress couldn’t ban the international slave trade until 1808, and they included the Fugitive Slave Clause requiring the return of escaped enslaved people even from free states. These deals were struck together on August 29, 1787, in what Madison’s notes reveal was a package negotiation between northern and southern delegates.
Without these compromises, the convention would likely have collapsed. The alternative wouldn’t have been a better Constitution—it would have been no Constitution at all, potentially leaving the thirteen states as separate nations or weak confederations. Whether that would have been preferable is a profound counterfactual question that historians still debate.
The Impact on Early American Politics
The three-fifths compromise didn’t just affect one document—it shaped American politics for decades. Its effects were immediate and substantial.
The most famous early example came in the presidential election of 1800. Thomas Jefferson defeated John Adams in what’s often called the “Revolution of 1800”—the first peaceful transfer of power between opposing political parties. But Jefferson’s victory was owed directly to the three-fifths compromise. Virginia’s enslaved population gave the state extra electoral votes that proved decisive. Historian Garry Wills has argued that without these additional slave-state votes, Jefferson would have lost. Pennsylvania had a free population 10% larger than Virginia’s, yet received 20% fewer electoral votes because Virginia’s numbers were inflated by the compromise.
The impact extended far beyond that single election. Research shows the three-fifths clause changed the outcome of over 55% of legislative votes in the Sixth Congress (1799–1801). The additional southern representatives—about 18 more than their free population warranted—gave the South what became known as the “Slave Power” in Congress.
This power influenced major legislation throughout the antebellum period. The Indian Removal Act of 1830, which forcibly relocated Native Americans to open land for plantation agriculture, passed because of margins provided by these extra southern representatives. The Missouri Compromise, the Kansas-Nebraska Act, and numerous other slavery-related measures bore the fingerprints of this constitutional imbalance.
The compromise also affected Supreme Court appointments and federal patronage. Southern-dominated Congresses ensured pro-slavery justices and policies that protected the institution. The sectional tensions it created led directly to later compromises—the Missouri Compromise of 1820, the Compromise of 1850—each one a temporary bandage on a wound that wouldn’t heal.
By the 1850s, the artificial political power granted to slave states had become intolerable to many northerners. When Abraham Lincoln won the presidency in 1860 without carrying a single southern state, southern political leaders recognized they had lost control of the federal government. Senator Louis Wigfall of Texas complained that non-slaveholding states now controlled Congress and the Electoral College. Eleven southern states ultimately seceded, in large part because they believed the three-fifths compromise no longer protected their interests.
The Bitter Legacy
The framers consciously avoided using the words “slave” or “slavery” in the Constitution, recognizing it would “sully the document.” But the euphemisms fooled no one. They had built slavery into the structure of American government, trading moral principles for political union.
The Civil War finally resolved what the Constitutional Convention had delayed. The Thirteenth Amendment abolished slavery in 1865, but not until 1868 did the Fourteenth Amendment finally strike the three-fifths clause from the Constitution, requiring that representation be based on counting the “whole number of persons” in each state.
Was it worth it? That’s ultimately a question of values. The Constitution created a stronger national government that eventually abolished slavery, but it took 78 years and a war that killed over 600,000 Americans. As Thurgood Marshall noted on the Constitution’s bicentennial, the framers “consented to a document which laid a foundation for the tragic events which were to follow.”
The convention delegates knew what they were doing. They chose union over justice, pragmatism over principle. Whether that choice was necessary, wise, or moral remains one of the most contested questions in American history.
____________________________________________________
Sources
- https://www.battlefields.org/learn/articles/slavery-and-constitution
- https://en.wikipedia.org/wiki/Luther_Martin
- https://schistorynewsletter.substack.com/p/7-october-2024
- https://www.americanacorner.com/blog/constitutional-convention-slavery
- https://www.nps.gov/articles/000/constitutionalconvention-august22.htm
- https://en.wikipedia.org/wiki/Three-fifths_Compromise
- https://www.brennancenter.org/our-work/analysis-opinion/electoral-colleges-racist-origins
- https://www.gilderlehrman.org/history-resources/teaching-resource/historical-context-constitution-and-slavery
- https://www.nps.gov/articles/000/constitutionalconvention-august29.htm
- https://www.lwv.org/blog/three-fifths-compromise-and-electoral-college
- https://www.aaihs.org/a-compact-for-the-good-of-america-slavery-and-the-three-fifths-compromise-part-ii/

America’s Healthcare Paradox: Why We Pay Double and Get Less
By John Turley
On January 5, 2026
In Commentary, Medicine
The healthcare debate in America often circles back to a fundamental question: should we move toward a single-payer system, or is our current mixed public-private model the better path forward? It’s a conversation that gets heated quickly, but when you strip away the politics and look at how different systems actually function around the world, some interesting patterns emerge.
What We Mean by Single-Payer
A single-payer healthcare system means that one entity—usually the government or a government-related organization—pays for all covered healthcare services. Doctors and hospitals can still be private (and usually are), but instead of dealing with dozens of different insurance companies, they bill one source. It’s a lot like Medicare, which is why proponents often call it “Medicare-for-all”.
The key thing to understand is that single-payer isn’t necessarily the same as socialized medicine. In Canada’s system, for instance, the government pays the bills, but doctors are largely in the private sector and hospitals are controlled by private boards or regional health authorities rather than being part of the national government. Compare that to the UK’s National Health Service, where many hospitals and clinics are government-owned and many doctors are government employees.
America’s Current Patchwork
The United States operates what might charitably be called a “creative” approach to healthcare—a complex mix of employer-sponsored private insurance, government programs like Medicare, Medicaid and the VA system, individual marketplace plans, and direct out-of-pocket payments. Government already pays roughly half of total US health spending, but benefits, cost-sharing, and networks vary widely between plans, with little overall coordination. In 2023, private health insurance spending accounted for 30 percent of total national health expenditures, Medicare covered 21 percent, and Medicaid covered 18 percent. Most of the remainder was either paid out of pocket by private citizens or was written off by providers as uncollectible.
Here’s where it gets expensive. U.S. health care spending grew 7.5 percent in 2023, reaching $4.9 trillion, or $14,570 per person, and accounting for 17.6 percent of the nation’s GDP. National health spending for 2024 is expected to have exceeded $5.3 trillion, or 18 percent of GDP, and is projected to reach 20.3 percent of GDP by 2033.
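As a quick sanity check on the per-person figure, dividing total spending by a 2023 U.S. population of roughly 335 million (a figure assumed here, not given in the text) recovers roughly the number above:

\[
\frac{\$4.9 \times 10^{12}}{3.35 \times 10^{8}\ \text{people}} \approx \$14{,}600\ \text{per person}
\]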
For a typical American family, the costs are real and rising. In 2024, the estimated cost of healthcare for a family of four in an employer-sponsored health plan was $32,066.
The European Landscape
Europe doesn’t have one healthcare model—it has several, and they’re all quite different from what we have in the States. Nearly every European country guarantees universal coverage, but the mechanisms vary considerably, and only some are true single-payer systems.
Countries like the UK, Sweden, and Norway operate what are essentially single-payer systems: the government alone pays for healthcare services out of tax revenue, directly owns most facilities, and employs most clinical and related staff. Then you have countries like Germany and Belgium that use “sickness funds”—non-profit funds that don’t market themselves, cherry-pick patients, set premiums or provider rates, determine benefits, earn profits, or have investors. They’re quasi-public institutions, not private insurance companies as we know them in America. Some systems, such as the Netherlands or Switzerland, rely on mandatory, individually purchased private insurance with tight regulation and subsidies, achieving universal coverage through a structured, competitive market.
The French System
France is particularly noted for its successful universal, government-run health insurance system, usually described as single-payer with private supplements. All legal residents are automatically covered through the national health insurance program, which is funded by payroll taxes and general taxation.
Most physicians and hospitals are private or nonprofit, not government employees or facilities. Patients generally have free choice of doctors and specialists, though coordinating through a primary care physician improves access and reimbursement. The national insurer pays a large portion of medical costs (often 70–80%), while voluntary private supplemental insurance covers most remaining out-of-pocket expenses such as copays and deductibles.
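As a simple illustration of how that cost-sharing works in practice (the fee and the 70% rate below are hypothetical, chosen from the range just cited):

\[
\underbrace{\text{€}100}_{\text{physician's fee}} = \underbrace{\text{€}70}_{\text{national insurer, 70\%}} + \underbrace{\text{€}30}_{\text{supplemental plan and copay}}
\]

In practice, a small fixed copay may remain with the patient even when supplemental insurance covers the rest of the balance.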
France is known for spending significantly less per capita than the United States. Cost controls come from nationally negotiated fee schedules and drug pricing rather than limits on access.
What’s striking is that in 2019, US healthcare spending reached $11,072 per person—over double the average of $5,505 across wealthy European nations. Yet despite spending roughly twice as much per person, American health outcomes often lag behind.
The Outcomes Question
This is where the comparison gets uncomfortable for American exceptionalism. The U.S. has the lowest life expectancy at birth among comparable wealthy nations, the highest death rates for avoidable or treatable conditions, and the highest maternal and infant mortality.
In 2023, life expectancy in comparable countries was 82.5 years, which is 4.1 years longer than in the U.S. Japan manages this with healthcare spending at just $5,300 per capita, while Americans spend more than double that amount.
Now, it’s important to note that healthcare systems don’t operate in a vacuum. Life expectancy is influenced by many factors beyond medical care—diet, exercise, smoking, gun violence, drug overdoses, and social determinants of health all play roles. But when you’re spending twice as much and getting worse results, it suggests the system itself might be part of the problem.
Advantages of Single-Payer Systems
The case for single-payer rests on several compelling points. First, administrative simplicity translates to real cost savings. One study found that the administrative burden of health care in the United States was 27 percent of all national health expenditures, with the excess administrative cost of the private insurer system, compared to a single-payer system like Canada’s, estimated at about $471 billion in 2012. That’s more than $1 out of every $4 of total healthcare spending going to paperwork, billing disputes, and insurance company profit and overhead before any patient receives care.
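Those two figures fit together arithmetically. Assuming total 2012 national health expenditures of roughly $2.8 trillion (an assumption made here; the text doesn’t give the total), the study’s 27 percent share implies:

\[
0.27 \times \$2.8\ \text{trillion} \approx \$756\ \text{billion in total administrative cost}
\]

of which the roughly $471 billion described as excess is the portion above what a Canadian-style single-payer benchmark would predict.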
Universal coverage is another major advantage. In a properly functioning single-payer system, nobody goes bankrupt from medical bills, nobody delays care because they can’t afford it, and nobody loses coverage when they lose their job. The peace of mind that comes with knowing you’re covered regardless of employment status or pre-existing conditions is difficult to quantify but enormously valuable.
Single-payer systems also have significant negotiating power. When one entity is buying drugs and services for an entire nation, pharmaceutical companies and medical device manufacturers have much less leverage to charge whatever they want. This helps explain why prescription drug prices in other countries are often a fraction of prices in the U.S.
Disadvantages and Trade-offs
The critics of single-payer systems aren’t wrong about everything. Wait times are a genuine concern in some systems. When prices and overall budgets are tightly controlled, some countries experience longer waits for selected elective surgeries, imaging, or specialty visits, especially if investment lags demand.
In 2024, Canadian patients experienced a median wait time of 30 weeks between specialty referral and first treatment, up from 27.2 weeks in 2023, with rural areas facing even longer delays. For procedures like elective orthopedic surgery, patients wait an average of 39 weeks in Canada.
However, it’s crucial to understand that wait times are not a result of the single-payer system itself but of system management, as wait times vary significantly across different single-payer and social insurance systems. Many European countries with universal coverage don’t experience the same wait time issues that plague Canada.
The transition costs are also substantial. Moving from our current system to single-payer would disrupt a massive industry. Over fifteen percent of our economy is related to health care, with half spent by the private sector. Around 160 million Americans currently have insurance through their employers, and transitioning all of them to a government-run plan would be an enormous administrative and political challenge.
A large national payer can also be slower to change benefit designs or adopt new payment models, and shifting political majorities can affect funding levels and benefit generosity.
Taxes would need to increase significantly to fund such a system, though proponents argue this would be offset by the elimination of insurance premiums, deductibles, and co-pays. It’s essentially a question of whether you’d rather pay through taxes or through premiums—the money has to come from somewhere.
Advantages of America’s Mixed System
Our current system does have some genuine strengths. Innovation thrives in the American healthcare market. The profit motive, for all its flaws, does drive pharmaceutical research and medical device development. American medical schools and research institutions lead the world in many areas of medicine. Academic medical centers and specialty hospitals deliver advanced procedures and complex care that attract patients internationally.
The system also offers more choice for those who can afford it. If you have good insurance, you typically face shorter wait times for elective procedures and can often see specialists without lengthy delays. Americans with high-quality employer-sponsored coverage give their plans relatively high ratings.
Competition between providers can theoretically drive quality improvements, though this effect is often undermined by the complexity of the market and the difficulty consumers face in shopping for healthcare.
Disadvantages of the Current U.S. System
The most glaring problem is simple: the United States remains the only developed country without universal healthcare. Roughly 30 million Americans remain uninsured despite gains under the Affordable Care Act, and many of those gains will soon be lost. Being uninsured in America isn’t just an inconvenience—it can be deadly. People delay care, skip medications, and avoid preventive screenings because of cost concerns.
The administrative complexity is staggering. Doctors spend enormous amounts of time dealing with insurance companies, prior authorizations, and billing disputes. Hospitals employ armies of billing specialists just to navigate the maze of different insurance plans, each with its own rules, formularies, and coverage determinations. U.S. administrative costs account for ~25% of all healthcare spending, among the highest in the world.
Medical bankruptcy is uniquely American. Even people with insurance can find themselves financially devastated by serious illness. High deductibles, surprise bills, and out-of-network charges create a minefield of potential financial catastrophe. Studies of U.S. bankruptcy filings over the past two decades have consistently found that medical bills and medical problems are a major factor in a large share of consumer bankruptcies. Recent summaries suggest that roughly two‑thirds of US personal bankruptcies involve medical expenses or illness-related income loss, and around 17% of adults with health care debt report declaring bankruptcy or losing a home because of that debt.
The system is also profoundly inequitable. Quality of care often depends more on your job, your income, and your zip code than on your medical needs. Out-of-pocket costs per capita have risen relative to previous decades, and the burden falls disproportionately on those least able to afford it.
What Europe Shows Us
The European experience demonstrates that there isn’t one “right” way to achieve universal coverage. The UK’s NHS, Germany’s sickness funds, and France’s hybrid system all manage to cover everyone at roughly half the per-capita cost of American healthcare. Universal Health Coverage exists in all European countries, with healthcare financing almost universally government managed, either directly through taxation or semi-directly through mandated and government-subsidized social health insurance.
They’ve accomplished this through various combinations of centralized negotiation of drug prices, global budgets for hospitals, strong primary care systems that serve as gatekeepers to more expensive specialist care, emphasis on preventive services, and regulation that prevents insurance companies from cherry-picking healthy patients.
Are these systems perfect? No. One of the major disadvantages of centralized healthcare systems is long wait lists for non-urgent care, though Americans often wait as long as, or longer than, patients in most universal-coverage countries for routine primary care appointments. Many European countries are wrestling with funding challenges as populations age and expensive new treatments become available. But they’ve solved the fundamental problem that America hasn’t: they ensure everyone has access to healthcare without the risk of financial ruin.
The Path Forward?
The debate over healthcare in America often presents false choices. We don’t have to choose between Canadian-style single-payer and our current system—there are multiple models we could adapt. We could move toward a German-style system with heavily regulated non-profit insurers. We could create a robust public option that competes with private insurance. We could expand Medicare gradually by lowering the eligibility age over time.
What’s clear from international comparisons is that the status quo is unusually expensive and produces mediocre results. We’re paying premium prices for economy outcomes. Whether single-payer is the answer depends partly on your priorities. Do you value universal coverage and cost control more than unlimited choice? Are you willing to accept potentially longer wait times for non-urgent care in exchange for lower costs and universal access? How much do you trust government to manage a program this large?
These aren’t easy questions, and reasonable people disagree. But the evidence from Europe suggests that universal coverage at reasonable cost is achievable—it just requires us to make some choices about what we value most in a healthcare system.