Author, professor and former state representative writing about politics, law, health care and culture
Ten years ago, American officials urged drafters of Afghanistan’s constitution to provide for a very strong president. Now, as the New York Times reports, our officials have agreed with Afghan officials that it is better for a political system to provide for the sharing of power across social divides. All persons deserve a voice in the policymaking halls of their governments—ensuring that public officials represent both sides of the political aisle makes for a fairer form of governance, thereby giving the government greater legitimacy in the eyes of the public.
Just as constitutional reform was important for Afghanistan, it is important for us. We also suffer from a political system that relies too heavily on winner-take-all power. In particular, we give all of the immense power of the modern presidency to a single person from one political party. That does much to fuel the high levels of partisan conflict that plague Washington. Instead of creating strong incentives for conflict, we should create strong incentives for cooperation. We too should ensure that both sides of the political aisle have meaningful roles in our policymaking offices.
Undoubtedly, the Supreme Court has been too solicitous of corporate rights in recent years. And without doubt, reproductive rights are under siege from many state legislatures and federal judges. But these concerns do not justify the dramatic characterizations of yesterday’s Hobby Lobby decision.
According to Emily’s List, the Court’s decision to “restrict women’s health care” is a “devastating setback.” According to the Democratic Legislative Campaign Committee, “millions of women must have their bosses’ permission to access birth control.” And according to Planned Parenthood of Indiana and Kentucky, “countless women, already struggling to make ends meet, will not have the benefit of the family planning coverage provided to all others under the Affordable Care Act.”
In fact, the Court’s decision need not result in limits on women’s access to contraception. To be sure, the Court agreed with Hobby Lobby (and Conestoga Wood) that they should not have to pay for methods of birth control that violate their religious beliefs (morning-after pills and IUDs in this case). But the Court also observed that the federal government could use other approaches to guarantee access for women to contraception.
Indeed, wrote the Court, the government can employ the same accommodation for companies such as Hobby Lobby that it employs for religiously affiliated, non-profit institutions such as universities. Under that accommodation, the organization’s insurer provides a separate plan for contraceptive coverage and does not bill the organization or the employee. In other words, the female employees receive full coverage without imposing a burden on the employer’s religious practice.
There are plenty of reasons to criticize the erosion of reproductive rights in the United States. And it is possible that the narrow holding of Hobby Lobby will be expanded in the future. But the decision itself does not entail a compromise of reproductive health.
All of those efforts to persuade people to authorize postmortem organ donation seem to be paying off. Whether one gives consent when renewing a driver’s license or by signing up at Donate Life America, the results are impressive. In 2012, 45 percent of American adults were included in state organ donation registries, and 40 percent of organ donations after death came from these “designated donors.” That is more than double the 19 percent rate of designated donors among posthumous organ donors in 2007.
But the increase in donor designation has not translated into a meaningful increase in organ transplantation. There were 22,053 transplants from 8,085 deceased donors in 2007 and 22,187 transplants from 8,143 deceased donors in 2012.
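The arithmetic behind that comparison is easy to verify. Here is a quick sketch using the figures quoted above (the numbers are restated from the post, not an independent dataset):

```python
# Donor and transplant totals cited in the post, 2007 vs. 2012.
donors_2007, donors_2012 = 8085, 8143
transplants_2007, transplants_2012 = 22053, 22187

def pct_change(old, new):
    """Percent change from old to new."""
    return 100 * (new - old) / old

# Both figures grew by well under one percent over five years,
# despite the doubling of donor-designation rates.
print(f"Deceased donors: {pct_change(donors_2007, donors_2012):.1f}% change")
print(f"Transplants:     {pct_change(transplants_2007, transplants_2012):.1f}% change")
```

Both numbers moved by less than one percent, which is why the doubling of designated donors has not shown up in transplant totals.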
Why hasn’t donor designation translated into more organs? Is it because organ procurement organizations would have obtained consent from family members anyway for individuals who registered for donation? A survey of organ procurement organizations suggests strong agreement between registered donors and their families. Or maybe family wishes matter more than the decedent’s wishes despite legal rules that recognize the priority of the decedent’s wishes. Or perhaps other factors are hiding the effect of donor designation. Maybe it’s too soon to see an effect from donor designation. It will be interesting to see how the data play out over the next few years.
[cross-posted at HealthLawProfs and PrawfsBlawg]
With the Supreme Court’s blow to affirmative action last week, state universities may increasingly turn to the Texas model of automatic acceptance for applicants at the top of their high school class rank. When colleges draw from the top five or ten percent at all high schools, they may be able to recruit an entering class that mirrors the ethnic and racial diversity of high school graduates. While top class rank policies raise a number of concerns and their impact on diversity is mixed, there is a potentially more important benefit from a tweak of the policies.
Suppose that instead of looking just at GPA, admissions offices looked at a range of measures, including test scores, artistic talent, and athletic skills, and admitted the top students from each high school. Suppose further that all selective colleges—public and private—employed a top ten (or one) percent admissions policy. By whatever measures admissions offices used to rank applicants, the colleges would admit the top applicants from all high schools (of a minimum size).
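The selection rule described above can be sketched in a few lines. This is a minimal illustration, assuming a single composite score per applicant; the function name, school names, and scores are hypothetical, not drawn from any actual admissions system:

```python
import math
from collections import defaultdict

def top_percent_admits(applicants, fraction=0.10):
    """Admit the top `fraction` of applicants from each high school.

    applicants: list of (name, school, composite_score) tuples.
    Returns the list of admitted applicant names.
    """
    # Group applicants by high school.
    by_school = defaultdict(list)
    for name, school, score in applicants:
        by_school[school].append((score, name))

    admitted = []
    for school, students in by_school.items():
        students.sort(reverse=True)  # highest composite score first
        # Admit at least one student per school, rounding up.
        cutoff = max(1, math.ceil(len(students) * fraction))
        admitted.extend(name for score, name in students[:cutoff])
    return admitted
```

The key feature is that the cutoff is computed within each school rather than across the whole applicant pool, which is what gives parents the incentive, discussed below, to spread their children across schools.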
Parents would recognize that their children would do better in the application process by attending Urban High than by attending Suburban or Private High. Instead of concentrating their children in the highest performing schools, parents of means would have a strong incentive to spread their children across the full range of schools.
There would be two important effects for cities. First, their property tax bases would grow, as more families chose to live in the cities than in the suburbs. Cities would be in a better position to invest in infrastructure and finance public services. Second, school quality would improve. Once their children were attending one of the lower performing schools, parents would push for improvements in the quality of the school, whether by seeking more public dollars or by raising more supplemental funding. The upper and middle socioeconomic classes might still focus their attention on the schools that their children attend, but the number of such schools would have increased. The gap in quality between the top schools and the bottom schools should narrow, and school quality should become more uniformly high. Rural areas also should benefit from top ten policies.
Would parents really send their children to lower performing schools to take advantage of high class rank admissions policies? They already have in Texas. After that state’s ten percent policy was adopted, a number of parents moved their children to schools with lower levels of achievement by the student body. The effects would be even greater if Ivy League and other elite universities followed the Texas model. The ten percent policy also has had a substantial impact on property values as families have moved into neighborhoods with lower-performing schools. Because parents adjust their choices of schools in response to high class rank policies, academic standards at selective universities needn’t suffer (except perhaps in the short-term).
We’ve known for a long time that we do better by the disadvantaged when we link their fortunes to the fortunes of the advantaged in society. Top class rank policies can provide that linkage. And they can do so without passing any laws. (For more discussion of this idea, see here).
[cross-posted at PrawfsBlawg]
If electing a single executive from one party compromises principles of representation, promotes partisan conflict, and encourages poor decision making, we should give serious consideration to ways in which executive power in the United States could be shared across party lines. With shared power, almost all Americans would have a voice in the policy making of the executive branch, solving the representation problem. And with broad representation of the public, partisan conflict could be defused. Moreover, with perspectives from both sides of the aisle on the table, wiser decisions should emerge from the Oval Office.
Shared power may seem problematic, but as David Fontana has observed, it has become much more common around the world for losing parties to be given “winners’ powers.” Under the interim South African constitution, for example, the losing party was given seats in the cabinet, an approach that Fontana recommends for the United States.
Switzerland may provide the best example of shared executive power. In Switzerland, the executive power lies in the Federal Council, which has seven department heads who possess equal decision-making authority. Decisions are made by consensus, with resort to a majority vote only in exceptional cases. For more than fifty years, the seven councilors have come from the major political parties (currently five) that represent roughly 80 percent of the country’s voters, and the councilors work cooperatively.
After their 19th century civil war, the Swiss concluded that the best way to bridge social divides was to ensure that all citizens have a voice in their government. And with its broad sharing of power, the Swiss government has been able to avoid the kind of political conflict that we experience—and that Switzerland once experienced—even though its population is socially more diverse than our own. Switzerland has effectively melded its French, German, Italian, and Romansh citizens, as well as its Catholic and Protestant communities.
I’ve previously described some serious disadvantages from a presidency that gives all of the executive power to a single person—the denial of representation to the half of the public that supported the other candidate and the promotion of partisan conflict as both sides fight to secure control of the Oval Office. Might these disadvantages be offset by the benefits of an energetic executive who can act decisively and with dispatch?
That might have been true for the first 150 years or so of the United States, but the energetic executive of Federalist No. 70 no longer meets the demands of the modern presidency. Indeed, a one-person presidency invites decision making harmful to the country.
As Congress has transferred much of its policymaking power to the executive branch, the nature of presidential power has been transformed. The Constitution envisions a president with secondary responsibility for the creation of national policy and primary responsibility for the execution of national policy. However, the contemporary president enjoys primary responsibility for both the creation and execution of policy.
This assumption of policy-creating responsibility by the president allows national policy to be made in the absence of a robust debate among multiple decision makers who bring different perspectives to their decision making. It may make sense to have a single person who can act decisively and with dispatch when the person is an executor of policy made by others. But the founding fathers correctly reserved policy making for multiple-person bodies such as Congress and the Supreme Court. As Woodrow Wilson observed, “The whole purpose of democracy is that we may hold counsel with one another, so as not to depend upon the understanding of one man.”
Indeed, when it comes to making policy, there is much truth to the maxim that two heads are better than one. Studies by economists, psychologists, and other researchers demonstrate that shared decision making works better than unilateral decision making. As the example of George W. Bush waging war against Iraq illustrates, a single decision maker can make very poor choices. Multiple executives from different parties would bring the different perspectives and problem-solving skills that make for better decision making. Multiple executives would make more good choices and fewer bad choices than single presidents.
To be sure, too many cooks can spoil the broth. As Congress illustrates, very large groups can become quite dysfunctional. But small groups generally make better decisions than do individuals or large groups.
Of course, even single presidents do not make decisions in isolation. They consult with members of their cabinet and staff, so they enjoy many of the benefits of group decision making. Nevertheless, there is a big difference between deciding alone after consulting with advisers who are inclined to reinforce one’s inclinations and sharing decision making with others who are inclined to challenge one’s inclinations. Consider in this regard how different would be decisions from a Supreme Court of one justice and eight law clerks.
Don’t we need a single president to keep gridlock out of the Oval Office? While the framers were concerned about dissension and rivalry between multiple executives, there are good reasons to think that multiple executives could develop a meaningful willingness to cooperate with each other. That will be the topic of my final post in this series on the presidency.
[cross-posted at PrawfsBlawg]
In a previous post, I observed that reserving all of the presidential power for one side of the political aisle denies representation to half the country, a serious problem in itself. It also causes other problems. In particular, a one-party executive fans the flames of partisan conflict.
With the marked transfer of domestic and foreign policymaking power from Capitol Hill to the Oval Office over the past 75 years, the White House has become the dominant power center in the national government. Presidents control the issuance of regulations for air quality, energy exploration, education, health care, consumer protection, and many other concerns. They also establish national policy through signing statements, executive orders, and the granting of waivers from statutory obligations. Thus, for example, President Obama has doubled fuel efficiency for automobiles, expanded offshore drilling for oil and gas, and granted waivers from No Child Left Behind and the Affordable Care Act.
While presidents exercise considerable domestic authority, they dominate Congress even more in foreign affairs. Presidents play a far larger role in the determination of U.S. policy—and Congress plays a far smaller role—than intended by the founding fathers. Whether Truman with Korea or Obama with Libya, presidents send troops into combat without congressional authorization. Presidents also reach agreements with other countries without congressional participation, they unilaterally recognize other governments and terminate treaties, and they decide on their own about restrictions on the rights of U.S. citizens to travel abroad.
When one person exercises the enormous power of the modern U.S. presidency, we invite hyperpolarization. Under the current system, Democrats and Republicans fight tooth and nail to capture the White House. They spend hundreds of millions, now billions, of dollars. Moreover, once an election is over, each party launches its effort to win the next presidential race. The party of the president unites behind the president’s initiatives to ensure a successful administration. The losing party tries to block the president’s proposals so it can persuade voters to change parties at the next presidential election. Republicans lined up against the Affordable Care Act to “break” the Obama administration, and Democrats lined up against Social Security reform to weaken the Bush II administration.
Or to put it another way, excessive partisan conflict can be expected under a winner-take-all system for a presidency whose power has grown so much. Indeed, the sharp increase in partisan behavior over the past several decades parallels the marked expansion of presidential power over the same time period. Currently, a candidate can win election with a small majority or even a minority of the popular vote. As a result, substantial numbers of voters feel that their interests and concerns are not represented in a politically dominant White House. It is no wonder that the party out of power spends more of its time trying to regain the Oval Office and less of its time trying to address the country’s needs.
This is not to say that the presidency is the sole cause of partisan conflict. Other factors are at work as well. Nevertheless, the one-party executive is an important factor. Indeed, the link between presidential politics and partisan conflict has a long pedigree. For example, political parties first appeared in Congress when legislators aligned themselves either in support of or in opposition to the executive policies of George Washington and Treasury Secretary Alexander Hamilton. Similarly, parties first mobilized nationally around presidential elections, starting with the 1796 contest between the Federalist John Adams and the Democratic-Republican Thomas Jefferson.
To be sure, partisan competition provides important benefits. We want elected officials to engage in a vigorous policy debate. But today’s debate turns too much on political calculation and too little on the merits.
There is another key problem when the presidential power is reserved for one side of the political aisle. It encourages misguided decision making. That will be the topic of my next post on the presidency. In the meantime, you can find the introductory chapter to my book-length treatment of the presidency, “Two Presidents,” here.
[cross-posted at PrawfsBlawg]
Absent a major change in the political climate and a Democratic wave election in November, we can expect many more articles like Peter Baker’s in the New York Times on the frustrations facing President Obama for the remainder of his term in office. As Baker observed, it is becoming increasingly difficult for presidents to get sweeping legislation through Capitol Hill.
While it is tempting to blame Congress, partisan polarization, or other features of the contemporary political system, it also seems clear that there is a deeper structural problem at work—the U.S. presidency no longer works well. I consider the defects in the presidency at some length in “Two Presidents Are Better Than One: The Case for a Bipartisan Executive Branch.” In this and upcoming posts, I will discuss some of the key problems with the presidency.
For example, barely more than 50 percent of the public has a voice in the policymaking decisions that emerge from the Oval Office. While presidents may once have aspired to act as the representative of all Americans, and George Washington may actually have done so, contemporary presidents generally hew to the views of their partisan base. Even when they attract only 53 percent of the popular vote, presidents claim a broad mandate for their partisan platforms and remind the other side that “elections have consequences.”
All citizens want to have a voice in their government, but nearly half the public is denied a chance for meaningful input into the development of presidential policy. This is fundamentally unfair. To paraphrase John Stuart Mill, instead of having an executive branch “of the whole people by the whole people, equally represented,” the United States has an executive branch “of the whole people by a mere majority of the people, exclusively represented.” Or as Jill Lepore wrote in The New Yorker last month, “one-half of the people ought not to be ruled by the other half.” (To be sure, Lepore was speaking about women being ruled by men, but the point still stands.)
It’s not only unfair to reserve all of the presidential power for half of the country, it also fans the flames of partisan conflict. We should not be surprised that when people are denied representation, they become receptive to a policy of obstruction that might enhance their chances of winning back power. In my next post, I will discuss the modern presidency and partisan conflict.
Earlier this week, I wrote about the link between health insurance and health and suggested that socioeconomic factors such as education and wealth can be much more important for health than access to health care. There are some interesting studies in this area.
For example, researchers looked at health outcomes in England under that country’s National Health Service (NHS) and found that the higher the socioeconomic status of a person, the lower the death rate. People in the highest civil service grade for government employees had a mortality rate about half that of people in the lowest civil service grade, even though they all had good access to health care. In addition, the gap in mortality rates among men in England by socioeconomic status has actually widened over time since the introduction of the NHS in 1948.
Or consider an interesting policy experiment in Canada during the 1970s. For four years, the province of Manitoba guaranteed a minimum annual income for all residents of Dauphin, a small, rural city. Health status improved significantly. When Dauphin residents were compared with residents of other rural communities in Manitoba, the data showed that while people in Dauphin were more likely to be hospitalized before implementation of the minimum income program, the gap in hospitalization rates disappeared by the end of the program. The decline largely occurred for hospitalizations that tend to be sensitive to levels of income security.
And the improvements in health status cannot be attributed to better access to health insurance. Manitoba had implemented a program of universal health insurance before the minimum income experiment, so the income benefits did not affect health insurance status.
U.S. data also illustrate the value of socioeconomic interventions for promoting health. Studies have found that the provision of housing for chronically homeless individuals decreases the number of hospital admissions, shortens the duration of hospitalizations, and reduces overall health care costs substantially.
No doubt there are important political reasons for dedicating dollars to improving health care coverage rather than socioeconomic status, but we’re not making the wisest investments with our limited resources.
[cross-posted at HealthLawProfs and PrawfsBlawg]
The Affordable Care Act might not bend the cost curve or improve the quality of health care, but it will save thousands of lives, as millions of uninsured persons receive the health care they need. At least that’s the conventional wisdom. But while observers assume that ACA will improve the health of the uninsured, the link between health insurance and health is not as clear as one may think. Partly because other factors have a bigger impact on health than does health care and partly because the uninsured can rely on the health care safety net, ACA’s impact on the health of the previously uninsured may be less than expected.
To be sure, the insured are healthier than the uninsured. According to one study, the uninsured have a mortality rate 40% higher than that of the insured. However, there are other differences between the insured and the uninsured besides their insurance status, including education, wealth, and other measures of socioeconomic status.
How much does health insurance improve the health of the uninsured? The empirical literature sends a mixed message. On one hand is an important Medicaid study. Researchers compared three states that had expanded their Medicaid programs to include childless adults with neighboring states that were similar demographically but had not undertaken similar expansions of their Medicaid programs. In the aggregate, the states with the expansions saw significant reductions in mortality rates compared to the neighboring states.
On the other hand is another important Medicaid study. After Oregon added a limited number of slots to its Medicaid program and assigned the new slots by lottery, it effectively created a randomized controlled study of the benefits of Medicaid coverage. When researchers analyzed data from the first two years of the expansion, they found that the coverage resulted in greater utilization of the health care system. However, coverage did not lead to a reduction in levels of hypertension, high cholesterol, or diabetes. Also, in a nationwide study of people ages 50-61, researchers looked at the study subjects’ access to health care and their health outcomes for the next 18 years. As expected, insured individuals used more health care resources than did uninsured people. However, there was no evidence that being insured lowered the risk of death 12-14 years into the study, and only mild evidence of a mortality benefit at 16-18 years.
All of this is not to say that health care does not matter. Rather, it is not clear how much more ACA will do for the health of the previously uninsured than did the pre-ACA safety net. The safety net is porous, but it provides important benefits to the uninsured. In addition, ACA’s impact will be limited because it put most of its money on treatment, and that was not a wise bet. It has long been clear that public health interventions do more to promote health than do treatments of disease. It also may be true that health care coverage is a necessary but not sufficient factor in improving a person’s health. The uninsured face many barriers to receiving good health care, and they often may need other kinds of assistance to ensure that they realize the full benefits of health care coverage.
In the end, the benefits of ACA may lie more in its contribution to economic health than to physical health. Support for ACA was driven in large part by concerns about the extent to which health care costs were overwhelming family budgets and forcing Americans into bankruptcy. ACA will greatly reduce the financial burden from health care needs, and this is very important.
[cross-posted at HealthLawProfs and PrawfsBlawg]