Thursday, 23 March 2017

Copyright Should End With Death

Copyright protection of artistic creations (including books, music, visual art, etc…) should end with the death of the creator. Upon death, the work should immediately enter the public domain.

The main reason is that there is no compelling justification for extending it past death. Copyright protection, and with it exclusive marketing rights over the creation, is best justified as a means of incentivizing artistic creation. This protection gives creators a much better chance to monetize their work than they would otherwise have, thereby providing the means and/or encouragement for such creation. Once the creator is dead, though, they can no longer enjoy the monetary benefits of this protection, which severely diminishes its incentivizing effect.

I say severely diminishes, not entirely eliminates, because some creators might like to secure the royalty money for their descendants after their death. But this consideration is only relevant when the creator is so old or sick that they foresee their relatively imminent demise. Otherwise, they will be around to receive the copyright-protected royalty stream, and will remain incentivized to continue creating. Nevertheless, it is possible that removing copyright protection after death might disincentivize old or sick creators, which should be counted as a (small) negative of my proposal.

‘Now hold on,’ a critic may object. ‘Taking away copyright protection upon death is like levying a large inheritance tax on the dead person’s assets. And I thought you were against such taxes!’ In response: I am indeed against inheritance taxes, but I consider the current proposal to be a fundamentally different beast. The reason inheritance taxes are bad is that they encourage old people to spend down their accumulated capital on consumption, since the tax will diminish their ability to pass capital assets on to their loved ones after death. This results in a withdrawal of capital investment from the economy, which reduces wages and productivity. No such effect is present in the case of removing copyright protection upon death. In fact, there is reason to believe it would have the reverse effect. Knowing that their descendants wouldn’t be able to rely on post-death royalties, creators might be encouraged to save and invest more of their pre-death royalties, and then leave a legacy of capital assets to their descendants instead. This would result in an injection of capital investment into the economy, which would increase wages and productivity.

Besides this, there is the more obvious benefit of consumers getting much cheaper access to the creator’s works right after the creator’s death, rather than having to wait the further 50-70 years that current copyright terms impose. Plus, copyright of any kind is getting more and more difficult to enforce, so it makes sense for authorities to focus their enforcement resources on the much more important protection of living creators’ works, and to allow dead creators’ works to enter the public domain immediately.

In sum, my proposal has three main benefits: earlier, cheaper access for consumers; better focusing of enforcement resources; and the encouragement of capital accumulation. These are set against one small drawback: the potential disincentivizing of old or sick creators who believe they are near death. I would say that the benefits most definitely outweigh the costs; therefore, copyright protection should be removed following the creator’s death.

Monday, 20 March 2017

Post-Secondary Education Policy

My contention is this: there is no good reason for governments to subsidize post-secondary education in any way, shape, or form.

Because here’s the thing: either the educational investment is a good one, in which case the student will be able to make up for the cost of the education in extra future income, or the educational investment is a bad one, in which case the government shouldn’t be wasting taxpayer money on it.

Let us consider the good investments first. Assume that with only a high school education, you could earn a yearly salary of $40,000, while with a Computer Science Bachelor’s Degree, you could earn a yearly salary of $90,000. Now assume that the tuition required to obtain this degree is $50,000. A loan provider offers you a deal: he will pay your $50,000 tuition, in exchange for $80,000 paid over the four years following completion of the degree. You accept the deal. Once you have your degree and get your $90,000 job, you take $20,000 off your salary each year to pay the loan provider. You effectively earn $70,000 in each of these four years, $30,000 more than without the degree, and after four years, you are free and clear. You win, the loan provider wins, the university wins, the skills-seeking employer wins: everyone is happy.
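
To see the arithmetic behind this example at a glance, here is a minimal sketch in Python using the hypothetical figures from the scenario above; none of these numbers are real tuition rates or salaries:

```python
# Hypothetical figures from the scenario above (not real market data).
tuition_loan = 50_000        # paid up front by the loan provider
total_repayment = 80_000     # owed over the 4 years after graduation
salary_no_degree = 40_000
salary_with_degree = 90_000
repayment_years = 4

annual_repayment = total_repayment / repayment_years      # $20,000/year
effective_salary = salary_with_degree - annual_repayment  # $70,000/year
annual_gain = effective_salary - salary_no_degree         # $30,000/year

print(f"Annual repayment: ${annual_repayment:,.0f}")
print(f"Effective salary during repayment: ${effective_salary:,.0f}")
print(f"Annual gain over the no-degree baseline: ${annual_gain:,.0f}")
print(f"Loan provider's gross return: ${total_repayment - tuition_loan:,.0f}")
```

Even while repaying the loan, the graduate comes out $30,000 per year ahead of the no-degree baseline, which is precisely what makes the deal mutually advantageous.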

In this situation, there is clearly no need for the government to step in. When a deal is mutually advantageous to all parties, the free market will make it happen. If you are a bright young person with high marks and a clear aptitude for your chosen, practical discipline, loan providers will rush to make such deals with you, because it is likely that you will be able to succeed in the job market and pay them what they’re owed.

Now let’s consider the bad investments. You want to do a degree that is unlikely to make you much more valuable to employers than someone with just a high school education (i.e. something like ‘Women and Gender Studies’). Loan providers will be reluctant to lend you money for this education because of the high risk that you won’t pay it back with the requisite interest. So should the government step in and fund this person’s education? No! Why should taxpayers be forced to pay for the luxury spending of university students?! A degree whose purpose is to ‘introduce students to differing perspectives’ is just that: luxury spending. One does not need a fancy university education in order to be exposed to differing perspectives: an Internet connection will do just fine for that. If a student (or their parents) can afford to pay for this kind of education, then they should absolutely be entitled to do so. But there is no good reason for the government to subsidize it.

Oh, and one more thing: the government shouldn’t be in the business of determining which post-secondary institutions are worthy of being ‘degree-granting’ institutions and which are not. Private professional, trades, or scholarly associations are perfectly capable of deciding which institutions and educational programs meet their desired criteria, and of certifying them on that basis. They will certainly be better qualified to do so than governments, which often care more about political considerations when making such decisions than about the things that actually matter to the future employers or colleagues of these students.
   


Tuesday, 28 February 2017

Fixing Family Law

Consider this scenario: I (a man) get married to a woman. We have two children and buy a house. Seven years later, my wife tells me that she wants a divorce (women are statistically more likely than men to initiate divorce; 66% of UK divorces in 2011 were initiated by women[1]). She asks me to sign the divorce papers; I refuse. I don’t want my family to be broken up! Alas, I cannot stop it on my own. She files for a contested divorce in court[2].

What can I expect from this court proceeding? Nothing good, I’m afraid. According to Statistics Canada, in the year 2000, just over 50% of custody orders gave mothers sole custody, while joint legal custody was awarded in about 37% of cases[3]. ‘But I’m a good dad,’ I tell myself. ‘Surely the court will grant me joint custody.’ And they do; but only joint legal custody. Joint legal custody only means that I have a say in some major decisions involving the child (especially healthcare and education)[4]. The question of physical custody (i.e. where the children actually reside) is very different. Here, according to a survey conducted by Statistics Canada for the years 1998-1999 (seemingly the most recent Canadian data on the matter), when custody was determined by a court order, joint physical custody (children splitting their time roughly evenly between the two parents) was ordered in only 17.1% of cases (father custody in 5.2%). And as if these odds weren’t bad enough, in only 5.6% of cases did the courts order joint physical custody and the children actually end up with joint living arrangements. In most cases, the children just resided primarily with the mom anyway, with the result that a whopping 88% of cases involving court custody orders ended up with the children living primarily with the mother[5].

In other words, unless I can prove in court that my ex-wife is a terrible mother (which I can’t, because she’s not), it is highly likely that the children will end up living primarily with their mother. As for the house, it is highly likely that the court will either force us to sell it and split the proceeds, or award it to the spouse with whom the children will primarily reside (that is, in the vast majority of cases, the mother).

The upshot is this: as a result of the divorce (initiated by my wife), I am kicked out of my home and prevented from playing a major role in my children’s lives (I will probably only be allowed to see them every second weekend, and on a few weekday evenings in between). In one fell swoop, and through no fault of my own, I lose my wife, my children, and my home. Now I ask you: is it desirable that we have a family law system that can do this to me? I don’t know about you, but I say: HELL NO!

So, let’s think of a way to fix it. Currently, family courts claim to make their custody decisions based on ‘the best interests of the child’. There is nothing wrong with this basis, but the fact is that the ‘best interests of the child’ are almost always best served by their parents not getting divorced in the first place. There are a number of reasons for this. Firstly, a child is better off growing up in a household with two adults who are biologically programmed to love them than with just one. More love, more support, more attention.

Secondly, men and women tend to have different parenting styles, and each has a deeper understanding of the issues of their own gender than of those of the opposite gender. A child getting the benefit of both of these complementary styles and perspectives will often be better off than one getting a more one-sided approach from a single parent. This is especially so for a male child living only with his mother (a fairly common occurrence): he misses out on a masculine parenting perspective and will probably feel like his mother alone can’t really understand him (similar considerations apply to the rare case of a female child living only with her father).

Thirdly, children tend to do better in a stable, rather than a chaotic, environment. Divorce will always inject a certain amount of turmoil into a child’s life, especially if it means losing the house in which they were living, getting shuttled between different parents’ houses on different days of the week, or having new ‘step-parent’ figures enter and exit their lives.

These three reasons, and others, help explain the fact that children of divorced parents are more likely to experience psychological problems of various kinds than children of stable families[6]. As such, it is quite clear that, apart from cases of domestic abuse, the best interests of the child are best served by the parents remaining together. Very well, but of what use is this fact to the family court judge? It’s not as though he can order the parents to stay together! What he can do, however, is make decisions in a way that disincentivizes both the mother and the father from initiating a divorce or sabotaging the marriage.

What might this look like? Basically, the judge would award primary custody of the children to the parent who is least at fault for the marriage breaking up. In other words, the parent who initiated the divorce, or the parent who rendered themselves such an intolerable pain in the ass to the other parent as to basically force the other parent to end the marriage, should thereby forfeit primary custody of the children to the other parent. Enough of this ‘no-fault divorce’ crap; more often than not, there in fact is a parent who is more at fault for precipitating the divorce than the other. Such a parent has done a grave disservice to their children and, as a result, should be penalized with less access to them, not rewarded with more power over them, as frequently occurs in the current system.

A couple of points of clarification and expansion. Firstly, in cases where it is determined that fault lies pretty evenly with both parents, what I have said does not apply and custody/property issues can be decided on the current grounds. Secondly, the spouse at fault for the divorce, whether the marriage involves children or not, should be granted no right to alimony payments from the richer spouse. This is in accordance with the same principle as above: initiators or precipitators of divorces should not, as much as possible, be allowed to benefit financially or in child-rearing terms from the divorce that they pushed for. This will, again, serve to disincentivize divorce, and to help prevent the ridiculous plundering of the richer spouse that often occurs in the current system. Assets accumulated during the marriage will, however, continue to be split roughly evenly between both spouses in a divorce. The matrimonial home, as in the current system, will be an exception to this rule, but under my system, its full value and title will go to the party deemed less at fault for the divorce. Thirdly, child support payments will continue as before, with the non-custodial parent paying child support to the custodial parent. And finally, parents who wish to voluntarily swap custodial/non-custodial roles at a later date should always be free to do so.

Another benefit of my proposed system that should be noted is that it will likely encourage more marriages and more reproduction. Currently, many men are wary of getting married, because they are aware of how stacked against them the family law system is. As a result, it is often the woman that is cajoling the man into marriage, rather than the other way around. Perhaps if more men were confident that they would be treated fairly in case of divorce, rather than being scared that the life that they built during marriage could be brutally torn apart on the mere whim of their wife, they would be more willing to settle down and get married.

My proposal is bound to be controversial, so I will now take the opportunity to address some objections that will likely arise to it. Firstly, it could be objected that my system will simply pressure couples involved in unhappy marriages to stay together, when really the marriage should be broken up for both of their sakes. In response, I would distinguish two cases: childless marriages and marriages involving children. In the case of childless marriages, the only real loss for the party initiating the divorce is the loss of their share of the matrimonial home, if there is one. This is not really that great of an impediment to divorce if the person considers the marriage to be truly unhappy. In the case of marriages involving children, parents who truly care for their children will be willing to endure a less than happy relationship with their spouse for the benefit of those children. If they don’t care that much for the children, then losing primary custody of them by initiating the divorce shouldn’t be all that devastating.

Secondly, it could be objected that determining fault for a given divorce will be a very difficult, and often highly subjective, process for the judge. In response, I would say that it is certainly no more difficult or subjective than determining how ‘the best interests’ of children (that the judge knows very little about) would best be served, or determining a ‘fair’ division of property between the spouses. In fact, I would say that determining fault is a far more objective determination than what judges have to do in the current system. In the case of an initiation of divorce in an obviously non-toxic marriage, it is crystal clear: the initiator is at fault. In the case of a toxic marriage made so by the actions of one spouse, the other spouse will have ample opportunity to collect evidence for their case before initiating the divorce proceeding. Just have a tape recorder handy for a bunch of choice interactions, remain polite and respectful at all times in order to prevent counter-evidence, and your case is basically made. And finally, in the case of a toxic marriage made so by the actions of both spouses, there will probably be ample evidence of unpleasantness presented by both sides, in which case the judge can just make a ruling of ‘fault unclear’ and revert to the current criteria.

Thirdly, it could be objected that if a stay-at-home parent were deemed at fault for a divorce, the consequences would be excessively brutal. They would lose their house, a lot of access to their children, and would be left with no income due to not getting any alimony. In response, I would say that the primary duties of a stay-at-home parent are maintaining the marriage and caring for the children. If such a parent were to initiate or precipitate a divorce, it would represent an utter dereliction of those two duties. Rewarding such dereliction, as is done in the current system if the perpetrator is female, is a good way of encouraging others to follow their example. But I would prefer that others not follow their example, and for that, we must ensure that dereliction and failure have consequences. Although such consequences would, admittedly, be quite severe under my system, I think that they are justified in this instance.

So there you have it: a system of family law that truly takes the interests of the children, and those of the innocent spouse, into full account. May it be implemented soon!            






[1] http://www.telegraph.co.uk/men/relationships/10357829/Why-do-women-initiate-divorce-more-than-men.html
[2] http://family-law.freeadvice.com/family-law/divorce_law/spouse-refuses-to-sign-divorce-papers.htm
[3] http://www.justice.gc.ca/eng/rp-pr/fl-lf/divorce/2004_6/f3_1.html
[4] http://www.legalmatch.com/law-library/article/legal-custody-definition.html
[5] http://www.justice.gc.ca/eng/rp-pr/fl-lf/divorce/2004_6/p3.html#f3_1
[6] http://www.marriage-success-secrets.com/statistics-about-children-and-divorce.html

Sunday, 12 February 2017

In Praise of School Choice

On February 7, 2017, Betsy DeVos was confirmed as the Secretary of Education of the Trump Administration. The confirmation process was a particularly contentious one, forcing the Vice-President to cast a tie-breaking vote in the Senate in favour of confirmation. One of the main reasons for the controversy was Ms. DeVos’ favorable view of ‘education vouchers’ and ‘charter schools’, two policy mechanisms favored by the ‘school choice’ movement. In a March 2015 speech, Ms. DeVos said: “Let the education dollars follow each child, instead of forcing the child to follow the dollars. This is pretty straightforward. And it’s how you go from a closed system to an open system that encourages innovation. People deserve choices and options.”[1]   

‘Choices and options?! What kind of right-wing monster could be in favour of those?!’ the leftists bray. ‘Such things will only increase education inequality!’ To counter such nonsense, let us examine the workings and benefits of the most thoroughgoing school choice policy: education vouchers.

Educational Vouchers: The Basics
In 2013, the average amount of public school spending per pupil was $10,700[2]. The idea behind an education voucher system is that, rather than governments spending this money to fund ‘free’ public schools, they would distribute it to the families of individual students in the form of vouchers that could only be spent on approved private education services. Under this system, education providers are rewarded monetarily depending on how many students they are able to attract, unlike in the public system, where schools receive a set amount of funding regardless of performance. The private, competitive education market of the voucher system maximally incentivizes education providers to cater to the demands of the education consumers (families of students, or the students themselves); something which the public, monopolistic model conspicuously fails to accomplish.

So that’s the basic principle, but what kind of concrete changes to the education system could we expect as a result of implementing the voucher system? The most noticeable change would be a significantly more diverse array of educational options, as education providers sought to appeal to specific target markets in their quest for voucher money. For the majority of this post, I will discuss different aspects of the increased educational diversity that we might expect.

Secular versus Religious
I begin with this one, not because I expect it to be the most important consideration for voucher-wielding parents, but because it is the one that critics of school choice carry on about endlessly. Yes, it is true: under a voucher system, religious parents would be able to send their children to religious schools.  These schools, it is alleged, will focus more on religious indoctrination than on actual education, thereby dooming their pupils to lives of ignorance and zealotry.

There are several problems with this statement. Firstly, it makes the unwarranted assumption that the chance to religiously indoctrinate their kids some more matters more to most religious parents than their children’s academic and professional success. This is very unlikely to be the case. Take, for instance, Jesuit High School: a private, Catholic, secondary school in Portland, Oregon. On the ‘Academic Snapshot’ page of its website, the school brags that 99% of its students continued on to college, and that their average SAT scores in all three areas (Reading, Writing, and Math) were better than the State and National averages. It also brags about how many hours its students volunteered, how qualified its teachers are, and the school’s favorable student-to-teacher ratio.[3] Nowhere on the page does it brag about how boss its students are at theology, or about how many Bible verses the average student can recite by heart.

This makes sense because, contrary to the ignorant beliefs of rabid secularists, most religious parents actually want their children to succeed in their academic and professional lives, not just their ‘spiritual’ one. Most religious schools competing for voucher money will cater to this desire, and will prioritize general academic accomplishment over religious content in their programs as a result. But, in case the paranoid secularists are still worried about ‘indoctrination academies’, the government could simply formulate a policy that sets a maximum amount of religious content in the institution’s curriculum (say, 25%) as a condition of the institution remaining eligible for voucher payments. This policy will likely prove superfluous, but whatever; it won’t do much harm to have it on the books.

A second problem with the statement is that it assumes that only religious schools are into indoctrination. This is far from the truth; secular, public schools attempt to indoctrinate kids all the time! Just recently, the Toronto District School Board (TDSB) mandated that all schools include an ‘acknowledgement’ that the school is situated on the ‘traditional territories’ of specifically named ‘indigenous tribes’ in their morning announcements; every single morning![4] This is not a mere history lesson, as some defenders of the policy claim. We know that it is indoctrination because it is repeated, with ritual-like regularity, every single school morning. It seeks to indoctrinate children with the ridiculous notion that only members of indigenous tribes are the ‘real’ owners of the land, and that the rest of the population (the settlers) are illegitimate interlopers, a crime for which they must pay with endless guilt and reparations towards the remaining indigenous peoples. Sound familiar? Why yes, it is the religious doctrine of sin and repentance! See, supposedly secular people can be into what looks suspiciously like religious indoctrination.

Speaking of sin and repentance, how about the doctrines of environmentalism and climate change? These controversial doctrines are taught as gospel truths in many secular public schools (including the ones that I attended). They claim that mankind has sinned through his pursuit of consumer goods (consumerism) and use of fossil-fuel-powered machines (industrialism). Only by repenting and giving up his sinful ways can he avert the hellish scourges of environmental contamination and catastrophic global climate change. Never mind that environmental contamination could best be addressed via better-defined private property rights and free-market mechanisms, and that the science of anthropogenic climate change is still hugely uncertain. To the secular indoctrinator, the truth is clear, and that truth is environmentalism.

And speaking of morning rituals, how about the ritual of singing the national anthem every single morning in schools? The purpose of this is obvious: to indoctrinate children with the doctrines of nationalism and patriotism.    

And these are just a few examples of secular indoctrination. Here are a few more: the lionization of the United Nations; the teaching of the fallacious economic theories of John Maynard Keynes as absolute truth; the emphasis on ‘sharing’, ‘group work’, ‘fairness’, and other collectivist principles; and strict adherence to the latest principles of left-wing ‘political correctness’, especially as it relates to racial and sexual-orientation minorities. And these are just off the top of my head. The point is that worldview indoctrination, whether the worldview is secular or religious, is an almost inescapable part of any kind of education. As such, to the champions of public secular education who complain about indoctrination in religious schools, I respond with a quote from the book which they loathe: “And why beholdest thou the mote that is in thy brother’s (religious) eye, but considerest not the beam that is in thine own (secular) eye?”[5]

Local versus Out of Area
This is another controversial one, but one that I believe to be very important. Often, in the public school system, kids are forced to attend the school that is in their family’s local area (school zones). Now, there are definite advantages to going to a local school (most notably, saving time and money on transportation), but there can also be significant disadvantages in certain cases. These are most obvious in the case of local schools in impoverished, or ‘ghetto’, areas.

The behaviors, attitudes, and mindsets of a student’s classmates (in other words, the social environment of the school) constitute an important factor in determining what that student will get out of a program of education. In many ghetto areas, the social environments of the local schools are very unconducive to learning. Bad parenting of many of the kids, often the result of the (usually single[6]) parent lacking the requisite time or intelligence, causes widespread behavioral issues (acting out, unwillingness to learn, violence) in the school’s student population. Lacking a stable family, a number of these students join violent/criminal gangs as a kind of substitute. Gang politics and winning ‘street cred’ become more important to these students than actually learning academic subjects at school. Those few students who actually do prioritize academics over these things are often excluded from the ‘cool’ social groupings, and mercilessly bullied by the more thuggish students.

In my view, the biggest tragedy of this situation is that many bright, academically-talented kids from these areas are prevented from living up to their potential. They are forced into a chaotic school environment where classes are constantly disrupted by their behaviorally-challenged classmates, rendering academic instruction very difficult. In addition, tremendous social pressure is exerted on them to conform to the culture of thuggery and ignorance favored by their peers. For the government to force these kids into these kinds of schools is not just bad education policy; it is also incredibly cruel.

With a voucher system, these kids could get away from the toxic social environment of their local school by attending an out-of-area school in a friendlier part of the city. Getting there would be more expensive and time-consuming, but compared to the benefits of a far more conducive social environment, this is a small price to pay. Part of the total voucher amount for these kids could be used to pay for the extra transportation (whether by school bus or by public transit). It is true that this would leave a reduced sum for tuition, but there would still be more than enough to pay for an educational experience far superior to that provided by the local ghetto schools.

Pace and difficulty of education
Especially bright, average, and especially slow students learn at different paces and levels. If they are all stuck in the same class, the teacher generally adopts a pace and level conducive to the average student, thereby leaving the bright kids bored and the slow kids struggling. Better to separate these groups of kids into different classes so that the pace and level of instruction can be more optimal for them.

While they make an effort to do this in the public system with gifted programs and such, free-market competition under a voucher system, where catering to specific target markets is highly encouraged, would most certainly result in more such differentiation.

Other differentiation factors
-   Size of school.

-   Online, more self-directed, learning versus in-class, more guided, learning.

-   Pedagogical approaches (content-focused versus skills-focused, rote learning versus creative learning, strict discipline versus more student freedom, etc…)

-   Facilities (more/better facilities such as gyms, libraries, cafeterias, fancy furniture, etc… but higher tuition/less money spent on straight academics; or less/worse facilities but lower tuition/more money spent on straight academics).

-   Student to teacher ratio.

-   Subject specialization (art school, math/science school, technical school, humanities school, etc…)

-   General education versus career-oriented education.

Higher Quality Overall
Not only would a voucher system result in far greater variety and choice when it comes to education; it would also tend to produce a higher quality system all around. The public system is an uncompetitive, politicized monopoly that caters to teachers’ unions who prioritize the job security of their members over the quality of teaching in the system. The free-market system would be a competitive, consumer-oriented array of different educational providers who would leave such anti-social unions in the dust. For these reasons, the latter would tend to drive up quality and be more open to innovation overall.

Conclusion
If one is a big fan of coercive social engineering and the mean-spirited, leveling-for-leveling’s sake version of egalitarianism, then a public education system is the best choice. If, on the other hand, one is interested in higher quality, more innovative, more diverse, more customizable, and similarly accessible education for the world’s children; then a free-market system, paired with redistribution via education vouchers, is the best choice. And if the latter is more your cup of tea, then the confirmation of school choice advocate Betsy DeVos as education secretary should be a cause for celebration, no matter what she may or may not have said about guns and grizzly bears.   





[1] http://www.foxbusiness.com/politics/2017/02/07/education-secretary-betsy-devos-on-school-choice-vouchers-and-religion.html
[2] http://www.census.gov/newsroom/press-releases/2015/cb15-98.html
[3] http://www.jesuitportland.org/page.cfm?p=419
[4] http://www.cbc.ca/news/canada/toronto/tdsb-indigenous-land-1.3773050
[5] King James Bible, Matthew 7:3.
[6] https://newsone.com/1195075/children-single-parents-u-s-american/

Friday, 3 February 2017

The Case for National Origin Discrimination in Immigration Policy

On January 27, 2017, President Donald Trump issued an executive order barring aliens (non-US citizens) from Iraq, Syria, Sudan, Iran, Somalia, Libya, and Yemen from entering the United States for at least 90 days[1]. The order resulted in about 600 people either being blocked from boarding flights to the US or being prevented from leaving the airport upon arrival. Of the 394 of these who were permanent residents of the US, all but two were eventually let in, although some were detained for multiple hours first[2].

The order sparked outrage and protests from the left, and not just because of its chaotic implementation. Many strongly objected to the principle of excluding people from the country on the basis of their national origins, denouncing it as ‘racist’ and ‘intolerant’. It is this principle which we shall evaluate in this post.

Excluding a person from a country involves two different things: 1. Denying the person the right to live long-term in, work long-term in, or participate in the political institutions of, the country (immigration ban). 2. Denying the person the right to enter the country for any purpose whatsoever (travel ban). The latter is clearly more all-encompassing than the former, so we will start with its evaluation.

A travel ban based on national origin is, in most cases, inadvisable. There are several reasons why this is the case. Firstly, it cuts down on the number of tourists spending their foreign resources in the country; something which is almost always a negative for the country’s economy. Secondly, it prevents outstanding academic, cultural, and entertainment figures with the banned origin from visiting and sharing their unique knowledge, skills, or art with the country; a culturally-impoverishing proposition. Thirdly, it is unnecessarily antagonistic to the countries on the banned list; which is likely to cause problems in diplomatic and trading relationships. Finally, it prevents citizens of the country from hosting their relatives with the banned origins; something which is likely to make them feel alienated and like second-class citizens.

Ah, but what of terrorism? Preventing terrorists with the banned origins from committing acts of violence in the US was the stated rationale for Trump’s travel ban. But there is little reason to believe that such a ban will be at all effective at preventing acts of terror. The leaders of terrorist organizations are smart and resourceful people; if a ban is in place, they will find ways around it. That could involve radicalizing and convincing citizens of the target country to commit terrorist acts, or it could involve getting a terrorist with a national origin that is not on the banned list into the target country. Though travel bans may prove inconvenient to terrorist organizations, they are unlikely to be effective at actually stopping terrorist acts. 

Alright, so a travel ban is no good: but how about for immigration? Here, a strong case can be made for the desirability of discriminating on the basis of national origin. The general culture (including the political culture) of a country is determined by the views and values held by its individual citizens. If the majority of citizens hold ‘traditional’ views on most issues, the country will have a ‘traditional’ culture. Likewise, if the majority of citizens hold ‘progressive’ views on most issues, the country will have a ‘progressive’ culture. The same applies to sub-regions or sub-groupings within a country, such as states/provinces, cities/counties, or communities of various kinds. Each of these will have some kind of general culture, determined by the views and values held by its individual residents/members.

Immigration policy for a desirable, developed country is all about selecting which, of the long list of people who wish to immigrate, to invite into the country as permanent residents and, eventually, citizens. Most would agree that economic considerations should be an important part of the selection process. Applicants with in-demand labor abilities (and the language skills necessary to use them in the country) should be given priority over those without them. Such people are more likely to prove a net benefit to the country economically.

While important, economic considerations are not everything. Cultural considerations also play an important role in determining whether a given applicant will make a desirable immigrant or not. To take an extreme example: imagine that an applicant has impeccable economic credentials, but that they also believe that women should be chattel slaves to men, that homosexuals should all be beheaded, and that richer people should be able to hit poorer people without facing any legal consequences. If you were someone who held ‘progressive’ cultural views, would you really think that such a person would make a desirable immigrant? Of course not! Such a person would either push the country’s culture further away from where you want it to be (however minutely), or else lash out violently against a culture they despise and cause harm to people. Either way, the cultural values that they hold prevent them from being desirable immigrants.

Now, what does all this have to do with the desirability of national origin discrimination? Basically, national origin is the best proxy (however imperfect) that we have for determining what cultural values an immigration applicant might hold. And for cultural considerations, we need a proxy, because it is obviously impractical to try to determine an applicant’s beliefs directly by asking them. Any applicant with half a brain (and hopefully we are not considering anyone with less than this) will simply determine what the most-favored answers to the ‘cultural values’ questions are and then give them, whether they actually believe them or not. Also, such an evaluation would necessarily be highly subjective, which would leave the immigration official with altogether too much (easily abusable) discretion.

So, how exactly can national origin be used as a proxy for cultural values? The first step is to determine whether a given applicant is looking to immigrate for primarily economic, or primarily cultural, reasons. Cultural migrants are, usually, those applying either from other wealthy countries, or from elevated economic positions in poorer countries. They don’t expect to become all that much richer in the country to which they are applying, but they expect that the generally-held cultural (and political-cultural) values of that country will suit them better than those of their home country. For example: people with conservative/libertarian views from the ‘progressive’, socialist, Scandinavian countries looking to immigrate to the US; or secular professionals from Iran who sought to immigrate to the West after the 1979 Islamic Revolution in their country. With such people, it would be unwise to assume that their individual cultural values line up with the general culture of their home countries; in fact, something closer to the opposite assumption would probably be more accurate. It is precisely the culture of their old countries from which they are fleeing, which suggests that their individual values do not align with it in some fundamental way.

The case of economic migrants is very different though. These are applicants, usually from poorer countries, who wish to leave their old countries, not because they dislike the culture, but because they expect to have more economic opportunities and access to more resources in the country to which they are applying. For these, there is a fairly high probability (certainly more than 50%) that they share many of their individual values with the general culture of their home countries. Because after all, the general culture is, by definition, the majority culture, and this majority culture exerts a powerful influence on most people to conform to it so that they can feel included in their social group.

But hold on: wouldn’t, by the same logic, the cultural values of the immigrants begin shifting the moment they find themselves in a new country with a new general culture? Yes, but immigrants (especially economic immigrants) have a tendency to congregate into ethnic communities in their new country. This greatly retards the integration process, such that it can take multiple generations (if it happens at all) before the culture of the old country is discarded.

In light of all this, here’s how I think the immigration application process should work. First, officials should determine, to the best of their abilities and based mainly on objective markers, whether an applicant is primarily an economic or primarily a cultural migrant. If cultural, they should be assumed to be sufficiently sympathetic with the host country’s general culture, and evaluation should focus primarily on economic considerations. If economic, their national origin should become a factor in the evaluation. If their old country’s values are deemed to be largely antithetical to those desired by the new country’s government, then, unless they are truly stellar economically, their application should be rejected.

Let’s make this discussion more concrete. Economic migrants from Muslim-majority countries tend not to integrate very well, and are significantly more likely to hold oppressive, traditionalist views than members of western countries’ general populations. As illustration, take a survey of British Muslims done by ICM in April 2016. According to this survey, 39% of Muslims believed that ‘wives should always obey their husbands’, compared to 5% in the general population. Only 18% of Muslims believed that ‘homosexuality should be legal in Britain’, compared to 73% in the general population. And 87% of Muslims believed that ‘publications should not have the right to publish pictures that make fun of the Prophet Mohammed’, compared to 44% in the general population[3]. In sum, Muslims in Britain are more likely to be (genuinely) sexist, homophobic, and anti-free-speech than the general population. Since none of these attitudes is conducive to a free society (something that I, and many others, value highly), I think that nearly all economic migrants from Muslim-majority countries should be denied the right to immigrate to developed western countries.

It was something like this kind of national origin discrimination that Donald Trump had in mind when he called for a ‘Muslim ban’ during the campaign (something he later changed to ‘extreme vetting’ of those from Muslim countries), and when he signed the controversial executive order discussed above. The difference is that I think the focus should be narrowed to economic migrants applying for immigration, rather than to travelers generally, and that the main rationale is a cultural one, rather than an anti-terrorism one.

But whether I think that Trump’s specific policies on this front are good or not, one thing is clear: the left-wing position that the national origin of an immigration applicant should not be considered at all is silly. National origin is the best proxy we have for the cultural values held by an applicant, and cultural values are very important in determining whether an immigration candidate, if approved, will make a positive contribution to the country or not. To ignore all this because taking it into account might appear ‘racist’ is to limit the effectiveness of our country’s immigration policy in the name of ‘political correctness’. It is to put our country’s prosperity and societal harmony at risk in order to make some self-righteous leftists feel better about themselves. This is something that I must vociferously object to.




[1] http://www.npr.org/2017/01/31/512439121/trumps-executive-order-on-immigration-annotated
[2] http://www.thedailybeast.com/articles/2017/01/30/white-house-lowballs-impact-of-trump-ban.html
[3] https://www.icmunlimited.com/wp-content/uploads/2016/04/Mulims-full-suite-data-plus-topline.pdf

Wednesday, 1 February 2017

An Empowering Infrastructure System

As promised, this post examines what a maximally-empowering system of infrastructure policy might look like. The basic principle is fairly straightforward: if it can be done by private companies, then it should be done by private companies. Why? Because as we have discussed in many other posts, market-incentivized, competitive private businesses are generally better at giving the consumers what they want than monopolistic, politically-incentivized governments.

So, can private companies indeed be given responsibility for our infrastructure? Big government proponents will respond in the negative, arguing that because infrastructure is a ‘public good’, whose benefits are more difficult to monetize than the ‘private goods’ on the private market, it must be provided by the government, rather than by profit-seeking enterprises. Actually though, the benefits from most infrastructure can be monetized relatively easily, and hence it can be provided by a profit-seeking private business. In this post, I will show how.

Long-distance infrastructure
We begin with long-distance infrastructure, which includes highways, railways, and pipelines. With these, monetization is quite simple: just charge people based on how much of it they use.

What’s a bit more complicated is securing the property necessary to build this kind of infrastructure in the first place. Building it requires the builder to have ownership of, or right of way over, a large stretch of contiguous land parcels. Most landowners along the proposed route will be willing to negotiate a mutually advantageous right-of-way deal with the builder, but there is always the chance of a few stubborn holdouts who refuse to make a deal at any price. This small minority could render the builder’s infrastructure plan unfeasible.

What’s to be done? Well, here, the government could usefully intervene a little. If they deem the project to be in the country’s economic interests, they could use their power of eminent domain to force the holdouts to sell rights-of-way to the builder. Because it is a compelled sale, the price should be set at twice the going market rate, in order to compensate the reluctant seller for their loss, and in order to prevent overuse of this provision. While it is true that this would deprive the holdout of a great deal of power over his land holdings, this loss will most likely be outweighed by the economic benefits (and corresponding increase in general purchasing power) provided by the project.

This problem aside, there is really no need for the government to take any other role in the provision of long-distance infrastructure. Private business can do it just fine.

Urban Infrastructure
Now we come to another category of infrastructure: urban infrastructure. This includes roads, sewers, parks, powerlines, neighborhood rules (non-physical infrastructure), garbage collection, etc… Monetizing the benefits from this kind of infrastructure is a little more challenging than it is for the long-distance variety, mainly because it is so intertwined with people’s homes and neighborhoods. But therein lies the key to its monetization: its price can be included in the price of a home!

Here’s how it would work. A whole neighborhood (or district) would be owned by a single landlord (or condominium association). The landlord would rent out individual buildings to people in exchange for a regular monthly payment, and would use part of these payments to maintain the infrastructure of the neighborhood. Why? Because it’s part of the product that he’s selling! The nicer and more functional the neighborhood’s infrastructure, the more the landlord will be able to charge for the privilege of living in that neighborhood. In order for his neighborhood to remain competitive with others, it is in the landlord’s interest to provide the best possible infrastructure for the amount of money that tenants are willing to pay for it.

Some may object that, under this arrangement, the neighborhood landlords are given too much power, while the residents are deprived of the power that comes with independent home ownership. Unfortunately, independent home ownership and city living are incompatible. In the current system, people have the illusion of being independent home owners, but actually, it is more accurate to say that they are in a condominium arrangement. The local municipal government is like a condominium association, which, like such associations, is elected democratically, levies condo fees (property tax), and controls the condo’s (city’s) infrastructure.

In the private neighborhood model, there exists the option of organizing the neighborhood on a condominium basis, where residents have limited ownership of their individual buildings and the infrastructure is maintained by a pseudo-democratic condominium association, via condo fees that it levies from individual residents. Some people will prefer this arrangement as a way of feeling more in control of their neighborhood, although it is unlikely that condominium associations will be as responsive or attentive to consumer demand as a profit-seeking landlord.

Cross-neighborhood infrastructure
Some kinds of infrastructure (such as sewers, subway tracks, and power systems) will have to run through and service multiple neighborhoods in order to be effective. If these neighborhoods are owned by different landlords, though, won’t there be a problem of coordination? Perhaps a little, but it is in the best interests of the landlords to collaborate effectively on these matters. Why? Because just as whole industries compete against other industries for the consumer’s dollar, so would cities compete against other cities. It is often in the interests of the major, competing players in an industry to work together to achieve something that will benefit the industry as a whole (such as a specific law or policy change). Similarly, competing neighborhood landlords in the same city will tend to work together to make the city as a whole more attractive, as doing so is in all of their interests (they can attract tenants from other cities, and then raise their prices due to the extra demand). If collaborating on a sewer, power, or subway system is what is necessary to do so, chances are that such collaboration will be forthcoming.

Power analysis of private infrastructure system
So, private parties can indeed provide all of the infrastructure that governments provide. But is it desirable that they do so? Yes, it is; for a number of reasons.

Firstly, private businesses will provide better and more cost-effective infrastructure than governments. This is because, unlike governments, they are incentivized and competing with one another to do so. This will increase the purchasing-power of just about everyone in society.

Secondly, private businesses will allocate their infrastructure spending based on the anticipated priorities of the consumers, rather than the priorities of the government. Since one has far more sway as a consumer than one does as a political voter, this is a net positive for power levels.

Thirdly, the private neighborhood model for urban infrastructure will offer consumers of residential housing more choice than the municipal government model. Many neighborhood landlords will likely try to tap a niche market by offering up a distinctive ‘neighborhood experience’. There would likely be artist neighborhoods, professional neighborhoods, ethnic neighborhoods, ritzy neighborhoods, religious neighborhoods, child-friendly neighborhoods, more neighborly neighborhoods, etc… Each of these would have their own distinctive sets of rules, aesthetics, types of communal spaces, services, etc…; designed by the landlord to appeal to the distinct target market for that neighborhood.

This contrasts with the municipal government model, where the whole city is under the sway of one slow-moving institution. While there are still distinct neighborhoods, the rules and infrastructure choices are not really designed to appeal to specific target markets, or at least not as effectively as they would be in a private system.

Since the private system provides people with more choice, it also provides them with more power: a better opportunity to live in the kind of neighborhood that suits their own individual preferences, and thus more power over their surrounding environment, which is one of the most important kinds.

Transition plan
How would one go about transitioning from the current system to the proposed one? The municipal government would just need to slice the city up into reasonably-sized neighborhoods, then establish a condominium arrangement, governed by a standard contract, in each of them. The contract should have a provision, like most existing condominiums, where if some kind of supermajority (usually 75%) agree to sell their interests to a buyer interested in the whole neighborhood, then the holdouts can be forced to sell as well. This would provide a mechanism for converting a neighborhood into a landlord-tenant model, if enough people agreed.

Objection: bad for the poor
An objection that could be raised to this system is that under it, people will have to pay for all of the infrastructure benefits that they receive, unlike in the current system, where the government pays for infrastructure via taxation and then gives it away for free. This will be especially hard on the poor.

In response, I would say that if private businesses can do something far better than governments can, then they should, even if the government option happened to subsidize the poor. This is because there are more effective ways of subsidizing the poor: just give them some money, and then they too can access the far-superior private system. There’s no sense opting for an inferior system just because of a few poor people: just go with the better system, and then help the poor get what they need from it. And in this case, the private infrastructure system will be much better than the government one; I guarantee it.


Tuesday, 31 January 2017

An Empowering Monetary System

What might a maximally-empowering monetary system look like? Before I tell you that, I will begin with a brief description of the monetary system in the Anglosphere countries.

The Current System
At the center is an institution known, quite fittingly, as the ‘central bank’. The central bank possesses the power to legally create additional units of the national currency (either physically or digitally). Besides allowing it to control the country’s money supply, this power also allows it to act as the ‘lender of last resort’ to commercial banks. When a commercial bank is in danger of going bankrupt, the central bank can just whip up some new currency and lend it out, thereby helping the commercial bank to weather the storm.

Because indeed, without the central bank, commercial banks would be in constant danger of sudden insolvency. This is due to a practice that they engage in known as ‘fractional reserve banking’. When you deposit money into a checking or savings account at your bank, you would be forgiven for believing that the bank is simply holding your money in safekeeping for you (as the word ‘deposit’ would seem to imply). But you would be mistaken in that belief. Actually, the bank is only required to hold about 10% of the money that you deposit in reserve; the rest they can, and often do, loan out to other people and businesses at interest. Although this practice is highly profitable to the bank, it is also very risky. If, for whatever reason, the depositors’ confidence in the bank is shaken (perhaps due to the revelation of some high profile bad loans that the bank made), many will rush to the bank to try and withdraw their money before the bank goes under. This rash of withdrawals will quickly deplete the bank’s reserves and, because the rest of the money is tied up in loans, will render the bank insolvent (unable to fulfill its contractual obligations to its clients). Such an event is known as a ‘bank run’, and happened quite regularly before the advent of modern central banking in the early 20th century.

The inflationary nature of the current system
Central banks acting as ‘lenders of last resort’ have indeed been effective at preventing bank runs. Unfortunately, this blessing comes with a steep price tag: inflation. Inflation occurs when the quantity of money in a society is increased. Inflation leads to the reduction of the purchasing-power of each monetary unit, unless counterbalanced by a sufficiently-increased demand to hold money reserves or by a sufficiently-increased quantity of goods/services on the market. Central banks’ directly increasing the supply of money (printing/digitally-creating money) is one obvious source of inflation, but the practice of fractional reserve banking is another, more subtle, one.

It works as follows: a man deposits $100 into his checking account. Assuming a reserve requirement of 10%, the bank holds $10 of this in reserve, and loans out the rest ($90). Now, the first guy has $100 in his checking account, and the recipient of the loan has $90. Where there was once $100, there is now $190! But it doesn’t end there: those $90 will soon end up in bank accounts, and the process repeats. The depositors have $90 in their accounts, $9 is held in reserve, $81 is loaned out. Now there is $271! This process repeats until the money supply has been multiplied to about ten times the ‘monetary base’ (the part of the money supply directly controlled by the central bank).
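Since the process is just a geometric series, the figures above are easy to check in a few lines of code. This is a minimal sketch of the mechanism described in the paragraph, using the 10% reserve ratio assumed in the example.

```python
# Simulation of the deposit-and-relend process described above.

def total_money(initial_deposit: float, reserve_ratio: float, rounds: int) -> float:
    """Total account balances after repeatedly re-lending the non-reserved part."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit                # the new checking-account balance
        deposit *= (1 - reserve_ratio)  # the part loaned out, soon redeposited
    return total

print(total_money(100, 0.10, rounds=2))    # 190.0 -- the $190 in the text
print(total_money(100, 0.10, rounds=3))    # 271.0 -- the $271 in the text
print(total_money(100, 0.10, rounds=200))  # ~1000 -- about 10x the monetary base
```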

Some may object to this analysis by arguing that the first man, after the bank has done its magic, doesn’t really have $100. Rather, he has some kind of financial instrument worth $100, while the bank has $10 in cash and $90 worth of loan assets. Technically, this is correct; strictly speaking, ‘cold hard cash’ and ‘money in the bank’ are two separate economic goods. However, the demand for these two goods is highly interrelated. Most people, unless they wish to engage in large illegal transactions, don’t have a marked preference for cash over money in the bank. Why would they? Bank money can be used as payment at most commercial establishments (using a debit card), at the same rate as cash, and if, for whatever reason, cash is required, bank money can be converted into it at virtually no expense. As such, both goods fit into the same category of good: ‘generally accepted medium of exchange and means of final payment’. Most people will have a certain demand for this category of good, while the specific array of goods in this category that they choose to hold is mostly dictated by convenience. Evidence of this virtual interchangeability of cash and bank money can be found in the fact that their values, since the advent of modern central banking, have never deviated from one another by more than a negligible amount. And therefore, when discussing the monetary system overall, it is sound to classify both cash and bank money as the good ‘money’, and to consider it using a unitary supply and demand analysis.
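For readers who like their supply and demand analysis explicit, the textbook equation of exchange (M × V = P × Q) captures this unitary treatment of money. The sketch below is a supplementary illustration, and its numbers are entirely made up.

```python
# Equation of exchange: M * V = P * Q, so P = M * V / Q.
# M = money supply (cash + bank money, per the unitary analysis above),
# V = velocity of circulation (which falls when the demand to hold money rises),
# P = price level, Q = real quantity of goods/services traded.
# All numbers are illustrative.

def price_level(money_supply: float, velocity: float, output: float) -> float:
    return money_supply * velocity / output

print(price_level(1000, 2.0, 500))    # 4.0 -- baseline price level
print(price_level(1200, 2.0, 500))    # 4.8 -- 20% more money, 20% higher prices
print(price_level(1200, 2.0, 600))    # 4.0 -- offset by a matching rise in output
print(price_level(1200, 5 / 3, 500))  # 4.0 -- offset by increased demand to hold money
```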

All of this is to say that central banks, and the fractional reserve banks which they enable, are jointly responsible for virtually all of the inflation that occurs in modern monetary systems. Inflation, in turn, has two significant effects on the respective individual power levels of people in a society. We shall discuss them in turn.

Inflation’s redistributive effect
Firstly, inflation has a redistributive effect. Inflation benefits those who are economically closer to the source of the newly-created money, at the expense of those who are further away. This can be seen most clearly in the case of a criminal counterfeiter. Imagine the counterfeiter creates a bunch of perfect cash replicas, and spends them at his favorite local establishments. While the biggest beneficiary of the new money is the counterfeiter himself, these establishments also benefit via a direct increase in demand for their offerings. The establishment owners will then use the extra money to either consume or invest, thus benefitting those towards whom this extra monetary demand is directed. The process continues until the new money has flowed through most of the economy.

The problem is that, as the new money flows through the economy, it causes a tendency for prices to rise as a result of the increased monetary demand going around. This is not a problem for those who got a piece of the new money early, but for the economically-distant parties, it means that their buying prices will have increased before they themselves benefit from any of the increased demand, thus leaving them worse off.

A similar process occurs as a result of inflation orchestrated by the central bank and their fractional reserve bank cronies. In this case, one of the biggest beneficiaries of the new money will be the society’s government, due to the large quantity of government bonds that central banks tend to purchase with fresh money. Other big winners include: commercial lending banks, well-connected investment banks (such as Goldman Sachs), government employees and contractors, owners of real estate and stock market assets, and the many businesses who cater to these groups.  On the other hand, some of those most harmed include: anyone on a fixed income, workers for whom negotiating a raise is difficult, people who are economically/geographically far-removed from the centers of finance and government, and people with substantial money holdings.

Looking at these lists, it would appear that, in general, richer people are more likely to benefit from this kind of inflation, while poorer people are more likely to be harmed. Thus, this kind of inflation tends to effect a redistribution of resources from poor to rich. Since, in the previous post, we concluded that, taken in isolation, a redistribution from rich to poor is a net positive for aggregate individual power levels, the reverse of this, which is the result of central bank inflation, must be considered a net negative.

In addition, as we also discussed in the previous post, when power is redistributed from individuals to government (or to any large institution for that matter, although the problem is most serious with government), ‘leakage’ and ‘friction’ cause significant amounts of it to be lost. Since the kind of inflation that we are discussing has this effect, this must also be considered a negative.

Business cycle effect
Little known fact: central bank/fractional reserve bank inflation is the primary cause of the dreaded business cycle that plagues most modern economies. Here’s how. This kind of inflation is also known as credit expansion, because the new money generally begins its journey through the economy in the credit markets. Traditionally, central banks have favored government bond purchases as a conduit for expanding the money supply[1]. These bonds are purchased on the secondary market, usually from large investment banks, rather than directly from the government’s treasury. This increased demand for government bonds operates to reduce the interest rate (or yield) on such bonds. It also operates to reduce interest rates/yields in the credit markets generally. The non-central-bank capital that was tied up in the government bonds purchased by the central bank is now freed up and redirected, usually towards increasing the demand for some other kind of investment, which then operates to reduce the interest rate/yield on that investment, and so on. Eventually, most of the new money will gradually leak out of the credit markets and enter the wider economy, but generally not before exerting a powerful downwards pressure on interest rates. This is why central banks use bond purchases as a tool when they want to lower the market’s interest rates: it works.

So, credit expansion increases monetary demand in the credit markets before it increases demand in the rest of the economy. The result is a lowered interest rate: the same thing that would occur if people, in a non-inflationary environment, decided to direct more of their monetary demand towards the credit/investment market, and less towards the consumer market. However, with credit expansion, there is no necessity that people actually reduce their consumption demand. In fact, there is reason to believe that credit expansion causes people in general to increase their consumption demand. This results from the lower interest rates making saving/investment less lucrative, the inflation making cash holding more costly, and the illusion of prosperity that inflation creates among those who are less aware of the rising prices it causes.

Thus, we get a situation where both investment demand and consumption demand seem to be increasing. But how is this possible? Either resources are consumed in the present, or they are saved and then invested in an attempt to improve the effectiveness of future productive endeavors. It can’t be both. And indeed, it is not. What happens as a result of the credit expansion is a redirection of the resources destined for investment. The lower interest rates draw investment resources towards longer-term investments (such as R&D or expensive facilities/equipment that are only expected to pay off over a long period of time). At the same time, the increased monetary demand for consumer goods draws investment resources towards businesses/assets that are close to the final consumer (such as retail stores, restaurants, consumer tech companies, residential real estate, etc…).

So, this is where the resources are being redirected to, but where are they being redirected from? The all-important middle. The economy becomes schizophrenic, not knowing whether it is supposed to become more long-term oriented or more short-term oriented. Investment in the intermediate steps necessary to eventually turn a long-term investment into a series of goods ready for consumption is relatively neglected. A scramble for these now-underproduced intermediate goods results in a bidding up of their prices. Those undertaking to bring long-term investments to fruition begin to worry that they may not have enough money to carry them through. They attempt to borrow more, which the central banks often, initially, enable via yet more credit expansion, at an increased rate to accommodate all the extra demand.

Eventually though, the party must come to an end. In order to avert a hyperinflation (one of the biggest disasters that could possibly befall a capitalist economy), the central bank must eventually halt or slow down the credit expansion. When it does, the credit markets tighten up and interest rates rise, thereby dooming many of the long-term investments undertaken during the expansionary period to unprofitability or loss. The expansionary boom turns to contractionary bust, and a period of recession/depression ensues.

During this period, businesses go bankrupt, jobs are lost, and people’s houses are foreclosed on. This period lasts until the economy has had a chance to adjust to the reality of the situation, after years of being misled by the false signals of the boom. Or, until the central bank foolishly decides to embark on another credit expansion, which often seems to help the situation in the short-term, but really is just setting the economy up for an even bigger crash down the road (for instance, in the US, the credit expansion undertaken to ‘combat’ the bursting of the tech bubble in 2000 set the stage for the far worse 2008 financial crisis).

So, central bank inflation causes the business cycle, but is the business cycle positive or negative for power levels? It is almost certainly negative. The economic instability makes it more difficult for people to plan for the future, as the business cycle, which is entirely outside their control, could throw their economic lives into disarray at any moment. It could cause them to suddenly lose their jobs, or to suddenly lose a great deal of their savings. The latter is particularly cruel, because it is inflation that pushes people away from cash and safer investments and towards riskier investments, which, in a significant inflation, are the only kind that allow their savings to grow in terms of purchasing power. Then, the inflation-caused business cycle proceeds to put these riskier investments in grave danger. It is almost as if the policy of central bank inflation were designed to make people feel more powerless over their economic futures. And as such, I must vociferously oppose it.

A better way
As we’ve shown, the current monetary/banking system is quite disempowering, and therefore is not to be recommended. A better alternative would meet the following criteria: (1) it would eliminate or severely limit inflation; (2) it would ensure that people’s deposits weren’t constantly threatened by bank runs and bank insolvency; and (3) the transition to it from the current system wouldn’t cause too much turmoil.

Luckily, there exists an alternate system, and a transition plan to get to it, that meets these criteria. It would work as follows:  first, the central bank whips up enough new cash to bring all of the banks’ reserve fractions on their customers’ checking and savings accounts to 100%. The banks are given this money in exchange for title to the loans that they had made with the checking and savings account funds that they had not held in reserve. The banks must henceforth remain on 100% reserve for these accounts.

Second, the government transfers all liability for its debt to the central bank, to be paid off as much as possible with the portfolio of loans just acquired from the private banks.

Finally, the government just needs to prohibit the central bank from creating any more new money. The result of all this will be a stable currency, rock-solid banks, an end to credit expansion and its corresponding business cycle, and a debt-free government.
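To make the mechanics concrete, here is a toy balance-sheet sketch of the first two steps. The `Bank` class and all figures are illustrative assumptions, not a claim about any real bank’s books.

```python
# Toy balance-sheet model of the proposed transition, with made-up numbers.

from dataclasses import dataclass

@dataclass
class Bank:
    deposits: float  # customers' checking/savings balances (liability)
    reserves: float  # cash on hand (asset)
    loans: float     # outstanding loans (asset)

bank = Bank(deposits=1000.0, reserves=100.0, loans=900.0)  # 10% reserves

# Step 1: the central bank creates new cash equal to the reserve shortfall
# and receives title to the bank's loan portfolio in exchange.
shortfall = bank.deposits - bank.reserves
acquired_loans = bank.loans
bank.reserves += shortfall
bank.loans = 0.0
assert bank.reserves == bank.deposits  # the bank is now on 100% reserves

# Step 2: the acquired loan portfolio is applied against the government's
# debt, to whatever extent it covers it.
government_debt = 1200.0  # illustrative figure
remaining_debt = max(0.0, government_debt - acquired_loans)
print(remaining_debt)     # 300.0 -- any shortfall implies a creditor haircut
```

Note that in this sketch the acquired loans equal the reserve shortfall by construction (both are 90% of deposits), and that if they fall short of the government’s total debt, its creditors take a haircut, a point we return to below.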

Objection: credit shortage
One objection which could be made to this system is that, by forcing banks to be on 100% reserves for their checking/savings accounts, the amount of credit available to the market will be significantly curtailed. Whereas before, banks could lend out 90% of the money in these accounts, now they must hold all of it as unproductive cash. This could cause problems for businesses and individuals who rely on bank loans.

This objection assumes that withdrawal-on-demand accounts (checking/savings) will remain as popular in a 100% reserve system as they are in the current fractional reserve system. I believe this assumption to be inaccurate. On 100% reserves, banks would have to start charging customers for the service of holding their money (rather than recouping their costs by loaning out 90% of it at interest). This will likely prompt bank customers to look for alternative places to put their money. Low-risk, but interest-bearing, financial instruments such as Certificates of Deposit (CDs) would likely become more popular. Money in a CD cannot be withdrawn on demand; depositors must wait until the maturity date of the investment before being able to withdraw their money[2]. With these, it is clear that the depositor doesn’t have ownership of the money until the investment has matured. The inflationary duplication inherent in withdrawal-on-demand fractional reserve accounts is avoided with these instruments, while still allowing the bank to loan out the money on the depositors’ behalf at interest.

So basically, all that will change is that the reality of the situation will become more apparent to bank customers. In the 100% reserve system, bank customers will explicitly decide how much of their money they wish the banks to hold and keep available on demand, and how much of it they wish the banks to lend out for them, with the knowledge that the money will not be available to them while it is tied up in loans. It will give individuals much more power and control over their economic affairs than either the old fractional reserve system, when bank runs were commonplace, or the modern central banking system, where constant, power-sapping inflation is the norm. This, I think, is well worth giving up a little bit of credit availability for.

Objection: no more government debt  
Another objection that will likely be raised has to do with the way the government’s debt is handled in my plan: there is no guarantee that the assets acquired from the commercial banks, in exchange for bringing them up to 100% reserves, will be enough to pay off the entire national debt. Certainly not in the United States, where there are (as of January 2017) about $9 trillion in savings accounts[3] and $1.9 trillion in checking accounts[4]. Take 90% of this as the approximate value of the assets to be acquired, and we reach about $9.8 trillion, less than half the total US national debt of $19.9 trillion[5].
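For the skeptical, the back-of-the-envelope arithmetic can be checked directly (figures in trillions of dollars, taken from the sources cited above):

```python
# Quick check of the figures cited above (USD trillions, January 2017).
savings = 9.0         # savings accounts [3]
checking = 1.9        # checking accounts [4]
national_debt = 19.9  # total US national debt [5]

# Roughly 90% of these deposits were loaned out, so that is the approximate
# value of the loan assets to be acquired in the transition.
acquired_assets = 0.9 * (savings + checking)
print(round(acquired_assets, 2))            # 9.81 -- about $9.8 trillion
print(acquired_assets < national_debt / 2)  # True -- less than half the debt
```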

So, government debt holders will most likely have to take a significant haircut. They will not be very happy about this, and the government’s credit rating will take a severe hit as a result. Perhaps the government will never be able to borrow at non-prohibitive interest rates ever again. But to this, I say: good! There is no good reason why governments should be perpetually in debt.  It costs a fortune in interest payments every year, sucks up savings that could have been more productively invested in the private economy, and encourages the ballooning of government by allowing politicians (in the short-term anyway) to engage in politically popular spending without politically unpopular tax increases.

Ah, but what if the government actually needs to make a real investment (such as an expensive piece of infrastructure)? Then wouldn’t it make sense for them to finance this with debt, like private companies do? It would, but there’s actually no need for governments to be building infrastructure: private companies would do a better job of it anyway. This is something we will explore in a future post.




[1] In the wake of the 2008 crisis, the US central bank broke from precedent and bought over $1 trillion worth of ‘troubled’ mortgage-backed securities held by a number of major financial institutions. Ordinarily though, they stick to government bonds.
[2] http://www.investopedia.com/terms/c/certificateofdeposit.asp
[3] https://fred.stlouisfed.org/series/WSAVNS
[4] https://fred.stlouisfed.org/series/TCDSL
[5] http://www.usdebtclock.org/