By John Hnatio
The whole idea of unintended consequences dates back at least to John Locke. In his letter to Sir John Somers, a Member of Parliament, Locke discussed the unintended economic consequences of interest rate regulation.[i] During the Scottish Enlightenment, Adam Smith likewise explored the unintended results of social actions.[ii] In the twentieth century, sociologist Robert Merton picked up the banner in his famous 1936 paper, “The Unanticipated Consequences of Purposive Social Action.”[iii]
Building upon Max Weber’s earlier idea of rational social action, Merton espoused a number of rules for better understanding how unintended consequences can arise from the purposeful actions we take to improve society.[iv] Merton concluded that, although there were better ways to understand how unintended consequences arise, there was no consistent way to determine beforehand what was going to happen, because human social systems are simply too complex to predict specific outcomes with any certainty.[v]
Today, the law of unintended consequences has become a misunderstood and often misused term meaning that we simply cannot understand all of the effects of our own actions until we come face to face with the results. Many of us, when confronted with highly complex social issues like corruption in our government, world hunger, poverty and even the rise of violence in our schools, too often simply shrug our shoulders and say, “It’s too complicated. It’s really out of our control.”
But is it? Since 1936, there has been substantial progress in understanding the behavior of what are called complex adaptive systems, or CAS for short. A CAS is a system that changes its behavior in response to changes in the environment in which it exists. But what happens when the environment itself is constantly changing? The system continuously adjusts. As systems adjust, the anticipated results of our initial actions, social or otherwise, change with them. Thus arises what we call the “Merton conundrum”: if you can’t understand the outcome of your actions, then why act at all?
Well, the reality is that we can and do understand, in a gross sense, the results of our actions even when systems are highly complex. As Thomas Sowell reminds us in his studies of the qualitative social process aspects of complex political struggles:
The ever-changing kaleidoscope of raw reality would defeat the human mind by its complexity, except for the mind’s ability to abstract, to pick out parts and think of them as the whole. This is nowhere more necessary than in social visions and social theory, dealing with the complex and often subconscious interactions of millions of human beings.[vi]
Because the entire universe we live in is constantly evolving, change is the least common denominator of everything. It is part of the fabric of all existence. But human beings do not like uncertainty. We constantly work to reduce the uncertainty of our own existence. But, in spite of our best efforts, things still go awry in the worst ways.
While the great scientific minds of our society try to come to grips with the uncertainty of our existence, most of us common folk simply live with the fact that some things are completely out of our control. Many of us visit our churches, synagogues or mosques believing that only God has influenced, and will influence, our past, present and future existence.
But whether you are just common folk, have a great scientific mind or believe in a supreme being, it does not really matter. As Sowell observes, human beings, for whatever reason, have astounding brains.
While uncertainty may be the fabric of the universe, research shows that we are finding better and better ways to deal with the “Merton conundrum.” Great scientific minds like Einstein, Schrödinger, Gell-Mann, Prigogine, Gregoire, Resnick and others have paved the way to move beyond Merton’s uncomfortable conclusion that no matter what we do we must first become the victims of the unintended consequences of our own actions before we can manage them. The “Merton conundrum” has done much to enculturate our “reaction” versus “prevention” perception of the world by leading us to accept the false premise that bad things happening are the inevitable result of uncertainty. Science now tells us that this is not true.
Instead of accepting “reaction” as the best way to deal with uncertainty a new breed of thinkers is emerging. Their idea is not to try and eliminate uncertainty but rather to accept it and focus their attention on doing a better job of preventing as many adverse outcomes as possible before they happen. Rather than pursuing a strategy that relies solely on reaction, their mantra is to anticipate, prevent and when necessary respond.
Central to the new idea is something called projectioneering–a combination of science and the power of computers. The premise is that while we may never be able to predict with absolute certainty the exact outcomes of our actions as we intervene to try and manage complex systems, we can do a much better job of projecting the things that may happen, put the safeguards in place to prevent them and if something does happen be in a much better position to swiftly mitigate adverse outcomes.[vii]
After the Great Fire of London in 1666, the idea of insurance emerged.[viii] All of us know the later story of Lloyd’s of London providing insurance to vessels traveling across the British Empire engaged in trade. Insurance schemes of the period relied on what is called the law of large numbers, which, put in its simplest terms, is a form of statistical estimation holding that the best way to predict the future is to rely on a large number of past events.[ix] For example, while a horse betting establishment may lose money to a lucky gambler in a single race, its earnings will tend towards a predictable percentage over a large number of races. Any winning streak by a gambler will eventually be overcome by the low probability of maintaining that streak over a long period of time. But just as massive hurricanes have produced damages that outstripped insurance reserves, relying solely on past experience when dealing with high impact events is not enough. We must also be able to deal with black swan events that come upon us as a complete surprise and have a major impact on our society. We no longer have the luxury of rationalizing away the possibility, and increasing frequency, of similar high impact events occurring in the future.
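The convergence that the law of large numbers describes is easy to demonstrate with a short simulation. The sketch below is illustrative only: it assumes a hypothetical betting book with a 5% expected “house edge” per race and noisy single-race outcomes.

```python
import random

def average_house_take(n_races, house_edge=0.05, seed=42):
    """Average take per unit staked for a hypothetical betting book.

    Each race's result is noisy (a lucky gambler can win big), but the
    expected take per race is `house_edge`.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_races):
        total += rng.gauss(mu=house_edge, sigma=0.5)
    return total / n_races

# A handful of races is unpredictable; over a large number of races the
# average take settles very close to the 5% edge.
few = average_house_take(10)
many = average_house_take(100_000)
```

Over ten races the average can land almost anywhere; over a hundred thousand it is pinned near 5%, which is exactly why insurers and bookmakers can price risk from large pools of past events.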
As the overwhelming consequences of singular high impact events whose damages stray far from the statistical mean, like Virginia Tech, Sandy Hook, the Aurora theater and Orlando, teach us, the past alone is not a good indicator of either the frequency or the severity of future events. The time for a new way of thinking about black swan events is upon us.[x]
Unlike the law of large numbers, the science of projection does not rely on the past as the best indicator of what will happen in the future. The science of projection is not to be confused with prediction. While the word prediction suggests that you will be able to determine exactly what will happen in the future (which is, by the way, scientifically impossible), projection does not do this. Instead, the science of projection embraces the uncertainty of our existence and uses it to mathematically determine the probabilities of certain events happening in the future, based on a combination of past, current and simulated future events that reflect potential changes in the environment.
The first step in the process is to gather data on categories of events that you are concerned about, say, the topic of school violence. A database is created for past and current incidents, while bots and crawlers are used to gather, in near real time, all newly reported incidents of school violence. As you might imagine, these databases are very large.
As the data is gathered, an automated process is used to structure the information on school violence based on date of occurrence, school, motivation of the student, type of weapon used, the security precautions that were in place at the time, quality and time of law enforcement response, the specific outcomes of the incident and a host of other factors.
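As a rough illustration of what one such structured record might look like, here is a minimal Python sketch. The field names are assumptions made for illustration, not the actual schema used by any real system:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IncidentRecord:
    """One structured school violence incident (field names are illustrative)."""
    occurred_on: date
    school_type: str              # e.g. "elementary", "middle", "high", "post-secondary"
    weapon: str
    motivation: str
    safeguards: list = field(default_factory=list)  # e.g. ["metal_detectors", "cctv"]
    response_minutes: float = 0.0                   # law enforcement response time
    outcome: str = ""

# A single hypothetical incident, structured for later analysis.
record = IncidentRecord(
    occurred_on=date(2016, 3, 1),
    school_type="high",
    weapon="firearm",
    motivation="unknown",
    safeguards=["cctv", "sro"],
    response_minutes=7.5,
    outcome="contained",
)
```

Once every incident is reduced to the same set of fields, incidents can be compared, grouped and counted automatically, which is what the clustering step below depends on.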
The data is then subjected to scientific examination to determine the mathematical probability of different school violence events with different characteristics occurring at the same or different types of schools, e.g., elementary, middle, high and post-secondary schools. This process is known scientifically as clustering. The resulting probability-of-occurrence statistic is called a threat quotient, or “TQ” for short, and denotes the numerical probability that a given type of event will occur at a given type of school given the safeguards in place at the school, e.g., metal detectors, CCTV surveillance, fast or slow police response times, the presence of school safety resource officers at the school and many other factors.
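A minimal sketch of the idea, using a toy dataset and treating the TQ as a simple relative frequency within a cluster of similar schools. The real method is far more sophisticated; this only illustrates the grouping-and-counting intuition:

```python
def threat_quotient(incidents, school_type, safeguard):
    """Share of incidents at `school_type` schools that occurred even though
    `safeguard` was in place: a crude frequency-based stand-in for a TQ."""
    cluster = [i for i in incidents if i["school_type"] == school_type]
    if not cluster:
        return None  # no data for this cluster
    hits = sum(1 for i in cluster if safeguard in i["safeguards"])
    return hits / len(cluster)

# Toy data: four incidents across two school types.
incidents = [
    {"school_type": "high", "safeguards": ["cctv"]},
    {"school_type": "high", "safeguards": ["cctv", "metal_detectors"]},
    {"school_type": "high", "safeguards": []},
    {"school_type": "middle", "safeguards": ["cctv"]},
]

tq = threat_quotient(incidents, "high", "cctv")  # 2 of the 3 high school incidents
```

In this toy example, two of the three high school incidents happened despite CCTV being in place, giving a frequency of about 0.67 for that cluster.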
Figure 1: Projectioneering Process
Once the data is gathered, structured and clustered, individual school violence incidents are subjected to algorithmic analysis based on the following five factors:
- The safeguards that were in place at the particular school to deter the eruption of violence;
- The procedures and equipment that were in place to detect the perpetrator of the violence before the attack occurred;
- The ability to quickly and effectively communicate to first responders that the perpetrator was detected;
- The timeliness and quality of the response; and
- The processes and procedures in place to mitigate the consequences of the incident.
Based on this information, the baseline TQ score is mathematically refined to reflect the five factors above, and a school-specific TQ score for the particular school is statistically derived. If TQ scores are dangerously high, recommendations for countermeasures to reduce risk are made. These recommendations are based on the safeguards that have worked most successfully in deterring, detecting, communicating, responding to and mitigating the consequences of similar incidents at other schools. The recommendations are validated using computer simulations of the events.
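One simple way to picture the refinement step is as a discount on the baseline TQ, with one term per factor. The multiplicative form and the weight below are assumptions made purely for illustration; they are not the patented algorithm:

```python
FACTORS = ["deter", "detect", "communicate", "respond", "mitigate"]

def refine_tq(baseline_tq, factor_scores, weight=0.1):
    """Discount a baseline TQ by each factor's strength (0 = weak, 1 = strong).

    Strong safeguards pull the projected risk down; weak ones leave it
    unchanged. The functional form and the 0.1 weight are assumptions
    made for illustration only.
    """
    adjusted = baseline_tq
    for f in FACTORS:
        adjusted *= 1.0 - weight * factor_scores.get(f, 0.0)
    return adjusted

# A hypothetical school: strong deterrence and communication, middling
# detection and response, no mitigation procedures.
school_tq = refine_tq(
    baseline_tq=0.20,
    factor_scores={"deter": 1.0, "detect": 0.5, "communicate": 1.0,
                   "respond": 0.5, "mitigate": 0.0},
)
```

In this toy calculation the school's strong safeguards pull its score below the 0.20 baseline for its cluster; a school with weaker safeguards would keep a score at or near the baseline.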
As the population of past, current and simulated events of school violence in the database increases, something remarkable happens: the statistical fidelity of TQ scores becomes more and more precise. This revolutionary characteristic results in learning knowledgebases that become smarter and smarter as more incidents of school violence are added to the database.
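The improving fidelity follows from standard statistics: if a TQ is treated as a sample proportion, the standard error of the estimate shrinks roughly as one over the square root of the number of incidents behind it. A minimal sketch:

```python
import math

def tq_standard_error(tq, n_incidents):
    """Standard error of a TQ treated as a sample proportion: the
    uncertainty around the estimate shrinks roughly as 1 / sqrt(n)."""
    return math.sqrt(tq * (1.0 - tq) / n_incidents)

# The same 0.15 estimate carries far less uncertainty with more data behind it.
wide = tq_standard_error(0.15, 50)      # about 0.050
narrow = tq_standard_error(0.15, 5000)  # about 0.005
```

A hundredfold increase in incidents tightens the uncertainty band tenfold, which is why the knowledgebase's scores sharpen as the database grows.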
What is so amazing about the science of projection is that it can be applied to virtually any complex adaptive system involving school safety, from shootings and outbreaks of communicable disease to food safety, bullying and all of the other concerns that can impact our children and their teachers at school.
If you are interested in learning more about the practical ways advanced science is leading the way to safer schools, please contact us at the National School Safety Collaboratory. We look forward to working with you to make our schools safer.
Dr. John Hnatio is the Executive Director of the Institute for Complexity Management and the inventor of the science of “Projectioneering.” He received a patent for the idea while conducting his doctoral research at the Graduate School for Higher Education and Development at The George Washington University.[xi] The State of Maryland, through the Technology Development Corporation (TEDCO), subsequently helped to fund the development of a software system for school safety based on the new science of Projectioneering. The software, called “School and CampusTQ”, is being made available at little to no cost to schools across the country that are interested in promoting enhanced school safety based on proven science. If you have any questions, please contact Mr. Bruce Becker, President of the National School Safety Collaboratory, at: [Insert]
[i] Locke, John. Some Considerations of the Consequences of the Lowering of Interest and the Raising the Value of Money. Available at: https://www.marxists.org/reference/subject/economics/locke/part1.htm
[ii] Smith, Adam. The Theory of Moral Sentiments, p. 93.
[iii] Merton, Robert K. “The Unanticipated Consequences of Purposive Social Action” (PDF). American Sociological Review 1 (6): 895. doi:10.2307/2084615. Retrieved 2016-07-05.
[iv] Weber, Max. Economy and Society. University of California Press.
[v] Merton, Robert K. “The Unanticipated Consequences of Purposive Social Action” (PDF). American Sociological Review 1 (6): 904. doi:10.2307/2084615. Retrieved 2016-07-05.
[vi] Sowell, T. (1987). A conflict of visions: Ideological origins of political struggles. New York, NY: William Morrow.
[vii] Hnatio, J. (1986) The Complexity Systems Management Method: A Next Generation Decision Support Tool for the Management of Complex Challenges at Institutions of Higher Education, The George Washington University, School of Higher Education and Development.
[viii] International Risk Management Institute, Inc. The Great Fire of London. Retrieved July 5, 2016, from https://www.irmi.com/articles/expert-commentary/the-great-fire-of-london
[ix] Poisson, S.D. Probabilité des jugements en matière criminelle et en matière civile, précédées des règles générales du calcul des probabilités. Paris, France: Bachelier, 1837.
[x] Taleb, Nassim Nicholas (2010). The Black Swan: The Impact of the Highly Improbable (2nd ed.). London: Penguin. ISBN 978-0-14103459-1.
[xi] Hnatio, J. The complexity systems management method. US 8103601 B2 published on January 24, 2012. United States Patent and Trademarks Office, Washington, D.C.