Can using a DSS have unintended negative consequences?
by Daniel J. Power
Editor, DSSResources.COM
YES. Decision support is not a panacea. Researchers and managers often focus too much on the anticipated positive consequences of using a specific Decision Support System (DSS). Using a computerized system to support decision making can have both anticipated and unanticipated negative consequences. This essay is a starting point for assessing unintended negative consequences and leaves many opportunities for readers to extend the analysis. The following two statements frame issues related to negative outcomes caused by using a computerized decision support capability.
Statement 1: “The best DSS cannot overcome the limitations of a faulty (poor) decision maker. We should not force decision makers to use a DSS, we cannot insure a decision maker will pay attention to DSS responses, or consider DSS responses as part of the decision making process.”
Computerized decision support cannot completely overcome the cognitive and attitudinal limitations of the person using it. We are all “faulty decision makers”. Each of us makes some bad, wrong or incorrect decisions even when supported by a DSS. Some of us are, however, "better" at making "good" decisions than others. A task-specific decision support system is intended to increase the quality of decisions and the effectiveness of a person making a specific decision or set of related decisions. A well-designed decision support capability has the potential to assist those decision makers who can and do use it. Decision support can improve a decision maker's “batting average”.
But ... bad outcomes may result from using a decision support capability. These include excessive confidence in the correctness of its recommendations, and concern about losing one's job to a less skilled decision maker or to an "expert" computer-based system, which can lead to sabotaging the system. Also, some people may perceive a decision maker who uses a decision support capability as less capable (Arkes, 2007). Liability issues can arise from not following a decision support recommendation, and problems can arise from following the prescribed course of action when it was incorrect. Concerns should also exist about obsolete systems and about lack of confidence in new systems.
In some situations a decision maker learns while using a DSS. The decision maker learns about decision criteria, appropriate facts to consider, and/or process issues that impact a specific decision situation. Computerized DSS encourage and promote “rationality” in decision making. Decision support may even encourage hyper-rationality and overreliance on quantitative data and models. The goals of a DSS are not, however, always completely achieved.
So what is the correct conclusion about DSS and rationality? Individuals who don't recognize the limitations of decision support and of decision makers will be surprised when an expensive decision support capability doesn't improve decision making for some or even many users. Although it is an unintended negative consequence, some decision makers may actually be hindered by using a DSS, and a poorly designed system can negatively impact even the “best” decision maker. Also, coercing people to use a computerized decision support capability can lead to resentment and counterproductive behaviors.
Statement 2: “There is a decision support danger: the danger of overdependence on a decision support system (DSS), of blindly following the recommendations of the DSS, or of interacting with it in an entirely mechanical and unimaginative fashion.”
It seems plausible and even reasonable that these “dangers” can and do exist. I am not, however, aware of empirical research that confirms them. We do not know how likely “overdependence” is, or whether some users will “blindly follow” or mechanically interact with some or all types of DSS. I assume “overdependence” means a person cannot make a specific decision without using a DSS. For many DSS, the intent is that users will become “dependent” on using them. If decisions are improved, then the goal of training, reinforcement and rewards should be to promote regular and habitual use of the DSS. Managers and DSS users who recognize the “dangers” are sensitized to them, which makes the “dangers” less likely to occur or to cause harm. DSS are intended to support, not replace, decision makers, so users need to consciously interact with a DSS to use it effectively. The expectation needs to be created that the human decision maker is the ultimate authority and that the user can “overrule” or choose to ignore the analyses and recommendations of the DSS. The “dangers” raised in this question warrant our attention and should certainly be studied, but they do not justify avoiding the use of a DSS or rejecting a proposed DSS project.
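The design principle that the human decision maker remains the ultimate authority can be made concrete in software. The following is a purely illustrative sketch, not drawn from any actual DSS: the rule, the thresholds, and all names are hypothetical, and the point is only the pattern of recording the system's recommendation separately from the human's final decision so overrides remain visible.

```python
# Illustrative human-in-the-loop pattern: the DSS recommends, but an
# explicit human override always takes precedence. All names, rules and
# thresholds here are hypothetical, for illustration only.

def dss_recommend(loan_amount, credit_score):
    """Toy rule-based recommendation for a hypothetical loan decision."""
    if credit_score >= 650 and loan_amount <= 50_000:
        return "approve"
    return "deny"

def decide(loan_amount, credit_score, override=None):
    """Return both the DSS recommendation and the final human decision.

    The human decision is final: an explicit override beats the DSS, and
    the fact that an override occurred is recorded for later review.
    """
    recommendation = dss_recommend(loan_amount, credit_score)
    decision = override if override is not None else recommendation
    return {
        "recommendation": recommendation,  # what the DSS advised
        "decision": decision,              # what the human decided
        "overruled": decision != recommendation,
    }

print(decide(30_000, 700))                   # human accepts the DSS advice
print(decide(30_000, 700, override="deny"))  # human overrules the DSS
```

Keeping the recommendation and the decision as separate fields makes “blind following” measurable: an audit of the logged records shows how often users ever overrule the system, which is one way to study the overdependence question empirically.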
Additional potential negative consequences include using a decision support capability to rationalize or justify a poor decision. Another unintended consequence might be the rising cost of using a decision support capability because people feel the need to use it for otherwise trivial decisions. Also, decision makers may overinvest in computerized capabilities on the assumption that doing so reduces liability, to improve decisions that don't benefit much from computerized support, or simply to "cover your ass" ("CYA"). A problem with unanticipated negative consequences is that the outcome is not thought about during decision making. Examining the negative experiences of others can help anticipate the unanticipated consequences.
In complex decision situations, making effective decisions requires relevant, timely, summarized information, based upon analysis, that is available when needed. Deploying and using the systems that provide that information can result in unanticipated negative consequences.
In conclusion, remember the law of unintended consequences: any intervention in a complex situation can, and probably will, lead to results that were not intended.
How would you assess these statements and in particular how would you justify your answers? Can you provide any evidence to support your conclusions?
References
Arkes, Hal R., Victoria A. Shaffer, and Mitchell A. Medow, "Patients Derogate Physicians Who Use a Computer-Assisted Diagnostic Aid," Medical Decision Making, 2007, Vol. 27, pp. 189-202.
Huber, G.P., "The Nature of Organizational Decision Making and the Design of Decision Support Systems," MIS Quarterly, 5:2 (1981), 1-10.
Kay, M. F., "The Law Of Unintended Consequences: And Your Money," Psychology Today, Posted October 21, 2015 at URL https://www.psychologytoday.com/blog/financial-life-focus/201510/the-law-unintended-consequences
Lenz, R.T. and M. Lyles, "Crippling Effects of 'Hyper-Rational' Planning," Faculty Working paper No. 956, College of Commerce and Business Administration, University of Illinois at Urbana-Champaign, April 1983, at URL https://archive.org/details/cripplingeffects956lenz
Power, D., "Do DSS builders assume their targeted users are rational thinkers?" DSS News, Vol. 5, No. 21, October 10, 2004; modified February 26, 2011, at URL http://dssresources.com/faq/index.php?action=artikel&id=34.
The above response is based on Power, D., "Can using a DSS have unintended negative consequences?" DSS News, Vol. 4, No. 8, April 13, 2003; updated April 23, 2016.
This Ask Dan! column was originally written in response to an email from Nadeeka Silva in March of 2003. She asked for help in answering some provocative questions about the unintended negative consequences of DSS. I assumed Nadeeka was taking a DSS class, so I broadened the assignment questions and responded publicly in this Ask Dan! column.
************************************************
Abstract for Arkes, Hal R., Victoria A. Shaffer, Mitchell A. Medow, "Patients Derogate Physicians Who Use a Computer-Assisted Diagnostic Aid". Medical Decision Making. 2007 Mar-Apr;27(2):189-202. Objective. To ascertain whether a physician who uses a computer-assisted diagnostic support system (DSS) would be rated less capable than a physician who does not. Method. Students assumed the role of a patient with a possible ankle fracture (experiment 1) or a possible deep vein thrombosis (experiment 2). They read a scenario that described an interaction with a physician who used no DSS, one who used an unspecified DSS, or one who used a DSS developed at a prestigious medical center. Participants were then asked to rate the interaction on 5 criteria, the most important of which was the diagnostic ability of the physician. In experiment 3, 74 patients in the waiting room of a clinic were randomly assigned to the same 3 types of groups as used in experiment 1. In experiment 4, 131 3rd- and 4th-year medical students read a scenario of a physician-patient interaction and were randomly assigned to 1 of 4 groups: the physician used no DSS, heeded the recommendation of a DSS, defied a recommendation of a DSS by treating in a less aggressive manner, or defied a recommendation of a DSS by treating in a more aggressive manner. Results. The participants always deemed the physician who used no decision aid to have the highest diagnostic ability. Conclusion. Patients may surmise that a physician who uses a DSS is not as capable as a physician who makes the diagnosis with no assistance from a DSS. Key words: decision support techniques; diagnosis computer assisted; patient satisfaction.
*************************************************
Abstract for Jordan Lowe, D., Reckers, P. M. J., & Whitecotton, S. M. (2002). This study provides evidence on how auditors' use of decision aids affects jurors' evaluation of auditor legal liability, based on an experiment in which actual jurors responded to a hypothetical audit lawsuit. The results suggest that decision aids can have positive, negative, or neutral effects on auditors' legal liability, depending on how auditors use the decision aid and the reliability of the decision aid. For high-reliability aids, jurors attributed more responsibility for an audit failure to the auditor when the auditor overrode the recommendation of a decision aid than when the auditor did not use the decision aid. However, jurors attributed lower responsibility to an auditor who relied on the recommendation of a highly reliable decision aid, even though the aid turned out to be incorrect. In contrast to the high-reliability conditions, auditors' use of the decision aid had virtually no impact on jurors' liability judgments when the reliability of the decision aid was low.
Last update: 2016-05-12 08:07