When professionals are asked about the value of information technology to their work, they typically give two kinds of answers. Some see the advent of sophisticated information technology as a great boon to their professional lives. For them, the only question is how soon the technology can be deployed to open up new horizons for professional activity and end dull and tedious work. Others sense more acutely the serious dislocation of familiar work patterns and interactions that the arrival of personal computers, networked information systems, and global information access has caused. Both responses are justified. The thick instruction manuals that come with the hardware and software of information technology are mixed blessings; we ignore them at our peril, but if we read them and adopt new information practices, the character of our work may change. The question I will consider in this column is whether and under what circumstances that change poses a threat to our pursuit of the good life. I will focus this question on a particular kind of information technology, expert systems. One of the most prominent philosophers of our day, Daniel Dennett, has argued that this technology can indeed pose a significant moral threat to us. My view is that the danger is overstated, but the problem of integrating information technology into our lives (especially our work lives) does pose moral challenges.
In two previous columns I argued that there is, in addition to a moral right to information, an "information virtue", and that managers and other workers with special access to information have a moral responsibility to realize this virtue in their work. The information virtuous do not just use information to promote excellence and opportunities for growth among their co-workers, but they also think critically about how information handling is tied to patterns of institutional power and whether such power relationships are encouraging moral virtues such as worker responsibility, autonomy, and leadership. This entire line of thinking assumes that information technology does not, in general, pose a moral threat to our personal and professional lives. Daniel Dennett thinks that it does, so it is especially important that we look at his arguments carefully.
Since Dennett's argument is that expert systems in particular threaten our ability to pursue the good life, we should first describe some of the features of expert systems before presenting his argument. Most expert systems are in fact large databases which use rules of various kinds to answer questions people are likely to have about the information they contain. For instance, I might assemble data about a particular economic market and rules which I think govern that market to create an expert system which models the market in a way that allows the system to predict future events. In a more sophisticated expert system, one which takes advantage of recent advances in "machine learning", the distinction between data and rules is somewhat looser. For instance, suppose I want to predict the likelihood that an entering freshman law student will complete law school. Instead of giving the machine the rules which I think determine success in law school, I might just give it a lot of information relevant to predicting success and have it derive rules by weighing competing variables on the basis of actual data on actual law students. In this more sophisticated model it appears that the machine has "learned" the very rules which it will then use to answer my question about a particular incoming student. Some people believe that these more sophisticated systems, since they appear to develop the rules for answering our questions on their own, usurp a more significant domain of human judgement than less sophisticated systems.(1)
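The contrast between the two kinds of systems can be made concrete with a sketch. The following is a minimal illustration, not a description of any actual system: a tiny logistic-regression learner, in Python, that "derives" its own rule for predicting law-school completion by weighing two variables against invented data on invented students. All names, figures, and weights are hypothetical.

```python
import math

# Hypothetical training data: each record pairs (LSAT percentile, undergraduate
# GPA) with whether the student completed law school (1) or not (0).
# Every figure here is invented for illustration.
students = [
    ((0.90, 3.8), 1), ((0.85, 3.5), 1), ((0.75, 3.6), 1),
    ((0.40, 2.4), 0), ((0.55, 2.8), 0), ((0.30, 2.2), 0),
    ((0.80, 3.1), 1), ((0.45, 2.6), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, rate=0.5, epochs=2000):
    """Derive weights from the data itself: the machine 'learns' how
    heavily to weigh each competing variable, rather than being handed
    a ready-made rule by the programmer."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in data:
            p = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = p - y          # gradient of the log-loss for this record
            w[0] -= rate * err * x1
            w[1] -= rate * err * x2
            b -= rate * err
    return w, b

def predict(w, b, lsat, gpa):
    """Estimated probability that an incoming student completes law school."""
    return sigmoid(w[0] * lsat + w[1] * gpa + b)

w, b = train(students)
```

After training, `predict(w, b, 0.90, 3.8)` returns a high probability and `predict(w, b, 0.30, 2.2)` a low one. Note that a human still chose the variables, the data, and the learning procedure: the "learned" rule is only the weighting of factors we supplied, which is the point of note (1) below.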
Dennett wants us "to consider the possibility that information technology, which has been a great boon in the past, is today poised to ruin our lives -- unless we are able to think up some fairly radical departures from the traditions that so far sustained us."(2) He is particularly worried that two features of the good life, a moral life and an interesting life, may not be jointly realized because of information technology such as expert systems. The general reason is that expert systems take away elements of human judgement which are both the subject of specific human virtues and part of what makes life interesting.
To make these general claims concrete, Dennett considers the case of the "country doctor" who must decide whether to use an expert system to make diagnoses. Suppose this system had the most up-to-date information on medical science as well as rules for weighing particular information about particular patients. If such a system could produce more reliable diagnoses, the country doctor would have no choice but to use it. But Dennett believes that the doctor's reliance on the system represents a loss. Not only is life now less interesting for the doctor, but the art of medical judgement will be "displaced by the mere capacity to follow directions" (138). Dennett is claiming that the moral loss results not so much from something immoral being done to the doctor as from the doctor's losing some morally significant features of his or her work.
To make his argument general, Dennett draws from the example of the country doctor a general claim that modern technology is robbing us of some virtues that depend upon ignorance. "The obligation to know -- a burden of guilt that weighs heavily on every academic, but that in milder forms is ubiquitous today -- creates the situation where, if we read everything we 'ought' to read, we would have time to do nothing else." (145). By a "virtue that depends upon ignorance," Dennett probably has in mind those situations in which we give moral praise to someone for a judgement which required them to bridge some gap between knowledge and action. People rarely earn our admiration for giving answers about which absolute certainty is easily attained. But we do often praise and admire people for their ability to use intuition and judgement to reach good decisions in the face of great uncertainty. Sometimes we say, in retrospect, that they were lucky, but other times, especially when we see a pattern of such judgements, we credit the person with a kind of wisdom or good judgement. The expert system, since it may have better "judgement" than its rival human doctors, would appear to rob them of the admiration they might formerly have received. Dennett is quite candid about identifying the basis of that admiration in terms of the ability to operate successfully in relative ignorance.
While I am very unsympathetic to Dennett's argument, I must admit that it is ingenious and correctly shows the ethical implications of a given economy of information. Since many people think of information ethics in terms of isolated ethical problems (confidentiality, copyright, proprietary information, and the like) rather than general and pervasive features of our experience, Dennett's approach is refreshing. But I think his analysis has two serious weaknesses: he misunderstands the way expert systems operate and he uses a narrow analysis of the impact of information technology on the workplace.
Given Dennett's expertise in following developments in artificial intelligence and philosophy of mind, it is surprising that he would actually believe that an expert system would eliminate the role of professional judgement in very significant ways. Sometimes information technology involves trade-offs between skills, but we should not assume that when one area of judgement is eliminated it is not replaced by another. Library automation, for example, places a premium on good judgement in the pursuit of information, but de-emphasizes some other virtues more appropriate to older technologies. Similarly, the expert system in medicine, since it will open up the possibility of searching more literature, will require more practicing doctors to be good researchers. But this is a trade-off between moral goods -- the virtue of good judgement in the face of ignorance and the virtue of good judgement in the face of an information-rich environment.
Like many science fiction writers, Dennett conceives of expert systems as "closed" systems in which the doctor interacts with front-end software which exhaustively describes the search possibilities for the particular case or illness. But there is good reason to believe that this situation will only emerge when (and if) we achieve certainty about a large percentage of the causal relations at work in the human body and environment. Until then, any expert system will be limited to providing doctors with a variety of possible diagnoses. The doctor's virtue will still consist in judging well in light of what is known and not known about the patient's illness at any given time.(3)
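The "open" system described above can be sketched in a few lines. This is a deliberately toy illustration with invented rule names and evidence weights, not a real diagnostic rule base: the program returns a ranked list of candidates rather than a verdict, so the final judgement stays with the doctor.

```python
# Hypothetical, greatly simplified rule base: each rule maps a set of
# observed symptoms to a candidate diagnosis and a rough evidence weight.
# All diagnoses and weights here are invented for illustration.
RULES = [
    ({"fever", "cough"}, "influenza", 0.6),
    ({"fever", "cough", "chest pain"}, "pneumonia", 0.8),
    ({"cough"}, "common cold", 0.4),
    ({"fatigue", "fever"}, "mononucleosis", 0.5),
]

def candidate_diagnoses(symptoms):
    """Return every diagnosis whose rule is satisfied by the observed
    symptoms, ranked by evidence weight. The system deliberately
    returns a list of possibilities, not a single verdict: choosing
    among the candidates remains an exercise of the doctor's judgement."""
    matches = [(name, weight) for required, name, weight in RULES
               if required <= symptoms]   # rule fires if its symptoms are all present
    return sorted(matches, key=lambda m: m[1], reverse=True)

ranked = candidate_diagnoses({"fever", "cough", "chest pain"})
```

For the symptom set above, `ranked` contains pneumonia, influenza, and the common cold in descending order of weight; nothing in the output settles which one is correct for this patient.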
The second problem with Dennett's argument is more interesting for the light it sheds on information ethics in the workplace and how the analysis of such problems should proceed. There is little dispute that information technology is radically reorganizing many professional work patterns and workplaces. Someone might argue that such technology is liberating because it frees professional time for satisfying other unmet responsibilities and for taking on new responsibilities. These new efficiencies could allow professionals (especially doctors) to meet more client and patient needs. When Dennett considers this objection, he likens his poor country doctor to a doorman working in a world with automatically opening doors. He agrees that the doorman might have more time now to deal more personally with more customers. Yet he thinks the net result is a loss: "The doorman has certainly been relieved of such menial labor, but also of responsibility, variety of challenge, and autonomy." (141)
In tallying the moral costs and benefits of new technology, Dennett presumes that professionals are morally entitled to insist on the relative stability of their professional job description. But why should we assume that? One of the most exciting (and threatening) aspects of information technology is that it has the power to force reappraisals and changes in the way we live and work. This is a background condition of technological change in general, but it is especially true of information technology. Perhaps the moral "virtues of ignorance" of the country doctor will be transformed into moral "virtues of access, education, and deliberation" which can be enjoyed by far more people living in an information-rich world in which expert systems do make medical knowledge more understandable to more people. Dennett is right to warn us that there are moral trade-offs to information technology. My point is that we cannot accurately assess those trade-offs if we assume arbitrarily that some things, like country doctors and doormen, ought to be permanent fixtures of the landscape. When the analysis is broadened we are in a much better position to understand just which future scenarios we should work toward.
The ethical analysis of information issues in the workplace cannot be divorced from an organizational analysis of work and a social analysis of the organization's place within the larger values of society. When we fail to broaden the analysis in this way, we risk placing an unjustified value on just those aspects of a status quo which we are trying to assess. As I indicated in my last column, the development of the virtuous information manager may indeed include changes in the way work patterns and power relationships are configured. For example, with networked information technology the question of information sharing in an organization is no longer resolved by asking about the practical limits of sharing documents. The new technology usually makes room for electronic copies of a tremendous amount of information such as budgets, blueprints, plans, and schedules, as well as forums which allow workers to contribute to deliberative processes. The basis for decisions about what to share has changed, and there is a clear need, in my opinion, not to shrink from the ethical challenge of creating information policies which realize both organizational goals and the morally significant goals of the organization's stakeholders.
In the next column in this series, I plan to pursue this question about the new grounds for making workplace information policy and the ethical implications of that policy. We will look at some specific cases involving information sharing among organizational stakeholders in an effort to understand some of the ethical principles which ought to guide these policies.
1. This brief description oversimplifies a couple of the features of these artificially intelligent expert systems. We did, after all, give the computer rules for handling the data.
2. Dennett, Daniel C. "Information, Technology, and The Virtues of Ignorance." Daedalus 115 (Summer 1986):135-153.
3. In light of this criticism, we could recast Dennett's argument as a concern about the moral implications of achieving relatively complete knowledge about a subject, not about the advent of better information technology. Indeed, deep philosophical issues are raised by advances in our knowledge of human physiology and behavior.