In the December 2015 issue of IEEE Technology and Society Magazine (IEEE T&S), 2015-2016 Society on Social Implications of Technology (SSIT) President Greg Adamson discusses Improving Our ‘Engineering-Crazed’ Image. Adamson opens his message with a mention of the recent VW environmental deception debacle, and the rift that such events create between the engineering community and the general public. “In addition to the anticipated financial impact on the company,” Adamson writes, “it is a setback to the credibility of technologists, one that brings ethics to the fore.” Adamson insists that, in situations such as these, an engineer’s primary loyalty should be to the public interest, not to his or her employer.
Below, Adamson expands upon his IEEE T&S article, and discusses in more depth his opinions on engineering and ethics.
For those interested in further discussion of engineering ethics – and the world’s perception of it – please visit IEEE SSIT’s website.
How would you further define “engineering-crazed,” and what other applications/situational impacts can you think of to which this term can be applied?
“Engineering-crazed” is a media term. What the journalist seems to be suggesting is that a technologist can focus on a vision of technological prowess and lose all perspective. Concern about this can be found in popular literature: the “crazed scientist” Victor Frankenstein, who builds the monster (in the movie versions, not in the original thoughtful book), could be an archetype. Within the technological community we see cases of technologies that become self-perpetuating, losing sight of their original social purpose. A serious example occurred in Australia in the 1980s with the building of hydroelectric dams on the picturesque island of Tasmania. Over time the value of each dam became less important than the need to continue building dams, and it took a large citizens’ movement to rein in the dam builders and protect the heritage environment.
You say this is a setback to the credibility of technologists. What are the negative implications of such a setback? How can this be alleviated?
During periods of significant technical change, public trust is probably the most important asset the technology community has. For the past three years the vehicle industry has been the subject of enormous debate on ethics in autonomous design: how should an autonomous vehicle reach a decision if all options lead to injury or death? For example, should the vehicle cause injury to its driver in order to avoid killing a pedestrian? Autonomous vehicles could significantly reduce the loss of life due to accidents, so their introduction strongly suggests a public good. “Prove that there isn’t a secret module in this car that deliberately caused my injury” isn’t a demand the industry would welcome. If the industry loses public trust, the future of autonomous vehicles will be weakened.
Why is engineering ethics so poorly perceived?
Within the technology community, some professionals still operate in blissful ignorance: they are not aware that they are being required to make ethical decisions, and therefore, not surprisingly, make poorly informed ones. Fortunately, this is less common today than it was 50 years ago. From outside the professional community, however, there is very little awareness of this change in the profession. The broad public trusts, hopes, or doubts that engineers will “do the right thing”. This (mostly) benign ignorance makes us vulnerable to shifts in opinion. A handful of high-profile cases where technologists are seen to be behaving badly could damage the reputation of our profession. On the other hand, if technologists are seen as proactive, protecting their organisations and profession from bad behaviour, we will build trust.
Should engineering ethics have to adapt to the times, especially in the era of surveillance devices, computer-controlled vehicles, and/or the presumption of free IP on the internet? Is there a concrete answer, or is this situational? Can there be leniency? Is it okay to be unethical?
We must adapt to the times, and this is why it isn’t possible to survive if we follow an unethical approach. In the past, “accidental” controls prevented intrusive technology from overstepping social boundaries. Surveillance didn’t matter because videotape devices were often faulty, and there was no collation of all images. Today the world is “enclosed” by technology. It is “tightly coupled”. This can be seen as “working better”. Alternatively, we can see that the room for failure is diminishing. Data gathered and inaccurately aggregated can affect individuals in unexpected ways, leaving them with no way of knowing how it happened or how to correct the results. As Poincaré explained in the late 19th century, in deterministic systems a small change in a cause yields only a small change in the result, while in a statistical system a small change can yield a result of any finite size. One could say our world is becoming less deterministic. It is becoming harder to know what impact poor ethical decisions will have, and this makes them more dangerous.
In some industries there is an expectation of unethical behaviour, for example in organized crime. In others there is an expectation of trust, such as in health care. Presumably our profession would prefer to have expectations of trust.
What are the biggest threats to “good ethics” in terms of the engineering profession?
Ethical behavior may be viewed as an individual moral obligation, facing the combined weight of organizational self-interest and bureaucratic inertia. Perhaps a better way to think of it is as a shared obligation. All technology professionals share an interest in protecting their jobs, their organizations, and their profession. If we look at the recent history of financial services, there have been something like half a trillion dollars in regulatory fines for legal breaches. This trend is now found in relation to technology failure. This decade we have seen the cost to BP of the 2010 Gulf of Mexico oil spill (tens of billions of dollars), and speculation about the eventual cost to VW of its deliberately misleading behaviour. Organizations need ethical technologists to protect them from reputational damage and financial loss. From this perspective the biggest threat to ethics is poor organizational support for the ethical technologist, for example, weak whistle-blower protection, particularly in organizations with a culture of “shoot the messenger.”
A completely separate threat is the growing responsibility of technologists for areas previously managed by other professions, such as in health care and the life sciences. Here the threat is the lack of understanding of the complexities that arrive with the new opportunities. For example, if multiple embedded medical devices from different suppliers within one person fail to share life-critical data, due to technical or legal barriers, then standard technical practices may become life-threatening.
How can we bring this dilemma into dialogue between both practitioners and the general public (and how does an organization such as the IEEE/SSIT or other “professional organizations that promote public interest” facilitate this)?
Establishing an effective dialogue on ethics between technology professionals and the broader community is primarily about acknowledging our limitations. This may be difficult as we work in a culture of problem solving, and this includes designing to avoid limitations in our products. Nevertheless, here are some limitations we may need to acknowledge:
- Technologists are just one set of professionals, and our fields of expertise are limited. As required we work with social scientists, medical professionals, lawyers, architects, political scientists, policy specialists, elected officials and hundreds of other groups.
- In a dialogue with non-technical audiences, we cannot insist on framing the dialogue in technical terms. We do not understand a community, just as they do not understand a technology, until we have the dialogue.
- We cannot insist that our view is “right” or “true”. We should also avoid appealing to metaphysical concepts such as “progress”. All we can reasonably argue is that our view is “fact based”, accepting that “facts” include traditions, feelings, and community priorities.
- Technologists are in general poor communicators. We place more value on what we say than on how we say it. Courses on the art of rhetoric are not found in undergraduate engineering degrees!
- We are sometimes completely wrong in our own fields of expertise. For example, the loss of the Mars Climate Orbiter in 1999, due to confusion between metric and imperial units of measurement, is a cause for reflection.
Taking these points on board, the IEEE Society on the Social Implications of Technology (SSIT) seeks to build bridges to the social sciences and humanities; to understand the reasoning of those who “oppose technology”, even if we disagree with the reasoning; and to let the broader community know that technology professional organizations care about these issues.
What should this dialogue accomplish?
A dialogue can aim to raise the quality of public discourse. “Fact-based” discussion can be counterposed to a prejudice- or ignorance-based one, and there is an opportunity for the technical community to contribute to a reduction in prejudice and ignorance.
What responsibility does a professional organization have to uphold and promote good ethics?
IEEE has had a code of ethics for more than 100 years, and should be proud of this. This is a living code, updated to reflect community and professional attitudes. Point 10 of the code places a responsibility on all IEEE members to “assist colleagues and co-workers in their professional development and to support them in following this code of ethics.” This responsibility is held both individually, and collectively by IEEE. We see this done in a wide range of Technical Society activities, at conferences, in publications, and in other ways.
IEEE can also provide support to individuals and organizations in standing up for principles and arguing why these principles are important. For example, IEEE-USA is actively explaining the damage that could occur if technology companies are forced to allow back-doors in their security. In this case the ethical path isn’t contrasted with a non-ethical path, but with a path of ignorance. Actions such as this can show the community the ways in which our ethical practices help to protect it.
The Society on Social Implications of Technology (SSIT) of the Institute of Electrical and Electronics Engineers (IEEE) is concerned with how technology impacts the world, and with how the application of technology can improve the world. The Society focuses on issues such as: humanitarian engineering; environmental issues, including climate change, green technologies, and sustainable design; privacy and security; other economic, health, and safety implications of technology; engineering ethics and professional responsibility; engineering education, including K-12 education and education in the social implications of technology; history of technology; public policy related to engineering, technology, and science; health and healthcare technologies and their impact; reliable energy and related social issues; and social issues of information technology and telecommunications.