Privacy, bias and discrimination are currently receiving considerable attention. Even so, they are often underprioritized in technology implementations and treated as isolated issues, addressed only when necessary. Many organizations instead pursue goals such as efficiency gains or increased profits, which often require richer data sets, without considering the eventual impact of their data handling practices on foundational social justice issues.1 The consequences of implementing technologies without fully understanding the privacy, bias and discrimination issues they pose threaten both individuals and enterprises. Built-in bias can undermine an individual’s ability to receive fair treatment in society. For organizations, the potential harms include reputational damage, financial loss, litigation, regulatory backlash, privacy violations and diminished trust from clients and employees.2 Developers should aim for technology applications that are impartial, unbiased and neutral, and organizations should address these foundational issues during the implementation of emerging technologies to ensure that bias and discrimination are not fundamental components of a system’s design.


For this conversation to advance, current industry behavior patterns must change. Favoring operational goals and efficiencies without prioritizing ethics is no longer an acceptable approach. The climate may be shifting, as revealed by the challenges some large enterprises have faced over biases designed into their systems and deployed at scale; several have since reevaluated and refined their technology implementations to address built-in bias. Designers must acknowledge the tendency to impart their individual biases into the systems they build. Further, enterprises must recognize that when technology implementations have built-in biases, using them to leverage large data sets can perpetuate discrimination at scale. Only by understanding where these fundamental problems arise will it be possible to develop nondiscriminatory systems and mitigate risk.

Read the full article at ISACA.


1 Lo Piano, S.; “Ethical Principles in Machine Learning and Artificial Intelligence: Cases From the Field and Possible Ways Forward,” Humanities and Social Sciences Communications, vol. 7, iss. 1, 17 June 2020
2 Cheatham, B.; K. Javanmardian; H. Samandari; “Confronting the Risks of Artificial Intelligence,” McKinsey Quarterly, 26 April 2019
