[Image: A scale and gavel on a judge’s wooden desk in a dark room, lit only by a small window, evoking how little light is currently shed on ethics in artificial intelligence.]

AI Ethics Needs to Be More than Theoretical in Your Organization

How developing an AI ethical framework can ensure sustainable growth in the long term

Many reading this will already have heard of Senate Bill 205, which recently passed in Colorado and could easily be placed in the new category of AI ethics enforcement. Industry advocates are adamant the bill is too vague, while many consumer groups worry its enforcement provisions don’t go far enough.

At the center of the bill’s vision are anti-discrimination and anti-bias measures meant to prevent AI from repeating historic patterns of bias in housing, healthcare, and other essential human services. Importantly, Senate Bill 205 operates on an outcome-based, rather than intention-based, enforcement system: what matters is whether a system produces discriminatory results in practice, not whether anyone intended it to.
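
To make the outcome-based idea concrete, auditors often compare favorable-outcome rates across groups; one common heuristic is the “four-fifths rule,” which flags a system when a protected group’s rate falls below roughly 80% of a reference group’s. The sketch below is a minimal illustration of such a check, not anything prescribed by the Colorado bill; the field names (group, approved) and the sample records are hypothetical.

```python
# Minimal sketch of an outcome-based disparity check.
# Assumes each logged decision records a group label and a binary outcome;
# the field names and threshold below are illustrative, not from SB 205.
from collections import defaultdict

def selection_rates(decisions):
    """Favorable-outcome (approval) rate for each group."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += d["approved"]
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, protected, reference):
    """Protected group's approval rate relative to the reference group's.
    Values below ~0.8 (the "four-fifths rule") are a common red flag."""
    rates = selection_rates(decisions)
    return rates[protected] / rates[reference]

decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]
print(disparate_impact_ratio(decisions, protected="B", reference="A"))  # 0.5
```

In practice, a check like this would run continuously over logged production decisions, since outcome-based enforcement cares about measured results rather than developer intent.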

It will take years to see the bill’s effectiveness against discrimination and its effect on technological growth in Colorado; enforcement of the law is set to begin in February 2026. Nonetheless, the arrival of legislation like Senate Bill 205 is an important step toward the widespread adoption of AI ethical frameworks.

An Overview of AI Ethics

“Trustworthy AI” is a popular phrase used to describe artificial intelligence applications with purported ethical guidelines built in. Yet for those adopting ready-built solutions, taking the AI creator’s word may not be enough to truly grasp the ethical framework behind a given product.

An overview of the subject defines AI ethics as the psychological, social, and political effects of AI in its adopted contexts (Kazim et al., 2021). In the paper, the researchers distinguish between principles, practices, and processes.

AI Ethics Principles

The first category of AI ethics principles is abstract first principles, or basic guidelines for the use and development of AI. We can see this category in action through campus-wide AI guidelines like those distributed by Santa Clara University’s Markkula Center for Applied Ethics.

These guidelines are an example of basic guidelines for the use, rather than the development, of AI and include instructions such as, “NEVER directly copy any words used by ChatGPT or any generative AI.”

An example of abstract first principles for development can be found in Google’s published Responsible AI Practices. Rather than focusing on the use of algorithms, these guidelines focus on their development, requiring steps like bias assessment through dataset evaluation and attention to workforce composition in building the artificial intelligence tools themselves.
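
As a hedged illustration of what “dataset evaluation” can involve, the sketch below checks whether the positive-label rate in training data differs sharply across a sensitive attribute before any model is trained. The column names and sample data are hypothetical and not drawn from Google’s published practices.

```python
# Minimal sketch of a pre-training dataset bias check.
# Column names ("gender", "label") and data are hypothetical.
import pandas as pd

def label_rate_by_group(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-label rate within each subgroup of the dataset."""
    return df.groupby(group_col)[label_col].mean()

df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M"],
    "label":  [0,   0,   1,   1,   1,   0,   1],
})

rates = label_rate_by_group(df, "gender", "label")
print(rates)                      # F: 0.33, M: 0.75
print(rates.max() - rates.min())  # a large gap warrants a closer look
```

A gap like this does not prove the data are unusable, but it tells the team where labels or sampling may encode historic bias before a model learns from it.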

Sci-fi fans may even think of an ever-present trope of the genre, codified in Asimov’s First Law of Robotics: AI may not cause harm to humans. Prohibiting harm to the creators may also be considered an abstract first principle of AI ethics.

AI Ethics Legislation Principles

The Colorado legislation discussed at the beginning of this article is an example of a legal framework for enforcing ethical AI development and deployment. Beneath this law, we can see a few of the intersecting questions underlying AI ethics legislation.

The first is the choice between a top-down approach via legislation and a bottom-up approach via responsive case law. Colorado is an example of a state taking a top-down approach rather than letting case law establish precedent in the use of data and enforceable ethical standards.

As exemplified by industry opposition in Colorado, another legislative principle questions whether legislation should be used at all in cases where self-regulation could be effective.

Another underlying principle of ethical AI legislation demonstrated by Colorado is jurisdiction. Artificial intelligence will be deployed on a global scale, with international laws and conflicting ethical frameworks at play across borders. In the case of Colorado, we also see smaller jurisdictional concerns within the United States. Will new AI technology adapt to Colorado or simply cease to operate in the state?

Bio/Medical Principles in AI Ethics

Using bio/medical ethical frameworks in AI ethics is not the same as questioning the use of AI in medicine. Rather, industries can rely on well-developed ethical frameworks like those in the medical space as a starting point for developing their own ethical AI.

The four classic principles of medical ethics are: 1) beneficence (do good), 2) non-maleficence (do no harm), 3) autonomy, and 4) justice. Many AI ethicists have turned to these as starting pillars for AI frameworks, but as argued in Nature Machine Intelligence in 2019, “principles alone cannot guarantee ethical AI” (Mittelstadt, 2019).

Processes in AI Ethics

In addition to AI principles, the processes for enacting ethical AI can be divided into two camps: ethical-by-design and governance (Kazim et al., 2021). Ethical-by-design is a process whose standardized practices are far from universally agreed upon. Trade-offs between privacy and data transparency, the integration and interpretation of datasets, and usability versus necessary complexity are just some of the by-design specifications that have yet to develop a universal standard.

Governance can be divided into technical and non-technical aspects. Technical aspects overlap with those of ethical-by-design processes: responsible administrators ensure the inclusion of those processes in any AI development. Non-technical governance includes tasks like assessing the impact of AI implementation over time, working with developers to include organization-specific ethical guidelines, and deciding when and how to implement AI within an organizational context.

Out of the Weeds, Into the Why: AI Ethics and Sustainable Growth

Understanding this overview of ethical AI can have major implications for sustainable growth in your organization. There are two ways to apply these ideas to your company’s AI journey. The first is to keep them in mind and ask questions that go deeper than an AI product’s party line: Who was involved in developing the vendor’s AI ethical guidelines? How are ethical questions involved in updating the product over time?

The second is to build AI ethics into your organization’s proprietary AI algorithms and solutions. We’ve discussed in the past how proprietary solutions built for your team can provide a large competitive advantage and set your team up for long-term success and innovation.

Incorporating AI ethical frameworks into this design process will help protect your company from damaging AI implementations that end in legal peril or public relations disasters. As the overview above shows, universal AI ethical guidelines still leave plenty of ambiguity. As the subject matter expert on your organization’s specific risk profile, you can ensure that ethical questions in design and deployment are tailored to your offerings and the potential pitfalls of deployment.

Conclusion

Having safeguards in place at the beginning of your AI journey ensures that your company builds processes for sustainable growth that won’t be hampered by ethical concerns left unaddressed at the outset. As you grow the number of AI solutions employed across your organization, each will have the backbone for ethical, sustained growth.

Ready to get started on high-performing, sustainable AI built for your organization’s specific needs? Reach out today for a consultation.


References

Google AI. (n.d.). Responsible AI practices. Google. https://ai.google/responsibility/responsible-ai-practices/

Kazim, E., Koshiyama, A., & Webber, J. (2021). A high-level overview of AI ethics. Patterns, 2(9), 100314. https://doi.org/10.1016/j.patter.2021.100314

Markkula Center for Applied Ethics. (2023, May 22). Guidelines for the ethical use of generative AI (i.e., ChatGPT) on campus. Santa Clara University. https://www.scu.edu/ethics/focus-areas/campus-ethics/guidelines-for-the-ethical-use-of-generative-ai-ie-chatgpt-on-campus/

Mittelstadt, B. (2019). Principles alone cannot guarantee ethical AI. Nature Machine Intelligence, 1, 501–507. https://doi.org/10.1038/s42256-019-0114-4
