Generative AI breakthroughs over the past year have crystallized a significant issue that IT leaders have long been aware of but few have addressed programmatically: tech ethics.
And the stakes are beginning to mount. Of 119 CEOs polled at the Yale CEO Summit this summer, 42% said they believe AI has the potential to destroy humanity within the next decade. Indeed, a report by authors from Carnegie Mellon, the Center for AI Safety, and the Bosch Center for AI demonstrated how easily the safety measures in recently released AI chatbots can be circumvented, causing the chatbots to generate harmful and dangerous content.
Confronted with the prospect of destroying civilization, tech leaders have proposed two paths: moratoria on development or legislative regulation.
Moratoria are unrealistic, and regulation takes time to develop. The public-private partnerships necessary to build technologies safely and to focus on society's greatest challenges require months, if not years, to take shape, in part because companies do not decide to collaborate easily or quickly.
While those approaches will certainly be necessary to ensure AI systems are designed and built to be safe, they should be augmented by careful, principled, and practical technology development by enterprises seeking to make good on the technology's promise. People say they want technology deployed responsibly, but it rarely works out that way, in part because ethical technology starts with the design and development of technologies, not just their implementation.
Drawing on our experience working inside companies and on our research, we have outlined a staged roadmap that IT leaders can follow even before regulation arrives. From this roadmap, we offer five best practices for developing ethical and humane technology, giving people working inside companies both a set of tools and the agency they need to build technology ethically without waiting for regulation.