We will soon reach a point at which we can no longer tell whether a data set has been tampered with, whether intentionally or accidentally. AI systems depend on our trust: if we no longer trust their outcomes, decades of research and technological advancement will be for naught. Leaders in every sector, from government and business to nonprofits, must have confidence in the data and algorithms used. Building that trust and accountability requires transparency, yet greater transparency is hard to achieve, since corporations, government offices, law enforcement agencies, and other organizations understandably want to keep their data private. The ethics of how data is collected in the first place also bears on the trustworthiness and validity of scientific research, particularly in sensitive areas such as organ donation and medical research. In addition, employing ethicists to work directly with managers and developers, and ensuring diversity among development teams in race, ethnicity, and gender, can help reduce inherent bias in AI systems.