Developing AI Responsibly: Perspectives from Industry, Academia, and the Public

This post brings together perspectives from a review of 164 research papers and 10 industry guidelines, complemented by a survey of 300 individuals, all centered on the theme of responsible AI development.

People trying to come up with ideas!
Image credit: Work illustrations by Storyset.

Understanding AI Research — Publications in Academic Venues

The academic community, which shapes the future and informs industry and regulation with evidence-based results, often mirrors industry concerns. Current research in human-centered responsible AI heavily emphasizes fairness and explainability. It is worth noting that a significant portion of AI research is industry-driven, which means independent research risks being overshadowed.

Summary of research themes on human-centered responsible AI (164 papers in total). See the full paper for details.

Governance is about how AI should be governed, which is what this post discusses. What does responsible AI even consist of?

Fairness ensures that AI provides the same quality of service to all intended users without favoring one group over another. A well-known example is facial recognition systems with inclusivity issues, which have made headlines and documentaries. You might want to check out Coded Bias if you haven’t watched it yet.

Explainability is about providing sources for results and understanding how a decision was made: knowing what is inside the black box I am communicating with.

Human well-being or flourishing is about caring for the people who use AI and the people who build it. We don’t want an AI that makes us addicted to it or drains our motivation for work; if you have watched the movie Her, you get what I mean. Beyond the users, we have to think about the labor behind AI. AI models are built on top of human annotations, and some annotators may not be paid well enough. This is very similar to the idea of sweatshops in fast fashion (cheaper is not always the better choice).

Privacy ensures that whatever data I have stays in my hands. Interacting with AI may result in over-sharing data that ends up in other people’s interactions with AI, or on OpenAI’s servers, for example.

Security is all about making sure that AI stays within its boundaries. For example, ChatGPT shouldn’t answer questions like how to create a bomb.

AI Industry Guidelines for Responsible AI Development

Major organizations like Google, Microsoft, IBM, OpenAI, Salesforce, and the National Institute of Standards and Technology (NIST) have already published responsible AI guidelines. These guidelines are instrumental in AI design, education, and communication.

Given the numerous guidelines echoing similar values, a unified understanding of AI ethics is essential for its ethical and responsible evolution. Here’s a list of 10 well-known companies and institutes with the responsible AI values listed on their websites:

A summary of responsible AI guidelines from 10 companies and institutes. I have arranged the values based on their similarities.

While academic research and industry guidelines align in some cases, companies tailor their approach to their business model. For instance, Hugging Face, as a platform for sharing AI models, emphasizes open sharing and credit attribution, while OpenAI, as a provider of public-facing AI tools like ChatGPT, prioritizes safety, security, and trust.

Big tech companies like Google, Meta, and Microsoft highlight fairness, explainability, and privacy, possibly because these topics often make the news.

Public Perception of AI

I recently surveyed participants from six countries, 50 from each (about 300 in total), about their expectations of AI. A noteworthy finding was the public’s desire for highly accurate AI.

Concerns arise when considering AI in critical roles like medical decision-making. What if it is not 100% accurate and its errors prove fatal?

Concerns about privacy and copyright also emerged. For instance, who retains the rights when collaborating with an AI tool on an essay? How do we ensure private interactions with AI chatbots? Tools like ChatGPT utilize user interactions for training, raising privacy concerns.

While fairness and explainability are vital, they might not be the public’s primary concerns.

On the other hand, despite AI’s potential for businesses and the massive investments in creating business value from AI, many survey respondents want to see AI helping in their personal lives, such as planning travel itineraries and shopping. (This research is ongoing, so more details will be available soon.)

Recommendations for the Future

Some potential future directions in AI governance include:

  • Making AI development human-centered. While companies like OpenAI frequently hire engineers and AI scientists, they rarely post openings for user researchers; Jakob Nielsen’s post sheds light on this issue. UX research can play a critical role in AI. ChatGPT, for example, has a tremendously innovative backbone, but its interface is flat, without much imagination or creativity. It looks like an old chat system without any distinctive interactions, perhaps mirroring the lack of UX in its design and development.

  • Improving public trust by effectively communicating AI’s capabilities. Over-reliance and excessive skepticism can both be detrimental, as seen when a lawyer overtrusted ChatGPT, leading to a courtroom faux pas. User research, both generative and evaluative, would be beneficial for discovering and understanding different approaches to communicating these values (fairness, explainability, privacy, etc.) to people through informative mediums. These communications should be transparent and help people make decisions without deception, or even introduce intentional friction to minimize harm.

  • Governance of Open-Source AI: Blessing or Curse? Open-source AI projects exude a sense of impartiality. However, they can have unintended consequences, such as being used to produce deepfakes. To mitigate potential risks, developers should meticulously document their work and adhere to responsible development guidelines. Yet the question of how to govern this open market remains unanswered. This is particularly important for companies like Hugging Face as they grow in hosting AI models. Communicating responsible AI values through simple visuals could be a direction for future UX.

  • Assessing Integrity of AI Research: Funding is a significant driver of research, and AI is no exception. Some researchers receive funding from tech giants, potentially influencing their research direction. This is reminiscent of how the tobacco industry once swayed academic research. There is therefore a pressing need for transparent reporting in AI research, as such influence, accumulated over time, could be substantial and possibly dangerous.

  • Exploring other aspects of AI governance, such as validity, privacy, security, and explainability, rather than putting too much stress on one aspect of responsible AI. While fairness is crucial and heavily researched, a balanced approach that considers the other elements is vital. We don’t want an AI that is super fair but doesn’t respect our privacy.


As demonstrated throughout this post, academics, industry, and the public approach AI from different perspectives. Their backgrounds, expectations, and values may or may not align.

Nonetheless, academics and the AI industry should put humanity at the forefront of their work, seeking to understand what people want from AI rather than taking a techno-centric view in which AI is built simply because of the capabilities it can offer.

Mohammad Tahaei
Research Lead