AI is advancing quickly in 2026, and so are the questions about how to use it responsibly. This article introduces four leading researchers in ethical AI and explains how their work supports fairness, safety, and accountability. With AI adoption at record levels worldwide, these voices help shape how technology affects society today and in the years ahead.
Abhishek Gupta – Making Ethics Actionable

“Move fast and fix things.”
Dr. Abhishek Gupta (1992-2024) left a lasting legacy in AI ethics. As the founder of the Montreal AI Ethics Institute (MAIEI) and former Director for Responsible AI at Boston Consulting Group, he dedicated his career to bridging the gap between ethical principles and practical implementation. His involvement with ACI Montreal played a crucial role in promoting Augmented Collective Intelligence (ACI) as a tool for responsible AI governance.
“I work with your organization and relevant stakeholders to design, develop, and deploy a Responsible AI program that helps you achieve your purpose, live your values, and deliver your goals all while innovating responsibly,” Gupta explained of his approach.
His impactful work focused on:
- Creating practical frameworks for responsible AI deployment.
- Environmental sustainability in AI through his work with the Green Software Foundation.
- Developing the concept of Augmented Collective Intelligence (ACI).
- Publishing the widely read “State of AI Ethics” reports.
The Montreal AI Ethics Institute (MAIEI) continues his vision of democratizing AI ethics literacy. His blog posts, including insights on “Hallucinating and moving fast”, and his coverage of conferences like FOSSY (Free and Open Source Software for You) and AIES (AI, Ethics and Society), remain valuable resources for those working in ethical AI.
During his time at BCG Henderson Institute, Dr. Gupta operated under the name Agent K, contributing to the Agent K Blog, where he explored daily insights into building more ethical AI systems. Platforms like Codex.io further amplified his expertise on responsible AI, offering practical advice companies could actually use, not just discuss. His Day in a Life series provided glimpses into his daily routines and strategies for effective AI governance.
Dr. Abhishek Gupta was particularly concerned about environmental impacts. “AI systems are compute-intensive,” he noted. “They require massive amounts of data that might be moved over the wire, and require specialized hardware to operate effectively. All of these activities require electricity – which has a carbon footprint.”
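Gupta’s point can be made concrete with a back-of-the-envelope estimate: operational energy is roughly power draw times runtime, scaled by data-center overhead, and emissions are that energy times the local grid’s carbon intensity. The sketch below uses hypothetical inputs and a made-up example, not figures from Gupta’s work or any real system:

```python
# Rough estimate of the operational carbon footprint of a training run.
# All numbers are hypothetical illustrations, not real measurements.

def training_emissions_kg(gpu_count: int,
                          gpu_power_watts: float,
                          hours: float,
                          pue: float = 1.5,
                          grid_intensity_kg_per_kwh: float = 0.4) -> float:
    """Estimate CO2-equivalent emissions in kilograms.

    pue: data-center Power Usage Effectiveness (facility overhead multiplier).
    grid_intensity_kg_per_kwh: carbon intensity of the local electricity grid.
    """
    energy_kwh = gpu_count * gpu_power_watts * hours / 1000 * pue
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical example: 64 GPUs drawing 300 W each for two weeks (336 hours).
print(round(training_emissions_kg(64, 300.0, 336.0), 1))
```

Plugging in the actual carbon intensity of a given region shows why where a model is trained can matter as much as how long it trains.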
Through his work, Gupta proved that responsible AI is not just about discussions; it’s about action. The tools and frameworks he developed continue to guide companies toward ethical AI practices that matter.
Neil Burch – Mastering Decision-Making Under Uncertainty

“It really does get truly enormous. Like, larger than the number of atoms in the universe.”
Neil Burch, a Canada CIFAR AI Chair and Senior Research Scientist at Sony AI, focuses on AI decision-making under uncertainty. His research spans complex games, from the perfect-information game of checkers to poker, where information is incomplete and opponents can bluff.
Burch’s key achievements include:
- Helping solve checkers (2007) and heads-up limit Texas hold’em poker (2015)
- Creating DeepStack, the first AI to beat professional players at heads-up no-limit Texas hold’em
- Developing algorithms for decision-making with limited information
- Enhancing how multiple AI systems work together
While Burch’s research does not directly focus on online gambling systems, the principles behind his work – uncertainty, hidden information, adversarial behavior – are relevant to broader AI fairness and safety challenges.
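Those principles can be illustrated with regret matching, the self-play idea at the heart of the counterfactual-regret family of algorithms behind modern poker solvers. The sketch below is a minimal toy on rock-paper-scissors, not code from DeepStack or Cepheus:

```python
import random

# Regret matching in self-play on rock-paper-scissors: a minimal sketch of
# the core idea behind counterfactual-regret methods used in poker solvers.
# Illustrative only -- not code from any real poker system.

ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors
PAYOFF = [[0, -1, 1], [1, 0, -1], [-1, 1, 0]]  # PAYOFF[mine][theirs]

def strategy_from_regrets(regrets):
    """Play each action in proportion to its positive cumulative regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    if total == 0:
        return [1.0 / ACTIONS] * ACTIONS
    return [p / total for p in positive]

def train(iterations=20000, seed=0):
    rng = random.Random(seed)
    regrets = [[0.0] * ACTIONS for _ in range(2)]
    strategy_sums = [[0.0] * ACTIONS for _ in range(2)]
    for _ in range(iterations):
        strategies = [strategy_from_regrets(r) for r in regrets]
        actions = [rng.choices(range(ACTIONS), weights=s)[0] for s in strategies]
        for player in (0, 1):
            me, opp = actions[player], actions[1 - player]
            for a in range(ACTIONS):
                # Regret: how much better action a would have done vs. what we played.
                regrets[player][a] += PAYOFF[a][opp] - PAYOFF[me][opp]
            for a in range(ACTIONS):
                strategy_sums[player][a] += strategies[player][a]
    # The time-averaged strategy converges toward the Nash equilibrium.
    return [[s / iterations for s in sums] for sums in strategy_sums]

avg = train()  # both players approach the uniform (1/3, 1/3, 1/3) mix
```

Each single iteration is noisy, but because every exploitable deviation accumulates regret, the time-averaged strategy drifts toward the unexploitable uniform mix.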
How AI Research Supports Fairness in Digital Gambling Systems
AI research in games like poker and other strategic environments offers useful lessons about decision-making under uncertainty. These studies explore how algorithms handle incomplete information, follow clear rules, and avoid biased outcomes. While this research is not created specifically for gambling products, the underlying principles – fairness checks, transparency, and predictable behavior – are important in many digital systems, including those used by online entertainment platforms.
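One concrete mechanism in this space is the commit-reveal scheme that some “provably fair” platforms describe: the operator publishes a hash of a secret seed before play, so players can verify afterwards that outcomes were not altered. The sketch below is illustrative only; the function names are hypothetical and it does not reproduce any real operator’s protocol:

```python
import hashlib
import hmac
import secrets

# Illustrative commit-reveal scheme of the kind some "provably fair"
# platforms describe. A sketch under simplified assumptions, not any
# real operator's protocol.

def commit(server_seed: bytes) -> str:
    """Operator publishes this hash before the game starts."""
    return hashlib.sha256(server_seed).hexdigest()

def outcome(server_seed: bytes, client_seed: bytes, sides: int = 6) -> int:
    """Deterministic roll derived from both the operator's and player's seeds."""
    digest = hmac.new(server_seed, client_seed, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") % sides + 1

def verify(commitment: str, server_seed: bytes, client_seed: bytes,
           claimed: int, sides: int = 6) -> bool:
    """Player checks the revealed seed matches the commitment and the outcome."""
    return (hashlib.sha256(server_seed).hexdigest() == commitment
            and outcome(server_seed, client_seed, sides) == claimed)

server_seed = secrets.token_bytes(32)   # operator's secret, revealed after play
client_seed = b"player-chosen-entropy"  # contributed by the player
c = commit(server_seed)
roll = outcome(server_seed, client_seed)
assert verify(c, server_seed, client_seed, roll)
```

Because the commitment is fixed before the player contributes a seed, neither side can steer the result after the fact, which is the same transparency-and-predictability property fairness research emphasizes.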
As interest in safer online choices grows, some readers consult independent reviews that explain how to identify the safest online casinos in Canada. Guides to secure and trustworthy platforms outline how licensed operators use certified random number generators, third-party audits, and strict compliance requirements to maintain fairness.
After understanding these safety standards, it also helps to compare options through a reliable list of verified sites that highlights platforms following clear rules, offering strong user protections, and maintaining transparent practices. These checks reflect the same fairness principles that appear across responsible AI research.
Fairness and responsible AI are common themes in ethical AI research, and they are just as crucial in online gambling. Ensuring that algorithms function ethically and transparently in online casinos is part of building AI systems that align with human values. As AI continues to shape online gambling, maintaining fairness and accountability becomes essential.
Next, we will look at how another expert, Timnit Gebru, has exposed AI’s weaknesses and pushed for fairness and accountability.
Timnit Gebru – Exposing AI’s Blind Spots
“I just couldn’t not be in there right away.”
Timnit Gebru has become one of the most important voices highlighting the risks of unchecked AI. As founder of the Distributed Artificial Intelligence Research Institute (DAIR) and co-founder of Black in AI, she fights for fairness and transparency in AI systems.
“At discussions on AI ethics, you’d be hard-pressed to find anyone with a background in anthropology or sociology,” Gebru has noted, pointing out the need for different views in AI development.
Her important research includes:
- Co-writing “Gender Shades,” showing bias in facial recognition systems
- Studying the risks of large language models
- Using computer vision to study social patterns
- Pushing for diversity in AI development
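At its core, the “Gender Shades” methodology is a subgroup audit: measure a model’s error rate separately for each demographic group and report the gap. The sketch below uses hypothetical labels and predictions, not the study’s data:

```python
# Minimal sketch of a subgroup audit in the spirit of "Gender Shades":
# compare a classifier's error rate across demographic groups.
# All labels and predictions below are hypothetical.

def error_rate(labels, predictions):
    """Fraction of examples the classifier got wrong."""
    wrong = sum(1 for y, p in zip(labels, predictions) if y != p)
    return wrong / len(labels)

def audit(per_group):
    """per_group: {group_name: (labels, predictions)} -> (rates, worst gap)."""
    rates = {g: error_rate(y, p) for g, (y, p) in per_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical results: the model is perfect on group_a but errs on group_b.
data = {
    "group_a": ([1, 0, 1, 1, 0, 1, 0, 1], [1, 0, 1, 1, 0, 1, 0, 1]),
    "group_b": ([1, 0, 1, 1, 0, 1, 0, 1], [0, 0, 1, 0, 0, 1, 0, 0]),
}
rates, gap = audit(data)
print(rates, gap)
```

A single aggregate accuracy number would hide this disparity entirely, which is exactly the blind spot the audit is designed to expose.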
Gebru’s work shows how AI can amplify existing biases and harm certain groups. Her departure from Google in 2020 sparked important conversations about AI ethics inside big companies.
“The field of AI Ethics has reached a stasis in terms of applicability of ideas in real-life scenarios,” she says. Her institute, DAIR, focuses on “documenting the effect of artificial intelligence on marginalized groups, with a focus on Africa and African immigrants in the United States.”
In 2026, Gebru’s work remains central to debates about powerful foundation models, AI surveillance, and the need for regulation that protects human rights.
Julian Togelius – Learning Through Play
“I ask what AI can do for games, and what games can do for AI.”
Julian Togelius, a professor at New York University, takes a unique approach to AI ethics: through games. As co-director of the NYU Game Innovation Lab, he explores how games can help develop smarter and more adaptable AI.
“I’m working on artificial intelligence techniques for making computer games more fun, and on games for making artificial intelligence smarter,” Togelius explains. “I ask what AI can do for games, and what games can do for AI.”
His innovative work includes:
- Creating AI that can design games and game content
- Building systems that can play many different games
- Using evolution-inspired algorithms to create game levels
- Running competitions to advance AI capabilities
What makes this work valuable for ethics is his focus on creating flexible AI that can adapt to different situations rather than just doing one thing well. Games provide perfect testing grounds for AI that needs to follow rules, adapt to changes, and interact with others.
“Games, in particular video games, are perfect test-beds for AI methods,” he notes. “But it is important that you test your algorithms not just on a single game, but on a large number of games, so you focus on general intelligence and not just solving a single problem.”
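A toy version of the evolution-inspired level generation Togelius describes: mutate a one-dimensional “level” of blocks and gaps, keeping changes that move it closer to a design target. The fitness function and target below are invented for illustration, not taken from the Game Innovation Lab’s generators:

```python
import random

# Toy (1+1) evolutionary search for a 1-D platformer "level" of blocks (#)
# and gaps (-). The fitness target is invented for illustration.

LEVEL_LENGTH = 40
TARGET_GAP_RATIO = 0.3  # hypothetical design goal: 30% of the level is gaps

def fitness(level):
    """Closer to the target gap ratio is better; unjumpable gaps are penalized."""
    gap_ratio = level.count("-") / len(level)
    widest_gap = max((len(run) for run in level.split("#") if run), default=0)
    penalty = 1.0 if widest_gap > 3 else 0.0  # gaps wider than 3 are unjumpable
    return -abs(gap_ratio - TARGET_GAP_RATIO) - penalty

def mutate(level, rng, rate=0.05):
    """Flip each tile between block and gap with a small probability."""
    flip = {"#": "-", "-": "#"}
    return "".join(flip[c] if rng.random() < rate else c for c in level)

def evolve(generations=2000, seed=1):
    rng = random.Random(seed)
    best = "#" * LEVEL_LENGTH  # start from a solid, gap-free level
    for _ in range(generations):
        child = mutate(best, rng)
        if fitness(child) >= fitness(best):  # keep the child if no worse
            best = child
    return best

level = evolve()
print(level, round(level.count("-") / LEVEL_LENGTH, 2))
```

Even this tiny hill-climber shows the appeal of the approach: the designer states goals declaratively in the fitness function, and the search finds levels that satisfy them.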
AI in 2026: The Numbers That Matter

The work of these researchers becomes more urgent when we look at how fast AI is growing. Current trends show both the great potential and big challenges of AI:
- The global AI market grew to approximately $390.9 billion in 2025 and is on track to exceed $1 trillion by the early 2030s.
- 88% of global companies now use AI in at least one business function, up from 78% last year (McKinsey 2025).
- In the EU, only 13.5% of businesses with 10+ employees use AI, showing large regional gaps (Eurostat 2025).
- 46% of global consumers say they trust AI systems (KPMG 2025).
- 77% worry about misuse of AI or loss of control in critical systems.
- AI continues to shift the job market: while automation may replace certain roles, new AI-related jobs grow even faster.
Building AI That Serves Humanity
The four researchers we’ve looked at – Gupta, Burch, Gebru, and Togelius – approach ethical AI from different angles, but all work toward making sure AI benefits humanity. From practical frameworks to game theory, from exposing bias to using games for training, their work addresses key challenges in building AI that aligns with human values.
As AI becomes more part of our lives, their research provides essential guidance for developing systems that are not only powerful but also fair, safe, and transparent. With 85% of people supporting efforts to make AI more secure and ethical, these researchers are leading the way toward a future where AI helps rather than harms us. Their combination of technical skills and ethical thinking will be crucial as we navigate the opportunities and challenges of our AI-powered future. Their work reminds us that we have a say in how AI develops – it’s something we can and must shape together.