Artificial intelligence is changing our world fast, but it raises big questions about fairness, safety, and who controls it. Who’s making sure AI helps people instead of hurting them? A group of researchers is working hard to make AI follow human values and act responsibly.
In this article, we talk about four key names in ethical AI, including Abhishek Gupta. They’re tackling AI’s biggest challenges from different angles. With more companies using AI and the market set to reach $1.3 trillion by 2030, their work matters now more than ever.
Abhishek Gupta – Making Ethics Actionable

“Move fast and fix things.”
Dr. Abhishek Gupta (1992-2024) left a lasting legacy in AI ethics. As the founder of the Montreal AI Ethics Institute (MAIEI) and former Director for Responsible AI at Boston Consulting Group, Dr. Gupta dedicated his career to bridging the gap between ethical principles and practical implementation. He also championed Augmented Collective Intelligence (ACI) as a tool for responsible AI governance.
“I work with your organization and relevant stakeholders to design, develop, and deploy a Responsible AI program that helps you achieve your purpose, live your values, and deliver your goals all while innovating responsibly,” Gupta said of his approach.
His impactful work focused on:
- Creating practical frameworks for responsible AI deployment.
- Environmental sustainability in AI through his work with the Green Software Foundation.
- Developing the concept of Augmented Collective Intelligence (ACI).
- Publishing the widely-read “State of AI Ethics” reports.
The Montreal AI Ethics Institute (MAIEI) continues his vision of democratizing AI ethics literacy. His blog posts, including insights on “Hallucinating and moving fast”, and his coverage of conferences like FOSSY (Free and Open Source Software for You) and AIES (AI, Ethics and Society), remain valuable resources for those working in ethical AI.
During his time at the BCG Henderson Institute, Dr. Gupta wrote under the name Agent K on the Agent K Blog, sharing daily insights into building more ethical AI systems. Platforms like Codex.io further amplified his expertise on responsible AI, offering practical advice companies could actually use, not just discuss. His Day in the Life series provided glimpses into his daily routines and strategies for effective AI governance.
Dr. Abhishek Gupta was particularly concerned about environmental impacts. “AI systems are compute-intensive,” he noted. “They require massive amounts of data that might be moved over the wire, and require specialized hardware to operate effectively. All of these activities require electricity – which has a carbon footprint.”
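To make that point concrete, here is a rough back-of-the-envelope sketch of the accounting he argued for: estimate the energy a training run consumes, then multiply by the carbon intensity of the local grid. Every figure below (accelerator count, power draw, run length, overhead factor, grid intensity) is an illustrative placeholder, not a measured value.

```python
# Back-of-the-envelope carbon estimate for a hypothetical training run.
# All numbers are illustrative placeholders, not measurements.

GPU_COUNT = 64                  # accelerators used for the run (assumed)
GPU_POWER_KW = 0.4              # average draw per accelerator, in kilowatts (assumed)
TRAINING_HOURS = 24 * 14        # a two-week run (assumed)
PUE = 1.2                       # data-center overhead: cooling, networking, etc.
GRID_INTENSITY_G_PER_KWH = 400  # grams of CO2 per kWh; varies widely by region

energy_kwh = GPU_COUNT * GPU_POWER_KW * TRAINING_HOURS * PUE
emissions_kg = energy_kwh * GRID_INTENSITY_G_PER_KWH / 1000

print(f"Energy used: {energy_kwh:,.0f} kWh")
print(f"Estimated emissions: {emissions_kg:,.0f} kg CO2")
```

Simple as the arithmetic is, it makes the trade-offs visible: the same run on a low-carbon grid can emit far less CO2, which is the kind of lever his Green Software Foundation work highlighted.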
Through his work, Gupta proved that responsible AI is not just about discussions; it’s about action. The tools and frameworks he developed continue to guide companies toward ethical AI practices that matter.
Neil Burch – Mastering Decision-Making Under Uncertainty

“It really does get truly enormous. Like, larger than the number of atoms in the universe.”
Neil Burch, a Canada CIFAR AI Chair and Senior Research Scientist at Sony AI, focuses on AI decision-making under uncertainty. His research spans complex games from checkers, where both players see the full board, to poker, where information is hidden and opponents actively deceive.
Burch’s key achievements include:
- Helping solve checkers (2007) and essentially solve heads-up limit Texas hold’em poker (2015)
- Co-developing DeepStack, the first AI to beat professional players at heads-up no-limit poker
- Developing algorithms for decision-making with limited information
- Enhancing how multiple AI systems work together
Poker, with its bluffing and hidden information, was a perfect testing ground for AI. His work on DeepStack showed how AI can make sound decisions even with incomplete information.
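DeepStack’s full pipeline, which combines continual re-solving with learned value estimates, is far too large for a short example, but the regret-matching idea behind CFR-style poker solvers fits in a few lines. The sketch below is not DeepStack; it uses rock-paper-scissors, where you likewise cannot see your opponent’s choice in advance, and lets two regret-matching players train against each other until their average strategies approach the game’s equilibrium.

```python
import random

# Rock-paper-scissors payoff for the row player: UTILITY[my_action][opp_action]
ACTIONS = ["rock", "paper", "scissors"]
UTILITY = [
    [0, -1, 1],   # rock ties rock, loses to paper, beats scissors
    [1, 0, -1],   # paper beats rock, ties paper, loses to scissors
    [-1, 1, 0],   # scissors loses to rock, beats paper, ties scissors
]

def get_strategy(regret_sum):
    """Turn accumulated positive regrets into a mixed strategy."""
    positive = [max(r, 0.0) for r in regret_sum]
    total = sum(positive)
    if total > 0:
        return [p / total for p in positive]
    return [1.0 / len(regret_sum)] * len(regret_sum)  # no regrets yet: play uniformly

def train(iterations=50000):
    my_regrets, opp_regrets = [0.0] * 3, [0.0] * 3
    strategy_sum = [0.0] * 3
    for _ in range(iterations):
        my_strategy = get_strategy(my_regrets)
        opp_strategy = get_strategy(opp_regrets)
        strategy_sum = [s + x for s, x in zip(strategy_sum, my_strategy)]
        my_action = random.choices(range(3), weights=my_strategy)[0]
        opp_action = random.choices(range(3), weights=opp_strategy)[0]
        # Regret of each alternative action: how much better it would have done.
        for a in range(3):
            my_regrets[a] += UTILITY[a][opp_action] - UTILITY[my_action][opp_action]
            opp_regrets[a] += UTILITY[a][my_action] - UTILITY[opp_action][my_action]
    total = sum(strategy_sum)
    return {ACTIONS[i]: strategy_sum[i] / total for i in range(3)}

if __name__ == "__main__":
    print(train())  # the average strategy drifts toward the 1/3, 1/3, 1/3 equilibrium
```

Counterfactual regret minimization applies this same regret-driven loop to every decision point in a game tree, which is how solvers scale from toy games like this one up to poker.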
These techniques also apply to online gambling and casino games, where algorithms must handle uncertainty and ensure fairness. Burch’s research helps AI make ethical, accurate decisions in complex, unpredictable environments.
How AI Research Impacts Online Casino Algorithms
AI research in games like poker is not just about winning – it’s about making fair and accurate decisions even with limited or deceptive information. The same techniques used to master poker are also applied to other online gambling systems, including slot games, blackjack, and roulette. Algorithms designed for decision-making and fairness help online casinos maintain integrity, prevent cheating, and improve user experience. For players, finding platforms that apply these same principles of fairness is just as important when choosing where to play.
This reliable list of verified sites is especially useful for comparing the best online casino sites in Canada. It highlights casinos that prioritize transparency, fair play, and strong user protections – key factors when real money is involved. The guide is regularly updated to reflect the safest, most reliable choices available to Canadian players.
Fairness and responsible AI are common themes in ethical AI research, and they are just as crucial in online gambling. Ensuring that algorithms function ethically and transparently in online casinos is part of building AI systems that align with human values. As AI continues to shape online gambling, maintaining fairness and accountability becomes essential.
Next, we will look at how another expert, Timnit Gebru, has exposed AI’s weaknesses and pushed for fairness and accountability.
Timnit Gebru – Exposing AI’s Blind Spots
“I just couldn’t not be in there right away.”
Timnit Gebru has become one of the most important voices highlighting the risks of unchecked AI. As founder of the Distributed Artificial Intelligence Research Institute (DAIR) and co-founder of Black in AI, she fights for fairness and transparency in AI systems.
“At discussions on AI ethics, you’d be hard-pressed to find anyone with a background in anthropology or sociology,” Gebru has noted, pointing out the need for different views in AI development.
Her important research includes:
- Co-writing “Gender Shades,” showing bias in facial recognition systems
- Studying the risks of large language models
- Using computer vision to study social patterns
- Pushing for diversity in AI development
Gebru’s work shows how AI can make existing biases worse and harm certain groups. Her departure from Google in 2020 started important conversations about AI ethics in big companies.
“The field of AI Ethics has reached a stasis in terms of applicability of ideas in real-life scenarios,” she says. Her institute, DAIR, focuses on “documenting the effect of artificial intelligence on marginalized groups, with a focus on Africa and African immigrants in the United States.”
Julian Togelius – Learning Through Play
“I ask what AI can do for games, and what games can do for AI.”
Julian Togelius, a professor at New York University, takes a unique approach to AI ethics: through games. As co-director of the NYU Game Innovation Lab, he explores how games can help develop smarter and more adaptable AI.
“I’m working on artificial intelligence techniques for making computer games more fun, and on games for making artificial intelligence smarter,” Togelius explains. “I ask what AI can do for games, and what games can do for AI.”
His innovative work includes:
- Creating AI that can design games and game content
- Building systems that can play many different games
- Using evolution-inspired algorithms to create game levels (a toy sketch follows this list)
- Running competitions to advance AI capabilities
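As a flavor of the evolution-inspired approach mentioned in the list above, the toy sketch below evolves a one-dimensional level, just a string of obstacles and gaps, toward a target obstacle density while penalizing obstacle runs too long to jump over. The representation, fitness function, and every parameter are invented for illustration and are far simpler than anything in Togelius’s published systems.

```python
import random

LEVEL_LENGTH = 40
TARGET_DENSITY = 0.3   # desired fraction of obstacle tiles (assumed)
POP_SIZE = 30
GENERATIONS = 200

def random_level():
    # A level is a bit string: 1 = obstacle, 0 = open ground.
    return [random.randint(0, 1) for _ in range(LEVEL_LENGTH)]

def fitness(level):
    # Reward levels near the target obstacle density and penalize
    # obstacle runs longer than 3 tiles (assumed to be unjumpable).
    density_error = abs(sum(level) / LEVEL_LENGTH - TARGET_DENSITY)
    longest_run, run = 0, 0
    for tile in level:
        run = run + 1 if tile else 0
        longest_run = max(longest_run, run)
    return -density_error - 0.1 * max(0, longest_run - 3)

def mutate(level, rate=0.05):
    # Flip each tile with a small probability.
    return [1 - t if random.random() < rate else t for t in level]

def evolve():
    population = [random_level() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 2]  # keep the fitter half
        children = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    print("".join("#" if tile else "." for tile in evolve()))
```

Swap in a fitness function that measures playability, difficulty, or novelty, and the same loop starts to resemble the search-based procedural content generation that Togelius studies at much larger scale.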
What makes this work valuable for ethics is his focus on creating flexible AI that can adapt to different situations rather than just doing one thing well. Games provide perfect testing grounds for AI that needs to follow rules, adapt to changes, and interact with others.
“Games, in particular video games, are perfect test-beds for AI methods,” he notes. “But it is important that you test your algorithms not just on a single game, but on a large number of games, so you focus on general intelligence and not just solving a single problem.”
AI in 2025: The Numbers That Matter

The work of these researchers becomes more urgent when we look at how fast AI is growing. Current trends show both the great potential and big challenges of AI:
- According to Grand View Research, the AI market is projected to grow from $279 billion in 2024 to $1.3 trillion by 2030.
- Exploding Topics reports that 77% of companies are using or exploring AI, and 83% of businesses consider AI a top priority.
- The World Economic Forum projects that AI could displace 85 million jobs but create 97 million new ones by 2025.
- Forbes indicates that 60% of business owners believe AI boosts productivity, but 77% of people fear AI will cause job losses.
- The most in-demand AI-related jobs include data engineers, data scientists, and machine learning engineers, according to AIPRM.
- Only 39% of people consider AI technology safe, while 80% are worried about AI being used for cyberattacks, according to a study by MITRE.
- Research by Forbes shows that 65% of consumers trust AI if companies are transparent about their practices.
- According to MITRE, 85% of respondents support national efforts to make AI safe, and 81% want more investment in AI safety. Additionally, 85% want companies to be transparent before launching AI products.
Building AI That Serves Humanity
The four researchers we’ve looked at – Gupta, Burch, Gebru, and Togelius – approach ethical AI from different angles, but all work toward making sure AI benefits humanity. From practical frameworks to game theory, from exposing bias to using games for training, their work addresses key challenges in building AI that aligns with human values.
As AI becomes more part of our lives, their research provides essential guidance for developing systems that are not only powerful but also fair, safe, and transparent. With 85% of people supporting efforts to make AI more secure and ethical, these researchers are leading the way toward a future where AI helps rather than harms us. Their combination of technical skills and ethical thinking will be crucial as we navigate the opportunities and challenges of our AI-powered future. Their work reminds us that we have a say in how AI develops – it’s something we can and must shape together.