March 9, 2026
When Robots Become Real
For children growing up in Western Canada in the ’80s, encounters with Expo Ernie, the beloved robotic mascot of Expo ’86 in Vancouver, sparked the imagination of many young visitors.
For kids raised on comic books and Saturday morning cartoons, he wasn’t just a mascot; he was proof that robots were real. Watching the mechanical spaceman inspired awe and curiosity, as if somewhere beneath it all there must be circuits, intelligence and something close to magic.
There wasn’t, of course. Expo Ernie was theatre. Performance. A remote-controlled machine with an operator and a voice box.
But the mere presence of something labelled a “robot” shifted the imagination. It blurred the line between fiction and possibility.
Decades later, that line is no longer imaginary.
Dr. Marie Charbonneau, assistant professor in the Schulich School of Engineering, researches how humanoid robots can interact safely and intuitively with people.
Giving AI a Body
In a robotics lab, a humanoid robot learns to dance with a human partner.
It does not move with cinematic precision. It is not flawless. But it is learning — learning how to balance, how to respond to touch, how to move safely in close proximity to a human partner.
For Dr. Marie Charbonneau, PhD, assistant professor in the Schulich School of Engineering, this is not spectacle. It is research into something deceptively simple: how to design robots that can interact with people safely, comfortably and intuitively.
“At a very high level, what I’m trying to do is develop robots that can interact with people not just socially, but physically in a way that’s safe, and where people know what’s going on,” she explains.
Charbonneau’s work focuses on whole-body control: regulating the forces between a robot and its environment so that movement is soft, compliant and predictable. A robot that can be guided by a human hand without resisting dangerously. A robot that might one day assist with physiotherapy exercises, providing resistance when needed, yielding when appropriate.
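For readers curious what “soft, compliant and predictable” can look like in code, the short Python sketch below shows a minimal, hypothetical impedance controller, the textbook building block behind this kind of yielding behaviour. The virtual mass, damping, stiffness and simulated push are invented for illustration and are not drawn from Charbonneau’s lab or her actual whole-body controllers.

# Illustrative sketch only: a one-dimensional impedance controller.
# All gains and the simulated "push" are made-up values, not the
# researcher's actual whole-body control system.

M, D, K = 2.0, 20.0, 50.0      # virtual mass (kg), damping (N*s/m), stiffness (N/m)
x_des = 0.0                    # desired resting position (m)
x, v = 0.0, 0.0                # current position (m) and velocity (m/s)
dt = 0.001                     # control period (s)

def external_force(t):
    """A gentle, hypothetical push from a human hand between t = 0.5 s and 1.5 s."""
    return 10.0 if 0.5 <= t <= 1.5 else 0.0

for step in range(3000):
    t = step * dt
    f_ext = external_force(t)
    # Impedance law: behave like a mass-spring-damper anchored at x_des,
    # so the joint yields while pushed and settles back once released.
    a = (f_ext - D * v - K * (x - x_des)) / M
    v += a * dt
    x += v * dt
    if step % 500 == 0:
        print(f"t={t:.2f}s  push={f_ext:4.1f}N  position={x:+.3f}m")

Run as a plain Python script, the printout shows the virtual joint drifting away from its rest position while the push is applied and returning once it stops, which is the yielding-then-settling behaviour described above.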
Safety, she notes, has layers.
There is physical safety: ensuring a robot does not lose balance, react unpredictably or harm someone through force or malfunction. But there is also psychological safety: how people perceive a robot, how comfortable they feel approaching it and whether that perception aligns with reality.
“You don’t want the robot to psychologically scar people with weird behaviours,” Charbonneau says.
As humanoid systems become more sophisticated, those questions move beyond mechanics and into ethics. What does it mean to design a machine that shares physical space with us? Can something like “respect” be engineered?
Charbonneau is careful.
“Where I’m at right now is that robots are not people, and we should not blur the line as much as possible,” she says. “That is my opinion.”
At the same time, Charbonneau sees potential, from assistive technologies that improve mobility for children with cerebral palsy to systems that could support aging populations. The challenge, she suggests, is not whether robots will become more capable, but how intentionally they are integrated into society.
If Charbonneau’s research asks how robots can move safely through the world, another question follows just as quickly: how those systems think, decide and interact with society.
Dr. Marina L. Gavrilova, PhD’99, professor in the Faculty of Science and Research Excellence Chair in Trustworthy and Explainable AI, studies how artificial intelligence systems can be made transparent, accountable and reliable.
Inside the AI Mind
Popular culture has long imagined artificial intelligence in apocalyptic terms, from HAL 9000 calmly overriding human commands to the self-aware machines of Skynet in The Terminator. Those narratives continue to linger in the public imagination. For researchers at the University of Calgary, however, the more immediate concerns are less cinematic and more structural: transparency, accountability, bias and governance.
A professor in the Faculty of Science and Research Excellence Chair in Trustworthy and Explainable AI, Dr. Marina Gavrilova, PhD’99, has worked in AI and machine learning for more than two decades. In her view, the current moment represents a fundamental shift.
Today, AI systems are no longer confined to isolated technical applications. They draft documents, analyze medical scans, moderate conversations and increasingly interact directly with the public. The transition from background tool to visible participant changes the ethical equation.
“Ten years ago, AI was considered a tool,” Gavrilova says. “Now AI agents are becoming integrated into society.”
That integration brings new urgency to concepts such as explainability and bias mitigation. As AI systems are increasingly used in high-stakes environments, including health care, finance and public services, understanding how and why a system reaches a decision becomes critical.
“If an AI system makes a consequential decision about someone, we need to understand how that decision was made,” Gavrilova explains. “What data was used? Was there bias? Can we trust the outcome?”
In recent years, “trustworthy AI” has become a central focus of research, emphasizing transparency, accountability and fairness in automated systems. But as AI systems become more autonomous and potentially embodied in physical robots, the stakes grow significantly.
“When AI becomes embodied, safety and trust become paramount,” Gavrilova says.
Those concerns extend beyond individual machines to the systems that surround them. Questions of cybersecurity, data privacy and authenticity, particularly in an era of increasingly sophisticated synthetic media, are no longer abstract technical issues. They are governance challenges.
Dr. G. Kent Fellows, BA’08, MA’10, PhD’15, assistant professor in the School of Public Policy, examines the economic and policy implications of emerging technologies such as artificial intelligence.
Who Is in Charge?
Are regulatory frameworks keeping pace? Who is responsible when an autonomous system fails? How should policymakers balance innovation with public trust?
For Dr. G. Kent Fellows, BA’08, MA’10, PhD’15, assistant professor in the School of Public Policy, the implications of AI and humanoid robotics extend beyond the lab and into institutions.
From a socio-economic policy perspective, he argues the challenge is not always rewriting laws for new technologies, but ensuring existing frameworks are applied effectively.
“In many cases, the issue is less about updating regulatory frameworks and more about adapting how we apply and enforce the ones we already have,” Fellows notes.
New technologies have always produced disruption during periods of transition, reshaping how work is distributed and how organizations operate, and often creating friction before new norms emerge.
The same may hold true for generative AI and autonomous systems. In the workplace, Fellows suggests the most successful organizations will likely be those that learn to align new technologies with the comparative advantages of their existing workforce.
He also notes that these shifts can reinforce broader economic patterns.
“There is a huge risk of concentration of economic power,” Fellows says, pointing to trends already visible in the digital economy, where large technology firms dominate many of the revenue streams associated with the internet.
When it comes to accountability, Fellows argues that responsibility should sit with those best positioned to mitigate harm, most likely the developers and manufacturers. At the same time, distinguishing between misuse of a tool and harm caused by the technology itself remains an open policy question.
As with many technological transitions, the challenge is not only anticipating innovation, but determining how responsibility, regulation and economic impact will be distributed across society.
Taken together, these perspectives reflect a shift in how AI is being discussed. Rather than focusing on the extremes of utopian efficiency or dystopian loss of control, the conversation is increasingly centred on responsibility.
Decades after a child might have first imagined what robots could be, the question is no longer whether they will exist, but how thoughtfully we choose to build and govern them.
The technology is advancing. The question now is whether our ethical, economic and regulatory frameworks can evolve alongside it.
Join the Conversation
Explore these questions and more at Humanoid Robots and How They’ll Transform Society — Better or Worse, a thought-provoking virtual event on April 2 presented by UCalgary Alumni. Hear from experts in engineering, AI and public policy as they unpack the ethical, social and governance dimensions of embodied AI systems.