A humanoid robot from the Beijing Humanoid Robot Innovation Centre posed a question to former New Zealand Prime Minister Jenny Shipley, one of the panellists.
“Madam Shipley, I’d like to ask you one question … as a robot that genuinely wants to serve humanity, what should we do to earn the trust of ordinary people?” it asked.
In response, Shipley said trust would depend on reliability, clear boundaries and an understanding of human needs.
“I expect you as a robot to convince me that you are reliable, adaptable and responsible,” she said.
“I’m looking to you with confidence for functional support, but I want my humanity to be able to reserve the space in a respectful way.”
She added that robots should not overstep into areas such as emotional judgment.
“I don’t expect you to comfort me … I don’t think that is your responsibility,” she said.
WHO’S RESPONSIBLE?
Focus also fell on responsibility and governance.
Sam Daws, senior adviser at the Oxford Martin AI Governance Initiative, said policymakers must balance innovation with safeguards, including managing labour displacement, data use and safety risks.
He cited Singapore as one example of how governance frameworks are evolving.
“Singapore’s governance framework on agentic AI will be useful as we anticipate the effect of a million ‘lobsters’ beginning to interact in the world,” Daws said, referring to OpenClaw, the open-source AI agent that has taken China and much of the world by storm.
Singapore unveiled its Model AI Governance Framework for Agentic AI in January this year at the World Economic Forum in Davos, guiding organisations on deploying AI agents safely, with a focus on risk management, human oversight and accountability.