New Zealand, which has a population of 4.8 million people, suffers from a “small market problem,” Ms. Caplan said, comparing it to other countries like Canada that have also tried to bolster regulation of social media platforms.
“The companies, depending on what the regulation is, might just pull out rather than comply,” she said, referring to Google’s decision to ban political advertising ahead of the Canadian elections after new transparency laws were introduced.
But France, a much bigger market, has already taken action on its own. It announced in November that it would embed regulators at Facebook for the first six months of 2019 to determine whether its processes for removing hate-fueled content could be improved.
In May, French lawmakers will debate an update to the country’s online hate speech law in an attempt to force social media platforms to take more responsibility for removing hateful content. Under the legislation, the companies could be fined up to 4 percent of their global revenues if they fail to withdraw extremist content within 24 hours.
Ahead of this debate, government officials have called on platforms like Facebook, Twitter, YouTube and Instagram to act against extremist content. “You are too slow,” France’s junior minister for gender equality, Marlène Schiappa, wrote in a tweet in November. “Your responsibility is to delete content! Stop being accomplices.”
Australia, which historically has been more open to limiting speech than the United States, has also taken strong steps, recently approving regulations that will impose fines on social media platforms that fail to swiftly remove violent content. That change was vehemently opposed by the tech industry, and there has been no suggestion from Ms. Ardern that she plans to enact similar regulations.
She also appears sensitive to worries about infringing freedom of speech, saying on Wednesday that the pledge she and Mr. Macron were preparing would refer specifically to terrorist activity.