It is the bane of every security researcher: no matter how sophisticated the tools built to fight harmful behaviour on a given platform, attackers will adapt, up their game, and find new ways to work around the mechanism.

In an effort to get ahead of the scammers, Facebook is trying a new approach: unleashing an army of bots, tasked with attempting harmful actions, on a version of the platform – so that the Facebook-controlled bots can discover the loopholes before real scammers get to them.

The technology will operate in an alternative version of Facebook, dubbed WW – a name chosen to reflect that the system is a scaled-down version of the World Wide Web (WWW).

Unlike traditional simulations, in which simulated bots act on a simulated platform, WW is built on Facebook’s real-world software platform.

The company’s engineering team developed a method called Web-Enabled Simulation (WES), which consists of carrying out simulations on real web infrastructures, rather than artificial ones, to better reflect real user interactions and social behaviour.

Using WES, Facebook’s engineers built WW – a parallel version of the social media platform, complete with Messenger, profiles, pages, and inopportune friend requests, but exclusively reserved for bots.

Presenting the technology at a webinar, Mark Harman, a research scientist at Facebook, said: “The simulations happen on the actual tens of millions of lines of code that make up the Facebook infrastructure. The bots use all of the same software and tools that a user would be using on the platform.”

“It means the simulation results are much closer to the reality of what happens on the platform, and to the many subtleties where harmful behaviour can occur,” he added.

The bots, therefore, operate in an environment very close to the one real users experience, but at a safe distance from those users: the bots’ actions are carefully constrained, and the engineers have set up both a privacy layer and an interaction-mechanism layer to keep the two worlds separate.
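Facebook has not published how these layers are implemented, but the idea can be illustrated with a minimal Python sketch. Everything here is an assumption: InteractionGate, the platform_api object, and its methods are hypothetical stand-ins for whatever WW actually uses. The point is simply that bots issue the same calls a real user would, while a mediation layer refuses any action that does not have a simulated account at both ends.

```python
class InteractionGate:
    """Hypothetical mediation layer between WW bots and the real platform.

    Bots call the same endpoints a user would, but every action is checked
    so that it can only ever touch other bots, never real users.
    """

    def __init__(self, platform_api, bot_ids):
        self._api = platform_api          # the real platform interface (assumed)
        self._bot_ids = set(bot_ids)      # accounts that live inside WW

    def send_friend_request(self, sender_id, target_id):
        self._check(sender_id, target_id)
        return self._api.send_friend_request(sender_id, target_id)

    def send_message(self, sender_id, target_id, text):
        self._check(sender_id, target_id)
        return self._api.send_message(sender_id, target_id, text)

    def _check(self, sender_id, target_id):
        # Both ends of every interaction must be WW bots: the privacy layer's
        # guarantee is that no simulated action can ever reach a real user.
        if sender_id not in self._bot_ids or target_id not in self._bot_ids:
            raise PermissionError("WW actions may only target other bots")
```

Funnelling every bot action through a single choke point like this is what would make the separation auditable: there is one place to verify the guarantee, rather than one per bot behaviour.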

Supervised learning is also part of the mix: using anonymized data, the researchers identified patterns of real-user behaviour and trained the bots to imitate them.
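Facebook has not detailed this training pipeline, but as a loose illustration of behaviour imitation, the sketch below – with entirely fabricated action names and log data – learns the empirical next-action distribution from observed sequences, a first-order Markov model far simpler than anything WW is likely to use, and then samples a bot trajectory from it.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for anonymized behaviour logs: each entry is one user's
# sequence of platform actions (fabricated example data).
logs = [
    ["login", "scroll_feed", "like", "send_message", "logout"],
    ["login", "scroll_feed", "friend_request", "scroll_feed", "logout"],
    ["login", "send_message", "send_message", "logout"],
]

# "Training": count how often each action follows each other action.
transitions = defaultdict(Counter)
for seq in logs:
    for prev, nxt in zip(seq, seq[1:]):
        transitions[prev][nxt] += 1

def next_action(prev):
    """Sample the bot's next action from the learned distribution."""
    actions, weights = zip(*transitions[prev].items())
    return random.choices(actions, weights=weights)[0]

# Roll out a short imitation trajectory for one bot.
action = "login"
trace = [action]
while action != "logout" and len(trace) < 10:
    action = next_action(action)
    trace.append(action)
print(trace)
```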

“There is a strong relationship with AI-assisted gameplay,” said Harman. “Simulated game players are a little bit like our bots. We are automating the process of making the game ever-more challenging because we want to make it harder for potentially sophisticated and well-skilled bad actors.”

From an engineering perspective, the proposal is ambitious, and Harman stressed that the project is still in a research phase. He hopes that it is only a matter of months before the WW initiative comes to life, but admitted that further research is needed in fields such as machine learning, graph theory, and AI-assisted gameplay.

If the project comes about at scale, however, the research team anticipates a significant boost to Facebook’s defences in the war against harmful behaviour.

“The bots, in theory, can do things we haven’t seen before thanks to reinforcement learning,” said Harman. “That’s something we want because it will let us get ahead of the bad behaviour, rather than catch up with it.”
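As a deliberately tiny illustration of that idea, the sketch below uses an epsilon-greedy bandit – a much-simplified relative of reinforcement learning – in which a hypothetical scam bot is rewarded whenever a toy detector fails to flag its message. The detector, the candidate strategies, and the reward are all invented for the example; in WW the environment would be the real platform and its real defences.

```python
import random

# Hypothetical toy detector: flags messages containing a known scam phrase.
def detector_flags(message):
    return "win a prize" in message

# Candidate scam strategies the bot can try (all invented).
strategies = [
    "win a prize, click here",
    "you have won, click here",
    "hello friend, check this link",
]

# Epsilon-greedy bandit: the bot's reward is evading detection.
q = {s: 0.0 for s in strategies}      # estimated value of each strategy
counts = {s: 0 for s in strategies}   # times each strategy was tried

for _ in range(500):
    if random.random() < 0.1:                   # explore
        s = random.choice(strategies)
    else:                                       # exploit the best so far
        s = max(q, key=q.get)
    reward = 0.0 if detector_flags(s) else 1.0  # 1 = got past the defences
    counts[s] += 1
    q[s] += (reward - q[s]) / counts[s]         # incremental mean update

# Strategies the bot learned evade the (toy) detector score highest;
# surfacing them tells engineers where the defences need hardening.
print(sorted(q.items(), key=lambda kv: -kv[1]))
```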

What’s more, using the WES method, WW could be replicated for any large-scale web system in which a community’s behaviour can be observed. It could go a long way, therefore, towards easing the moderation burden for many organizations.

ZDNet
