Mapping Social Media Behaviors with BLOC Framework
Computer Science · September 19, 2023, 15:35 UTC

Alexander Nwala, a professor of data science, has recently published a paper introducing BLOC, a universal language framework for modeling social media behaviors. Through this framework, researchers can detect automated and coordinated behaviors that arise from malicious actors online. Ian MacDonald '25, technical director of the W&M DisinfoLab, is developing a BLOC-based website to be accessed by the public.
Not everyone you disagree with on social media is a bot, but various forms of social media manipulation are indeed used to spread false narratives, influence democratic processes and affect stock prices. In 2019, the global cost of bad actors on the internet was conservatively estimated at $78 billion. Meanwhile, misinformation strategies have kept evolving, and detecting them has so far been a reactive affair, with malicious actors always one step ahead.
Alexander Nwala, a William & Mary assistant professor of data science, aims to address these forms of abuse proactively. With colleagues at the Indiana University Observatory on Social Media, he has recently published a paper in EPJ Data Science to introduce BLOC, a universal language framework for modeling social media behaviors. "The main idea behind this framework is not to target a specific behavior, but instead provide a language that can describe behaviors," said Nwala.
Automated bots mimicking human actions have become more sophisticated over time. Inauthentic coordinated behavior represents another common deception, manifested through actions that may not look suspicious at the individual account level, but are actually part of a strategy involving multiple accounts. However, not all automated or coordinated behavior is necessarily malicious. BLOC does not classify "good" or "bad" activities but gives researchers a language to describe social media behaviors, based on which potentially malicious actions can be more easily identified.
A user-friendly tool to investigate suspicious account behavior is in the works at William & Mary. Ian MacDonald '25, technical director of the W&M undergraduate-led DisinfoLab, is building a BLOC-based website that would be accessed by researchers, journalists and the general public.

Checking for automation and coordination

The process, Nwala explained, starts with sampling posts from a given social media account within a specific timeframe and encoding information using specific alphabets.
BLOC, which stands for "Behavioral Languages for Online Characterization," relies on action and content alphabets to represent user behavior in a way that can be easily adapted to different social media platforms. For instance, a string like "TpπR" encodes a sequence of four user actions: a published post, a reply to a non-friend, a reply to themselves and a repost of a friend's message.
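The action-alphabet encoding described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual BLOC library API: the symbol table reflects only the four symbols mentioned in the article (T, p, π, R), and the function and variable names are hypothetical.

```python
# Hypothetical sketch of BLOC action-alphabet encoding.
# Symbols follow the article's example: T = post, p = reply to a
# non-friend, π = reply to self, R = repost of a friend's message.
ACTION_SYMBOLS = {
    ("post", None): "T",
    ("reply", "non-friend"): "p",
    ("reply", "self"): "π",
    ("repost", "friend"): "R",
}

def encode_actions(actions):
    """Map a timeline of (action, target) pairs to a BLOC-style action string."""
    return "".join(ACTION_SYMBOLS[(act, target)] for act, target in actions)

# The four-action sequence from the article:
timeline = [("post", None), ("reply", "non-friend"),
            ("reply", "self"), ("repost", "friend")]
print(encode_actions(timeline))  # → TpπR
```

In the real framework the symbol inventory is much richer and platform-adaptable; the point here is only that a user's timeline becomes a compact string over a small alphabet.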
Using the content alphabet, the same set of actions can be characterized as "(t)(EEH)(UM)(m)" if the user's posts respectively contain text; two images and a hashtag; a link and a mention of a friend; and a mention of a non-friend. The BLOC strings obtained are then tokenized into words, which can represent different behaviors. "Once we have these words, we build what we call vectors, mathematical representations of these words," said Nwala.
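The tokenization-and-counting step can be sketched as follows. This is a simplified assumption about how BLOC words are delimited: here each parenthesized group in a content string is treated as one word, whereas the real framework has its own tokenization rules.

```python
import re
from collections import Counter

def tokenize(bloc_string):
    # Simplified tokenizer (an assumption for illustration): treat each
    # parenthesized group in a content string as one BLOC "word".
    return re.findall(r"\(([^)]*)\)", bloc_string)

def vectorize(bloc_string):
    # Build the vector Nwala describes: each BLOC word mapped to the
    # number of times the user expressed that word (behavior).
    return Counter(tokenize(bloc_string))

print(vectorize("(t)(EEH)(UM)(m)"))
# Counter({'t': 1, 'EEH': 1, 'UM': 1, 'm': 1})
```

A count vector like this is the "mathematical representation" of a user's behavior that the quote refers to: the same word appearing many times signals a repeated behavior.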
"So we'll have various BLOC words and then the number of times a user expressed the word or behavior."Once vectors are obtained, data is run through a machine learning algorithm trained to identify patterns distinguishing between different classes of users (e.g., machines and human's).