
Post to grade users with ‘trust metric’

A screen capture of the social media website Post. (Graphic by The Desk)

Startup social media platform Post says it will introduce a new feature that will grade users based on their contributions and their behavior.

The grade, which Post is calling a “trust metric,” is intended to reward positive behavior and contributions on the social media platform, while limiting the visibility of so-called “rule breakers” who violate the website’s policies.

“Our goal is to allow most people to publish most content to anyone interested, but limit the ability for anyone to publish anything to everyone,” Noam Bardin, the founder of Post, wrote in a note on the platform. “To achieve this, we plan to introduce several user levels and an activity-based trust metric that determines the appropriate level.”

By default, users on Post will have a trust level of 100 percent, which will allow their content to be visible to all of their followers. Bardin said 90 percent of the accounts currently on Post are also expected to have a 100 percent grade when the feature launches.

Users who break any of Post’s acceptable use, behavior or content policies will see their rating fall into a “no post” category. While in the Post equivalent of social media purgatory, users will be able to participate on the platform — including commenting on other people’s posts — but they will not be able to post new content “until they earn their way back with good behavior.” Aside from commenting on other content, it wasn’t entirely clear how Post users whose grades fall can earn their way back into Post’s good graces.

Those who consistently break Post’s rules will be suspended, which Post said means “they cannot access [their] account and their content is removed from all feeds.”

Bardin said Post will not disclose the specifics of its trust metric system “because we know people will try to game the system,” but did say behavior like breaking content rules, attacking users, promoting hate speech, spreading misinformation and other illegal activities would lower a user’s score.

Users can also have their grades lowered if other users block them, Bardin said, but they can increase their score by being followed by a significant number of accounts with high scores of their own.

To that end, Bardin said Post’s users will serve as its community moderation team, who will have the ability to raise or lower the trust metric of other participants by following or blocking them.

“These metrics will help us identify who to trust and on which subjects,” Bardin wrote. “Imagine a world where users who, over months or years, have proven themselves to be positive, rule-abiding Post citizens, having greater weight in the algorithmic decisions than a new account that has never posted nor added an image to profile. We want to amplify the good actors and diminish the reach of bad actors, at scale.”

But that could be a complicated endeavor, because content that one person considers “problematic,” “abusive” or “misinformation” could earn another user a lower trust rating based merely on the perception of a community moderator who is given wide latitude to determine what is acceptable and what is not.

Social media platforms have struggled with this in the past. At the start of the coronavirus pandemic two years ago, Twitter, Facebook and other websites suspended users who floated a theory that the virus was created in a lab, when the acceptable theory at the time was that the virus first spread to a human in a meat market.

Those platforms moved away from the heavy-handed tactic of banning users who floated the so-called “lab leak” theory after the World Health Organization said it was open to evidence supporting it. Several months later, the debate swung in the other direction when a different panel of experts said there was “overwhelming” evidence that the novel coronavirus first spread to humans from an animal.

The two-year debate — which has still not been settled — has aggrieved social media users who found themselves banned from various platforms after content moderators enacted policies that were intended to curb the spread of misinformation connected with the coronavirus. Those policies were well-intentioned, but platforms ultimately had to give a degree of latitude to news organizations and scientific journals that acknowledged there were many theories — including the lab leak — worth exploring and discussing.

Post could repeat the same mistake with another well-intentioned moderation framework — but to a much worse degree, because it expects users to place a high value on the trust score. To his credit, Bardin seems to know that the trust metric won’t be perfect out of the gate, and appears open to retooling it based on feedback and its real-world deployment.

“We hope you appreciate what we are trying to accomplish and we will be listening for feedback as we know we will get some things wrong,” Bardin said.



About the Author:

Matthew Keys

Matthew Keys is a nationally recognized, award-winning journalist who has covered the business of media, technology, radio and television for more than 11 years. He is the publisher of The Desk and contributes to Know Techie, Digital Content Next and StreamTV Insider. He previously worked for Thomson Reuters, the Walt Disney Company, McNaughton Newspapers and Tribune Broadcasting.