Content Metadata for Social Media (e.g., Mastodon)

I’ve been thinking about how to make social media practical and usable for kids. Back when I was experimenting with app.net, I wrote a blog post about how it could be used safely with kids. Well, app.net died. But maybe Mastodon will succeed where it failed.

No social platform has yet built a decent set of kid-friendly tools, but Mastodon could. This blog post is a rewrite of my old one, edited to account for things that are different in the ‘fediverse’.

[legal note: The ideas expressed in this blog post are hereby, without limitation, donated to the public domain. All copyright, intellectual property right, and/or ownership rights are hereby explicitly waived. In other words: someone should take this idea and go make a business out of it, and don’t worry about paying me a cent.]

What We Need

To achieve a family-friendly social messaging experience, we need some new capabilities. These could be technologies that are turned on/off at the instance level. They could also be options that parents enable/disable on the clients their kids use.

Tagging of Accounts

It would be very cool if the platform allowed people to tag their own profiles (using platform-supported metadata). Originally, I contemplated three levels:

  • (E) for everyone: fit for anyone, young or old. By adopting this tag, you’d be saying “this account will always post things that are appropriate for everyone,” and you’d be agreeing to some Ts&Cs (described below).
  • (U) for unclassified: the account holder isn’t saying one way or the other. This is the default for all new accounts.
  • (M) for mature (i.e., grown-up): a self-tagging mechanism so that people can say “I’m giving you fair warning. I’m not kid-friendly.”

The idea is to give me the ability to signal to other people the nature of my account. That is, I’m saying “because I’ve elected to rate myself E, I’m telling the world that I will only use language, topics, pictures, etc. that are appropriate for everyone.”

You could also imagine a series of metadata tags, like “XXX” indicating “I talk explicitly about sex” and “!XXX” indicating “I claim that I never talk explicitly about sex.” So instead of just a single one-letter indicator, you would have a series of symbols. Heck, you could even use emoji.
Imagine seeing in my profile:
Account Info: [ 🚬 🔪 💊 ⚖️ ⚔️ 🛏 ], where the symbols mean things like “tobacco, violence, drugs, politics, bad language, sex” respectively. And I could put a ! in front of one to indicate that I *don’t* talk about that. So !⚔️ !🛏 would indicate “no bad language, no talk about sex.”
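
As a concrete sketch, here’s how that profile metadata might be modeled and parsed. Everything here (the AccountRating type, the ContentTag shape, parseTags) is hypothetical; Mastodon’s existing profile metadata fields could carry a string like this, but nothing understands it today.

    // Hypothetical model of the self-rating metadata described above.
    type AccountRating = "E" | "U" | "M"; // everyone / unclassified / mature

    // One topic symbol, plus whether the account disclaims it ("!").
    interface ContentTag {
      symbol: string;   // e.g. "🛏" for sex, "⚔️" for bad language
      negated: boolean; // true means "I never post about this"
    }

    // Parse a space-separated tag string such as "!⚔️ !🛏 💊".
    function parseTags(field: string): ContentTag[] {
      return field
        .trim()
        .split(/\s+/)
        .filter((t) => t.length > 0)
        .map((t) => ({
          symbol: t.startsWith("!") ? t.slice(1) : t,
          negated: t.startsWith("!"),
        }));
    }

    // parseTags("!⚔️ !🛏 💊") =>
    //   [ { symbol: "⚔️", negated: true },
    //     { symbol: "🛏", negated: true },
    //     { symbol: "💊", negated: false } ]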

Clients that support filtering

We need a version of a Mastodon instance with a bunch of parental controls enabled. This would allow parents, teachers, and schools to control the accounts that can be followed and the instances that are federated. The instance could have a rule of “only E” or “not M”, etc. Likewise, there is the classic market for parent/kid-friendly clients that look for bad words or links to adult content anyway.
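
A minimal sketch of what such a rule might look like, assuming the hypothetical E/U/M ratings from above (the InstancePolicy type is likewise invented for illustration):

    type AccountRating = "E" | "U" | "M"; // the hypothetical self-rating
    type InstancePolicy = "only-E" | "not-M" | "open";

    // Decide whether an account's self-rating satisfies the instance policy.
    function accountAllowed(rating: AccountRating, policy: InstancePolicy): boolean {
      switch (policy) {
        case "only-E": return rating === "E"; // strictest: self-rated E only
        case "not-M":  return rating !== "M"; // E and unclassified pass
        case "open":   return true;           // no restriction
      }
    }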

A Moderation Mechanism

For instances that run this way, you’d want a quarantine mechanism that would look for bad words, trigger words, or violations of whatever policies are in place. A school’s instance, for example, would probably have pretty aggressive quarantining. A public “kid-friendly” instance might be a little less strict, but still moderated.

It would also be interesting to allow people to tag a post as being related to some topic (e.g., drugs, sex, violence) to call it to the attention of the moderators. False positives should be tracked as well: if a bunch of people troll an innocent post, marking it as a violation when it really isn’t, then those who falsely flagged it could be penalised by being muted for some time.
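
Here’s a back-of-the-envelope sketch of that penalty bookkeeping. All the names and thresholds are invented for illustration:

    // Hypothetical record of a user's flagging behaviour.
    interface Flagger {
      id: string;
      falseFlags: number; // flags that moderators rejected
      mutedUntil?: Date;  // set once falseFlags crosses the threshold
    }

    const FALSE_FLAG_LIMIT = 3; // arbitrary threshold for illustration
    const MUTE_DAYS = 7;

    // Called when a moderator decides a flagged post was actually fine.
    function recordFalseFlag(flagger: Flagger, now: Date = new Date()): void {
      flagger.falseFlags += 1;
      if (flagger.falseFlags >= FALSE_FLAG_LIMIT) {
        const muted = new Date(now);
        muted.setDate(muted.getDate() + MUTE_DAYS);
        flagger.mutedUntil = muted; // temporarily lose the ability to flag
        flagger.falseFlags = 0;     // reset the counter after the penalty
      }
    }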

A school / student / teacher business model

There needs to be a business model that lets someone make enough money to sustain this. Schools or kid-friendly sites would need a management interface that lets them enroll, unenroll, monitor, and group accounts. A parent interface would also be handy.

Goals

Given these things, we can create a kid-friendly social messaging system. Clients will restrict content (to the extent that we can get kids to use only approved clients). Parents don’t have many options for stopping their kids from signing up on non-kid-friendly instances, but the Mastodon architecture would allow a kid-friendly instance to be added to a parent’s whitelist of web sites. So if you were actively whitelisting safe web sites for your kids, you could add a trustworthy, kid-friendly Mastodon instance to that whitelist. If that instance’s admins were diligent about federating only with other kid-friendly instances, you would have a big ecosystem of kid-friendly social media.
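
Mastodon already ships a limited federation mode that works roughly this way; the sketch below just illustrates the check, not Mastodon’s actual implementation, and the domain names are made up.

    // Hypothetical allowlist of kid-friendly peer instances.
    const FEDERATION_ALLOWLIST = new Set([
      "kids.example.org",
      "school.example.edu",
    ]);

    // Only deliver to / accept activity from allowlisted domains.
    function shouldFederateWith(remoteDomain: string): boolean {
      return FEDERATION_ALLOWLIST.has(remoteDomain.toLowerCase());
    }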

Moderation Automation

I figure someone will create an automated moderation system that catches a fair amount of obvious profanity, abusive content, and potentially adult content. You’d have three levels of moderation:

  1. Automated triage: a program would read every message and look for the really obvious F-words, acronyms (WTF, STFU), or abusive language. I assume such technology exists and isn’t hard to apply. Likewise, it would have to check all URLs to make sure they landed at safe pages. (A sketch of this pass follows the list.)
  2. Semi-automated triage: anything the first level thought was questionable would go into a human quarantine queue to be judged by a person. Anything acceptable would be released into the stream.
  3. Complaint resolution: if someone manages to get something past all the filters, there would have to be a “flag this post” or similar complaint mechanism to push the post into the human triage queue.
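
A minimal sketch of that level-1 pass, assuming a word list and a domain allowlist (both tiny stand-ins for real, much larger lists; a production filter would also fetch and score the pages behind each URL):

    type TriageResult = "release" | "quarantine";

    const BLOCKED_WORDS = ["wtf", "stfu"]; // stand-ins for a real word list
    const SAFE_DOMAINS = new Set(["example-school.edu", "kids.example.org"]);

    function triage(post: string): TriageResult {
      const lowered = post.toLowerCase();

      // Obvious profanity / abusive acronyms -> human quarantine (level 2).
      if (BLOCKED_WORDS.some((w) => lowered.includes(w))) return "quarantine";

      // Any link to a domain outside the allowlist -> human quarantine.
      const urls = lowered.match(/https?:\/\/\S+/g) ?? [];
      for (const url of urls) {
        if (!SAFE_DOMAINS.has(new URL(url).hostname)) return "quarantine";
      }

      return "release"; // nothing suspicious; post goes into the stream
    }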

Example Usage Ideas

School Social Network

A school gets an instance and gives out social media accounts to the students, staff, and faculty. Teachers, administrators, students, and parents can all use it. By default, everyone must be rated E. Content is monitored and moderated (as above). Kids (if they’re older and have smartphones) can get mobile apps that hook into it, keep them connected to the network, and let them interact with their teachers, parents, etc.

Now, I expect that kids will still use services like WhatsApp, iMessage, and such to interact with their peers so that parents don’t snoop. But there’s a fair chance that lively discussion and interaction could happen on the school-oriented network. It lets teachers maintain two identities (their personal persona and their teacher persona) while still using modern social media.

I’m not sure, but there’s a reasonable chance that you could let students follow other Mastodon users who are also marked (E), even if those people are not at the school. Students can give out their school Mastodon ID to others, and the content that comes in to them will be filtered and monitored.

Tagging Individual Posts

It would be cool if, even though I’m rated U or M, I could make a post that was individually tagged E. I mean, I’m a grown-up, but I have kids. Some of my colleagues have kids. I might post something like “this museum has an awesome exhibit,” and I want it to show up in the kid-friendly clients. So I tag the post E. Maybe that goes straight to human moderation because my account isn’t tagged E. Maybe it doesn’t. It might be handy, though.

You could also accumulate some notion of “karma”: a measure of how often you tag your posts correctly. That is, if I tag a post E but it isn’t, I lose karma.
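
A sketch of how that score might be computed; the names and the threshold are invented for illustration:

    // Hypothetical tally of how an account's E tags held up under moderation.
    interface TaggerKarma {
      correct: number; // E-tagged posts that moderators left alone
      wrong: number;   // E-tagged posts that moderators reclassified
    }

    // Karma as the fraction of E-tagged posts that were tagged honestly.
    function karma(k: TaggerKarma): number {
      const total = k.correct + k.wrong;
      return total === 0 ? 1 : k.correct / total; // new accounts start clean
    }

    // An instance could route posts from low-karma accounts to human review.
    function needsHumanReview(k: TaggerKarma, threshold = 0.9): boolean {
      return karma(k) < threshold;
    }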

Kid-Friendly Mastodon Clients

There’s no reason the whole thing has to be done by schools or oriented around schools. Anyone who wants to create a kid-friendly chat service can build and run a Mastodon instance that puts in the parental controls and the rating metadata.

Conclusion

This is an area where Mastodon’s infrastructure is brilliantly suited to building something that simply cannot be built on top of any other social infrastructure. An entrepreneurial developer could cook this up in a matter of weeks and would be completely supported by the nature of the platform.
