If we managed to create a decentralized social media ecosystem, how would we go about identifying the hardest problems to tackle, and what would be our greatest achievement if we succeeded? If this seems like an odd question, bear with me, dear reader: many technologists are motivated by great technical challenges, and this is an attempt to channel that energy into social problems.
Many people whom I would consider relatively like-minded would say that things like censorship resistance and anonymity are the absolute requirements, and thus the crowning achievements. I do think they are important, but only within a broader social context that takes into account a wide variety of social problems. We have to explore the borders to understand where these properties may fail to bring social benefit, and we have to consider other options in those cases.
I think it is very important to think about future, decentralized social media not as an application, not as another Facebook, but as an ecosystem, where social interaction is a common ingredient of many interconnected applications contributed by many different actors.
In an earlier post that I wrote in Norwegian, I mentioned the revenge porn problem, where media is put into a sexual context and distributed without the depicted person’s consent. Another problem in the same space is “grooming”, where a person is manipulated into a situation of sexual abuse.
Grooming often follows a pattern: an older person contacts a minor, lying about their age and/or gender, and has the minor send them relatively innocuous pictures under some false pretense. The perpetrator then threatens to expose those pictures to classmates, parents, or others, putting the minor into a coercive situation where abuse and coercion can only escalate.
It is clear that one should never enter such a situation with an anonymous peer. However, it is equally clear that one should not enter such a situation with a peer who knows your complete identity either, as that can result in more direct forms of sexual abuse. Grooming is a problem because no reasonable and commonly used middle ground exists, and people therefore resort to unsafe channels. Most of these cases could probably be prevented if people had a strong online identity that could be used to pseudonymize them through selective attribute disclosure and verifiable claims. With the former, the two peers can disclose only relevant and non-compromising information, for example age and gender (and even that can be problematic, so technology should also be developed to help ensure that their full identity cannot be compromised). With verifiable claims, both peers can verify that the information disclosed by the other is accurate. The social media platform should empower them to enter a context where they have this kind of pseudonymity and get that extra security. If, for example, a teen enters a dating site, they will use their strong, verified online identity, but the security system of the dating site will see to it that nothing that could compromise that identity is exchanged unintentionally. If the peers eventually choose to exchange further details or meet in real life, they should be able to indicate this to the dating site, and if the meeting results in abuse, that information can be passed to the authorities.
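To make the selective disclosure idea concrete, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the credential format, the attribute names, and the use of an HMAC as a stand-in for the issuer’s public-key signature. A real system would build on a standard such as W3C Verifiable Credentials, with real signatures and possibly zero-knowledge proofs.

```python
# A sketch of selective attribute disclosure with verifiable claims.
# ASSUMPTIONS: the credential format, the attribute names, and the use
# of HMAC as a stand-in for the issuer's public-key signature are all
# illustrative, not any real standard's API.
import hashlib
import hmac
import json
import os

ISSUER_KEY = os.urandom(32)  # stand-in for the issuer's signing key


def commit(attr: str, value: str, salt: bytes) -> str:
    """Salted hash commitment to a single attribute."""
    return hashlib.sha256(salt + f"{attr}={value}".encode()).hexdigest()


def issue(attributes: dict) -> dict:
    """Issuer: commit to every attribute and sign the commitments."""
    salts = {a: os.urandom(16) for a in attributes}
    commitments = {a: commit(a, v, salts[a]) for a, v in attributes.items()}
    signature = hmac.new(
        ISSUER_KEY,
        json.dumps(commitments, sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    return {"attributes": attributes, "salts": salts,
            "commitments": commitments, "signature": signature}


def disclose(credential: dict, reveal: list) -> dict:
    """Holder: reveal only the chosen attributes and their salts."""
    return {"commitments": credential["commitments"],
            "signature": credential["signature"],
            "revealed": {a: (credential["attributes"][a],
                             credential["salts"][a]) for a in reveal}}


def verify(presentation: dict) -> bool:
    """Verifier: check the issuer's signature, then each revealed value."""
    expected = hmac.new(
        ISSUER_KEY,
        json.dumps(presentation["commitments"], sort_keys=True).encode(),
        hashlib.sha256,
    ).hexdigest()
    if not hmac.compare_digest(expected, presentation["signature"]):
        return False
    return all(commit(a, v, salt) == presentation["commitments"][a]
               for a, (v, salt) in presentation["revealed"].items())


cred = issue({"name": "Kari Nordmann", "age_over_16": "true", "gender": "F"})
# Disclose age bracket and gender to the dating site, but not the name.
print(verify(disclose(cred, ["age_over_16", "gender"])))  # True
```

The design point is that the verifier learns only that the revealed attributes are vouched for by the issuer; the undisclosed attributes stay hidden behind their salted commitments.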
“Revenge porn” is a much harder problem. The name itself is problematic: artistic or simple nudes, indeed almost anything, may have had no sexual intent at all, yet be twisted into a sexual context by a perpetrator. Moreover, the distribution of such media may not be motivated by revenge, but can still be felt as abuse by the person depicted. This underlines that it is never OK to blame the victim, and that the problem is much broader than it first appears. A fairly authoritarian approach may be advocated: one may argue that people cannot retain full control of their own devices, so that authorities have the option to delete offending material. Censorship may be advocated, so that material will not propagate. Longer prison sentences may be advocated. I am opposed to all of these solutions, as they are simplistic and fail to address other valid concerns. Taking away people’s control of their own devices contributes to alienation and distrust in the social relevance of technology, something we need to rely on for the solution to the grooming problem above, as well as for many other problems. I am also opposed to longer prison sentences; prison is often a breeding ground for more crime and should be reserved for extreme cases.
We should be able to fix this (in the sense that the problem is marginalized and made socially unacceptable) without resorting to such authoritarian measures. It takes changing culture, and while there is no technological quick fix for changing culture, technology and design can contribute. The Cyber Civil Rights Initiative is a campaign to end non-consensual porn, and has published a research report with many interesting findings. The group advocates some of the more authoritarian solutions, and while I am sympathetic to the need for legislation, I believe this should be considered a privacy problem and dealt with in generic privacy legislation, as I believe is the case in Europe. Without having personal experience, I suspect that privacy violations where private media are stolen and exposed, even without any sexual undertones, can be felt as much of a violation as “revenge porn”, and they should therefore be dealt with similarly.
Page 22 of the report summarizes, in the perpetrators’ own words, what kinds of sanctions would have stopped them. It is understandable that legislative measures are put forward, as those come out as the most significant. I nevertheless think it is important to note that 42% said they wouldn’t have shared abusive media “if I had taken more time to think about what I was doing”, and 40% “if I knew how much it would hurt the person”. These are very important numbers, and they can form the basis for design and cultural change. It is now possible to detect pictures with certain content with relatively high probability and make the poster think more carefully. Make them answer some questions. We could build technology that asks “do you know how much this could hurt?”, and then a culture where friends ask the same. This becomes even easier if the victim is identified, as is not uncommon. In that case, the media could be tagged as “not OK to distribute”, and friends of the victim could appeal to the perpetrator’s conscience and participate in stemming the distribution. Laws are for the 20% who said that nothing would have stopped them, and building a culture should also shrink this number significantly. Finally, 31% wouldn’t have posted the media “if I had to reveal my true identity (full name)”. Even without a full name, a pseudonymized identity, like the one discussed above, could act as a significant deterrent, and would also help propagate warnings that further distribution would be inappropriate and/or illegal.
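As a sketch of what such a pre-posting friction check could look like, consider the following. The classifier, the do-not-distribute registry, and all names are hypothetical placeholders; a real client would run an on-device image model and consult a shared registry of tags.

```python
# A sketch of a pre-posting "friction" check. ASSUMPTIONS: the
# classifier is a stub and the do-not-distribute registry is a
# hypothetical local stand-in, purely for illustration.
from dataclasses import dataclass


@dataclass
class Media:
    data: bytes
    depicted_person: str | None = None  # identified subject, if known


# Media that the depicted person (or their friends) have tagged as
# "not OK to distribute" (hypothetical registry).
DO_NOT_DISTRIBUTE: set[bytes] = set()


def looks_intimate(media: Media) -> float:
    """Stub for a classifier estimating P(intimate content)."""
    return 0.9  # placeholder score; a real model would inspect the image


def pre_post_check(media: Media, ask) -> bool:
    """Return True if the post should go ahead."""
    if media.data in DO_NOT_DISTRIBUTE:
        who = media.depicted_person or "the depicted person"
        print(f"This has been marked by {who} as not OK to distribute.")
        return False
    if looks_intimate(media) > 0.8:
        # Per the report, a pause and a direct question would have
        # stopped roughly 40% of perpetrators.
        return ask("Do you know how much sharing this could hurt the "
                   "person depicted? Post anyway?")
    return True


# Usage: wire the check into the client's posting flow.
media = Media(data=b"...", depicted_person="a classmate")
if pre_post_check(media, ask=lambda q: input(q + " [y/N] ") == "y"):
    print("posting...")
else:
    print("post cancelled")
```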
This makes me hopeful that this is a problem where a well-designed, decentralized, and rather censorship-resistant social media ecosystem could have a meaningful impact.
Another reason the system has to be censorship resistant is that the same ecosystem has to be universal: for example, it must also function as a platform for public debate under authoritarian regimes. I would like to point out the work of Hossein Derakhshan, who initiated a large blogosphere in Iran that contributed to a more open public debate. Derakhshan was arrested for his blogging in 2008 and released in 2014. He wrote a retrospective analysis of the developments in the intervening years, called “The Web We Have to Save”, that is important well outside of Iran. I have great respect and admiration for the work that Derakhshan has done, and it underscores the importance of having a Web that can be the foundation both for situations where it is important to stop the spread of certain material and for situations where it is important to keep it flowing.
To achieve this, we must be very careful with the design of the ecosystem. For example, trusted identity is straightforward to achieve in Norway, where we have good reason to trust the government, but doing the same in Iran would run counter to the goal of open debate and therefore be stifling. Trust in identity is an example of something that must be built in very different ways around the world, and the ability of local programmers to integrate different approaches into the same ecosystem is therefore instrumental to making it work for everyone.
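As a rough sketch of what this pluggability could mean in code, consider a single verification interface with locally chosen implementations behind it. All provider names, regions, and data below are hypothetical placeholders.

```python
# A sketch of pluggable identity trust: one verification interface,
# with locally chosen roots of trust behind it. ASSUMPTIONS: all
# provider names, regions, and data are hypothetical.
from abc import ABC, abstractmethod


class IdentityProvider(ABC):
    """Interface every locally appropriate trust root implements."""

    @abstractmethod
    def verify_claim(self, subject: str, attribute: str, value: str) -> bool:
        ...


class GovernmentEID(IdentityProvider):
    """A national e-ID, suitable where the government is broadly trusted."""

    def __init__(self, registry: dict):
        self.registry = registry  # stands in for the official registry

    def verify_claim(self, subject, attribute, value):
        return self.registry.get(subject, {}).get(attribute) == value


class WebOfTrust(IdentityProvider):
    """Peer attestations, for places where state-rooted trust would be stifling."""

    def __init__(self, attestations: dict, threshold: int = 3):
        # attestations maps (subject, attribute, value) to a count of peers
        self.attestations = attestations
        self.threshold = threshold

    def verify_claim(self, subject, attribute, value):
        return self.attestations.get((subject, attribute, value), 0) >= self.threshold


# Applications program against IdentityProvider; which implementation
# answers depends entirely on local configuration.
providers: dict[str, IdentityProvider] = {
    "no": GovernmentEID({"kari": {"age_over_16": "true"}}),
    "ir": WebOfTrust({("hossein", "age_over_16", "true"): 5}),
}
print(providers["no"].verify_claim("kari", "age_over_16", "true"))     # True
print(providers["ir"].verify_claim("hossein", "age_over_16", "true"))  # True
```

The design choice is that the applications built on the ecosystem stay the same everywhere, while the root of trust behind the interface varies by society.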
It is undeniable that there is a tension between the above goals, but I think it is easy to agree that both are important. Having the same platform do both of these things is a great technological challenge, and if we can do it, I think it will be a very important achievement for all of mankind.