I had mixed feelings while reading the Digital Services Act. To follow up on last week’s readings and discussion, media literacy appears to be the obvious long-term solution to halt the spread of false information that manipulates people. Yet it would only be a way of reacting to dishonest or sensationalist uses of social media by journalists or by leaders of (in this case, far-right) movements, rather than a fix for the downsides of fast-traveling, easy-access information. And since, as Freedman notes, right-wing groups actually depend on social media for the visibility they need, media literacy alone certainly won’t stop writers from molding news to convey provocative assumptions or to target minority groups (as shown by the coverage of Roma in the Czech Republic analyzed in Slavíčková and Zvagulis’s study). The tactics of those who wish to advance their political group’s ideas are caught in the wheel of social media, already too big and too fast for anyone to hope to control whatever is thrown into it. Freedman’s point about populist parties wanting to minimize regulatory controls shows how much they rely on social media (it benefits them that regulation is kept minimal); at the same time, those opposed to such movements (presumably those targeted by the anti-elite narratives) also contribute to their visibility on these platforms. Online platforms are at the heart of how information is introduced, consumed, and used. In this line of thought, regulations on their content are necessary to complement other solutions and limit the possible negative impacts.
I think that, at first glance, the regulations proposed by the Digital Services Act are rational and well founded. They seem aimed at preventing information from being hidden, ensuring that no market dominance develops and that both consumers and companies are protected when they navigate websites. It strikes me as predominantly a reaction to the misuse of how web platforms display content: a short-term solution compared to, say, media literacy courses, since it acts directly on the content rather than on how the viewer deals with it. But I would worry that a guarantee that all the online material on a given website is reliable could be so reassuring that viewers feel they no longer need to exercise their critical judgment. And since loopholes and other kinds of tinkering with the rules always appear, and grey zones can be exploited (such as omitting the Roma side of an event in a news article, as Slavíčková and Zvagulis explain), declaring online services reliable might put judgment to sleep. Perhaps adding a disclaimer that, despite efforts to provide trusted information, critical judgment is still advised would help? I doubt that any regulation can be inclusive or forward-looking enough to prevent every misuse or concealment of information; it would only be a reaction, a short-term solution until another loophole is found. It is still a good and needed plan, but I believe it has to be combined with an emphasis on critical judgment whenever people are exposed to information online.