Comments
Some personal thoughts on recent trust and risk issues in the digital economy.
I have mixed identities about wallets
12 APRIL 2024
The final act of the European Digital Identity framework, a revision of the eIDAS Regulation (EU) No 910/2014, was signed yesterday, 11 April 2024. By 2026, all EU Member States will need to provide their citizens and residents with a digital wallet. A significant novelty is that this system won't solely serve e-Government purposes: major tech companies such as Google, Meta, and Amazon will also be required to accept the wallet for user authentication (which in turn allows them to request information from their customers and users).
There are a number of aspects that were, fortunately, improved during the development of this legislation. First of all, as with everything, digital does not automatically mean better. The non-discrimination protection for people who either do not want to or cannot use the wallet is crucial to ensure freedom of choice and social inclusion. There should be no disadvantages, higher costs, or any other kind of exclusion imposed on people who opt out. This safeguards various groups: those without smartphones, those with low digital literacy, and individuals who are wary of the wallet's risks.

Second, private companies can use the wallet to request personal information from customers and users. To mitigate the resulting risks, companies requesting information through the wallet are subject to a restriction provision: they must state in detail the purpose of the request and the specific data they seek. Users are informed about the company's identity and have the right to refuse sharing certain information.

Third, the famous unique identifier debate has been settled. The initial proposal called for the introduction of a unique identifier for individuals, which would have effectively created a comprehensive tracking system across all aspects of life. In certain EU countries this is, for good reasons, unconstitutional, as such an identifier makes it possible to correlate everything a user does across different areas of life. In the final text of the legislation, not only was the call for a unique identifier removed, but additional privacy protections to deter extensive tracking and profiling were introduced.
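To make the correlation risk tangible, here is a minimal sketch (in Python; it illustrates a generic pseudonymization technique, not necessarily the exact mechanism the final eIDAS text prescribes, and the seed and derivation scheme are illustrative assumptions) of how deriving a distinct pseudonym per relying party deprives two services of a shared join key:

```python
import hashlib
import hmac

# Instead of one unique identifier presented everywhere, a wallet can
# derive a different pseudonym for each relying party from a secret seed.
# The derivation scheme here is an illustrative assumption.
def pseudonym(wallet_seed: bytes, relying_party: str) -> str:
    return hmac.new(wallet_seed, relying_party.encode(), hashlib.sha256).hexdigest()

seed = b"secret-seed-held-only-by-the-users-wallet"
print(pseudonym(seed, "amazon.com"))    # one identifier for Amazon...
print(pseudonym(seed, "facebook.com"))  # ...a different one for Facebook

# Because the two outputs are unlinkable without the seed, the companies
# cannot correlate the same user across their services.
```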
Now, there is also a flipside to the new regulation. What is particularly worrying is the amount of power the new digital identity architecture hands to the governments providing the wallet. Although improvements have been made to ensure unobservability (Recital 11c) by preventing wallet providers from collecting data and gaining insights into the transactions of wallet users, there are technical exemptions to this, based on explicit user consent. Consent-based data insights are a contentious topic and a discussion in their own right, so we will have to see how much governmental observability this provision truly restricts.

Moreover, the wallet is marketed as open source. In reality, this is not completely true: Member States can decide that the source code of specific components will be kept closed for "duly justified reasons". On top of that, security concerns may arise when Member States designate national certification bodies to ensure the wallet's safety and legal compliance. This can lead to varying security levels and risks for users in different Member States. Certification for privacy compliance under the GDPR is likewise left to national authorities' discretion, which may result in inconsistent privacy and security certifications across countries.

Forum shopping is another concern. Large tech companies headquartered in Europe are regulated by their national authorities (e.g. Facebook Ireland). If Facebook Ireland requests personal information it shouldn't, the competent national authority is likely to act in the corporate interest, and regulatory bodies in other Member States may lack the power to remove non-compliant companies from the wallet system if they operate in these specific jurisdictions.

Finally, there are a number of issues with Qualified Website Authentication Certificates under Article 45. This is not strictly related to the digital wallet, but it obliges clients to accept website certificates backed by root certificates from European Trust Service Providers, which could potentially be used for interception and surveillance. Despite alarming calls from industry experts, Article 45 remained unchanged.
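To see why Article 45 alarmed security experts, here is a minimal sketch (in Python, using only the standard library; the certificate file name is hypothetical) of the underlying trust mechanics: a client accepts any certificate chain that ends at a root in its trust store, so whoever controls a mandated root can issue a certificate for any domain that clients will accept.

```python
import socket
import ssl

# A client obliged to trust a state-designated root adds it to its trust
# store. "eu_tsp_root.pem" is a hypothetical root certificate from a
# European Trust Service Provider.
context = ssl.create_default_context()
context.load_verify_locations(cafile="eu_tsp_root.pem")

# From here on, ANY certificate chaining up to that root validates,
# including one issued for example.com without the domain owner's
# involvement -- which is exactly what an interception proxy presents.
with socket.create_connection(("example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        # The handshake succeeds as long as the chain ends at any trusted
        # root; the client cannot distinguish a mis-issued certificate
        # from a legitimate one.
        print(tls.getpeercert()["subject"])
```

This is the crux of the interception concern: trust in TLS is only as strong as the least trustworthy root a client is required to carry.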
Engineered romance and why we are done with that
24 MARCH 2024
I recently came across this article, which argues that the dating landscape is experiencing a significant shift away from apps. Many people hope to meet others more 'authentically', as the superficial nature of dating apps, filled with spam, bots, and fake accounts, becomes discouraging and emotionally draining.
I believe that the fast-paced dating environment and its recycling mentality do indeed drain people. Engineered superficiality and the sheer number of options people are faced with lead to experiences of rejection, ghosting, inauthentic interactions, significant investments of time and energy, and negative impacts on people's self-image. The constant cycle of swiping, matching, and messaging is exhausting. It's like online shopping, with the slight difference that clothes, shoes, and handbags usually don't play with our feelings. No wonder many people are now falling back on 'traditional' methods of meeting others, hoping to find more authentic and fulfilling experiences outside the confines of digital platforms.
At the same time, developers are of course witnessing this online dating fatigue, which probably also generates ideas for new forms of software romanticism and new ways to monetize relationship formation online. With Bumble, Hinge, and Feeld following Tinder to address the very challenges Tinder created, it is only a matter of time before new algorithmic applications equipped with innovative and addictive features replace the current left-and-right swiping lethargy. Soon we will be offered yet another technology-mediated way of disrupting how we form real-life connections with others.
Frankly, though, I do not think it is the technology this time, or the socio-technical design of mobile applications, or a lack of innovation to make online dating a 'real life, but better' experience. It's us. We have forgotten to be patient and to take the time to let good and valuable things enter our lives. We have forgotten how it feels to take ourselves out on a date alone. How it feels to walk around or stand in a queue without looking at our screens, but at each other's faces.
Perhaps it is time to reconnect with the world, and not to blame tech this time. I do believe we have unlearned how to embrace both the beauty and the pain of real-life interactions and connections. Instead of asking whether one should or should not use dating apps, maybe we should ask whether there is a right and a wrong way to use those apps. And maybe it is also time to stop obsessing about who could be out there waiting for us, what we are missing out on, and why meeting the right one has not worked out so far. It might sound cheesy, but frankly, these aren't the right questions to ask when looking for a happy and fulfilled life. It is not about what a dating app can or can't afford us. It is about what kind of people we are surrounded by, and what kind of person we are becoming around others. What do we have to offer to others? The right questions, and the answers to them, start within ourselves. Not with the use or non-use of dating apps.
Guardians of the Internet's wild west nobody talks about
9 MARCH 2024
Under the Digital Services Act (DSA), Very Large Online Platforms (VLOPs) and the two designated Very Large Online Search Engines (VLOSEs), that is, services with over 45 million monthly users in the EU, are required to submit daily reports on their content moderation decisions. Since its inception in September 2023, the DSA transparency database has collected over 735 billion content moderation decisions submitted by these tech companies to the EU Commission.
However, while the DSA transparency database offers aggregated analytics on the content moderation decisions of major digital platforms in the EU, it does not expose the full detail of the information these platforms submit. For instance, the aggregated statistics make it impossible to drill into specific reasons for moderation, such as the 'scope of platform service' category, and they lump together various forms of illegal or harmful speech without differentiation. This limits understanding, especially when trying to discern moderation patterns across different platforms, or societal trends in when and why certain harmful or illegal categories peak.
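For illustration, here is a minimal sketch (Python with pandas) of the kind of per-platform, per-category breakdown that the aggregated view does not support; the file name and column names are assumptions for the sake of the example, not the database's verified schema:

```python
import pandas as pd

# Hypothetical daily dump of statements of reasons; the columns below
# are assumed names, not the database's documented schema.
df = pd.read_csv(
    "sor-2024-03-01.csv",
    usecols=["platform_name", "category", "created_at"],
)

# Cross-tabulate decisions by platform and stated category: this is the
# breakdown that would reveal, per platform, how often a reason such as
# 'scope of platform service' is actually invoked.
print(pd.crosstab(df["platform_name"], df["category"]))

# Daily counts per category, to spot when a harmful or illegal category
# peaks and to relate that to events in the real world.
df["created_at"] = pd.to_datetime(df["created_at"])
print(df.groupby([df["created_at"].dt.date, "category"]).size())
```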
Beyond the transparency database, it is the work of content moderators that piques my curiosity, and I want to raise a couple of questions about moderators' working environment and conditions. I believe that scrutinizing the labor of content moderation is an important approach to evaluating compliance with the DSA, its norms and standards, and ultimately the governance of large technology companies for societal well-being. What training do moderators receive? What morals do they live by? Are the resources allocated to this division sufficient? What benchmarks guide their decisions, and are these benchmarks practical and realistic, considering the sheer amount of content uploaded by millions of users every day? Although a lot of content is filtered out by AI systems, some images cannot escape manual review. So another pressing question is: what is the impact on moderators' mental well-being, both in the short and long term?
Understanding the nuances of content moderation means exploring these questions, which I consider genuinely important. This is the part of online governance that ultimately shields society from society, and we should not underestimate the efforts that are invested - and misinvested - in this work, or the insights this labor can give us.