To truly safeguard the metaverse, we need to understand the harms users are exposed to. And that means all users, whatever their age – adults and children alike.
While Virtual Reality – VR – is often perceived as a family entertainment device, most consumer headsets have a lower user age limit of either 12 or 13. This is written into the manufacturer’s terms and conditions, and is enforced mainly by linking the headset to an existing online account (e.g., a Facebook account) for which the user has already supplied a date of birth.
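To see how thin this protection is, consider what such an age gate reduces to: a comparison between a self-declared date of birth and a threshold. The Python sketch below is purely illustrative – the function names and the 13-year threshold are assumptions, not any manufacturer’s actual code – but it captures why a false date of birth entered at account sign-up passes unchallenged.

```python
from datetime import date

MIN_AGE = 13  # assumed threshold; manufacturers' terms vary between 12 and 13

def age_on(dob: date, today: date) -> int:
    """Full years elapsed between a date of birth and a given day."""
    years = today.year - dob.year
    # Knock a year off if this year's birthday hasn't happened yet
    if (today.month, today.day) < (dob.month, dob.day):
        years -= 1
    return years

def may_use_headset(linked_account_dob: date) -> bool:
    """Gate headset use on the date of birth held by the linked account.

    The check is only as strong as the self-declared data behind it:
    there is nothing here that verifies who is actually wearing the headset.
    """
    return age_on(linked_account_dob, date.today()) >= MIN_AGE

# A six-year-old whose account was created with an adult's date of birth
# clears the gate exactly as an adult would.
print(may_use_headset(date(1990, 1, 1)))  # True
```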
However, there’s consensus among industry experts and researchers that the lower age limit is not widely adhered to.
At present, VR headsets are covered by the UK ICO’s Children’s Code, which requires organisations providing online media to under-18s to include adequate age-verification processes. Companies that do not conform to the code could be fined under GDPR legislation.
It’s evident from fieldwork conducted for the recent IET report ‘Safeguarding the metaverse’ that VR is being used by children, and that, without intervention, the number of children in the UK using VR is unlikely to decline.
A key issue identified in our metaverse fieldwork is that of unsupervised children in openly accessed VR social spaces. This includes children both under and over the age of 13, the mandatory lower age limit for VR users.
During our fieldwork we met children in these spaces who told us they were as young as six – children interacting with adult strangers. We saw the problem on social platforms including Altspace, VRChat and Meta’s Horizon Venues.
What makes this situation different to non-immersive media – such as chatrooms – is that VR is embodied. Strangers can not only say things and share content: they can also interact using their bodies, as represented by avatars. On VRChat – one of the most popular metaverse apps on the Meta Quest – we also observed avatar nudity.
Research from the Centre for Countering Digital Hate (CCDH) shows that VRChat is also “rife with abuse, harassment, racism and pornographic content”. CCDH researchers found that users, including children, are on average exposed to abusive behaviour every seven minutes. Abusive behaviour recorded and reported by CCDH researchers included:
• Exposure to graphic sexual content
• Bullying, sexual harassment and abuse of other users, including children
• Minors being told to repeat racist slurs and extremist talking points
• Threats of violence
One aspect of VR that differentiates it from other forms of media is its personal nature. Unless the user specifically chooses to ‘cast’ their headset’s view to an external screen, the experience remains a solo one.
Parents, teachers and caregivers cannot easily monitor what a child is doing or seeing. Some apps do not allow ‘casting’ or recording. This creates a ‘black box’ effect, in which supervision becomes difficult.
Furthermore, studies indicate that VR can cause psychological issues for younger children by blurring the line between imagination and reality.
Harassment and abuse are also common experiences for VR users when they spend time in open spaces where strangers can meet.
In the case of the metaverse and immersive technologies, user-driven safety features aimed at addressing harassment and abuse are not enough.
The solutions being offered by technology companies for user safety – block and mute features, for instance – are primarily instigated by the victim. But by the time a victim has found the block, mute and report buttons, the psychological damage has often already been done.
Technology companies must now be incentivised to tackle harassment and abuse at their core – changing the culture of these spaces – rather than placing the onus on victims.