Virtual reality (VR) in popular culture has long promised a world of immersive online experiences that might one day come to complement, or even supplant, our regular everyday lives. The reality today is more immature than fully immersive, with computer-generated characters bumping around in a limited online world. As this promise evolves into (virtual) reality in the coming years, it raises fascinating questions for risk functions, not least around VR security and virtual fraud.
The future of the metaverse has had a virtual shot in the arm in recent years, as major players—Meta (Facebook), Microsoft, Google, Nvidia, Unity, Shopify, and more—announced significant investment commitments. Meta alone committed over USD21bil in metaverse investment in 2021—more than five times what it paid for VR company Oculus in 2014—as the company bets big on the virtual world of the future.
The potential value captured by early pioneers could be immense, with investment analysts Grayscale predicting that the metaverse represents a trillion-dollar revenue opportunity across advertising, social commerce, digital events, hardware, and developer/creator monetization. As with any big-money opportunity, there are undoubtedly bad actors waiting on the fringes to exploit users and corporations for their personal gain.
Understanding how the metaverse works
While common perception might see the metaverse as an endless online world of virtual interaction, a broad definition incorporates that iconic embodied virtual-reality experience, as well as a virtual experiential platform or landscape, and the broader Web 3.0 framework for economic and experiential interoperability.
If that all sounds complex, the reality is that like our ‘real’ world, the ‘virtual’ world can incorporate many facets. Logging onto a PC and entering the simulated world of popular game Minecraft could be considered a dimension of the metaverse, but quite removed from the experience of strapping on a VR headset and attending a virtual concert.
In one Pew Research survey of technology experts, more than half (54%) believed that by 2040 the metaverse will be a more refined, truly immersive, and well-functioning aspect of daily life for at least 500 million people globally.
A truly unified metaverse where users can seamlessly move between these facets is a long way off, as protected (and siloed) corporate interests and sheer computational power requirements create a barrier to this future. What’s important to understand when it comes to risk is that these disconnected environments are still generating huge amounts of data, and opening up new avenues and opportunities for fraudsters to identify, assess, and interact with potential victims—both corporate and consumer.
Fraud in an expanding virtual world
While it might be at a nascent, and at times rather cartoonish, stage, the rise of VR engagement points to the inevitable future of metaverse growth. The value of VR and augmented reality (AR) products is expected to grow from USD4.34bil in 2021 to over USD36bil by 2025, according to research by IDC Worldwide. The wider AR and VR market is expected to be worth USD200bil to USD450bil by 2030, depending on how aggressive the projections are.
These values matter because they imply both significant user growth and the value that can be leveraged by bad actors. An expanding threat surface, and the expanded monetary value of that surface, mean fraud in virtual reality is as inevitable as someone naming their online avatar Elvis Presley and pretending that they’re the King returned from beyond the grave.
There are a number of ways that an expanding metaverse will expand fraud risks that risk management teams should be aware of. The most obvious angle is through data piracy and personal data theft. Huge volumes of user data will be generated by the metaverse, data which in turn can be used to unlock value in the real world. Good metaverse safety will mean securing that data in the same way that telecommunications companies today secure the rapidly expanding volume of data flying between mobile devices. It’s imperative that the same protection is given to this new, high-volume data world.
At Neural Technologies we know the value of data, and we work with clients across the world to deliver solutions that can meet that expanding need. Our own Fraud Management product is designed to integrate any data of any type, with machine learning (ML) and artificial intelligence (AI) technologies that allow for scalable, automated solutions. That type of scalable and data-agnostic functionality will be critical in a world where the metaverse is adding to our expanding threat landscape.
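To make the idea of automated, data-driven fraud scoring concrete, here is a minimal sketch of one of the simplest techniques in that family: flagging events whose values sit far from the statistical norm. The data, threshold, and field meaning are all hypothetical, and production systems would use far richer models than a z-score—but the principle of letting the data itself define “suspicious” is the same.

```python
from statistics import mean, stdev

def anomaly_scores(values):
    """Score each value by how many standard deviations it sits from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [abs(v - mu) / sigma for v in values]

# Hypothetical per-session spend amounts pulled from metaverse transaction events
spend = [12.0, 9.5, 11.2, 10.8, 250.0, 10.1]

scores = anomaly_scores(spend)
# Flag any session more than two standard deviations from typical behaviour
flagged = [i for i, s in enumerate(scores) if s > 2.0]
```

Because the scoring works on plain numeric features rather than any specific schema, the same pattern applies to whatever event streams a metaverse platform emits—one reason data-agnostic pipelines matter.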
Money laundering may also be a challenge, as bad actors take to the virtual world to hide the source of their illegal funds. Given the huge pressure for regulatory compliance in the digital landscape, it’s likely that AI/ML driven anti-money laundering will be an important part of an effective metaverse solution.
Another obvious example comes from the idea of virtual phishing to socially engineer an individual into handing over valuable personal or financial data. Imagine, for example, making a friend in a virtual world who convinces you they’re in trouble and asks you to transfer money. As a growing number of corporates transition to metaverse delivery, there’s also a risk of fraudsters posing as official agents of institutions such as banks to target victims.
These kinds of phishing frauds are already a major problem for communication service providers (CSPs) in the existing telecommunications landscape, and one which our SCAM Block solution for scam phone calls was designed to address. If you eliminate the opportunity for a fraudster to on-board and connect with a victim, then you cut off fraud at the source. That will be just as important in the virtual world as it is currently in our real world.
Account takeover fraud and impersonation are another obvious area of potential challenge. At the early stages, this will require strong electronic know-your-customer (eKYC) to verify and validate the authenticity of applications and user registration. We know from the implementation of eKYC in our own products how important that can be in cutting off fraudsters before they reach the end consumer.
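A first-line eKYC screen typically checks that an application is complete and internally plausible before any deeper identity verification runs. The sketch below illustrates that first gate only; every field name and rule here is hypothetical, not drawn from any real eKYC standard or product.

```python
import re
from datetime import date

def ekyc_screen(application):
    """Illustrative first-pass screening of a registration application.
    Returns a list of issues; an empty list means the application passes
    this initial gate and can move on to real identity verification."""
    issues = []
    # Completeness: all required fields must be present and non-empty
    for field in ("name", "dob", "id_number"):
        if not application.get(field):
            issues.append(f"missing {field}")
    # Plausibility: crude age check on the stated date of birth
    dob = application.get("dob")
    if dob and (date.today().year - dob.year) < 18:
        issues.append("applicant under 18")
    # Format: hypothetical ID pattern of two letters plus seven digits
    id_no = application.get("id_number", "")
    if id_no and not re.fullmatch(r"[A-Z]{2}\d{7}", id_no):
        issues.append("id_number format invalid")
    return issues

app = {"name": "A. Vatar", "dob": date(1990, 5, 1), "id_number": "AB1234567"}
result = ekyc_screen(app)  # empty list: passes the first screen
```

Real eKYC goes much further—document liveness checks, biometric matching, sanctions screening—but even a cheap gate like this stops low-effort fraudulent registrations before they consume costlier verification steps.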
Metaverse operators will need to take their own steps to address this potential avenue for fraud. One likely approach will be using link analysis data like that integrated into our own Fraud Management products, using user data to ‘link’ and correlate potential fraud risks. Machine learning will also be a powerful tool in this regard, as it is able to learn and grow to identify ‘new’ fraud risks or behavioural profiles as they emerge—one reason it’s so fundamental to our own suite of solutions.
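The core of link analysis is correlating accounts that share identifiers—a device, a payment card, an e-mail—into clusters worth investigating. Below is a minimal sketch of that idea using a simple union-find over shared attributes; the account records and identifier names are invented for illustration and do not reflect any particular product’s data model.

```python
from collections import defaultdict

def link_accounts(accounts):
    """Cluster accounts that share any identifier, via union-find.
    Returns only clusters with more than one account, since a lone
    account sharing nothing with others raises no link-analysis flag."""
    parent = {a["id"]: a["id"] for a in accounts}

    def find(x):
        # Path-halving lookup of the cluster representative
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Group account IDs by each identifier they carry
    by_attr = defaultdict(list)
    for a in accounts:
        for attr in a["identifiers"]:
            by_attr[attr].append(a["id"])

    # Any two accounts sharing an identifier join the same cluster
    for ids in by_attr.values():
        for x, y in zip(ids, ids[1:]):
            parent[find(x)] = find(y)

    clusters = defaultdict(set)
    for a in accounts:
        clusters[find(a["id"])].add(a["id"])
    return [c for c in clusters.values() if len(c) > 1]

accounts = [
    {"id": "u1", "identifiers": {"dev:aa", "card:11"}},
    {"id": "u2", "identifiers": {"dev:aa"}},   # shares a device with u1
    {"id": "u3", "identifiers": {"card:99"}},  # shares nothing
    {"id": "u4", "identifiers": {"card:11"}},  # shares a card with u1
]
clusters = link_accounts(accounts)
```

In practice the linked clusters would feed further scoring—machine learning over cluster size, velocity, and behaviour—rather than triggering action on their own.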
That idea of addressing the ‘unknown’ will be critical in a rapidly evolving sphere like the metaverse. We don’t yet know what forms of fraud might emerge in this new arena, which is why AI/ML solutions will be a critical part of addressing it. They’ll also be fundamental capabilities when it comes to actively reacting and responding to the huge wealth of data the metaverse will generate.
The metaverse promises a remarkable virtual world of opportunities, and one which, given current investment, is likely to accelerate significantly over the next decade. With this new frontier comes new fraud risks. But with the right data, backed by the power of AI/ML technologies, operators can work to ensure the future of the metaverse is secure for users.