California Law Review


Podcast with Yeji Kim: Virtual Reality Data and its Privacy Regulatory Challenges

This podcast accompanies the student note by Yeji Kim: Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent


Transcript

SPEAKERS

Carter Jansen, Yeji Kim

00:00

Imagine putting on a virtual reality headset that makes you feel as if you're present in a different world. The more data that the virtual reality company gathers about you, the more realistic it can make your individual experience. But this data could be used to infer a lot about you. For example, data from a virtual reality game could be used to help an employer decide whether you would be a good employee, or to infer your medical conditions. What does, and what should, the law do to protect you? Welcome to the California Law Review podcast. Our goal is to provide an accessible and thought-provoking overview of the scholarship we publish. Today, we will be discussing a piece by Yeji Kim titled Virtual Reality Data and Its Privacy Regulatory Challenges: A Call to Move Beyond Text-Based Informed Consent. This is a student note published in Issue One of Volume 110, February 2022. Yeji, thank you so much for sitting down to talk with us today.


01:02

Thank you so much for having me.


01:04

So just to start, can you give us a quick overview of what virtual reality even is?


01:11

Yeah, virtual reality refers to computer-generated technology that creates a simulated world. The unique goal of virtual reality is to provide an immersive experience where the user is made to feel physically and mentally present in a different world. And ideally, this presence feels so real that a user would perceive and behave as they would in the real world. This technology has been heralded as really promising because it has endless possible applications. For example, virtual reality could emulate flight training for pilots without the accompanying dangers of real-world training. And it can be used across many sectors like education, health care, and so on and so forth.


01:56

Okay, and what do you argue for in this piece?


01:59

Yeah, the question that I had was: what kind of data does virtual reality need to collect from users to create that kind of immersive experience, and does that collection come with any privacy regulatory challenges? My argument in the piece is that yes, virtual reality poses some unique regulatory challenges, not only because it needs to collect a lot of data, but also because the particular combination of data collected can support extensive inferences about its users' psychological, physiological, and behavioral tendencies, all the while this data collection and analysis isn't really obvious to the users. Currently, data privacy laws like the EU's General Data Protection Regulation, the GDPR, which is the most stringent data privacy law there is, rely on users' consent to data collection and use, given after reading a company's privacy policy, as one of the key ways to safeguard privacy rights. And I argue that in the context of virtual reality, even if users do read privacy policies, which they seldom do anyway, they still will not be able to fathom the extent of the inferences that can be drawn about them from the kind of data collected and analyzed. So I try to offer some solutions so that users can more consciously weigh the benefits and privacy risks associated with using virtual reality.


03:27

So can you give some examples of potentially harmful inferences that can be drawn using this VR data?


03:35

So for example, there is a study currently going on where, based on the body movements of children, the analysis can identify which children are autistic. So that's one inference that can be drawn about users in virtual reality. And there's another study, for example, on what's called the kinematic fingerprint, which refers to the kind of bodily movement that's unique to each individual. So if you analyze how somebody moves their body, that's the equivalent of being able to identify that person, just like a fingerprint would. Those are the kinds of inferences that can be drawn about each individual as well.


04:14

Okay, that's super interesting. And what made you decide to write this piece? What made you interested in this topic?


04:22

So I think during my 1L summer, I encountered this article that talked about virtual reality and how it can really function as a surveillance device. And I remember being kind of shocked about it, because to me, virtual reality was just a gaming tool. So to think of virtual reality as something else was really surprising. And I thought there are some inherent privacy risks associated with virtual reality just because it's not really intuitive for us to understand the kind of data that gets collected and used.


04:53

So to contextualize what we're talking about: this discussion of data privacy and virtual reality is part of a larger topic that you just mentioned, the regulation of big data. So for listeners who may not be familiar, what is big data?


05:09

Yeah, big data, as the name suggests, refers to large and complex sets of data. And its key characteristic is that the insights and the purpose of the data aren't really clear until after it has been processed and analyzed. So, for example, one example of big data is users' purchase history from various stores, online and offline, which can be used to infer their shopping tendencies. But we can't really know what their shopping tendencies are until the data is actually analyzed. And that's the key distinction of big data.


05:42

Okay, so it's large and complex sets of data taken from internet users or consumers or in other contexts, and what makes big data valuable?


05:54

The value of big data lies in the fact that it can do predictive analysis. Predictive analysis is basically asking what a user is going to do or what is likely to happen. For example, there is a study showing that, based on the very subtle muscle tremors of a user, you can predict who is going to develop Parkinson's disease. So that kind of predictive analysis helps the businesses and organizations that use big data make better decisions and figure out what is going to be more efficient for them.


06:28

And bringing it back to virtual reality, what type of big data is collected from VR users?


06:35

The kind of big data that gets collected from VR users is going to be a combination of behavioral data, psychological data, and physiological data. An example of physiological data is the way we move our bodies, as I mentioned previously. It also includes the eye-tracking software that is going to be implemented in VR in the near future, which captures how we move our eyes when we are reacting to a certain scenario in virtual reality. And that can capture what we respond to, what we're focusing on, and what a company can do to make us focus on something longer.


07:13

So you also explain in your note, that the regulation of big data typically involves a consent based framework. Can you explain what this consent based framework is?


07:25

Yeah, the consent framework just refers to what we as consumers are already used to, which is reading a privacy policy and then checking the yes box to consent to the collection, processing, and use of our personal data. Sometimes privacy laws don't require consent at all, for example if a company isn't collecting personal data or sensitive information; it really depends on the privacy law at hand. But that's the general structure of the consent framework. And it presupposes that a user knows a lot about themselves and can make really well-informed decisions about the kind of data that's collected after reading the privacy policy.


08:11

So you say that sometimes privacy laws don't require consent if personal information is not being collected? Can you expand on this a little bit?


08:19

What I mean is that, in the US context, only what is considered personal information gets regulated. Personal information is something that can directly or indirectly identify you. That can be something very obvious, like a Social Security number, but it can also be something much less obvious, like bodily movements. Anything that's not considered personal information falls outside that protection. So for example, taking out your name and analyzing only your behaviors, making the data pseudonymous, that's not considered personal data, and information that's already out in the public space isn't considered personal data either. In those cases, the law doesn't really regulate or provide protection for privacy rights.


09:08

Why do some criticize the use of consent based frameworks when it comes to big data?


09:14

Because it's inherently in tension with big data. Consent, as I explained earlier, necessitates that a user understand the ramifications of their consent prior to the collection and use of the data. But big data's key characteristic is that its meaning is only available after collection, processing, and analysis. So sometimes a company may collect certain data without really knowing what they're going to use it for, and then use it for other reasons. And that runs up against the law in the sense that there's this principle called use limitation: you're only supposed to use the data for its original purpose. But that can get very vague, especially in the context of big data.


10:03

Virtual reality data is a specific type of big data and, as you argue, a type of data that is inherently intrusive. So can you talk a little bit about why virtual reality creates unique privacy risks for users, and how its data is different from data collected in other online contexts?


10:22

So I think the first reason is that the goal of virtual reality itself is a little bit counterintuitive from a privacy standpoint. The goal is to provide an immersive experience, right? The more data a company collects about each user and the more personalization it allows, the more truly immersed a user can be in a different reality. So by definition, virtual reality just requires a lot of data collection and use. That means the principles I've mentioned, like use limitation, aren't really effective, because the virtual reality context necessitates a lot of data to begin with. Virtual reality and those principles are inherently in tension, and I think that's really difficult to regulate. Another unique risk of virtual reality is profiling. User behaviors in virtual reality can differ in many ways from behaviors in actual reality. There has been a study where somebody is asked to jump off a cliff in a virtual reality setting. When they are about to jump, they feel real fear, and when they do jump off, they feel real pain. But in an actual reality setting, even if somebody were asked to jump off a cliff, they're not really going to. So there is that behavioral divergence in virtual reality, but because it's an immersive situation, other businesses may use that behavioral data to infer something about a user, and I think that can be problematic. And another real risk is real-time psychological manipulation. This is more forward-looking, but virtual reality can not only respond to a user's psychological needs in real time by showing relevant targeted advertising, it can also create those psychological needs to sell more advertisements. And that's especially true because, unlike other types of technology, virtual reality can create an entirely new 4D world for us to react and respond in. Targeted advertising started in the desktop space, then moved to mobile devices and TVs, and it will soon transition into the virtual reality space as well.


12:47

Going back to the profiling point, I just want to clarify: the implication is that an inference drawn using virtual reality data isn't necessarily true, because behavior in VR doesn't necessarily match up with how someone would act in real life.


13:07

That's correct. Yeah, just to expand on that a little bit: we can see virtual reality being used in hiring decisions. I don't know if you have experienced it before, but these days a lot of companies use computer games to make hiring decisions. I have played one, and it's really funny. They try to measure how you respond to risk based on a computer-based scenario: they give you fake money and see how you would spend it. But to me, it didn't really feel like it was making very accurate inferences about me, because I was using fake money and it didn't really have any implications for me in reality. You can see that kind of computer game moving toward the virtual reality space, with people playing those games in VR, and that's a movement that's going to continue. I read an article where the author was talking about how VR is going to be more effective at making those inferences than a computer game.


14:13

To illustrate the unique privacy risks of VR, you discuss how the EU General Data Protection Regulation (that's the GDPR) applies to the use of VR data. Can you give some background on what the GDPR is?


14:30

Yeah, the GDPR came into effect in 2018. It is the EU's data protection law, and it has been heralded as the gold standard for data privacy laws in general. It has influenced other data privacy laws in different parts of the world, even in the US, and it has also been applauded as one of the more stringent data protection laws in the world.


14:59

And do you think that the GDPR adequately protects the privacy of VR users?


15:05

I think yes and no. The GDPR definitely has a lot of forward-thinking features. For example, its definition of biometric data is really forward-thinking. We've talked about personal information before, and biometric data receives even more protection than personal information does. Biometric data traditionally refers to identifying an individual based on a characteristic that's not really alterable, like fingerprints. But when the GDPR defines biometric data, it includes features that indirectly identify an individual, so a body movement can count as biometric data. In that sense, the GDPR is very much forward-thinking. But at the same time, it still places the burden on the consumer to figure out what the risks of VR are and to consent to something that they may not really understand.


16:02

Why is text an ineffective medium for communicating privacy risks to consumers, especially the risks that are associated with virtual reality?


16:13

Yeah, it's because the meaning of the text shifts, since virtual reality can collect and process so much information. For example, Oculus, a VR company that was acquired by Facebook, has a privacy policy that says something like, we collect information that's unique and relevant to you and process that information. A user reading that can't really fathom what is relevant or unique to them. Well, they have their own understanding of what is unique and relevant to them, but that doesn't necessarily match the company's understanding, because the company can infer so much from the user's data and define for itself what is relevant. So even after reading the privacy policy, which no one does anyway, the user wouldn't really walk away understanding the full ramifications of consent. And I think that's what gives virtual reality a particular privacy risk.


17:15

So that brings us really nicely to your proposed solution. So what are the two approaches that you suggest in your note for upholding VR users' privacy rights?


17:27

Yeah, one is to move away from text and instead use interactive videos to receive user consent. The videos would be part of an interactive learning tool where users would have to answer a series of questions about the privacy risks associated with using virtual reality. Other fields, like medicine, are already using this kind of interactive learning tool to help people understand the risks associated with clinical trials. And I think it can also be helpful in the VR setting, because the implications of using VR aren't very obvious to users, so seeing it visually, listening, and using our senses to understand the risks is going to be more effective than a purely text-based consent model. The other solution I'm trying to offer is giving the user control over privacy settings, particularly over the amount of customization that can go into VR. The benefit of that is, first of all, just by seeing the available levels of customized settings, a user can understand the kinds of customization that can go on in VR. And another benefit is that by choosing a certain type of privacy setting, they can better recognize what actually happens in VR and what kinds of inferences can be drawn about them.


18:55

Can you give an example of a certain privacy setting that could be customized? And how a user might customize it?


19:02

So an example of that is the sort of olfactory or aural settings that can induce a user to feel hunger or appetite, nudging the user to go to a restaurant or purchase something online or in the virtual reality setting. These kinds of customizations are very subtle. A user wouldn't feel, "Oh, this is really customized for my personal needs," because it feels really natural to them in the virtual reality setting. So those are subtle customizations that a user wouldn't really recognize, but they're happening in the background.


19:41

So you're saying that a user could, for example, say "I do not give permission for me to be manipulated to feel hunger or to feel like I need a certain thing" in these VR settings?


19:56

Yeah, I think it could be something like "Don't customize my audio settings" or my olfactory settings, something like that. I think that's really challenging from a virtual reality company's perspective, because what is the extent of that customization, for example? But at the same time, I think even just laying out those possible customizable settings is going to be helpful for the user to understand, "Oh, these are things that actually can get customized and can influence me."


20:28

An interesting concept that came up in this discussion is something called the privacy paradox. So what is the privacy paradox? And how would educating users about VR privacy risks help address this paradox?


20:42

Yeah, the privacy paradox refers to a mismatch between a user's expressed privacy preferences and their actual behavior. For example, somebody may say that they really care about their privacy and will do anything to protect it, but their actual behavior suggests that they don't really care about privacy. That mismatch is called the privacy paradox. And one of the causes of the privacy paradox is that the user is not very well informed about the privacy risks that can come from a certain behavior. So part of the challenge is to really inform the user about the possible ramifications so that the privacy paradox is decreased.


21:26

So the two components that you suggest in your proposal, interactive media that educates users about the hidden privacy risks, and then customizable privacy settings, these definitely seem like a promising solution. Thank you so much for drawing attention to the privacy risks of virtual reality, and for discussing your note with me.


21:47

Oh, thank you so much for the opportunity to speak today. I would also like to thank the community here at Berkeley Law, particularly Professors Paul Schwartz and Bertrall Ross, who have been really amazing mentors, and I would also like to thank the CLR staff, who made the editing process a very communal experience.


22:07

We hope that you have enjoyed this episode of the California Law Review podcast. If you would like to read Yeji's note, you can find it in Volume 110, Issue One of the California Law Review at california law review dot org. For updates on new episodes and articles, please follow us on Twitter. You can find a list of the editors who worked on this volume of the podcast in the show notes. If you're able to leave us a rating and review, we would greatly appreciate it. See you in the next episode.