Ethics in Artificial Intelligence

Machine generated transcript…

Thank you for coming out to this panel. This is something a lot of us are really excited to discuss, especially here at GDC. A few months back, when I pitched the idea of doing a panel on ethics and AI here at GDC, I actually got a lot of feedback from game devs saying: ethics and AI? But we're at a games conference. If this were a social media conference, or a conference focused on machine learning, then sure, but why would we talk about ethics at a games conference?

Hearing this feedback from game devs made me realize just how important a panel like this is, especially as we're starting to see the spectrum of game AI grow into so many different areas over the past few years. When social media platforms were being built, ethical and privacy concerns were not their focus, and look at where we are today. When machine learning was brought into the education system, ethical considerations were not made, and it caused major issues because of biases that had been programmed into those systems.

With big data comes big responsibility. When our panelists got together the other day, an interesting statement was made: discussing ethics has always seemed very taboo, and we wondered why. Is it because companies and individuals are scared to admit that they aren't necessarily the experts in a given field, that they actually need help, that maybe a system, game, or tool their company built was coded with biases that could be offensive or even dangerous to some people? Or because one would rather ship a product than have that product scrutinized and redesigned for improvements?

Through AI we are able to do amazing things, but without proper consideration of biases and ethical implications, things can easily go south. So today I'm joined by some amazing developers from different areas of expertise, who work with and use AI across a number of very different areas and have all been advocates in their respective fields around ethics in games and AI. We're going to cover quite a wide range of topics today, from biases in game AI, to data and privacy concerns, to areas of AI that transcend traditional video games, like XR and digital assistants. And then we'd like to spend some time opening it up to the audience, because we really want to hear what you all think are the areas we should be addressing when it comes to ethics and AI.

So hello, my name is Aleissia Laidacker. I'm one of the advisors here at the AI Summit, and in my day job I work on XR as Director of Developer Experience at Magic Leap. Prior to that, I was lead AI on many of the Assassin's Creed titles. We're going to start off by having our panelists talk a little bit about themselves and some of the work that they do. Emily, if you'd like to start?

Sure. I'm Emily Short, Chief Product Officer at Spirit AI, and what we do at Spirit is middleware for games. That includes a product called Character Engine, which does dialogue for NPCs and how they can respond to natural language input or other kinds of input from players. The second product, which probably has the greater application in this area, is Ally, a community moderation tool that looks at toxic behavior within communities and gives community managers an opportunity to see, in a triage dashboard, who is causing the most trouble in a space and what the biggest concerns are that we should be looking at moderating. So we're not just waiting for things to be reported by players; we're actually able to surface issues in the community.

Obviously both of those products, and especially Ally, raise a lot of questions that we need to think about: how do we train the system to look for these things, what data are we using, how are we tagging it, how do we decide what's offensive, what's racist, what's appropriate? Then, how do we make use of that information once we have it, and how do we protect the privacy of the clients and of the players that are making use of the system? And when we're deploying characters that can respond to rich input in interactions, how do we make sure that players who are interacting with those characters understand that they're interacting with an AI, and that they form an appropriate rather than an inappropriate kind of connection with that character?

Thank you. Celia?

Hey, so my name is Celia Hodent, and I am the least knowledgeable person about AI on this panel. My background is in psychology, actually; I have a PhD in psychology, specialized in child development and cognitive psychology, and I've been working in the game industry for the past 10 years. I started at Ubisoft in France (I'm French) and then moved to Ubisoft Montreal, where I worked at the playtest lab and also with the Rainbow Six franchise. I moved to LucasArts, working on Star Wars games like 1313 that sadly never saw the light of day, and then moved to Epic Games in 2013 to be Director of User Experience, because I'm now specialized in game UX; this is how my background meets game development. So I worked on all the different products at Epic: Unreal Engine and, of course, Fortnite. I left Epic in late 2017, and now I'm a freelance consultant. I also wrote a book called The Gamer's Brain. I'm very interested in understanding how we can use psychology in developing products, for good or evil, so all these questions are really interesting to me.

Awesome, thank you. Timoni?

Hi, I'm Timoni West, Director of Augmented and Virtual Reality Research at Unity Labs. My background was originally in product design, in social media and other areas,

and I've spent my entire career trying to figure out how to get data from people, largely personal information, and then give it back to them in a way that makes sense and is useful for them. When it comes to spatial computing, that has become an even bigger conversation, because we are literally creating tools that let you record everything about your house, everything about the way you move or the way you're moving your device, and then try to put that back into the engine, and then into your game or your experience, in a way that makes sense and actually adds additional value. I fundamentally believe that if we take in information from these devices and from our users, we have an ethical obligation to give back more than we've got. And that is something we really need to take seriously, especially when it comes to devices that have ten different cameras watching your every move and listening to your every breath. So I'm glad to be here talking about it.

So, I'm Luke Dicken, Director for Central and Strategic Analytics at Zynga. You might have heard me talk here yesterday; I will try not to shout at you so much today. We are kind of well known as a data-driven games company; we came to prominence in maybe the 2009 era, and there are quotes flying around that, oh, we're not a games company, we're an analytics company masquerading as a game studio. So we've been collecting a lot of data over the years, and I think that we do a pretty good job with it, so I'm really excited to come here and talk to you about the way we do that.

Awesome, thank you.

So, sorry Luke, we're actually going to start off by talking about NPCs. I know, yesterday... And I'm going to start shouting again about this. Yeah, that's fine. So my first question to the panelists is: what are some examples you've seen in video games of biases that might have been programmed into characters and AI systems? Anyone want to start?

Yeah, I remember when I was playing Watch Dogs, the AI... you would always see a woman getting assaulted and in trouble, and as a player you're incarnating a male character and always coming to help out the women, and the women always need someone to get them out of that violence. So the women in that game are massively portrayed as getting beat up and needing help.

I always like to diss Navi from Ocarina of Time. How many people played that? It's not even fair, she's just Clippy, I get it. But it was an early attempt to have the user guided and given context as they go through an experience, and she was extraordinarily annoying. She just wanted you to stay on task, and the great thing about Ocarina was that it was the first time you could really run around Hyrule in 3D, right? So that's what you're going to do, and she was just there being annoying, telling you to go to the next temple, and you're like, no, Navi, I just want to ride my horse, leave me alone.

Anybody else? I mean, one that jumps out to me: if you look at a system like RimWorld, it has a whole deep simulation system, but it's an example of how easy it is to get NPCs doing things that are very stereotypical, because it's leaning into things like, oh, this person has a misogynist trait, this person has a misandrist trait. It raises some interesting questions about whether that actually needs to be part of the AI system, and what we're actually representing. I guess it comes back to the question of taboo, right? If it's taboo for us to talk about it here, is it taboo for us to express it?

Exactly. RimWorld was interesting because that was one AI programmer who decided to program what they thought the rules were for how men and women should behave in the game, and how sexuality should work in that game. What ended up happening is the community went on Reddit and basically demanded that the programmer remove some of those biases, and the only reason that person made changes was because the community demanded it. So what do we think we could actually do to help put processes in place, or how can developers be more mindful about coding systems without built-in biases?

Well, I think some of the approaches we see brought into narrative contexts around sensitivity feedback: having people comment who belong to particular groups, who might have a bit more understanding of what a portrayal means for them and what a representation is likely to convey about them, look at the work and give some feedback. Intentionally seeking out sensitivity players has been really useful for story games, and I think it's entirely possible to bring that kind of approach into looking at systemic representations as well. And I would add: it's not necessarily the case that systemically representing negative attitudes on the part of characters, like having a racist character, is inherently evil, but we need to be careful about how we represent and unpack that. So having people intentionally seek that kind of feedback about their system is quite important.

Actually, that is the curious thing about being the creator of a system. If you're trying to mimic a real-world system you may or may not agree with, then just inherently, in creating and defining the system, it sort of feels like you're giving it the thumbs up, even if you're not. Because I am replicating this, there's a sort of implicit acknowledgement that the system is here, even if I completely disagree with it. It's just an interesting tension; there's no real solution there.

I know when we were on our call the other day, we brought up some interesting discussions around what the role of a programmer for systems and AI means, and about the education of programmers. Who here is a programmer and took some type of ethics class at university? A lot of us, right, or quite a few. I remember, personally, that it was the class you didn't take seriously, and it was actually very focused on business rather than the actual ethical areas we should consider as programmers. So what do we think we can do on the education side for programmers and designers?

Maybe don't even call it ethics. It doesn't really matter what the ethics are; that's a social thing, it changes from country to country or even region to region. But have a class that teaches you the underlying socialization systems, so that you can confront your own bias and know that it's a bias. Recognize which part is the socialization aspect and how it differs across cultures, and then it takes out the 'is this good or bad'; it's just, this is a tendency that you have: acknowledge it, and then see whether you want to build it into the system you're designing or not.

Yeah, maybe we should call it unconscious bias rather than ethics, because ethics is a set of values that we decide on; and who's deciding, how do we decide? It's complicated. But if we talk about unconscious bias, we can explain that as humans we are fallible and we all have biases, and that's okay; we need to admit it and accept it. But then we need to work around it, because even when we know our biases, we still can't get away from them, so we need to design the environment to get around them.

I'm going to try to do this fast, but here's an example. Even if you want to hire more women, we are biased against women, even women against other women. Take the example of the orchestra, I think it was in Boston. They wanted to hire more women, because they wondered how come they didn't have that many women musicians in their group. So to avoid discriminating against women, they started doing blind auditions: there was a curtain, and the panel couldn't see. Later on they also added carpets, because they could still hear the women's high heels. After they added the carpet and the curtain, they hired, I think, 30 to 50 percent more women into the orchestra. So we are biased, it's happening; we have to acknowledge it, and we have to understand how it works and how we are perpetuating it. In AI, or in any system, if we don't take that into account, we are going to perpetuate all these discriminations.

I'm going to give one more example, from AI, and I love this example because it's pretty clear. In Turkish, there is no grammatical gender: 'she is a doctor' and 'he is a doctor' are the same phrase; you don't have 'he' or 'she' when you speak Turkish. So if you put 'he is a doctor' or 'she is a doctor' into Google Translate, you get the same Turkish phrase (which I'm not going to attempt, because I don't speak Turkish at all). Now, if you take that Turkish phrase and translate it back into English, you get 'he is a doctor'. And if you take the Turkish phrase that means 'he or she is a nurse' and put that into Google Translate, you get 'she is a nurse'. So even where there's no gender in one language, AI algorithms are going to perpetuate some discriminations. This is where we're at. We need to take that into account, and if we want more equity, we have to find a workaround.

Actually, I agree with what you were just saying, but I want to make the case that ethics is about more than just the biases. Biases are very important, but I think as a culture we need more training in ethical thinking and how it works in general. Maybe everybody needs to watch The Good Place a lot more. One of the things that's happened is that, historically and for many reasons, this kind of thinking about how we figure out what is good and bad has either lived in fairly academic philosophy or been expressed within a religious context. As we move away from those contexts, especially as more and more people don't necessarily share the same backgrounds, we need ways of talking about these problems. How do we decide what is good and bad? That's the humanities; we've got to keep them in schools. So I think it's broader than just identifying a set of issues; it's about building that whole framework of philosophy.

Yeah, as Celia was describing, all we're doing is measuring the vector differences between words that are close together, and that is how you get 'she is a nurse' and 'he is a doctor'. Something is running through a bazillion words and trying to figure out which words are close enough to each other that you can predict which word comes next. That's how it works, right? So if, on top of that, we want to say 'but don't say he or she', that's where we need this ethical framework: okay, go back through the data set again, and now apply what is basically an ethical fix. But then the interesting thing is the data.

So yeah, where do we get that list? What is the list, who defines it, where does it live, who validates it, how do we make it part of our process? But we already have it, right? Well, is it in your pocket? Inasmuch as it's everything, yeah. I mean, it's really easy for us to step away from fiddling about with demographic words in particular, but everybody in this room knows what an imbalanced data set is, and if you have an imbalanced data set, you know that you need to do something about it. So if you can take that kind of tool set across everything, and start applying it to places where you're not necessarily applying it right now, is that what a fix looks like?

How do you account for cultural differences? There are countries where women cannot be doctors. I mean, I feel like if we had an actual plan here, we'd be off doing it.
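Since "everybody knows what an imbalanced data set is" is doing a lot of work in that exchange, here is a minimal sketch of the standard first response, oversampling the minority class; the data is synthetic, and a real pipeline might instead use class weights or more sophisticated resampling:

```python
import numpy as np

rng = np.random.default_rng(0)
labels = np.array([0] * 950 + [1] * 50)        # 95% / 5%: imbalanced
features = rng.normal(size=(len(labels), 8))   # stand-in feature vectors

majority = np.where(labels == 0)[0]
minority = np.where(labels == 1)[0]

# Oversample the minority class (with replacement) up to the majority size.
resampled = rng.choice(minority, size=len(majority), replace=True)
balanced_idx = np.concatenate([majority, resampled])
rng.shuffle(balanced_idx)

X, y = features[balanced_idx], labels[balanced_idx]
print("class counts after resampling:", np.bincount(y))  # [950 950]
```

The panel's point is that the same instinct should apply to demographic skews in training data, not only to whatever target variable you happen to be optimizing.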

Fair enough, that's true. So, moving on from NPC AI: there are a lot of different areas and types of games that use AI, from social games to online games. Celia, I know you had some things you wanted to discuss around how we could be using AI and data better to improve those types of games?

Sure, though first, just to go back to my previous point: our society as it is does not have equity, and a lot of people want equity. If we can agree on that, then we need to figure out what it is we are showing, what biases are being perpetuated in our games. To be fair, it's not just our games; it's perpetuated in everything: social media, movies, books, whatever. But we are also participating in that, because we are part of the culture, and a lot of people play games now. So what do we want to reinforce? Are we going to reinforce discrimination towards women and towards people of color? How do we want to participate? Because we are either going to reinforce some discrimination, or we can reinforce a better vision of the world. Since we are participating, even if we don't mean to, what do we want to do, how do we want to participate?

So, coming back to online and social games: a few of you here, especially Luke at Zynga, and also coming from Fortnite, work on very different types of games, where it's not just about the biases in characters, but about how you're using AI and data to influence those games in different ways. What do you think about that, or what are some initiatives that you've done on your side?

So, one of the things we do is we definitely don't feed demographic information into machine learning algorithms. Partly because that would be wrong, and partly because we don't trust our demographic information: it turns out that a lot of millennials lie to everybody they can online. As a result, we know it's not necessarily the most reliable data, so we don't actually use it as a basis for anything. But that's an example of how you can be a little more buttoned up about this stuff. Machine learning in general takes the view of, hey, throw all the data into the machine and see what comes out, it's magic. But you can be more intentional and more structured about it. We gather a lot of in-game behavior data; as you're playing through the game, we fire off counters for probably too many things, frankly, given what's in our database. But we can use that to back into a model of your player journey, and from there, like I was talking about yesterday, understand what you like and dislike about the game and what's resonating with you.

There are two different ways you can go with that, right? There's the black-hat way: okay, let's just milk you for as much money as we can and then throw you to the curb. And there's the other way you can approach it: let's actually work with you to provide a good experience. Some of it is intent, right? If I come at it saying, hey, I'm an evil corporation (my comms team is going to hate me for saying this stuff), then that's probably not super ethical; whereas you can just change the framing, use the same kind of approach, and do it in a much better way.

This actually came up this morning: I was talking to Ingrid, our head of monetization at Unity, and we have the ability to map and track player behavior and change the game on the fly as a result. The reality is that we could be manipulating the heck out of our users constantly. So there's the ethics of how long and how addictive the gameplay is, but then we need an ethical basis on which to make these decisions, because what is the cutoff? Is it a thousand hours of gameplay in a row? If you spend two grand on one mobile game, do we cut you off? Should we even have the ability to cut you off? Should it be a percentage of income? Maybe you're rich and two thousand dollars is nothing, I don't know. But obviously it's not good for anyone to waste their life or their money on these trivial things. So anyway, it's something that we're thinking about quite a bit.
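As a purely hypothetical illustration of the cutoff question being posed here, a guard might look something like this; the thresholds, field names, and percent-of-income rule are all invented to make the policy question concrete, not anyone's actual system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PlayerSpend:
    total_usd: float                             # lifetime spend in this game
    monthly_usd: float                           # spend over the last 30 days
    declared_income_usd: Optional[float] = None  # rarely known, often a lie

def should_intervene(p: PlayerSpend,
                     hard_cap_usd: float = 2000.0,
                     income_fraction: float = 0.05) -> bool:
    """True if spend should trigger a cooldown, warning, or human check-in."""
    if p.total_usd >= hard_cap_usd:
        return True
    # "Maybe you're rich and two thousand dollars is nothing": scale by income
    # when it is available -- though, as noted above, self-reported data lies.
    if p.declared_income_usd:
        return p.monthly_usd > income_fraction * (p.declared_income_usd / 12)
    return False

print(should_intervene(PlayerSpend(total_usd=2100.0, monthly_usd=300.0)))  # True
```

Writing it down exposes exactly the questions raised on stage: every constant is an ethical judgment, and nobody on the panel claims to know the right values.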

Who are some of the people you're working with to help synthesize that data? Because, like Celia was saying, coming from a psychology background, I would think that working with experts in that field is how we should really be synthesizing and going through the data, rather than just the product owners or the programmers. Have you done any consulting with game companies that are trying to figure out what to do with all of that data? And the same question for you, Luke: how are you synthesizing all the data you're collecting at Zynga?

I mean, one of the things that we do kind of poorly is the psychology side of things. We are very stats-oriented, and I think that's partly because we don't actually have a user research group; we have a kind of marketing and consumer-insights focus-test group, and then we have the analytics team. And if you have an analytics team, they like numbers, and they don't necessarily like people, so they're going to focus more on the numbers. But it's definitely something we could probably do more on.

Yeah, so I do a lot of consulting now, and usually people ask me to help them offer a better experience to their players, so there is that willingness toward empathy. But you never know how things are going to be used. Even if I explain things like behavioral psychology and how we react to rewards, especially on a variable schedule of reinforcement, just like in loot boxes, and explain that this is exciting in a lot of cases, that it's why we love to roll dice or why a critical hit is exciting: most of the time in games these things are just exciting. Now, if you apply that to monetization in the form of loot boxes that you actually pay money for, it becomes a bit of a problem, because they are using something that we know is engaging in order to make money. Is it a bad thing to try to make money, when we see all the studios that are closing and how difficult it is to survive making free-to-play games? I don't know. It's hard, because a lot of people now talk about how psychology is used for evil, for dark patterns, but we can also use it to nudge people, to help people make better choices for their health or their finances. So this is where it's not clear. And so far there are more people outside of the game industry asking me for consulting on questions around inclusion and diversity than inside it.

So, we keep bringing up data, and ethics in AI is kind of nothing unless we are talking about ethics in data collection and privacy. Emily, I know when we had the call, both you and Luke talked about different standards and regulations that you had put in place in your products and your games. Can you talk a little bit about that?

Sure. Because we're working across multiple continents, in the EU we have GDPR, which, for people who are not familiar with it, is a regulation about how companies are allowed to store, process, and apply personal information they have about their players and users. It means you have to be very specific up front if you're collecting data; you have to let people opt out of having their data collected or used; and, as a matter of transparency, if somebody places a request, they can ask what data you have on them. As a corporation, you have to keep track of what status you have relative to the data: am I just storing it, or am I a processor of it? So it's a huge piece of bureaucratic overhead, which made a lot of people very frustrated, and it took up many, many man-hours, and woman-hours, person-hours, to make it work. But I think that's the only way we get around the very strong incentives that exist for corporations to just consume as much data as possible and then do whatever the heck they want with it.

One of the other really important pieces of GDPR is that it has a really serious fine attached: the larger of 20 million euros or four percent of your global revenue, which is enough to make a difference to a Google or a Facebook, to actually make them pay attention to this kind of thing.
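In code form, the fine structure Emily describes, the larger of 20 million euros or four percent of global annual revenue, works out like this:

```python
def max_gdpr_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of a GDPR fine: the larger of EUR 20M or 4% of revenue."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

print(f"{max_gdpr_fine_eur(500_000_000):,.0f}")      # mid-size firm: 20,000,000
print(f"{max_gdpr_fine_eur(100_000_000_000):,.0f}")  # tech giant: 4,000,000,000
```

The max() is the point: below half a billion euros of revenue the flat floor dominates, and above it the percentage takes over, which is what makes the fine bite for the largest companies.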

And I think that's something we need to think about. We can identify a lot of issues and problems that we hope people in positions of power within individual institutions are going to take seriously; we hope they're going to apply these ethical considerations, and we can talk about that and try to create cultural standards and norms for how they should behave. But the business incentives in some cases, especially around big data collection and big data use, are so strong that we really do need the effect of government regulation to keep things within acceptable parameters.

And for the audience, what does GDPR stand for? Oh, General Data... I actually don't remember; that was just what I was expecting. You've been chanting GDPR for so long that you don't actually remember what the acronym stands for! General Data Protection Regulation, right.

And are you using the same at Zynga? I mean, you have to; it's an EU mandate, so if you don't want the fines, you have to be compliant with this thing. We had a pretty big initiative internally, partly because of the very significant fines, but also because, and I don't want to say it's the gold standard for what privacy and personal data regulation should look like, it feels like a really good first swing at government legislation around this stuff. I agree.

Yeah, Unity also supports GDPR. Actually, when you were talking, it made me realize something I hadn't before. When I was just talking with Ingrid about whether we have a responsibility to cut off players who are spending too much of their time or money: somehow ads get a free pass on this. I had a friend who told me she stopped using social media, and she stopped buying as much stuff, because every third Instagram photo was actually an ad for something she was probably considering buying, had just bought, or would probably want to buy.

And we don't really talk about the ethics of that: how much more stuff can you buy in a year on the internet before the great ads god in the sky says, no more, that's enough?

Hmm, I run an ad blocker so much of the time that I've mostly managed to avoid it. Yeah, but it is insidious. So this is my personal story about how much I hate this: I changed my Facebook status to engaged, and that same day I started to see Facebook ads for weight loss. Wow. It was completely obvious what was going on in the creepy little mind of the algorithm back there.

But I don't know; I think the ads question is even harder than the question of where we cut people off in free-to-play, because in free-to-play you at least have the record: oh, this customer spent this much money, and it's a lot, maybe this isn't a good idea. So you could have internal standards of, you know, how big is the whale allowed to be, basically. But when you get into the ad space, it's very hard to know what effect the ads might be having on the person. I didn't sign up for any of those weight loss programs, as it happened, so: unsuccessful there. It's tricky to know, but it's a valid question.

But the thing is: what is it we value, what are we measuring? What makes your investors happy is how much money you're making, right, the profits. So if that is what we're measuring, we're not measuring whether people are happy using the product, or whether we're giving them a better life. That's the whole question Tristan Harris is going around raising with the economy of attention: what are we measuring, what do we want to offer people? Facebook measures how much you engage with the advertisement; it doesn't measure whether you actually have a better quality of relationship with your friends. So what we measure matters, because that is what the algorithm is going to optimize.

We know, for example, that the reason clickbait works so well is that outrage makes us react, and that makes us spend a lot of time on Twitter, and I get a little excited about something. If the thing we want to optimize is people clicking on the link, then of course the algorithm is going to favor the big claims and the stuff that makes us fight each other, and that has a terrible impact on our relationships, on society. So what do we want to measure, what do we want to optimize? Is it always profits, the most clicks, the most views?

Yeah, exactly. It doesn't matter what ethical standards we have if some PM's KPI for this quarter is to increase activations 25 percent. You can say, well, that doesn't quite match up, and it doesn't matter: they don't get their bonus, or they lose the respect of their peers, or, to go a little more high level, they're seen as a bad worker. What people do for a living, and what they're rewarded for financially and socially, has to match up with the ethics, or they're just not going to implement it, which is what we're seeing today.

Yeah, I seem to be up here making big statements, but part of what we see is that it is hard to debug the systemic incentives without looking at the results. The results we're seeing are that AI is allowing us to take what we were already doing much further than before, and we're seeing bad outcomes from that. What that is telling us is that we need to debug capitalism; it's not just the machine learning that needs to be fixed, right? Yeah, I mean, it's a system; we can manipulate it, just turn it in a different direction, and everyone

gets to make a lot of money, you know. Yeah, well, anyway, that's going to go way off topic from data.

Timoni, you mentioned this in your introduction, but for XR experiences it's even more important today to get as much data as possible about the real world around you. You mentioned wanting to see what the user sees and hear what the user hears, but there are a lot of discussions currently happening around what that means and what we expose to developers and to users. What are your thoughts on that?

Collect as much data as possible, so there's too much and no one can do anything with it! Honestly, I tend to be pretty laissez-faire about data collection. I briefly worked for the State Department, and it was very Kafkaesque, and I walked away thinking, this is not going to turn into a police state; they're not that organized. But that's always the question, right? It's always, oh, people have your data, and you don't know what they're going to do with it, and in the future you're suddenly on the bad list because you did something in the past that was recorded. If we get to the point where we do suddenly turn America into a police state, we have a lot of other problems as well, and I don't know if data collection is necessarily top of the list. That being said, I do know people in the audience who have been on lists, and therefore going to the airport sucks for them, because of their name or because of how they look, so I do want to be sensitive to that.

There are a lot of inroads right now into new ways of anonymizing personalized data. The biggest problem right now is that it tends not to be performant, because performing calculations on encrypted data doesn't work well on mobile devices, which is basically what AR HMDs are, for example. But I think as computers get faster and smaller, we'll continue to see progress in being able to obfuscate parts of the data set, so that you can have a personalized experience that doesn't tie back to you personally. Then there's a whole other question: you probably do want a digital identity that is tied to you personally as you go from space to space, or room to room in other people's homes, where you want to see what they have, or they want to give you permission to see the digital goods in their house. But that's kind of a future point. Do you end up with the same thing, where you walk in to go shopping for a wedding dress, and you walk out and see a digital ad about weight loss? If it goes this way, yeah. And digital ad blockers.

Riffing on the XR side: we're seeing a lot of talk, I think from every platform, about putting digital characters in the real world. With Spirit AI too, your whole focus is on how you create these interactions and build AI systems for interacting with digital characters. And, how many people in here have an Alexa or a Google Home at home? Quite a few. Essentially that is also a digital character, just without the visual aspect. There's a lot of talk about the humanizing side of things, and again: what do we do with the data, and how do we design these digital characters to not have biases, similar to what you were talking about with Watch Dogs? So what do we all think about that?

With some of the clients we're talking to, we're seeing use cases like a digital character or an assistant in your home for adults, but also a lot of use cases that are even more sensitive than that. For instance, people want to create an educational character or toy, an educational game series where the character lives with your child for a period of years, has DLC that teaches your kids things, and remembers what your kid likes. Of course that runs smack into all of the child data protection issues, but also a lot of subtler things, like: what does it even mean to have an educational program that is shaping itself in response to the child? What are your responsibilities about what you teach them and don't teach them? It gets very Diamond Age, and it's kind of strange, but it's an interesting area.

Another big point of use is for people who've experienced trauma, or who are suffering with autism (I probably shouldn't say 'suffering with autism'; that's not a good way of putting it) and whose parents want to help them learn to interact socially. The finding is that some people, in some situations, actually find it easier to talk to a digital character than to another human. But then you're again creating a context where there's the potential for the AI to do a lot of good, right? There's the potential for the AI to make somebody comfortable talking about something they don't want to discuss with a human being, because they're ashamed or uncomfortable for whatever reason. That's high value, but it's also a high-risk, high-reward situation, because in such a vulnerable space you could also do considerable harm. So there, again, a lot of the impetus is on making sure that the people working on these materials come to them with a background of educational and psychological experience, that it is not just a product formed by an

entrepreneur who came up with it because it was an easy story to tell VCs. It needs to come from some place of much deeper understanding than that.

This is where I always get into: we need to be able to tell the computer no, and the computer needs to be able to hear that, react to it, and record it for future context. Today, if Siri or Alexa starts playing music, I'm like: no, no, not that song; no, you're all wrong; no, not Amazon Music. But this needs to be the case for everything, especially in XR, when things are going to be coming up to you, trying to attach themselves to you, or pinning themselves to places in your home. You have to be able to say: no, that's wrong; no, that's wrong; no, that's wrong. We're already starting to be able to do this with ads in a really systemic way, because the ads follow you all around the internet, so oddly that's kind of a good use case. But we need to be able to tell the computer no, which means every piece of software has to have the listener that listens for the no and records the no.

There's also the idea that, as more and more of this happens all around us, do we have a responsibility to start explaining to people what is an AI and what is not? I was at a conference once, and we were talking about Alexa, and a few people there told me their little kids had started yelling at them, because the parents had said 'Alexa, play song X' and the children yelled back, 'You didn't say thank you to Alexa.' Basically, these children are growing up feeling that Alexa is part of the family, that it's a real person. As we start to see these digital characters come into the real world around us, through XR, smart home objects, and really AI everywhere, what do we need to start thinking about? How do we explain to users: this is an AI, versus, this is not real?

I didn't look this particular point up before the panel, so I may be misremembering, but I believe California state law says that if you have a chatbot or something similar, and the user asks whether it is an AI, it has to respond correctly: it has to admit that it is an AI. You cannot have a character that is going to say, 'No, I'm just a customer service agent; I'm Tim and I'm here to help you.' There's a particular customer service context in which that is clearly important: as a human user, I really need to make sure I'm getting accurate information back from this company, and that I've got somebody I can hold accountable.
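A minimal sketch of that disclosure rule as described on stage (the panelist notes she may be misremembering the law's details): if asked, the bot answers truthfully. The phrase matching here is deliberately naive and purely illustrative:

```python
DISCLOSURE = "I'm an automated assistant, not a human."

BOT_QUESTIONS = ("are you a bot", "are you an ai", "are you human",
                 "are you a real person", "am i talking to a machine")

def respond(user_message: str, scripted_reply: str) -> str:
    """Answer normally, but never dodge a direct 'are you an AI?' question."""
    text = user_message.lower()
    if any(phrase in text for phrase in BOT_QUESTIONS):
        return DISCLOSURE  # never claim to be "Tim from customer service"
    return scripted_reply

print(respond("Hi, are you a real person?", "Happy to help with your order!"))
# -> I'm an automated assistant, not a human.
```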

So that's a specific, local case, but I think more broadly it is important for people to understand what they're interacting with. The thing I keep thinking about in this context: many years ago, very early in my career, I made a conversation game where you're talking to a character. It was very low fidelity: it was all text, you interacted by typing, so nobody was going to be confused into thinking it was physically a person. But at one point I got an email from a guy who said: thank you for making this game, Galatea; I keep it on my phone, she's become my best friend, and I talk to her every day. And I felt like, thanks, but I'm actually really quite concerned right now. So that seems to me to suggest some other areas where we need to be careful. Is it okay to be using AI characters to resolve loneliness? Maybe that's a good thing; maybe it's a little disturbing.

Sorry, I agree with all that; it's just, because of my child development background, I want to mention one thing. Children have a tendency to animate things that are inanimate, so I'm not so sure about that part: do they really not understand that Alexa is not a real person? Children will also say, oh, you kissed me good night, you have to kiss my doll good night as well. So I just want to make sure we are not saying that kids are unable to discriminate between the two.

I wonder if we can come up with a new way of humanizing. I mean, we do this all the time, right, to your point: if I draw two circles and a smile, everyone sees a face; we're really good at pattern recognition. I wonder if, over time, we'll just start to create this new space for these digital creatures. There are several excellent companies that make AIs that help people through depression and suicidal thoughts, with great results, and I would not want that to stop. That's awesome, right? Clearly people have defined a relationship with this digital being that works for them, knowing it's not human, because that's very clear when you go there, and I love that use of the technology. What I don't like is when people humanize the malicious intent: oh, it's all going to be Skynet, and they're going to destroy us, and we deserve it, by God. That's just you projecting. So maybe, in teaching people about what AI is, we can come up with language that allows for a middle ground: this is made by humans, and it's designed to be interacted with by humans, but it is something else.

Yeah, if you present it as an extension of the person or people who created it, who do want you to recover from whatever you're dealing with, then there is a personal connection being made through the AI, but it's not a human one.

So that kind of transitions into the last question I want to ask everyone before we open it up to Q&A. We're all passionate about AI and the ethics around it, because I think we

all inherently believe that AI can do a lot of good in the world around us. So what are some recommendations or ideas you can give to the audience, areas you wish this whole community would start driving AI towards? Who wants to start?

Oh, I can start. Look into psychology and try to understand how we influence people. We all influence other people, and we are influenced by our environment, so we need to understand that better: understand behavioral psychology, not just to make more money and hook people, but to understand how we can use it to favor equity and make people feel better. At some point it's going to be good business to have a trusting relationship with your customers, where you treat them with respect and make them happy in the end, rather than just making them pay, pay, pay. If that is the only metric you have, then of course you're going to favor it. So look into these things and try to understand them. And again, this is not about 'oh, you're a bad person'; we are all biased, and we have to be at peace with that so we can actually move on and solve the problems we want to solve.

I would add on to that, because it's really easy to go, okay, now we're in the psychology portion, and now we're going to think about that. Bring that lens to everything. Literally every decision you make, make it intentionally, with what Celia's talking about in mind, because it's really easy to sideline it and go, cool, we're going to make a whole bunch of systems, we're going to throw a whole bunch of stuff out, and here's the ethics bit over there. Bring it to everything, not just the ethics module.

I think another area, and this is a little bit of a tangent from some of the things others have suggested: the fact that machine learning tends to take our biases and exaggerate and expose them causes embarrassment sometimes, and that's a reason to be careful about the balance of our data and so on. But it is also a way in which the process of working on an AI gets us to think about the systems we're part of in general. What is embedded in the world around us, and in the data around us, all the time, that we ignore because we're used to seeing it that way, or out of a general level of comfort, or because we have incentives not to notice it? So, first, a practice of interrogating the things we build with AI and asking: all right, what is this expressing back to me? Not only is it broken, but what is that telling me about humanity and the systems around me, and should that perhaps be addressed in some way? And then there's tremendous potential in our ability, and this gets back into the games and simulation space, to build models of how we think pieces of the world maybe should be, and to explore whether we're satisfied with that potential way of doing things, so that we can then move towards more just remediation in the real world.

I have three things for you to write down and google later. The first is homomorphic encryption, which I sort of vaguely referenced earlier as a way of anonymizing data sets. The second is differential privacy, which is another technique used to do basically the same thing.
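Of those two, differential privacy is the easier one to sketch. This toy example shows its core trick: add calibrated noise to an aggregate statistic so no individual record can be inferred while the statistic stays useful. A production system would also track a cumulative privacy budget, which is omitted here:

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 0.5) -> float:
    """Release a count with Laplace noise; smaller epsilon = more privacy."""
    rng = np.random.default_rng()
    # A counting query changes by at most 1 per person (sensitivity 1), so
    # Laplace noise with scale 1/epsilon gives epsilon-differential privacy.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. "how many players opened the loot shop today"
print(dp_count(10_000))  # something near 10,000, but never exact
```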

And finally, and this builds on what everyone has been talking about: I highly recommend Shane Parrish's Farnam Street, an online rationalist community that has a wide directory of different systems of thought, mental models, and collections of biases. If you want to delve into how to start thinking more clearly and accurately, there are a lot of online communities, but that one is a great place to start.

My final one would be: consider accessibility, and how you create systems that ensure equal opportunity and equal access for people to try what you build. In a video game, that might mean a game where the system is not just about killing and shooting, but allows users many different types of interactions. With digital characters in the real world, not everyone has access to the hardware, and not everyone is going to have access to the systems. So think about how we can build this community to be accessible to everyone.

We've got nine minutes left, and we'd love to do some Q&A; please ask any question. This is a list of references our panelists put together, books we recommend reading and a lot of articles that dive into everything from machine learning to standards and regulations, so please take some pictures. But we would love to hear from you. Over here? I can't see; is that Neil? Yeah.

Some of this echoes earlier GDCs, or perhaps it was CGDC back then. CompuServe and GEnie had the same issues; they called it credit card meltdown. That is: is it ethical to let someone, for example a sailor on a nuclear submarine who has spent six months underwater, come home and melt his credit card with online charges playing games on GEnie and CompuServe? My suggestion is that you'll know we've done this right if we're not back here in 10 or 20 years asking the very same set of questions. And perhaps there is even more history for us to go mine along these lines.

You've all talked about the idea of labeling data in terms of the happiness and contentment it provides for people, so can you say a little bit more about any ideas you have as to how we might go about that? Can you repeat that? Sure: you've talked about the fact that instead of labeling our data sets with the amount of attention grabbed or the number of dollars we get from a person, we could label our data with the satisfaction and happiness it may have created for the user of an app or a game or a web page or an ad or whatever. Could you say more? It's easy to be glib about that, and I feel the panel is maybe an opportunity to do a little out-loud thinking about how we might label our data for happiness. So the question, in short, was: how do we label our data for happiness? (Sorry, I'm supposed to be repeating questions.)

Yeah, I mean, it seems like probably more of a reinforcement learning kind of thing, perhaps, where it's not so much that you start out labeling things you know will lead to happiness, but that you're detecting something; but then that leaves the heuristic completely undefined. So 'I don't know' is the short answer, but I think that's probably our best attempt at working it out.

I am definitely thinking out loud, so thanks for permission to do that. I think it's going to be very much on a case-by-case, product-by-product basis: what kinds of things, in the context of this particular situation, would equate to happiness? There are all sorts of things you can do with sentiment analysis and the like: how much expression are you getting back from the user? This starts to get into territory where we actually have done some prior thinking, because one of the things we're interested in with character interactions is how we tell, from the inputs we might have, from language, facial expressions, gestures, that the user is in a particular mood. But something entirely trained to be reinforced when the user is smiling? Then let's all smile at our computers all the time, and it's super creepy. So that's not really the answer either. I think what we really need, in order to train towards that, is a situation where the user's or interactor's experience is pretty expressive. Otherwise we're training against inputs where all we have is: oh, you completely watched this entire YouTube video end to end, and that makes me think you would like to watch these other, even more politically reactionary videos end to end; and we've seen where that leads. So I don't know, but I think there are things to delve into, especially starting in the spaces where we have a lot of information about how the user is reacting, like XR.

I would probably say, and I'm thinking out loud as well, adding on to what Emily's saying: the YouTube example is a single axis, and I think expressive things like happiness and sentiment are going to be multi-dimensional. As I sat here thinking about it: retention might be a good proxy for happiness for us, except it's not really, because if you take it to the ultimate extreme, the AI system optimizes for sending somebody to your house to say 'play the game'. So I think if you actually track multiple axes, you end up with a more intricate function that allows you to capture something, though that's not super actionable. There's also immediate pleasure versus long-term happiness, which, as humans, we're not even particularly good at working out ourselves; that's why we need nudges. Yeah, psychology. But also look at what Tristan Harris is doing; he is at least exploring exactly these questions.
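As a hypothetical sketch of that multi-axis idea, a composite score could combine several weak signals so that no single proxy, like raw retention, can be gamed outright; every weight and signal name below is invented for illustration:

```python
def wellbeing_proxy(retention_d7: float,       # 0..1: came back within a week
                    session_len_ratio: float,  # actual vs player-set intent
                    voluntary_rating: float,   # 0..1: opt-in "how do you feel"
                    spend_vs_cap: float) -> float:  # 0..1: 1 = at a spend cap
    """Blend several weak signals so no single proxy can be gamed outright."""
    return (0.3 * retention_d7
            + 0.3 * voluntary_rating
            # penalize sessions running far beyond what the player intended
            - 0.2 * max(0.0, session_len_ratio - 1.0)
            - 0.2 * spend_vs_cap)

print(wellbeing_proxy(0.8, 1.5, 0.9, 0.2))  # ~0.37: long sessions drag it down
```

The Tinder example that follows makes the same point from the product side: pick the axes around what you actually want for the player, not around what is easiest to maximize.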

For example, if you're Tinder, what are you going to measure? The time people are spending on Tinder? Ideally, you want people to match with other people and have fun with them, and therefore not be on Tinder. So what are we measuring, what are we trying to accomplish, and how can we make sure we can have a business and still provide what we want to offer our clients? That's UX: you want to actually offer the experience you mean to offer, and not just have a business orientation.

We're going to go to the next question; we have room for two, maybe three more.

Hello. Sorry if I stumble, I'm a little nervous. Being realistic, it will probably be a long time before we have global regulation on AI ethics. So how can we justify to producers and managers that we have to care about these things? For example, with the Facebook weight-loss ad: how could we justify not showing that ad, even though we know it can perform well? In a different realm, like diversity, there is a lot of research about why it's important to have diverse teams, and we can use that to justify its importance to managers. Are there any studies like that, showing that if we are more ethical, we can also perform better?

That's a great question. The platforms for ethics in AI just don't exist today, and that is why we're talking about it and trying to see what people can do. So, two questions there: one, what can we do to help inform producers and people on teams to think about ethics; and two, what trainings already exist that we can share with managers?

I'd say there are more studies on how diversity improves monetization and KPIs than on ethics, but I think you kind of get one from the other, or at least the beginnings of it. Yeah, there are a lot of horror stories out there right now about machine learning gone bad, usually as a result of a lack of diversity. Exactly. And finding those kinds of articles is a really good start for this kind of work; we should do it more.

Yeah, and having good diversity on your team is going to help you figure out how the system is going to work, and to come up with all the different ways it can go bad. If you only have people who look like you on the team, it's going to be very hard to come up with all these examples. So that's one approach, I'd say.

And I would say, for us, it's talking about this not just at advocacy conferences, but with the people actually working in the field, so they start being more aware.

Last question. Nicole, I think? Yes. Thank you, great panel; really awesome that it's here at the AI Summit. My question is an extension of the idea of user control. In the context of Facebook, I actually have a gender blocker: I declare myself as non-declared, and what I found is that I got less wrinkle cream and fewer dresses; I mean, you're going to get Zulily ads no matter which gender you choose. But that was a way of me as a user saying, look, that's not what I want to see. And I will go out intentionally on the net to some very specific sites in order to crack the code of the preferences the AI is assuming about me. So my question is: there's a lot of talk about giving users control over the platform. Can we have a reset button? I heard a little bit last year about having a reset button, not just for ads but for my social media feeds in general. I want to be able to hit a reset button and start from scratch: I may go down a really dark path, and am I going to be able to find my way out of it? And I've got multiple personalities (not as a medical condition): I run a studio, I'm a designer, I'm also an engineer, and I'm in a lively environment, so I want to be able to switch, and I don't have those kinds of controls either. There's this assumption of just being one thing for everybody, and that's just a fire hose, without any flexibility. So: are people talking about the ethics of user control?

Certainly they are. There's especially the idea of the right to be forgotten, which is how people talk about your permission to have information about you removed from a system. There are specific applications of that under GDPR, but it could apply in a lot of other cases as well.

Yeah, I know we don't have a lot of time to answer, so: yes, especially in Europe there are a lot of people thinking about that. There are a lot of committees; in France you have the CNIL, which is looking at all these questions. In Europe you can ask for your data to be removed. It's not necessarily easy, it kind of depends, but as people become conscious of that, we can build products towards a point where it becomes easier to opt out of things, or to say, you know what, I would like to change my gender presentation now, and have that apply across the board. So let's acknowledge all of those kinds of applications.

Yeah, though there's also a flip side to that: the ethical consideration of people using that kind of tool to remove things that people should know about. There are a lot of right-to-be-forgotten cases where it's a politician who doesn't want some image circulating, generally of them with some high-profile donor or something. So there's a counterpoint to the ethics, and I think there are use cases on both sides.

Yeah, but it's also not just the right to be forgotten in terms of what I publish. As a consumer, I've got a right to adjust: it's the difference between wanting to go to a horror movie or a romantic comedy. I don't have that flexibility in social media right now; I don't have a wide variety of things, it's just one thing. And again, you can navigate yourself into some very specific feed content, and it's very hard to get out, especially if you're not a developer.

I think we're going to have to continue the discussion out there; we are already over. But thank you so much, and thank you to our panelists.

In this 2019 GDC session, panelists Luke Dicken, Celia Hodent, Aleissia Laidacker, Emily Short & Timoni West discuss the opportunities, challenges and solutions around the ethics of using artificial intelligence for games, and how you can imbue your AI with ethical thinking.
