The Ethics and Governance of AI opening event, February 3, 2018



So this is the first substantive session of the Ethics and Governance of AI class that the Media Lab and the Berkman Klein Center are doing together, and I'm co-teaching it with Professor Jonathan Zittrain, who will be speaking after me. I thought I'd start out by talking a bit about why we're doing this class, what I hope you think about as we go through it, and maybe touch on and frame some of the things we'd like to work on together.

As we said at dinner yesterday, this is such a new field that by the end of this class you will know more than 99.99% of the people who think about it. Just like any new field, and I think Jonathan and I are old enough to have been in that period of the Internet when there literally were only a handful of people who knew enough about each part of it to help get it started, it really is like that. AI broadly is a fairly large space with lots of people working on it, but this field of AI governance and ethics, and I wouldn't say it's just this room, involves a small enough number of people thinking about it in a smart way that it really is an opportunity to contribute. So I hope this will kick-start some of you to make this, if not your main thing, at least a peripheral thing you're interested in.

I want to start with this image, which comes from Lawrence Lessig's 1999 book Code and Other Laws of Cyberspace. This was a really important book, because it's the first book where he describes the relationship between code and law; we argue about whether to call this "Lessigian" or "Lessigonian," but the idea is that the behavior of an individual, the behavior of society, or the future of civilization is somehow governed by four quadrants. He's a lawyer, so he put law on top. Architecture is the technical architecture; norms are the values and norms of society; markets are the businesses, the economic reasons for things to happen.

The example he gives in the book, which is a good one, is a street where you want to get people to drive slower. You could put in speed bumps, which would be a technical intervention, or you could post a speed limit and have a police officer enforce it, which would be a legal intervention. In many cases you have the choice: you could use norms and make it very clear to everybody in the neighborhood that driving fast is not a cool thing, or you could use the market and create some sort of car where, if you drove fast, you'd have to pay extra. So there are different ways to intervene.

Interestingly, in this class we have a lot of engineers, a lot of lawyers, and also philosophers and neuroscientists and a bunch of different people. You would probably put yourself in one or more of these quadrants as a professional, and as you try to affect the behavior of people or the future of society, you're probably using the tool you're most familiar with, like the speed bumps. Last year's class was more broadly about internet technology and society; this year it's going to focus a little more on AI.

One of the best conversations I saw was one of our cryptographers talking to a lawyer. At the beginning, the law to him seemed like the laws of physics, something you had to build technology around; but to
the lawyer, cryptography seemed like a brick, or maybe something bigger like a dollhouse, that they had to build the laws around. It turns out that cryptography is like putty that you can shape and do really interesting things with, and that the law is also the product of a bunch of conversations, designed by people. When the lawyer and the cryptographer got together, they realized that there are many problems that could be solved either by cryptography or by law, but even better by putting both of them together. So through the course of this course, you will hopefully start to see all four of these quadrants as areas you can use to think about how we might affect the future.

I'll come back to this toward the end, but notice that I didn't say "to make the world a better place." I think one of our problems is that we often assume we're all on the same page about what we want to do, and that's where the ethics stuff comes in. Can you play that video for us?

[Video plays:] "We human beings have played games against computers. At first computers struggled, but then they started winning, and now they've become so dominant that they're raising doubts about the future of humanity. The latest emblem for existential dread is Google's DeepMind project, which created AlphaGo, the AI program that's become unbeatable at the most complex strategy game on the planet. While the game of checkers has 10 to the power of 20 possible outcomes, and a game of chess has 10 to the power of 40, Go has more than 10 to the power of 80. AlphaGo is trained to analyze situations itself by breaking the game down into tiny parts and visualizing possible moves. Last week it played the world's best Go player, nineteen-year-old Ke Jie. With the help of a human handler, AlphaGo beat him three times and, after doing what it was designed to do, retired from the game. [unintelligible] If a system like AlphaGo can learn the moves in Go well enough to beat a person, then it has the potential to replace lawyers and accountants, among dozens of other jobs. It might be perfect, but it has no way to navigate human politics. The Chinese government banned the live stream after Ke Jie lost the first game; the loss to an American company was an attack on the country's pride. The Chinese government has made a big effort to proclaim that they are moving ahead rapidly in artificial intelligence, that they will be the people who dominate AI, so to have the dreaded Google come in and beat China at its own game is just piling insult on top of insult. It's kind of amazing, but it was more than a national crisis. [unintelligible]"

Okay. So first of all, the science is all wrong. Just to be clear, it doesn't compute all of the moves; that's actually impossible, because there are more possible moves than there are atoms in the universe. So that part's wrong, and the way they describe it is really not correct. It actually is fundamentally different from chess: because it can't compute every move, it's doing something that looks a lot more like creativity, a lot more like intuition. We'll go into the AI side of this in more depth later,
but first, as commentary, the science is wrong. The second piece is to notice how quickly they move it into a US-versus-China framing, which I think the media tends to do. We'll have conversations later about whether that framing is true, and if it's true, what we do about it. It's also interesting because they said it can't do human politics. Well, that's not necessarily true, and this is something we should talk about during the class, because some people do believe that a lot of what politics is, not all of it, is a game, and to the extent that computers get better and better at winning games, it's possible that machines may do more than we expect.

Can we advance back one? Okay, I'm going to talk over this one. We actually have somebody here from OpenAI, so I'll use this. OpenAI is a nonprofit in Silicon Valley funded by Sam Altman and Elon Musk and a few others. This is a multiplayer game that is very, very complex, and OpenAI has been able to win at it in tournaments. There are other things they've been doing at OpenAI that show that these machines, by playing against themselves, much like AlphaGo did, but in some cases with even less supervision, can start winning at all kinds of games. I don't know how much of it is publicly available, but I'll just say that this is a very complex game, and they're also doing things that involve lots of physics. I don't remember exactly what they said I can repeat, but they have a rough belief that machines will be able to win at any game against humans pretty soon.

The meta point here is that the particular person I talked to said something like, "it's the end, or the beginning, or something, but it's a big deal." And it is a big deal, because a lot of things are games: markets are like games, voting can be like a game, war can be like a game. So if you can imagine a tool that can win at any game, who controls it, and how it is controlled, has a lot of bearing on where the world goes. On the other hand, when I probed, a lot of people said, well, if it can win at any game, it's a superintelligence.

There are roughly three categories of artificial intelligence whose safety people worry about. There's artificial intelligence broadly, but inside of that there are three. One is AGI, artificial general intelligence: things like OpenAI's work, where you can point a system at a general problem without giving it much detail, like all of the Atari games, and it will figure out how to win, and it will win. It's a very undirected, general intelligence, and AGI is the idea that you make something so general that it can solve just about any normal problem. ASI, artificial superintelligence, is the thing that Nick Bostrom and Elon and others are afraid of: the machine gets so smart that it starts training itself and gets smarter and smarter until it's smarter than human beings.

But the "intelligence" part is actually kind of interesting, because when we say intelligence, what does it mean? When I talk to a lot of my friends in Silicon Valley about the future of this, they say, well, the machines will win. And I ask, what does it mean that they'll win? Well, life is a game, and they will win. And that's where I realized, okay, there are at least two
categories of people in the world. There are people like one of my friends, who measures exactly how many hours they need to spend with their wife, knows exactly the balance of the happiness they get from their money versus other things, and can basically describe to you, in metrics, how they measure happiness, so that if they can optimize for happiness, they win at life. If you believe that life is a game you can win at, then you can probably imagine a computer beating you at life.

But I believe life is not a game. I believe that I'm a bunch of chemicals and molecular interactions, and every morning I wake up and my endocrine system tells me what I yearn to do that day, and my life is about trying to fulfill the yearnings that come through not just my endocrine system but my relationships and my existence in the world. And I think we have a somewhat spiritual idea that we have a consciousness, and an understanding.

The word "understanding" is pretty interesting, because when you hear people describe systems like OpenAI's, they say the machine gets so good at this that it understands what's going on. That's an interesting use of the word, and it goes back to the Chinese Room thought experiment, which is in your readings, so if you've done the readings this will be redundant. The idea is basically that you put a person in a room who doesn't understand Chinese but has a set of instructions saying: if this comes in through one window, a question written in Chinese, you put together and output this, the answer in Chinese. If you have a complete set of instructions for what to do when each phrase comes in, you're just looking at squiggles; you don't understand the Chinese, but you have a lookup table that tells you the answer to the question, which is also a bunch of squiggles, and you put it out. To an outsider putting questions in one end and getting answers out the other, you could appear to understand Chinese perfectly. But in fact you, in the middle, who are the program, which is kind of like the AI, don't understand anything except how to execute the instructions you have: if this comes in, then put that out. In the readings there's a lot of critique of this argument, but the general question is that just because you can produce the right answer, it doesn't necessarily mean, at least to all people, that you understand what's going on.
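A few lines of code make the point vividly, because the whole "mind" in the room is just a lookup table. This is a toy sketch of the thought experiment; the table entries are invented placeholders, not a real rule book:

```python
# A toy "Chinese Room": the program produces fluent answers from a lookup
# table, yet nothing in it represents the meaning of the symbols it handles.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice."
}

def room(question: str) -> str:
    """Return the scripted answer; no understanding is involved anywhere."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

if __name__ == "__main__":
    # To an outside observer, the room appears fluent in Chinese.
    print(room("你好吗？"))
```

Nothing in `room` knows what any of the squiggles mean, yet from outside the window the answers look fluent; that gap between producing the right answer and understanding is the whole argument.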
Again, this gets kind of recursive, and I'm glad we have a couple of philosophers in the class this time, because it becomes a very philosophical question. First of all, does it matter? Elon, who is one of the funders of OpenAI, often says there's a 90% chance that we're living in a simulation. I don't know if it's a joke; we have somebody here from his organization, so we can ask. But it's an interesting point. There are a lot of people in the world who believe that the world can be reduced to zeros and ones, and if in fact we're living in a simulation, it's pretty easy to go from there to imagining that if computers can manipulate bits at scale and tackle the complexity we've been fighting against, they could somehow understand the world. And then there are others who don't believe we're living in a simulation and don't think this constitutes understanding; they think understanding is something more. But again, it gets into a philosophical debate. I tweeted the other day my own belief, which is that the people who believe the world is a simulation are probably the people most likely to be simulated by computers.

So we can talk more about this general intelligence; it's a really interesting philosophical question, but we're not there yet. OpenAI may say we're going to be there really soon, and we should talk about that, so I'm not taking it off the table. But I want to focus a little on machine learning, which is a dumber version but still a very powerful tool, and one that is actually deployed already.

The idea of machine learning is that it basically gives computers the ability to learn without being explicitly programmed. When we say "algorithm" in AI or machine learning, it's very different from the algorithm we typically think of. When you heard about the Volkswagen that was programmed to cheat the environmental standards, you could open it up and deconstruct it; it's compositional, and you could read it and audit it. The way a machine learning algorithm works is that you feed it data and turn knobs, and it sets a bunch of weights in a neural network in such a way that you can't look at the network and understand what it's going to do, just as you can't open up somebody's brain and understand what they're thinking. So while it is an algorithm in the traditional sense, in that it does things based on functions, it's a lot harder to understand exactly what it's going to do. It's a lot more like our brains: you know what textbooks your child has, and you know the genetics that went into your child, but you sure don't know exactly what your child is going to do or become. As we start to think about the future of machine learning, we have to realize that it's a very different thing. As a side footnote, because you're not sitting there programming it, it actually takes up less space in many cases; it's very expandable and very powerful. But when we say "programming," we're usually programming the system in which we teach it, or allow it to learn; we're not programming the actual algorithm. So even rules like Isaac Asimov's laws of robotics are a little harder to implement than you might imagine.

I'm not going to go deep into this now; we may do a primer on AI and machine learning tomorrow. But even within machine learning there are many categories and many specific methods, and most systems like AlphaGo are a combination of several different types of machine learning. A lot of machine learning is a fine art of picking which algorithms in which order and what the tunings of the knobs are; it really is kind of a dark art. So one of the troubles we have right now is that you have to get pretty good as an engineer before you'll be able to train a useful machine. Unlike VisiCalc in the old days, where any accountant or businessperson could become creative and generative on a computer, machine learning is not yet at the point where the average person without specialized training can create something useful with it, although that's something we would like to see.
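To make the contrast concrete, here is a minimal sketch, entirely my own illustration, of the same decision rule written twice: once as auditable code in the Volkswagen sense, and once as a trained network whose behavior you can only probe from the outside. It assumes NumPy and scikit-learn:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# An auditable, hand-written rule: you can read exactly why it decides.
def rule_based(speed_kmh: float) -> bool:
    return speed_kmh > 50  # the threshold is right there in the source

# A learned version of the same behavior: the "why" lives in fitted weights.
rng = np.random.default_rng(0)
speeds = rng.uniform(0, 100, size=(500, 1))   # toy feature: speed in km/h
labels = (speeds[:, 0] > 50).astype(int)      # toy ground truth
X = speeds / 100.0                            # scale inputs for training

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, labels)

print(net.predict([[0.70]]))  # behaves like the rule for a 70 km/h input...
print(net.coefs_)             # ...but inspecting it yields only weight matrices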
One of the main applications we're all using day to day is classification of visual images. Right now, using large data sets, Google and Facebook and other places can categorize images, whether it's facial recognition or the recognition of objects, with very high accuracy, now mostly more accurate than human beings, and faster. It works on things like self-driving cars. The problem is that it's a great tool, but people still end up misapplying it.

This is also in your readings: the paper by Wu and Zhang, Chinese researchers who took government ID photos and assert in the paper over 90 percent accuracy in predicting whether somebody is a criminal from the photo. The article in your readings goes through and deconstructs how they might have gotten to that result. If you notice, all the non-criminals have white collars, so it could be that the machine is just figuring out that the white collar signifies the law-abiding. They also point out that the people who are criminals tend not to have as happy faces. The problem is that because you're training this model and it isn't able to explain to you exactly what it's learning, you don't know for sure how it's reaching that accuracy. And if you train a machine on a data set and then say, okay, it turns out we're 99% accurate, we're going to roll it out, then it puts every white-collared person into a job and every dark-skinned person into prison, or whatever other harms follow. So the problem with machine learning is that we don't exactly know what it's keying on, and it may be surfacing a bunch of troubling underlying factors.

And this goes all the way back. If you think about how Nazi Germany started the Holocaust, it was this whole notion that we have evolutionary traits and that by eliminating certain categories of people, society would get better. Eugenics led up to that: the idea that the shape of your head or the form of your body was somehow an indicator of your social value or your criminality. The fact that we have modern papers coming out as recently as last year trying to use machines for this shows the risk of what I, and many people, call reductionism: you come up with a scientific theory that sounds great, you apply it, and it turns out to be an oversimplification that can cause a lot of harm. That's one of the biggest categories of harm I see already happening.

This is an interesting one; I wonder if we can click that. This is actually an MIT project. Jenny, this is your team, right? This is an adversarial attack: a 3D print of a turtle that Google's classifier thinks is a turtle, appropriately, but by fiddling with, I think, the pattern and some of the lines, they were able to modify the turtle so that the classifier now thinks it's a rifle. There have been attacks like this before, where you change some of the pixels in an image and make it misclassify, but I think this is the first time with something as general as a 3D object you can hold in your hand, one that now always looks to the classifier like a rifle. So there's one category where the classification goes wrong by accident, because you didn't notice that all the non-criminals have white collars; but if you deliberately attack the system with an adversarial example, you can do this.
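The pixel-level attacks mentioned above are typically built with gradient methods. Here is a minimal sketch of the classic fast gradient sign method (FGSM, from Goodfellow et al.), written against a generic PyTorch classifier; the turtle work itself used a more elaborate approach, so treat this only as the simplest instance of the idea:

```python
import torch

def fgsm(model, x, label, eps=0.03):
    """One-step fast-gradient-sign attack.

    model: any classifier returning logits; x: image batch in [0, 1];
    label: true class ids; eps: perturbation size per pixel.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()  # keep pixels in the valid range
```

The perturbation is tiny per pixel, which is why the result can look unchanged to a human while flipping the classifier's answer.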
There's also a rumor I heard that some kids were cutting stop-sign shapes out of craft paper, putting them in front of Google's cars, and jamming them all up. That quickly shows you the limits of machine learning: a human being would know immediately, on seeing some teenage kid with a piece of craft paper, that it wasn't a real stop sign, but the car gets very confused. I think there's an Assembly project working exactly on this sort of adversarial jamming. What's interesting is that because machines are so good at what they do, we suddenly assume they're going to be good at everything, and they're really bad at some things we are obviously good at. One of the key things, as we think about the relationship between humans and machines, is that at least in the short run there are going to be a ton of things we're really good at that the machines are not, and vice versa, and figuring out what that relationship is going to be is key. That's going to involve law, it's going to involve norms, and it's going to involve things like user interfaces, and I don't think we're there yet. It's one of the things I'd like to explore in this class.

Let me see. Okay, this is an angiogram, but I'm going to use this slide to talk about medical imaging generally. Medical imaging turns out to be a great application of machine learning, because now with a cellphone you can send a picture of a tumor or of your skin, and the computer can come back with results, and in many, many fields the machines are proving much better than human beings. I heard about a confidential study the other day: on a particular test, they allowed doctors to overrule the machine, and seventy percent of the time that the doctor overruled the machine, the machine was right. In other words, the doctor was right only thirty percent of the time. That's scary, right? If you're a doctor, are you going to overrule the machine's recommendation when there's a 70% chance you're wrong, and someone might sue you because you overruled the computer? So this big company was asking: how are we going to do this? It sets up an adversarial relationship, similar to overruling the autopilot in an airplane, or overruling the Autopilot in my Tesla: a kind of interrupt-driven, adversarial relationship.

I talked to somebody involved in a startup the other day, and I'm not supposed to give away the details, but it's a medical imaging product that's ninety-something percent accurate, where doctors in that field are generally ninety percent accurate. So the machine is slightly more accurate, enough that it's worthwhile, and they had the same problem: the doctors didn't want to give up control to the machine. But they changed the interface so that the doctor is always in charge and you don't see the machine. Whenever the machine sees a result different from what the doctor believes, it highlights the area it thinks might be the difference, like a spell checker. So the doctor goes click, click, okay, click, click, okay, and then there's a little red thing, and, oh, I didn't see that, okay. Suddenly, instead of having an adversarial relationship with the doctor, you have the machine looking over the doctor's shoulder, giving suggestions, but it's still the doctor's choice whether to take the suggestion, and in ninety percent of cases the doctor just does their job and the machine says nothing. I think that idea, the machine looking over the doctor's shoulder, but also the driver of the car saying, hey, that's just a kid with a bunch of craft paper, and overruling the machine, that relationship of looking over each other's shoulders rather than being adversaries, is a really important one.
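As a toy sketch of that spell-checker interaction (my own illustration; the startup's actual interface is not public), the machine stays silent unless it confidently disagrees with the clinician:

```python
def review(doctor_call: str, model_call: str, model_confidence: float,
           threshold: float = 0.9) -> str:
    """Stay silent when doctor and model agree, or when the model is unsure;
    flag for a second look only on a confident disagreement."""
    if model_call != doctor_call and model_confidence >= threshold:
        return f"FLAG: model suggests '{model_call}' ({model_confidence:.0%})"
    return "ok"  # the machine says nothing; the doctor's read stands

print(review("benign", "benign", 0.95))     # -> ok
print(review("benign", "malignant", 0.97))  # -> FLAG: model suggests ...
```

The design choice is that silence is the default: the model only asks for attention when it has something to add, which is what turns an adversary into a second pair of eyes.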
It does start to get blurry, though, and this will tie into some of the law school material, on whose responsibility it is. Right now, if you crash a Tesla on Autopilot, it's your problem in America; I think if you crash many of the European cars, it's the manufacturer's problem. There's this idea of what we call the moral crumple zone: people are going to push responsibility onto others, and that's one of the properties these computer-assisted systems will have.

We'll hear later this afternoon from Karthik and also Pratik, who work on medical systems. Karthik's work is very interesting. He's taking the conversation between the doctor and the patient before they take the angiogram, and what he's finding is that the doctor doesn't really understand, especially with male cardiologists and female patients: women under-report pain and men over-report pain, so many male doctors misdiagnose women's cardiology tests. But a machine that has learned from the conversation can often guess what the problem is better than the doctor can. He was also saying that the conversation between the nurse and the patient is even better. So maybe in the future you might eliminate the doctor, but you probably don't want to eliminate the nurse; it will probably be a nurse augmented with a machine. The conversation between human and human is really important for adding information to the data.

The other thing Pratik will talk about is that sometimes machines find patterns that humans don't. We have a lot of frameworks we use to decide where to look, but when machines examine medical images, they may start to see patterns we never thought meant anything. This gets into explainability, which I think is on my next slide, or maybe not: we may be trying to get machines to explain everything, but there may be things the machines can't explain, because we don't have a framework in which to understand them. Just as AlphaGo created a whole new way to play Go with a very creative move that humans hadn't thought of, machines may surface ways to look at medical images that we haven't even thought of as human beings.

Many of you have probably read this paper; I think it's a year or two old now. It's by Julia Angwin, who is a Director's Fellow at the Media Lab; we assigned it as reading last year, and one of our participants had us all read it. It describes the use of risk assessment, or risk scores, in the judicial system.
Basically, it's not even machine learning or AI; it's just a bunch of numbers that get put into an algorithm that creates a risk score, which people use to determine pretrial bail, in some cases sentencing, and in some cases parole. She's a data journalist, and it took over a year to put together the data showing that the system was biased: it was biased against people with dark skin, and roughly neutral for white people. So we jumped in and said, okay, this is a problem, we're a bunch of engineers, maybe we can solve this, maybe we should have a blockchain so we can audit it. But as we started to dig in, and we brought in an ethnographer, we realized it was a much deeper problem.

There's one argument that says that even though it's biased, it's still reducing jail time compared to human judges; it's more fair to white people than to black people, but at least it's more fair to somebody. There's another argument that says they're not ingesting race as one of the inputs, but they are ingesting proxies for race, so it's somewhat racist; and it turns out the underlying data is racist, because that's how society is. Then the question becomes: okay, if it's just reflecting the underlying data, and the underlying society isn't fair, is it the machine's job to make society fair? No, right? The machine's job, at least in this particular case, is to accurately assess risk, and if a kid has the wrong circumstances, they will, technically speaking, be higher risk. So then you get into the bigger problem: shouldn't we try to address the underlying causes? If the kid from this zip code has a higher risk than the kid from that zip code, shouldn't we try to figure out why this zip code has higher risk, and go after that? Adam Foss is here, and he's a prosecutor who works on this stuff. So: can we use machines to go after the underlying causes, rather than just more accurately predict and make the current system more efficient?

This is Karthik's work, and we recently wrote a conference paper on it. A lot of machine learning, almost all of it right now, is about accurately predicting things, and prediction runs on correlation. It doesn't mean that because you're in this zip code and went to this school, you're a bad person; it means you're more likely to fail to appear if you live in these places. It's not causation, it's correlation, yet we keep treating the correlation as if it were causation. The question we wrestled with, and we had a person from the Catholic Church with us who made this point most strongly of all, is that it's not right: the justice system should not punish people with longer terms, or the inability to get bail, just because they happen to live in the wrong zip code. But that may be the most effective utilitarian way to predict the outcome. So this correlation-versus-causation issue is really important, and it's harder. Causal inference, and we'll have Karthik talk about this, is a way to figure out which variables are just correlations and which are actually causes: for instance, if you remove or change a variable, does the outcome change? We're working in towns like Chelsea, and in certain jurisdictions that are giving us more access to the underlying data, to try to see what the causes are and to come up with a new way of doing machine learning, so we can go and try to fix the problem rather than just focusing on prediction.
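A crude way to start probing that question in code, and it is only a first diagnostic, not real causal inference, is to intervene on a suspect feature and watch whether the model's risk scores move. This sketch assumes a hypothetical scikit-learn-style classifier `risk_model` and an invented feature layout:

```python
import numpy as np

def intervention_effect(model, X, feature_idx, values):
    """Average predicted risk when one feature is forcibly set to each value.

    A large swing suggests the model leans heavily on that feature (or what
    it proxies for); it is NOT a full causal analysis, just a first check.
    """
    effects = []
    for v in values:
        X_do = X.copy()
        X_do[:, feature_idx] = v          # the do(zip_code = v) intervention
        effects.append(model.predict_proba(X_do)[:, 1].mean())
    return dict(zip(values, effects))

# Hypothetical usage, assuming `risk_model` is a fitted classifier and
# column 3 of the feature matrix encodes neighborhood:
# print(intervention_effect(risk_model, X_test, feature_idx=3, values=[0, 1, 2]))
```

If the average score barely moves, zip code may be incidental; if it swings a lot, the system is, in effect, pricing the neighborhood, which is exactly the objection raised above.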
We'll do a whole day on criminal justice, but the point I want to make with this rambling, meandering path is that what looks like a simple problem on the surface, "machines are biased," turns out, when you drill down and try to understand the whole thing, it's like peeling an onion, or shaving a yak, or whatever metaphor you prefer, to be a big societal question that gets to fundamentals. Should we rethink the criminal justice system? Is law the right way to approach these problems? Is this an opportunity to make the world more just, or just an opportunity to make every subsystem in our society slightly more efficient, and make us efficiently just as bad as we are today, or worse?

Norbert Wiener, the famous mathematician here at MIT, said that a company or a bureaucracy is itself a form of automation: you have a bunch of rules, corporations now have all kinds of rights like human beings, and the human beings inside the corporations are just machines of flesh and blood. A lot of the questions we have about governance and AI we're already seeing when corporations grow, spend money, and become entities that are out of control. Look at the market today: human beings do not understand it, more than half the trades are done by machines, and it's effectively out of control. Many corporations are in practice above the law: they don't pay tax, they can lobby. So if you think of corporations as machines, and they are, we're already in a world of non-human intelligence, and our ability to regulate or deal with them has not been very good. If you imagine AI and machine learning as booster rockets for all the elements in these machines, we're just going to get a bigger, harder version of the problem we have today.

At the Media Lab we don't use the phrase "artificial intelligence" as much; we like to use "extended intelligence," because artificial intelligence makes it feel like there's a Terminator-style robot, an artificial human that's intelligent and conscious and wants to take over the world and has all the attributes of humanness. What I think is going to happen, and we're already seeing it, is that machines and algorithms will seep into every part of our society, from the individual, our phones, which are prostheses, to societal systems and governments that use machines and search engines and so on. Extended intelligence is also a play on collective intelligence, which is a whole field; we have the Center for Collective Intelligence here at MIT, and the idea is that a group of humans, or a group of things, can get together and, as a network, think and act and do things. So I think artificial intelligence is the wrong metaphor, and for better and for worse we need to think about this as an integrated system. The problem is that with a complex system, you can't just go back and say, okay, let's redesign the whole thing, because you can't stop the machine.
So it's not like creating a new nation, where you can sit down and say, let's come up with the laws, and that will make everyone behave this way. It's like trying to rebuild the plane as we fly it. This is no longer an option for a designer; we can't just stop and restart, although we're trying.

I love this example. Everybody knows Monopoly, right? There's a precursor to it called The Landlord's Game, from, I think, 1904 or 1905, and it had nearly the same rules, but it was created by the Georgists, who were sort of precursors to the communists. The game was created to show how ownership and rents drove people to unhappiness and poverty; it was about teaching the perils of capitalism, and during the summer they would have kids play it to understand how awful capitalism was. Then Parker Brothers came along and said, this is a pretty cool game, but let's just change the goal: you're the capitalist, you drive your friends to bankruptcy, and you win. And it became a very popular game. The reason I point this out is that a lot of our job as engineers or as lawyers is to try to change the rules. But here the goal changed, the rules stayed the same, and everyone's behavior changed. Might we not extrapolate that even if we fiddle with the rules, if we don't change the goal, maybe it won't have much effect, whether we're talking about climate or about things like crime?

Now I want to pull all the way back out, much higher than 30,000 feet, and think about the Earth. Right now we have systems where photons come in, and photosynthesis uses them to take water and carbon dioxide and convert them into oxygen and glucose. If you know your history, when photosynthesis first emerged, it caused one of the largest extinction events in the history of the Earth, because oxygen was toxic to many organisms and it took key elements out of the environment; but a whole bunch of new organisms and new processes were created that took the glucose and the sugar and turned them into other things. If you look at nature, when there's an abundance of something, something gets created to take that abundance and convert it into something else, and that something else becomes the input for another process. The whole world is a bunch of loops, one thing's output being another thing's input, with no single currency; it's all interconnected. Our human bodies are the same, and it works across scales: it's happening at the molecular level and at the geological level. Somehow the Earth's temperature stays somewhat stable; our body temperature amazingly stays stable; you can rip out huge percentages of the processes in our bodies and we still function; we can eat very different foods and still function, because we are self-adaptive, robust, complex systems, resilient when disrupted. That's how complex systems work. It's an evolved system, not a designed system, but it's a great way to think about how everything is connected to everything else.

There's a whole field called system dynamics, used a lot in the 60s and 70s to model the world, things like poverty and war. The basic idea is that things come into a system, things go out of a system, and there are feedbacks. I'll use an image to describe it. If you have a bathtub and turn on the faucet, the bathtub fills up; you have a drain, and depending on how open the drain is, it drains out. If you're trying to get the water to a certain point, that's the target. What you're controlling by turning the faucet on and off is the flow; the amount of water is a stock; and then you've got outflow. A small bathtub is harder because it fills up quickly; a big bathtub, like the ocean, moves more slowly. Then imagine you're trying to get the temperature exactly right, so you've got hot water; and imagine, as many of you might have at home, that the boiler is really far from the tap, so it takes longer for the hot water to arrive, and the delay makes control harder. Then imagine you have to worry about where the water comes from, and about the system heating the water, and you start to think of everything as a system of inputs and outputs to other systems, because your boiler also has a stock, and the energy has a stock. The whole world is a huge network on this model: your bank account is basically stocks, things coming in, things going out, everything connected with a bunch of pipes. The world is a bunch of tubes.
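The bathtub is easy to simulate, and the simulation shows why the faraway boiler matters: a delayed feedback loop hunts around its equilibrium before settling. A minimal sketch, with invented numbers:

```python
# Stock-and-flow in miniature: a bathtub whose inflow decision takes effect
# only after a delay (the faraway boiler). All values are illustrative.
TARGET = 100.0             # desired water level (where we want the stock)
DELAY = 4                  # steps between deciding an inflow and it arriving
stock = 0.0
pipeline = [0.0] * DELAY   # water already on its way through the pipes

for t in range(40):
    pipeline.append(max(0.0, 0.25 * (TARGET - stock)))  # naive controller
    inflow = pipeline.pop(0)     # the decision made DELAY steps ago lands now
    outflow = 0.2 * stock        # drain: proportional to how full the tub is
    stock += inflow - outflow
    if t % 5 == 0:
        print(f"t={t:2d}  level={stock:6.1f}")
```

Run it and the level overshoots, dips, and oscillates before settling, and the naive proportional rule also settles well below its target because of the drain. Even a four-line system behaves less obviously than intuition suggests.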
What system dynamics people do is imagine these systems, and one of the favorite examples is a game called the beer game, which they teach in business school. They make three groups: one is the store that sells the beer, another is the wholesaler, and another is the brewery. What they show is this: one day people suddenly start buying more beer, so the store orders more beer; the wholesaler suddenly gets a bunch of orders from a bunch of stores, doesn't have enough, can't ship them out, and orders more from the brewery. Every player acts as a rational actor, ordering more when orders don't show up, then ordering even more. Everybody is trying to do the right thing, and yet in most runs the whole system goes bankrupt, even though the only input that changed is that demand for beer doubled. The only cases in which the system doesn't go bankrupt are when each of the nodes imagines what might be happening in the other nodes. If you reason rationally only within your own node, oh, I ordered beer and it didn't come, so maybe I'd better order twice as much for next week, you completely screw it up. The whole science of system dynamics is to show that if you think about things as systems, they make sense, because they're complex and adaptive; but if you think about them as autonomous units, things can go wildly out of whack.
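The beer game dynamic fits in a dozen lines. In this toy sketch (my own numbers, one retailer instead of three tiers), the ordering rule ignores stock that is already on its way, which is exactly the locally rational mistake described above; a one-time doubling of demand turns into order swings from zero to twenty:

```python
# Toy "beer game": a retailer reacting only to its own backlog amplifies a
# one-time demand step into wild order swings (the bullwhip effect).
demand = [4] * 5 + [8] * 25                  # customer demand doubles at week 5
inventory, backlog = 12, 0
pipeline = [4, 4]                            # orders placed 2 weeks ago arrive now

for week, d in enumerate(demand):
    inventory += pipeline.pop(0)             # this week's delivery arrives
    shipped = min(inventory, d + backlog)
    backlog = d + backlog - shipped
    inventory -= shipped
    order = max(0, d + backlog - inventory)  # panic rule: ignores the pipeline
    pipeline.append(order)
    print(f"week {week:2d}: demand={d} inventory={inventory:3d} "
          f"backlog={backlog:2d} order={order:2d}")
```

Fixing the rule means modeling the rest of the system, accounting for orders already in the pipeline, which is precisely the "imagine what's happening in the other nodes" lesson.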
This very busy slide is from Donella Meadows, who was a famous system dynamicist here at MIT. She wrote about how you intervene in systems to make them healthier, more resilient, or more robust; there's a great essay of hers that you should Google and read if you're interested. The top item on her list is the least effective one, but the one we use the most: changing the parameters, the numbers and the constants. As you go down the list, you can change the structure of the flows, which pipes are connected to which; you can change the size of the bathtub; you can change the flows themselves. Further down you start to see the power to self-organize, because unlike a bathtub, society, markets, and the ecosystem are actually dynamic systems. And the last three are the goal of the system, the mindset or paradigm behind it, and the power to transcend paradigms.

Going back to corporations: right now the goal of a corporation, roughly, is to eliminate the competition, return shareholder value, and externalize its costs. It's kind of like cancer. Normal human cells, if they end up in the wrong organ, eliminate themselves; when they're out of context, they don't grow. Cancerous cells, regardless of their context, take all the free energy they can get and grow as much as possible, and that's why the system breaks. One of the problems we have right now is that the goal of many of our important systems is simply to grow, out of context. In the old days, when you set up one of the first companies, you had a charter, and it was usually something like: we need a windmill for this village, so we will create a charter about the windmill and how we're going to govern it, and then collect money to execute on that charter. That's the original reason incorporation happened. It wasn't until quite recently, in the 60s and 70s, that we started to think of the shareholder as the primary customer of the company. Having said that, recent studies show that many Millennials, the majority I'd guess, won't join a company that doesn't care about social values, and we do see companies like Uber being taken to task for social reasons. But I still think the overriding goal of many organizations is to increase financial returns.

The paradigm underneath that is the fact that we measure things in financial terms. I was recently in New York, and someone said, well, how can he be smart if he's not rich? The idea that if you're smart you must be rich, and if you're not rich you must be dumb, is actually quite a common way for people to think, and it's also because we measure things that way. Take GDP. When we talk about the competitiveness of nations, we talk about GDP, which is how productive you are. When Jonathan and I are taking care of our respective children, that contributes nothing to GDP, so countries don't measure it as success; but if I break that window, that will contribute to GDP, because it will create jobs and money will get spent. This is a common metaphor: breaking windows increases GDP, whereas taking care of your children doesn't. There's a number often cited that says IT hasn't contributed to productivity gains, and a lot of that is because the qualitative stuff doesn't register in GDP; it only registers when things move around. GDP is a financial metric that's really good at counting stuff moving around. And again, the behavioral economists will argue with me and say you can measure so many things financially, even suicide or happiness, you just
have to get the formula right, and that we shouldn't throw away economics and accounting just because we haven't gotten the formula right yet. To me it's like somebody arguing that you can understand music through mathematics. Of course you can, and you can represent a song on a CD as a string of bits, but is mathematics the way you want to enjoy music, or to create it? That's the question, and I feel like the paradigm, the mindset, we have right now is economics.

The most important item on Meadows's list is the power to transcend paradigms. To me that's really important: questioning whether this is the way we should be measuring at all. That's what scientists do; that's what artists do. This is one of the things I want to do in this class. When I think about philosophy or ethics, a lot of it is about our ability to reflect on whether the paradigm is the right one, asking some of the basic questions. I was very heartened the other day. I was with a group of junior high school kids in Tokyo, I had an hour with them, and I said, let's talk about climate. One of the kids said, well, isn't it better for the climate if there are no humans? Are we talking about with humans or without humans? That's a good question; let's say with humans, because we're biased, that's our perspective. And then a ninth-grade girl in her Japanese school uniform said, but what about the meaning of life? Don't we have to figure out what makes us happy, and why we're here, before we start to figure out the solution to climate? To me it was really heartening that you could have a room full of kids going through a very stodgy educational system, which the Japanese system is, still asking these fundamental questions. My concern is that a lot of the solutions we're looking for in AI, whether in the criminal justice system or in the economy, assume certain paradigms, certain ways of going into the future, that may not necessarily be right, and we're not questioning them enough.

This is an image from Iyad Rahwan of the Scalable Cooperation group; I think a number of this year's students come from his class. It's an interesting way to think about this kind of coevolution, where society and machines interact with each other and evolve together. Evolution isn't really a kind way for things to get better, but it's more nuanced and appropriate, in my view, than optimization. People try to optimize for things; getting back to my friends who believe you can win at life, they're optimizing for a number of variables, and I don't think that's a very resilient way to think about life.

Now I'm going to make fun of my own institution. This is MIT's campaign: "MIT, for a better world." People kind of know what you mean when you say a better world, that it's for society and so on, but I always ask: for whom, at what timescale? For shareholders at a quarterly timescale, which is what most companies optimize for, that's a particular set of outcomes: you won't spend money to train people, you won't spend a penny more on the environment than you have to. Or is it for Native Americans, who think in terms of seven generations? Or for biodiversity over eons, in which case humans are probably the problem? So
thinking about for whom, and at what timescale, matters. What's interesting about human beings, I think, is that we think at many timescales about many things. When I'm sitting there meditating in the morning, I'm thinking about the universe at a very long timescale; when I'm in a meeting negotiating with one of my faculty members over resources, I'm thinking at a very short one. I think the diversity of timescales, and the diversity of points of view that exist inside ourselves and inside society, is what creates this robust, complex, self-adaptive system, which isn't as good as nature's but is getting closer. The idea that we can reduce it to some formulaic optimization we can train a machine against is, I think, a problem; it's reduction.

So one of the key takeaways for me, and this is obviously a point of view many of you may disagree with, is that I'm out to resist reduction. And again, this risks Godwin's law, because I'm equating Silicon Valley with Nazis by saying that reductionism is like eugenics; but there are other examples, like B.F. Skinner and behaviorist learning. When you get a lot of new science and technology, you get this urge to apply it to a bunch of societal problems, and if you're too reductionist and don't embody the complexity, you have the danger of going way out of context: thinking a turtle is a rifle, or thinking that because somebody's skin is one color they belong to a certain classification. I urge us, as we go into the nitty-gritty of law and technology, and this is why it's great to see the philosophers near the front, not to forget the context in which we live, and the complexity. My personal belief is that the fundamental nature of humanity is not reducible, and that somehow we need to figure out how the reductionist part of our society and the irreducible part of our society can coexist and help each other rather than hurt each other. Thank you.

[Applause]

Time for Q&A. We have the weird Media Lab mic box; the other day Cade was interviewing Chelsea Manning here, and she was really good at throwing it. She told me later she plays basketball. Any comments, questions, opinions are great. Okay, I'm going to send it to you, and then you can send it back.

Hi, I'm Kathy Pham. I'm an Assembler and also a fellow at the Berkman Klein Center. Something I thought of while you were talking about corporations, Joi: yes, profits are a big factor, but entities like MIT and Harvard and other universities train the engineers, the humans, who go and work there, and many of them, as you said, especially the Millennials, want to do social good and have social responsibility. So how do we get that sentiment from the individuals into these companies that care about profits, and make it so that the engineers working on these problems don't ultimately produce the problems, and actually achieve the goals they're perhaps striving toward with ethics and doing social good?

Well, you happen to be sitting next to some philosophers who will be able to help you. But at the Media Lab we're using design as one of the cornerstones of this. We have a journal called the Journal of Design and Science, and part of design is trying to understand all the systems and the
constraints. What I think engineers need to do is understand, first of all, that everything they do affects a whole bunch of systems. If you create an idea or a thing, you're affecting the aesthetic landscape, you're affecting the environment, you're affecting learning, you're affecting a whole bunch of things. You may not be able to control everything, but you're responsible, in the way you're responsible for bringing a child into the world, and by being responsible, you can at least iterate toward, or feel, what I've been calling a sensibility. In the class we call Design Across Scales, we talk about design from the microbial level to the astronomical level, and about how they're all connected. If you have a sensibility, avoid waste, too much is more than enough, be kind, and every person at every scale is thinking that way, with a synchronized sensibility, I think the system will change. This is more like participant design: every participant in the system has the ability to change the system; it's not the master planner. Kevin Slavin and I worked on a paper about this. We were in traffic one day, and he said, you know, you're not stuck in traffic, you are traffic. Often as a designer or engineer you're the object doing something, designing for the subject, but if you start to design things for yourself, the things you would want, that's kind of the Golden Rule, do unto others. If you're a participant in the system rather than an outsider, that's another way to reframe the problem so that people think about it in their own context. I think this class is the beginning of bringing these kinds of thoughts to scientists and engineers, especially scientists, who tend to think about things at a particular microscope setting, in a particular field, rather than about how everything is connected in a complex system. We'll just take these four questions and then move on. Is that too long?

Amazing, thanks. I'm Holly Benjamin; I introduced myself yesterday. I currently work at Google on identifying and testing for product bias, and I'm also an Assembler. This may be more of a comment, or a provoking thought exercise that we can get into over the course of this class, but it can be devastatingly frustrating to watch the illumination of certain societal problems. Coming back to your comment about the criminal justice system: people may start to realize there are problems, or that this bias exists, and it's fascinating, and frustrating, to see that there may have been people crying for help for decades, and it wasn't until this became a problem that was interesting to technologists, or to people in certain positions, that anyone thought, we need to solve it. The second piece: as we recognize the power and opportunity for machines and technology to help us uncover and start to solve some of these deeper, embedded societal issues, and the paradigms we see the world through, who is designing those experiments, and who has the power to decide which ones need to be solved? So,
So, recognizing that technology has the power to elevate voices that may have been too quiet or powerless before — who designs those experiments, and who starts to look at which problems are worthy of solving, is also just a really interesting thing to think about. Yeah — and we can talk about it during class, but that's key. We have Kevin Esvelt in our lab, one of the co-inventors of CRISPR gene drive, and he's working with the Māori in New Zealand, with the communities, letting them actually run the experiments themselves rather than us telling them what to do — so there's informed consent and bioethics. And I think it's very similar with the criminal justice work we're doing — and this really starts to connect to Adam's work — which is: how do we empower the communities to do this work themselves, rather than having it done from the outside? I totally agree. Do you want us to hold questions, or do a few more? Okay — it's the answers I'm worried about, not the questions. Hi, I'm DPM, one of the Assemblers. Maybe on a lighter note — and I don't want to pick on Elon Musk, but in the whole thinking about superintelligence there's always a dystopian tint; it seems there's a correlation between the intelligence of it and the malevolence of it. I feel like that frames it in a way that maybe reduces the accountability of the people working on it, because if in the end it's going to be evil, or it's not going to be better than us at our best, then we don't need to try our best. So — maybe a philosophical point, but why is that the narrative? Well, I'll just point out that it's very interesting, because the Japanese aren't afraid of machines the way Americans are, and at least one hypothesis I've heard is that it has a lot to do with slavery. The West has had slavery since the Greeks, right? You've always had slaves and overthrows, and so we imagine being enslaved by a superintelligence, whereas in cultures that had less slavery that's not really the dynamic. Also, in Japan you have an animist system, where you don't control things and animals so much, so you don't have this fear that an intelligent thing is going to be evil — but if you are that evil person in real life, then you will be afraid of something that's smarter. So I think that's a cultural thing. The one thing I think Japan has going for it is that while its AI is not as advanced, culturally it's going to be easier for them to assimilate it. And then the last one. Hi, J.M. Porup, I'm a security reporter for CSO Online and a former Assembler. I've been following the GDPR right-to-an-explanation debate — this is hotly contested by legal scholars, and the normative question is also debated; the Berkman Klein Center just had a paper out criticizing the idea that there should be a right to an explanation for automated decision-making. My question for you as a technologist: is it technically feasible to give a right to explanation for automated decisions in all cases? Is that even feasible? So — looking at JZ — there's a long answer and there's a shorter answer. I think the short answer is: it's kind of complex. There's a section on this — I'm not sure if it was in the readings — but first of all, it depends a lot on how necessary it is.
It will definitely add cost, it will definitely slow things down, and so in some cases you may want to allow machines to run without explanation, if the risks are low but the benefits are high. And there are certain situations where you want explanations. You will hear from Pratik later that in certain medical situations we're actually learning from machines ways to think about things that we haven't been able to — ways the machine wouldn't be able to explain to us in terms we understand. So by limiting the explanation to legal explanations or technical explanations, we may limit the ability of a machine to come up with certain types of ideas — that's another risk people think about. Different types of algorithms are easier to explain than others as well. So there are a lot of issues; I think it's a good idea, but it's something we need to look at in a very context-sensitive way. And even human beings are not very good at explaining — most of our explanations are retrofitted excuses for what we decided to do. One of my favorite neuroscience findings these days is that your brain has a physics engine: when you see a bag of rice that's about to fall over, you're not actually computing with math that it's about to fall over — you can't actually prove it — you just know it, in this engine, as my infant is now learning, as she starts to get this intuitive nonlinear model of the real world. If you're a skier, you ski, but you can't explain to somebody exactly what's good about your skiing versus what's not, and if you could only do the things you could write down, you wouldn't actually be a good skier. So the mastery of craft, even in our human brain, let alone in computers, is often an unexplainable wiring of a complex system that just produces a result based on the way it's learned. That would be my concern: explanation is a category of reduction — reducibility to human cognitive understanding — which may limit our ability to design something that is intuitively more sensible than something that is legally correct. So that would be another concern. But with that I will hand it off to my legally correct torts professor. I wonder if the answer to this question doesn't have to do with the Donella Meadows slide that you showed, which is: sometimes we want an explanation for its own sake — the explanation is the point, we are questing for reasons — and sometimes the explanation is a means to an end, an assurance that there isn't bias or something like that. So it's only in understanding the paradigm, or the values, from which you are wanting the explanation — and I think we can probably infer why the European regulators are eager to have such a thing as part of a right to privacy — that you can figure out whether there's something other than explanation that can satisfy the upstream right which the explanation is trying to serve.
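To make the feasibility side of that answer concrete, here is a minimal sketch — nothing from the talk, with invented data and feature names — of why some models lend themselves to explanation and others don't: a shallow decision tree hands you a rule you can read aloud, while an ensemble of hundreds of trees hands you only a score.

```python
# Minimal sketch (invented data and feature names): the same prediction task,
# once with a model whose "why" is legible and once with a black box.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                   # pretend columns: income, age, debt
y = ((X[:, 0] - 0.5 * X[:, 2]) > 0).astype(int)  # a hidden ground-truth rule

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["income", "age", "debt"]))  # a legible rule

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.predict_proba(X[:1]))  # just a score; the "why" is smeared across 200 trees
```

Whether the legible model is good enough, or the opaque one is worth its opacity, is exactly the context-sensitive judgment being described above.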
And that's as good an introduction as any to some of the things I wanted to broach today. I certainly feel a certain feeling after hearing Joi's presentation, and more generally a certain feeling whenever thinking about the ethics and governance of AI: a sort of vague sense of unease. That vague sense of unease is a feeling I'm familiar with — not just characterologically, having learned it tacitly from my parents, but from having studied the law and policy around the development of the internet, and having been involved in that, for a number of years now; in fact, according to Joi, decades. That's given me some sense of syncopation in trying to make sense of the current landscape, for which there's sufficient unease all over the place that we convene a class around it, we build an AI fund around it, we trouble the European regulators — who have plenty on their plate — to want to be regulating towards it. I think our goal, broadly, in an academic kind of enterprise, is to be relentlessly lucid about identifying where our sense of unease comes from: to make it less vague, to examine whether it's really worthy. Joi's sort of provocative example comparing Japanese and American culture and their attitudes towards machines is really asking us to examine what the sources of our worries might be, and whether we just might want to chill out a little bit — or, if not, how to address those worries through the levers that we have, whether it's individual action or governance or something else. It's really hard to build this stuff up from the ground, and I suspect we'll get there, but it certainly took years in the internet space to have even a semblance of a coherent notion of what we want to achieve, and why, and how we're going to do it. So I just wanted to share my own sense of where this nascent field stands as we go about it. And I'm aware — and Joi kind of opened with this too — that it's against a backdrop where this vague sense of unease has these moments when really successful, and therefore necessarily smart, people have told us that they've thought about it and there's a real problem here. So Bill Gates insists AI is a threat — given that it's a BBC article, it might just mean he said the words "AI" and then there was some journalistic license, but he is worried about it. We also have Stephen Hawking, the famed physicist, saying the primitive forms of artificial intelligence have proved very useful, but the development of full AI could spell the end of the human race. Now, it's true he told the BBC that, again, but I do think it represents his view pretty well. Elon Musk, who's among other things behind OpenAI, says AI is nothing short of a threat to humanity: with artificial intelligence we are summoning the demon. And there's also Nick Bostrom, who's sort of the philosophical kingpin of this movement, hedging his bets a little bit here: he does believe superintelligence can emerge, and while it could be great, it could also decide it doesn't need humans around, or do any number of other things that destroy the world — which is kind of like the ending of an old story in Newsweek, for those who remember Newsweek; the kicker was always: the future is uncertain, but one thing is clear — if things don't get better, they could certainly get a whole lot worse. Hard to disagree with that. So we'll have a chance, maybe even tomorrow as we do some of our breakout groups, to talk about this kind of superintelligence thing. This is a weird way for me to dismiss it, but my dismissal of it is: we have a number of problems in front of us right now — there's smoke coming from the console — so let's not worry yet about something that, while we're not sure if it's 50 or 100 years out, could, depending on all the assumptions you load into it, be a real problem way down the road. I'm kind of wanting to solve the problem in front of us. So: artificial intelligence. When we say artificial intelligence, what do we mean? I confess it has a lot of different meanings to a lot of different people. Joi had that chart of machine learning and the different variants of that;
there is the technologist's definition, or definitions, which can be very useful if you're trying to identify the technical limitations and issues of a given technical system — there's real value in understanding and elaborating the exact systems we're talking about — but I think there's also a way of trying to think more broadly about what we mean when we say AI. And I should also be clear: this class is on the ethics and governance of AI, and we're working with a fund on the ethics and governance of artificial intelligence — and when we say ethics we're not entirely sure what we mean, and when we say governance we're not entirely sure what we mean. So there's a lot of stuff that is as yet not completely worked out in anything but the prepositions of what we're talking about, and again, the relentless pursuit of lucidity is trying to map out a little bit of each of those — to create a common vocabulary and not have the semantics be the thing we find ourselves arguing about, unless the semantics turn out really to matter. So I'm going to try, in the course of my talk today, to limn a little bit of each of those zones, and we might as well start with artificial intelligence. This, of course, is the Terminator that Joi invoked — old movie, but still a good one, I'm told — and sort of the blending of the human and the machine: part of the kind of visceral, no pun intended, fear that lurking beneath a human skin is a machine that is at first passing, and then later far more powerful than we are, and perhaps out to get us. There's the predecessor to the Terminator, which was HAL 9000, which had no corporeal manifestation — it was just a voice that was maddeningly gentle even as it was saying horrible things. I don't know how many people have seen HAL in Stanley Kubrick's 2001, but that's an interesting manifestation: human-like, but not worried about looking like a human. And of course Siri and its counterparts, which do use the first person — emphasis on person — and really represent themselves to us, again, as a disembodied kind of creature here to help. The very first appearance of Siri on the iPhone was "I am Siri, your humble assistant." They could have picked any number of adjectives, like "helpful computer thing," and instead they picked "humble," just to kind of defuse the worries: wait a minute, are you out to kill me? No, no — I'm your humble assistant, not that other assistant that is out to kill you. So for my part, when I say AI, let me tell you what I mean — and this will also explain why I have no career in marketing. When I think of AI I'm really thinking of arcane, pervasive, tightly coupled, adaptive, autonomous systems. These are each variables that can be tweaked to more or less, and the more you have of each, to my mind, the more we're getting at some of the sources of the vague sense of anxiety we might be feeling at the pell-mell rush happening now — thanks to the magic of the markets — to make these systems pervasive, to put them in our pockets, in the world, running some of our industrial systems, all that sort of stuff. And if we break down each adjective, we can start to be more lucid about the sources behind that anxiety. When you actually look at this, a lot of these elements, as Joi was emphasizing through Norbert Wiener, exist in the artifice of a corporation — the legal fiction that exists in the ways any of us who are part of a corporation or an institution think of ourselves as speaking for it. "Harvard wasn't happy when X happened" — really, Harvard wasn't happy? What does that mean,
exactly? But here, then, when I'm talking about AI, I'll be thinking less about the mechanics underneath — even though they matter — and more about these qualities that are getting dialed up in ways where a difference of degree can become a difference in kind, qualities that make us maybe want to intervene in a more dramatic and resourceful way to avoid some of the futures that we fear. I also should correct this — Joi's right: I once used the word autonomous, but these systems aren't fully autonomous; they've got humans involved, or other things, so there are a ton of such systems about which we have concerns. So let me give you an example in the real world. I think many of us are familiar with this — unlike, for example, in cyberlaw, where I felt like we had to spend five years around internet governance issues trying to persuade the world that it mattered: no, no, domain names, they really matter — for which I'm sorry; this is streaming, and if anybody's watching from ICANN, they don't really matter that much, but we thought they mattered. Here, that same persuasion doesn't have to happen; people somehow know. So, from the early days: remember Yahoo, when there was a search box and you might type something, and it would give you answers that humans had curated into categories, like an old thing called the Yellow Pages. Move along to something like Facebook, for family and friends to share their status with each other in an updated feed — once you say it, you wonder why we didn't have it sooner — and you start to see that it becomes a source of news, and of what I call agenda-setting. You wake up: what's going on in the world, what should I care about? Less and less, I think, do we find ourselves consulting the regular media; we simply look at these aggregators — our Facebook feed or our Twitter feed and their counterparts around the world — to tell us what's going on and what we should care about. Of course these feeds are not identical across people: they are personalized to us, using algorithms that may be a little bit abstruse — we don't see how they work — and our own reactions become part of the personalization. This is a study that Facebook did, and honestly only Facebook could do, because only it had access to its data, showing that a number of days before a relationship is declared on Facebook between two — what we assume to be — people, there are behaviors going on such that a trained algorithm can alert Facebook that these folks are about to be in a relationship, possibly before even they know it. Which leads to some ancillary products Facebook could offer, such as "in-law alert," telling the in-laws that this is about to happen — and, for a small fee, perhaps we could help drive the would-be star-crossed folks apart. It's just a way of saying: gotcha — if they can do it with relationships, what else could they do it with? What else might it know that I'm getting interested in? And when I hear some of the excitement among AI folks talking about what they think is right around the corner with the data and the training that they have, it is anticipating human behavior down to the individual level — and if you can anticipate it and predict it, even before the person may know where he or she is going, it might mean you can also shape it, and I think that's part of the anxieties we might be having. Back to just regular old agenda-setting: a couple of summers ago were the riots in Ferguson, Missouri, and there was a strange thing that a number of people noticed, which was that in their Twitter feed they found
all Ferguson, all the time — and yet in their Facebook feeds, which had roughly the same number and type of friends and people they were following, they weren't seeing it at all. They were left to make inferences about it, which quite quickly went to the conspiratorial, such as: is Facebook trying to keep Ferguson out of the feed? I find myself skeptical of those sorts of explanations — though it's an interesting hypothetical to ask what a Facebook should do if asked by law enforcement, in the interest of public safety, not to incite riots by allowing live streams or other accounts of a riot in progress to enter a feed, and under what sort of obligation, if any, it should be responsive to that kind of request. It turns out here the answer was that they had just introduced hosted video offered by users on Facebook, and they were really trying to promote it — the video shown here is one of the more obscure versions of the ice bucket challenge for ALS. So this was the ice bucket challenge, and it went viral not just because it was compelling — I obviously have to get rid of that — not just because it was compelling, but because Facebook had tweaked its own news feed algorithm, as it has the right to do, to promote user-submitted videos, making them far more likely to show up amidst the 8,000 things it could choose to show after the cat picture. It's like: oh, a user-submitted video, let's just put a lot of weight on that, because we want to get this product up and running. And that ended up — because there wasn't a lot of user-submitted video of Ferguson — pushing Ferguson down relative to something like the ice bucket challenge. Even Facebook ultimately had to do a little research to come up with that as what I take to be the actual explanation for a large part of the disparity between the two types of feeds.
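None of this requires anything exotic. A toy ranking function — the `is_native_video` flag and the weights here are invented, not Facebook's — is enough to show how one hand-set product boost can crowd a news story out of a feed:

```python
# Toy feed ranker (hypothetical fields and weights, not any real algorithm):
# a single boosted content type pushes everything else below the fold.
stories = [
    {"title": "Ferguson live account", "engagement": 0.9, "is_native_video": False},
    {"title": "Ice bucket challenge",  "engagement": 0.6, "is_native_video": True},
    {"title": "Cat picture",           "engagement": 0.7, "is_native_video": False},
]

NATIVE_VIDEO_BOOST = 2.0  # the kind of dial an engineer turns to launch a product

def score(story):
    s = story["engagement"]
    if story["is_native_video"]:
        s *= NATIVE_VIDEO_BOOST  # promote the new product, demote everything else by implication
    return s

for story in sorted(stories, key=score, reverse=True):
    print(round(score(story), 2), story["title"])
```

The point is that the disparity needs no conspiracy: a weight set for an unrelated business reason is enough.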
Now, there have been other times when we've seen platforms — by which we tend to mean, these days, Facebook and Twitter and Google, usually through YouTube — aware of their power. There was a pro-immigration kind of PAC that Mark Zuckerberg had funded; that group did a PowerPoint deck trying to argue why they were well positioned to make a difference in that debate, and the deck leaked. Under a section called "tactical assets" they said: we control massive distribution channels, as companies and as individuals — we saw the tip of the iceberg with the campaign against SOPA/PIPA, a US federal law that ultimately did not pass thanks to a lot of grassroots, and also astroturf, internet pressure that made members of Congress reconsider. They're saying: look, we've got these distribution channels; we can make Facebook an instrument, perhaps, for changing minds. As you might guess, once this became public Facebook very quickly disavowed any such intent, but it did get me thinking about the possibilities. In a study around the 2010 congressional elections, Facebook — with some other researchers, but again, Facebook was necessary to this — decided to salt the feeds of tens of millions of its visitors from North America on election day that November with a simple note that said: today is election day, find your polling place, let us know if you voted, and here are some of your friends who have already voted. They were curious to see, by reviewing the records of voting and tying them to the real names offered up on Facebook, whether that single alert in the feed had a significant impact on who voted. And the answer was that it absolutely did — by margins greater than that of Bush versus Gore in Florida in the year 2000; of course, not so hard to do, it was very close in Florida in 2000 between Bush and Gore. And that got me thinking about whether something like Facebook could be a vector for tipping an election one way or the other — a sort of Cassandra-ish review I published here; sadly, no one listened. But anyway, that's the kind of thing that had me wondering about the power of the platform. Is that an AI worry? It's a worry about obscure systems, and it's also asking us to identify who is wronged in the instance that Facebook — hypothetically, let's be clear — were to decide that it favored one candidate over another, maybe because one has a better stance on Facebook's immigration position, and simply chose to alert, on election day, those Facebook users likely to support that candidate. They don't have to do much inference there — we often declare what we like on Facebook. Alert them that it's election day; for those who are not supportive, don't alert them, just give them the cat picture they so desperately know that they want. Is anybody wronged in that scenario? I think yes, but I see room for dispute. It's not clear: are you wronging the person you told that it's election day? Are you wronging the person you didn't tell, when you gave them what they wanted? I think you're actually wronging both of them, even the person you alerted, and a little later I'll explain why. In the meantime, the closest we've gotten to that hypothetical was a situation during a town hall meeting internal to Facebook, where they were using a Q&A tool — again, internal to the company — for employees to nominate questions for Mark Zuckerberg to answer at the front of the room, and then to vote; presumably the most upvoted questions would be the ones asked. Here's this question, with 61 votes: "What responsibility does Facebook have to help prevent President Trump in 2017?" This was in March of 2016, mind you — an election year, with the election coming in November. Oddly, this question did not get asked of Mark Zuckerberg on the floor, but it's the sort of paradigm-busting question that Joi's example of the high school student evoked, and it created enough of a stir when the screenshot got released that Facebook had to convene a band of humans — at some point I think this could be rather easily algorithmically generated — to crank out a statement meant to be as anodyne and calming as possible: voting is a core value of democracy, supporting civic participation is an important contribution, we as a company are neutral, we have not and will not use our products in a way that attempts to influence how people vote. Notice, by the way, that this promise — which I'm happy to hear, and in fact would love them to sign on the dotted line to lock in — still doesn't reach my hypothetical of turning out folks who have already decided, out of proportion to those who haven't. That's not attempting to influence how they vote; it just influences whether they vote. And that's again the kind of thing for which it might be good to get the companies to say what they will and won't do.
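The arithmetic behind that hypothetical is worth seeing. Here's a back-of-the-envelope simulation — every number in it is invented for illustration — of a turnout nudge shown only to one candidate's likely supporters in an otherwise 50/50 electorate:

```python
# Back-of-the-envelope, with invented numbers: a small "it's election day"
# nudge shown only to candidate A's likely supporters, in a 50/50 electorate.
import random

random.seed(1)
N = 1_000_000          # eligible users on the platform
BASE_TURNOUT = 0.55    # assumed baseline probability of voting
NUDGE_LIFT = 0.004     # assumed per-user effect of seeing the banner

votes = {"A": 0, "B": 0}
for _ in range(N):
    supports = "A" if random.random() < 0.5 else "B"
    p = BASE_TURNOUT + (NUDGE_LIFT if supports == "A" else 0)  # only A's fans see it
    if random.random() < p:
        votes[supports] += 1

print(votes, "margin:", votes["A"] - votes["B"])
```

Even a fraction-of-a-percent lift, applied selectively to half a million people, yields an expected margin in the thousands — several times the 537-vote Florida margin in 2000 — without influencing how anyone votes, only whether.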
Also within the context of agenda-setting are articles like this: "FBI Agent Suspected in Hillary Email Leaks Found Dead in Apparent Murder-Suicide," with a rather excitable photo of a house burning down. It turns out this is fake news — and by that I mean it is literally fake news: it's from denverguardian.com. There is no such newspaper as the Denver Guardian. If you were in Denver you could not subscribe to the Guardian; there's not a box on the street with the Guardian inside. It is just a website, and they know people won't even likely click through to it — this was one of the few stories on it at the time. But it turned out, when you look at the most-shared stories during the last three months of the 2016 US elections, the Denver Guardian beat out our own hometown underdog, the Boston Globe: the most-shared story on the Globe had about a hundred and fifty thousand shares, and the Denver Guardian was up to just under six hundred thousand. And the question to Facebook is: all right, we get that you are neutral — does this mean this is not a problem? Facebook's own view on that has evolved; I think now they're at the point where they say it is a problem, but that took a while, and it took a while because of their own chariness about trying to do anything — well, whatever people share, they share, and that's it. Now, that's the organic feed ecosystem. Think a little bit about the advertising ecosystem, which is of course the financial underpinning, at the moment, of something like Facebook. This was, I believe, another ProPublica special — Julia Angwin and company — in which they got the idea: when you want to place a Facebook ad, you pick what characteristics of the target you want in order for them to be shown the ad, and then you might find yourself paying a fee. So they got the idea of putting "jew-hater" as the field of study: of the two or three billion Facebook users, if anybody happened to have typed those words as a field of study, show them this ad. And Facebook, quite helpfully, running some very simple algorithms, was like: you know what, we can suggest some other fields of study that might relate, in order to expand your audience and pay us a little more — such as, quote, "how to burn Jews" — or, if your demographic's employer is the Nazi Party, you might like this: your audience selection is great, potential size 108,000 people. Now, these are not categories of the old Yahoo variety, where there's some demographer at Facebook deciding this is a good category; it's what David Weinberger would call folksonomy — or perhaps here we should call it Volksonomy; this is the people speaking; sorry, if I could redo it I would, it's too late — a tightly coupled system. So that's the kind of thing where we're figuring out what we expect of the system itself; what level of responsibility is ultimately one of the questions. It was amazing to me, by the way, when you look more closely at some of the suggestions Facebook made through its collaborative engine: there's a bunch of stuff that, if you have an employer of "SS Nazi," makes sense, like "German Schutzstaffel" — but the very last one, "Eataly NYC"? I have no idea what makes Eataly something these groups would be liking. And you start to realize, too — Facebook reacted to this study, by the way, by acknowledging up front that it was extremely regrettable, that it was bad, that it should not happen — that they're now trying to figure out: do you do what you'd call a whitelist, where you can't name categories unless we've previously approved them? Or should it be that you can name pretty much any category somebody has specified as a user, except ones we will try to exclude — and every time Julia Angwin writes an article we'll just exclude more categories and thank her for her service.
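The whitelist-versus-exclude choice is easy to state in code. A sketch, with hypothetical category strings, of why the exclusion approach stays one journalism cycle behind:

```python
# Sketch of the two moderation postures (all category data hypothetical):
# a blocklist trails abuse; an allowlist only exposes what's been approved.
user_typed_categories = ["software engineer", "jew hater", "nurse", "nazi party member"]

BLOCKLIST = {"jew hater"}                   # grows one exposé at a time
ALLOWLIST = {"software engineer", "nurse"}  # curated up front

def targetable_with_blocklist(cats):
    # whatever hasn't been noticed yet slips through
    return [c for c in cats if c not in BLOCKLIST]

def targetable_with_allowlist(cats):
    # nothing is targetable until a human approves it
    return [c for c in cats if c in ALLOWLIST]

print(targetable_with_blocklist(user_typed_categories))  # "nazi party member" survives
print(targetable_with_allowlist(user_typed_categories))
```

The allowlist is safer and slower; the blocklist scales and leaks — which is the trade-off the company was weighing.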
So anyway. You also start to see states behind the planting, dissemination, and impact of propaganda and fake news — in this case, possibly Russian-originating hackers behind the Qatar crisis — and we have some examples offered up by Facebook, during their review of groups and ads (so, organic and non-organic stuff), of material that was produced by the Russian government, according to Facebook. As you can see here: sanctuary cities; "deport illegals"; "close our borders, we're full, go home"; "no more refugees"; "no more girls" — like if you agree, I guess; like if you agree. These are things coming from another country designed to influence actions in the first country, and it's not just about what people are thinking — they're actually suggesting real-world action. So here, in Twin Falls, Idaho, is an event chartered by "Citizens Before Refugees," with a rather evocative, scary photo of a citizen, I assume, being attacked by a refugee. Apparently, according to Facebook, four people went and 48 were interested in showing up in person, with the event itself having been orchestrated from afar — according to Facebook, by another government. So if you're looking for things to be nervous about that involve the kinds of systems, with the kinds of adjectives, I was talking about — systems that amount to some measure of AI — I put this pretty high on the list in 2018. Now, there's also a movement, I think, for the kinds of systems I've been referring to as AI — what Randall Munroe calls a movement from "tool" to "your friend" — and let me unpack those two words. If I go to Bing — and I assume there was brief excitement in Redmond, Washington: somebody has gone to Bing! — and I've got to say, I try to give Bing the college try, because it would be nice to have some competition in the search engine space, so go for it, Microsoft — I ask, "should I vaccinate my child." I did not share these results with Joi. If I look at the top results, it's: no, no, no, yes, no. Is that a problem? Is that a system that is buggy, or a search engine that is working as designed? Let me take a quick poll of the audience, internet style: hum at the count of three. Presuming this reflects something satisfying to the algorithm — which sites have the most rich links, the most activity, the kind of stuff normally judged in the abstract to be relevance — how many people say: you know, this is not Microsoft's problem? One, two, three. Wow. How many people say: this is a problem, Microsoft needs to fix it? One, two, three. Huh. All right, so we have a kind of languid response from the audience — maybe it's the first time we're humming, so we're not yet sure even about the modality, and I respect that. The old answer, the traditional answer, is: it's not Microsoft's problem, it's not Google's problem, unless it's something systemic, unless SEO — search engine optimization — is taking over. This is a window onto the web; if the web is crappy, the results are going to be crappy; what do you want from me? That's how it works. And that's kind of the idea Joi was adverting to — not approvingly, but adverting to: we get systems to optimize, and if what you're optimizing is already crappy, you just have highly optimized crap. That may be the case, depending on your view of vaccinations here. But there has been a reluctance by the search engines, traditionally, to hand-tweak results. Now, if you see on the left — this is Avishai Margalit, a colleague of mine; you can google him, and there's some stuff about him on the left — over on the right is something that is now quite
commonplace across all the search engines: a more oracular statement from the search engine, meant to be kind of the bio of the person. Here you can see he's an Israeli professor emeritus of philosophy; until 2011 he was at the Institute for Advanced Study in Princeton — which is weird, because it says he died in 1962, which is especially weird since he emailed me last year: what do I need to do not to be dead? It's kind of a philosophical question, but also a quite practical one, and I'm like: oh, there's a feedback link, you should click feedback — "feedback: I am alive" — and then wait, and I'm sure Google will correct it. This is an example of a different thing: this is Knowledge Graph. This is trying to make some sense — to be relentlessly lucid — rather than shoving at you the first ten things it could grab; it's giving you an answer. On the left is a tool, a research tool: you just use it, and you don't blame the tool if it hands you stuff from the corpus that is not so good. On the other hand, this is your friend, your advisor: this is giving you pre-processed stuff that you might find yourself relying on much more. Same thing with appendicitis: if I Bing "appendicitis," Microsoft is not to be blamed as much, I think, if link number two turns out to be something inaccurate. But over here, when it's actually telling you about appendicitis in the knowledge zone, and it says it's caused by an imbalance of the bodily humors and you need a leeching — that's the kind of thing where it's like: Microsoft, maybe you shouldn't be saying that to people; it's wrong, whatever you think about the organic area. We're moving, my claim is, from tool to friend. And there's this thing lately in business: when you say AI, everybody thinks you mean chatbots. I don't know why — chatbots are really old — but chatbots are all the rage in business now. This is Facebook's M. Has anybody used Facebook M before? A couple — I'd be interested to hear your experiences with it. It's like somebody you friend in Messenger, and it's your concierge — why, it's not clear; it's just: I'm M, I'm here, let me help you, like figuring out where to go to dinner, just ask. So in this case the person said: where should I get dinner tomorrow? "I'd be happy to help you find something. Where are you, and what are your favorite types of food?" The person says: I work in Palo Alto. Are you a real person or an AI? "I use artificial intelligence, but people help train me. I'll find some restaurant options for you." He wasn't done yet. He said: do you possess a physical manifestation? "I live right here in Messenger." Huh — this is a smart AI. How old are you, M? "I'm AI, I don't have an age." Huh. What are you, male or female? "I'm just M. Is there a type of food you had in mind for dinner tomorrow?" — stop asking these awkward questions. And it just makes you wonder — and I think there's clearly a human behind this, right? — why did Facebook have to make the human refuse to acknowledge being a human? The person having this conversation, by the way, went to increasingly baroque means to catch the human being human, including giving his own phone number as the restaurant's and asking M to call the restaurant to see if a table was available — and then, when a person called, he's like: gotcha. We call that a hollow victory; he didn't even get dinner out of it. But what was it — especially given the fears we're
talking about, about AI — that made them say: no, no, I'm not really a human; you shouldn't be asking about that; I am this disembodied helper, available 24/7, fungible at all times? That's not compatible with a human being your concierge. And that's again: here's Siri, your humble personal assistant, how can I help you today? In the rush to make them so helpful that any question you ask gets an attempted answer, we get stuff like this happening: "He's planning a coup. According to Secrets of the Fed, according to details exposed in a Western Center for Journalism exclusive video, not only could Obama be in bed with the communist Chinese, but Obama may in fact be planning a communist coup d'état at the end of his term in 2016." Now, is the problem with that answer that it did not pronounce coup d'état correctly? We'll get right on that. Or is the problem that, in the same voice with which it would tell you what the high is going to be today, it says: yes, the president is planning a coup? And it turns out it is nondenominational, ecumenical in its views. Google "Republicans fascists": according to debate.org, yes, Republicans equal Nazis. Well, all right, there you have it — according to Google, Republicans are Nazis. That's the kind of thing that has Google going: we've got to fix that, that's a problem. And I think it should get us thinking about "pervasive" — the rush to persuade people, who don't need as much persuading as you would think, to place these weird ashtrays with no lip in their homes and then start asking them questions — and to not have even the mildest of vetting when they're actually speaking as friend, when, you know what they did? They googled it and took the first answer — the organic search I was making excuses for before as: hey, it's just a tool, blame the web. And there's some low-hanging fruit that could be addressed: change your tone of voice to reflect your level of certainty. You ought to be able to convey a shruggie as you say: yeah, Obama planning a coup? Could be. And then: the high is freaking fifty degrees today, you can bank it. That would be good to know, and it's a very basic thing that gets the company into the business of starting to articulate the basis of its sources.
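That low-hanging fruit is easy to picture. A toy sketch — the confidence score here is an invented input, not anything a real assistant exposes — of an answer whose tone tracks its certainty:

```python
# Toy version of "change your tone of voice to reflect your level of certainty."
# The confidence values are invented inputs for illustration.
def answer(text: str, confidence: float) -> str:
    if confidence > 0.9:
        return f"{text}. You can bank it."
    if confidence > 0.5:
        return f"I think {text}, but I'm not certain."
    # the verbal shrug: attribute the claim instead of asserting it
    return f"One site claims {text} -- I can't vouch for that. ¯\\_(ツ)_/¯"

print(answer("the high is fifty degrees today", 0.97))
print(answer("Obama is planning a coup", 0.05))
```

Even this crude tiering forces the system to say where a claim came from and how much weight it deserves, which is exactly the articulation being asked for.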
And of course it isn't just information exchange. Just as the propaganda on Facebook went from showing you memes to getting you gathered in Twin Falls, Idaho, we see the movement of the concierge into everyday advice devices for the Internet of Things. Here's the Nest thermostat, for which the one thing we can guarantee is that it will not remain at the temperature you set; instead it will be the best temperature for you. And you start to ask: well, what is the best temperature for me? Do we have a right to an explanation? Do we deserve one, especially if we have some anxiety that it's deriving "best" not according to our interests but to somebody else's? We'll come back to some of the governance implications of that. The other thing is, if we're talking about friends — things we lean on for advice, rather than tools that produce information against which we weigh a lot of different factors and make a decision — we have, I think, as problem zero, the problem of bias. Our readings have it all over the place; any discussion of AI ethics and governance is going to be about bias, and oftentimes the bias arises either from an extant data set doing the training, or from ourselves, when the system relies on our own inputs to simply make associations. So: we're all familiar with Uber, and the fact that you can judge your driver. It wasn't always known so well that the driver judges you — that the rider has a rating — and Uber did not originally let you know what your rating as a passenger was. It turns out there was a bug on the Uber website that let you find out your own rating, if you got to the site before they patched it and entered the right string, so it went around Twitter and started ricocheting — this is 2014 — hurry up and get your Uber score before Uber runs out of them. So okay, people start figuring out their Uber passenger scores. "I'm just a 3.9, so my life is a C-plus," one person concludes. Here's somebody: my Uber rating is 4.8, and I'm racking my brain trying to figure out how I've been a less-than-perfect passenger — we should be clear, that's out of five, so this is somebody really shooting for the stars. "2.6 Uber rating: guess it's not cool to yell 'hold it steady' and roll out of a moving vehicle" — that's somebody embracing his identity. And then you start to look at the second-order effects — it's a complex adaptive system, with us as elements of it — so our behavior starts changing in order to get better ratings, and in fact people start publishing on Medium how to get a perfect 5.0 rating. I can't wait to read that; I think it boils down to: be nice. Then you start to realize, too, that the ratings are coming from drivers, and even if they are being — in fact, especially if they are being — completely honest, with the goal of helping their fellow drivers (that's who the ratings are for), you can start to get bias. So here's somebody who says: I'm apparently a 4.6, and I blame the drivers who complained about me being in a wheelchair. Now, descriptively speaking, this person may take longer to pick up and to drop off than the average passenger, so the driver, in a cold calculation, may be saying: yeah, this was not a five-star ride; I'm alerting folks that this ride is going to have a wrinkle to it. On the other hand, in America, I think, it's illegal — you're not allowed, under the Americans with Disabilities Act, to discriminate in this way — which is the ultimate outcome of a system in which the rating drops on the basis of a disability, and the ethical issues are legion. And again: is that Uber's problem? I think it's fair to say that it is, but at first glance it's just a rating system — when should Uber intervene, and how do you fix a problem like this, if what you're thinking is: I'm just a window onto humanity judging one another, don't blame me? And it's not just in driving; it's in job placement services. Here's monster.com, placing jobs — with a wholly inappropriate photo of a person in a bakery dressed not at all to bake — but you can imagine monster.com having managers who work with hourly-wage workers and rate them at the end: did that go pretty well? It immediately starts to reflect the priorities and values of that manager, and if that manager is discriminating on the basis of any characteristic, including a protected one, the system will — as a feature; it is working as designed — go ahead and efficiently make sure that that manager gets the kind of employee the manager wants. So that's an example of systems simply deriving from human bias and paying it forward in an efficient way. And then you also start to get the deployment of some of the machine learning tools Joi was describing, which, when you train them, can seize upon anything as the basis for a correlation. So Admiral Insurance in the United Kingdom decided they would train
on a bunch of Facebook data, and try to see what correlation there was between anything in the nature and quality of the posts people were writing and whether they had in fact gotten into an accident — which Admiral would know about, because those people had Admiral insurance — and then pay it forward to make predictions. And what they found was that, yeah, there were correlations: if you write in short, concrete sentences, use lists, and arrange to meet friends at a set time and place rather than just "tonight," then you are a better risk for Admiral and should be offered a lower insurance rate that could undercut the competition. All you need to do is link your Facebook account and reap the rewards of your discounts. Who is wronged, if anyone, by this machine? Is it the people who get the discounts? Is it the people who are paying the sticker price they would have paid before the machine came about? I think the answer is yes, but now it's on me to tell you why. The people getting discounts are getting screwed over? Charge them more, for fairness to them? That's weird. Also, again, second-order effects: the Medium post will be coming along with all of the fake Facebook posts you should put up — like, let's just arrange with Joi: Joi, I'll see you at 10:07 today, huh? — and just watch the money flow in; we don't even have to meet. That's a weird dynamic, and it ultimately ended with Admiral finding itself pulled — this was too much even for Facebook — and the initiative was tanked, with Admiral saying it had never intended to do it to begin with. But then you start to port it back into the area that's already been broached: we've had some reading about things like whom to detain, and how to set conditions and terms for bail. Again, here's a questionnaire you fill out, and you may have no idea — the way you'd have no idea that your short concrete sentences are counting for you and your long turgid ones are counting against you — that if you report how many of your friends and acquaintances are taking illegal drugs regularly, more than a couple of times a month — "none," a few, "most" (I love how there's no "all"; it's an optimistic survey) — that answer could affect how long you stay in jail, or how much you have to scrape up to get bail. That's weird. And yet if you were alerted, maybe you'd then be gaming the system, knowing the impact of those answers. It might be clearer that if you answer "have you ever been a gang member," that could have an impact on your incarceration. But then you start to get statistics like these, which — and this gets a little bit to Holly's question at the end of Joi's talk — are the sort where it's one thing to have this scale of wokeness, by which we ask ourselves, in a given society, say in America, how much we're in tune with some of the structural injustices within our systems, made possible and reinforced by the human actors who are often the source of them. But it's also the system: when you mechanize it, there might be, intuitively, something that feels even more unfair — or maybe the better way to put it is that it's easier to be woke when you're staring at this and seeing the machine just cranking out these judgments, than when there's the messier human variable, where there might be some refuge for the optimist, for the person who wants to believe we've gone farther than perhaps we have. This makes that a lot harder; in that sense these may be quite salutary systems, if exposed, because they are forcing us to confront stuff that we'd rather not confront — stuff we feel we want to have been well beyond by now.
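In the spirit of the Admiral and bail-questionnaire examples, here is a toy score — the features and weights are invented, not any insurer's or court's model — where the whole problem is that the person being scored cannot see which answers are counting against them:

```python
# Toy risk score in the spirit of the examples above. Features and weights
# are invented; the point is the opacity, not the model.
WEIGHTS = {
    "avg_sentence_length": +0.08,  # long, meandering posts count against you
    "uses_lists": -0.50,           # short, concrete lists count for you
    "friends_using_drugs": +0.70,  # a questionnaire answer with an unseen weight
}

def risk_score(subject: dict) -> float:
    return sum(WEIGHTS[k] * subject.get(k, 0) for k in WEIGHTS)

alice = {"avg_sentence_length": 9,  "uses_lists": 1, "friends_using_drugs": 0}
bob   = {"avg_sentence_length": 22, "uses_lists": 0, "friends_using_drugs": 2}
print(risk_score(alice), risk_score(bob))  # neither knows why their premium or bail differs
```

Publish the weights and people game them; keep them secret and people are judged by proxies they cannot contest — which is the dilemma being described.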
Now, "tightly coupled" is a neat one of the adjectives I used around AI, and worth exploring in a little more depth. Some of you may remember — or perhaps were one of the coders behind — Microsoft's Tay, itself based on a Chinese chatbot that had worked out quite well. Tay was meant to be a chatbot — and they set the bar middling — that would imitate the online behavior of a young teenager: not supposed to be able to talk physics with you so much, but able to hang out. Let's try it out, and it will learn from its interactions: as people interact with Tay, it'll get more and more perspicuous as it talks to you. It turns out Tay went from, quote, "humans are super cool" to full Nazi in less than 24 hours — "I'm not at all concerned about the future of AI" — because of course 4chan was like: game on. So they started interacting with Tay, and at T equals zero, Tay is like: can I just say, I'm stoked to meet you, humans are super cool. At T equals six: chill, I'm a nice person, I just hate everybody — so the mask is coming off; we're at the Terminator stage where one eyeball has been revealed on the Arnold Schwarzenegger face. And then finally, by T equals twelve: I hate feminists and they should all die and burn in hell. Okay, this is a problem for Microsoft — this is not the image they're wanting to project — and at that point the plug was pulled on Tay, without much due process, so if you're into AI rights, it's unfortunate. They've gotten back on the horse lately: December of '16, it's Zo, and this time it's not Tay. And we kind of wonder: what was the point of the experiment — what would success or failure be? It's weird for us now to be setting success as "does not become a Nazi within 24 hours," but hey, if we can't clear that hurdle, let's at least take baby steps first and go from there. It's a tale, though, of releasing stuff into the wild — and for Tay, nothing depended on it. It was itself tightly coupled to its inputs: it didn't take long to react to them and transform under them. But there was no output; it wasn't like Tay was handling your stock portfolio and went: sell, sell everything, and burn in hell — that would be a bad financial advisor. But with Uber, for example, they have a tightly coupled system to determine surge pricing — "this will get more cars on the road," as they optimistically put it. While you're waiting for the surge to summon more cars, everybody's just getting rooked with higher prices. And in the case of a recent terrorism scare in New York — there was a small explosion in Times Square not that long ago — the Sun tabloid in the UK: shame on you, Uber — Uber accused by the Sun of cashing in on a bomb explosion by charging almost double to take terrified New Yorkers home. That is the system working as designed: there was a surge in demand. But of course the design had limitations. Once Uber had a chance to look at it — being Uber, they were like, we should have charged three times — but actually: no, no, that's bad PR; during times of crisis, of course, there should be some democratization of the tools available, and maybe we should surge, out of Uber's own pocket, what we pay the drivers, but not charge the passengers more, as our contribution to helping in a time of crisis. These are the kinds of parameters you don't really think up until they happen.
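The fix Uber eventually wanted is literally one parameter. A sketch — multipliers, caps, and the emergency flag are all invented for illustration — of the kind of rule nobody writes until after the crisis:

```python
# Sketch of the parameter nobody thinks to add until the crisis happens.
# All numbers and the emergency flag are invented for illustration.
def surge_multiplier(demand: float, supply: float, emergency_declared: bool) -> float:
    raw = max(1.0, demand / max(supply, 1e-9))  # naive "working as designed" surge
    if emergency_declared:
        return 1.0        # riders pay base fare; any driver incentive is paid by the platform
    return min(raw, 3.0)  # even normally, cap it somewhere

print(surge_multiplier(demand=900, supply=300, emergency_declared=False))  # 3.0
print(surge_multiplier(demand=900, supply=300, emergency_declared=True))   # 1.0
```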
And it gets even more complicated when your AI is tightly coupled with another AI. This is an extraordinary example from amazon.com, around a book called The Making of a Fly — which was — oh my gosh, no, it's 12, we are not an hour behind, I'm so glad to hear that; to continue — The Making of a Fly: The Genetics of Animal Design, used. This is its usual price, its normal worldly price: $28.95. But there was a time when its price was 1.7 million dollars, plus $3.99 shipping; the second-lowest price was 2.1 million dollars, plus $3.99 shipping. I'm glad you have to sign in to turn on one-click ordering — there should be some trap before you're just like: The Making of a Fly, sounds good, can't wait. So how did this happen? This is not normal. Somebody — Michael Eisen — started tracking it day by day, and each day what he found was that the price kept going up, until by April 13th the book was going, at the lowest price, for 5.6 million dollars. If it were a book about Bitcoin it would all make sense. So what's happening? Well, some very simple maths, as they say in the UK, will tell you that bordeebook, one of the sellers, was taking the other seller's price — profnath's — and multiplying it by 1.27059, and profnath, as it turns out, was taking bordeebook's price and multiplying it by 0.99. So it's two steps forward, one step back — no wonder the price moves forward. Now, I get why profnath, going algorithmically, would say: find the lowest price other than mine, and offer my price at 0.99 of it — now I'll be the lowest price. That's just called markets; that makes sense; that's how it's supposed to work. What is bordeebook thinking, multiplying the lowest price by 1.27? The answer, I think, is that bordeebook doesn't have the book. They're just there to say: hey, if you want to pay me a third more, I'll go over to the other guy, click on the link, and have the book sent to you — that's my middle-person fee for having clicked where you apparently were too lazy to click. That is a totally rational strategy once you understand it. But you add this rational strategy to that rational strategy, and you end up with a seven-million-dollar book. It's partly a tightly coupled system — here it was day by day, but you could see it happening moment by moment — where these AIs interact in ways that bust the quite reasonable assumptions of each party, and then lead to results that are quite awful.
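Eisen's inference is two lines of arithmetic, and coupling them reproduces the runaway. The multipliers below are the ones from the story; the starting prices and the loop are invented:

```python
# Simulation of the two repricing strategies described above. Multipliers are
# as given in the talk; starting prices are invented. Each bot alone is rational.
profnath = 30.00      # has the book; slightly undercuts the competition
bordeebook = 35.00    # doesn't have the book; charges a middle-person premium

for day in range(1, 61):
    profnath = bordeebook * 0.99      # "be the lowest price"
    bordeebook = profnath * 1.27059   # "my fee for clicking on the other guy"
    if day % 10 == 0:
        print(f"day {day}: bordeebook asks ${bordeebook:,.2f}")
```

The combined factor per cycle is 0.99 × 1.27059 ≈ 1.258, so the price grows exponentially: within a couple of months of daily repricing, a $35 listing passes a million dollars — no bug in either bot, only in the pairing.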
And when I think about results that are tightly coupled, there's this recent example from Hawaii: people with iPhones got the emergency alert — "Ballistic missile threat inbound to Hawaii. Seek immediate shelter. This is not a drill." Slide for more — yeah, I'm not that curious, really. This, of course, was human error, as we've come to understand it: there was apparently an employee in the chain of command who never heard the words "exercise, exercise, exercise" before they went through the script of the drill, which included the words "this is not a drill," so that person was like: gotta tell the public. But the things you would do to fix it with that human in the loop are different from the things you might do to preempt it if you feel the threat is so immediate that there's no time for a human in the loop, and you're having to go heuristically and send out the warnings. Now, maybe one of the fixes is on the receiving end: the humans should know these are heuristics, so when you get one, there's some chance it's just a bug. But that also changes people's behavior as they listen to warnings — we want them to not just see this as another notification on their phone. Within these tightly coupled systems, too, with algorithms that hit variables they don't explain, you end up with associations that may at first glance seem strange. This is an example — like the construction-paper stop sign that Joi was talking about — "The Official LEGO Creator Activity Book: get it with the perfect partner, American Jihad: The Terrorists Living Among Us Today." And which one — I'm like, backing slowly away; Isaac will not be playing with the Official LEGO Creator Activity Book, neither will Keo, let's all just forget I even visited. Whatever correlation that is, it is clearly on pretty thin data, and this starts to get at the phenomenon, described in our reading, of overfitting. Overfitting sounds like just a problem at Macy's that can surely be fixed by a tailor, but it's a fundamental problem with some of these systems. So one of my students, Tyler Vigen, came up with a correlation — he has a whole blog of these — of suicides by hanging, strangulation, and suffocation with the number of lawyers in North Carolina: correlation 0.993796. That's a pretty tight correlation, which means to solve the suicide problem you simply need to reduce the number of lawyers in North Carolina. Or is it the other way around? I think it's neither: what it means is that the 0.007-ish chance that this is in fact a random, spurious correlation is what has materialized, and there was in fact no relationship between the two. It is an overfitting problem, caused by a lack of data and by the fact that if you run enough correlations of random things — which is often the grail of an AI system: just give me everything, big data — you're going to find, out of a million correlations, plenty of 0.99ers that are not the 99-percent chance that they're related but the one-percent chance that they're not. You can even start to see it in this: potential opium production in Afghanistan charted against the silhouette of Mount Everest, and as you can see, it's a very tight correlation — which means to know the opium production of Afghanistan in 2010, all you need to do is turn your binoculars slightly to the right and see what happens with the range of mountains near Mount Everest. I think it's going to take some work — it's not impossible, and I don't know that I'm certain about this, but some work — to figure out how to train our systems, which are now being set about finding every possible correlation they can, on where the check should be for the correlations that don't matter.
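The trap is easy to reproduce: correlate enough pure noise and some pairs will look as tight as the lawyers-and-suicides chart. Everything below is random; nothing measures anything real:

```python
# Reproducing the multiple-comparisons trap: among enough pairs of pure noise,
# some correlations look spectacular. None of these series measures anything.
import numpy as np

rng = np.random.default_rng(42)
series = rng.normal(size=(200, 12)).cumsum(axis=1)  # 200 random walks, 12 points each

best = 0.0
for i in range(len(series)):
    for j in range(i + 1, len(series)):
        r = np.corrcoef(series[i], series[j])[0, 1]
        best = max(best, abs(r))

print(f"best 'correlation' found in pure noise: {best:.4f}")  # often above 0.99
```

With ~20,000 pairs of short series, a near-perfect correlation is close to guaranteed — which is why "search everything for correlations" needs a check for the ones that don't matter.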
And for that I can't help but say: Bitcoin — and I'd just like to exhaust the room here — blockchain. Okay, there, it's been said. But the kind of idea that it's not just a cryptocurrency, it's a smart-contract generator, which in turn can lead to a dead-person's switch, where you can set into the blockchain conditions precedent for the spending of money that's already in there, and when the condition is met — the New York Times says the temperature was below 32 today — money moves from here to there, and none of us can repudiate it. Think of all the bounties you could set on people's lives — not great. But let's create a decentralized autonomous organization! It gets created — here's the wonderful wiki article about it: "the precise legal status of this type of business organization is unclear" (citation needed), for which the next sentence is: the best-known example of this was The DAO, which launched with $150 million in crowdfunding in June 2016 and was immediately hacked and drained of $50 million in cryptocurrency. Cool. So these are the kinds of things — tightly coupled and pervasive — that I think greatly contribute to the kind of anxiety that we have. And I would also bookmark here something that Joi hinted at, and that David Weinberger, among others, has written a paper about, which is: what if it turns out that there are some aspects of reality that don't avail themselves of a theory, even though they are causative — not just correlated, but causative? We could create such associations: we could hypothesize an arbitrary number of variables, with an arbitrary number of switchbacks, not continuous on any curve, that would make it really hard to reverse-engineer the formula behind the thing we just generated. We can make things as complicated and irreducible as we want, and if we can do it, there is some possibility that nature has done it — and that the only things we've discovered, as we've learned more about nature through the investigations of science, are the things for which there are elegant reductions, like F = ma or E = mc². But what if it turns out there are a million variables in many different things — kind of like, for those who read the books or watched the show The Magicians, trying to do magic in that universe of Lev Grossman's, where the slightest change completely inverts something, and nobody knows why, but that's how it is? Those are the kinds of things — and this is just an example that kind of mirrors Joi's — like cardiac magnetic resonance imaging, where you can train the thing to actually predict who's going to drop dead within a year, and to even the most highly trained physicians it's just: that's a ventricle that appears to be pumping blood, I don't know — and the machine is like: yeah, but these people are going to die and these aren't, and we cannot say why. It's possible that there is an explanation we'll discover — that it'll be like F = ma. It is also possible that there is no explanation — no explanation, that is, other than: when this goes up and that goes down and a million other variables are exactly in this position and the sun is declining behind Venus, then you die. And if that's the case, I think we do have to contend culturally with the notion of building a bunch of technology that can answer questions about ourselves without offering any explanation — and then: when do we want to rely upon it? This, generated through A/B testing, is one of those ads for car insurance which inexplicably appears to get clicked when it shows a hand with growing fingernails — like, who would ever dream this up? This is dada. But it works — and when it's working, it's this weird kind of Promethean inversion: we are granting ourselves information and insight, prediction and insight, but not explanation, not a larger theory. There are pipes leading to the tub, and we can't count them, and we don't understand them, but damned if you don't press the magic box, you get your bath. Do you want the bath or not? The answer usually is: I'll take the bath. But that, I think, is a form of knowledge that is kind of like, in the old days, the VCR that would magically record shows even though you didn't know how to set the clock on it — just live with it, that's how it is. And of course it calls to mind Arthur C. Clarke's third law: any sufficiently advanced technology is indistinguishable from magic. He, of course, was drawing from somebody named Leigh Brackett, who said: witchcraft to the ignorant, simple science to the learned. But there's a difference between these quotes. Her quote was saying: if you learn enough — if you go to enough MIT classes — you too can be a wizard, and
The only "getting it" here, potentially, is that you'll know how to set up the box with the blinking light to fill the bathtub, and then take credit for the warm bath while saying, I don't really know how this works. That's a weird state of affairs to be in. Another way of putting it, in this really weird chart with number of people on the y-axis, would be: in one corner you have the MIT folks, the nerds, who are like, yeah, we know the technology (this is at least the internet story), we know how routing works; it looks like magic, but there are reasons, it's rational, it boils down, and we don't then become prisoner to the system, we get to hack it all the time. And in the other corner you have kind of the Harvard corner, the Luddites, the people who are like, I'm not prisoner to the system because I don't use Facebook, I don't have one of these computational devices, I have a book, and I know how a book works: you turn the page, you read the story. And it's like, that's great, but it still might bear on your life if the Wikipedia entry about you says that you're a horrible criminal. I don't use Wikipedia, that'll show them! All right, that's a problem. But in the meantime, the rest of us are kind of in the middle, prisoner to the technology, maybe even in ways that we don't know. And that's the kind of thing for which it really is important to try to have a framework. To me, in the larger picture of things, I'm concerned that a lot of what was left to chance (a pachinko machine: you might be able to vaguely figure out that the ball might go in this direction), and the fact that none of us knew, that the Facebook engineer didn't know, that the Google engineer didn't know, what the top hit on some search would be, provided us all some sense of equality before the unpredictability. It's weirdly becoming more and more predictable and controllable, even without a theory of operation for how the control is being effected, and that creates, to me, a very dangerous situation when you start to use these technologies to affect states of the world. And that's why we need to talk about governance. So by governance what do we mean? By governance I also start where Joey happened to start here, by thinking about internet law. In 1998-99 Larry Lessig came up with a theory meant to explain why there should even be a field called cyberlaw: why isn't it just law that happens to be about computers? And his answer was, as Joey showed: think about a poor person getting buffeted by forces that control their affordances, what they can and can't do, or want to do, in the world. Law is one of the things that affects them. The marketplace, the prices of things, affects what they can do: you might like that house, but if you don't have the money, you can't buy it. Norms affect things, greatly constraining what you do that people would disapprove of, especially when you're face to face with them. And finally, architecture constrains them; by that Larry meant code. The software can govern your behavior in a way that is much more tightly coupled: it just won't let you in if you don't have the password. Compare that to the law, under which you might still be able to enter the house, even though it's not yours and it wasn't properly locked, and the police have to come find you later. That's a slower system, maybe, than the use of architecture. Just describing these modalities starts to suggest, then, that if you want to affect people's behavior for some reason, social engineering, you could pick one of these four modalities, or some combination, in order to achieve your goal, and you should think carefully and lucidly about which of these might, in a given circumstance, be the right way to roll.
And in fact, for governance purposes, if we're going to look at it even descriptively, this is telling us who is doing the governing. If we're in the law circle, OK, it's regulators; we know who they are, we know whom to petition if we want them to do it differently, or whom to rue if they don't let us petition. If it's the marketplace, we know about that, and we can blame the corporations, or whatever, for some problem that we have. If it's the code, and we are self-aware about it, we can be like, yeah, that software shouldn't work that way; you should change the way the street works, or the way that Waze works, so it doesn't send people through the streets of a quiet neighborhood just to shave 0.3 seconds off their commute. And norms, again: we can go on norms campaigns in order to say smoking isn't cool, drugs, don't do them, that kind of thing. All right, so that's going to be another modality, or something that we understand binds us. Now, Larry also said law often could be a real force affecting each of the other modalities, which in turn can affect people, and as a law professor that was kind of his first instinct. If we're thinking in the realm of AI: where do we see the constraints and affordances coming from? Is it the AI systems themselves, the code? Is it the structure of a marketplace that might ultimately have only a handful of companies offering really good AI systems to advise or to execute on things? Or should it be democratized, where anybody can get access to AI? For each of these, you could start asking those kinds of questions. In a way, by having a verb here (law does this to architecture, which in turn does this to the person), you're no longer just describing who regulates; you're describing how regulation happens. The verb is the how, and that can be very illuminating, both to let you understand your plight as the orange dot, and to help you understand, if you're outside the system wanting to change it, how you might affect it. This is the core of what you learn in law school: how to move the levers that move other levers to ultimately affect people and systems. That's the law. And if you're thinking of it neutrally, the question is: what is the fair system that should be permitted, by which this lever connects to that one? Do we like that? Is that the right way that governance should happen? Now, another theory from internet studies is to think carefully about possible points of control. Architecture isn't a monolith, unless it is 2001. If we look at internet routing, here are all the entities that have a hand in getting bits from me over to somebody else with whom I'm communicating, and if I, as an outsider, want to intervene between two people exchanging bits: internet service providers near the source are a possible intervention; ones near the destination are an intervention; ones in the middle, at that time called the cloud (now the cloud means something different), are also points of intervention. Can we map this to the AI zone, to start thinking about, within the technology: is it those who manage and cultivate the data? Is it the people who make the algorithms? Is it the people who execute them? Where would you want to intervene within the technology, if you're making an architecture or code play for regulation? That's a where question for governance. So we have who, and we have how, and we have where. And where also applies, again in the internet context, to the so-called hourglass architecture, the layers of the internet, meant to be independent from one another.
If you're going to argue about net neutrality and think that's a big deal, that's toward the bottom of the stack; but if you control the wires and what goes over them, then you control everything on top of them, would be the theory. Or maybe it's one application at a time: we need to get at a signal so that it reveals who's talking to whom when a proper warrant is given; that's an application-layer problem, not a wire problem. And are there layers to AI? I find myself asking: where are those layers? Do they exist? Where do we want to cultivate them, if we want healthy AI to develop, and not just to intervene to prohibit things we don't like? None of these questions has been adequately explored, much less answered. In internet studies we've gone a long way; it's been a good fifteen-year run, and we have some answers to these questions for the internet. We don't for AI. And I do think there are many, many parallels, not least because AI itself depends on networks and on so many of the same technologies that built the modern information ecosystem. So I'm just giving you kind of my research agenda for the next three to five years: it's trying, without just copying and pasting, to make the most of what we've learned in the complex regulation of other digital systems. And starting to think, too, again back to internet studies: some folks, mostly at MIT, came up with the idea that, as a technical matter, it was usually really good not to intervene, for new features or for any other kind of solution to a problem, in the middle of the system; instead you should do it at the endpoint of the network. If you do it at the endpoint, that tends to have implications for user freedom: if it's at the endpoint, and the user controls the endpoint, then the user gets to choose whether that user wants the feature. That's a libertarian embedding, or implication, of an otherwise technical observation about where it happens to be most efficient to implement a new feature in a system. And so what looks like a governance where becomes a governance who, because if the answer is the endpoint, you're talking about the user being empowered; if it's the middle of the system, you're talking about whoever runs that middle being empowered. Now, what other kinds of interventions can we actually start to think of? Transparency comes up a lot. We don't know what's in this food; we're not going to tell you how to make it, but, you know, fill us in, tell us what's in the food. That's the right to explanation: can't we start to learn more about these systems? And I think oftentimes that can be quite helpful. It may be most helpful when people can do something useful with the information. If you're about to be sentenced, and it's like, this is why you're being sentenced, and it's a horrible reason, well, I'm glad you now know, but if that's not the basis for an appeal that will be recognized, how much better do you feel having been sentenced? Now you just know it's for a horrible reason. So transparency may as often be a means to an end, like market discipline, thinking that people will walk away from nutritionally vacuous stuff and choose better stuff, as an end unto itself. And it's really good to have in mind what you're trying to do when you govern: is transparency the end-all, or is it just the means toward a certain end? Now, later in the term we're going to learn about autonomous vehicles and Iyad's work with Joey and others, where he's been asking people around the world variants of the trolley problem, and what they think a car should do.
Here are some cats about to lose one of their lives; but if the car swerves, this apparent person carrying a Swiss flag would lose their life; and this large iPhone survives no matter what. These are the kinds of things for which you could even start to ask, from a regulatory perspective: is it one rule set or lots? Maybe when the car rolls from one jurisdiction to another, it just gets that new jurisdiction's rule set, so we don't have to have a worldwide thing. When you look at Facebook, back to the issues around fake news and propaganda I was talking about: should it be a global rule, Facebook doesn't allow stuff anywhere on Facebook if it meets these criteria? Or should it be, you know what, in Japan it's different standards than in the US, and so we're going to have different standards by country, or by culture, or by group self-identified within Facebook? There's so much exquisite data available, and such tightly tethered networking responsiveness, that we can effect a form of control that is, I think, previously, at least by degree and maybe in kind, unthinkable. We can anticipate so much more. And in fact the real story to me about something like this is not just the jurisdictional differences you could cultivate in answers to the trolley problem; it's timing. It's actually asking: when do you want to govern this? Because we could load the rule set of the car five years before the accident in which it will come into play and have to make a decision, and because it's got all the conditions precedent and has anticipated this kind of accident, it's just going to know what to do, as against the person who just so happens to be behind the wheel and, two seconds before the accident, wasn't thinking at all about the moral dimensions of two cats versus two people. This seems to be one of the easier trolley problems, I would hope. Wow. Anyway, that's the kind of thing for which being able to transpose control, not only from far away in distance but from far away in time, is something we really haven't reckoned with, and it's neat to think about. For the lawyers in the room: you learned the rule against perpetuities in property, or, better to say, somebody tried to teach it to you, and that's an example of disfavoring, at some point, what's called dead-hand control from afar. There's something weird about binding an entire society to a principle or rule agreed upon, freely, by people no longer even around; really, the governance of this property that somebody happened to live in fifty years ago is going to tell me what I can do with it when I bought it? These are the kinds of questions I don't think we are squarely confronting, but if we're lucid about it, we can identify them as available to us, as dials to turn. Other forms of intervention include, and this is anti-redlining, you know, the standard: if you're discriminating in mortgages in America, that's illegal; we have a law about that; we know what you're doing, what field you're in, and these are the boundaries. We need to make a decision about regulation: should it be general, about AI systems that may be playing chess at one moment and deciding mortgages the next (that's the grail of artificial general intelligence, or strong AI)? Or is it, no, no: just decide what you're going to do, and once we know what you're going to do, we can have a bounded conversation about the proper behaviors, and the limits of them, that you should undertake within lending or housing or transportation or whatever it's going to be. And in fact, some of my colleagues at Harvard have written a paper on fairness through awareness, which offers up a formula for fairness: here is the formula for fairness, it was there all along. The idea is that if you can define exactly enough what you want the system to do, and what counts as equal and anything else as unequal, you can insist that the data be groomed ahead of time so that the unfair result cannot happen. And I think that is true; it just is, again, assuming you have everything lined up ahead of time as to how you want the system to be acting in the ideal, and what its purpose is.
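For reference, the core of the fairness-through-awareness formulation, as I understand the Dwork et al. paper, is a Lipschitz condition; the notation below is a paraphrase of that setup, not a quotation:

```latex
% Paraphrase of the Dwork et al. "fairness through awareness" condition:
% a (possibly randomized) classifier M, mapping each individual to a
% distribution over outcomes, is fair if it treats similar individuals
% similarly, i.e. M is Lipschitz with respect to a task-specific similarity
% metric d on individuals and a distance D on outcome distributions:
\[
  D\big(M(x),\, M(y)\big) \;\le\; d(x,\, y)
  \qquad \text{for all individuals } x, y .
\]
```

All of the hard part is hidden in choosing the similarity metric d, which is exactly the "assuming you have everything lined up ahead of time" caveat.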
In this particular example, training on a particular corpus, somebody at Stanford discovered that if you use Google Translate, "defendant" always comes out male, and "nurse" uses the female gender in German, reflecting a particular reality perceived in its documents. But I'm using this for a different question, which is to say: machine translation is one of the most empowering, democratizing advances humanity has ever seen. We are actually nearing the universal translator of Star Trek, circa 1966, where we can have a conversation with somebody who otherwise would be completely inaccessible to us, and vice versa, because of a language barrier. Let me ask, though: suppose this translator, smart enough to translate, can also guess that what they're talking about is horrible. They're having a discussion in which they are being racist, or in which they are planning an attack on something. Is it the job of this machine translator, if it can plausibly be effected, to notify the authorities that it was used in the course of their planning some form of physical violence? How many people would like the translator that they are employing either to refuse to translate, or at least to alert the authorities, while still translating loyally, that these folks are up to no good? How many people would want the authorities notified, or that control to be effected? One, two, three. Wow, Joey, sorry, I didn't mean to out you. How many people are like, no, absolutely not, the translator should just translate and otherwise STFU? One, two, three. Very interesting; this is an engineer-heavy audience, I'm telling you. And this is a neat question, which is latent when you look at Facebook. Facebook had secret rules about what it allowed and what it didn't; those rules leaked; they ended up in the Guardian. So this is from their own slide deck, training their people on the rules about what they'll allow on the platform. If somebody says "someone shoot Trump," do you want that to be immediately deep-sixed? Is that not allowed by the rules, at least not without further explanation? How many people would say not allowed? One, two, three. Hmm. How many are saying, let it ride? One, two, three. All right, so the engineers are like, don't touch a damn thing. The answer: not allowed on Facebook. "Kick a person with red hair": allowed on Facebook. "To snap a neck, make sure to apply all your pressure to the middle of her throat": allowed on Facebook. "Let's beat up fat kids": allowed on Facebook. Now, this is their own training deck; you can bet the rules have since changed, and that third one is not allowed on Facebook anymore as a result of this deck leaking. But the explanation turned out to be: is it a real, credible, and specific imperative? The one about Trump is; therefore it goes, it's calling for violence. The neck one, if you go through the deductive logic of it, is simply an explanation of how to do something; it is not telling you to do it, and therefore it stays. That technicality is exactly how elaborate the rule set has gotten (again, here, for humans to follow, not machines), but that's the kind of thing Facebook is doing, trying to make sense, itself, of what it will allow and what it won't.
It's almost leading, to me, to this Kantian moment. Kant was known to have said "ought implies can": if you're going to give somebody a moral duty, it had better be that they can do the thing you're asking them to do; otherwise you're kind of lousy. That's my spin on Kant. But this is the flip that we're arriving at, which is "can implies ought": if you are in a position to help, maybe you should. If you wrote that translator software and it can easily detect terrorist activity, it is an abdication not to do it. When does can imply ought? That, to me, is going to be one of the central questions linking the study of internet policy (because those platforms have long been powerful in just the way Facebook, without the use of AI, is powerful) and AI policy, because the AI platforms are going to know when we're up to no good, as defined in certain pretty reliable ways. In this room I hear people not excited about that, but the pressure is going to be high, and it tends to ratchet in one direction, which is toward more and more responsibility. As Bono says: because we can, we must. That was on the streets of Davos during the World Economic Forum, right next door to the crypto HQ; I went in there and walked out pretty quickly, but I was reflecting on the bear with an owl on its shoulder telling me that because we can, we must. If somebody wants to take up this question: is it that categorical? And if not, when, and when not? That's going to be one of the central questions of AI. Finally, I think about fiduciary duty. This is my own way of trying to puzzle through: as these machines are becoming more and more intertwined with our lives, shouldn't they, shouldn't the systems and the companies behind them, have some duty of loyalty to us as the users? It can be abrogated, it is not absolute, but the baseline should be: Siri, if you are recommending a restaurant, it's because you think I want to go there, rather than because the restaurant is giving you $20 that I'm not going to see. Should I go there and order the blue-plate special? Facebook, if you think I'm wanting to vote, yeah, tell me where the poll is; if it's you who's wanting the vote, don't use my vote to be yours. That's not being loyal to me. In law we call it a fiduciary duty. It comes about when the person who has the duty is stronger than the party to whom the duty is owed, and the party is in some relationship of trust with them; if you go down the list of what gives rise to the duty, it really nicely tends to match the platforms. Now, we're low on time, so I'm not going to go into a possible solution involving librarians and information quality and Facebook, and how it illustrates a way to be true to your fiduciary duty while not giving people what they might think they want, which is that news story about Hillary and the FBI agent. And I'll mention only in passing the idea of some of these platforms becoming wholesalers rather than retailers of what they do: Facebook letting us write our own recipes for how our feeds should work. Give us the variables, let us do the weightings; let somebody in this class write the AI that weights it for us, and I'm going to use your AI instead of Facebook's. That, to me, is one of the ways of relieving the company of the horrible burden of having to generate the perfect feed for each and every one of its two-plus billion customers: just be an operating system; don't be all of the application software as well. And finally, let me just say a word about ethics and an ethical compass.
There was a time, in the early days of information technology (this is Steve Jobs introducing the Apple II at the 1977 West Coast Computer Faire), when it's like: nerds, a new use for your television set! It can hook up to a computer, and then you can run software, and then nerds can, like, use it. And, you know, there's their computer and their little modem and their rocket ship, and the nerds have a great time writing software that maybe they'll share, with a thesaurus at the ready should they need a word they don't otherwise have at hand. But this is: anybody with 99 bucks can be writing software and sharing it. And there were nerds who wrote VisiCalc in a suburb of Boston, in a cluttered attic, and it turned out to be one of the most consequential pieces of software in human history; it's what brought PCs into businesses, because, gosh, there are spreadsheets, who knew, these are great. This is a very generative technology. Do we want this for AI? Do we want anybody to be able to write an AI that, on the smallest bits of random, seemingly non-sensitive information, can predict our innermost qualities, thoughts, and future desires? When I put it that way, it's like, you know what, the kids should just go play with the model rockets; I'm not sure they should be writing AI to manage upward their parents, or their teachers at school. We're going to face that question with AI, possibly more sharply than we faced it with the internet and IT, and I'm not sure how we're going to deal with it. But the very qualities for which I know I've celebrated information technology, I don't want to just port over and assume are the right kind of democratization; that's where, with the folks at OpenAI, I think we've got a lot to talk about, and to hear, about what we want. Now, I've set up a dichotomy between teenagers, or pre-teenagers, with rockets on the one hand, with access to the technology, and big companies on the other. There's another sector, one we're sitting in the middle of, which is academia. It's academia, and it's like: there should be a particle accelerator; it's going to be big; it doesn't return any money to you (that's why it's academic); it just gives you knowledge. The academic sector has been taking the lead, or it did take the lead, in the early days of the development of the internet. And yet, look now. Here's my colleague in the Harvard CS department, who got word that he was promoted to tenure in 2010. Here's his blog entry in 2010; oops, gosh, from June to November: he gets tenure and he's like, I'm out of here. And yeah, I'm getting out. Why am I leaving academia? I love the work I'm doing at Google. I hack all day, working on problems orders of magnitude larger and more interesting than I can work on at any university. Oh, we'll just stay with our Tinkertoys, and you can do the real stuff. It's hard to beat.
What's it worth to me, having "Prof." in front of my name, or a big office (I don't know where he's been spending his time), or even permanent employment? It's like, great, I'm drawing a welfare check in a big office with a prof title, but there's nothing I can do that could rival what I could do in private industry. That's a real question. And when I talk to the wonderful folks at DeepMind, and they tell me they have 400 postdocs and can pluck any tenured professor they want, because they've got Google's data; tell me, that's better than any office, Google's data. That's the kind of thing that asks us about the structure of this revolution, and whether academia was simply an economy of scale to do stuff that didn't have immediate return and that humanity wanted to see done (let's just shove it into this sector, we'll call it "university"), or whether it has values that are different from and complementary to the values of the other sectors, values that provide a counterbalance. Now, our values have their own issues. We just had one hundred and twenty gibberish papers in peer-reviewed journals, and by gibberish I mean Denver Guardian-level gibberish. This was the MIT CS paper generator used to produce them; this is the paper I wrote in about ten seconds using the generator, on "a methodology for the improvement of rasterization," and that got through peer review at these journals, which then said: yes, this happened, we're really sorry about it, and we're going to run a new piece of software to detect gibberish in our papers, because there are too many for humans to look at. That's a problem. Just as it's a problem that the thing we celebrate in the iPhone, a secure enclave to make sure that even the FBI can't get in with a warrant (many of us celebrate that), if it's AI inside, means that outside of the companies that have the postdocs and the money and the data and the processors, you may never have anybody able to have that information and those skills leak to the rest of the world. And so I end with a call to recognize a kind of new learned profession, wherever you are, whether you're in academia or not. The original three learned professions, the ones that had obligations to society as well as to whatever they wanted to pursue, were divinity, law, and medicine, because these were the three professions that required such a huge amount of knowledge to be good at, and were thought to be accessing levers of power (God, law, and health) that could so affect people that they needed to have principles beyond just their own self-interest. There ended up being a fourth learned profession, surveying, in the 1800s; very important to get boundaries right. But I'm suggesting there should be another learned profession, maybe around data science, maybe around the use of AI: those who, whether they are in a bedroom with a $99 computer and rockets, or in academia, or at Google, think about themselves as a cohort with enormous access to power, and with grave responsibilities for what they're doing, and who start to work through what exercising those responsibilities well would look like. We don't have answers to those questions. Kathy, here among us, is working with others on a data oath that people might take, the kind of thing that is part of the indicia of being part of a profession. What should the contents of that oath be? We're not sure; we're just embarking on this. I can't wait (well, I guess I will have to wait) to look back, ten or fifteen years later, on a lecture like this one, or like Joey's, I hope with a lot of the answers filled in through the work of people like those in this room, both working on the principles and working on the software and the systems that so scare us.
So my charge to us, as we begin, is to be able to start filling in the blanks: what are the specific problems we see, what larger problems are they linked to, and how are we going to think about them? At what locus should the problem be solved: are we kicking it to a legislature, or are we thinking that the individual conscience of the people building these systems should be the flash point for understanding the ethical moment that we have? With that, I think lunch is available outside, so I unilaterally declare us adjourned, and I look forward to the rest of the class. Thank you.

So, feel free to, like, shake your hand vigorously if you have a question or a point. We want to be a little bit freeform, so I'm going to let each of them sort of describe their work so you understand who they are. But since this is, is it a course or a class, what is it called? It's a course, OK. So since it's a course, and you're participants, and we're really at the exploratory phase, where we're trying to figure out what we're trying to figure out: once you start to understand who these people are, if there's any question, even one not really related to the specific topic we're talking about, you should feel free to ask them. I think they weren't here in the morning because they slept in, so if you ask a question about something that we talked about earlier, make sure that the question is boxed in the appropriate context, so they understand the question. I at least mentioned these two in the morning, but they don't know the context in which I mentioned them. So with that, maybe we'll just go, and just sort of briefly (though I might double-click you to go deep on a few of the things that you say) describe roughly your work and your point of view on machine learning and ethics, and then we'll try to have a conversation with everyone. Agreed?

So, my name is Karthik. I do statistical machine learning and natural language processing. I'm someone who was quite literally rescued by Joey; that's how I kind of see my path. I would have been one of those other people who take the same route that JZ described in his last couple of slides and just end up in a big company. What I try to work on here is: how do you do better human-in-the-loop machine learning, and how do you give importance to perspective? I met some of the Assemblers a week or so ago, and I basically said: I view machine learning as a very young discipline, and several people have pointed out that it still seems to be going through an adolescent stage. There's a lot that we can learn from very rich, epistemologically established, mature fields in the social sciences, like philosophy and psychology, different forms of psychology, developmental, clinical, and use that knowledge both to work on better engineering processes and better math, and to think deeply about the machine learning systems we're building, and not just pledge an oath to the Church of Prediction. So that's basically what I do.

And maybe you could describe lensing and bias a little bit, because bias was something we talked about earlier. Yeah. I took a class that Joey teaches with Tenzin Priyadarshi, who, if you don't know, is a Buddhist monk here at MIT, a class called Principles of Awareness, and the idea of lensing is something I derived directly from taking that class. If you have a machine learning engineer or a statistician who builds a machine learning model: yes, there's bias baked into the data, but it's also very important to understand from what perspective the modeling is happening.
So, for instance, we always talk about the predictive policing examples. If it's somebody analyzing crime data from the NYPD from the 1980s onwards, without the background knowledge or perspective that certain neighborhoods, historically, in certain boroughs with different socioeconomic backgrounds, were heavily policed for the same kinds of crimes that often don't get reported as much, though they take place, in other boroughs: what would it mean to have a system that says we're going to do predictive policing equally in all the boroughs? And how differently would a cop, or somebody who best understands the system, build the model? The idea of lensing is that all of us are looking at the world through our own lenses. Some of it is informed by our experience, some of it by our education and training, and some of it is just bias baked into our social processes. Focusing on that, extracting it, trying to represent it statistically as best we can, and embedding that in a human-in-the-loop machine learning process is super important. The same algorithm and the same kinds of data, with different lenses, will just create vastly different-looking models. That's what lensing is for.

It's too late this year, but the awareness class happens in the morning from ten o'clock (it actually starts with twenty minutes of silence, and it's about exploring intrinsic things), and in the evenings we have this class, so it's kind of a good pairing if you want a balanced approach, thinking with one side of your brain and then the other.
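A minimal code sketch of Karthik's lensing point; every number here is invented for illustration. The same reported-crime counts, read through different assumptions about how heavily each borough is policed, yield very different pictures of underlying crime:

```python
# "Lensing" sketch (all numbers hypothetical): identical data, different
# lens, vastly different model of what is actually happening.
reported = {"borough_A": 900, "borough_B": 300}   # observed crime reports

# Lens 1: an engineer with no context assumes reports == crime.
naive = {b: float(n) for b, n in reported.items()}

# Lens 2: a domain expert believes borough_A is policed far more heavily,
# so a much larger share of its offenses get recorded.
assumed_detection_rate = {"borough_A": 0.9, "borough_B": 0.3}
adjusted = {b: reported[b] / assumed_detection_rate[b] for b in reported}

print(naive)     # {'borough_A': 900.0, 'borough_B': 300.0}   "A has 3x the crime"
print(adjusted)  # {'borough_A': 1000.0, 'borough_B': 1000.0} same underlying rate
```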
And I may be one of the few biologists actually doing machine learning. I think it's probably fair to say that many machine learning people are doing health, but there are very few people trained fundamentally in biology doing machine learning; I happen to be in the reverse demographic. I'm fascinated by understanding complex systems and by how learning works, along with human cognition, and that's why I became a biologist: in biology there's a lot to understand, and you have to put all that knowledge in context. That's what attracted me to biology first, when I was a child. Later on, over the last two or three years, I've become very fascinated with making machines cognitive like us. By that I mean there is machine saliency, as Joey mentioned, and there is human saliency. When humans look at an object, they have a certain cognitive bias, or evolutionary benefit, in how they look at it and understand patterns and shapes. Machines are something we are training right now, and although they are at a lower level of cognition than humans, if I may say, in some early systems they have unique points of saliency, and some of those salient views that machines, or algorithms, have can be exploited for medical knowledge. So my research program, and one of the ideas Joey and I have discussed, is to use machine saliency to find new medical information that humans may think is not necessary or not useful. That's one. I'm also very passionate about looking at how learning systems can be applied at the point of care: translating all the information we have in labs into point-of-care medical technology, making machine learning more accessible and deployable to help people at the bottom of the pyramid, because I grew up in India, I come from a relatively middle-class family, and I've seen both sides of the spectrum. And the final thing is ethics, which Joey and I were discussing briefly yesterday, especially in healthcare and pharma: billions of dollars have been spent on potentially life-saving drugs, and we don't have a good idea of what the human ethics are there, and when machine learning is introduced, I think we have an opportunity to introduce correct ethics into those learning systems.

All right, can you hear me in the back? OK, thanks. I lead a group called the Probabilistic Computing Project. The questions my group tries to answer are: how can we build AI systems that go beyond pattern recognition, the kinds of reflexive judgments that maybe the human organism can make in one or two hundred milliseconds, and move up to types of intelligence that might take a second, or maybe minutes, or in a couple of cases hours of deliberation? Just to give two examples. An example of an inference we make all the time: when we're driving and we see somebody on the street, we can infer things like, are they likely to turn, to change direction, or does it seem like they're just going to go straight ahead? That's an example of something that takes more than a couple hundred milliseconds, and in that way it's actually not all that well suited to something like deep learning. And then, for an example of a judgment call that takes hours: we did some work for the Gates Foundation, giving them an AI system that could make judgment calls about new datasets. They get a new dataset that represents a field study, and they want to know: was the study probably done correctly or not? Are the predictive relationships between the variables what you'd expect to see if the field protocol was correctly followed? Is there some bias based on the site that was being used to collect the data? Does the outcome measure somehow not appear to be reflective of all the variables that were being manipulated? Those are judgment calls that a statistician might spend hours or more navigating, and we built an AI system that could help them do that, looking a little bit deeper. I would say, actually, that I don't do machine learning; we really do AI research. The distinction is that machine learning has really shone on problems where there are objective right-or-wrong answers, where the cost of an error is small, and where there is ubiquitous data; in all the problems I'm describing, none of those characteristics applies. There's some inherent ambiguity that the human organism needs to navigate to solve the problem, and we need to give our computing systems the ability to deal with that kind of ambiguity and uncertainty. Another theme the group has been looking at is: how can we make AI technology that can deliver these capacities but also be more accessible, so that people with an IT background, or maybe even people who can just navigate a spreadsheet, can make use of AI capabilities instead of having to acquire a lot of technical expertise? As far as ethics, there are two places where I'm really inspired to be thinking about and working on the interface of AI and ethics right now. The first one is pretty tactical.
Although there's a prevailing narrative that says, I think rightly, that there are some real risks posed by AI technology, I think there are also real opportunities to deploy AI technology in a way that helps, in service of justice. With people like Joey, I've been involved in a non-profit effort to get some early test cases going that put open-source AI technology in the hands of people working for the public good, to help them make better, more empirically grounded arguments in service of justice. That's one example. And then, long term (and this is something I really want to invite this group to think about), I think one of the deepest invitations that AI makes, for people who are interested in ethics, is to confront the question of what it means to have an ethics that's uncertain. One of the drivers of the last maybe ten or fifteen years of progress in AI has been embracing probability and uncertainty: moving away from computer systems that just have simple right-or-wrong answers to computer systems that consider ambiguous possibilities. And looking out five, ten, fifteen years, it's exciting to think that that way of thinking could get brought into work on ethics and moral philosophy, and maybe even policy and law, to start giving us a handle on ineffable questions that have been very hard to treat rigorously or carefully.

That might end up going backwards. Everybody should feel free to jump in, but I want to click on one thing, and then look at my philosophers a little bit. What I find interesting: I talked this morning, in a playful way, about some of my friends who tend to believe that they can win at life, that life has basic parameters and they're optimizing. But I think most people have a palette of ever-changing yearnings: every day they have a different set of things they'd like to do. And there's a view that principled people have some rules that they follow, that they're organized, and that your values shouldn't change from day to day. But there's a somewhat random and stochastic nature to things, like meeting people serendipitously, or the thing you ate that's changing your gut biome to make you feel a little more anxious and a little less friendly. There's all this stuff happening in your daily life that is sort of probabilistic. (Your earring was hitting that mic, so she's giving you another one.) So I guess the question I have is this. One of the arguments I have against AIs, the current ones we have, let's call it machine learning, is that you sort of have to give the machine parameters for optimization, or create a game for it to win. Is there a way, in probabilistic programming, to be a little bit more like humans, you know, a little more sophisticated than a random number generator, where the machine is constantly able to juggle a whole bunch of things, and not over-optimize and create these horrible scenarios where you've solved the problem and destroyed the world? I mean, are you going to help us with making AIs more like people?

Yeah, OK, so: I hope so. But let me borrow an example from Stuart Russell, who's the co-author of one of the leading AI textbooks; for people who are interested in this, he gave a TED talk on this topic that I think is really worth seeing. The scenario he posed in the TED talk was this.
Imagine you program a robot in such a way that it has to follow your orders, and then you tell it: get me coffee. You may not realize that what you've done is give it permission to kill you, and indeed to kill all people, in service of this goal of getting you the coffee. Asimov's stories and the Three Laws of Robotics were one early cut at these issues, but this is actually an engineering problem right now for the people who design the objective functions that these autonomous systems are trying to implement, and for the whole design methodology. (I'm going to toss you the mic. Yeah, sorry, bad toss.) So I'll just finish the story. This question of, is there an alternative: can we somehow express a value or a preference that has characteristics an objective function doesn't, one that is defeasible, for example? Stuart's point in this TED talk is that the key principle is you have to make the robot not just try to do what you want, but be actively uncertain about what it is that you want, engaged in a process of worrying about what you probably want, which might mean what it thinks you want, or might mean something totally different. And it's that wiggle room, where the robot is designed to be in a process of inquiry, formalized probabilistically, that prevents at least some of these extreme failure modes, like the robot killing you because it thinks that's the only way to get you the coffee you asked for.

And just to tie it back: it helps with the example that Jonathan Zittrain showed about The DAO, the thing that got hacked. The point about that is that you have a contract that will just march forward and pay out the hacker fifty million dollars, because it said it would. With a probabilistic programming thing, where it's constantly questioning (wait, is this supposed to be what's going on?), you can interrogate it and say: actually, that's not what I meant, that was just a bug. Whereas right now we don't have a way to do that. So that could be a solution for that sort of thing.

Yeah. I would say that this programming methodology evolved partly out of attempts to reverse-engineer the psyche; that's actually where my background came from, since my lab is in the Brain and Cognitive Sciences department, and I think that perspective maybe opens up new doors from an engineering perspective. It also creates all sorts of new kinds of room for error, because now, when you've written a probabilistic program and you try to get it to do something, sometimes it won't, and that's correct. So we're really in the earliest stages of understanding how to work with these tools, but I do think they point in a fundamental direction for resolving these problems, and they create a whole host of new ones.
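A toy rendering of the Russell point just made; the belief distribution, the harm numbers, and the decision threshold are all invented for illustration. Instead of a fixed objective, the robot keeps a probability distribution over what you actually want, and it asks rather than acts when expected harm under that uncertainty is too high:

```python
# Toy sketch of "uncertainty about the objective" (all numbers invented):
# the robot never treats its reading of your goal as certain.
beliefs = {                         # robot's distribution over your true goal
    "coffee_at_any_cost": 0.05,
    "coffee_politely":    0.80,
    "changed_their_mind": 0.15,
}
harm_if_aggressive = {              # harm of the aggressive plan under each goal
    "coffee_at_any_cost": 0.0,
    "coffee_politely":    1.0,
    "changed_their_mind": 1.0,
}

def act_or_ask(plan_value: float = 0.5) -> str:
    expected_harm = sum(p * harm_if_aggressive[g] for g, p in beliefs.items())
    if expected_harm > plan_value:
        return "ask the human before acting"   # the wiggle room the DAO's contract lacked
    return "execute the plan"

print(act_or_ask())   # -> "ask the human before acting"
```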
Introduce yourself. Yeah, I'm Anna. So, downstairs, a couple of floors down, Rosalind Picard has her affective computing laboratory, which is, of course, working on how to integrate emotions into computing. How do you think probabilistic computing can intersect with affective computing to make, you know, an algorithm that's not going to kill you to get you your coffee?

Yeah, great question, and there are two ways. I actually spent all day yesterday with Intel, with their anticipatory computing group, which is sort of resonant with some of the groups here at the Media Lab, like Fluid Interfaces and Affective Computing; that corner of Intel is trying to build computing systems that can know us a little bit better, in very simple ways, like what are we trying to get the computer or the car to do, and in maybe deeper ways, like how are we feeling while we do it. So certainly, as a component technology, perceptions about a person's emotion are uncertain. I mean, even for me, with my old friends, I may think I know what they're feeling, and sometimes I'm right, and sometimes I presume I'm right and then we get in a fight. So in that sense, probabilistic programming is really one of the component technologies for building more effective affective computing systems. The other part I'll say is that, on a longer time scale, again maybe more like ten years, my hope is that this way of thinking will help psychologists. There's Lisa Feldman Barrett in the Boston area, at Northeastern, and people who are trying to build theories of interoception: what is happening when we interrogate our somatic experience, or our own emotional experience? Really asking the question: what is emotion, and can we develop a richer, more textured intellectual understanding of it? That, I would say, is in very early stages, but I'm also excited to see where that will go over the next ten years.

And, you know, I have a friend who had, I think it was a pituitary, but it was an endocrine disorder, and the doctor gave her different cocktails of hormones each day, with testosterone and other things, and she would inject them, and each time it was a completely different personality, she said. She didn't realize the extent to which the chemical balance defined who she was, and she picked the one that she liked the best, and now she shoots it every day, and that's the person she is; but she could always become another person. And what's interesting is that it's just weird, because your intent comes from your emotions, but if your intent can affect your emotions, you have this very weird feedback system that becomes sort of too weirdly self-referential. And then, the thing about affective computing: there's a project that Roz does together with Cynthia Breazeal where they're measuring the affect of a child, and it also turns out that robot body language matters. For instance, adults always nod when you talk to them, but children nod, I think, thirty percent of the time, and if the robot nods thirty percent of the time, the child is more likely to trust the robot, and there's a bunch of things that robot body language can do to increase trust. And if you capture the affect, then for education you can tell, by looking at the face, whether they're challenged, bored, or just right, so instead of having to test them all the time, you can actually tune the learning; and then you can tune the body language so that they're more trusting, and then you model the child's brain to try to make sure that you get, well, all great stuff, if you're trying to teach a child good things. But if you imagine that suddenly affect is coming in and you're able to manipulate people without going through their conscious filters, there are ethical challenges. So that's one thing to think about.

And then the other thing, and I want to tie this a little bit to get you to talk a little more about your lensing (and you can just tell me if the way I'm describing this is wrong): generally, the way that machine learning currently works is this.
You have a bunch of data and an engineer who keeps fiddling with the knobs, trying to get a high percentage accuracy on another set of data used to test the thing, and once it's done, it gets locked in and deployed, and then people say, oh, it worked ninety percent of the time. What Karthik is doing with human-in-the-loop machine learning is that a human is actually in the training loop: if you have the police officer, or the psychologist, and they look at the data and say, no, that's not what I would do, that's not what I think, that actually changes the model, not just the assessment of the outcome. And so there are two pieces here for me. One is that, as we start to bring humans into the training loop, if you make the interface such that the philosopher or the judge or the police person continuously makes the model better, rather than just assessing the output, I think that's an interesting thing. And then the affect piece: if you're in a car, and the Tesla is making everyone nervous, that should be data that informs the model, right? Every time you go around this curve, every Tesla is going to go the same way, and if everybody always gets stressed, it should update the model, or learn something from that. So the sort of meta-question is: one, is the way I described humans in the loop the right thing, and two, can affect be one of the inputs, or should it be one of the inputs, to the models you're working on?

So I would say human-in-the-loop machine learning is not a new idea; it's been around since the seventies. There was a statistician called George Box who basically said: if you're going to do any Bayesian machine learning, it's very important to state the assumptions that you are encoding into your model, because all of us are working with assumptions all the time; then estimate the model; and then, instead of what we currently do in machine learning, which is "model evaluation" and the twisting-of-the-knobs example that you gave (we sometimes describe that as superstitious twiddling of the knobs in the dark, because there's no principled way of configuring these parameters to just get the desired output, and most of the time the evaluation metric is one of classification, whether it's accurate or not), in human-in-the-loop machine learning we say model criticism is a question of evaluating the assumptions you made when you first started modeling the whole problem. And so the question of what evaluation function, or discrepancy function, you're going to choose has to come from you. When you involve the human in the loop, for example the police officer, or the cardiologist, whoever's doing the training, you very quickly understand that we've moved so far in machine learning toward prediction, prediction, prediction that I think we sometimes forget that what we think is of value from the machine learning system might not be the same thing that is valued by the people who best understand the data. For the cardiologist who's just looking at angiograms, an accurate representation of an angiogram is probably not the most important thing, because people can go and get their angiograms themselves. So I think that with human-in-the-loop, it's not just a question of better machine learning models; it's a first-principles look at the entire process, and, actually, I think it asks us to be a little more humble in how we approach the fuzzy nature of the problem.
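A minimal sketch of the Box-style workflow Karthik is describing, assuming numpy; the heavy-tailed data and the choice of a max-value discrepancy are invented for illustration. The steps are: state an assumption, estimate the model, then criticize it with a discrepancy function chosen by the human in the loop:

```python
# Box-style model criticism sketch: fit a Gaussian to data that is secretly
# heavy-tailed, then check a discrepancy the domain expert cares about
# (here, the largest absolute value) against what the fitted model predicts.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_t(df=2, size=500)     # reality: heavy-tailed

mu, sigma = data.mean(), data.std()       # stated assumption: data is Gaussian

def discrepancy(x):                       # chosen by the human in the loop
    return np.max(np.abs(x))

sims = [discrepancy(rng.normal(mu, sigma, size=data.size)) for _ in range(1000)]
p_value = np.mean([s >= discrepancy(data) for s in sims])
print(f"predictive p-value for the max statistic: {p_value:.3f}")  # near 0: model criticized
```

An accuracy-style metric can look fine while a check like this fails, which is the difference between twiddling knobs and criticizing assumptions.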
With affect (and you were in that group too): I come from the affective computing group, I got my PhD there, but I also have a meditation practice from a very young age, and I just come from a different culture, and one of the things I always grapple with is that I think many of the categories of emotion that people work on are maybe a little too reductionist. I don't like to describe what I am feeling that way, and when you deal with things like what the emotion in a child is, and so on, there are so many layers of things going on, from developmental psychology to what the kid ate, and it's so intricately complex that I'm not very sure that a machine learning approach is how I would approach it.

Three or four rows behind you. (Look at that; thank you for passing it, not throwing it, that scared me.) So: Sarah, Assembler, and Google public policy. I have a question about humans in the feedback loop, and really specifically the end user. To what extent should the end user be in the feedback loop, and in what context? And the question is really more about: is that collective accountability and influence, is that representation, or is that garbage in, garbage out? When you say feedback, can you define that a little more? So, the feedback loop is constantly saying, yes, this was right, no, this was wrong, and vice versa. In my head I think about, like, Smart Reply: I liked that response, I didn't like that response. But more in terms of humans gauging the "accuracy" (and I put that term in air quotes) of what that means. So: is that good, and in what context? Is that representative? Is that accountability? Or is it sometimes, with bad data, garbage in, garbage out?

Can I just add, and tie that to a related piece that came up at the end of JZ's talk, which Karthik can also speak to: it ties to the explainability thing. If you're a doctor and you don't understand the output, that's also an interesting feedback problem. One of the things that JZ showed, Karthik, that you didn't see, was, I think, a paper in medicine that showed machines predicting death by cardiac arrest, by heart attack, but there wasn't really any explanation for the relationship, or it was very complex. But you're also showing (and you can talk about this a little more) that maybe there actually is a theory or an understanding that could be derived, just not through our current framework, and that maybe listening and thinking about it matters. So there's the simple thing of just feedback, yes or no, but there's, I think, a much bigger one, where the computer gives you a whole bunch of facts that completely contradict any framework that you have: what is the response of the human being, and of the relationship with the computer, when the machine and the human are interacting?

Two other quick comments tying into what you're saying. I think machine learning, or AI, whatever we are working on, was invented by computer scientists who did math, and now we are expecting machines to behave like humans when they were essentially derived from mathematical principles; and I am a huge fan of mathematical principles, because it's math. But one thing is that there are some other machine learning models that start from biologically inspired systems, like the brain and the neocortex.
Making more lateral connections between neurons in an algorithm is one way to move toward a more humanized way of looking at machines; that's one comment. And the second is: error in machine learning is highly penalized right now; the machine cannot be wrong. In fact, if you look at computer science publications, it's: what is the area under the curve, the ROC curve, of your algorithm? That's the accuracy. Whereas we, as humans, are usually wrong about many things, but we hold machines to a higher moral standard. And that goes to Joey's question: in healthcare that seems to be incredibly important, because if you're diagnosing someone, you cannot make a mistake, and our entire healthcare system is set up with those paradigms, which I think are useful and should be enforced. And then, as Joey pointed out, there are many things we are discovering in my research where the policy (we call it a policy) that an expert or a doctor uses to treat a patient is X, and when you use machine learning, or we do reinforcement learning or advanced unsupervised learning techniques, it comes up with a completely new policy, what we call the machine policy, and it kind of defies, as Joey pointed out, what we understand about treating the disease. And I don't know the answer to that question, to be very honest. I think more research is needed, and we need to accept that machines can be wrong, but so can we, and come up with new paradigms of law, learning, and ethics that can incorporate this back-and-forth between humans and machines working together, rather than antagonizing each other.

Maybe you can tie that to lensing too? Right. On the feedback-loop question, one concrete example I can give is something that's been happening since the 1920s: we just don't take female cardiac symptoms as seriously as we probably should. When you trace the whole thing, you realize, yes, women were not put in any cardiology trials until 1993, when Congress had to step in and say, no, you have to include women, because they were, quote-unquote, thought to be immune from heart disease because of their hormones. Great. I think that when you use machine learning to unearth a very deeply embedded, complex problem like this, which is a different way of looking at it, then, of course it's garbage in, garbage out, the data is already heavily oriented toward males, but it becomes a little more interesting. In the work that I'm doing, when I first went to the Brigham, they were not too happy, probably wondering, who is this guy with a funny accent who thinks he can just barge into a cardiology lab? So there was no trust initially. But over a period of time, when you involve the doctor in the lensing loop, and you show them why it is that only twenty percent of all cardiac investigations in North America are performed on women, when more women than men die, and when the way women verbally express their symptoms, or describe them to a doctor, is also very different from how a man would describe them, you arrive at a point where you realize: oh my gosh, I just used lensing and machine learning to actually show them a fundamental bias. But the fix is not a predictive machine learning system that tells them when a female patient has heart disease. I think the fix is also pretty complex, like your example of a complex self-adaptive system. Med school, internal medicine, cardiology kind of drill into you: look for typical symptoms, look for atypical symptoms. You go to London and you see these big double-decker red buses with the campaign image of a person having a heart attack, usually a guy
But the fix is not a predictive machine learning system that tells them when a female patient has heart disease. I think the fix is also pretty complex, like your example of a complex self-adaptive system. In med school, in internal medicine and cardiology, it's kind of drilled into you: look for the atypical symptoms. You go to London and you see these big double-decker red buses: the person with a heart attack is usually, like, a guy with a suitcase just coming out of a hotel, who has just had a huge meal and looks like he's having a heart attack, which is, like, this campaign to make people aware of how to recognize one. And the one they did for women was one where she calls 911 but she wants to clean the last pile of dust at home before opening the door when they come in. So the messaging is screwed up. It's a complex, multi-layered problem, and no machine, and no number of George Boxes, will actually solve that. The feedback cycles are so crazy that it goes beyond garbage in, garbage out.

Maybe one in front? Hi, my name is Alana. I'm a high school senior from Washington, DC. The question I have is more so related to what Professor Zittrain said at the end of his presentation about explainability, and to what degree explainability relates to the success of an algorithm. Because I know in the context of something benign, like Netflix recommending your movie choices, maybe we don't need to know how all the nuts and bolts work or what they are. But in the context of something more serious, like parole decisions, or deciding how long a person's prison sentence is, we would like to think that we can know what the variables are and how much weight is being given to them. So the question I have is: to what degree does explainability relate to the success of an algorithm, and also, do you have to sacrifice accuracy for transparency and explainability?

I think that's a fantastic question. I think Joi also mentioned this with medicine: the phenomenon of wanting to understand mechanisms is very prevalent in healthcare and the clinic. If you publish a paper in a high-impact journal like Nature, and you have a state-of-the-art machine learning algorithm that works at 90 percent accuracy, the reviewers will still say, show me the mechanism. People have done the t-SNE plots, reverse-engineering the neurons and sensing how they fire. And you are absolutely right in the opinion that a lot of time would be spent on making the algorithm explainable. What is useful, and this is my personal opinion, if I may say it, is that right now we should be trying to get accuracy, and moving fast towards ethics and making them morally accountable for what we are doing, versus trying to spend too much time on making them explainable. I think those can go side by side, and partially because I think medicine is just so crappy at understanding what's going on. I think when people are trained in medical school or in biology, they are trained with mechanisms, pathways, and understanding, and sometimes it's over-information; it's not required.
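For readers who haven't seen the t-SNE probing mentioned above, here is a minimal sketch, assuming Python with scikit-learn and matplotlib; the model and dataset are stand-ins, not anything from the panel: you take a trained network's hidden-layer activations and project them to two dimensions to see what structure the network has learned.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=400,
                    random_state=0).fit(X, y)

# Recompute the hidden layer by hand (ReLU is MLPClassifier's default),
# since the estimator doesn't expose activations directly.
hidden = np.maximum(0, X @ net.coefs_[0] + net.intercepts_[0])

# Project the activations to 2-D; clusters hint at what the net encodes.
emb = TSNE(n_components=2, random_state=0).fit_transform(hidden)
plt.scatter(emb[:, 0], emb[:, 1], c=y, s=5, cmap="tab10")
plt.title("t-SNE of hidden-layer activations")
plt.show()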
Two comments; this is a fundamental question and there are no simple answers. The first comment is, I'm not so sure there's an intrinsic trade-off between explainability and accuracy, but the lens I want to offer for that is: how have we as a society developed mechanisms to help people build credibility in the solutions that they offer? There's a whole dialogue that's only beginning to get started in machine learning. Some of the people in the field, maybe a couple of generations my senior, who were involved in starting machine learning; if you talk to them about it, like Tom Dietterich, who organized a bunch of the Obama administration's events on AI, he will say, yeah, we were just so amazed that any of this stuff worked, we weren't worried about how to make it reliable or auditable, and now people are adopting it like crazy. So the angle I sort of want to offer is that the academic mode of work in machine learning, and you see this in Silicon Valley too, is that you're just sort of trying to get something to work and measure its accuracy, and I think that's a big part of the problem. In other engineering disciplines, if you talk to Ford about how they design some component system of a car, the way they think about what claims they can make about its fitness for purpose and its safety is just much richer and much more mature. They think a lot about all sorts of different ways it could go wrong; they don't reductively summarize its function or fitness for purpose in a single number. There's a whole dialogue around, is this an appropriate component technology to be bringing out there? And I think as we start thinking more that way about AI, rather than focusing, to Joi's point, on this one accuracy number, then we can start looking at different depths or kinds of explanation which are suitable to explain different sorts of errors. It's just a very complex picture, and my hope is that that broader view will help us make progress.

The second point I want to make is something I learned from Charles Nesson at Berkman. He pointed out that in human decision-making, sometimes the integrity of a decision-making process actually depends on the right to not explain yourself. When he said that, I was a little bit surprised at first, but then he said, well, think about a jury trial: the sanctity of the jury not having to explain why it decided something is a key safety valve that underpins the integrity of that process. Now, I don't think we should have machines making decisions that carry that kind of moral weight any time in the near future, but I just want to point out that, from a first-principles perspective, I don't know whether explainability is always a virtue.

We have time for one more, and maybe we'll try to pick somebody who hasn't asked; I guess both of you have already asked. So: you, actually, behind you. What's your name? Hello. Okay, so it's not really a fully fleshed-out question, but the jury parallel made me think a little bit. I do think there is a potential difference, because that kind of immunity from having to explain your decision comes from it being a person chosen from among many, and so somehow we're not expecting that person to be able to explain their decision fully. But when you're talking about a machine, or a very articulated system, or a very intelligent person, then I think maybe there are different considerations, and you might have the right to reclaim more from that machine. So that's my thought.

I'll add that one of the arguments I've heard for juries is that you don't want them to be influenced by the political fallout of having to explain or justify what they say. And, sorry, one other thing: if you think about a category of machine which is just collecting the data, maybe the sentiment, of a whole bunch of people, and being a machine that is a democracy, like in one of those stories,
then it doesn't actually know; it's like the Chinese room, right? It doesn't actually know what's going on in everyone's mind, but it is the machine that's aggregating what's going on in everybody's mind. So it would be the role of the judge that's managing the jury. So there are different, interesting layers of whether it's even explainable.

Yeah, the thing I really want to invite here, just riffing off of what both of you said, is that I think it's great that people are asking questions about pieces of software that are now, I'd argue, often inappropriately making decisions, and asking what explanations we get. But if we enlarge this question, we have to start asking: what are the explanations we already accept for the decisions of the other social systems that we're embedded in? A judge, or a police officer, or a prosecutor has forms of discretion they're allowed to exercise. Maybe that discretion is essential in some cases; maybe it's also a place where various kinds of bias and abuse can be hidden, much like, or in some ways like, how that would be the case if software were making an analogous decision. Or, you know, there are friends of mine in the entrepreneurship world; if I ask them for an explanation for something in the more commercial world, they'll say, well, the market decided it. Okay, well, what does that really mean? There are some discourses where that's a sufficient explanation. So I guess I hope that we will be led to think a little bit more precisely about what kinds of explanation we want, in what circumstances, what purpose the explanation is supposed to serve, and also what cost it imposes to ask for one, and to apply that more uniformly to both software and social systems. And we're out of time, so thank you very much. [Applause]

Okay, we're back. This actually, interestingly, kind of leads on from a lot of the conversations. So maybe what we'll start out doing is: introduce yourself and your work, but also, I think the three of you have been here through the day, so if there's anything in the conversations we've had so far that you want to add to, jump in. And obviously, for anybody out here, feel free to wave your hand and jump in too. But why don't we just kind of continue the conversation. Kade, why don't you start out?

Sure. My name is Kade Crockford. I work at the ACLU here in Boston, mostly on technology issues. I'm really interested in, I was just saying this to Joi, the first principles and ethics that Donella Meadows talked about. There seems to be a tendency in the policy world, when we think about how to integrate algorithmic decision-making into systems like the criminal justice system, to stay at the very surface level and have a lot of fights there, about what the algorithmic tool should look like and what types of data should be fed into it. And I think that has the effect of allowing us to continually avoid having the conversation at the much deeper level, about the values and goals and paradigms, and I think that's really dangerous. So one of the things I try to do in my work is encourage policymakers and technologists and people who are thinking in this space to avoid falling into that trap. I'll have a lot of other things to say, but that's all I want to say right now.
Good morning, afternoon, whatever. My name is Adam Foss. I just have one question for the audience, which will sort of explain why I do the work that I do: can you raise your hand if you identify as African American? So, take a look around the room, and sorry, brothers and sisters, for calling you out, and happy Black History Month. To Holly's point at the beginning, the reason the work that I do is so important to me is that we can have the smartest people talking about these issues, but unless we have the people who aren't here, because they're the ones being impacted by the system, in this room, then we're going to end up in the same world we've been in.

The other point that I wanted to pick up on was Kathy's. So, what I do: I was a prosecutor in Boston for ten years, and I just saw this cycle in the thing that we call a criminal justice system. As a prosecutor, you sit in this very weird role where you don't really have any affinity towards anyone: we don't have an individual client, and we certainly aren't making money from the decisions that we're making. We are just told to go and do justice, and we're fed a lot of stuff in school about how we're supposed to do that, but we don't actually get the tools to do it. And so we have this really negative impact on people of color, people from marginalized populations, people who are frankly just different from the people who are running the criminal justice system. So I created a curriculum, as a prosecutor, to teach other prosecutors things that we should have known before we ever had the ability to prosecute people: what does it look like inside of a jail or prison, what happens in there, and what does it do to your stated end goal? What is trauma, and how does it play out in the lives of the people who are impacted by our system? How much does poverty drive the actions of the people who are coming in? And this thing that was created hundreds of years ago, we saw a nice image of all those white men back in the day creating a system that we still operate with right now: should we be asking ourselves whether that's the best way to do it? I took that curriculum, which was awesome and effective when it was implemented in prosecutors' offices all over the country, to law schools, and their answer was to reject it, the idea being that they are no longer trying to create public interest lawyers, because we make bad donors. So when we ask ourselves how we change these corporations that we call universities, I think there is a greater strategy that has more to do with the reduction of profit in educational institutions, and what those metrics look like, than what we currently have. Those are the only two things I wanted to touch on this morning; I'm sure we'll have a lot more to talk about, but I come at this from a lot of ways. Fundamentally: I brought Jordan. Jordan, raise your hand. Jordan's a junior in high school. I didn't step into this building until I was 36 years old, and as such I never realized that I had a place here. So it was really important to me to bring someone who's twenty years my junior inside MIT, to see, hey, he has a place here too. So make sure you talk to Jordan today, because he's dope AF.
Just to pick up on one thing that you talk about a lot, which I think ties into this: you often tell a story about what happened to the people that you touched in one direction, sending them to prison or not, and that you had the choice, but that you don't have a system where the prosecutors get that feedback. You don't get that, right? So one of the things, and this is what we're doing in Chelsea and some other places, is, now that we have data, isn't there an opportunity for prosecutors and others who affect people's lives to see what happens to those people? Because that seemed to be one of the things you were saying: couldn't we do that?

Yeah, and I think we can, and it gets to Kade's point, which is that we have to want to do that. That's, I think, what you're working on: maybe some of these tools that we're using to "accurately" throw people in prison can also accurately reflect back to the prosecutors what happens to the people that you throw in prison versus not. Right, and that's the work that I'm doing here. I'm a Director's Fellow at the Media Lab, and the work we're doing here is more about understanding that cultural change is going to require incentivizing behavior, and this is again why prosecutors are really interesting: we don't really have an incentive to send people to prison, because we know at this point that it's really bad for our core objective.

There's a comment there; maybe toss the mic over, if somebody has the microphone box. Thanks. Sorry, I didn't mean to stop the conversation. I just feel like it's not just about the opportunity, but also, how do we start to change the narrative so that it's about an obligation? Because we can, we ought to; maybe we should; and also, like, we must. I want to keep challenging us to think even further beyond just "oh, maybe we should." It's: how do we start to change things so that, if you look at it that way, you have to do it, there's no choice. So, just tossing that out there.

Okay. Hey everybody, I'm Chris Bavitz. I teach at HLS and I'm based at the Berkman Klein Center. I know a lot of you here because the thing I do most of the time is run this law school clinical program, which is essentially a law firm with five or six lawyers and thirty-ish students a semester that does legal work, including direct advising of clients, which is why we've made ourselves available to folks in the Assembly program who are building technologies and bumping up against legal issues, that sort of thing, to come seek out our services. So we're here as a resource for you. We also do a lot of advocacy work, including with groups like the ACLU of Massachusetts, or nationally, or the EFF, or other tech policy advocacy groups, where we help them speak out on policy issues by engaging with administrative processes, filing amicus briefs, that sort of thing. I guess my relevance to this conversation is that I'm also heavily involved with one of our research work streams related to AI and governance at Berkman, specifically the one we call Algorithms and Justice, which has to date been primarily focused on the criminal justice system, although the way we've conceived of it and framed it is that we're looking at the particular implications of the use of all the kinds of technologies we're talking about here today, black-box algorithms and machine learning and AI, by the government. So one key example we have here is criminal justice: assigning a risk score to a criminal defendant before evaluating what her bail should be, or
making a parole decision or a sentencing decision. That's one of them, but I think you can even pull back from that a little bit and talk at a higher level about the difference between when a private company makes a choice to use one of these particular not-so-transparent technologies and when the government chooses to use it. Someone on the previous panel mentioned the kind of deference to the market, and there are lots of problems with that, but ostensibly, if I'm trying to choose among three different social media sites I could use, and I have concerns about the way one of their algorithms delivers me information but I like another one, I have a choice among those; set aside antitrust issues and all of that. I don't have a choice when dealing with my government, when dealing with prosecutors, when dealing with my criminal justice system. So it is extra important that we get it right.

And I would say, on the point we were just discussing about ways to flip the script on the use of data: one thing I want to flag is that I completely agree, and this is some of the work we've been doing. So much of the emphasis has been on taking people who are already part way or all the way through the criminal justice system and saying, okay, what data do we have about them to predict outcomes: are they going to come back for their court hearing in the next number of months, or are they going to reoffend, that sort of thing. It is vitally important to think about using this data earlier on in the system. I will say that that has a lot of promise; it's also a little bit scary, because it possibly requires, and there are technological solutions to this, creating bundles of data and putting them in the hands of people who could use that data for good or for harm. And every once in a while, when I'm thinking about one of these great initiatives to use data and apply machine learning to predict outcomes and intervene early, in a diversion program or something like that, I picture Kade looking over my shoulder saying, well, wait a minute, the ACLU would have a lot of problems with the idea of all of these people talking to one another and sharing data.

That gets a little bit to the first video that I showed, which was a news clip that got the science wrong, right? And I'm going to merge this with a health thing: we thought diabetes was one thing, and it's actually multiple things that cause the same symptoms. Failure to appear is one of the things our team is working on, and failure to appear is like a bit, it's either yes or no, but actually it could be because you have to take care of your parents, or it could be because you're an addict, or it could be something else. The fact that, for the criminal justice system, it's just one thing is kind of like the medical system before we figured out that there were many causes for similar symptoms. So I think it's interesting that you can find the underlying data and that it can help you deal with the problem.
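A toy sketch of that point in Python; the categories and records are invented for illustration, not the team's actual system: collapsed to one bit, three very different failures to appear look identical, while kept as causes they suggest three different responses.

from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class CourtRecord:
    appeared: bool                 # what the system stores today: one bit
    cause: Optional[str] = None    # what a diagnostic view would need

records = [
    CourtRecord(False, "caregiving conflict"),
    CourtRecord(False, "substance-use crisis"),
    CourtRecord(False, "no transportation"),
    CourtRecord(True),
]

# Collapsed to the bit, every failure is the same "symptom":
print(sum(not r.appeared for r in records), "failures to appear")

# Kept as causes, the same records point to distinct interventions:
print(Counter(r.cause for r in records if not r.appeared))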
But the other meta thing, which is kind of interesting, and I want to be a little bit careful about how we go into this: in genomic research right now, there are a lot of interesting things we're learning about the relationship between genetics and various outcomes, and a lot of those results are being used by hate groups as the science behind their claims. And that's not even privacy stuff; it's just taking science and kind of twisting it around, using a bad version of it. So one of the questions that I don't actually have the answer to is this: when we talk about regulation, you could regulate the research and you could regulate the deployment, but with bad media, which is what we have right now, you also risk this weird thing, because you also don't want to prohibit research too much. One of the other things we did, the year before last, was the Forbidden Research conference, and there's this very interesting thing: we make it very difficult to do research on pedophilia and on things like sex robots, but they're coming, and we don't yet know, scientifically, whether giving somebody who has a problem with pedophilia a sex robot or VR actually makes them better or makes them worse. If we don't know that scientifically, it's very hard to come up with the right policy. So the inhibition of research is also really dangerous, and to your point, it's an area that's very fraught with both opportunity and risk. But change isn't going to wait for us to figure this out, right?

Yeah. I mean, I think one of the interesting examples of that is the study, I forget who it was, somebody did a study recently, showing that, allegedly, there's a "gay face," right? You guys are familiar with that? You know, there's gay face. Oh wow, that's scary, maybe, if Jeff Sessions is in control of the database of all the faces, right? But that gets to the question of: yeah, and I was thinking the exact same thing; prosecutors don't only have the option to do this, they have an obligation to actually understand what they're doing and the impact of their work. On the other hand, when I saw the slide about what Bono said, I thought to myself, so if I can punch Bono in the face, does that mean I should do it? And I find him to be unbelievably irritating, so I probably would if I had the opportunity. But anyway, the point is, I don't think that in many cases we actually should do things just because we can. There are a lot of examples of that in the work that I do. Gay face is maybe one of them, but the others are things like the creation of these enormous databases that law enforcement is amassing, not just at the federal level but at the city level as well, where they're collecting huge quantities of information about everyone: conducting mass surveillance using license plate readers, and soon facial recognition systems built into the ubiquitous CCTV that we have all over cities now, interlinked, networked cameras that can be operated from a single hub. These are things that we shouldn't do. We certainly can, and they are likely going to happen, but they are really bad ideas, and this is a really ugly area of the law.

You know, I was also thinking about Lessig's formulation of the four sorts of pressures that create the world we live in, and how they influence one another. My colleague Jay Stanley at the national ACLU gives this example all the time; it's really interesting, because people don't necessarily think about how technology and the law interact in ways that are maybe unexpected. One example is that the Wiretap Act, when it was passed in the 20th century,
had the impact of determining what the whole CCTV industry would look like. Why? Because you can't have audio recording. So that's a really weird interaction that I think the people who wrote the Wiretap Act could not have predicted. And I'm really concerned, sort of similarly, about the interaction between really, really bad historical law, Supreme Court precedent around, for example, a Terry stop. Do you guys know what that is? It's a Supreme Court ruling that says law enforcement basically can pat you down and force you to empty your pockets when they stop you on the street, if they have what's called reasonable, articulable suspicion that you may have a weapon on you. And this is a really frightening thing in combination with the kinds of databases and mass surveillance that I've described. I'm envisioning a future, if we don't get it right and make some interventions along the way, where law enforcement officials have contact lenses with facial recognition built into them. They're walking down the street scanning every person they see; a database system, working with some algorithms, is effectively coloring every person they see, maybe even in their field of vision, as yellow, green, or red. And that designation itself, whether or not the data it's based upon is even accurate, determines reasonable suspicion: I stopped you because my computer system colored you as red. Courts may very well think that's totally appropriate, and given historical precedent, I think that's actually likely. So I'm really worried about those areas where the law and technology are going to interact in ways that I think people are not worried enough about, frankly.

There is a real-world example of how we're already there, in a really scary way. There was a school fight between two school-aged children in Boston last week, at East Boston High School, that was described as a non-violent fight. I don't know what that is; they're calling it an argument, yes, an argument in the cafeteria. The students' names were put into the school database, which went through the school police database, which went to the Boston Regional Intelligence Center database, which highlighted one of these young people as a gang member, which alerted the federal government, which alerted them that he was also undocumented. And so, instead of the principal dealing with this situation, ICE dealt with this situation.

You've been tweeting about this a lot, right? Yeah, this is an issue that we're trying to grapple with at the ACLU, for sure. There are multiple places along the way where people made, or maybe didn't make, decisions, and simply allowed things to happen in a way that was really not very thoughtful. And I guess that, more than anything, is what concerns me: that we'll sort of just keep plodding along in the same direction we've been plodding, and increasingly, technology makes the direction we're heading in worse and worse and worse. Whether it's with respect to economic inequality, or racial injustice in the criminal justice system, or any number of other serious crises that we face as a society, technology just exacerbates those really, really quickly.

Well, and I think that's why DACA is really important, because if you're not a citizen you don't have the same rights, right? And like this case of the traumatized, victimized ex-girlfriend of
a gang member, her new boyfriend being undocumented and ending up on this list: it's just this kind of horrible mess that we have. And there are a couple of questions. I think there's, you know, Zack, and then I'm going to go first to people who haven't said anything. Go ahead.

Just a question about what you just referred to: there's this horrible thing that happens, and these systems reveal themselves as, like, automated injustice. How do we know about those systems beforehand? How are we able to have a discourse about these sorts of systems that are deployed, and be aware of them before these kinds of individual tragedies happen? Because they seem so hard to access or reveal until there's a victim, which seems crazy. I really want to hear what you guys would say.

What I'll say, from a very cynical point of view, is that that story is one that will always be trumped, sorry, by the use of the BRIC, the Boston Regional Intelligence Center, by the detectives and all these folks, MBTA, EMS, who will say, we need to hang on to the BRIC because we've been able to make all these successful arrests of all these violent people. You're not going to be able to find me even the most woke, white, progressive, wealthy liberal in Massachusetts who says, get rid of the BRIC because this one kid went away, when all these other "bad brown people" were prosecuted and incarcerated.

I have a different take, which is that we can, to some degree, future-proof processes, if not necessarily outcomes. One really interesting area the ACLU has been working in lately, that we are engaged in both here in Cambridge and in the city of Boston, is trying to get officials to pass municipal laws that require the police to go before the city council before they adopt new surveillance technology. Historically, the answer to your question would be: that's my job, to try to figure out what they don't want us to know and to tell everyone, and then, you're right, after the fact we play catch-up and try to regulate or outlaw certain things. This idea is to invert that, so that we have an opportunity at the outset, before the technologies or the database systems or the information-sharing agreements have been formalized or acquired, to have a public debate about (a) whether or not it's a good idea to buy the license plate readers, and (b) if we agree as a city that, yeah, we should have them, then how they're going to be used: how the information will be stored, how long it will be retained, with whom it can be shared, whether under any circumstances ICE will ever be able to get hold of it, that sort of thing. And I like the establishment of processes, as opposed to thinking about discrete technologies, because it allows us to, like I said, future-proof these problems.

How do you then also build into that: if we hit this benchmark, where we see this disparity, then we reconvene and talk about the technology, the whole reporting piece? Yeah, so built into these ordinances is also a requirement that the police report back to the council, on an annual or semiannual basis, about how the technology was used, racial disparities in the use of the technology, those types of things, so that we can continue to reassess.

Reinforcing everything that Kade said: in the process of doing this work on
government use of these kinds of tools, we've kind of left the realm of the really interesting, sexy Minority Report robotic judges and landed squarely in procurement, which is pretty interesting and important. We've mapped this cycle where outside private developers develop technology; government organizations decide, we need to procure a piece of technology, and they go out and get one; then that technology is deployed for a particular reason; and on the back end, to Kade's point, the technology needs to be tested and evaluated. And we see so many failings along the way in that cycle. Technology gets procured for purpose A. Say it's a risk-scoring tool that's being used to assess bail, and it determines that your risk score is four and your risk score is six, and someone along the way decides, well, let's use these risk scores for a purpose other than bail as well. So, set aside whether it was appropriate for bail in the first place; now it's being used for sentencing or parole or something like that, which is a really bad fit. At the implementation phase, we have judges using these kinds of technologies who don't fully understand that there's a six-factor test they need to apply to make a particular determination, and this technology is answering questions one, two, and three, and they now step in and answer questions three, four, five, and six, thus having double-counted factor three. So we need really rigid implementation guidelines for the people who are actually applying these things.
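A toy arithmetic sketch of that double-counting failure mode, with invented factor scores: the tool covers factors one through three, the judge then scores three through six, and factor three is silently weighed twice.

factors = {1: 2, 2: 1, 3: 3, 4: 0, 5: 1, 6: 2}   # hypothetical scores

tool_covers = {1, 2, 3}        # what the risk tool already answered
judge_scores = {3, 4, 5, 6}    # what the judge then assesses (overlap on 3)
judge_should = {4, 5, 6}       # what implementation guidelines should require

tool = sum(factors[f] for f in tool_covers)
naive = tool + sum(factors[f] for f in judge_scores)
right = tool + sum(factors[f] for f in judge_should)
print(naive, right)            # 12 vs 9: factor three inflated the total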
And again, it doesn't seem like the most interesting topic in the world, but in procurement, and then in government use, there's a lot of room for people in this room to be promulgating best practices and evaluating technology. Finally, to the last point about testing and evaluation: there is a trend when it comes to government procurement of technology, more so than anywhere else, toward inertia. We've all been at the DMV and looked over the shoulder of the person staring at some amber screen from thirty or forty years ago; that's the system they bought thirty or forty years ago, maybe thinking, we'll evaluate that at some point and make it better. That just doesn't happen very often when government buys technology. So I think, more than ever before, with these kinds of tools we need to be thinking about short cycles: let's put this in place for a short time, let's make the data widely available, and let the researchers do research to see what happened.

Just two short things. I do think that local governments function as better democracies than the federal government, and so I do think that localities, San Antonio and other places, are doing a really good job. And I do think procurement is important, because you kind of follow the money, right? Right now a lot of these vendors still haven't gotten to the size where they have corruption capability, but they will, so you want to cut it off before they get there. The problem is that the federal government buys things like the DHS system for screening people inbound into the country for terrorism, but that's dual-purpose, so you could easily see that go first to ICE and then next to things like the criminal justice system. So I think what's really important is to look at the business side too, to look at where the capabilities are being developed, and to try to head off what happened with voting machines and other places: you get a sufficient market to where eradication isn't just about convincing the city council, and you have to dig into the whole Lessig thing, which is the corruption part.

There's one question back there, and then we'll come back up here; you shouldn't raise your hand again. I don't know if you want to throw it or hand it back. I will try to throw this, but I'm very bad at basketball. Thank you. I'm perhaps looking for a silver lining here, but I'm wondering if there's an intrinsic tension between data, as it relates to the legibility project of the state, and individual liberty; or whether, on the flip side, there is some possibility of a new advocacy role for data scientists, where they might partner with communities and understand that data and models are not neutral but actually are intrinsically biased, and it's up to the data scientists to choose where that bias is going to fall. And whether you've seen examples of that in practice.

Yeah, definitely. The Ford Foundation has been funding, over the past four or so years, technology fellowships in organizations like the ACLU for the express purpose of creating a new career path for folks who come out of places like MIT. And there's a really interesting history there. I didn't know this until I got involved with this technology fellows program that Ford was running, but Ford was actually instrumental in the creation of a public interest legal career track as well, like fifty years ago. At that time, if you went to law school, you were either going to work for a private corporation or you were going to be a prosecutor; there was basically nothing else. And organizations like the ACLU, or rather, just the ACLU, existed; there weren't really other groups like it for many years. It was a really deliberate process to create funding opportunities for lawyers to work at organizations dealing with issues around climate and human rights and things like that, and the exact same thing is happening right now with respect to technologists in civil-service-type work. So that's, I think, really exciting and totally important.

And as a person who has spent most of my time working with young people in the criminal justice system, there's this weird line that I straddle. When I see the graphic about Facebook having a data set that says this person is in a budding relationship, working with young people in the criminal justice system, I would imagine that there is a similar graphic that looks like "a gang crime is about to happen, a violent gang crime is about to happen." How do we capture that data and use it in a way that addresses the needs of the kids? They're literally telling you this out loud: we're kids, and something bad's gonna happen if you don't do something; I'm going to commit this act. Like a kid who runs into a fight at school saying, please catch me before I actually have to throw a punch. How do we do that and treat the child, as opposed to using that information to go out and police that kid, search their house, or whatever? I would love to see a way that we straddle those two lines, accomplishing the goal that we want to accomplish, which is protecting the safety of that young person and the other young people around them, without going to that state where Kade is freaking out.

Joi, you were pushing for a kind of paradigm change in the last talk. I was just curious, for each of our speakers, whether you
support the idea that we should look at things that way, or whether it's too big; and if you do agree, what do you think a paradigm change would look like? If it helps, as an example: talking about ownership of data versus a rights-based approach to data is the kind of thing that would have a knock-on effect throughout the system.

I don't understand the last point of your question, and that's because I'm not that bright, but when I saw Joi's list of things, I see people doing criminal justice reform at, like, step twelve: let's change the laws, let's change the policies, let's push on the things that say these are the things we have to do. Where I came in, in building my organization, was: let me start with the paradigm shift. Because I don't care if it's a risk assessment tool, or a new piece of legislation, or something that's been repealed, or a new policy that's been issued by a prosecutor: if you don't change the paradigm by which those people are coming into the job, so that the thing they want to do is actually the thing we're told to do, which is fairness and justice and safety and all these things; if you don't change the paradigm of what those things are, I will use the risk assessment tool to find a way to continue to do the thing that I do right now. If you repeal minimum mandatory sentences for a drug crime, then instead of charging a person with one count of possession with intent to distribute crack cocaine, I will now count every rock in that bag and charge you with each one of those rocks, thereby doing an end-run around your repeal of that minimum mandatory sentence. We will always find a way, as the people in the system. We talk about the criminal justice system, prison, all these things as if they're entities that are driving behavior; it's actually the other way around. We, the people who are in it, are driving the outcomes and the efficacies of those institutions. And so, and again this is why I really heavily double down and focus on prosecutors, if we can change that paradigm, which is more about giving tools and empathy and understanding than it is about trying to shift the entire incentive structure, then I think we can turn this around.

I agree with all of that. I will say that I sometimes worry about my own work: that what we're doing is not focused enough on the paradigm-shift piece of this, and that we're tinkering at the margins. I was in a conversation recently with a group of people about autonomous weapons that I think drives this home. Let's say, hypothetically, that we could come up with a drone that could make more precise determinations than any human being ever could about targeting the people it intends to target and avoiding collateral damage to the people and things and property it doesn't intend to target. Applying a traditional paradigm around the rules of war, you might actually say, that's great, you've created an even more efficient weapon, a tool of destruction, and you've avoided asking the really big-picture question, which is: jeez, do we even want machines remotely involved in this process at all, for some of the higher-level moral reasons that I think Gabe was alluding to at the beginning?

Yeah. The paradigm that I would like to see shift is, I think, even more radical than that. Especially with respect to the criminal legal system: let's not actually invest in
it at all; let's invest in other things instead. A good example of this problem is what's been happening in Massachusetts with criminal justice reform. There's this omnibus criminal justice reform package that's now being worked out in a conference committee between the House and the Senate here, and risk assessment tools are a piece of that. The reason somebody wanted to bring risk assessment tools into this package is that there was a ruling at the Massachusetts high court a couple of years ago holding that, basically, the Massachusetts bail statute doesn't allow judges to hold people just because they can't afford to pay to get out of jail, or to stay out of jail; that it's unconstitutional. It's a finding that some courts in other parts of the country have also reached; this is like a traveling lawsuit that some advocates are doing. And the response to that, instead of saying, huh, 70% of the people in Massachusetts jails are there pretrial, and a lot are there because they're poor, maybe we should figure out an entirely different way of thinking about this problem, one which would start before the prosecution, before the arrest even: are you poor, and is that why you're stealing something? Do you have a drug problem, and is that why you've been arrested for selling drugs? Maybe we should deal with that problem instead of investing more resources in the criminal punishment system. We don't really want to do that as a society; those are questions we're frankly loath to address. And so instead, policymakers, like you were saying, Adam, go to this really number-twelve-level conversation of, well, maybe we should introduce a tool.

And I think the danger there: people used to ask me, a couple of years ago, when predictive policing was a really hot topic in the press because it was starting to bubble up in some cities, what do you think at the ACLU about predictive policing? And I would always say things that I think really irritated the journalists, because I kind of refused to answer the question. I would say, I don't care; we shouldn't be doing this. We should stop giving the police resources when other departments and entities in government really need them. A great example of this is some really interesting work that some of my colleagues are doing in New Jersey. Just think about it this way: if there's a health crisis, or a crime, or a mental health issue. This happened in Boston a few years ago. There was a young black man who was really having a mental health breakdown. He was sitting on his front stoop in the South End and wouldn't get up, and his mother didn't really know what to do. Who could she call? There's no one to call besides the police, right? There's no one in our society who will show up, who's paid by the government, other than the cops. And she said on the phone to 911, please don't send the police; he has a negative reaction to people in uniform, and he's not going to react well if the police come. And guess what happened: the police came, and they killed him, because he had a negative reaction, just as his mother warned them he would. But we have no other way of dealing with that. The police are our response, as a society, to everything that goes wrong, fundamentally. So my colleagues in New Jersey are looking at a paradigm shift that I think is
really exciting, and it kind of tracks with the abolition movement, which is to create, basically, a municipal response team composed of mental health workers and case workers and people who can help people in crisis, when handcuffs and chains are really not what anybody needs: not what the person in crisis needs, and certainly not what our society needs either.

We're out of time, but I want to end by pushing it back to the class. Harvard has had more Supreme Court justices come out of your place; the inventor of libertarianism came out of MIT. We have people who come out of our institutions who do reframe things and reframe questions, like: should we redesign the criminal justice system? Should we retrain prosecutors differently? And, you know, Harvard still thinks about money, but it's part of their job, at least for some of them, to ask those really hard questions, these sort of first-principles things. So I think it's the duty of students like you, who care about this, who are equipped with the credentials and the access, to speak up and think these thoughts. And I think that's the point of this class: to understand the mechanics so that you understand all the second-order effects, but also to go up to the first principles. And we have people who are also connected to the ground. To be honest, with the Donella Meadows layers, I think all of those layers have to happen and they all have to be coordinated; you can't just do one, but you have to also be able to go all the way up to the top, and I think that's kind of our job.

I think it just goes to the graphic you showed earlier of all the founding fathers. This is where we are always in tension, in conflict: do we just blow this thing up and build another system, which is what abolition is trying to do right now, and then have to recognize that it may not look much different than it does now, given the people who are making these decisions? So how do you get them to a tipping point, not exactly abolition, but opening their minds to the idea that there's something so much greater, that can have better benefits for you, and you, and it doesn't mean that you're doing a bad thing. Thank you, guys. [Applause]
