Episode 6

October 27, 2023

00:35:55

Owlcast 69 - Student Edition - AI & Machine Learning w/Michalis Bletsas

Show Notes

Today's episode is a student edition Owlcast on AI and Machine Learning. Academy student Adrianos Botsios discusses artificial intelligence and machine learning with a world expert and pioneer of the field, Michalis Bletsas of the MIT Media Lab. A truly remarkable conversation that attempts to unbundle some common and not-so-common perceptions about AI and its role in our lives.

Here's what they discuss:

  • Machine Learning and the limitations of Transferability of Knowledge
  • How far are we from Artificial General Intelligence?
  • The challenge of setting intelligent goals about artificial intelligence
  • Will Homo sapiens be followed by higher intelligence beings?
  • The role of the MIT media lab in developing new technologies
  • Democratising access to technology

...And much more


Episode Transcript

[00:00:10] Speaker A: This is the Owlcast, the official podcast of ACS Athens. This is the student edition. Listen to the exciting story of the American Community Schools of Athens. Check out what drives all the members of our international community of learners as we create the education of the future. Here's John Papadakis.

[00:00:49] Speaker B: Welcome to the ACS Athens Owlcast. In today's AI student edition, Academy student Adrianos Botsios discusses artificial intelligence and machine learning with a world-renowned expert and pioneer of the field, Michalis Bletsas of the MIT Media Lab. A truly remarkable conversation follows that attempts to unbundle some common and not so common perceptions about AI and its role in our lives. Let's listen as they discuss machine learning and the limitations of transferability of knowledge, how far we are from artificial general intelligence, the challenge of setting intelligent goals about artificial intelligence, whether Homo sapiens will be followed by a higher intelligence, the role of the MIT Media Lab in developing new technologies, democratizing access to technology, and much more.

[00:01:55] Speaker C: Hello, everyone, and welcome to the Artificial Intelligence Lab podcast. We are very fortunate to have a guest speaker today, a Greek research scientist and director of computing at the MIT Media Lab, Mr. Michalis Bletsas, joining us online from Cambridge, Massachusetts. Mr. Bletsas, thank you for being here. It's an honor. Mr. Michalis Bletsas was born in Chania, Crete, in Greece. He studied electrical engineering at the Aristotle University of Thessaloniki, and later went to Boston University, where he earned a master's degree in computer engineering. Since January 1996, he has been a research scientist and director of computing at the MIT Media Lab. Now, Mr. Bletsas, we both share common goals, as we want to make the world a better place through the use of technology. I know that MIT is doing that by preparing its graduates to learn the knowledge they require in order to bring about change in the world. Let's begin this interview with a general question. Can you give us a definition of artificial intelligence? And what is AI for you?

[00:02:59] Speaker D: Well, AI is the pursuit of intelligent machines. And it's a bit of a circular definition, because that implies that we know what intelligence is. We don't have a really good idea about what intelligence is. We have started to scratch the surface. This is probably the ultimate human endeavor: decoding our own mind. And because artificial intelligence started as trying to replicate human intelligence mechanically, and quickly hit a lot of obstacles, to put it mildly, the definition changed a little bit to actually pursuing any kind of intelligence, even a non-human one. And a very good example of that is what we have today with machine learning, which has absolutely nothing to do with human intelligence.

[00:03:57] Speaker E: Machine learning, for those that don't know, is the ability of an algorithm to learn from data, right?

[00:04:02] Speaker D: Yes. It's based on discovering patterns in large data volumes and learning from that. Although it seems similar to how we learn in the early stages, if you think about it, it's completely different. If you want to teach a kid to recognize cats, you only have to show them one, two, three images of a cat. And actually, most kids will figure out that lions are big cats.
If you want to train a machine learning model, as we call them, to recognize cats, you have to show it thousands and thousands and thousands of images of cats. So there's your proof that it is completely different from human intelligence. And by the way, if you train a model to recognize cats, it becomes an expert in recognizing cats, and it cannot deal with dogs very easily. So that's another great difference: knowledge is not easily transferable.

[00:05:00] Speaker E: A lot of data is required. We are currently in the Fourth Industrial Revolution, and the leading technology is artificial intelligence. Every day there are breakthroughs in many different fields where AI is applied. But what are your thoughts on the recent developments in AI? Have we reached artificial general intelligence?

[00:05:20] Speaker D: We are very far away from that, actually. We are getting farther and farther from artificial general intelligence if we continue down that path. This is pattern recognition. This is statistical computing. This is computational statistics. This is not human intelligence. By the way, when it comes to artificial intelligence, we tend to set up goals for it. And every time we achieve those goals, suddenly we realize that maybe what we did wasn't that intelligent after all. So what we have achieved recently with ChatGPT and all these large language models is that we have passed the Turing test. We have made a machine that creates human-like prose, which to many people is indistinguishable from something that a human might say. But speaking is not the same as thinking. We are very far away from machines that think.

[00:06:18] Speaker E: Okay, so artificial intelligence is mimicking, as you said, human intelligence.

[00:06:24] Speaker D: Well, that was the goal. That was the original goal. But in order to mimic something, you have to understand it, and I don't think we do, although we have made great, great strides. If you look up my first public interview in Greece, one of the statements that I made, which I still believe in after a few decades, and I try to pretend that I'm younger than I am, was that human development is not going to stop with Homo sapiens, and we are going to be followed by higher forms of intelligence. How these are going to evolve is not completely clear yet, but it certainly seems that they are going to be aided by non-biological components. Is this AGI? Is this artificial general intelligence? Not yet. Not quite. We don't know. Again, in order to mimic something, you have to understand it. And the more we achieve with artificial intelligence, it has that interesting quality, the more we understand how much we don't know about it. So every time we reach a peak, we see that the ultimate goal is even farther away. It seems like we are striving to score a goal and the goal post is being moved further away every time we think we reach a milepost.

[00:07:55] Speaker E: Yeah, that's very true. Because we don't even know how our brains work, right? How our intelligence works.

[00:08:02] Speaker D: Well, in the past few decades we have learned a lot more about how our brains work than we learned in the past couple of centuries. So the way we accumulate knowledge and understanding about how our brains work has been accelerating.
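(A brief aside, not from the interview: the exchange above makes two concrete claims, that a statistical model needs orders of magnitude more labelled examples than a child, and that what a model learns about cats does not carry over to dogs. The short Python sketch below illustrates both with synthetic feature vectors standing in for images; every dataset, label and number in it is an illustrative assumption, not anything used at the Media Lab.)

```python
# Illustrative sketch only: synthetic data stands in for cat / not-cat images.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic two-class dataset playing the role of "cat" vs. "not cat" photos.
X, y = make_classification(n_samples=20_000, n_features=50, n_informative=10,
                           flip_y=0.05, random_state=0)
labels = np.array(["not_cat", "cat"])[y]
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25,
                                                    random_state=0)

# Point 1: the model keeps improving as it sees more labelled examples,
# far beyond the "two or three images" a child needs.
for n in (50, 500, 10_000):
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>6} examples -> test accuracy {clf.score(X_test, y_test):.2f}")

# Point 2: the classifier's whole world is the labels it was trained on.
# Asked about a dog, it can only ever answer "cat" or "not_cat"; dealing with
# dogs means gathering new labelled data and retraining (or fine-tuning).
print("known classes:", clf.classes_)
```

On typical runs the printed accuracy climbs as the number of labelled examples grows, and the final line shows that the model's output vocabulary is fixed at training time, which is the non-transferability point made above.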
And that's why I believe that we will eventually get there, but certainly not by following the path of large language models. Not that it isn't huge progress, but feeding more and more data into the machine is not the way to achieve true intelligence. You can create great music by training Beethovens, by educating Beethovens, or you can put a sufficiently large number of monkeys to jump on pianos and select the best possible outcome. I think you will need a very large number of monkeys, probably more than Earth can sustain, to achieve your goal. But you can prove mathematically that if you put a large enough number of monkeys, you can probably make great music.

[00:09:17] Speaker E: Yeah, but you need a large amount of data. Where are you going to get it? How are you going to sustain it?

[00:09:23] Speaker D: That's another reason why I don't believe that the existing path will lead us to true AGI with the existing computing architectures. The power requirements, the energy requirements, are humongous.

[00:09:37] Speaker E: Yeah, that's extremely true. Now I want to move on to what you do, what MIT does, so the audience can also get a better understanding. What does the MIT Media Lab do within MIT and how does it impact the world?

[00:09:50] Speaker D: What we do is we work at the interface, if you want, between digital technology and humanity. We try to figure out how all these new technologies are actually going to impact our daily lives. So we started by dealing with how digital technology affected the traditional media back in the 80s. We had to deal with cassette tapes and film, actual film, developing film and things like that. So that's how the name stuck. But then we expanded to all aspects of digital technology and how it intersects with human life. So essentially what the Media Lab does is it tells stories about how technology is going to affect our lives. And we tell those stories mostly by creating demos. We create elaborate technological setups that essentially give you an idea of how technology is going to affect our lives. I'm going to give you a simple example, and you might recognize something like that. Back in the 80s, we put a camera on top of a car and we went around the city of Aspen and we geotagged every frame on that film. Then we took a computer and we made the map of the city clickable. We put all that video on a laserdisc, another technology that most of you never saw, which had the nice property that you could very quickly go to a specific part of the video. So we hooked every point on the map to specific frames of the video, and you could click on the map and see what was there, what the car would see. I don't know if that reminds you of something. We did that in the late 80s. And actually a lot of people, even the New York Times, called us charlatans: look at what these guys are doing, I mean, they are playing around. But we liked it. By the way, Steve Jobs saw that demo when he visited the Media Lab, and six months later QuickTime 1.0 showed up on the Mac, which was the first framework to deal with digital video on a consumer device. So yeah, this is what we do around here. What do I do? I make sure that all the technology that people are using to do their work is working, is up to date, that they have all the tools that they need, that we don't pay a ton of money getting it, that we don't pay humongous amounts of money maintaining it, et cetera, et cetera.
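(Another brief aside, not from the interview, referring back to the monkeys-at-pianos remark a few exchanges above: a rough calculation shows how large "a large enough number of monkeys" actually is. The 88 keys and 1,000 notes below are arbitrary assumptions.)

```python
# Back-of-the-envelope estimate of brute-force "composition" by random key presses.
import math

keys = 88     # keys on a piano a monkey might hit (assumption)
notes = 1000  # length of a modest piece, in notes (assumption)

# Expected number of attempts to reproduce one specific 1,000-note sequence by chance.
log10_attempts = notes * math.log10(keys)
print(f"about 10^{log10_attempts:.0f} attempts")  # roughly 10^1944

# For scale: the Earth contains on the order of 10^50 atoms, so "more monkeys
# than Earth can sustain" is a considerable understatement.
```

The number dwarfs any physical resource, which is the point of the aside: brute-force search with no learned structure does not scale, however much data or hardware you throw at it.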
It works all the time, because unlike demos, which have to work a few times, the underlying technology has to work all the time. So that's what I do. I also go out and build things. I build networks, I build laptops, I build computing infrastructures. It's a nice place to be.

[00:13:04] Speaker E: About your vision as the director of computing: is there anything specific you're planning for the next one year, five years, ten years to come, specific or broad?

[00:13:16] Speaker D: Specifically, unfortunately, I have to follow the trend, and I'm working a lot on providing low-cost machine learning infrastructure for people, because machine learning has become like the Sriracha of science: every paper becomes more interesting if you add machine learning to the title. So everybody around here is dealing with machine learning. What I'm doing short term is working on machine learning infrastructure, like everybody else who's in my position right now: dealing with increased volumes of data by throwing away every spinning disk drive that we have in the lab, and believe me, we have several thousand, and replacing them with solid-state memory; trying to figure out how to get more GPUs cheaper; and making sure that we have the ability to visualize all these large data volumes effectively and in a user-friendly manner. More long term, I find myself working more and more on policy issues, because you can't separate the people who create technology from the people who actually figure out how to use it and prevent the abuses of it.

[00:14:35] Speaker E: Abuses of it.

[00:14:37] Speaker D: It's not enough to create something anymore. You have to figure out how it fits in an increasingly complex world, because things very often have unintended consequences. I spent my early years at the Media Lab going around and working on getting the Internet to places that nobody thought should get it, in very poor areas, making sure that the Internet was accessible to a lot of disadvantaged kids, kids that had very minimal resources at hand, kids that often didn't even have electricity where they lived.

[00:15:16] Speaker E: So you also talked a little bit about providing access. You were the Chief Connectivity Officer and Vice President of Advanced Technology of the One Laptop Per Child program, and you were a designer of the XO laptop. A project that is of much importance to our world, because as much as there is bad use, wasting time, people need access to technology to be able to adapt to a world like ours.

[00:15:39] Speaker D: I became what I became mostly because my parents made sure that I had access to technology. So I want to make sure that other kids at that age have this advantage, taking a longer-term picture there. I think that the Internet is one of the greatest inventions of the human mind. But I also think that, like every other powerful technology, it has uses that are extremely nice and uses that are not so nice. And when it comes to a project of that scale, we have to make sure that we maximize the former and minimize the latter. Now, when it comes to One Laptop Per Child, one of the co-founders of the Media Lab was Seymour Papert, who back in the 60s was advocating that every child should have their own computer. And that was in the 60s, when MIT had two or three computers, and each one of them cost several million dollars. But the way that Seymour explained it, and it's very easy to get, is: look, he said, if pencils were expensive, we would make a pencil lab in the schools.
We would take the kids there a few times a week, and then we would write big essays on how pencils are not that useful in education. The same was true with laptops. They were extremely expensive when we started. When we were done with the project, they were almost an order of magnitude cheaper and available to a lot more people. Essentially, we needed to do that project in order to make sure that everything else that we had developed at the Media Lab, all the educational technologies, all the educational methodologies, Lego Mindstorms, for example, which is another of the Media Lab's inventions, or programming languages like Scratch, which can introduce kids to structured problem solving very early in age, all of these things were accessible to even the most disadvantaged kids in the world, and the price of the laptop was never a problem. That's why we did what we did. We were not planning to go into the laptop business when we did that. That's why One Laptop Per Child was a nonprofit. We sold about 3 million of them, 3 million overall. And we didn't make a dime. We actually lost $20 million of our sponsors' money, which went into the engineering and the design, our salaries essentially, while we were working on the project.

[00:18:07] Speaker E: The importance is that 3 million people got access to technology, got access to a computer that they would never have had.

[00:18:12] Speaker D: Not so much. 3 million people is nothing. The issue was that a lot more people got access to technology, because people realized, A, that you could make a laptop for $160 and you didn't need $1,000, and B, that every child needs a computer, because the computer is the creativity tool, is the access to knowledge. You don't need to have all that knowledge in your head. You don't need to memorize. Education is not about memorization anymore. Education is about being able to ask the right question. And if you have that tool, then you can find the answer.

[00:19:00] Speaker A: You are listening to the Owlcast, the official podcast of ACS Athens. This is the student edition.

[00:19:16] Speaker B: You're listening to the student edition of the Owlcast on artificial intelligence with Adrianos Botsios. Stay tuned as he talks with the MIT Media Lab's Michalis Bletsas about the quantum computing hype, the three factors behind the explosion of machine learning, the need for AI regulation, the AI threat felt by schools and the fallacy of memorization exams, the importance of teaching a robot statistics, linear algebra, philosophy and ethics, and AI robotics as a catalyst for the integration of humanities and sciences in education.

[00:20:05] Speaker E: We passed a little bit quickly by artificial general intelligence before, but we said that data is not the only factor that will get us to artificial general intelligence. AI has existed since 1951, but only recently, in the last 10-15 years, has it had an exponential increase. One of the factors is data, of course, but one big contributor is also powerful computing, right? You're able to create and train many advanced models, specifically in machine learning. But with the development of quantum computing, do you believe that will have a huge impact on artificial intelligence models, and possibly even give us the key to achieving artificial general intelligence?

[00:20:50] Speaker D: No. Quantum computing will also be a coprocessor. It's very good at solving certain problems.
It's very good at looking at problems that are, right now, NP-complete, very hard problems with extreme combinatorial explosion. But it is not going to replace traditional computers; it is going to be a coprocessor for traditional computing down the road. I think there is a lot of hype around quantum computing. It certainly will have its role, but I don't think it's going to take over, and I don't think it's something that will lead us to artificial intelligence. It's something that can deal with this class of problems much more efficiently than traditional computer architectures. Machine learning, on the other hand, is something that we had to invent at some point. We are producing more and more data every day, and that's the only effective way to look through it, because yes, there are gems hidden inside that data and it's a good thing to find them. But it's only a first step. I mean, if we find something that is very important, we should actually dig deeper in there and look into more analytical methods to get to the final solution. But if data is the new oil, there was a famous cover in The Economist years back, machine learning is the refinery, the refinery required to extract value.

[00:22:28] Speaker E: Do you believe we will ever reach artificial general intelligence? And if so, is it in the near future?

[00:22:34] Speaker D: I think it's inevitable, but it might not be, and it probably won't be, a copy of our intelligence. If you think about the current explosion in machine learning, it only became possible because three factors came together independently of each other. Big data; advances in silicon computing in the form of GPUs, architectures that can do large matrix multiplications very efficiently and which, by the way, were designed for a completely different application, i.e. playing computer games and creating visualizations; and third, this hardware made it possible to run backpropagation very effectively. Backpropagation had been known to us since the late 80s, but it didn't become practical until the previous decade, when all this GPU type of hardware became widely available.

[00:23:37] Speaker E: Well, that is very true. These three are the building blocks of AI.

[00:23:41] Speaker F: And talking about something being publicly available, the European Commission has proposed a regulatory framework on artificial intelligence.

[00:23:48] Speaker E: But what part of AI requires regulation?

[00:23:51] Speaker D: We need these models to be a lot more open. We need to make sure that they are being productively deployed. Right now the financial incentives, the economic incentives, favor automation over creativity. You don't get a tax break in most places for increasing the rewards of your employees, but you do get a tax break for investing in a whole new computer infrastructure to run these big models. So this is a much larger discussion, but it is not something new. We have seen it going on for a very long time. But because it was mostly limited to manual labor, those of us mind workers who thought of ourselves as higher up in the food chain ignored it. Well, now it has come to challenge us, and now we are all thinking about it. But it's the same pattern repeating itself. So we are not going to go back. I mean, certain things that can be automated will be automated eventually, because there is no creativity in there. They will become a lot more efficient. Production will go up. And AI has that advantage because it doesn't require huge capital.
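(One more aside, not from the interview, to ground the three factors just mentioned: both the forward pass and backpropagation in a neural network reduce to matrix multiplications, which is exactly the operation GPUs were designed to do quickly for graphics. The sketch below is a toy NumPy version; all sizes and data are made-up stand-ins.)

```python
# Toy two-layer network trained with hand-written backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data: 1,024 samples, 64 features each.
X = rng.normal(size=(1024, 64))
true_W = rng.normal(size=(64, 1))
y = X @ true_W + 0.1 * rng.normal(size=(1024, 1))

# One hidden layer; W1 and W2 are the weight matrices being learned.
W1 = rng.normal(scale=0.1, size=(64, 128))
W2 = rng.normal(scale=0.1, size=(128, 1))
lr = 1e-2

for step in range(201):
    # Forward pass: two matrix multiplications plus a ReLU nonlinearity.
    h = np.maximum(X @ W1, 0.0)              # (1024, 128)
    pred = h @ W2                            # (1024, 1)
    loss = np.mean((pred - y) ** 2)
    if step % 50 == 0:
        print(f"step {step:3d}  mean squared error {loss:.3f}")

    # Backpropagation: the chain rule, written out as more matrix multiplications.
    grad_pred = 2.0 * (pred - y) / len(X)    # dLoss/dpred
    grad_W2 = h.T @ grad_pred                # dLoss/dW2
    grad_h = grad_pred @ W2.T                # dLoss/dh
    grad_h[h <= 0.0] = 0.0                   # gradient through the ReLU
    grad_W1 = X.T @ grad_h                   # dLoss/dW1

    # Plain gradient-descent update.
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2
```

Swap NumPy for a GPU-backed array library and the same matrix products are what get dispatched to the hardware; that match between the arithmetic and the silicon, together with big datasets and backpropagation, is the combination described above.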
Previous forms of automation required big capital outlays. You had to spend money to take advantage of the increased productivity. These tools, these information technology tools, are more accessible to more people, so you can see increases in productivity much, much quicker. But again, who is going to benefit from them is not a given. We have to make sure that the benefits are more evenly spread, and that's an opportunity we shouldn't miss with the legislation.

[00:25:50] Speaker E: It's not just about the use of AI or which companies control it; it's also about where it's applied in a broad scenario, for example, something like an algorithm in social media, or an algorithm that decides which people get taxed or which people get accepted through a job application. They classify these from high risk to low risk, and they name each one, and each application has to go through a certain committee that will assess it and make sure that it doesn't pose any risk to people or take away opportunities from people, the bias, as they say.

[00:26:24] Speaker D: Look, you are in a school right now. Schools felt very threatened by ChatGPT, and a lot of them rushed to ban it completely. I think it's the equivalent of burying your head in the sand. If your students can pass their exams by using ChatGPT, it probably means that your exams are designed to test for memorization, and to limit the load on the people who grade them, rather than to actually test the important skills that students should have. All these powerful tools require more skills, or higher-level skills, from everybody who uses them. That's the thing about them: asking the right question. And it creates new needs. It definitely raises the bar for everybody, and we should really look very carefully and start a constructive, iterative effort to incorporate them into all of our activities, because if we don't do that, we will never find the right solution and instead we will gravitate towards the easy solutions.

[00:27:38] Speaker E: We know AI has numerous different applications, right? Because there's always a good and a bad side to everything. A very important application is also in the health sector. So how will, and how has, artificial intelligence helped the health and well-being of humans and citizens?

[00:27:57] Speaker D: It's probably one of the most beneficial applications of it. First of all, in diagnostics, it helps doctors make a lot more accurate diagnoses. It helps with drug design; it helps select potential substances from large databases. The prime example is what happened with AlphaFold, which allowed us to estimate the 3D structure of proteins, which is the main determinant of their functionality, and essentially provided a very steep acceleration in a field that people had been struggling with for decades. I mean, one of the first problems that I dealt with as a graduate student was looking over the shoulders of other people who were trying to figure out how proteins actually look in 3D space using all sorts of advanced algorithms and crystallography. And I spent years writing crystallographic FFTs to actually see the structure of proteins in the lab. It would sometimes take years of very tedious lab work to find the structure of one protein. And now AlphaFold has estimated, with a high degree of certainty, thousands and thousands of them in just a few months. So there are tons of useful things.
There is a lot of knowledge hidden out there that can essentially be refined and brought to light by using these kinds of technologies, and we can be a lot more creative.

[00:29:38] Speaker E: So saving time can save lives and can bring about change. Well, now let's change the topic to a wonderful movie. Do you know Bicentennial Man? Have you heard of the movie? In this movie, a robot, played by Robin Williams, tries to become human. People know that he's not human, but still get emotionally attached to him. Now, from a different perspective, do you believe this predicts something in the future, where humans get attached to machines?

[00:30:06] Speaker D: Well, we tend to see everything that is not human as inferior and as not deserving the same rights as we have. And we have treated our planet's cohabitants in very bad ways. Maybe we should take some cues from that, because the next form of intelligence will definitely be partially guided by the principles that we instill in it. And if the principle that we instill in it is that you should enslave all lower forms of intelligence, that might not end up very well for us. So, as Marvin Minsky used to say, if we are lucky, they will keep us around as pets, so we should make sure that they will treat us well. And we should try to be nice to lower forms of intelligence right now, mostly starting with the animals on our planet, and then figure out what to do with robots. Actually, we have a crazy lady here at the lab who writes books about the rights of robots. I think she's way out there for the moment, but it's certainly something to think about, because the next form of intelligence, the next form of artificial intelligence, will certainly have human components.

[00:31:35] Speaker F: Well, as I say, let's wait and see how it turns out. Now, moving on to the last question. In order to ensure that people are ready to adapt to such a rapidly changing world, it is vital to introduce artificial intelligence to students from a young age. I inspired the creation of the Artificial Intelligence Lab at ACS, and I am leading it, and we are doing exactly that: students are able to adapt to our world by gaining hands-on experience from a young age. The leading project, which I started in 2020, is Nikki, a 1.8-meter humanoid robot that uses artificial intelligence in the form of computer vision, speech recognition and natural language processing, and also uses the Internet of Things, robotics and 3D printing, as she is fully 3D printed. An introductory video to the AI Lab can be seen below as a link. We presented at the Spring Plenary Session of the Council of Europe our work on creating an artificial intelligence lab that includes AI and ethics, ethics that make someone a conscious citizen, something important that our incredible president at ACS Athens, Dr. Peggy Pelonis, emphasizes. The presentation not only marked a significant achievement, but also kindled the inspiration for other educational institutions to follow us.

[00:32:47] Speaker E: What are your views on this and...

[00:32:48] Speaker F: ...what recommendations do you have for us?

[00:32:52] Speaker D: Look, I think you are part of a privileged community, and I applaud you for what you do. The fact that you are pushing the boundaries doesn't mean that you have to be correct the first time, but you should definitely try to learn from that experience.
I think it's very important for people to understand this technology, because you have to take the magic out of it, if you want. So yes, Nikki is good, but it's also very good to get introduced to statistics and linear algebra pretty early, and to do that in conjunction with ethics and philosophy, because right now we have machine learning, which is a very bottom-up approach to intelligence, and then you have all these higher forms of symbolic processing, which is what you use to create subjects like philosophy and ethics. And in Greece we do something really bad: we separate what we call the humanities from the sciences very early on. So artificial intelligence gives us the opportunity to bring these two together again and to see them as an integrated curriculum. It's good that you guys have this opportunity, and you can act as sort of guinea pigs for the rest of the educational system. It's a good thing. I mean, you are very privileged. You are part of a very good school. I have firsthand experience, because a lot of my good friends' kids went to ACS. You were very privileged before as well; even before Dr. Pelonis, you had another very enlightened person as president, Dr. Gialamas. So you are part of a very privileged community, and you have to, you know, give back early on, and that's a very good lesson also, I think. And let's see where it takes you. But this is what you should be doing. I wish other schools in your position were doing the same.

[00:35:04] Speaker E: Thank you. It's truly an honor. And this just boosts us even more to do more work, because we are basically, as we would like to say, serving as pioneers in this new educational shift. It's truly important for us to have your feedback, Mr. Bletsas. And with this, I would like to end it. Thank you very much for coming here. It has been an honor, and we are looking forward to the future. Thank you very much.

[00:35:35] Speaker A: You are listening to the Owlcast, the official podcast of ACS Athens. Make sure you subscribe to the Owlcast on Google Podcasts, Spotify and Apple Podcasts. This has been a production of the ACS Athens Media Studio.
