Welcome to I Was There, a new oral history project from the podcast Machines We Trust. It features stories of how breakthroughs in artificial intelligence and computing happened, as told by the people who witnessed them. In this first episode, we meet Joseph Atick, who helped create the first commercially viable facial recognition system.
Credits:
This episode was produced by Jennifer Strong, Anthony Green and Emma Cillekens, with help from Lindsay Muscato. It was edited by Michael Reilly and Mat Honan, and mixed by Garret Lang, with sound design and music by Jacob Gorski.
Full transcript:
[TR ID]
Jennifer: I’m Jennifer Strong, host of Machines We Trust.
I want to tell you about something we’ve been working on behind the scenes for a while.
It’s called I Was There.
It’s an oral history project featuring stories about how breakthroughs in artificial intelligence and computing came about.
Joseph Atick: And as I stepped into the room, it spotted my face, pulled it out of the background and said, “I see Joseph,” and that was the moment where the hair on the back of my neck… I felt like something had happened. We were witnesses.
Jennifer: We start with a man who helped create the first commercially viable facial recognition system… in the ’90s…
[IMWT ID]
I am Joseph Atick. Today, I’m the chairman of ID for Africa, a humanitarian organization that focuses on giving people in Africa a digital identity so they can access services and exercise their rights. But I have not always been in the humanitarian field. After I received my PhD in mathematics, my collaborators and I made some fundamental breakthroughs, which led to the first commercially viable facial recognition. That’s why people refer to me as a founding father of facial recognition and the biometrics industry. The algorithm for how a human brain would recognize familiar faces became clear while we were doing research, mathematical research, while I was at the Institute for Advanced Study in Princeton. But it was far from having an idea of how you would implement such a thing.
It was a long period of months of programming and failure, programming and failure. And one night, early in the morning, we had actually just finished a version of the algorithm. We submitted the source code for compilation in order to get a run code. And we stepped out, I stepped out to go to the washroom. And then when I stepped back into the room, the source code had been compiled by the machine and returned. And usually after you compile it, it runs automatically, and as I walked into the room, it spotted a person entering the room, saw my face, pulled it out of the background and pronounced: “I see Joseph.” And that was the moment where the hair on the back of my neck stood up... I felt like something had happened. We were witnesses. And I started calling on the other people who were still in the lab, and each of them would come into the room.
And it would say, “I see Norman. I see Paul. I see Joseph.” And we would take turns running around the room just to see how many it could spot. It was a moment of truth, where I would say several years of work finally led to a breakthrough, even though theoretically, no additional breakthrough was required. Just the fact that we figured out how to implement it and finally saw that capability in action was very, very rewarding and satisfying. We then developed a team that was more of a development team, not a research team, focused on putting all of those capabilities onto a PC platform. And that was the birth, really the birth, of commercial facial recognition, in 1994.
My concern started very quickly. With cameras everywhere, computers becoming commoditized, and machines getting better and better at processing, I saw a future where there would be nowhere to hide. And so in 1998 I lobbied the industry and said, we need to put together principles for responsible use. And for a while I felt good, because I felt we had gotten it right. I felt we had put in place a responsible-use code that would be followed whatever the application. However, that code did not stand the test of time. And the reason is that we did not anticipate the emergence of social media. Basically, when we established the code in 1998, we said the most important element in a facial recognition system is the tagged database of known people. We said that if I am not in the database, the system will be blind.
And it was difficult to build the database. We could build 10,000, 15,000, 20,000 at most, because every image had to be scanned and entered by hand. The world we live in today, we are now in a regime where we have let the beast out of the bag by feeding it billions of faces and helping it by tagging them ourselves. We are now in a world where any hope of controlling, and requiring everybody to be responsible in, their use of facial recognition is difficult. And there is no longer a shortage of known faces on the internet, because you can just scrape them, as has happened with some companies. And so in 2011 I started to panic and wrote an article saying it was time to press the panic button, because the world is heading in a direction where facial recognition is going to be omnipresent and faces are going to be everywhere, available in databases.
And back then, people were saying I was an alarmist, but today they realize that’s exactly what’s happening. So where do we go from here? I lobby for legislation. I lobby for legal frameworks that make it a liability to use someone’s face without their consent. And this is no longer a technological problem. We cannot contain this powerful technology through technological means. There has to be some sort of legal framework. We cannot allow the technology to get too far ahead of us. Ahead of our values, ahead of what we consider acceptable.
The issue of consent continues to be one of the most difficult and challenging matters when it comes to technology. Just giving someone notice is not enough. In my opinion, consent has to be informed. They need to understand the implications of what it means. And not just to say, well, we put up a sign and that was enough, we told people, and if they didn’t want to, they could have gone elsewhere.
I also find that it is very easy to get seduced by flashy technological features that might give us a short-term advantage in our lives. And then down the line, we realize that we have given up something too precious. By that point we have desensitized the population, and we reach a point where we cannot pull back. That is what I’m worried about. I’m worried about the fact of facial recognition spreading through the work of Facebook, Apple and others. I’m not saying all of it is illegitimate. A lot of it is legitimate.
We’ve gotten to a point where the general public may have become blasé and desensitized because they see it everywhere. And maybe in 20 years, when you step out of your home, you will no longer have the expectation that you won’t be recognized by the dozens of people you cross along the way. I think at that point the public will be very alarmed, because the media will start reporting cases where people were stalked. People were targeted, people were even selected based on their net worth in the street and kidnapped. I think that is a lot of responsibility on our hands.
And so I think the issue of consent will continue to haunt the industry. And until that question gets resolved, it may be a question that never gets resolved. I think we need to establish limits on what can be done with this technology.
My career has also taught me that being too far ahead is not a good thing, because facial recognition as we know it today was actually invented in 1994. But most people think it was invented by Facebook and the machine learning algorithms that are now proliferating all over the world. At one point in time, I basically had to step down as a public-company CEO because I was curtailing the use of a technology my company was promoting, out of fear of negative consequences for humanity. So I feel scientists should have the courage to project into the future and see the consequences of their work. I’m not saying they should stop making breakthroughs. No, we should go full force, we should make more breakthroughs, but we should also be honest with ourselves and basically alert the world and policymakers that a breakthrough has pluses and minuses. And therefore, in using this technology, we need some sort of guidance and frameworks to make sure it is channeled toward positive applications and not negative ones.
Jennifer: I Was There… is an oral history project featuring the stories of people who have witnessed or created breakthroughs in artificial intelligence and computing.
Do you have a story to tell? Do you know someone who does? Drop us an email at podcasts@technologyreview.com.
[MIDROLL]
[CREDITS]
Jennifer: This episode was recorded in New York in December 2020 and produced by me with help from Anthony Green and Emma Cillekens. It was edited by Michael Reilly and Mat Honan, and mixed by Garret Lang, with sound design and music by Jacob Gorski.
Thanks for listening, I’m Jennifer Strong.
[TR ID]