The minds behind artificial intelligence: problems and dilemmas

The Dartmouth conference, held in 1956, is considered the birthplace of artificial intelligence. There, some of those who would go on to become the greatest computer scientists and mathematicians of all time met for eight weeks at the Ivy League university of the same name and laid the foundations that have brought this field, conceptually and materially, to where it is today.

Fifty years later, the members of that working group met in the same place to celebrate their anniversary. But their vision of what artificial intelligence could mean for everyone had changed.

Marvin Minsky, Turing Award winner, co-founder of the MIT Artificial Intelligence Laboratory and one of those founding fathers of AI, had gone over those 50 years from building what is considered the first neural network machine, SNARC, to disowning neural networks altogether.

When an anniversary attendee asked him whether he “was the devil” for fighting what seemed to be the most powerful mechanism AIs had for getting better and better, Minsky replied: “Yes, I am.”

The dilemmas of AI, always present among its main promoters

Minsky passed away 10 years after that response, in 2016, at the age of 88. Throughout his brilliant career he never shed a certain air of controversy, but it seemed counterintuitive that someone who had laid the foundations of AI would disavow neural networks. Perhaps because he had glimpsed their potential?

A neural network is a method for building and improving an artificial intelligence that teaches machines to process data in a way inspired by the human brain. It enables what is called deep learning, which uses nodes, connected like neurons, arranged in a layered structure. The result is an adaptive system that computers use to learn from their mistakes and continually improve.
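To make the idea of nodes connected in layers concrete, here is a minimal sketch in Python; the three-input, four-node layout, the use of NumPy and the sigmoid activation are illustrative choices for this example, not details taken from any particular system.

```python
import numpy as np

# A tiny feed-forward network: 3 inputs -> 4 hidden nodes -> 1 output.
# Each layer is a set of nodes connected to the previous layer by weights.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # weights from the input layer to the hidden layer
W2 = rng.normal(size=(4, 1))   # weights from the hidden layer to the output layer

def sigmoid(z):
    """Squash each node's value into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    """Pass an input through the layers, one layer at a time."""
    hidden = sigmoid(x @ W1)       # activations of the hidden layer
    output = sigmoid(hidden @ W2)  # the network's final answer
    return output

print(forward(np.array([0.2, 0.7, 0.1])))  # an arbitrary example input
```

Stacking more of these layers, and adjusting the weights from experience, is what turns such a structure into the adaptive, self-improving system described above.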

It is the method on which DALL-E 2 and ChatGPT are based, and it also seems to be what scared Minsky, who, however, has not been the only great mind within AI to express doubts.

Minsky’s work on artificial intelligence in the 1950s and 1960s was instrumental in the advancement of symbolic AI. He went on to found the MIT Artificial Intelligence Laboratory with John McCarthy in the early 1960s. Why, then, this aversion to neural networks?

To understand that, you have to go back to those beginnings of AI.

The Perceptron vs. Minsky: The first ‘War’ for AI happened before it was born

In July 1958, the United States Office of Naval Research demonstrated the Perceptron: an IBM 704 was fed a series of punched cards and, after 50 trials, the 5-ton computer learned to distinguish cards marked on the left from those marked on the right.

“Stories about creating machines with human qualities have long been a fascinating field in science fiction. However, we are about to witness the birth of such a machine: a machine capable of perceiving, recognizing and identifying its environment without any human training or control,” said Frank Rosenblatt, its creator.
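The article does not reproduce Rosenblatt’s program, but the learning rule behind the Perceptron is simple enough to sketch. The following Python toy, with invented two-number “cards” and left/right labels, assumes the classic perceptron update rule rather than anything specific to the IBM 704 demonstration.

```python
import numpy as np

# Toy data: each "card" is described by two made-up features, and the label
# says whether its mark is on the left (-1) or on the right (+1).
cards = np.array([[-1.0, 0.2], [-0.8, -0.1], [0.9, 0.3], [0.7, -0.4]])
labels = np.array([-1, -1, 1, 1])

w = np.zeros(2)   # one weight per feature
b = 0.0           # bias term

# Classic perceptron rule: whenever a card is misclassified,
# nudge the weights toward the correct side.
for trial in range(50):
    for x, y in zip(cards, labels):
        if y * (np.dot(w, x) + b) <= 0:   # wrong (or undecided) answer
            w += y * x
            b += y

# After training, the predictions should match the labels.
print([int(np.sign(np.dot(w, x) + b)) for x in cards])
```

After enough passes, the weights settle on a boundary that separates the two kinds of cards, which is essentially what the 1958 demonstration showed on a much larger machine.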

The Perceptron had the potential to launch a thousand early neural networks, but it hit a snag. You guessed it: Marvin Minsky.

Minsky questioned the usefulness of the Perceptron. He claimed that neural networks could not handle anything beyond what Rosenblatt had shown, and he lashed out at Rosenblatt every chance he got while they were both alive.

Minsky and his colleague Seymour Papert set out to dismantle Rosenblatt’s ideas in their 1969 book, Perceptrons: An Introduction to Computational Geometry. It worked: research on the Perceptron and on neural networks stalled for several years… until new researchers became interested in them.

In a way, Minsky always seemed to refuse to accept that what he had helped create could ever compete with the human mind, even though he surely saw it getting closer and closer. In part, that was because his view of the human brain was very similar to his view of those machines. “It is demeaning or insulting to say that someone is a good person or that they have a soul. Each person has built this incredibly complex structure, and if you attribute it to a magical pearl in the middle of an oyster that does it for you, you are trivializing that person and preventing yourself from thinking about what is really happening,” Minsky said years later in his book The Emotion Machine.

But let’s go back to those new researchers we mentioned. In the mid-1980s, Geoff Hinton, then a young professor at Carnegie Mellon University, and his team built a complex network of artificial neurons that addressed some of the concerns Minsky had raised: they added extra layers that allowed networks to learn more complicated functions. Even so, the work had no lasting impact at the time, and neural networks fell out of favor in the late 1990s. In 2006, Hinton built on earlier work by the researcher Yann LeCun to introduce a new technique called deep learning.

Geoff Hinton, younger than Minsky and still alive, was dubbed the Father of Deep Learning for it. His persistence laid the foundations of the algorithmic systems on which a good part of the technologies used by today’s big technology companies, from Google to Meta, have been built. Neural networks had returned, stronger than ever.

And yet this new father of the second generation of AIs would also come to disown them.

Like Minsky, the fathers of deep learning are also raising doubts about today’s AIs

Hinton, having pushed AI along this path, is nevertheless wary of the direction it is taking. Specifically, Hinton believes that the method called backpropagation, on which most of today’s most powerful software is based, is not the way to reach truly advanced and useful AI.

In backpropagation, labeled data, a photo and its description, for example, are fed through layers of artificial neurons that loosely mimic how the brain works. The connections between those layers are adjusted, layer by layer, until the network can perform an intelligent function with as few errors as possible.
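As a rough illustration of what “adjusted, layer by layer” means, here is a minimal backpropagation sketch in Python; the tiny two-layer network, the invented labeled data and the learning rate are assumptions made for this example, not a description of any production system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Labeled examples: inputs X and the labels y the network should reproduce.
X = rng.normal(size=(8, 3))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)

W1 = rng.normal(size=(3, 5))   # input layer -> hidden layer
W2 = rng.normal(size=(5, 1))   # hidden layer -> output layer
lr = 0.5                       # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass: compute the network's guess, layer by layer.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Error between the guess and the label.
    error = out - y

    # Backward pass: push the error back and adjust each layer's weights.
    grad_out = error * out * (1 - out)        # gradient at the output layer
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # gradient propagated back to the hidden layer
    W2 -= lr * h.T @ grad_out
    W1 -= lr * X.T @ grad_h

print(np.round(out.ravel(), 2))  # predictions after training
print(y.ravel())                 # the labels they should approach
```

Every pass measures the error against the labels at the output and propagates a correction back through the layers, which is exactly the dependence on labeled data that Hinton questions below.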

But Hinton has suggested that, to reach the point where neural networks can become intelligent on their own, “you’ll have to get rid of backpropagation. I don’t think that’s how the brain works. Clearly we don’t need all the data labelled,” he said in an interview with Axios, which leads him to believe that there will come a time when you have to almost “start over with all the AIs”.

His is not the only name from that second batch of AI promoters to have since raised doubts, both technological and ethical. Hinton received the 2018 Turing Award for this progress along with the aforementioned Yann LeCun and Yoshua Bengio, both born in France. The latter, although his research has been used by companies such as Google and Facebook, is the only one of the trio to have spent his entire career in academia, where he has voiced his misgivings about possible future misuse of this technology.

At the end of 2017, Bengio presented the so-called Montreal Declaration (named after the university where he holds a chair): a set of ethical guidelines for AI that warns of the problems its abuse could lead to.

In an interview with Nature, he warned: “Much of what is most concerning about AI does not happen in broad daylight. It is happening in military laboratories, in security organizations, in private companies that provide services to governments or the police.”

Bengio’s Montreal Declaration is today recognized as the first and strongest proposal for an agreement to keep AI from being used to attack privacy, deepen inequality or enable violence against other states or organizations. But despite having gathered the support of hundreds of companies and universities, none of the big technology companies yet appears on its list of signatories.

