
Edward Fredkin, who saw the universe as one big computer, dies at 88

Edward Fredkin, who never graduated from college but became an influential professor of computer science at the Massachusetts Institute of Technology, a pioneer in artificial intelligence, and a maverick scientific theorist who espoused the idea that the entire universe might work like one big computer, died on June 13 in Brookline, Mass. He was 88.

His death, in a hospital, was confirmed by his son, Richard Fredkin.

Driven by a seemingly limitless scientific imagination and an unflinching indifference to conventional thinking, Professor Fredkin moved through a remarkably varied career that at times seemed as mind-bending as the iconoclastic theories that made him an intellectual force in both computer science and physics.

“Ed Fredkin had more thoughts per day than most people have in a month,” Gerald Sussman, a professor of electrical engineering and a longtime collaborator at MIT, said in a phone interview. “Most of them were bad, and he would have agreed with me on that. But some of them were good, too. So he had more good ideas in his lifetime than most people do.”

After serving as a fighter pilot in the Air Force in the early 1950s, Professor Fredkin became a renowned, if unorthodox, scientific thinker. He was a close friend and intellectual companion of the celebrated physicist Richard Feynman and the famed computer scientist Marvin Minsky, a pioneer of artificial intelligence.

An autodidact who dropped out of college after one year, he nevertheless became a full professor of computer science at MIT at age 34. He later taught at Carnegie Mellon University in Pittsburgh and at Boston University.

Not content to confine his energies to the ivory tower, Professor Fredkin founded a company in 1962 that made programmable film readers, which allowed computers to analyze data captured by cameras, such as Air Force radar information.

That company, Information International Incorporated, went public in 1968. With his new fortune, he bought a Caribbean island in the British Virgin Islands, traveling there in his Cessna 206 seaplane. The island lacked potable water, so Professor Fredkin developed a reverse-osmosis system to desalinate seawater, which he turned into another business.

He eventually sold the property, Moskito Island, to the British billionaire Richard Branson for $25 million.

Professor Fredkin’s life was full of paradoxes, so it is only fitting that he is credited with one. Fredkin’s Paradox, as it is known, holds that when someone is deciding between two options, the more similar they are, the more time is spent agonizing over the decision, even though the difference between choosing one or the other may be insignificant. Conversely, when the difference is more significant, less time is likely to be spent on the decision.

As an early researcher in artificial intelligence, Professor Fredkin foreshadowed the current debate about superintelligent machines half a century ago.

“It requires a combination of engineering and science, and we already have the engineering,” Professor Fredkin said in a 1977 interview with The New York Times. “To build a machine that thinks better than humans, we don’t need to understand everything about humans. We still don’t understand wings, but we can fly.”

For a start, he helped pave the way for machines that could checkmate the Bobby Fischers of the world. Professor Fredkin, an early developer of chess-playing systems, created the Fredkin Prize in the 1980s, offering $100,000 to the developer of the first computer program to defeat a world chess champion.

In 1997, a team of IBM programmers did just that, claiming the six-figure prize when their computer, Deep Blue, defeated the world chess champion Garry Kasparov.

“There was never any doubt in my mind that a computer would eventually beat a reigning world chess champion,” Professor Fredkin said at the time. “The question has always been when.”

Edward Fredkin was born in Los Angeles on October 2, 1934, the youngest of four children of Russian immigrants. His father, Manuel Fredkin, ran a chain of radio stores that failed during the Great Depression. His mother, Rose (Spiegel) Fredkin, was a pianist.

A brooding and socially awkward youth, Edward shied away from sports and school dances, preferring to lose himself in hobbies such as building rockets, designing fireworks, and dismantling and rebuilding old alarm clocks. “I have always been in good sync with machines,” he said in a 1988 interview with The Atlantic Monthly.

After high school, he enrolled at the California Institute of Technology in Pasadena, where he studied with the Nobel Prize-winning chemist Linus Pauling. But, drawn by his desire to fly, he left school in his second year to join the Air Force.

During the Korean War, he trained to fly fighter aircraft, but his prodigious skills in math and technology led him to work on military computer systems rather than in combat. The Air Force eventually sent him to MIT Lincoln Laboratory, a wellspring of Pentagon-funded technological innovation, to further his education in computer science.

This was the beginning of a long tenure at MIT, where in the 1960s he helped develop early versions of multiple-access computers as part of a Pentagon-funded program, Project MAC. That program also explored machine-aided cognition, an early investigation into artificial intelligence.

Professor Sussman said, “He was one of the world’s first computer programmers.”

In 1971, Professor Fredkin was chosen to direct Project MAC. He became a full-time faculty member shortly thereafter.

As his career progressed, Professor Fredkin continued to challenge mainstream scientific thinking. He made major advances in reversible computing, an esoteric field of study combining computer science and thermodynamics.

With a pair of innovations, the billiard-ball computer model, which he developed with Tommaso Toffoli, and the Fredkin gate, he showed that computation is not inherently irreversible. Those advances showed that computation need not destroy information by overwriting its intermediate results, and that it is theoretically possible to build a computer that consumes no energy and generates no heat.
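The Fredkin gate itself is simple enough to sketch in a few lines of code. The following Python illustration (not from the obituary) shows its two defining properties: it is its own inverse, so no input information is ever lost, and with one input pinned to a constant it can compute ordinary Boolean logic.

```python
def fredkin(c, a, b):
    """Fredkin gate (controlled swap): if the control bit c is 1,
    swap a and b; otherwise pass all three bits through unchanged."""
    return (c, b, a) if c == 1 else (c, a, b)

# Reversibility: applying the gate twice recovers the input exactly,
# for every one of the 8 possible three-bit inputs.
for bits in [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]:
    assert fredkin(*fredkin(*bits)) == bits

# Universality: fixing the third input to 0 makes the third output
# compute AND(x, y), so irreversible logic can be embedded in a
# reversible circuit without erasing any bits.
def and_via_fredkin(x, y):
    _, _, out = fredkin(x, y, 0)
    return out
```

Because every output pattern maps back to a unique input, a circuit built from such gates never erases information, which is exactly the property that, in principle, lets it sidestep the heat cost of conventional computation.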

But none of his insights sparked more debate than his theories of digital physics, a distinct field in which he became a leading theorist.

His universe-is-a-giant-computer theory, as described by the author and science writer Robert Wright in The Atlantic Monthly in 1988, rests on the idea that “information is more fundamental than matter and energy.” Professor Fredkin, Mr. Wright wrote, believed that “atoms, electrons and quarks are ultimately made up of bits: binary units of information, like the currency of computation in a personal computer or pocket calculator.”

As Professor Fredkin was quoted as saying in that article, DNA, the fundamental building block of heredity, “is a good example of digitally encoded information.”

“The information that describes what an animal or plant will be like is encoded,” he said. “It’s represented in the DNA, isn’t it? Well, now, there’s a process that takes that information and turns it into a creature.”

Even an animal as simple as a mouse, he concluded, “has a big, complex informational process.”

Professor Fredkin’s first marriage, to Dorothy Fredkin, ended in divorce in 1980. Besides his son Richard, he is survived by his wife, Joycelyn; a son, Michael, and two daughters, Sally and Susan, from his first marriage; a brother, Norman; a sister, Joan Antz; six grandchildren; and a great-grandson.

To the end of his life, Professor Fredkin’s theory of the universe remained a fringe but intriguing idea. “Most physicists don’t believe it to be true,” Professor Sussman said. “I’m not sure Fredkin believed it was true. But of course there is a lot to be learned from thinking like this.”

By contrast, his early views on artificial intelligence seem more prescient by the day.

He told The Times in 1977, “In the distant future, we will not know what computers are doing, or why they are doing it,” predicting machines whose intelligence would exceed that of “all the people who have ever lived on this planet.”

Yet, unlike many of today’s doomsayers, he felt no sense of existential dread. “Once there are clearly intelligent machines,” he said, “they will be no more interested in stealing our toys or dominating us than they are in dominating chimpanzees or taking nuts away from squirrels.”
