Now that computers connect us all, for better and worse, what’s next?

This article was written, edited and designed on laptop computers. Such foldable, transportable devices would have astounded computer scientists just a few decades ago, and seemed like sheer magic before that. The machines contain billions of tiny computing elements, running millions of lines of software instructions, collectively written by countless people across the globe. You click or tap or type or speak, and the result seamlessly appears on the screen.

Computers were once so large they filled rooms. Now they’re everywhere and invisible, embedded in watches, car engines, cameras, televisions and toys. They manage electrical grids, analyze scientific data and predict the weather. The modern world would be impossible without them.

Scientists aim to make computers faster and programs more intelligent, while deploying technology in an ethical manner. Their efforts build on more than a century of innovation.

In 1833, English mathematician Charles Babbage conceived a programmable machine that presaged today’s computing architecture, featuring a “store” for holding numbers, a “mill” for operating on them, an instruction reader and a printer. This Analytical Engine also had logical functions like branching (if X, then Y). Babbage constructed only a piece of the machine, but based on its description, his acquaintance Ada Lovelace saw that the numbers it might manipulate could represent anything, even music. “A new, a vast, and a powerful language is developed for the future use of analysis,” she wrote. Lovelace became an expert in the proposed machine’s operation and is often called the first programmer.

In 1936, English mathematician Alan Turing introduced the idea of a computer that could rewrite its own instructions, making it endlessly programmable. His mathematical abstraction could, using a small vocabulary of operations, mimic a machine of any complexity, earning it the name “universal Turing machine.”
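To get a feel for how small that vocabulary of operations can be, here is a minimal sketch, written in modern Python rather than anything Turing himself set down, of a machine that does nothing but read a symbol, write a symbol, move left or right and change state. The function and the rule table (which adds 1 to a binary number) are invented purely for illustration.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Follow a small table of read-write-move rules until the 'halt' state."""
    cells = dict(enumerate(tape))          # the tape, stored as an extendable dict
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")      # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, "_") for i in range(lo, hi + 1)).strip("_")

# Illustrative rules for adding 1 to a binary number: walk to the rightmost
# digit, then carry 1s leftward until a 0 (or a blank) absorbs the carry.
increment_rules = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run_turing_machine(increment_rules, "1011"))  # prints "1100" (11 + 1 = 12)
```

Swapping in a different rule table is all it takes to make the same loop carry out a different computation, which is the essence of Turing's universality argument.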

The first reliable electronic digital computer, Colossus, was completed in 1943 to help England decipher wartime codes. It used vacuum tubes — devices for controlling the flow of electrons — instead of moving mechanical parts like the Analytical Engine’s cogwheels. This made Colossus fast, but engineers had to manually rewire it every time they wanted to perform a new task.

Perhaps inspired by Turing’s concept of a more easily reprogrammable computer, the team that created the United States’ first electronic digital computer, ENIAC, drafted a new architecture for its successor, EDVAC. Mathematician John von Neumann, who penned EDVAC’s design in 1945, described a system that could store programs in its memory alongside data and alter the programs, a setup now called the von Neumann architecture. Nearly every computer today follows that paradigm.

In 1947, researchers at Bell Telephone Laboratories invented the transistor, a piece of circuitry in which the application of voltage (electrical pressure) or current controls the flow of electrons between two points. It came to replace the slower and less-efficient vacuum tubes.

In 1958 and 1959, researchers at Texas Instruments and Fairchild Semiconductor independently invented integrated circuits, in which transistors and their supporting circuitry were fabricated on a chip in one process.

For a long time, only experts could program computers. Then in 1957, IBM released FORTRAN, a programming language that was much easier to understand. It’s still in use today. In 1981, the company unveiled the IBM PC, and Microsoft released its MS-DOS operating system; together, they expanded the reach of computers into homes and offices. Apple further personalized computing with the operating systems for its Lisa, in 1983, and Macintosh, in 1984. Both systems popularized graphical user interfaces, or GUIs, offering users a mouse cursor instead of a command line.

[Image: black and white image of two women operating Colossus at Bletchley Park]


Software hacks

Researchers continue to work on a cornucopia of new technologies for transistors, other computing elements, chip designs and hardware paradigms: photonics, spintronics, biomolecules, carbon nanotubes. But much more can still be eked out of current elements and architectures merely by optimizing code.

In a 2020 paper in Science, for instance, researchers studied the simple problem of multiplying two matrices, grids of numbers used in mathematics and machine learning. The calculation ran more than 60,000 times faster when the team picked an efficient programming language and optimized the code for the underlying hardware, compared with a standard piece of code in the Python language, which is considered user-friendly and easy to learn.
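That kind of gap is easy to glimpse on an ordinary laptop. The sketch below is not the code from the Science paper; it simply times the same multiplication written as a textbook triple loop in plain Python against a single NumPy call, which hands the work to tuned, compiled linear algebra routines. The matrix size is an arbitrary choice, and the speedup you see will depend on your hardware.

```python
import time
import numpy as np

n = 300                                   # modest size so the pure-Python version finishes quickly
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def multiply_with_loops(a, b):
    """Textbook triple loop: result[i][j] = sum over k of a[i][k] * b[k][j]."""
    size = len(a)
    result = [[0.0] * size for _ in range(size)]
    for i in range(size):
        for j in range(size):
            total = 0.0
            for k in range(size):
                total += a[i][k] * b[k][j]
            result[i][j] = total
    return result

start = time.perf_counter()
multiply_with_loops(a, b)
loop_seconds = time.perf_counter() - start

start = time.perf_counter()
a @ b                                     # NumPy dispatches to optimized compiled code
numpy_seconds = time.perf_counter() - start

print(f"loops: {loop_seconds:.2f} s, numpy: {numpy_seconds:.4f} s, "
      f"ratio: {loop_seconds / numpy_seconds:.0f}x")
```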

Neil Thompson, a research scientist at MIT who coauthored the Science paper, recently coauthored another paper looking at historical improvements in algorithms (sets of instructions that make decisions according to rules set by humans) for tasks like sorting data. “For a substantial minority of algorithms,” he says, “their progress has been as fast or faster than Moore’s law.”

People, including Moore, have predicted the end of Moore’s law for decades. Progress may have slowed, but human innovation has kept technology moving at a fast clip.

Chasing intelligence

From the early days of computer science, researchers have aimed to replicate human thought. Alan Turing opened a 1950 paper titled “Computing Machinery and Intelligence” with: “I propose to consider the question, ‘Can machines think?’ ” He proceeded to outline a test, which he called “the imitation game” (now called the Turing test), in which a human communicating with a computer and another human via written questions had to judge which was which. If the judge failed, the computer could presumably think.

The term “artificial intelligence” was coined in a 1955 proposal for a summer institute at Dartmouth College. “An attempt will be made,” the proposal goes, “to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” The organizers expected that over two months, the 10 summit attendees would make a “significant advance.”

More than six decades and untold person-hours later, it’s unclear whether the advances live up to what was in mind at that summer summit. Artificial intelligence surrounds us in ways invisible (filtering spam), headline-worthy (self-driving cars, beating us at chess) and in between (letting us chat with our smartphones). But these are all narrow forms of AI, performing one or two tasks well. What Turing and others had in mind is called artificial general intelligence, or AGI. Depending on your definition, it’s a system that can do most of what humans do.

[Image: photo of Garry Kasparov moving a piece on a chess board during the match against Deep Blue]