Artificial Intelligence and Quantum Computers Part 3

Planck Introduced the Idea of Quantized Energy

Quantum computing has deep roots that trace back to the early 20th century. The journey began with the revolutionary work of Max Planck in quantum theory and continued through many critical milestones. In 1900, Planck introduced the idea of quantized energy, which laid the groundwork for quantum theory. He suggested that energy is not continuous but comes in discrete packets known as quanta, an idea that explained black-body radiation in a way no other theory could. Building on Planck's theory, Albert Einstein presented a bold idea in 1905 with his explanation of the photoelectric effect: light has a dual nature, behaving as both particle and wave, and its energy is quantized into discrete packets that we now call photons.
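For concreteness, the two relations at the heart of these discoveries can be stated in their standard textbook forms (added here for reference, not drawn from the original article):

```latex
% Planck's relation: radiation of frequency f is exchanged in quanta of energy
E = h f, \qquad h \approx 6.626 \times 10^{-34}\,\mathrm{J\,s}

% Einstein's photoelectric equation: the maximum kinetic energy of an electron
% ejected from a metal with work function \phi depends on f, not on intensity
K_{\mathrm{max}} = h f - \phi
```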


Niels Bohr Took Quantum Theory a Step Further

Niels Bohr took quantum theory a step further in 1913 with his model of the hydrogen atom. He theorized that electrons exist in specific energy levels, or orbits, around an atomic nucleus and can shift between these levels by either absorbing or emitting energy. In the mid-1920s, the new quantum mechanics of Werner Heisenberg, Erwin Schrödinger, and Wolfgang Pauli gave formal shape to quantum superposition: a particle can exist in multiple states at once, with its precise properties determined only when observed. Heisenberg expanded the theory in 1927 with the uncertainty principle.
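In standard notation (again a textbook supplement rather than part of the original text), these three ideas read:

```latex
% Bohr's hydrogen atom: electrons occupy discrete energy levels indexed by n
E_n = -\frac{13.6\ \mathrm{eV}}{n^2}, \qquad n = 1, 2, 3, \dots

% Superposition: a two-level system (a qubit) occupies both basis states at once
|\psi\rangle = \alpha|0\rangle + \beta|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

% Heisenberg's uncertainty principle for position and momentum
\Delta x\,\Delta p \,\geq\, \frac{\hbar}{2}
```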

A Game-Changing Paper by Albert Einstein, Boris Podolsky, and Nathan Rosen

This principle states that certain pairs of properties, such as the position and momentum of a quantum particle, cannot both be known simultaneously with arbitrary precision. In 1935, a game-changing paper by Albert Einstein, Boris Podolsky, and Nathan Rosen was published. This paper, known as the EPR paper, introduced the concept of entanglement which, as we mentioned in the previous chapter, is a strange but fundamental aspect of quantum mechanics in which two or more particles are intertwined.
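The simplest illustration is a Bell state (standard modern notation, not from the EPR paper itself): measuring either qubit gives 0 or 1 at random, yet the two outcomes always agree, no matter how far apart the particles are.

```latex
% A maximally entangled two-qubit (Bell) state
|\Phi^{+}\rangle = \frac{|00\rangle + |11\rangle}{\sqrt{2}}
```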

Alan Turing and the Road to Quantum Computing

The computational side of the story begins with Alan Turing, whose universal machine of 1936 became the theoretical model for all modern computers, classical and quantum alike. A machine operating on quantum mechanical principles that could outperform classical computers remained mostly theoretical for several decades due to the challenges of practical implementation. Significant progress was made in the early 1980s by physicist Richard Feynman. He recognized that quantum systems could simulate physical systems that are tough to model with classical methods.
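Feynman's point can be made tangible in a few lines of Python (an illustrative sketch added here, not historical code): a classical simulator must store one complex amplitude per basis state, so memory grows as 2^n in the number of qubits n.

```python
# Why classical simulation of quantum systems is hard: the state of n qubits
# needs 2**n complex amplitudes, so memory grows exponentially with n.
import numpy as np

for n in (10, 20, 30, 40, 50):
    amplitudes = 2 ** n
    gigabytes = amplitudes * np.dtype(np.complex128).itemsize / 1e9
    print(f"{n} qubits -> {amplitudes:.2e} amplitudes, ~{gigabytes:.2e} GB")
```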

The Concept of a Quantum Algorithm

Feynman proposed a universal quantum simulator, a machine that could accurately replicate any physical system. In 1982, physicist Paul Benioff put forward the concept of a quantum Turing machine, a theoretical model of computation that operates on quantum principles. The 1980s also saw the birth of quantum algorithms. Physicist David Deutsch proposed that a quantum computer could tackle certain problems faster than traditional computers, and he introduced what is now called the Deutsch algorithm. In 1994, Peter Shor made a significant breakthrough with Shor's algorithm, which showed how quantum computers could efficiently solve the integer factorization problem. What made this so groundbreaking was its potential threat to the security of public-key encryption systems, since schemes such as RSA rest on the assumed hardness of exactly that problem.
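The Deutsch algorithm is small enough to simulate directly. The sketch below (an illustrative NumPy statevector simulation, with names of our own choosing) decides with a single oracle query whether a function f: {0,1} -> {0,1} is constant or balanced, a task that classically requires two evaluations of f.

```python
# Statevector simulation of the Deutsch algorithm on two qubits.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def oracle(f):
    """Build U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    state = np.kron([1.0, 0.0], [0.0, 1.0])  # start in |0>|1>
    state = np.kron(H, H) @ state            # superpose; second qubit carries phase
    state = oracle(f) @ state                # the single oracle query
    state = np.kron(H, I) @ state            # interfere on the first qubit
    p1 = np.sum(np.abs(state[2:]) ** 2)      # prob. first qubit measures 1
    return "balanced" if p1 > 0.5 else "constant"

for name, f in [("f(x)=0", lambda x: 0), ("f(x)=1", lambda x: 1),
                ("f(x)=x", lambda x: x), ("f(x)=1-x", lambda x: 1 - x)]:
    print(name, "->", deutsch(f))  # constant, constant, balanced, balanced
```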

Collaboration Between Google, NASA, and D-Wave Systems

When it comes to experimental progress, various research groups made important strides in building quantum computers and exploring different uses for qubits. One key development was a collaboration between Google, NASA, and D-Wave Systems, which launched the D-Wave Two quantum computer in 2013, one of the first commercially available quantum computers. What set it apart was its use of a different approach known as adiabatic quantum computing: built from superconducting qubits, it was designed to solve optimization problems. In 2016, IBM grabbed the spotlight by introducing the IBM Quantum Experience, a cloud-based platform where users could experiment with a small quantum computer made up of five superconducting qubits.
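Adiabatic machines attack optimization by encoding a problem as an energy function over spins and letting the hardware relax toward the lowest-energy configuration. The toy below (a classical brute-force stand-in, not D-Wave's actual API) shows the kind of Ising problem such hardware targets.

```python
# Toy Ising optimization: find spins s_i in {-1,+1} minimizing
# E(s) = sum_{i<j} J_ij * s_i * s_j. An annealer's ground state encodes
# the answer; on three spins we can simply brute-force all 8 configurations.
import itertools

J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}  # frustrated antiferromagnetic triangle

def energy(spins):
    return sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())

best = min(itertools.product([-1, 1], repeat=3), key=energy)
print("lowest-energy configuration:", best, "energy:", energy(best))
```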


Logical Qubits with the Ability to Correct Errors

Their goal was to give more people access to quantum computing and to encourage collaboration within the quantum community. With the IBM Quantum Experience, researchers and enthusiasts around the world could study quantum algorithms and run experiments from anywhere. Then, around 2017, came a major milestone: demonstrations of qubits that could correct their own errors. Physicists at Yale University showed error correction that extended a qubit's lifetime, and schemes such as the surface code chart a path to logical qubits with the ability to correct errors. It was a key step toward fault-tolerant computing, where information can be kept safe from errors and decoherence. Companies like Google are pushing hard to build practical, market-ready quantum computers.
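The surface code itself is too involved to show here, but its core idea, redundancy plus a correction rule, already appears in the classical three-bit repetition code. The sketch below (a simplified classical analogue, added for illustration; a real quantum code must correct errors without directly reading the data) encodes one logical bit into three and repairs any single flip by majority vote.

```python
# Classical 3-bit repetition code: with per-bit flip probability p = 0.1,
# majority voting drops the logical error rate to about 3p^2 - 2p^3 ~ 0.028.
import random

def encode(bit):
    return [bit, bit, bit]                    # 0 -> 000, 1 -> 111

def noisy_channel(codeword, p=0.1):
    return [b ^ (random.random() < p) for b in codeword]  # flip each bit w.p. p

def decode(codeword):
    return int(sum(codeword) >= 2)            # majority vote fixes one flip

random.seed(0)
trials = 100_000
failures = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f} (unencoded bit fails at 0.1)")
```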

To be continued…

By

Dr. Abid Hussain Nawaz, Ph.D. & Postdoc

Zeenat Mushtaque, Master of Philosophy in Solid State Physics
