Conversion between microwaves and optics at the quantum level
We pursue conversion of information between microwaves and optics. Today’s demand for computing and connectivity outstrips even the most optimistic Moore’s-law projections. There is no realistic path to sustainable growth in our capability to compute and connect without changes to our infrastructure. A key issue is the excessive energy loss caused by communication inside electronic devices. In classical data centers and computers, this energy loss leads to heating and greenhouse gas emissions. In emerging quantum computers, it prevents small quantum systems from being scaled up into more powerful distributed quantum processors.
Many scientists believe that using light for communication is the only viable way forward, given its high bandwidth, low loss, and low noise even at room temperature. To do so, we must convert microwave signals onto light. This is an extremely difficult problem: the wavelength of the microwaves is about a centimeter, while the wavelength of near-infrared light is around a micrometer, a vast difference in scale. We are developing new hardware to bridge this gap. We first convert the microwave signals (orange in the picture) into gigahertz sound, and then convert the sound to light (light blue in the picture). The hardware exploits the strong interactions between light and sound in nanostructures, which arise because the two have similar wavelengths; a rough estimate of the relevant length scales is sketched below. It must convert even entangled input states with high fidelity. Even classically, the approach may become important wherever large amounts of data must be analyzed and moved around quickly with low power consumption, such as in self-driving cars, supercomputers, AI engines, and quantum processors. Read more in this popular science article.
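As a rough illustration of why gigahertz sound bridges the scale gap, the back-of-the-envelope Python sketch below compares the three wavelengths. The specific numbers (a 10 GHz signal, a longitudinal sound speed of about 8.4 km/s in silicon, telecom light at 1550 nm, and a silicon refractive index of about 3.5) are representative assumptions, not parameters of our devices.

```python
# Back-of-the-envelope comparison of the length scales involved.
C_VACUUM = 3.0e8          # speed of light in vacuum, m/s
V_SOUND_SI = 8.4e3        # assumed longitudinal sound speed in silicon, m/s
N_SI = 3.5                # assumed refractive index of silicon near 1550 nm
F_SIGNAL = 10e9           # assumed signal frequency, Hz (10 GHz)
LAMBDA_TELECOM = 1.55e-6  # telecom-band optical wavelength in vacuum, m

lambda_microwave = C_VACUUM / F_SIGNAL   # ~3 cm in vacuum
lambda_sound = V_SOUND_SI / F_SIGNAL     # ~0.8 micrometers in silicon
lambda_light = LAMBDA_TELECOM / N_SI     # ~0.45 micrometers in silicon

print(f"microwave wavelength: {lambda_microwave * 1e2:.1f} cm")
print(f"acoustic wavelength : {lambda_sound * 1e6:.2f} um")
print(f"optical wavelength  : {lambda_light * 1e6:.2f} um")
print(f"microwave vs optical: {lambda_microwave / lambda_light:.0e}x mismatch")
print(f"acoustic vs optical : {lambda_sound / lambda_light:.1f}x (well matched)")
```

The acoustic and optical wavelengths come out within a factor of two of each other, while the microwave wavelength is roughly five orders of magnitude larger.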
Acousto-optic devices: non-reciprocal modulators and sensors
Light and gigahertz sound are uniquely matched: both have wavelengths of around one micrometer in common materials like silicon. This lets them interact strongly, so sound can process electromagnetic signals with low power consumption. One example is quantum microwave-to-optics conversion (see above). Others are classical electro-optic modulation, optical non-reciprocal elements such as isolators and circulators, and optical beam-steering sensors. Such beam-steering sensors, called “lidars”, can exploit sound trapped on the surface of a silicon chip to efficiently steer light and form images, a much-needed function in autonomous systems like self-driving cars; a simple model of this steering is sketched below. The result is a vision (picture) in which light and gigahertz sound are co-integrated on mass-manufacturable chips, enabling us to capture, analyze, and sculpt light both on and off the chip in new ways. Read more in this review.
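To give a feel for how a sound wave steers a light beam, here is a minimal Python sketch based on the standard Bragg-diffraction picture, not on any specific device described above: the acoustic wave acts as a moving grating of period Λ = v/f, and the light is deflected by 2θ_B with sin θ_B = λ/(2Λ). The surface-wave speed, optical wavelength, and drive frequencies below are illustrative assumptions.

```python
import math

# Minimal sketch of acousto-optic beam steering in the textbook Bragg picture:
# the acoustic wave acts as a moving grating of period Lambda = v / f, and
# light is deflected by 2*theta_B, where sin(theta_B) = lambda_opt / (2*Lambda).
V_SAW = 4.0e3         # assumed surface-acoustic-wave speed, m/s
LAMBDA_OPT = 1.55e-6  # assumed optical wavelength in the medium, m

def deflection_deg(f_rf_hz: float) -> float:
    """Full Bragg deflection angle (degrees) for a given RF drive frequency."""
    grating_period = V_SAW / f_rf_hz
    theta_b = math.asin(LAMBDA_OPT / (2.0 * grating_period))
    return math.degrees(2.0 * theta_b)

# Sweeping the RF drive frequency sweeps the output angle electronically,
# with no moving parts: the basic mechanism behind acousto-optic lidar.
for f_ghz in (0.5, 1.0, 1.5, 2.0):
    print(f"{f_ghz:.1f} GHz drive -> deflection {deflection_deg(f_ghz * 1e9):.1f} deg")
```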
Acoustic quantum processors
Like the microwave electromagnetic radiation used in mobile phones and wireless networks, acoustic devices can operate at gigahertz frequencies. However, acoustics offers unique advantages over microwaves. The acoustic wavelength is about five orders of magnitude smaller than the microwave wavelength at the same frequency, allowing far more compact devices with reduced crosstalk. In addition, acoustic systems have superior coherence, with lifetimes as long as seconds observed even in micron-scale phononic crystals. These acoustic systems can also be coupled to superconducting qubits via piezoelectricity, with interaction rates strong enough that individual phonons can be resolved (Image credit: Wentao Jiang). We aim to leverage these unique acoustic properties for quantum information processing; a quick numerical check of the relevant scales is given below. If the impressive acoustic coherence can be maintained under microwave control and read-out, it would usher in new quantum sensors and memories, letting us scale up microwave quantum processors in a hardware-efficient way.
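A quick numerical check of these scales, a sketch using assumed typical values rather than numbers quoted above (a 5 GHz operating frequency, a sound speed of about 5 km/s, and a 10 mK dilution-refrigerator temperature): the wavelength ratio at fixed frequency is simply the ratio of the speed of light to the speed of sound, and the single-phonon energy comfortably exceeds the thermal energy at millikelvin temperatures.

```python
import math

# Quick check of the scales above, with assumed (not quoted) typical values.
H = 6.626e-34     # Planck constant, J*s
KB = 1.381e-23    # Boltzmann constant, J/K
C_VACUUM = 3.0e8  # speed of light in vacuum, m/s
V_SOUND = 5.0e3   # assumed typical sound speed in a solid, m/s
F = 5e9           # assumed gigahertz operating frequency, Hz
T = 0.010         # assumed dilution-refrigerator temperature, K

# 1) At equal frequency, the wavelength ratio reduces to the speed ratio.
print(f"microwave/acoustic wavelength ratio ~ {C_VACUUM / V_SOUND:.0e}")

# 2) The single-phonon energy h*f far exceeds k_B*T at millikelvin
#    temperatures, so the thermal phonon occupation is negligible.
n_thermal = 1.0 / (math.exp(H * F / (KB * T)) - 1.0)
print(f"single-phonon energy / k_B = {H * F / KB * 1e3:.0f} mK")
print(f"thermal occupation at 10 mK ~ {n_thermal:.1e}")
```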