Speaking of citations!

Quantum supremacy using a programmable superconducting processor

Google AI Quantum and collaborators

The tantalizing promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space.

Here, we report using a processor with programmable superconducting qubits to create quantum states on 53 qubits, occupying a state space of 2^53 ≈ 10^16. Measurements from repeated experiments sample the corresponding probability distribution, which we verify using classical simulations. While our processor takes about 200 seconds to sample one instance of the quantum circuit 1 million times, a state-of-the-art supercomputer would require approximately 10,000 years to perform the equivalent task.

This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm.

In the early 1980s, Richard Feynman proposed that a quantum computer would be an effective tool to solve problems in physics and chemistry, as it is exponentially costly to simulate large quantum systems with classical computers [1]. Realizing Feynman’s vision poses significant experimental and theoretical challenges. First, can a quantum system be engineered to perform a computation in a large enough computational (Hilbert) space and with low enough errors to provide a quantum speedup? Second, can we formulate a problem that is hard for a classical computer but easy for a quantum computer? By computing a novel benchmark task on our superconducting qubit processor [2-7], we tackle both questions. Our experiment marks a milestone towards full-scale quantum computing: quantum supremacy [8].

In reaching this milestone, we show that quantum speedup is achievable in a real-world system and is not precluded by any hidden physical laws. Quantum supremacy also heralds the era of Noisy Intermediate-Scale Quantum (NISQ) technologies. The benchmark task we demonstrate has an immediate application in generating certifiable random numbers [9]; other initial uses for this new computational capability may include optimization [10-12], machine learning [13-15], materials science and chemistry [16-18]. However, realizing the full promise of quantum computing (e.g. Shor’s algorithm for factoring) still requires technical leaps to engineer fault-tolerant logical qubits [19-23].

To achieve quantum supremacy, we made a number of technical advances which also pave the way towards error correction. We developed fast, high-fidelity gates that can be executed simultaneously across a two-dimensional qubit array. We calibrated and benchmarked the processor at both the component and system level using a powerful new tool: cross-entropy benchmarking (XEB).

Finally, we used component-level fidelities to accurately predict the performance of the whole system, further showing that quantum information behaves as expected when scaling to large systems.

A COMPUTATIONAL TASK TO DEMONSTRATE QUANTUM SUPREMACY

To demonstrate quantum supremacy, we compare our quantum processor against state-of-the-art classical computers in the task of sampling the output of a pseudo-random quantum circuit [24-26].

Random circuits are a suitable choice for benchmarking since they do not possess structure and therefore allow for limited guarantees of computational hardness [24, 25, 27, 28]. We design the circuits to entangle a set of quantum bits (qubits) by repeated application of single-qubit and two-qubit logical operations. Sampling the quantum circuit’s output produces a set of bitstrings, e.g. {0000101, 1011100, …}. Due to quantum interference, the probability distribution of the bitstrings resembles a speckled intensity pattern produced by light interference in laser scatter, such that some bitstrings are much more likely to occur than others. Classically computing this probability distribution becomes exponentially more difficult as the number of qubits (width) and number of gate cycles (depth) grows.

We verify that the quantum processor is working properly using a method called cross-entropy benchmarking (XEB) [24, 26], which compares how often each bitstring is observed experimentally with its corresponding ideal probability computed via simulation on a classical computer. For a given circuit, we collect the measured bitstrings {x_i} and compute the linear XEB fidelity [24-26, 29], which is the mean of the simulated probabilities of the bitstrings we measured:

F_XEB = 2^n ⟨P(x_i)⟩_i − 1    (1)

where n is the number of qubits, P(x_i) is the probability of bitstring x_i computed for the ideal quantum circuit, and the average is over the observed bitstrings. Intuitively, F_XEB is correlated with how often we sample high-probability bitstrings. When there are no errors in the quantum circuit, sampling the probability distribution will produce F_XEB = 1. On the other hand, sampling from the uniform distribution will give ⟨P(x_i)⟩_i = 1/2^n and produce F_XEB = 0. Values of F_XEB between 0 and 1 correspond to the probability that no error has occurred while running the circuit.

FIG. 1. The Sycamore processor. a, Layout of processor showing a rectangular array of 54 qubits (gray), each connected to its four nearest neighbors with couplers (blue). The inoperable qubit is outlined. b, Optical image of the Sycamore chip.
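To make the estimator concrete, here is a minimal Python sketch of the linear XEB computation, assuming the ideal output probabilities are available from a classical simulation of the circuit; ideal_probability below is a hypothetical stand-in for that simulator:

    import numpy as np

    def linear_xeb(bitstrings, ideal_probability, n_qubits):
        """Linear XEB fidelity, Eq. (1): F_XEB = 2^n * mean(P(x_i)) - 1.

        bitstrings: measured samples, e.g. ["0000101", "1011100", ...]
        ideal_probability: maps a bitstring to its ideal circuit probability
        (a hypothetical stand-in for classical simulation of the circuit).
        """
        probs = np.array([ideal_probability(x) for x in bitstrings])
        return 2.0 ** n_qubits * probs.mean() - 1.0

    # Limits quoted above: sampling the ideal output distribution gives
    # F_XEB near 1, while uniform random bitstrings give mean(P) = 1/2^n
    # and hence F_XEB = 0.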

The probabilities P(x_i) must be obtained from classically simulating the quantum circuit, and thus computing F_XEB is intractable in the regime of quantum supremacy. However, with certain circuit simplifications, we can obtain quantitative fidelity estimates of a fully operating processor running wide and deep quantum circuits. Our goal is to achieve a high enough F_XEB for a circuit with sufficient width and depth such that the classical computing cost is prohibitively large. This is a difficult task because our logic gates are imperfect and the quantum states we intend to create are sensitive to errors. A single bit or phase flip over the course of the algorithm will completely shuffle the speckle pattern and result in close to zero fidelity [24, 29].

Therefore, in order to claim quantum supremacy we need a quantum processor that executes the program with sufficiently low error rates.

BUILDING AND CHARACTERIZING A HIGH-FIDELITY PROCESSOR

We designed a quantum processor named “Sycamore” which consists of a two-dimensional array of 54 transmon qubits, where each qubit is tunably coupled to four nearest neighbors, in a rectangular lattice. The connectivity was chosen to be forward compatible with error correction using the surface code [20].

A key systems-engineering advance of this device is achieving high-fidelity single- and two-qubit operations, not just in isolation but also while performing a realistic computation with simultaneous gate operations on many qubits. We discuss the highlights below; extended details can be found in the supplementary information.

In a superconducting circuit, conduction electrons condense into a macroscopic quantum state, such that currents and voltages behave quantum mechanically [2, 30]. Our processor uses transmon qubits [6], which can be thought of as nonlinear superconducting resonators at 5 to 7 GHz.

The qubit is encoded as the two lowest quantum eigenstates of the resonant circuit. Each transmon has two controls: a microwave drive to excite the qubit, and a magnetic flux control to tune the frequency. Each qubit is connected to a linear resonator used to read out the qubit state [5]. As shown in Fig. 1, each qubit is also connected to its neighboring qubits using a new adjustable coupler [31, 32]. Our coupler design allows us to quickly tune the qubit-qubit coupling from completely off to 40 MHz. Since one qubit did not function properly, the device uses 53 qubits and 86 couplers.

The processor is fabricated using aluminum for metalization and Josephson junctions, and indium for bump-bonds between two silicon wafers. The chip is wire-bonded to a superconducting circuit board and cooled to below 20 mK in a dilution refrigerator to reduce ambient thermal energy to well below the qubit energy. The processor is connected through filters and attenuators to room-temperature electronics, which synthesize the control signals.

The state of all qubits can be read simultaneously by using a frequency-multiplexing technique [33, 34]. We use two stages of cryogenic amplifiers to boost the signal, which is digitized (8 bits at 1 GS/s) and demultiplexed digitally at room temperature. In total, we orchestrate 277 digital-to-analog converters (14 bits at 1 GS/s) for complete control of the quantum processor.

We execute single-qubit gates by driving 25 ns microwave pulses resonant with the qubit frequency while the qubit-qubit coupling is turned off.

The pulses are shaped to minimize transitions to higher transmon states [35]. Gate performance varies strongly with frequency due to two-level-system (TLS) defects [36, 37], stray microwave modes, coupling to control lines and the readout resonator, residual stray coupling between qubits, flux noise, and pulse distortions. We therefore optimize the single-qubit operation frequencies to mitigate these error mechanisms.

FIG. 2. System-wide Pauli and measurement errors. a, Integrated histogram (empirical cumulative distribution function, ECDF) of Pauli errors (black, green, blue) and readout errors (orange), measured on qubits in isolation (dotted lines) and when operating all qubits simultaneously (solid). The median of each distribution occurs at 0.50 on the vertical axis. Average (mean) values are shown below. b, Heatmap showing single- and two-qubit Pauli errors e1 (crosses) and e2 (bars) positioned in the layout of the processor. Values shown for all qubits operating simultaneously.

Average error            Isolated   Simultaneous
Single-qubit (e1)        0.15%      0.16%
Two-qubit (e2)           0.36%      0.62%
Two-qubit, cycle (e2c)   0.65%      0.93%
Readout (er)             3.1%       3.8%

We benchmark single-qubit gate performance by using the XEB protocol described above, reduced to the single-qubit level (n = 1), to measure the probability of an error occurring during a single-qubit gate. On each qubit, we apply a variable number m of randomly selected gates and measure F_XEB averaged over many sequences; as m increases, errors accumulate and average F_XEB decays.

We model this decay by F_XEB = [1 − e1/(1 − 1/D^2)]^m, where e1 is the Pauli error probability. The state (Hilbert) space dimension term, D = 2^n = 2, corrects for the depolarizing model where states with errors partially overlap with the ideal state. This procedure is similar to the more typical technique of randomized benchmarking [21, 38, 39], but supports non-Clifford gatesets [40] and can separate out decoherence error from coherent control error. We then repeat the experiment with all qubits executing single-qubit gates simultaneously (Fig. 2), which shows only a small increase in the error probabilities, demonstrating that our device has low microwave crosstalk.

We perform two-qubit iSWAP-like entangling gates by bringing neighboring qubits on resonance and turning on a 20 MHz coupling for 12 ns, which allows the qubits to swap excitations. During this time, the qubits also experience a controlled-phase (CZ) interaction, which originates from the higher levels of the transmon. The two-qubit gate frequency trajectories of each pair of qubits are optimized to mitigate the same error mechanisms considered in optimizing single-qubit operation frequencies.

To characterize and benchmark the two-qubit gates, we run two-qubit circuits with m cycles, where each cycle contains a randomly chosen single-qubit gate on each of the two qubits followed by a fixed two-qubit gate. We learn the parameters of the two-qubit unitary (e.g. the amount of iSWAP and CZ interaction) by using F_XEB as a cost function. After this optimization, we extract the per-cycle error e2c from the decay of F_XEB with m, and isolate the two-qubit error e2 by subtracting the two single-qubit errors e1. We found an average e2 of 0.36%.
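As an illustration of this fitting procedure, the sketch below generates synthetic single-qubit XEB decay data from the depolarizing model and fits it to recover the Pauli error rate; the error rate, noise level, and sequence lengths are invented for illustration, not taken from the experiment:

    import numpy as np
    from scipy.optimize import curve_fit

    D = 2  # Hilbert-space dimension for a single qubit (n = 1)

    def xeb_decay(m, e1, a):
        # Depolarizing model from above: F_XEB(m) = a * [1 - e1/(1 - 1/D^2)]^m
        return a * (1.0 - e1 / (1.0 - 1.0 / D**2)) ** m

    # Synthetic "measurements": the model with e1 = 0.15% plus some noise.
    rng = np.random.default_rng(0)
    m = np.arange(10, 500, 20)
    f_measured = xeb_decay(m, 0.0015, 1.0) + rng.normal(0.0, 0.005, m.size)

    (e1_fit, a_fit), _ = curve_fit(xeb_decay, m, f_measured, p0=[1e-3, 1.0])
    print(f"fitted Pauli error per gate: {e1_fit:.3%}")  # recovers ~0.15%

The same decay-fit idea is what extracts the per-cycle error e2c in the two-qubit benchmarking described above.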

Additionally, we repeat the same procedure while simultaneously running two-qubit circuits for the entire array. After updating the unitary parameters to account for effects such as dispersive shifts and crosstalk, we find an average e2 of 0.62%. For the full experiment, we generate quantum circuits using the two-qubit unitaries measured for each pair during simultaneous operation, rather than a standard gate for all pairs. The typical two-qubit gate is a full iSWAP with 1/6 of a full CZ. In principle, our architecture could generate unitaries with arbitrary iSWAP and CZ interactions, but reliably generating a target unitary remains an active area of research.

Finally, we benchmark qubit readout using standard dispersive measurement [41]. Measurement errors averaged over the 0 and 1 states are shown in Fig. 2a.

We have also measured the error when operating all qubits simultaneously, by randomly preparing each qubit in the 0 or 1 state and then measuring all qubits for the probability of the correct result. We find that simultaneous readout incurs only a modest increase in per-qubit measurement errors.

FIG. 3. Control operations for the quantum supremacy circuits. a, Example quantum circuit instance used in our experiment. Every cycle includes a layer each of single- and two-qubit gates. The single-qubit gates are chosen randomly from {√X, √Y, √W}. The sequence of two-qubit gates is chosen according to a tiling pattern, coupling each qubit sequentially to its four nearest-neighbor qubits. The couplers are divided into four subsets (ABCD), each of which is executed simultaneously across the entire array, corresponding to shaded colors. Here we show an intractable sequence (repeat ABCDCDAB); we also use different coupler subsets along with a simplifiable sequence (repeat EFGHEFGH, not shown) that can be simulated on a classical computer. b, Waveform of control signals for single- and two-qubit gates.

Having found the error rates of the individual gates and readout, we can model the fidelity of a quantum circuit as the product of the probabilities of error-free operation of all gates and measurements. Our largest random quantum circuits have 53 qubits, 1113 single-qubit gates, 430 two-qubit gates, and a measurement on each qubit, for which we predict a total fidelity of 0.2%.
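The product model lends itself to a quick back-of-the-envelope check. The sketch below simply multiplies the simultaneous mean error rates from Fig. 2 over the quoted gate and measurement counts; this is a rougher calculation than the paper's own (which works from per-qubit errors with depolarizing corrections), so it lands near, rather than exactly on, the quoted 0.2%:

    # Mean error rates with all qubits operating simultaneously (Fig. 2):
    e1, e2, er = 0.0016, 0.0062, 0.038
    # Gate and measurement counts for the largest 53-qubit circuit:
    n_1q, n_2q, n_meas = 1113, 430, 53

    f_pred = (1 - e1) ** n_1q * (1 - e2) ** n_2q * (1 - er) ** n_meas
    print(f"predicted circuit fidelity: {f_pred:.3%}")  # roughly 0.15%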

This fidelity should be resolvable with a few million measurements, since the uncertainty on F_XEB is 1/√Ns, where Ns is the number of samples. Our model assumes that entangling larger and larger systems does not introduce additional error sources beyond the errors we measure at the single- and two-qubit level; in the next section we will see how well this hypothesis holds.

FIDELITY ESTIMATION IN THE SUPREMACY REGIME

The gate sequence for our pseudo-random quantum circuit generation is shown in Fig. 3. One cycle of the algorithm consists of applying single-qubit gates chosen randomly from {√X, √Y, √W} on all qubits, followed by two-qubit gates on pairs of qubits. The sequences of gates which form the “supremacy circuits” are designed to minimize the circuit depth required to create a highly entangled state, which ensures computational complexity and classical hardness.

While we cannot compute F_XEB in the supremacy regime, we can estimate it using three variations to reduce the complexity of the circuits.

In “patch circuits”, we remove a slice of two-qubit gates (a small fraction of the total number of two-qubit gates), splitting the circuit into two spatially isolated, non-interacting patches of qubits. We then compute the total fidelity as the product of the patch fidelities, each of which can be easily calculated. In “elided circuits”, we remove only a fraction of the initial two-qubit gates along the slice, allowing for entanglement between patches, which more closely mimics the full experiment while still maintaining simulation feasibility. Finally, we can also run full “verification circuits” with the same gate counts as our supremacy circuits, but with a different pattern for the sequence of two-qubit gates which is much easier to simulate classically [29]. Comparison between these variations allows tracking of the system fidelity as we approach the supremacy regime.

We first check that the patch and elided versions of the verification circuits produce the same fidelity as the full verification circuits up to 53 qubits, as shown in Fig. 4a. For each data point, we typically collect Ns = 5×10^6 total samples over ten circuit instances, where instances differ only in the choices of single-qubit gates in each cycle.

We also show predicted F_XEB values computed by multiplying the no-error probabilities of single- and two-qubit gates and measurement [29]. Patch, elided, and predicted fidelities all show good agreement with the fidelities of the corresponding full circuits, despite the vast differences in computational complexity and entanglement. This gives us confidence that elided circuits can be used to accurately estimate the fidelity of more complex circuits.

We proceed now to benchmark our most computationally difficult circuits. In Fig. 4b, we show the measured F_XEB for 53-qubit patch and elided versions of the full supremacy circuits with increasing depth.

For the largest circuit with 53 qubits and 20 cycles, we collected Ns = 30×10^6 samples over 10 circuit instances, obtaining F_XEB = (2.24 ± 0.21)×10^-3 for the elided circuits. With 5σ confidence, we assert that the average fidelity of running these circuits on the quantum processor is greater than at least 0.1%. The full data for Fig. 4b should have similar fidelities, but are only archived since the simulation times (red numbers) take too long. It is thus in the quantum supremacy regime.

FIG. 4. Demonstrating quantum supremacy. a, Verification of benchmarking methods. F_XEB values for patch, elided, and full verification circuits are calculated from measured bitstrings and the corresponding probabilities predicted by classical simulation. Here, the two-qubit gates are applied in a simplifiable tiling and sequence such that the full circuits can be simulated out to n = 53, m = 14 in a reasonable amount of time. Each data point is an average over 10 distinct quantum circuit instances that differ in their single-qubit gates (for n = 39, 42, 43 only 2 instances were simulated). For each n, each instance is sampled with Ns between 0.5M and 2.5M. The black line shows predicted F_XEB based on single- and two-qubit gate and measurement errors. The close correspondence between all four curves, despite their vast differences in complexity, justifies the use of elided circuits to estimate fidelity in the supremacy regime. b, Estimating F_XEB in the quantum supremacy regime. Here, the two-qubit gates are applied in a non-simplifiable tiling and sequence for which it is much harder to simulate. For the largest elided data (n = 53, m = 20, total Ns = 30M), we find an average F_XEB > 0.1% with 5σ confidence, where σ includes both systematic and statistical uncertainties. The corresponding full circuit data, not simulated but archived, is expected to show similarly significant fidelity. For m = 20, obtaining 1M samples on the quantum processor takes 200 seconds, while an equal fidelity classical sampling would take 10,000 years on 1M cores, and verifying the fidelity would take millions of years.

DETERMINING THE CLASSICAL COMPUTATIONAL COST

We simulate the quantum circuits used in the experiment on classical computers for two purposes: verifying our quantum processor and benchmarking methods by computing F_XEB where possible using simplifiable circuits (Fig. 4a), and estimating F_XEB as well as the classical cost of sampling our hardest circuits (Fig. 4b).

Up to 43 qubits, we use a Schrödinger algorithm (SA), which simulates the evolution of the full quantum state; the Jülich supercomputer (100k cores, 250 TB) runs the largest cases. Above this size, there is not enough RAM to store the quantum state [42]. For larger qubit numbers, we use a hybrid Schrödinger-Feynman algorithm (SFA) [43] running on Google data centers to compute the amplitudes of individual bitstrings. This algorithm breaks the circuit up into two patches of qubits and efficiently simulates each patch using a Schrödinger method, before connecting them using an approach reminiscent of the Feynman path-integral.
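To see where the depth dependence described next comes from: each two-qubit gate acting across the cut between the two patches is expanded as a sum of product terms (its Schmidt rank, at most 4 for a general two-qubit gate), and every combination of terms is a separate "path" requiring its own pair of patch simulations. A toy count, with an invented tally of cross-patch gates for illustration:

    # Toy path count for a Schrodinger-Feynman split (illustrative numbers).
    # Each cross-patch two-qubit gate of Schmidt rank r multiplies the
    # number of Feynman paths to be summed by r.
    def num_paths(cross_gate_ranks):
        paths = 1
        for r in cross_gate_ranks:
            paths *= r
        return paths

    # e.g. 10 general (rank-4) gates across the cut over the whole circuit:
    print(num_paths([4] * 10))  # 4**10 = 1,048,576 patch-pair simulations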

While it is more memory-efficient, SFA becomes exponentially more computationally expensive with increasing circuit depth due to the exponential growth of paths with the number of gates connecting the patches. To estimate the classical computational cost of the supremacy circuits (gray numbers, Fig. 4b), we ran portions of the quantum circuit simulation on both the Summit supercomputer as well as on Google clusters and extrapolated to the full cost. In this extrapolation, we account for the computational cost scaling with F_XEB, e.g. the 0.1% fidelity decreases the cost by a factor of roughly 1000 [43, 44]. On the Summit supercomputer, which is currently the most powerful in the world, we used a method inspired by Feynman path-integrals that is most efficient at low depth [44-47].

At m = 20 the tensors do not reasonably fit in node memory, so we can only measure runtimes up to m = 14, for which we estimate that sampling 3M bitstrings with 1% fidelity would require 1 year.

On Google Cloud servers, we estimate that performing the same task for m = 20 with 0.1% fidelity using the SFA algorithm would cost 50 trillion core-hours and consume 1 petawatt hour of energy. To put this in perspective, it took 600 seconds to sample the circuit on the quantum processor 3 million times, where sampling time is limited by control hardware communications; in fact, the net quantum processor time is only about 30 seconds. The bitstring samples from this largest circuit are archived online.

One may wonder to what extent algorithmic innovation can enhance classical simulations. Our assumption, based on insights from complexity theory, is that the cost of this algorithmic task is exponential in n as well as m. Indeed, simulation methods have improved steadily over the past few years [42-50].

We expect that lower simulation costs than reported here will eventually be achieved, but we also expect they will be consistently outpaced by hardware improvements on larger quantum processors.

VERIFYING THE DIGITAL ERROR MODEL

A key assumption underlying the theory of quantum error correction is that quantum state errors may be considered digitized and localized [38, 51]. Under such a digital model, all errors in the evolving quantum state may be characterized by a set of localized Pauli errors (bit-flips and/or phase-flips) interspersed into the circuit. Since continuous amplitudes are fundamental to quantum mechanics, it needs to be tested whether errors in a quantum system could be treated as discrete and probabilistic. Indeed, our experimental observations support the validity of this model for our processor. Our system fidelity is well predicted by a simple model in which the individually characterized fidelities of each gate are multiplied together (Fig. 4).

To be successfully described by a digitized error model, a system should be low in correlated errors. We achieve this in our experiment by choosing circuits that randomize and decorrelate errors, by optimizing control to minimize systematic errors and leakage, and by designing gates that operate much faster than correlated noise sources, such as 1/f flux noise [37]. Demonstrating a predictive uncorrelated error model up to a Hilbert space of size 2^53 shows that we can build a system where quantum resources, such as entanglement, are not prohibitively fragile.

WHAT DOES THE FUTURE HOLD?

Quantum processors based on superconducting qubits can now perform computations in a Hilbert space of dimension 2^53 ≈ 9×10^15, beyond the reach of the fastest classical supercomputers available today. To our knowledge, this experiment marks the first computation that can only be performed on a quantum processor. Quantum processors have thus reached the regime of quantum supremacy. We expect their computational power will continue to grow at a double exponential rate: the classical cost of simulating a quantum circuit increases exponentially with computational volume, and hardware improvements will likely follow a quantum-processor equivalent of Moore’s law [52, 53], doubling this computational volume every few years. To sustain the double exponential growth rate and to eventually offer the computational volume needed to run well-known quantum algorithms, such as the Shor or Grover algorithms [19, 54], the engineering of quantum error correction will have to become a focus of attention.

The “Extended Church-Turing Thesis” formulated by Bernstein and Vazirani [55] asserts that any “reasonable” model of computation can be efficiently simulated by a Turing machine. Our experiment suggests that a model of computation may now be available that violates this assertion.

We have performed random quantum circuit sampling in polynomial time with a physically realized quantum processor (with sufficiently low error rates), yet no efficient method is known to exist for classical computing machinery. As a result of these developments, quantum computing is transitioning from a research topic to a technology that unlocks new computational capabilities. We are only one creative algorithm away from valuable near-term applications.

Acknowledgments

We are grateful to Eric Schmidt, Sergey Brin, Jeff Dean, and Jay Yagnik for their executive sponsorship of the Google AI Quantum team, and for their continued engagement and support. We thank Peter Norvig for reviewing a draft of the manuscript, and Sergey Knysh for useful discussions.

We thank Kevin Kissel, Joey Raso, Davinci Yonge-Mallo, Orion Martin, and Niranjan Sridhar for their help with simulations. We thank Gina Bortoli and Lily Laws for keeping our team organized. This research used resources from the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

A portion of this work was performed in the UCSB Nanofabrication Facility, an open access laboratory.

Author contributions

The Google AI Quantum team conceived of the experiment. The applications and algorithms team provided the theoretical foundation and the specifics of the algorithm. The hardware team carried out the experiment and collected the data. The data analysis was done jointly with outside collaborators.

Citations:

[1]Feynman, R. P. Simulating physics with computers. Int. J. Theor. Phys. 21, 467-488 (1982).

[2]Devoret, M. H., Martinis, J. M. & Clarke, J. Measurements of macroscopic quantum tunneling out of the zero-voltage state of a current-biased Josephson junction. Phys. Rev. Lett. 55, 1908 (1985).

[3]Nakamura, Y., Chen, C. D. & Tsai, J. S. Spectroscopy of energy-level splitting between two macroscopic quantum states of charge coherently superposed by Josephson coupling. Phys. Rev. Lett. 79, 2328 (1997).

[4]Mooij, J. et al. Josephson persistent-current qubit. Science 285, 1036 (1999).

[5]Wallraff, A. et al. Strong coupling of a single photon to a superconducting qubit using circuit quantum electrodynamics. Nature 431, 162 (2004).

[6]Koch, J. et al. Charge-insensitive qubit design derived from the Cooper pair box. Phys. Rev. A 76, 042319 (2007).

[7]You, J. Q. & Nori, F. Atomic physics and quantum optics using superconducting circuits. Nature 474, 589 (2011).

[8]Preskill, J. Quantum computing and the entanglement frontier. Rapporteur talk at the 25th Solvay Conference on Physics, Brussels (2012).

[9]Aaronson, S. Certified randomness from quantum supremacy. In preparation.

[10]Hastings, M. B. Classical and quantum bounded depth approximation algorithms. arXiv:1905.07047 (2019).

[11]Kechedzhi, K. et al. Efficient population transfer via non-ergodic extended states in quantum spin glass. arXiv:1807.04792 (2018).

[12]Somma, R. D., Boixo, S., Barnum, H. & Knill, E. Quantum simulations of classical annealing processes. Phys. Rev. Lett. 101, 130504 (2008).

[13]McClean, J. R., Boixo, S., Smelyanskiy, V. N., Babbush, R. & Neven, H. Barren plateaus in quantum neural network training landscapes. Nat. Commun. 9, 4812 (2018).

[14]Cong, I., Choi, S. & Lukin, M. D. Quantum convolutional neural networks. arXiv:1810.03787 (2018).

[15]Bravyi, S., Gosset, D. & Konig, R. Quantum advantage with shallow circuits. Science 362, 308-311 (2018).

[16]Aspuru-Guzik, A., Dutoi, A. D., Love, P. J. & Head-Gordon, M. Simulated quantum computation of molecular energies. Science 309, 1704-1707 (2005).

[17]Peruzzo, A. et al. A variational eigenvalue solver on a photonic quantum processor. Nat. Commun. 5, 4213 (2014).

[18]Hempel, C. et al. Quantum chemistry calculations on a trapped-ion quantum simulator. Phys. Rev. X 8, 031022 (2018).

[19]Shor, P. W. Algorithms for quantum computation: discrete logarithms and factoring. In Proceedings, 35th Annual Symposium on Foundations of Computer Science (1994).

[20]Fowler, A. G., Mariantoni, M., Martinis, J. M. & Cleland, A. N. Surface codes: towards practical large-scale quantum computation. Phys. Rev. A 86, 032324 (2012).

[21]Barends, R. et al. Superconducting quantum circuits at the surface code threshold for fault tolerance. Nature 508, 500-503 (2014).

[22]Corcoles, A. D. et al. Demonstration of a quantum error detection code using a square lattice of four superconducting qubits. Nat. Commun. 6, 6979 (2015).

[23]Ofek, N. et al. Extending the lifetime of a quantum bit with error correction in superconducting circuits. Nature 536, 441 (2016).

[24]Boixo, S. et al. Characterizing quantum supremacy in near-term devices. Nat. Phys. 14, 595 (2018).

[25]Aaronson, S. & Chen, L. Complexity-theoretic foundations of quantum supremacy experiments. In 32nd Computational Complexity Conference (CCC 2017) (2017).

[26]Neill, C. et al. A blueprint for demonstrating quantum supremacy with superconducting qubits. Science 360, 195-199 (2018).

[27]Bremner, M. J., Montanaro, A. & Shepherd, D. J. Average-case complexity versus approximate simulation of commuting quantum computations. Phys. Rev. Lett. 117, 080501 (2016).

[28]Bouland, A., Fefferman, B., Nirkhe, C. & Vazirani, U. Quantum supremacy and the complexity of random circuit sampling. Preprint at https://arxiv.org/abs/1803.04402 (2018).

[29]See supplementary information.

[30]Vool, U. & Devoret, M. Introduction to quantum electromagnetic circuits. Int. J. Circ. Theor. Appl. 45, 897-934 (2017).

[31]Chen, Y. et al. Qubit architecture with high coherence and fast tunable coupling circuits. Phys. Rev. Lett. 113, 220502 (2014).

[32]Yan, F. et al. A tunable coupling scheme for implementing high-fidelity two-qubit gates. Phys. Rev. Applied 10, 054062 (2018).

[33]Schuster, D. I. et al. Resolving photon number states in a superconducting circuit. Nature 445, 515 (2007).

[34]Jeffrey, E. et al. Fast accurate state measurement with superconducting qubits. Phys. Rev. Lett. 112, 190504 (2014).

[35]Chen, Z. et al. Measuring and suppressing quantum state leakage in a superconducting qubit. Phys. Rev. Lett. 116, 020501 (2016).

[36]Klimov, P. V. et al. Fluctuations of energy-relaxation times in superconducting qubits. Phys. Rev. Lett. 121, 090502 (2018).

[37]Yan, F. et al. The flux qubit revisited to enhance coherence and reproducibility. Nat. Commun. 7, 12964 (2016).

[38]Knill, E. et al. Randomized benchmarking of quantum gates. Phys. Rev. A 77, 012307 (2008).

[39]Magesan, E., Gambetta, J. M. & Emerson, J. Scalable and robust randomized benchmarking of quantum processes. Phys. Rev. Lett. 106, 180504 (2011).

[40]Cross, A. W., Magesan, E., Bishop, L. S., Smolin, J. A. & Gambetta, J. M. Scalable randomised benchmarking of non-Clifford gates. NPJ Quantum Information 2, 16012 (2016).

[41]Wallraff, A. et al. Approaching unit visibility for control of a superconducting qubit with dispersive readout. Phys. Rev. Lett. 95, 060501 (2005).

[42]De Raedt, H. et al. Massively parallel quantum computer simulator, eleven years later. Comput. Phys. Commun. 237, 47-61 (2019).

[43]Markov, I. L., Fatima, A., Isakov, S. V. & Boixo, S. Quantum supremacy is both closer and farther than it appears. Preprint at https://arxiv.org/abs/1807.10749 (2018).

[44]Villalonga, B. et al. A flexible high-performance simulator for the verification and benchmarking of quantum circuits implemented on real hardware. Preprint at https://arxiv.org/abs/1811.09599 (2018).

[45]Boixo, S., Isakov, S. V., Smelyanskiy, V. N. & Neven, H. Simulation of low-depth quantum circuits as complex undirected graphical models. Preprint at https://arxiv.org/abs/1712.05384 (2017).

[46]Chen, J., Zhang, F., Huang, C., Newman, M. & Shi, Y. Classical simulation of intermediate-size quantum circuits. Preprint at https://arxiv.org/abs/1805.01450 (2018).

[47]Villalonga, B. et al. Establishing the quantum supremacy frontier with a 281 Pflop/s simulation. Preprint at https://arxiv.org/abs/1905.00444 (2019).

[48]Pednault, E. et al. Breaking the 49-qubit barrier in the simulation of quantum circuits. Preprint at https://arxiv.org/abs/1710.05867 (2017).

[49]Chen, Z. Y. et al. 64-qubit quantum circuit simulation. Sci. Bull. 63, 964-971 (2018).

[50]Chen, M.-C. et al. Quantum teleportation-inspired algorithm for sampling large random quantum circuits. Preprint at https://arxiv.org/abs/1901.05003 (2019).

[51]Shor, P. W. Scheme for reducing decoherence in quantum computer memory. Phys. Rev. A 52, R2493-R2496 (1995).

[52]Devoret, M. H. & Schoelkopf, R. J. Superconducting circuits for quantum information: an outlook. Science 339, 1169-1174 (2013).

[53]Mohseni, M. et al. Commercialize quantum technologies in five years. Nature 543, 171 (2017).

[54]Grover, L. K. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett. 79, 325 (1997).

[55]Bernstein, E. & Vazirani, U. Quantum complexity theory. In Proc. 25th Annual ACM Symposium on Theory of Computing (1993).

Quick and dirty i8080 emulator in BASH!

Based on the Toledo 8080 processor emulator, it’s Peter Naszvadi’s i8080.sh!

You will need bash version 4 for this to run. So for those of you in the 1990’s you are out of luck.

And to make this fun, the 2KB BASIC ‘ROM’ is bootstrapped into RAM for immediate execution!

$ ./8080.sh
Quick and dirty i8080 emulator by NASZVADI Peter, 2019.
Press Ctrl+C to quit anytime!
Initializing upper memory to NOPs
Initializing register values
Starting CPU emulation


OK
>10 PRINT "HIHI"
>RUN
HIHI

OK
>

So we’ve seen emulation in JavaScript of all things, but now you can run 8080 instructions from bash!

And for anyone even more crazy there is always BDS C, which was opened up back in 2002: a K&R C compiler for the 8080 on CP/M!

Macintosh Application Environment (Failure pt 1)

It’s a rainy day and I didn’t feel well enough to wander around outside and check out the ‘spyware street lights’, although that’s the latest rage at the moment along with the usual stuff.

Instead I thought it’d be fun to try some ‘undercover’ software like that time when Apple thought it would be a good idea to port the ToolBox to Unix and run it as a competitor to programs like SoftPC as people with $30,000 workstations clearly needed a virtual Macintosh.

So I spent way too much time letting my 2006 Mac Pro grind out a Solaris 2.6 install, and then using the latest Tenox dump on archive.org I installed MAE 2.0! For anyone wondering the actual serial number is: 192c4838330166201b

And after serializing the app, and running it, I get a nice kernel panic. I thought I was playing it safe by opting out of the kernel module stuff as I don’t care about reading MacOS formatted diskettes.

I guess I really need Solaris 2.4 and those weird patches.

Internet Explorer. For Unix. Really.

Meanwhile you can rest assured that Internet Explorer 5.0 loads up just fine. It is painfully slow, but there it is. I was even able to download stuff with my ‘http backdoor’ thing to vpsland. So at least that works.

Hard to imagine a time when Microsoft was going cross platform under Gates, but Internet Explorer 3/4/5 were available on MacOS & OS X, with 4 & 5 on Solaris and HP-UX. Then of course there was that SUN Java lawsuit, and they pulled the rug out from under interoperability. I was a Linux on SPARC user in this era, and having it available on Linux would have helped a LOT, but naturally Microsoft didn’t entertain that Linux market thing as it was cancer to them. Meanwhile Chromium Edge is only a Windows / OS X thing.

I was hoping to go further, but it appears not today.

AVGN reviews Chex Quest

Okay that was funny. I never thought of even trying Brutal DooM + Chex Quest. Sounds awesome.

Although I’d played a little with Chex Quest before, I never tried it on the DooM source. Oddly enough it’s Ultimate DooM. In d_main.c you can add a simple test for chex.wad and pass it off as the ‘retail’ version.

	/* If chex.wad is readable, register it and claim the 'retail'
	   (Ultimate DooM) gamemode so the engine treats it as that IWAD. */
	if ( !access (chex,R_OK) )
	{
		gamemode = retail;
		D_AddFile (chex);
		return;
	}

Or you can simply rename chex.wad to doom.wad or doomu.wad. Many of the strings for DooM are compiled into the EXE, not taken from the WAD file (although they could have been; I guess it was to prevent people from making overtly cheeky mods?).

So I fired up the WAD under my crappy DooM port to MS-DOS (sane people should just use some other Win64/OSX/Linux port like zdoom), and when selecting an episode you’ll see the Ultimate DooM levels.

Chex Quest, Thy Flesh Consumed!

Wait? What? I thought Chex Quest was a ‘total conversion’ WAD. Well it turns out that it is, and it isn’t. They replaced a lot of the default stuff from a retail version of Ultimate DooM. And what they didn’t replace, well, it’s still there. And yes, that means everything outside of the first 5 levels is the original DooM levels. And that includes the music as well!

E2M2 from Chex Quest

Well isn’t that surprising! And yes that means that it’s possible to just replace the first 5 levels with the default DooM levels and have that reverse conversion. In the same way the menu screens are very Chexy too:

It certainly gives it that kid-like feeling. Although the replacement Barons of Hell are a little too big, so they do look kind of silly.

E1M8

So as always I’m late to the party. I’m sure someone out there didn’t have the retail version of DooM and instead used Chex Quest for those LAN games. Although DooM does detect that the WAD has been modified, so I don’t think it would have worked all that smoothly.

Not that finding the original WAD files, or source to the maps, and just compiling them yourself is all that difficult, but I guess it is something else to load up.

How not to store optical media

Or how I finally broke down and bought that MkLinux book after all these decades. When I did own a PowerPC Mac as my daily driver it was an iMac back in 1999, and I ran OS X Server. I later bought a G4, only to find out that OS X didn’t support the G4. Linux had issues too, and I ended up running OpenBSD on the G4, which was fun, although for the ‘work at home’ bit I ended up needing Windows NT 4.0, so I ran that in OS 8 on SoftPC. Yay.

I didn’t have any luck with Linux on Power as MkLinux wanted the beige hardware, and by the time I felt like digging into Linux again, OS X had finally been ported to the G4 Sawtooths, so it really didn’t matter.

What secrets lie inside?

I’d seen this book in a store but it was pretty expensive, and geared to such a tiny market. Although Mach does compile on the i386, why they didn’t include that as well, to push Mach/Linux as a platform, is beyond me. Then again, looking at the stunning success of Darwin on i386/x86_64, I guess the reality is: why bother.

I ordered this on eBay, for the usual $5 plus $10 to ship, and it just showed up today! What mysteries lie on the CD-ROMs? I know others have posted stuff, but I wanted to hold them in my hands myself.

Uh-oh

I didn’t know if the CD-ROMs were included, and at first I thought I got lucky: not only were they included, neither had been opened before! These CDs had been packed away like this for the last 22.5 years! Now for the bad part.

Ink transfer

See that stupid leaflet in the back? Yeah, well, it turns out that it was a really stupid idea. No doubt this thing sat at the bottom of a stack for decades, where the ink was pressed for so long against the disc that it transferred to the surface.

HOW ANNOYING.

Gag me with a spoon…

So yes, they actually advertise the book, the very one I had bought, with some crap ink leaflet in the CD-ROM pouch, and it’s transferred to the disc.

Sigh.

I tried rubbing alcohol but that had no effect. I tried rubbing with a credit card, and it got a little off, but I fear I’m just going to damage the surface more.

I can only imagine what other CD-ROMs out there that haven’t been archived are sitting under hundreds or thousands of pounds of book weight, having nonsense imprinted onto them.

At least the second CD-ROM doesn’t suffer this defect and I’ll be uploading it later.

No book review yet, I’m just sitting here with this impacted CD.

UPDATE

Thanks to Shawn Novak for uploading the R3 images so I can at least pull up the compatible machines:

The latest Qemu can pull up the image fine; however, trying to boot it looks like the Mach kernel just isn’t compatible enough with the emulated Mac99 machine (which isn’t surprising).

I’ll need to mess with stuff to see if the G3beige machine can boot Linux on Qemu, and if BootX (I think it’s bootx?) can load the Mach kernel.

Slackware finally goes to Patreon

It’s long overdue, but helping to keep the oldest continuously maintained Linux distro afloat is now super easy to do!

Hi folks, Slackware founder Patrick Volkerding here. Welcome to the official page supporting the Slackware Linux Project. Slackware was founded in 1993 and made its first public releases that same year, making it the longest active Linux distribution project.

Today there are many Linux distributions available, but I’ve remained dedicated to this project as I believe it still holds an important place in the Linux ecosystem. Slackware is free, open source software that gives you everything you need to recompile the entire operating system and make any changes you like. The package system is constructed from shell scripts using standard utilities, and easily modified scripts are used throughout so that users may customize their operating system without having to recompile anything. Included software is changed very little (usually not at all) from the way the upstream developers intended. Einstein once said, “Everything should be made as simple as possible, but not simpler” — and when it comes to an operating system, trying to make things “easy” does not always achieve that goal. Anyone who has ever struggled with a package dependency system that doesn’t like the replacement of a system component knows what I’m talking about here. Slackware users know that once you’ve learned how to use the system, it’s easier to use than something that tries to put a GUI setup tool in front of the underlying text files that actually configure the system. And they know that Slackware can be trusted not to constantly change the way things work, so that your investment in learning Slackware lasts longer than it would with a system that’s a moving target. Slackware still builds upon a foundation of the latest development tools, libraries, and applications, and is supported by a vibrant user/developer community that welcomes new users and provides well-tested build scripts for nearly any software you may require.

Your support is greatly appreciated, and will make it possible for me to continue to maintain this project. Cheers!

For me, Slackware holds a special place, as just as soon as SLS became a thing it quickly fell into disrepair and rot. The final straw was trying to bootstrap that newfangled ELF thing (it’s almost impossible to find any talk of that COFF thing), and it was just an incredible amount of work. At that point, just getting Slackware on a shovelware CD-ROM set was nothing short of incredible. And it was obvious at the time of ELF that the whole ‘I can’t xyz with little to no effort, therefore dump that distro and try another one’ attitude was born. And Linux never recovered. Look no further than the insane cruft and required infrastructure for Docker to just load an application.

That said, when building a lot of crap from source it’s sometimes nice to have something that’ll just get out of the way, and Slackware does a reasonable job of that. Although the last time I really fought with Slackware 13.37 it really didn’t go so well. But at the same time I am willing to provide monetary support to ensure the future of Slackware.

The project currently stands at 230 supporters, which is pretty good; when it comes to supporting things, few will actually put their money into it, so hopefully this will allow Slackware to continue to exist, and maybe even flourish. I for one find this anti-systemd distro important for the sake of software diversity in the ‘me too Debian’ sea of clones out there, just as supporting OpenBSD is important from the standpoint of security.

You too can support Patrick over on patreon.com.

Ion Fury released!

So I’ve been busy with the day job stuff, and although I had ordered this I haven’t had time to play Ion Fury until today.

And yes, I’m sure you’ve seen video of it all over, and it’s so out of place, as it squarely belongs in 1997. And it’s great. The musical oddity is using samples much like a MOD, instead of full synth or CD audio. But that’s all the more era appropriate: with a 90MB data file I’m sure it would have shipped on CD-ROM, although I guess in the day it’d have had audio tracks that would be quickly lost to the mists of time, like the Quake 1/2 soundtracks.

I’m absolute crap, but it was fun enough to play. Although I did get ‘lost’ in the fan service, I didn’t realize I could make those jumps between the giant fans in the ventilation area.

So yes, this being a 3-D Realms experience means it’s not on rails. And the maps feel absolutely massive, and just look so great.

$109 HKD or $13.89 USD!

And it’s pretty cheap too! Although I think that is the local pricing for Hong Kong. For just under HK$110 it’s a steal.

From the sales blurb:

· While Shelly “Bombshell” Harrison earned her codename defusing bombs for the Global Defense Force, when evil transhumanist mastermind Dr. Jadus Heskel unleashes the members of his cybernetic cult onto the streets of Neo DC, she knows it’s time to start causing explosions instead of preventing them.
· Her quest to slay Heskel leaves a trail of carnage throughout huge, multi-path levels filled with gigantic explosions, more secret areas than we can count, and inhuman foes behind every corner. There’s no regenerating health here; stop taking cover and start running and gunning.
· Shelly’s crusade to take down Heskel’s army will see her leave destruction in her wake with a wide arsenal of weapons, complete with alternate fire modes and different ammo types. Her signature revolver, the tri-barreled Loverboy, brings enemies pain and players pleasure with both single shots and Old West-style hammer fanning action. Who needs a regular shotgun when you can load buckshot into your grenade launcher? Violent, over-the-top Bowling Bombs rip enemies apart with ease.
· Ion Fury laughs at the idea of mandatory checkpoints and straight paths through shooting galleries. But, just because this is a true old-school first-person shooter doesn’t mean we’ve ignored all the good new stuff the last two decades have brought. Headshots? Hell yeah. More physics and interactivity? You betcha. Widescreen, controller support, and Auto Saves? 3D Realms and Voidpoint have taken the best of both worlds and cooked it all into a bloody stew.
· The true successor to classic shooters such as Duke Nukem 3D, Shadow Warrior, and Blood. Experience the original BUILD engine on steroids, pumped up and ready to rock again after 20 years! Duck, jump, climb, swim, and blast your way through 7 exciting zones packed with multiple levels of mayhem!
· A beautiful game world assembled from thousands of hand-crafted textures and sprites
· An awe-inspiring arsenal of devastating weapons, including multiple ammo types and alternate fire modes. Tri-barreled revolvers, grenade launchers, and heat-seeking smart bombs are your best friends!
· The classic ’90s FPS action you love, meshed with modern inventions like locational damage and seamless level transitions
· ZERO PROCEDURAL GENERATION. All levels are hand made and full of multiple paths, cool effects, and complex set pieces!
· Thumping soundtrack comprised of true tracked module music, in authentic FastTracker 2 format
· Easy access to the level editor and other development tools on day one, plus Steam Workshop support

For any Duke/Blood/Nam/TekWar fans, really, how could you pass Ion Fury by?

As goes Linux, so goes Microsoft

It’s entirely expected, as print media is a dead artifact: on 8/7 it was announced that Linux Journal would cease publication.

And then on 8/8 it was announced that MSDN magazine would also cease publication.

In the 90’s both magazines played an important role in their respective communities of developers and enthusiasts. I suppose it’s only fitting that they both passed away a day apart.

Both trade publications have been sources of pain with ‘new and exciting tech’ (that isn’t quite ready), but seeing their demise seems to just relegate us back to the world of word of mouth, which served us poorly in the 80’s and before.

There is something authoritative to be said for putting word to print.