
    The Real Impact of Quantum Supremacy on AI and Machine Learning

    Quantum supremacy—the moment when a programmable quantum device outperforms the best classical supercomputer on a specific computational task—has shifted from a theoretical milestone to a practical signpost for what’s coming next. For practitioners in artificial intelligence and machine learning, the implications are profound. Not because quantum computers will suddenly replace GPUs and TPUs, but because supremacy-style demonstrations point to concrete classes of problems where quantum methods change the cost curve: sampling, optimization, simulation, and certain linear-algebraic subroutines that appear everywhere in modern AI. This article unpacks the impact of quantum supremacy on artificial intelligence and machine learning, what it means for day-to-day engineering, and how to prepare an organization to capture value—responsibly and measurably.

    Who this is for: ML engineers, data scientists, research leads, CTOs/CISOs, and technical product managers who need a realistic, actionable roadmap rather than hype.

    What you’ll learn: Where supremacy-era results connect to real AI/ML workloads; how to build hybrid pipelines that take advantage of quantum subroutines; how to structure experiments, track KPIs, avoid common pitfalls, and plan a four-week starter sprint.


    Key takeaways

    • Supremacy is a signal, not a solution. It proves there exist tasks for which quantum devices beat classical computers; it does not mean today’s models train faster end-to-end.
    • AI benefits first through subroutines. The most direct impacts are in sampling, combinatorial optimization, kernels, and specialized simulation—components that slot into existing ML pipelines.
    • Hybrid is the near-term reality. Noisy intermediate-scale (NISQ) devices pair with classical accelerators; the art is deciding what to send to quantum hardware and how to measure the gain.
    • Error correction is the bridge. Demonstrations that larger logical codes reduce error point toward scalable, fault-tolerant systems—critical for reliable AI use cases.
    • Security is part of the impact. As standards for post-quantum cryptography finalize, AI/ML stacks need crypto-agility to protect models, data, and MLOps pipelines.
    • A disciplined playbook beats hype. Treat quantum like any new accelerator: pick use cases with measurable KPIs, prototype on simulators, run controlled trials, and retire what doesn’t pay.

    Understanding quantum supremacy, advantage, and utility for AI/ML

    What it is and why it matters.
    Supremacy marks a clear experimental separation: a quantum processor completes a well-defined task faster than any feasible classical method. Related terms you’ll encounter:

    • Advantage: a quantum method performs a useful task better (speed, cost, or accuracy) than the best classical approach.
    • Utility: a quantum method that integrates into real processes and yields repeatable business or research value.

    For AI/ML, supremacy serves as a compass: it identifies computational structures (e.g., random-circuit sampling or photonic boson sampling) with properties that map to subproblems in generative modeling, optimization, and probabilistic inference. It tells us where to look for leverage.

    Requirements/prerequisites.

    • Familiarity with linear algebra, probability, and optimization.
    • Access to a quantum simulator and a cloud backend for small real-device runs.
    • A reproducible ML environment (containerized) and experiment tracker.

    Step-by-step (beginner-friendly).

    1. Inventory algorithmic hotspots. Profile training and inference to locate expensive steps: sampling, kernel evaluation, combinatorial search, or physics-based simulation.
    2. Map to quantum motifs. Align hotspots with quantum motifs: circuit-based sampling, variational optimization, kernel methods, or Hamiltonian simulation.
    3. Prototype on a simulator. Build tiny benchmarks replicating the hotspot with a quantum subroutine.
    4. A/B with classical baselines. Compare wall-clock, energy, and sample efficiency at equal accuracy.
    5. Gate to hardware. If simulation looks promising, run a reduced instance on real devices to assess queueing, error rates, and cost.
    6. Decide-and-scale. Promote to a larger internal pilot only if KPI improvements survive end-to-end measurement.
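As a minimal sketch of steps 4 and 5, the A/B comparison and the gate-to-hardware decision can be automated. The two workloads below are toy stand-ins (not real circuits), and the `min_speedup` threshold is an illustrative choice, not a standard:

```python
import time

def benchmark(fn, n_trials=5):
    """Run fn n_trials times and return the best wall-clock time in seconds."""
    best = float("inf")
    for _ in range(n_trials):
        t0 = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - t0)
    return best

def gate_to_hardware(classical_s, quantum_sim_s, min_speedup=1.2):
    """Promote to a real-device trial only if the simulated quantum
    subroutine beats the classical baseline by at least min_speedup."""
    speedup = classical_s / quantum_sim_s
    return speedup >= min_speedup, speedup

# Toy stand-ins for a classical hotspot and a (cheaper) quantum-sim variant
classical = benchmark(lambda: sum(i * i for i in range(50_000)))
quantum_sim = benchmark(lambda: sum(i * i for i in range(10_000)))
promote, speedup = gate_to_hardware(classical, quantum_sim)
```

The point is the gate, not the workloads: promotion to hardware is a measured decision, never a default.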

    Beginner modifications and progressions.

    • Start with toy datasets and shallow circuits, then increase feature dimension or circuit depth.
    • Use classical surrogates (e.g., approximate kernels) to bound expected gains.

    Recommended frequency/metrics.
    Measure speedup, energy per training example, wall-clock per epoch, and accuracy at equal compute budgets. Re-evaluate every quarter as classical simulators and compilers improve.

    Safety, caveats, and mistakes to avoid.

    • Mistake: equating supremacy with immediate production readiness.
    • Caveat: classical simulation and algorithmic tricks often erode claimed speedups.
    • Safety: avoid encoding sensitive data into quantum states you cannot reliably scrub; keep privacy reviews standard.

    Mini-plan (example).

    • Week 1: Profile a reinforcement learning sampler bottleneck.
    • Week 2: Replace sampler with a circuit-based sampler on a simulator; log accuracy/time.
    • Week 3: Trial a small hardware job; compare queueing and fidelity impact.

    Where quantum supremacy touches AI/ML workloads

    What it is and core benefits.
    Supremacy-style experiments highlight problem families with latent advantage for AI:

    1. Sampling for generative models (e.g., drawing from complex distributions).
    2. Combinatorial optimization (feature selection, neural architecture search, scheduling).
    3. Kernel methods (implicit high-dimensional feature maps).
    4. Linear–algebraic routines (certain structured solves).
    5. Simulation (materials, chemistry) useful for model-based RL or scientific ML.

    Requirements/prerequisites.

    • Modular pipelines (so you can swap a subroutine).
    • Access to batch job APIs for quantum runs and a simulator.
    • Budget for small hardware experiments (minutes, not hours).

    Step-by-step (beginner).

    1. Pick one workload (e.g., binary classification with a kernel SVM).
    2. Implement a classical baseline with thorough validation.
    3. Implement a quantum-parameterized kernel or variational model on a simulator.
    4. Tune depth/ansatz to avoid overfitting and unstable gradients.
    5. Compare test accuracy vs. compute cost at fixed budgets.
    6. If promising, run a minimal hardware job to confirm feasibility.

    Beginner modifications/progressions.

    • Start with a toy kernel/QNN on 4–8 features; move to 16–32 only if gradients remain trainable.
    • For optimization, begin with MaxCut on small graphs before real feature-selection tasks.
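On small graphs the exact MaxCut optimum is cheap to compute classically, which gives the baseline any quantum optimizer must beat. A brute-force sketch (function name illustrative; fine up to roughly 20 nodes):

```python
from itertools import product

def maxcut_bruteforce(n_nodes, edges):
    """Exact MaxCut by enumerating all 2^n bipartitions: the classical
    baseline a quantum optimizer must beat on small instances."""
    best_cut, best_assign = -1, None
    for assign in product([0, 1], repeat=n_nodes):
        cut = sum(1 for u, v in edges if assign[u] != assign[v])
        if cut > best_cut:
            best_cut, best_assign = cut, assign
    return best_cut, best_assign

# 4-node ring: alternating the partition cuts all 4 edges
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut, assign = maxcut_bruteforce(4, edges)  # cut == 4
```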

    Recommended frequency/metrics.

    • KPIs: validation accuracy at equal FLOPs, negative log-likelihood per unit time, regret in RL per wall-clock, or cost-to-target in optimization.
    • Update benchmarks each release of compilers/simulators.

    Safety and common mistakes.

    • Don’t overlook data-loading costs into quantum states.
    • Avoid deep, randomly initialized circuits on many qubits (training may stall).
    • Confirm reproducibility with fixed seeds and circuit transpilation settings.

    Mini-plan (example).

    • Step 1: Train a classical SVM baseline; log AUC and latency.
    • Step 2: Train a tiny quantum-enhanced kernel on the same folds in a simulator.
    • Step 3: If AUC improves at similar compute, schedule a hardware run of 100–500 shots per batch to validate noise behavior.

    Hybrid quantum–classical machine learning in practice

    What it is and core benefits.
    Hybrid algorithms pair a parameterized quantum circuit (state preparation + ansatz) with a classical optimizer. Two families dominate near-term use:

    • Variational eigensolvers/optimizers: routines that minimize a cost function relevant to chemistry or optimization.
    • Quantum classifiers and kernels: circuits implement feature maps or decision functions.

    Benefits include access to large implicit feature spaces and stochastic sampling useful for certain generative or optimization tasks.

    Requirements/prerequisites.

    • Access to a quantum circuit simulator, transpiler, and a cloud backend.
    • An optimizer with robust hyperparameters (e.g., SPSA, Adam variants).
    • Logging and checkpointing to handle non-deterministic noise.

    Step-by-step (beginner).

    1. Choose a small ansatz tailored to the task (problem-inspired if possible).
    2. Select an encoding (angle, basis, or amplitude) that matches your data scale.
    3. Assemble a hybrid loop: forward pass on quantum device/simulator, classical loss and gradient step, repeat.
    4. Add gradient-estimation batching and shot-frugal settings.
    5. Use early stopping based on validation curves.
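The loop in steps 1 to 3 can be sketched end-to-end with a one-qubit toy. The "quantum forward pass" here is an analytic stand-in (RY(θ)|0⟩ gives ⟨Z⟩ = cos θ), but the structure, circuit evaluation, parameter-shift gradient, classical update, is the real hybrid pattern:

```python
import math

def expval_z(theta):
    """<Z> after RY(theta)|0> on one qubit: analytically cos(theta).
    Stands in for a forward pass on a device or simulator."""
    return math.cos(theta)

def parameter_shift_grad(theta):
    """Gradient via the parameter-shift rule (two circuit evaluations)."""
    s = math.pi / 2
    return (expval_z(theta + s) - expval_z(theta - s)) / 2

theta, lr = 0.1, 0.4
for _ in range(100):                    # classical optimizer loop
    theta -= lr * parameter_shift_grad(theta)
energy = expval_z(theta)                # approaches -1 as theta -> pi
```

On hardware each `expval_z` call costs shots, which is why shot-frugal settings (step 4) matter.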

    Beginner modifications/progressions.

    • Begin with shallow depth and local connectivity; gradually increase only if gradients remain healthy.
    • Introduce regularization terms to stabilize training.

    Recommended frequency/metrics.

    • Track gradient norms, training loss variance across seeds, and shots consumed.
    • Monitor accuracy vs. cost per epoch.

    Safety, caveats, and mistakes.

    • Watch for “barren plateaus” (vanishing gradients) in deep or unstructured circuits; mitigate with smart initialization, locality constraints, or problem-informed ansätze.
    • Noise can induce plateaus; keep circuits shallow and use error-mitigation techniques when available.

    Mini-plan (example).

    • Step 1: Train a small variational classifier on a 2D toy dataset with 2–4 qubits.
    • Step 2: Increase to 8–12 features with a slightly deeper ansatz; monitor gradients.
    • Step 3: Compare against a classical baseline; retain only if the hybrid model beats it under equal compute or energy budgets.

    Data, features, and encodings for quantum models

    What it is and core benefits.
    Encoding classical data into quantum states is the gateway to any quantum-assisted ML. Common strategies:

    • Angle encoding: map features to rotation angles; simple and hardware-friendly.
    • Basis encoding: represent binary features directly as computational basis states.
    • Amplitude encoding: pack many features into amplitudes; expressive but costly and sensitive.
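For intuition, angle encoding of a product state can be written out directly. This is a plain-Python sketch (the π scaling of features is one illustrative convention, not the only one):

```python
import math

def angle_encode(features):
    """Angle encoding: each feature (scaled to [0, 1]) becomes an RY angle
    on its own qubit; the joint state is the tensor product of the
    single-qubit states, so n features use n qubits."""
    state = [1.0]                                   # scalar amplitude 1
    for x in features:
        theta = math.pi * x                         # illustrative scaling
        q = [math.cos(theta / 2), math.sin(theta / 2)]
        state = [a * b for a in state for b in q]   # tensor product
    return state

amps = angle_encode([0.0, 1.0])   # amplitudes of |01>: second entry ~1
```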

    Requirements/prerequisites.

    • Feature scaling and normalization utilities.
    • A clear budget for circuit depth and number of qubits.
    • An understanding of shot noise (number of measurements).

    Step-by-step (beginner).

    1. Start with angle encoding; standardize features to stable ranges.
    2. Limit the number of entangling layers; prefer local connectivity.
    3. Use cross-validation to tune depth; stop when validation loss stops improving.
    4. If needed, consider feature hashing to fit within qubit limits.

    Beginner modifications/progressions.

    • Try data re-uploading (multiple encoding layers) for expressivity without exploding depth.
    • Explore kernelized approaches when direct encodings get too large.

    Recommended frequency/metrics.

    • Measure accuracy vs. shots; aim to reduce shots via variance-reduction.
    • Track gradient norms to detect plateaus early.
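The shots-versus-accuracy tradeoff follows from measurement statistics: the standard error of an expectation estimate shrinks like 1/√shots. A quick simulation of the estimator variance:

```python
import random
import statistics

def estimate_expval(p_up, shots, rng):
    """Estimate <Z> from finite measurements: each shot is +1 with
    probability p_up, else -1. The true value is 2*p_up - 1; the
    estimator's error shrinks like 1/sqrt(shots)."""
    total = sum(1 if rng.random() < p_up else -1 for _ in range(shots))
    return total / shots

rng = random.Random(0)
true_val = 2 * 0.8 - 1   # 0.6
few = [estimate_expval(0.8, 100, rng) for _ in range(200)]
many = [estimate_expval(0.8, 2000, rng) for _ in range(200)]
# more shots per estimate -> lower variance across repeated estimates
```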

    Safety and mistakes.

    • Over-encoding leads to untrainable circuits; more features are not always better.
    • Keep sensitive attributes out of encodings unless privacy approvals exist.

    Mini-plan (example).

    • Step 1: Normalize 8 features; angle-encode over 4–6 qubits.
    • Step 2: Add one entangling layer; tune a single depth hyperparameter.
    • Step 3: Evaluate shots from 100 to 2,000 and pick the best cost-accuracy tradeoff.

    Infrastructure and cost planning for quantum-enhanced ML

    What it is and core benefits.
    Quantum compute arrives through cloud APIs. The practical path is: prototype on simulators, then send the smallest meaningful jobs to hardware to verify noise assumptions and cost models.

    Requirements/prerequisites.

    • Cloud credentials for simulators and limited hardware access.
    • A thin service layer to submit circuits, manage queues, and collect results.
    • Experiment tracking with per-run cost capture.

    Step-by-step (beginner).

    1. Sim first: validate functionality and performance bounds on a simulator.
    2. Batch wisely: schedule small batched jobs (few circuits, few shots) to sample real noise.
    3. Instrument costs: log per-job time, queue delay, success rates, and spend.
    4. Automate fallbacks: revert to simulator if queues spike or device calibrations drift.
    5. Create a change window: plan hardware runs during stable calibration periods.
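The fallback rule in steps 4 and 5 is easy to make explicit. A sketch with illustrative thresholds (the function name and cutoffs are assumptions, not a provider API):

```python
def choose_backend(queue_minutes, device_fidelity,
                   max_queue=30, min_fidelity=0.97):
    """Use real hardware only inside a stable calibration window with
    acceptable queueing; otherwise fall back to the simulator."""
    if queue_minutes <= max_queue and device_fidelity >= min_fidelity:
        return "hardware"
    return "simulator"
```

Wiring this into the job-submission layer makes the fallback automatic rather than a manual judgment call.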

    Beginner modifications/progressions.

    • Add adaptive shot allocation: fewer shots when gradients are large; more near convergence.
    • Explore multi-backend redundancy to avoid single-provider bottlenecks.
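Adaptive shot allocation can be as simple as an inverse rule on the gradient norm. The rule and constants below are hypothetical, meant only to show the shape of the policy:

```python
def allocate_shots(grad_norm, min_shots=100, max_shots=2000, scale=50.0):
    """Hypothetical allocation rule: spend few shots while gradients are
    large (noisy estimates still point the right way), more shots near
    convergence when precision matters."""
    if grad_norm <= 0:
        return max_shots
    shots = int(scale / grad_norm)
    return max(min_shots, min(max_shots, shots))

early = allocate_shots(1.0)    # large gradient -> floor of 100 shots
late = allocate_shots(0.01)    # near convergence -> capped at 2000
```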

    Recommended frequency/metrics.

    • Cost per training epoch, cost per percent of accuracy gained, job success rate, and queue time.
    • Device fidelity metrics as a gating condition before runs.

    Safety and mistakes.

    • Don’t run sensitive workloads on shared backends without a data classification review.
    • Avoid long jobs early; prioritize small, information-dense experiments.

    Mini-plan (example).

    • Step 1: Set a monthly budget and a per-job cap.
    • Step 2: Automate a daily 10-minute calibration probe (synthetic circuit) to monitor drift.
    • Step 3: Only green-light larger jobs if probe metrics beat thresholds.

    Governance, risk, and security implications for AI teams

    What it is and core benefits.
    Supremacy-era progress accelerates the need for post-quantum cryptography in AI/ML pipelines. Model checkpoints, feature stores, and data lakes often use cryptographic protocols that will be replaced or augmented with quantum-resistant algorithms over the next few years. Early planning reduces migration pain and protects long-lived data.

    Requirements/prerequisites.

    • Asset inventory of cryptographic use in MLOps.
    • A plan to test quantum-resistant key exchange and signatures in non-production environments.
    • Crypto-agility (ability to rotate algorithms and keys).

    Step-by-step (beginner).

    1. Inventory: list encryption and signature algorithms used in data ingestion, storage, model artifacts, and deployment.
    2. Prioritize: flag long-lived secrets and archives for early migration.
    3. Pilot: test standardized quantum-resistant algorithms in dev/staging.
    4. Measure: log performance overhead and compatibility issues.
    5. Rollout: phase migration with key rotation playbooks.
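Crypto-agility mostly means algorithms are selected by policy, not hard-coded. A configuration-level sketch (the registry layout and `select_policy` helper are hypothetical; the algorithm names ML-KEM-768 and ML-DSA-65 are the NIST-standardized quantum-resistant KEM and signature schemes):

```python
# Hypothetical crypto-agility registry: pipelines look up a named policy
# instead of hard-coding an algorithm, so rotation is a config change.
POLICIES = {
    "legacy":       {"kem": "ECDH-P256",            "sig": "ECDSA-P256"},
    "hybrid":       {"kem": "ECDH-P256+ML-KEM-768", "sig": "ECDSA-P256"},
    "post-quantum": {"kem": "ML-KEM-768",           "sig": "ML-DSA-65"},
}

def select_policy(env, rollout_stage):
    """Phase the migration (steps 3-5): staging pilots the hybrid mode
    first; production follows once overhead has been measured."""
    if env == "staging" or rollout_stage >= 2:
        return POLICIES["hybrid"]
    return POLICIES["legacy"]
```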

    Beginner modifications/progressions.

    • Start with non-customer, internal pipelines.
    • Use hybrid modes (classical + quantum-resistant) during transition.

    Recommended frequency/metrics.

    • Migration percentage by system, encryption coverage, signing coverage, and performance overhead.
    • Incident drill frequency for key compromise scenarios.

    Safety and mistakes.

    • Don’t delay inventory; “harvest-now-decrypt-later” risks mean today’s captured traffic can be attacked in the future.
    • Avoid bespoke cryptography; stick to published standards.

    Mini-plan (example).

    • Step 1: Replace a model registry’s key exchange with a standardized, quantum-resistant alternative in staging.
    • Step 2: Run load tests; quantify latency overhead.
    • Step 3: Schedule phased production rollout with rollback procedures.

    Research roadmap: error correction, scaling, and realistic timelines

    What it is and core benefits.
    Error correction is the hinge between clever demos and dependable AI acceleration. Experiments have shown that larger surface-code logical qubits can suppress errors and that certain protected encodings can surpass break-even, strengthening the case for scalable, fault-tolerant computation. Photonic and superconducting supremacy-style results continue to refine where quantum methods may pay off first.

    Requirements/prerequisites.

    • A watchlist of peer-reviewed results in error correction, boson sampling, and random-circuit sampling.
    • Internal “translation memos” that map new results to AI subroutines you care about.

    Step-by-step (beginner).

    1. Track milestones where logical error decreases as code distance increases.
    2. Note when classical simulators narrow performance gaps; update ROI assumptions.
    3. Maintain a backlog of candidate AI tasks to retest when hardware improves.

    Beginner modifications/progressions.

    • Prototype “algorithmic error correction” (noise-aware training) for variational models.
    • Keep a small, evergreen benchmark suite to rerun each quarter.

    Recommended frequency/metrics.

    • Logical error per cycle, shots required per gradient estimate, and incidents where classical simulation catches up.
    • Time-to-solution at equal accuracy across releases.

    Safety and mistakes.

    • Don’t lock budget forecasts to a single roadmap; diversify across modalities (superconducting, trapped-ion, photonic).
    • Avoid assuming speedup persists once classical simulation advances.

    Mini-plan (example).

    • Step 1: Rerun your sampler benchmark after every major device or compiler update.
    • Step 2: Log energy per sample and accuracy.
    • Step 3: Green-light scale-up only if both improve beyond thresholds.

    Implementation playbook: from POC to production

    What it is and core benefits.
    A structured pathway to evaluate quantum subroutines as accelerators within existing ML lifecycle stages—without derailing delivery timelines.

    Requirements/prerequisites.

    • A sandbox environment; CI for circuit tests; contracts for limited hardware time.
    • Cross-functional squad: ML, optimization, security, and reliability.

    Step-by-step (beginner).

    1. Scouting: pick one subroutine (e.g., kernel evaluation or sampler) and one metric (e.g., wall-clock per training epoch).
    2. Baseline: build or reuse the best classical implementation.
    3. Quantum variant: reproduce the same API with a quantum backend (simulator first).
    4. Experiment: run matched trials; log compute, energy, and accuracy.
    5. Pilot: if promising, run a small hardware-backed pilot with budget caps.
    6. Decision: productize, park, or retire.
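The decision in step 6 works best as an explicit gate rather than a meeting. A sketch with illustrative thresholds (all names and cutoffs are assumptions to be tuned per organization):

```python
def gate_decision(rel_kpi_gain, job_success_rate,
                  cost_per_point, budget_per_point):
    """Go/no-go gate for step 6: productize only when KPI gain,
    reliability, and cost-per-gain all clear their bars (illustrative
    thresholds: 5% gain, 95% job success, within cost budget)."""
    if (rel_kpi_gain >= 0.05 and job_success_rate >= 0.95
            and cost_per_point <= budget_per_point):
        return "productize"
    if rel_kpi_gain > 0:
        return "park"   # promising; retest after the next device/compiler update
    return "retire"
```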

    Beginner modifications/progressions.

    • Run “dark launches” where a quantum subroutine computes in parallel but outputs aren’t used, purely to measure reliability.
    • Introduce cost-aware early stopping.

    Recommended frequency/metrics.

    • Adoption rate (number of pipelines using quantum subroutines), cost per 1% accuracy gain, failure/timeout rates, and queueing delays.

    Safety and mistakes.

    • Avoid one-off hero experiments; require reproducible benchmarks.
    • Do not expand scope without meeting go/no-go gates.

    Mini-plan (example).

    • Step 1: Replace an inner-loop optimizer in a meta-learning experiment with a variational routine.
    • Step 2: Freeze other variables; run 10 matched seeds.
    • Step 3: Decide based on cost-adjusted accuracy and stability.

    Quick-start checklist

    • Identify one hotspot (sampling, kernel, or combinatorial search).
    • Build a strong classical baseline and freeze it.
    • Prototype a tiny quantum subroutine on a simulator; keep circuits shallow.
    • Define one KPI and a strict budget.
    • Run a minimal hardware job to validate assumptions.
    • Log everything (seeds, shots, transpilation settings, queue times).
    • Decide: productize, park, or retire.

    Troubleshooting and common pitfalls

    • Training stalls (flat loss). Reduce circuit depth; switch to problem-informed ansätze; try alternative initialization; restrict entanglement locality.
    • Noise overwhelms signal. Increase shots modestly, then look at error-mitigation strategies; keep depth minimal.
    • Data doesn’t fit qubit budget. Use feature selection or hashing; consider kernel approaches that avoid explicit high-dimensional encodings.
    • Classical catches up. Re-run with the latest classical simulators/solvers; if the gap closes, retire or pivot.
    • Costs spike. Batch small jobs, schedule during low-queue windows, and enforce per-job caps.
    • Security uncertainty. Start crypto-agility pilots in non-production systems; prioritize long-lived secrets.

    How to measure progress or results

    • Time-to-target: seconds to reach a fixed validation score or regret threshold.
    • Energy-to-target: watt-hours per training objective point.
    • Sample efficiency: shots or iterations per fixed accuracy.
    • Reliability: job success rate, calibration drift sensitivity, variance across seeds.
    • Cost-efficiency: currency per 1% relative accuracy gain versus the baseline.
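Two of these metrics are simple enough to compute inline. A sketch of time-to-target and cost per 1% relative accuracy gain (helper names are illustrative):

```python
def time_to_target(history, target):
    """Seconds until the validation score first reaches target;
    None if it never does. history is a list of (seconds, score)."""
    for seconds, score in history:
        if score >= target:
            return seconds
    return None

def cost_per_point(spend, baseline_acc, new_acc):
    """Currency per 1% relative accuracy gain versus the baseline;
    None when there is no gain to pay for."""
    gain_pct = 100 * (new_acc - baseline_acc) / baseline_acc
    return None if gain_pct <= 0 else spend / gain_pct

history = [(10, 0.70), (20, 0.81), (30, 0.86)]
ttt = time_to_target(history, 0.80)        # -> 20
cpp = cost_per_point(50.0, 0.80, 0.84)     # 5% relative gain -> ~10.0
```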

    A simple 4-week starter plan (AI/ML team)

    Week 1 — Scoping & baselines

    • Pick one pipeline hotspot (e.g., a sampler in a reinforcement learning loop).
    • Freeze a classical baseline with full profiling and experiment tracking.
    • Define KPIs: time-to-target and energy-to-target.

    Week 2 — Simulator prototypes

    • Implement a shallow, parameterized circuit as a drop-in sampler.
    • Run matched experiments across 10 seeds; log gradient norms and shots.
    • Decide whether performance is potentially competitive.

    Week 3 — Hardware shakedown

    • Submit a minimal hardware job (e.g., 100–500 shots per batch) during a low-queue window.
    • Record queue time, calibration stability, and output variance.
    • Compare cost-adjusted KPIs to the simulator and baseline.

    Week 4 — Decision & roadmap

    • If KPIs improve meaningfully, plan a pilot with a larger dataset and a fixed budget.
    • If not, document lessons, park the approach, and select a new hotspot (e.g., a kernel method or a small optimization).

    FAQs

    1) Does quantum supremacy mean my current models will train faster now?
    No. Supremacy validates tasks where quantum wins in principle; practical gains arrive first through specific subroutines and careful hybrid designs.

    2) What types of ML tasks are most likely to benefit first?
    Sampling-heavy generative methods, certain kernel-based classifiers, combinatorial optimization inside AutoML or feature selection, and simulation-informed ML.

    3) Are today’s devices reliable enough for production?
    They’re best for research and controlled pilots. Noise, queueing, and cost require guardrails. Error-corrected systems are advancing but not yet routine at scale.

    4) How do I know if a quantum subroutine is worth it?
    Hold the rest of the pipeline constant and compare cost-adjusted KPIs (time-to-target, energy-to-target, accuracy at fixed compute) against a top-tier classical baseline.

    5) What’s the risk that classical algorithms will erase any advantage?
    It’s real. Classical simulators and solvers improve continuously. That’s why re-benchmarking and strong baselines are essential before scaling up.

    6) How many qubits do I need to see an effect?
    For experiments, a handful to a few dozen can demonstrate behavior on toy problems. Real advantages for sizable ML tasks will demand larger, better-corrected systems.

    7) Do I need to rewrite my whole stack?
    No. Treat quantum as a new accelerator. Build adapters so quantum routines slot into existing training loops, evaluators, and model registries.

    8) What about security—does supremacy endanger my data or models?
    It accelerates the timeline for adopting quantum-resistant cryptography across AI pipelines. Start with inventory and pilots; focus on long-lived secrets.

    9) Aren’t variational circuits hard to train?
    They can be. Use shallow, problem-structured ansätze, smart initialization, and regularization to avoid vanishing gradients.

    10) Is photonic or superconducting hardware “better” for ML?
    Both have strengths. Photonics has showcased strong sampling tasks; superconducting platforms lead in circuit programmability and error-correction milestones. Keep an open, evidence-driven stance.

    11) Can optimization problems inside AutoML benefit?
    Possibly—small instances of combinatorial subproblems are good candidates. Measure rigorously; don’t assume universal speedups.

    12) How soon until fault-tolerant systems impact everyday ML?
    Milestones in error correction suggest steady progress, but timelines remain uncertain. Build capability now with pilots and crypto-agility; scale when metrics justify it.


    Conclusion

    Supremacy-era results don’t hand AI a magic button—but they do reveal where to push. For teams that treat quantum like any accelerator—prototype small, measure ruthlessly, and harden only what wins—hybrid AI workflows can start producing insight today while laying the groundwork for tomorrow’s error-corrected systems. The winners won’t be those who chase headlines; they’ll be those who build disciplined pipelines that can absorb quantum gains the moment they turn real.

    Call to action: Pick one hotspot in your pipeline, swap in a quantum subroutine on a simulator this week, and measure—if the numbers sing, take the next step.


    Amy Jordan
