Ayming Institute: the think tank of the Ayming Group.
The Ayming Institute (AI) aims to help leaders in the private and public sector gain a deeper understanding of the evolving global economy by focusing on three areas.
The first area is sustainability. We believe that the environment and social responsibility are critical issues for businesses today. For this reason, our content aims to help companies integrate these issues into the way they make decisions.
The second area is business development. Through our content, we wish to help companies to develop a stronger business culture and a sustainable approach to growth.
The third area is the people side of the business. With our content, we want to support individuals as they navigate their careers, learn new skills, and find ways to contribute in a world that is constantly changing.
Our strongest commitment is to help organizations better understand how markets are changing, and how they can build better businesses as a result. We aim to do this by providing analysis of the global economy’s transformation, sharing our insights through thought-provoking publications, and engaging business leaders in conversations about the economic changes that are affecting all of us.
Executive Summary
A shared feature of diverse transformative technologies is that even their authors, no matter how visionary, will not foresee the full extent of their disruptive effects.
Although artificial intelligence, blockchain and quantum computing are at different stages of development, their many applications and implications may seem clear. Yet such is the pace and intensity of innovation in these fields and other convergent technologies that the emergent trends cannot yet convey their true significance.
In the case of each of these exciting technologies, it would be difficult to overestimate their potential to solve today’s intractable problems and create new opportunities for progress in industry and commerce, and society at large. But amid the uncertainty there are also causes for concern about their potential misuse and the ramifications, including those for civil society, economic stability and cybersecurity.
In this business insight note, we explore the innovations emerging in these three areas, consider the most promising applications, and outline some of the factors shaping their development.
A-dAppting for a new web built on blockchain
What began as the enabling technology for Bitcoin and other cryptocurrencies is now the foundation stone for a rapidly expanding ecosystem. Blockchain applications are being built out in diverse sectors from supply chain management and cybersecurity to healthcare, manufacturing and retail.
One of the more obvious, and potentially most disruptive, business uses of blockchain’s distributed ledger is in financial services. The growth of DeFi – decentralised finance – is driving innovation in the fintech sector.
DeFi services can use blockchain to store, manage and mint crypto assets without the need for a central bank or other overseeing authority. Digital information on the blockchain is held within immutable, trusted and distributed networks.1
DeFi platforms allow people to lend or borrow, earn interest, trade and speculate on asset-based derivatives and cryptocurrencies, insure against risks, and earn interest in savings-like accounts – without intermediaries such as exchanges, brokers or banks.
Smart contracts & dApps
All this hinges on two other aspects of this revolutionary technology – smart contracts and decentralised apps.
Instead of two parties signing duplicate copies of an agreement, with smart contracts the contractual terms are compiled into computer code that can run on blockchain. A network of computers executes the actions once predetermined conditions have been met and verified. The blockchain is then updated as the transaction completes, making the smart contract irrevocable. Smart contracts streamline complex processes involving several intermediaries, cutting transaction costs, time and risk, while enhancing the resilience and transparency of the entire financial system.
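To make the mechanics concrete, the sketch below mimics that conditions-then-execute-then-record flow in plain Python. It is only an illustration of the idea: real smart contracts are written in blockchain languages such as Solidity and run on the chain itself, and every name here (EscrowContract, confirm, execute) is hypothetical.

```python
# Illustrative sketch only: a toy escrow 'smart contract' in Python.
# Real smart contracts run on a blockchain virtual machine; the names and
# logic here are hypothetical, for explanation only.
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    buyer: str
    seller: str
    amount: float
    conditions_met: dict = field(default_factory=dict)   # e.g. {"goods_delivered": False}
    settled: bool = False
    ledger: list = field(default_factory=list)           # stand-in for the blockchain

    def confirm(self, condition: str) -> None:
        """A verified party confirms a predetermined condition."""
        self.conditions_met[condition] = True

    def execute(self) -> None:
        """Runs once every condition is verified; the recorded outcome is then final."""
        if self.settled:
            raise RuntimeError("Contract already executed - it cannot be reversed")
        if self.conditions_met and all(self.conditions_met.values()):
            self.ledger.append(f"{self.amount} transferred from {self.buyer} to {self.seller}")
            self.settled = True

contract = EscrowContract("Alice", "Bob", 100.0, {"goods_delivered": False})
contract.confirm("goods_delivered")
contract.execute()
print(contract.ledger)   # ['100.0 transferred from Alice to Bob']
```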
Decentralised apps (dApps) use smart contracts to interact with the blockchain and manage all the network users’ interactions. These digital applications can also operate on peer-to-peer computer networks, and they display pages using much the same technology as websites or smartphone apps.
Sitting on a blockchain, they are fully decentralised and use open-source code. Another feature is that users can earn tokens – often cryptographic tokens – for completing tasks such as verifying a transaction. As with smart contracts, dApps and their user-reward feature will have innumerable applications beyond financial services, wherever processes are digitised.
Reconfiguring the web
As Web 3.0 unfolds into what will, hopefully, be a more transparent and democratic space for users, free of controlling influences, dApps are part of this digital zeitgeist.
In its early form, the internet basically served up pages for users to read. Web 2.0 saw the rise of social media and interaction, so passive consumers became content creators, sharing information. In the process, Big Tech commodified and monetized users and their data through centralised apps such as Facebook, Google and Amazon, and digital advertising.
According to this view, Web 3.0 will displace the current business model of the centralised tech giants, which is built into today’s web, as blockchains’ decentralised technologies and dApps mature.2 Users will retain control of their personal data through “transparent, opt-in, peer-to-peer communications”. While that may sound utopian, it is also closer to Tim Berners-Lee’s original vision for the internet.
Certainly, we are seeing a lot of small companies and start-ups working in this space, developing dApps and related systems. As this is genuine innovation beyond the state of the art in tech, their efforts are attracting investment and support, not least through tax credits for research and development, from North America to Europe.
Many are targeting DeFi services, for example, through a platform for investors (FLUIDEFI) and a Canadian marketplace for crypto (NDAX). The potential for streamlining stock market trading is immense. However, in the highly regulated world of finance, financial institutions are likely to be cautious about embracing dApps. While the US is reluctant to legislate in this area, the UK and Canada are moving towards regulation.
From gaming to browsing
Just about any marketplace could be disrupted by blockchain and dApps, which are tailor-made for trading in non-fungible tokens (NFTs). Other fertile ground for dApps includes gaming and digital collectibles (CryptoKitties), NFT marketplaces (OpenSea) and gambling (WINk) – all involving payment and collectibles redeemable in cryptocurrencies.3
Anticipating the shift to Web 3.0, companies are also seeking new ways to apply blockchain and digital currencies to web browsing and digital advertising.
The Basic Attention Token (BAT) is a currency used on the Brave browser platform. BAT’s developers (who include a co-founder of Mozilla, the maker of the Firefox browser) believe they will be able to provide a better and fairer service to users, advertisers and publishers of content, who all share in the token exchange.
By storing information on users’ interactions with digital ads in a distributed ledger, Brave promises that users will be exposed to fewer, more tailored ads and kept safe from today’s increasingly aggressive malware. Advertisers get a higher return through better targeting without violating users’ privacy. Publishers, who have lost out to Google and Facebook, gain new sources of revenue.4 The private ad network claims to have 55 million monthly users and thousands of campaigns with leading brands.
Earn as you move
This new model of shared reward is applicable to a host of products and services, as digital wallets replace customer loyalty cards. In 2021 the ‘play to earn’ (P2E) craze made online gaming a fun way to earn cryptocurrencies and attract new investors. ‘Move to earn’ schemes are shaping up to be a more sustainable option for the vibrant leisure and fitness market.
The opportunity to earn tokens for exercising seems a no-brainer to people who may already be sharing details anyway on their Web 2.0 smartphone fitness apps. NFTs are another incentive to sustain a healthy lifestyle. Meanwhile, the host builds up a community of committed, appreciative users of their products and services, along with various revenue streams.
Users wearing sneakers equipped with Step’n NFTs (purchased in the in-app marketplace) are rewarded every time they walk, run or cycle outdoors. An in-built digital wallet allows non-crypto users to take part. Tokens can be used to upgrade sneakers or swapped for cryptocurrency, and users vote on the revenue share donated for carbon neutrality. Step’n’s GMT ‘governance token’ is a high-performing cryptocurrency.5
Founded in 2016, Sweatcoin has 100 million registered users, who walk 20% more on average after downloading the free app. Rewards can be spent on products, donated to charity or converted into the digital coin. Metagym is an equivalent app for gym workouts.
A common feature of these and other ‘move to earn’ dApps is they merge lifestyle with gaming, social sharing and finance (Game-Fi and Social-Fi).
As the technologies of the blockchain and distributed web reconfigure the internet, we await an explosion of innovation. Novel dApps and services will take advantage of what Web 3.0 offers, exchanging information securely with no middleman or controlling authority, and able to run on any operating system or device, from cars and TVs to microwaves and toasters.
The evolution of AI
While the prospect of sentient machines still divides opinion, there is no denying how artificial intelligence is colonising modern life and commerce from drug discovery to chatbots, and from business process automation to a controversial AI rapper with a recording contract and half a million subscribers.6
Such is progress. But it’s not linear or limitless. The uptake of AI and big data analytics has exposed limitations too. Problems include bias and privacy concerns, the high costs of procuring data and implementing supervised machine learning analysis, flawed predictive models, the obsolescence of datasets distorted by disruptive events such as the COVID-19 pandemic, and the environmental impacts of energy-intensive number crunching on an unprecedented scale.
Just as human intelligence is highly adaptive, AI too is highly adaptable. Analysts are turning to ‘wide data’ – of various types drawn from unstructured, disparate sources – and ‘small data’, where company-specific or personal datasets are collected and mined for insights. Each approach can provide more context for analytics, helping solve problems and informing decision-making in real time at manageable cost. By 2025, Gartner researchers expect that 70% of organisations will be compelled to shift their focus from big to small and wide data, making AI less data-hungry.7
Two other AI adaptations are of more immediate and growing significance: augmented intelligence and generative AI. They are part of the movement to more user-friendly AI. With other convergent technologies, mediating APIs (application programming interfaces) and pre-prepared workflows, AI is becoming straightforward for users with limited AI experience, while remaining robust enough to provide valuable insights for data scientists, who can also work smarter as a result.
Augmenting human intelligence
The augmentation versus replacement argument is not over. AI will continue making it possible to automate activities. These new applications relieve people of mundane tasks, and their jobs in some cases, as business systems in particular are made more efficient. But existing roles change and new ones are being created too as, increasingly, AI assumes the role of smart servant or co-worker. This more benign vision for AI is usually called ‘augmented intelligence’ though there is an abundance of other terms (such as enhanced intelligence, intelligence amplification, cognitive augmentation, decision support, and machine-augmented intelligence).
The aim is to complement brainpower rather than build AI models that operate independently. Machine learning and predictive analysis from data sets enhance human intelligence, resulting in better decisions.8
Augmented intelligence is being deployed in numerous ways in life and business, from smarter chatbots and intelligent voice assistants to roboadvisors and automated business reporting or processing of documents in insurance and financial services.
Language lessons
Complementary technologies such as natural language processing (NLP) and natural language generation (NLG) are helping to bridge machine and human intelligence. In the case of AI-powered chatbots and intelligent voice assistants, for example, they help service providers work smarter by triaging callers and directing them to the right department.
This form of user-friendly, augmented AI also facilitates data analysis. Previously, a company needed teams of data scientists to analyse large datasets. Augmented analytics automates the onerous task of data cleaning and can provide the insights organisations need without heavy investment in data preparation, processing and analysis. So, by combining ML and NLP, augmented intelligence allows insights to be gleaned by analysing data at scale more swiftly for timely decision-making.9
Coupling NLP with computer vision and ML makes ‘intelligent document processing’ possible: extracting relevant data from documents, routing it through digital systems, and processing the content to achieve business goals.
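The sketch below illustrates that extract-route-process loop in simplified form. It is only an illustration: production systems rely on OCR, computer vision and trained ML models rather than regular expressions, and the document fields, thresholds and routing rules shown are hypothetical.

```python
# Minimal sketch of the extract -> route -> process idea behind intelligent
# document processing. Plain regular expressions stand in for the ML extractor;
# all field names, thresholds and routing rules are illustrative.
import re

OCR_TEXT = """
Claim number: CLM-2023-0042
Policy holder: Jane Doe
Claim amount: 1,250.00 GBP
"""

def extract_fields(text: str) -> dict:
    """Pull key fields out of OCR'd claim text (a stand-in for an ML extractor)."""
    patterns = {
        "claim_number": r"Claim number:\s*(\S+)",
        "policy_holder": r"Policy holder:\s*(.+)",
        "amount": r"Claim amount:\s*([\d,\.]+)",
    }
    fields = {}
    for name, pattern in patterns.items():
        match = re.search(pattern, text)
        fields[name] = match.group(1).strip() if match else None
    return fields

def route(fields: dict) -> str:
    """Route the document; anything the extractor could not read goes to a human."""
    if any(value is None for value in fields.values()):
        return "manual_review"          # human supervision still required
    if float(fields["amount"].replace(",", "")) > 10_000:
        return "senior_adjuster"
    return "automated_processing"

fields = extract_fields(OCR_TEXT)
print(fields, "->", route(fields))
```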
Insurance companies with lots of claims and supporting documentation to process are among the businesses investing heavily in AI for this purpose. Extracting and classifying key information is time-consuming. One North American client estimates their augmented intelligence application is doing the work of 2,000 employees. However, variations in the type and quality of documents and the limitations of image and optical character recognition (OCR) software can still defeat AI. So human supervision is required, and some items have to be recorded manually.
And with human help, the AI gets better. For example, when the user of a document maker-checker tool views a split-screen interface, with the original document and the AI-generated version side by side, and corrects mistakes, the AI learns from every correction. Nividous, an intelligent automation platform with bases in India, the US and UK, sees this kind of augmented AI as a subset of more general augmented intelligence.10 Guiding the ML process in this way improves the AI – full-circle augmented intelligence.
Automating analytics has limitless uses across the economy. An early example involved oil and gas firms using the NLP capability of IBM’s Watson platform to retain the tribal knowledge of an ageing workforce. Processing years of unstructured and structured data across a trove of documents – from manuals, standards, safety procedures, reports and historical work logs – it could answer questions from maintenance and operations technicians making decisions about complex projects, and in a quarter of the time they would otherwise need.11
Even with advanced digital technologies like AI-enhanced robotic process automation (RPA), most workplace processes require human workers. An RPA bot enhanced with AI may handle one task within the process, and a human the next. For example, an insurance agent onboarding a new customer may trigger an attended RPA bot to collect data from customer documents, using intelligent computer vision technology. By verifying the details, the bot reduces errors and detects potential fraud. Together, human and bot improve customer experience and business outcomes.12
Emerging patterns
Machine learning models are particularly good at discerning patterns in large data sets. From spotting credit card fraud to pricing insurance risk – or helping medical researchers detect disease (see below) – predictive analytics platforms use this capability to project likely future outcomes.
Many other applications are in the spheres of consumer service, sales and marketing. Unscrambl, whose Qbo software allows data users to analyse disparate data sources using natural language, cites examples from retail to hospitality involving targeted advertising and dynamic pricing driven by inventory and demand.13 In one case, Coca-Cola quadrupled the click rate on digital ads after using AI-driven image recognition and augmented analytics to analyse product references across social media.
But what if AI could forecast stock market movements by tapping, if not the wisdom of crowds, the sentiment of the herd? Social media’s user-generated content is a data tsunami and, like the vast majority of available data, it is unstructured and was long considered unfathomable.
Using NLP and machine learning, AI companies have developed sentiment indices that outperform the stock market by tracking positive comments about shares in people’s online posts. One of many AI applications in finance and risk prediction, this trend was spurred almost 10 years ago when Twitter began ‘cashtagging’ stocks – placing a dollar sign instead of a hashtag in front of the stock symbol, making it easier to track millions of relevant tweets.
Companies such as BUZZ and AIstockfinder offer retail and/or institutional investors advice to help them beat the market. Based on its NLP and AI solutions, Generative Mind says it is developing a new generation of AI products for businesses that “require no training and do not operate in a black box”. The Los Angeles-based company’s system simulates human cognition to extract actionable insights from social media messages to forecast risk and market price movement.
Multiple data sources on the web, including financial reports and blogs, are being mined in this way to predict price movements.
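At its simplest, the approach filters posts by cashtag, scores their tone and aggregates the result into a sentiment signal per ticker. The toy sketch below shows that mechanic only; real systems use NLP models trained on financial language, and the lexicon, posts and tickers here are invented for illustration.

```python
# Toy illustration: filter posts by cashtag, score their tone with a tiny
# lexicon, and aggregate into a sentiment signal per ticker. Real systems use
# trained NLP models; the lexicon, posts and tickers below are made up.
import re
from collections import defaultdict

POSITIVE = {"beat", "strong", "bullish", "upgrade", "growth"}
NEGATIVE = {"miss", "weak", "bearish", "downgrade", "lawsuit"}

posts = [
    "$ACME crushed earnings, strong growth and an analyst upgrade",
    "Worried about $ACME - bearish after the lawsuit news",
    "$OTHR looks weak this quarter",
]

def sentiment_by_cashtag(posts: list[str]) -> dict[str, float]:
    scores: dict[str, list[int]] = defaultdict(list)
    for post in posts:
        words = set(re.findall(r"[a-z]+", post.lower()))
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        for tag in re.findall(r"\$([A-Z]+)", post):
            scores[tag].append(score)
    # average score per ticker: a crude stand-in for a sentiment index
    return {tag: sum(s) / len(s) for tag, s in scores.items()}

print(sentiment_by_cashtag(posts))   # e.g. {'ACME': 0.5, 'OTHR': -1.0}
```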
The creative side of AI
Another significant trend in the evolution of AI is putting previously unimaginable processing and creative power in the hands of business workers and even consumers, as well as researchers and scientists.
Generative AI uses ML algorithms and other technologies to enable machines to generate new, artificial content using existing text, data, audio files, videos, or images. Software enabled in this way can abstract the underlying pattern in information inputs to create new outputs similar to the original but not slavishly repetitive.
The most familiar of this far-reaching innovation’s many applications range from the sinister to the frivolous – deep-fake videos to face-swapping smartphone apps. It would be naïve to downplay the harmful potential of a technology used to show Ukraine’s president Zelenskiy apparently capitulating in the early days of the Russian invasion, even if it was crudely executed.14
However, generative AI is also being exploited in multiple ways for the good of industry and society, and has been described by MIT as one of the most promising advances in the world of AI in the past decade.15
Power of networking
While ML models are a powerful tool for prediction, clustering and classification, generative AI is a deep learning application that creates new content from existing content. It does this using ‘generative adversarial networks’ (GANs) and transformer networks. Transformers are neural nets (structured to mimic the brain) that transform a given sequence of elements, such as a sentence (a sequence of words), into another sequence.16 Like game-playing programs that improve by competing against themselves to refine their strategies, these ML systems learn from adversarial training. GANs comprise two neural networks, a ‘generator’ and a ‘discriminator’: the generator creates new data while the discriminator tries to distinguish it from the real source data, pushing the generator to produce ever more convincing output.
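A minimal sketch of that adversarial loop is shown below, using PyTorch on toy one-dimensional data. It is illustrative only – the network sizes, data and training schedule are arbitrary choices, not a description of any production model – but it shows the generator and discriminator improving against each other.

```python
# Minimal GAN sketch in PyTorch on toy 1-D data, to make the generator /
# discriminator roles concrete. Architecture sizes and data are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data: samples from a normal distribution the generator must imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 0.5 + 2.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    # 1. Train the discriminator to tell real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(5, 8))
print(samples.mean().item())   # should drift towards the real mean of ~2.0
```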
Generative AI’s potential applications are legion. It becomes possible to convert satellite images to map views for exploring uncharted regions, or to create 360-degree photos of a person’s head from, say, a side view, for security verification.
In the movie industry, weather and light conditions can be transformed, and actors’ voices and faces cloned, synthesised and lip-synced with little effort, even when restoring artefacts. In healthcare, x-rays and CT scans can be transformed into photo-quality images for more accurate diagnosis.17
Various AI-powered writing models are being touted for creating marketing content, from emails, social media posts and website product descriptions to blogs and articles (please note: AI had no hand in drafting this text).
Text to image translation is another generative tool, which is raising ethical issues as well as eyebrows. The pioneering Dall-E system was developed using the GPT (generative pre-trained transformer) language models of OpenAI. It learned how to swap pixels for text by analysing 400 million images with text captions scraped from the internet.18 The even more realistic and high-resolution Dall-E 2 is now in beta. Meanwhile, the version offered by research lab Midjourney has illustrated the front covers of established publications such as The Atlantic and The Economist. Whatever the advantages, many in the creative economy have concerns about plagiarism and copyright, not to mention artists’ livelihoods.19
Multiple applications
Other uses range from personalisation – in the design of prosthetics or delivery of banking services – to drug discovery and the creation of synthetic data.
Generative AI could optimise prosthetic limbs through designs based on ML modelling of the behaviour and movement patterns of patients. The technology’s imaging capabilities can advance healthcare in other ways, such as earlier diagnosis of retinopathy in diabetic patients. By evaluating millions of images of affected retinas, and generating new datasets to characterise the earliest stages of this progressive blindness, ophthalmologists may take preventive measures to eliminate or at least mitigate the condition.20
In banking and investment services, both GANs and natural language generation (NLG) feature in most scenarios for fraud detection, trading prediction, synthetic data generation and risk factor modelling – the growth areas in which banks are expected to apply generative AI.21
In drug discovery, Gartner predicts that by 2025, more than 30% of new drugs and materials will be systematically discovered using generative AI techniques.22 It cites the creation of a drug to treat obsessive compulsive disorder (OCD) in less than 12 months.
Exscientia, a spin-off from the University of Dundee in Scotland, claims to be both the first to automate drug design and the first to have an AI-designed molecule enter clinical trials. It is just one of many pharmtech companies founded in the past decade around AI-based strategies for drug discovery and development that have gone on to raise substantial funding from investors and attract pharmaceutical partners.23
Chemistry42 is a platform combining generative algorithms and reinforcement learning designed to find new molecular structures with desired properties – automatically and in days, as opposed to the years it normally takes to identify one candidate. Its biotech creator, Insilico Medicine, announced in 2021 that it had found a new protein target and molecule for a drug against idiopathic pulmonary fibrosis (IPF). The discovery, said to be a world first, took less than 18 months and cost 10% of a conventional study.24
Synthesising data
Combining generative AI and augmented intelligence, an Ayming client was able to improve accuracy when detecting heart anomalies in small datasets from wearable devices by generating synthetic data. Despite the rarity of these events, the technological advances earned tax credits for this pioneering R&D, although obstacles remained in improving the reliability of the predictive alerts reviewed by clinicians.
The power to generate synthetic data opens up new fields of endeavour for AI with countless applications from medicine to the corporate world.
Among the biggest challenges when harnessing AI, including in business, are privacy concerns, bias and lack of training datasets. Generative AI can provide solutions. Based on patterns detected in data, GANs can also be used to generate new data, configured with similarities to the original data.
Crucially, the synthetic data set should have the same predictive power as the original data, but none of the concerns that restrict sharing of such data sets. This is game-changing for innovation, partnering and monetizing data, according to Accenture.25 It cites a financial institution with a cache of valuable data so closely controlled that its own analysts were hamstrung. When the company created synthetic data, the team could continuously update their model and generate ongoing insights to improve business performance.
Generating reliable data, and verifying it, requires specialist skills as well as AI know-how and sophisticated frameworks. This may include a pre-agreed definition of fairness. The new dataset can then be used to train a model, without the need for bias mitigation strategies like algorithmic fairness, which can lead to accuracy trade-offs.
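One common sanity check for the ‘same predictive power’ claim is to train a model on the synthetic data, test it on held-out real data, and compare the result with a model trained on the real data itself. The sketch below illustrates that check; a simple per-class Gaussian sampler stands in for a GAN, and the dataset and model choices are illustrative only.

```python
# Sketch of the 'same predictive power' check: train on synthetic data,
# test on real data, and compare with a model trained on the real data.
# A per-class Gaussian sampler stands in for a GAN; everything is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Crude synthetic generator: sample each class from its own fitted Gaussian.
rng = np.random.default_rng(0)
X_syn, y_syn = [], []
for label in np.unique(y_train):
    Xc = X_train[y_train == label]
    X_syn.append(rng.multivariate_normal(Xc.mean(axis=0), np.cov(Xc.T), size=len(Xc)))
    y_syn.append(np.full(len(Xc), label))
X_syn, y_syn = np.vstack(X_syn), np.concatenate(y_syn)

real_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
syn_model = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)

print("train on real, test on real:     ", accuracy_score(y_test, real_model.predict(X_test)))
print("train on synthetic, test on real:", accuracy_score(y_test, syn_model.predict(X_test)))
```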
Generative AI produces less than 1% of data today, but that is projected to rise to 10% by 2025. In consumer-facing applications, Gartner predicts that 20% of all test data will be synthetically generated.
Towards a more trustworthy AI
When technology as important as AI permeates every sector of the economy, it touches people’s lives in innumerable ways. The danger of bias caused by skewed or partial training datasets, or other flaws in AI models, is real. And there are many notorious examples, ranging from discrimination against women in recruitment to over-prediction of prisoner re-offending rates and inferior healthcare interventions for ethnic minorities.26
How do users ensure that AI algorithms are fair, reliable and trustworthy so that the technology commands public confidence and acceptance? There are various tools, recommendations and best practices available from consultants27 and others.28
The role of ethics in assuring trustworthy AI is also exercising regulators and policymakers. An Artificial Intelligence Act proposed by the European Commission in April 2021 addresses the risks of misuse.29 It adopts a broad definition of AI, defends fundamental rights, and explicitly bans certain applications. It will also establish standards for quality and disclosure, require assessments, and set out control mechanisms and penalties for breaches.30
Applications are assigned to three risk categories. Government-run social scoring as used in China is deemed unacceptable, and will be banned. CV-scanning tools for job applicants are an example of a high-risk application subject to specific legal requirements. Others that don’t fall into these categories would go largely unregulated.
Some campaigners are lobbying for changes to close loopholes and ensure that unforeseen applications that prove dangerous are re-assigned to the ‘high-risk’ category.31 Digital rights groups have also expressed alarm at ‘national security’ exemptions added ahead of negotiations over the text in the European Parliament.
When enacted, this could be the first AI law by a major regulator, unless pre-empted by US financial authorities, who have also signalled their intention to regulate AI.32 In September 2021, Brazil’s Congress approved a bill to create a legal framework for AI regulation.33
Like its General Data Protection Regulation (GDPR), the EU Act could become a global standard. Meanwhile, companies are urged to self-assess their AI applications in advance.34 The Commission promises a light touch, especially for SMEs.35 Like other jurisdictions, the EU does not want to curb the competitiveness of business, while making Europe ‘a global hub for trustworthy AI’.
New horizons
That priority is reflected in the bloc’s support framework for innovation. The EU’s Horizon 2020 programme includes TAILOR – a collaborative network spanning the scientific community and industry in 21 countries. The aim is to help researchers promote trustworthy AI that is robust, transparent and explainable. It also addresses the ‘brain drain’ and reliance on US and Chinese AI technologies. TAILOR targets opportunities in health and logistics, in particular, according to France’s INRIA National Research Institute in Digital Science & Technology.36
This is just one of several Horizon-funded initiatives to build a vibrant AI market. HumanE-AI also focuses on ethics and trust, including the problematical impacts of networked AI systems on society and the environment.37 ELISE is a network of artificial intelligence research hubs.38 Society, media and democracy are addressed by AI4Media – a centre of excellence for next-generation AI research and training.39
INRIA is also a partner in TRUST-AI, an international network developing a new paradigm for collaboration between human experts and machine learning.40 Separately, France’s national research hub is collaborating directly with its German counterpart, DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz). Through ENGAGE, they aim to create a new generation of computing infrastructure for AI based on deep neural networks.41
Computing carbon
These algorithms will simulate the human brain to process data in complex ways using advanced mathematical models. Apart from maximising the speed of calculation, ENGAGE also recognises the other ethical dilemma posed by AI – its carbon intensity. The new architecture should consume less energy by improving resource management, optimising AI workflows and managing memory more efficiently.
The main factors that influence the energy consumption and emissions of supercomputers are the location of servers (their energy source and cooling required) and memory availability (rather than usage).42
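A back-of-envelope sketch of how such footprints are estimated is shown below, following the general logic of carbon calculators like the one cited above: energy drawn by processors and memory, scaled by the data centre’s overhead and the local grid’s carbon intensity. Every figure in it is illustrative, not a measurement.

```python
# Back-of-envelope sketch of estimating a training job's footprint: energy drawn
# by processors and memory, scaled by data-centre overhead (PUE) and the local
# grid's carbon intensity. All numbers are illustrative placeholders.
runtime_hours = 120            # length of the training run
processor_power_kw = 4 * 0.30  # four accelerators at ~300 W each
memory_power_kw = 0.10         # memory draw
pue = 1.5                      # data-centre overhead (cooling, power delivery)
grid_intensity = 0.4           # kg CO2e per kWh, depends on server location

energy_kwh = runtime_hours * (processor_power_kw + memory_power_kw) * pue
emissions_kg = energy_kwh * grid_intensity
print(f"{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2e")
```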
Deep learning in NLP alone has a mind-boggling carbon footprint.43 The same 2019 study also confirmed that the computational efficiency of hardware and algorithms was not a priority for researchers. Developing GPT-3, the largest language model ever trained at the time, reportedly involved the equivalent of 355 years’ computing time.44 A picture may paint a thousand words – and Dall-E no doubt does so on demand in seconds – but how much carbon was consumed crunching 400 million training images and captions?
Research in 2021 found that the environmental impact of computing and ICT was heavier than expected.45 In terms of global greenhouse gas emissions, it upsized previous estimates from 1.8-2.8% to 2.1-3.9% by factoring in the full supply chain impacts. Worldwide aviation’s share hovers around 2%.
It is true that AI developers tend to release already-trained models via open sources, avoiding the need for others to reinvent their energy-intensive wheel. And AI can be a force for good in tackling the climate emergency, as Deloitte also points out.46 Among many eco-positive applications in sectors from agriculture to transport, it can play an enabling role in a circular economy that replaces our linear ‘extract-make-waste’ model by closing the loop for the world’s depleting natural resources.
A greener AI?
But what would ‘green AI’ look like? There is scope to prune neural networks without undermining their computational power.47 But even with high energy prices the incentives to monetize technological gains will continue to trump environmental concerns.
Regulatory pressure will be required to effect the change of mindset required of developers, service providers, users and AI itself. To avoid unnecessary data storage and large-scale training of new language models from scratch, green AI will need to hard-code the kind of abstract reasoning human brains perform, according to one view, perhaps echoing the work of ENGAGE.48
In a challenge to their Big Tech employers’ claims to sustainability, US tech workers have joined protests over their industry’s contribution to the climate emergency.49 Action is overdue to tackle the sector’s lack of transparency, especially in regard to the impact of data centres.
The New York-based AI Now Institute, which studies the social implications of artificial intelligence, has called on policymakers to address the outsized impact of AI on climate change.
It recommends requiring companies to provide full energy and carbon transparency. This should mean accounting for the full-stack supply chain – from mining minerals for chips to waste produced by consumer gadgets. Calculating the energy and climate impacts of AI should be a standard, mandated part of policy practice. It also called for tech regulation to be integrated in climate policymaking and for curbs on the use of AI to accelerate fossil fuel extraction.50
Compared with bias and unethical uses, these carbon costs are still something of a dirty secret. Little or no heed has been paid to environmental costs in the race to develop and apply AI and other energy-hungry digital applications such as blockchain and quantum computing (see below). While AI can make a significant contribution to business efficiency and climate solutions, it seems that tech will remain a big part of the problem.
Computing’s unknown quantity
Quantum computing will be game-changing because, at the subatomic (quantum) level, different rules apply.
A digital computer relies on transistors in silicon chips that exist in one of two states – the binary 1/0. In a quantum computer, a qubit can represent either state, anything in between, or both states at the same time (superposition).
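A single qubit can be pictured as a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. The short sketch below simulates that on an ordinary computer – purely as an illustration of what superposition means, with no quantum hardware involved.

```python
# Minimal state-vector sketch of a single qubit, to make 'superposition' concrete.
# A qubit state is a pair of complex amplitudes; squared magnitudes give the
# probabilities of measuring 0 or 1. Purely illustrative simulation.
import numpy as np

zero = np.array([1, 0], dtype=complex)          # classical-like |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

qubit = hadamard @ zero                          # put the qubit into superposition
probabilities = np.abs(qubit) ** 2               # Born rule: |amplitude|^2

print(qubit)           # [0.707..+0.j 0.707..+0.j]
print(probabilities)   # [0.5 0.5] - equal chance of measuring 0 or 1
```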
To tap the unprecedented processing power, controlling these interactions and preventing interference is essential. So high vacuums and ultra-stable mechanical and thermal environments are required. The prize is the ability to solve complex computational problems beyond even today’s supercomputers – and a huge competitive advantage for early adopters.
With applications from materials science and medicine to complex business challenges and implications for other digital technologies, quantum computing could be as transformative as computing itself was.
From hype to happening?
For decades now, computing’s holy grail has been hyped, but development has accelerated in the last couple of years. As quantum technologies are maturing, private investors are following governments by betting on breakthroughs.
At least $35 billion is known to have been invested so far in quantum technologies worldwide. More than $30 billion was by states, but in the last quarter of 2021 alone private investors added another $1 billion.51
Most G20 countries have flagship initiatives involving clusters of established companies, start-ups and universities. China, Canada, the US and European Union are leading on funding. Germany, France, the UK and Netherlands have significant national programmes. Singapore, Japan, South Korea, India and Australia are also strongly committed to quantum R&D.52
Around 50 companies are thought to be working on various hardware platforms at different stages of development. These use either superconductors, neutral atoms, trapped ions, photons or silicon. The most advanced are superconducting qubits and trapped ions, but others may catch up, creating a mixed market of different platforms in a quantum ecosystem.
Various milestones have been reached since IBM made the first quantum computer available via the cloud in 2016.53 In 2019 Google claimed ‘quantum supremacy’ with the completion of the first calculation beyond the capabilities of classical computing.54 But no machine has yet reached the reliability, speed and scale required to demonstrate the quantum advantage.
Trial and error
Qubits remain in a quantum state for fractions of a second, making them error- prone. Controlling this qubit error rate is one of those challenges. A French start- up claims that its self-correcting superconducting quantum bit, the cat qubit, is a breakthrough that makes fault-tolerant quantum computing easier. Alice & Bob (named after fictional personae used in cryptographics and quantum physics thought experiments) raised $30 million to develop their new chip architecture and quantum computing-as-a-service for launch in 2023.55
As more university spin-offs join Big Tech in the quantum race, the technology’s ecosystem is expanding. Many systems need highly specialised environments to operate in, such as cryogenic cooling, ultra-high vacuums and magnetic shielding to protect the physical interactions of qubits whether electrical, optical or magnetic.
Solid-state quantum processors tend to operate at deep-space temperatures below 1 Kelvin or -272°C. Finland’s Bluefors manufactures equipment in the advancing field of hi-tech refrigeration, which will be essential to scale these machines. In the Netherlands, Delft Circuits, which is based at the town’s university, supplies the big names in quantum computing with cryogenic cables.
Microsoft and semiconductor specialist Rambus are currently collaborating on a project to develop prototype memory block DRAM modules that function at cryogenic temperatures.56
Other systems can function at slightly less challenging extremes. Universal Quantum uses trapped ion-based quantum computing at 70K. Another UK university spin-off, Oxford Quantum Circuits, has built its first commercial machine based on an aluminium circuit on super-cooled silicon. Swiss quantum technology incubator Terra Quantum is developing its own high-temperature technology.57
It is one of several companies developing both hardware and software, while a growing number are specialising in the algorithms and protocols needed for operating systems and applications. These might seem countless, but even when perfected, quantum computers will be better suited to solving only some problems, so will probably work in tandem with classical computers.
Decoding complexity
The technological race is also about applying the increasingly powerful set-up to the right business uses. Quantum technology’s advantage in optimising and managing risk in complex systems makes finance, insurance and investment products prime candidates.
The machines already in use are mostly NISQ – ‘noisy intermediate-scale quantum’ – systems. Useful applications in finance are some five years off, according to Goldman Sachs.58
Yet Spanish software start-up Multiverse Computing insists that quantum computers can already outperform classical computers, solving problems such as portfolio optimisation and fraud detection 100 times faster. This is despite having only 100 qubits – about a tenth of what was thought necessary for an effective system – and high error rates. Its clients include the European Tax Agency, Bankia, the Bank of Canada and BBVA – an early investor in this space.59
Medicine is another field eager for quantum’s computing power. In 2020 many top pharmaceutical companies – convinced that the technology will revolutionise drug discovery, manufacturing and the supply chain – formed the QuPharm alliance to collaborate on quantum solutions and applications.60 Its partners include the UK’s National Quantum Computing Centre, which awarded a £6.85 million grant to a SEEQC-led consortium to build a commercially scalable machine for drug development.61
Other appropriately complex problems are crying out for solutions in logistics, including route and traffic optimisation and international shipping. German rail operator Deutsche Bahn has partnered with Cambridge Quantum to optimise its scheduling.62
As with biology, materials science could be transformed by quantum-driven discoveries. Possibilities range from better crops and fertilizers to green hydrogen catalysts and more efficient batteries.
Quantum computing can also turbocharge machine learning, with ramifications for AI and neural networks, and applications in autonomous vehicles, automated trading and predictive maintenance. Quantum ML is a research field in its own right that could accelerate smaller business access to AI and complex models.
Cyber insecurity
The danger the technology poses for cybersecurity is also real and approaching. Quantum computers would easily solve the mathematical puzzles underpinning current encryption protocols. In the US, most critical communication systems and infrastructure are considered vulnerable to some degree.63 In May 2022 the US issued a presidential memorandum on the quantum security and ecosystem development required to combat this threat.64
In the tech race, cybercriminals won’t be playing by the rules, but there is a role for international agreements to safeguard the potential social and economic benefits of quantum technologies. While allowing fair competition, these might cover interoperability, performance measurement, certification, software standards and responsible use.
There is also the carbon footprint to consider. This challenge has not been solved for conventional computing and AI. Quantum technology’s intensive energy use is a problem that needs to be tackled before it scales.
Quantum computing may not save the planet by solving its greatest problem, but it will change the world. The technology can contribute to the many solutions the climate emergency requires – along with other intractable problems, in drug development and other fields, that are beyond the capabilities of artificial intelligence and blockchain.
Contributors
Delphine Favre, PhD
Resource manager – ICT expert Innovation Funding Consultant, Ayming France
For the past 11 years, Delphine has managed a portfolio of clients working in the ICT sector, from start-ups to large companies. She has helped clients adopt the best strategy for funding their innovation, and has supported the French ICT teams in presenting the strongest scientific defence of client innovations during control phases.
She graduated with a master’s degree in photonics and a PhD in biomedical image processing.
Maha Jonovich
Senior Manager, Finance and Innovation Performance Ayming Canada
For more than 14 years, Maha has helped secure government funding and R&D tax credits for numerous large and mid-size corporations in the IT industry, with a focus on software engineering, machine learning, applied mathematics, telecommunications, and electronics engineering. Maha also leads a team of consultants in Central Canada to support the R&D funding needs of Ayming’s clients in the IT sector. She graduated with a Bachelor of Engineering in Computer and Communications Engineering.
1 https://cryptoadventure.com/defi-vs-dapps-how-do-they-work-and-whats-the-difference/
2 https://www.forbes.com/sites/forbestechcouncil/2020/01/06/what-is-web-3-0/
3 https://www.geeksforgeeks.org/what-are-decentralized-apps-dapps-in-blockchain/
4 https://www.investopedia.com/terms/b/basic-attention-token.asp
5 https://www.iqstock.news/n/move-earn-earning-crypto-breaking-sweat-worth-4312286/
6 https://www.bbc.co.uk/news/newsbeat-62659741?at_medium=RSS&at_campaign=KARANGA
7 https://www.gartner.com/en/articles/the-4-trends-that-prevail-on-the-gartner-hype-cycle-for-ai-2021
8 https://www.hitechnectar.com/blogs/examples-of-augmented-intelligence/
9 https://unscrambl.com/blog/7-augmented-intelligence-examples-and-industry-use-cases/
10 https://nividous.com/blogs/augmented-intelligence
11 https://www.techtarget.com/searchbusinessanalytics/feature/5-augmented-analytics-examples-in-the-enterprise
12 https://nividous.com/blogs/augmented-intelligence
13 https://unscrambl.com/blog/7-augmented-intelligence-examples-and-industry-use-cases/
14 https://jp.reuters.com/article/uk-ukraine-crisis-deepfake-idUKKCN2LD2GP
15 https://www.analyticsinsight.net/top-3-emerging-technologies-in-artificial-intelligence-in-the-2020s/
16 https://iq.opengenus.org/transformer-network-replace-gans/
17 https://digitalreality.ieee.org/publications/what-is-augmented-intelligence
18 https://www.theguardian.com/commentisfree/2022/aug/20/ai-art-artificial-intelligence-midjourney-dall-e-replacing-artists?ref=refind
19 https://social.legal/midjourney-testing-the-boundaries-of-art-ai-and-ip-law/
20 https://www.forbes.com/sites/naveenjoshi/2022/03/23/exploring-the-plethora-of-use-cases-of-generative-ai-in-various-sectors/?sh=73804a621ff4
21 https://www.gartner.com/en/newsroom/press-releases/2022-05-24-gartner-identifies-three-technology-trends-gaining-tr
22 https://www.gartner.com/en/articles/the-4-trends-that-prevail-on-the-gartner-hype-cycle-for-ai-2021
23 https://www.nature.com/articles/d43747-021-00045-7
24 https://www.nature.com/articles/d43747-021-00039-5
25 https://www.accenture.com/us-en/insights/artificial-intelligence/synthetic-data-speed-security-scale
26 https://towardsdatascience.com/bias-in-artificial-intelligence-a3239ce316c9
27 https://www2.deloitte.com/be/en/pages/risk/articles/trustworthy-ai.html
28 https://research.aimultiple.com/ai-bias/
29 https://ec.europa.eu/commission/presscorner/detail/en/ip_21_1682
30 https://artificialintelligenceact.eu/the-act/
31 https://artificialintelligenceact.eu
32 https://hbr.org/2021/04/new-ai-regulations-are-coming-is-your-organization-ready
33 https://www.camara.leg.br/noticias/811702-camara-aprova-projeto-que-regulamenta-uso-da-inteligencia-artificial
34 https://ai-regulation-info.eu
35 https://ec.europa.eu/commission/presscorner/detail/en/QANDA_21_1683
36 https://www.inria.fr/fr/tailor-reseau-europe-intelligence-artificielle
37 https://www.humane-ai.eu/research-roadmap/
38 https://www.elise-ai.eu
39 https://www.ai4media.eu
40 http://www.trustai.eu/index.html
41 https://www.inria.fr/fr/engage-infrastructures-calcul-intelligence-artificielle-dfki
42 https://green-algorithms.org
43 https://arxiv.org/pdf/1906.02243.pdf
44 https://lambdalabs.com/blog/demystifying-gpt-3/
45 https://www.treehugger.com/computing-emissions-worse-than-thought-study-5204571
46 https://www2.deloitte.com/uk/en/blog/experience-analytics/2020/green-ai-how-can-ai-solve-sustainability-challenges.html
47 https://twitter.com/alex_renda_/status/1237393727389184007 https://openreview.net/forum?id=S1gSj0NKvB
48 https://towardsdatascience.com/what-would-a-green-ai-look-like-28d91aaff3be
49 https://techworkerscoalition.org
50 https://medium.com/@AINowInstitute/ai-and-climate-change-how-theyre-connected-and-what-we-can-do-about-it-6aa8d0f5b32c
51 https://thequantuminsider.com/2022/01/06/tqi-annual-report-looks-back-on-3-2-billion-in-investments-steady-stream-of-scientific-advances-of-2021/
52 World Economic Forum – State of Quantum Computing, 2022
53 https://uk.newsroom.ibm.com/2016-May-04-IBM-Makes-Quantum-Computing-Available-on-IBM-Cloud-to-Accelerate-Innovation
54 https://www.nature.com/articles/d41586-019-03213-z
55 https://techcrunch.com/2022/03/09/alicebob-a-quantum-computing-startup-raises-30m-to-launch-its-first-fault-tolerant-cat-qubit-computers-in-2023/
56 https://www.rambus.com/blogs/a-closer-look-at-rambus-cryogenic-memory-collaboration-with-microsoft/
57 https://sifted.eu/articles/11-european-quantum-computing-companies/
58 https://www.ft.com/content/bbff5dfd-caa3-4481-a111-c79f0d38d486
59 https://www.bbva.com/en/bbva-pursues-the-financial-sectors-quantum-advantage/
60 https://www.linkedin.com/pulse/qupharm-pharmaceutical-companies-form-alliance-share-risks-hua/
61 https://www.nqcc.ac.uk/updates-article/announcing-the-qupharma-project-for-drug-development-using-quantum-computing/
62 https://quantumzeitgeist.com/cambridge-quantum-and-deutsche-bahn-employ-quantum-algorithms-to-optimise-train-scheduling/?amp
63 https://www.rand.org/pubs/research_reports/RRA1367-6.html
64 https://www.nsa.gov/Press-Room/News-Highlights/Article/Article/3020175/president-biden-signs-memo-to-combat-quantum-computing-threat/