Anyone who remembers the dot-com boom 25 years ago will not be surprised by the rapid rise in the popularity of AI among users, developers, and investors. But the biggest earners are the manufacturers of chips and equipment on which AI systems run. Just look at the growth rate of NVIDIA’s shares. Now, let’s delve into the impact of artificial intelligence on the economy and our lives. Will AI destroy jobs? No, AI will create a plethora of new jobs. Will AI create many new and useful things? No, AI will flood the internet with a lot of secondary, dubious-quality, and false content. Is AI only capable of dissecting existing data? No, AI can generate new ideas. Will AI create new industries? No, AI is just another bubble.

In reality, these questions and their answers are only partially true and partially false. The actual situation is somewhere in between these questions and answers. So, let’s try to figure out where the truth lies. Disclaimer: this article is written by a human, not AI.

In the Caribbean, there is a tiny island called Anguilla, with an area of just 91 square kilometres and a population of approximately 16,000 people. It is a small offshore jurisdiction and a British Overseas Territory. But it so happened that this country now has the world’s highest share of AI-related revenue in its economy. How could this happen? Very simply: in 1995, the International Telecommunication Union blessed Anguilla with the country code .ai for internet domains. Interestingly, at that time, the island had neither internet access nor any IT infrastructure.

Anguilla is, for now, the only country that has only benefited from artificial intelligence.

When the AI boom started in 2022, the country felt it immediately. The demand for domains in the .ai zone skyrocketed: from 100,000 in 2021 to 354,000 in 2023. By the end of 2023, the government received $32 million from domain registration fees, approximately $2,000 per capita, including infants. This accounts for 20% of the nation’s income, with only tourism bringing in more at 37%.

Anguilla’s case demonstrates that in a gold rush, the most money is made by the suppliers of shovels and jeans for the miners. But jokes aside, AI is currently one of the most powerful forces in the global economy and politics, capable of manipulating voters. It generates not only benefits but also risks. It creates and distorts simultaneously. There are still no recipes for dealing with AI or protecting against the undesirable consequences of its use.

But first, let’s understand what this artificial intelligence can do.

Photo: IMF Managing Director Kristalina Georgieva: “Supervisory and regulatory bodies must balance the risks and opportunities associated with new technologies, including cross-border payments and digital currencies. AI promises transformation, but it can also test operational resilience and cyber risk management.” Source: Swiss Institute of International Studies.

AI is a Tsunami

Kristalina Georgieva, Managing Director of the International Monetary Fund:

“Artificial intelligence is hitting the global labour market like a tsunami. AI is likely to affect 60% of jobs in developed countries and 40% of jobs worldwide in the next two years. We have very little time to prepare people and businesses.

This could lead to a huge increase in productivity if we manage it correctly, but it could also lead to an increase in misinformation and, of course, greater inequality in our society”.

From a speech at a discussion organised by the Swiss Institute of International Studies, affiliated with the University of Zurich, on 13 May 2024.

What It Can Really Do

At present, it seems like AI knows and can do almost everything. In reality, this is far from true. The majority of applications appear to work roughly like this: the user formulates a task, and the AI-powered programme delivers a ready-made result.

For example, you might ask for the best pizza recipe with unusual ingredients. But wait, aren’t there search engines like Google that can find any recipe? Correct, they can, but only if:

  •  such a recipe actually exists;
  •  the search engine has previously found and indexed a web page containing that recipe.

An AI system operates quite differently:

  •  It has reviewed a multitude of various recipes, not just for pizza. It has even read cookbooks, all in different languages. In the industry, this is known as machine learning and the use of Large Language Models (LLM);
  •  It receives a task (prompt) from the user, which includes the conditions the result must meet. For example, a list of ingredients, the maximum cost of products for pizza in a particular country, the presence or absence of allergens, desired calorie content, etc. This interaction typically occurs in a chat where a specialised programme communicates with the client;
  •  The programme begins to use AI algorithms to formalise the task, i.e., to understand what is required of it;
  •  To process vast amounts of very diverse information, AI uses enormous computational power and huge data sets. It may also reuse similar solutions created while processing earlier tasks;
  •  Ultimately, the AI programme generates the result in the form of a recipe text, and the happy client uses it.
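The steps above can be sketched in a few lines. This is a minimal illustration, not a real client: `call_llm` is a hypothetical stand-in for whatever LLM API a real system would use, and the recipe constraints are invented for the example.

```python
def build_recipe_prompt(ingredients, max_cost, allergens, max_calories):
    """Formalise the user's constraints into a single prompt string."""
    return (
        "Suggest a pizza recipe.\n"
        f"Ingredients available: {', '.join(ingredients)}.\n"
        f"Maximum ingredient cost: {max_cost}.\n"
        f"Exclude allergens: {', '.join(allergens) or 'none'}.\n"
        f"Maximum calories per serving: {max_calories}."
    )

def call_llm(prompt):
    # Hypothetical placeholder: a real system would send `prompt` to a
    # large language model and return the generated recipe text.
    return f"[generated recipe for a {len(prompt)}-character prompt]"

prompt = build_recipe_prompt(
    ingredients=["flour", "mozzarella", "pineapple", "jalapeño"],
    max_cost="$15",
    allergens=["peanuts"],
    max_calories=800,
)
print(call_llm(prompt))
```

The point of the sketch is the division of labour: the chat front-end collects the constraints, the prompt formalises them, and the model does the generation.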

But the most crucial point is that the forms and methods of using AI can look very different.

A car’s autopilot? It certainly has AI, which was trained not on texts but on videos from dash cams and by recording the driving styles of thousands of experienced drivers. This is much more complex than working with text. But thanks to such technologies, corporations produce cars with autopilots. However, the question arises: if a car using autopilot hits a pedestrian, who is responsible? The car manufacturer? The driver? The car owner? The developer of the AI system used to create the autopilot?

A system for forming medical diagnoses based on recognising images from tomographs or studying clinical test results? Here, too, there must be AI trained on vast amounts of medical data, case histories, and scientific knowledge. But even in this case, the question remains: who is responsible for a false diagnosis, and therefore, harmful treatment?

Quality translation from one language to another? This is perhaps the simplest: there are dictionaries, vast amounts of literature, including translations. AI with LLMs can train to their heart’s content. Those who use AI for translations already know: the more popular the language pair, both the source and target language, the better the translation. This is understandable – existing texts and translations are the raw material for future translations.

Speech recognition? This is a very typical and popular way of using AI and LLMs. But for such systems to work successfully, millions of hours of human speech and ready-made transcriptions of these audio recordings – i.e., ready-made texts – are needed. The programme learns and uses the acquired recognition experience.

Identifying certain images within large graphic images or videos? This is also a usual task for AI. There are already numerous real applications of such technologies. For example, Ukrainian IT specialists have created technologies to enhance the capabilities of their drones. They train recognition systems on various images to identify camouflaged Russian military equipment. These drones can carry out missions to destroy the enemy even under conditions of powerful jamming systems that disrupt communication between the drone and its operator.

But a curious thing is that none of the applications listed generates anything fundamentally new. This is no coincidence. For now, AI best assists humans in managing vast amounts of data and other information and in handling analytical tasks. Creating something fundamentally new? Very unlikely. More often, what appears new is something existing, reworked with additional information. For example, Instagram filters or deepfake programmes, which significantly distort existing photos, audio, and video recordings.

Artificial intelligence can already recreate the participation of a well-known actor in a new film without the actor’s physical presence. Even professional actors’ associations are successfully negotiating with film companies for royalties for the use of an actor’s image in a new film created without their physical participation in the filming.

Thus, AI’s capabilities resemble a powerful amplifier of human intelligence, especially when tackling large volumes of relatively structured tasks with a known desired result. Writing a poem that might please a loved one is easy if you know what kind of poetry they generally like. Making a loved one happy – even the most powerful AI is unlikely to help with that.

But why such a boom in AI applications? Why do corporations always emphasise AI use in their presentations to consumers and investors? Even if this application is sometimes far-fetched or if the desired goals can be easily and more cheaply achieved without AI? Because AI indeed significantly increases labour productivity, reduces resource costs, adds safety and comfort in many cases. And, of course, because AI is currently riding a wave of popularity. And catching that wave is essential.

AI Benefits? They’re Already Here

The daily mantra of any business is “sell, sell more and more”. Hence, AI is actively used for market and customer analysis. The company accumulates a lot of data in the process, and it also has access to various sources of information. All of this can and should be used to forecast market movements and the behaviour of individual customers. This leads to more effective sales. But it’s not just about sales. Price management and product modification also require the help of AI.

Automating production? That’s already done. But what about automating customer communication? Using live people for this will soon be a sign of luxury service. Chatbots for communication, including voice interaction, not just text chats, are becoming routine. Recognising the customer’s voice, forming a prompt for the service system, generating a response to the customer query, and reacting with a voice message and/or certain actions – each of these steps requires AI systems of varying complexity.
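The chain of steps just described can be sketched as a toy pipeline. The speech-recognition and response-generation stages are stubbed out here; in a real system each would be a separate AI model, and all names below are invented for the example.

```python
def transcribe(audio):
    # Stub: a real system would run a speech-recognition model here.
    return audio["transcript"]

def build_prompt(text, customer_id):
    """Turn the recognised speech into a prompt for the service system."""
    return f"Customer {customer_id} asks: {text}"

def generate_response(prompt):
    # Stub: a real system would query a model trained on support dialogues.
    if "refund" in prompt.lower():
        return "I can help you start a refund request."
    return "Could you tell me more about your issue?"

def handle_call(audio, customer_id):
    text = transcribe(audio)                  # step 1: recognise the voice
    prompt = build_prompt(text, customer_id)  # step 2: form the prompt
    return generate_response(prompt)          # step 3: generate the reply

reply = handle_call({"transcript": "I want a refund"}, customer_id=42)
print(reply)
```

Each stage can be swapped for a more capable model without changing the overall shape of the pipeline, which is why such systems scale from text chats to full voice interaction.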

Self-service is a hit for increasing sales and reducing business costs. Here, dialogue with the customer is the key tool. Without AI, it’s simply impossible to create truly effective self-service, as we can see.

Personalisation is the highest level of customer service because it gives the customer a sense of uniqueness. This makes them more willing to buy more and at a higher price. No person can account for the vast number of details that affect service quality and customer satisfaction. So, a diligent AI with a good memory is needed.

If you break down the cycle of a product’s journey from manufacturer to customer, you can optimise the purchasing process – this is implemented by all successful online stores and trading platforms. What does the customer want, and how can they be offered the optimal response to their query here and now? These tasks are successfully solved by AI systems, which draw on the sales histories of many earlier customers and study the behaviour of the specific customer.
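A minimal sketch of “use earlier customers’ purchases to suggest the next item”: count which items were bought together and recommend the most frequent companion. The order data is invented, and real platforms use far richer models than this co-occurrence count.

```python
from collections import Counter

past_orders = [
    {"pizza", "cola"},
    {"pizza", "cola", "salad"},
    {"pizza", "garlic bread"},
    {"salad", "water"},
]

def recommend(basket, orders):
    """Suggest the item most often bought alongside the current basket."""
    companions = Counter()
    for order in orders:
        if basket & order:                 # order shares an item with the basket
            companions.update(order - basket)
    return companions.most_common(1)[0][0] if companions else None

print(recommend({"pizza"}, past_orders))
```

With this toy data, “cola” wins because it appears in two of the three orders that contain pizza.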

The same applies to the supplementary sale of financial services. Banks and insurance companies accumulate vast amounts of information about both the entire customer population and each individual. A person pays for an airline ticket? Perhaps offer them a travel insurance policy? This is one of the most harmless supplementary sales possible.

Detecting fraud at early stages is a dream for any manager, as theft and fraud hurt profits. Unusual, and thus risky, customer behaviour is calculated by an AI system very quickly and easily because “it has seen it all before”.
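A toy version of “unusual behaviour is flagged quickly”: score each new transaction against the customer’s history with a z-score and flag large deviations. The threshold and amounts are illustrative assumptions; real fraud systems combine many such signals with learned models.

```python
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction that deviates strongly from past amounts."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > threshold

history = [42.0, 38.5, 45.0, 40.0, 44.0, 39.0]
print(is_suspicious(history, 41.0))    # a typical amount
print(is_suspicious(history, 950.0))   # far outside the usual range
```

The “it has seen it all before” effect comes from the history: the more past behaviour the system has observed, the sharper its notion of normal becomes.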

The automation of processes doesn’t end with customer communication. Logistics needs artificial intelligence since it constantly uses the analysis of diverse data and the making of optimal decisions – from where to where to transport, how much to order, how to manage inventory, and how to track the consumption of various resources.

Is human capital the highest value of a company? Yes, but managing talents and attracting truly valuable employees require a very deep understanding of the motives and effectiveness of specific employees. Of course, this can be done without AI, but it would require inflating HR departments to huge sizes, and even then, success is not guaranteed. However, AI excels at analysing careers, studying interactions with colleagues, and monitoring work efficiency.

In reality, the use of AI is becoming incredibly widespread, and it’s impossible to list all the ways it is used in one publication. Here, the most visible and widespread examples are given. But it’s not all easy and straightforward. This powerful tool comes with risks.

HR in Real Time

Sudeep Srivastava, Co-Founder and Director of Appinventiv (India):

“The impact of artificial intelligence on business is most noticeable in talent acquisition compared to any other area. AI is used in several areas that belong to the superset of Talent Acquisition, such as candidate search, resume screening, using chatbots to interact with candidates, and then using AI-based facial recognition software to recognise the emotions the candidate displays.

With the advent of NLP (Natural Language Processing), chatbot technology, and sentiment analysis, it has now become much easier for companies to analyse and get real-time feedback from their employees to take the right actions”.

The Risks of Artificial Intelligence: More Than We Expected

Perhaps the most convincing negative consequences of implementing AI come from tools like deepfakes. Earlier in this article, we touched upon applications such as creating films and videos using the images of famous actors without their physical participation. Why not use this too, thought the fraudsters.

Ferrari very nearly became the latest victim of an AI deepfake. One Monday in July, one of the company’s executives began receiving rather strange messages from CEO Benedetto Vigna. “Hey, have you heard about the big acquisition we’re planning? I might need your help,” read one of them. The fraudsters even resorted to artificially generated voice communication. The voice imitating Vigna was strikingly close to the original, even featuring his characteristic southern Italian accent.

Since the conversation involved significant financial operations, the executive who received the call tried to verify the CEO’s identity more carefully. He asked “Vigna” about a book he had recommended recently. At that point, the interaction with the fake “Vigna” ceased.

This is just one of many episodes, and they often end successfully for the fraudsters. Deepfakes are a vivid but relatively infrequent use of artificial intelligence; there are much more serious and large-scale problems.

The use of personal data for training artificial intelligence is one of the most significant challenges in this area. Without large amounts of data, often quite sensitive, AI programmes would be worthless. But how can confidentiality be maintained, and non-dissemination guaranteed, when the results of training are used by parties who were never authorised to store the data? Personal information is not disseminated in its original form, only the results of the AI’s work are shared – but where are the boundaries?

Liability for damage caused by an AI-driven system is also a significant risk of AI implementation. Earlier in the article, we mentioned the legal conundrum: who should be held responsible for the consequences of a road incident involving a vehicle controlled by an AI autopilot?

What AI is certainly capable of is creating a lot of content based on existing content. Let’s assume the existing original content is neutral, objective, and honest. We’ll call it content 1.0. A rather risky process can then start. With AI, someone can begin generating secondary content in large quantities and with a specific direction. Let’s call this content 1.1. This content will be accessed by various machine learning programmes, which will start consuming this secondary content. Moreover, such content will likely remain freely accessible, unlike some valuable original content.

What happens next is clear: in the second round, AI programmes will start creating the next generation of secondary content, let’s call it content 2.1. In the construction of this generation of content, the ideas and distortions introduced earlier in the generation of content 1.1 will have a significant advantage.

What will we have in the end? A distorted information field. Finding undistorted information in it will be almost impossible. This is just one of the many problems created by the risks of using AI.
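The feedback loop above can be illustrated with a toy simulation: each generation of models trains on a pool that mixes original content with earlier synthetic output, so the distorted share compounds. Both rates below are illustrative assumptions, not measured figures.

```python
def distorted_share(generations, synthetic_ratio=0.5, distortion=0.1):
    """Fraction of the training pool carrying distortions after each
    generation adds synthetic content at `synthetic_ratio`."""
    share = 0.0
    history = []
    for _ in range(generations):
        # New synthetic content inherits existing distortions and adds more.
        new_share = min(1.0, share + distortion)
        share = (1 - synthetic_ratio) * share + synthetic_ratio * new_share
        history.append(round(share, 3))
    return history

print(distorted_share(5))
```

Whatever the exact parameters, the share only moves in one direction: once secondary content enters the pool, each round of training inherits the previous round’s distortions.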

The spread of AI tools requires enormous amounts of computation and the storage of vast quantities of information. This demands not only expensive equipment but also constant, substantial energy consumption. Here we face a contradiction: humanity is trying to prevent climate change, but the traditional energy sources consumed by AI create a carbon footprint and thus put additional strain on the climate.

Therefore, governments are trying to regulate all this somehow. But such regulation is not an innocent matter. When you regulate, you have to limit someone’s capabilities. And limited capabilities are obstacles to progress. There must be a golden mean somewhere, but it is still being sought.

The EU Takes on Regulation

Oliver King-Smith, Founder and CEO of smartR AI:

“Essentially, the EU’s Artificial Intelligence Act attempts to classify AI systems based on their level of risk:

  • Unacceptable Risk: These AI uses are banned outright. Such things as state social scoring systems (this reminds me a bit of that “Black Mirror” episode) and certain biometric identification systems are prohibited. They have also condemned AI systems designed to manipulate human behaviour. If you ask me, that’s a good thing. 
  • High Risk: This category includes AI used in critical areas such as infrastructure, education, law enforcement, and healthcare. To comply, these systems will have to undergo significant scrutiny.
  • Limited Risk: Think of chatbots. They must meet some transparency requirements, but nothing too strenuous. 
  • Minimal Risk: Everything else falls here. No additional obligations, but they are encouraged to follow best practice.

…I was pleasantly surprised to see how much attention the Act pays to startups and small businesses. They talk about “simplified compliance methods” that shouldn’t be overly expensive for small companies. But here’s the catch – they don’t actually define what “overly expensive” means”.

Source: The Gaze