The IT world, and the world at large, is abuzz with the promise and threat of AI (artificial intelligence). One view holds that AI will be a globally disruptive technology, leading to societal changes akin to those of the industrial revolution. Another, more apocalyptic, view holds that AI will surpass humans in every way and eventually bring about our extinction. A more moderate view is that AI will simply provide tools that incrementally improve all human activity. Which of these views most accurately depicts the future will be for future historians to decide, be they human or robot. But what does the AI landscape look like right now? What benefits have already been delivered, and which pitfalls have we already recognised?

The AIs of today are not what experts call artificial general intelligence (AGI). The AIs in widespread use today are in fact LLMs, or large language models. They are built to recognise patterns in large amounts of data and to extrapolate from those patterns. The process of feeding the models with data is called training. The training data can be publicly available digital material: interactions on social media, digitised books and magazines, the contents of webpages, and so on. More specialised LLMs can be trained on specific types of data: X-ray images paired with diagnoses in order to automate diagnostic work, for example, or financial information on people and institutions in order to estimate bankruptcy risk. The quality of the resulting AI depends on the quality of the data, as well as on the quality of the algorithms for pattern recognition and extrapolation.

What are examples of the benefits of AI, as we see it today? Many tool developers are adding AI support for simple creative tasks, like generating or reworking text or images for specific needs. AI tools can be especially good at editing formulaic texts, such as CVs, job applications and other formal letters. They can also be used to enhance and edit real images, or to generate images from instructions. Completely AI-generated pictures are still largely recognisable as fake, but the early telltale flaws, such as the wrong number of fingers or Escherian mishaps with limbs protruding from impossible places, are becoming less common. There are also promising reports of special-purpose AIs making significant strides, e.g. by recognising early stages of cancer in X-rays at a higher success rate than human experts.

What are examples of problems stemming from the introduction of AI? Some problems are to be expected with any paradigm shift, while others are more insidious. An example of the former is the difficulty teachers face when students use AI to write their school papers without actually learning the subject matter. New tools call for new teaching methods, but developing them takes time. At the moment we are roughly where Socrates was when he complained that the introduction of writing would make the youth lazy by lessening the need to memorise knowledge.

Another paradigm-shift issue is the fair remuneration of effort. What rights do the creators of the original training data have to payment for derivative work? How does an artist or an author keep their intellectual property out of LLM training data? And what happens when AI-created work makes it financially impossible for humans to keep creating? Then the well of training data dries up, and we are left with a closed loop of AI training on data it has itself generated. As human creativity is so central to the human experience, this is not a goal to strive for. We need to find ways to share the prosperity generated by the adoption of AI equitably, so that humans can continue to thrive.

An example of the more insidious problems is the use of AI for nefarious activities. There are documented frauds in which an AI-generated voice makes the target believe they are talking to a family member in crisis. There are endless possibilities for using AI to generate propaganda and disinformation. Lawmakers are trying to mitigate this by demanding that AI-generated content be marked as such, but a scammer or a troll factory is not bothered by legal restrictions. Addressing this will require massive investments in media literacy and fraud-awareness training.

Another insidious problem with AI is its carbon footprint. The computation needed to generate a specific output for every request from every user is immense, and every computation uses electricity. At the same time, humanity stands at a climate crossroads, with some analysts saying we have already passed the point of no return. We need to reduce our electricity consumption, but with the introduction and commercialisation of AI we are massively increasing it. Maybe our final demise will not come because a far superior artificial intelligence deems us unnecessary. Maybe we are orchestrating it ourselves, one silly AI-generated picture of ourselves as an action figure at a time.
