
What is AI and how to regulate it?

Opening Remarks

Knowledge is not established by authority. Knowledge is established through criticism, until the best explanation remains standing.

Yet, the urge to appeal to authority is powerful. Notice the tight alignment between those ringing alarm bells and those making arguments from authority (“person/organisation X says”, as opposed to “this explanation says”). Once you notice it, it cannot be unseen.

As technologists, we have an onus to present explanations; as policymakers, we must engage with explanations – for explanations and criticism are the basis of the society and prosperity we have today.

Knowledge is not “just data”

There is a misperception – egged on by the trend of “big data” – that knowledge is simply a combination of data and trends.

This is false, for a number of reasons:

  1. When Galileo posited that the earth travelled around the sun, there wasn’t a change of data. The data was largely the same, but the explanation was different – his explanation better accounted for the data!
  2. When formulating an explanation, we decide what “data” is relevant. We decide what the “outliers” are in a dataset. In the context of AI, we decide what data to include for training [1].
  3. We often develop theories before we have the data. The Higgs Boson was hypothesised and only later was “discovered/detected”.

And so, knowledge is not purely predictive. Knowledge requires hypothesising – beyond data that is currently available. And knowledge requires criticism to remove the explanations that are false.

[1] In AI (transformer) training, it is widely known that data is the secret sauce. You get a great transformer by cleaning the data and controlling what to include and what to leave out. Through an obsessive focus on the (boring) task of data preparation, the French company Mistral has built GPT-4 class transformers at a fraction of OpenAI’s cost.

What do we mean – in 2024 – by AI?

We mean the prediction of the next word in a partially completed sentence – just like this:

THE QUICK BROWN FOX JUMPED OVER THE _____.

AI today is about predicting that next word (LAZY) based on the previous words in the sentence. This prediction is done by tuning the transformer to match the patterns in a very large amount of language (now roughly 15 trillion words) from the internet and other sources such as books.

The specific way those patterns are stored is called a “transformer”. The transformer was invented in 2017, in a well-known paper called “Attention is All You Need”. The transformer approach gives each previous word in a sentence a small “say” in what the next word in the sequence will most likely be. A word’s “say” is based on what that word is (e.g. “quick” or “the” or “and”) and on how far back it sits relative to the word being predicted.
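To make this concrete, here is a toy sketch in Python of that “say” idea. The candidate words, vote weights and distance rule are all invented for illustration – a real transformer learns its weights from data and looks nothing like this inside.

    # Toy illustration only: every earlier word gets a "say" in the next word,
    # weighted by what the word is and by how far back it sits. All numbers invented.
    candidates = ["LAZY", "DOG", "QUICK"]

    votes = {  # identity-based "say": how much each known word favours each candidate
        "quick": {"LAZY": 0.2, "DOG": 0.5, "QUICK": 0.3},
        "fox":   {"LAZY": 0.3, "DOG": 0.6, "QUICK": 0.1},
        "over":  {"LAZY": 0.5, "DOG": 0.3, "QUICK": 0.2},
        "the":   {"LAZY": 0.6, "DOG": 0.2, "QUICK": 0.2},
    }

    def predict_next(words):
        scores = {c: 0.0 for c in candidates}
        for distance, word in enumerate(reversed(words), start=1):
            weight = 1.0 / distance  # position-based "say": nearer words count for more
            for candidate, vote in votes.get(word, {}).items():
                scores[candidate] += weight * vote
        return max(scores, key=scores.get)

    print(predict_next(["the", "quick", "brown", "fox", "jumped", "over", "the"]))  # LAZY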

This approach can then be extended to images (predicting the next pixel, or set of pixels) or to sound (predicting the next chunk of digital audio).

So, what can AI (aka transformers) do?

There are complex patterns in our language. Transformers allow us to represent those patterns. And, with a large enough amount of data (called the “training data”), the patterns represented by the transformer are good enough to predict the next word with quite high accuracy!

So long as the sentences being completed are in or around the content of the “training data”, the transformer works well as a predictive tool. Provided the use case exhibits patterns (biology, chess, 3D worlds), and you have sufficient “training data” covering those patterns (even if you do not know in advance what those patterns are), predicting the next word stands a good chance. For new patterns… not so much.

What can AI (aka transformers) not do?

Transformers are – by definition – prediction-only tools. Their great strength is in being able to represent a large body of data. Their great weaknesses are:

  1. An inability to criticise [3] (like Galileo)
  2. An inability to reach for knowledge (like Einstein reaching for general relativity)
  3. Rapidly decreasing accuracy with every next word that is predicted [4].

So, why is it that transformers are only capable of prediction? It is because the “prediction-only” design of transformers is built-in. Let’s briefly return to our short sentence:

THE QUICK BROWN FOX JUMPED OVER THE _____.

Now, think of the AI model (the transformer) as a brake that allows you to control where a slot machine stops. The slot machine has the following possible outcomes:

  • LAZY
  • DOG
  • QUICK

Before training, a language transformer is just like a slot machine, and will evenly choose between the three possible words above. Of course, we know the correct word is “LAZY”. So, when the slot machine lands on DOG (or on QUICK), we note that this differs from the correct word, and we adjust the brake to bias the slot machine a little more towards the word LAZY.

This is how a language model (transformer) is trained – by comparing the transformer’s prediction to what the next word “should” be.
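Here is a rough sketch, again with invented numbers, of what “adjusting the brake” amounts to: nudge the model’s probabilities towards whatever word actually came next in the training text, and repeat.

    # Toy illustration of training: start with an even slot machine, then nudge the
    # probabilities towards the word that actually came next ("LAZY"). Numbers invented.
    probs = {"LAZY": 1 / 3, "DOG": 1 / 3, "QUICK": 1 / 3}  # untrained model
    correct = "LAZY"
    learning_rate = 0.1

    for _ in range(20):  # seeing the same training sentence twenty times
        for word in probs:
            target = 1.0 if word == correct else 0.0
            probs[word] += learning_rate * (target - probs[word])  # adjust the "brake"
        total = sum(probs.values())
        probs = {w: p / total for w, p in probs.items()}  # keep probabilities summing to 1

    print(probs)  # heavily biased towards "LAZY" after repeated corrections

After twenty passes over the same sentence, most of the probability sits on LAZY – which is exactly why the training data dominates what the transformer will go on to predict.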

Once you realise this, you realise how greatly a transformer (AI) is constrained. Quite literally, if you train the transformer on more sentences that say “The quick brown fox jumped over the EGG” or “Pluto is a planet”, the transformer will start to predict egg jumping and defunct planets.

This is also why language models (transformers) suffer from “the reversal curse”, whereby they respond accurately to the question of “Who is Tom Cruise’s mother?”, but hallucinate on the question of “Who is Mary Lee South’s son?”. Why…? Because Mary Lee South’s name primarily appears on the internet after Tom Cruise. And the solution…? Include training data with sentences in the reverse order (i.e. talking about Mary Lee South’s son, Tom Cruise – rather than Tom Cruise’s mother)!
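A rough sketch of that fix, with invented template sentences (real augmentation pipelines are far more careful than this), might look like the following:

    # Toy illustration of the fix: state the same relationship in both directions,
    # so the reverse order also appears in the training data. Templates invented.
    def both_directions(child, parent):
        return [
            f"{child}'s mother is {parent}.",  # forward order (common on the web)
            f"{parent}'s son is {child}.",     # reverse order (the missing direction)
        ]

    for sentence in both_directions("Tom Cruise", "Mary Lee South"):
        print(sentence)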

The above predictive behaviours inform how best to “train” transformers. Here are just two heuristics, and there are many more:

  • Data quality matters. AI trained on Wikipedia performs better than AI trained on a raw crawl of the internet – because Wikipedia has a form of editing/review process. Bad answers shift the transformer towards representing the wrong patterns.
  • Deduplication – where duplicate sentences and paragraphs are removed from the training data (sketched below) – improves performance. Deduplication stops the transformer shifting its patterns towards a certain prediction just because the same text appears many times in the same form [5].
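As a rough sketch of that second heuristic – exact-match deduplication only; real pipelines also remove near-duplicates – one might do something like this before training:

    # Toy illustration of exact-match deduplication before training.
    import hashlib

    def deduplicate(paragraphs):
        seen = set()
        unique = []
        for paragraph in paragraphs:
            key = hashlib.sha256(paragraph.strip().lower().encode("utf-8")).hexdigest()
            if key not in seen:  # keep only the first copy of each paragraph
                seen.add(key)
                unique.append(paragraph)
        return unique

    corpus = [
        "The quick brown fox jumped over the lazy dog.",
        "The quick brown fox jumped over the lazy dog.",  # exact duplicate: dropped
        "The fox jumped over the lazy dog.",              # near-duplicate: kept (a limitation)
    ]
    print(deduplicate(corpus))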

[3] Of course, any tool that can pattern match can trivially produce any desired output, provided it is contained within the training data.

[4] Rapidly = Exponentially. Notably, human knowledge does not degrade in this manner as humans can do error correction (because they have explanations!). See footnote [6] for notes on retrieval augmented generation.

[5] A more subtle, but critical, point is that training on Wikipedia or the web works because of repetition of the same ideas (with different phrasing, not exact matches). A property of good explanations is that they tend to propagate among humans (i.e., in what we write). This does not necessarily make those explanations true, but it does give the dataset of what is on the web strong predictive power.

[6] Interestingly, feeding in live data (e.g. the weather or info from a database) to an AI helps but doesn’t entirely correct errors in its predictions. If a transformer attaches high statistical confidence to the next word it predicts, any text input – even input that should shift how the model answers – may not have enough of a statistical “say” to shift the transformer to predicting the correct word. For this reason, best performance is achieved in narrow domains by both a) training on additional data for that domain and b) providing as much live data/context as possible within the sentence/paragraph that is being fed to the transformer.
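As an illustration of point b) in footnote [6], here is a minimal sketch of stuffing live data into the text fed to the transformer. get_weather() and the prompt wording are hypothetical stand-ins for whatever live source and template you actually use.

    # Toy illustration of feeding live data into the text given to the transformer.
    def get_weather(city):
        # Hypothetical stand-in for a real live data source (an API, a database, ...).
        return {"city": city, "condition": "rain", "temp_c": 9}

    def build_prompt(question, city):
        live = get_weather(city)
        context = (
            f"Live data: in {live['city']} it is currently "
            f"{live['condition']} and {live['temp_c']} degrees C."
        )
        return context + "\n" + question

    print(build_prompt("Should I bring an umbrella today?", "Galway"))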

How should we regulate AI today?

The question of how to regulate AI today is the question of how to regulate transformers. Transformers are a form of statistical regression (drawing a line to fit some dots) that is more predictive in some ways (due to more data) and weaker in others (more hallucination, due to recursive next-token prediction). Neither transformers nor statistical regression are capable of explanation.

Just as we don’t regulate statistics, I don’t see why we would regulate the transformer.[7][8]

Rather, our attention should be focused on the overarching levels of a) data privacy, ownership, copyright and impersonation, and b) the same physical threats that we regulate today (e.g. nuclear arms, bio weapons, cyber security, national/regional defence).

[7] There is the argument that AI (transformers) becomes uniquely dangerous when connected to the physical world. The counterargument is that software today – including software used for physical assets – is already highly automated. Yes, transformers hallucinate (make bad predictions) – but so does using a normal distribution if you’re trying to model a fat tail! Software systems are designed to be robust to these realities.

[8] There is the argument that AI should be regulated if it can “recursively self improve”. The counterargument is that a trivial optimisation problem can self-improve if it is in a well-defined (convex) domain. Much software today runs self-improvement programs (get me from Dublin to Galway in the shortest time/distance). What transformers cannot do is criticise or hypothesise beyond their patterns.

How should we regulate AI tomorrow?

I propose three scenarios:

I) A future where AI still means transformers, and other statistical/predictive tools.

II) A future where AI means the ability to a) criticise and b) hypothesise beyond any regular interpolation or extrapolation from the dataset. (Criticisms and conjectures based solely on patterns in the training data do not count!)

Scenario II) requires new technology… new explanations. I do not know what form they will take. I believe we will find those explanations some day, and they will constitute major progress.

When we do develop “a brain” capable of criticism and conjecture, I would imagine we treat that brain as a human. Alternatively, I worry we might re-trace the same paths trodden in subjugating humans based on skin colour or sex.

III) A (likely) future where we discover capabilities beyond criticism and conjecture, and beyond anything we can think of today.

THIS is what we should be thinking when it comes to regulation and policy – both TODAY and TOMORROW. We should work to improve our approach, but our approach should not be brand new. Our legal and political systems already involve a) a broad set of responsibilities and liabilities for individuals and organisations, and b) adding and adapting specific laws on an incremental basis as an understanding of risks and harms emerges.

Trying to provide specific regulation for a non-specific future technology neither improves safety nor enables progress. If anything, it creates a system where players are encouraged to play up abstract risks, and their handling of those risks, so as to a) signal their responsibility/care, or b) distract from more important matters [9].

By contrast, a legal system that incorporates liability based on outcomes, coupled with incremental rule-making, can be both forward-looking and adaptive.

Conclusion

In statistics, and in AI, there is a term called “over-fitting”. It occurs when the transformer (AI) fits the training data perfectly, but loses its ability to predict the next word for sentences it has never seen [10]. This same “over-fitting” phenomenon occurs when we pass laws that are too specific and cannot adapt to reality. Indeed, we have the legal systems we have today – such as common law, and even civil law – because they evolved to, and survived by, avoiding this problem of over-fitting.
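For the statistically inclined, here is a minimal sketch of over-fitting using plain polynomial regression (numpy only; the data and polynomial degrees are invented for illustration): the wiggly high-degree fit matches the training points almost perfectly, yet predicts badly on a point it has never seen.

    # Toy illustration of over-fitting with polynomial regression.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0.0, 1.0, 8)
    y_train = 2.0 * x_train + rng.normal(0.0, 0.1, size=8)  # roughly a straight line

    simple_fit = np.polyfit(x_train, y_train, deg=1)  # sensible model
    wiggly_fit = np.polyfit(x_train, y_train, deg=7)  # passes through every training point

    x_unseen = 1.2  # a point outside the training data
    print("simple model :", np.polyval(simple_fit, x_unseen))  # near 2 * 1.2 = 2.4
    print("wiggly model :", np.polyval(wiggly_fit, x_unseen))  # typically far from 2.4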

Now… wouldn’t it be ironic if we over-fit our laws in trying to regulate transformers…

[9] True for companies and institutions. Some samples: Can transformer companies be profitable if distribution is controlled by Google/Microsoft and models are largely commoditised? How does a currency union work long-term without a fiscal union – do citizens want that? How do you achieve energy security (much less a carbon neutral grid) when reliant on an unofficial policy of stabilising a renewables grid on a weekly to seasonal time-scale with natural gas (and government policy has made it illegal to even consider nuclear power in policy analyses [Ireland])? How best to manage immigration in the face of anti-immigrant sentiment, tight housing markets and the strong economic benefit/necessity of immigrants in economies with falling birth rates?

[10] To be more accurate: “sentences it has never seen but whose completion relies on patterns that are present in the training data”.

Post-script on AI and Jobs

We panicked about technology replacing jobs when unemployment rates were low and employment taxes were high.

– Me, though I’m sure others have come up with this before.
