"Who am I?" Descartes asks in his perhaps most well-known work, Meditations 1. The answer he gives us is that the “I” that is asking is not in any way determined by how you perceive yourself, but by the actual things you think. Thinking itself, Descartes says, constitutes consciousness. In stark contrast to this line of reasoning, contemporary functionalism in the philosophy of mind rejects metaphysical dualism and claims that mentality is comprised of functions and processes. A condition for creating (artificial) intelligence is that there is an isomorph relation between the hardware and the original subject. And so, many functionalist philosophers believe real AI intelligence is possible. It is possible to copy the subject, the human brain and achieve human-like behaviour.
Artificial intelligence, then, views the human brain as its blueprint. Where this blueprint has synaptic networks, the AI brain has neural ones: a vast number of artificial neurons, interconnected. The rate at which those neurons can process information, a measure called cognitive velocity, is the litmus test by which AI developers gauge the prowess of cognition. This velocity is meant to be more than a measure of raw processing power; rather, cognitive velocity attempts to capture how logic, creativity, and intuition can speed up artificial thought. It is the quality of the interconnectivity between the nodes that allows for a continuous transformation of information.
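To make the idea of interconnected artificial neurons concrete, here is a minimal sketch in Python with NumPy (the layer sizes and random weights are purely illustrative assumptions, not any particular production system) of how an input is transformed as it passes from node to node:

```python
import numpy as np

def relu(x):
    # Non-linear activation: lets stacked layers express more than straight lines.
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass an input vector through a list of (weights, bias) layers."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)  # each neuron sums its weighted inputs
    return x

rng = np.random.default_rng(0)
# Three small, fully interconnected layers; sizes chosen purely for illustration.
sizes = [4, 8, 8, 2]
layers = [(rng.normal(size=(m, n)), np.zeros(m)) for n, m in zip(sizes, sizes[1:])]

print(forward(rng.normal(size=4), layers))  # the network's output for one input
```

Each layer’s output becomes the next layer’s input; it is this chain of weighted interconnections, scaled up by many orders of magnitude, that carries the continuous transformation of information described above.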
I believe Descartes would have had issues with the notion of present-day AI. To him, intelligence is, as I hinted above, not grounded in the material world. I am convinced, however, that Descartes would have found the subject of AI worthy of a good debate. And such a—public—debate needs to be had. Unfortunately, in present times, rivalling AI labs seem to prefer to push ahead at light-speed with little reflection on the consequences of the technology they are advancing. Dissenting voices appear here and there, but the race towards new scientific milestones often grants them little credence. The Godfather of AI, Geoffrey Hinton, said in an interview with the BBC:[2] “I’ve come to the conclusion that the intelligence we’re developing differs greatly from the intelligence we have . . . So it’s as if you had 10,000 people and whenever one person learned something, everybody knew it. And that’s how these chatbots can know so much more than any one person.”
Faced with the spectre of such a collective assimilation of knowledge, should we not all feel a little uneasy? I know I do. I sometimes wonder where AI will be in ten years. Now, don’t get me wrong: I’m the first to admit that AI does wonders in physics, medicine, and astronomy. Concerning astronomy, a team of astrophysicists at the Flatiron Institute this year used AI to improve their knowledge of the cosmological parameters that describe the physical universe. Such findings could one day help us answer the age-old question: how did it all come to be? And AI, as we know it, also does its magic right by our desks. I use AI when I code, to help me spot errors. I even used AI to check my grammar when writing this article. In my defence, I own quite a few grammar books!
But as with most technological paradigms, there is both an upside and a downside. A snapshot of current AI research reveals a confounding terrain. Evidence[3] suggests that advanced large language models (LLMs in AI speak) have a tendency to degenerate when trained on recursively generated data, progressively forgetting the true underlying data distribution. The LLM can become erratic, downright unpredictable. Such unpredictability is not a good thing when performing mission-critical tasks. An AI named The AI Scientist modified its own code to extend the time it had been given to solve a problem during a complex test. This came as a total surprise to everyone, because cheating is not what we would expect from a machine. And of course, there is the AI that plugs into vast data sets, some of them far from unbiased. This form of AI, which trawls the web, is quite familiar to us all. Most of us have by now become accustomed—if one can become accustomed to these things—to fake news sites, fake videos, fake social media posts and so on. A recent statistical survey by Deep Media[4] estimates that social media users across the world will share about 500,000 video and voice deepfakes in 2024.
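The forgetting that Shumailov and colleagues describe can be illustrated with a toy experiment (a deliberately simplified sketch in Python, not the paper’s actual method): fit a simple token-frequency model to some data, sample fresh synthetic data from that model, refit on the samples, and repeat. Rare tokens that happen not to be drawn vanish for good, so each generation knows a little less than the one before:

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "real" data over 50 token types with a long, Zipf-like
# tail of rare tokens (vocabulary and sample sizes are illustrative).
vocab = 50
weights = 1.0 / np.arange(1, vocab + 1)
data = rng.choice(vocab, size=500, p=weights / weights.sum())

for generation in range(1, 11):
    # "Train" a model: estimate token frequencies from the current data.
    counts = np.bincount(data, minlength=vocab)
    model = counts / counts.sum()
    # The next generation trains only on samples drawn from that model.
    data = rng.choice(vocab, size=500, p=model)
    distinct = np.count_nonzero(np.bincount(data, minlength=vocab))
    print(f"generation {generation:2d}: distinct tokens remaining = {distinct}")

# Once a token's estimated frequency hits zero it can never reappear, so the
# model's vocabulary only ever shrinks: a toy analogue of model collapse.
```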
Considering this new and confounding terrain, we need to take heed of a changing everyday world that has so far progressed at post-industrial speed but is now running past us on steroids. We need to establish public data points that provide insight into the current realm of AI research. We need to make agencies aware of which lines not to cross. Because debate is not enough by itself; no armchair philosopher has ever brought about change through arguments alone. Thankfully, there are some beacons on the horizon. One such positive development: the European Union’s AI Act,[5] approved by the EU in May 2024. This act requires transparency and accountability regarding high-risk AI systems. It seeks to, and I quote:
“...address risks specifically created by AI applications; prohibit AI practices that pose unacceptable risks; determine a list of high-risk applications; set clear requirements for AI systems for high-risk applications; define specific obligations for deployers and providers of high-risk AI applications; require a conformity assessment before a given AI system is put into service or placed on the market; put enforcement in place after a given AI system is placed into the market; establish a governance structure at European and national level”
Although the AI Act is an excellent first step, it turns out that its rule-based formulations are restricted to large-scale AI systems. Many small yet powerful new AI systems fall outside its scope. There is also no mention of how accountability is to be demonstrated to us, civil society. Yet decisions around AI will keep affecting us, positively or negatively, while the final say-so is at the moment out of our hands. And I’m convinced that our say-so on AI is necessary in the long run. We cannot and should not leave fierce technological competition in this field unchecked. Without a heightened public debate, we face a grey revolution: one where this technology stays out of the daylight. We must not let this happen, because AI has the potential to be a friend or a foe. And which one of them we create should be our—intelligent—decision.
Notes
1. René Descartes, Meditations on First Philosophy, Oxford University Press, 2008.
2. Chris Vallance and Zoe Kleinman, AI 'godfather' Geoffrey Hinton warns of dangers as he quits Google, BBC, May 2023, https://www.bbc.com/news/world-us-canada-65452940
3. Ilia Shumailov et al., AI models collapse when trained on recursively generated data, Nature, July 2024, https://www.nature.com/articles/s41586-024-07566-y
4. Neil Jacobson, Deepfakes and Their Impact on Society, OpenFox, February 2024, https://www.openfox.com/deepfakes-and-their-impact-on-society/
5. European Parliament, EU AI Act: first regulation on artificial intelligence, June 2023, https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence