Philosophical and political challenges of computational automation
Contrary to what the term “artificial intelligence” might suggest, contemporary digital technologies do not learn or invent. They are writing systems and calculation devices. Thanks to the (human) indexing of massive quantities of data, and by means of very specific mathematical operations, these algorithmic systems make it possible to “generate” textual or visual content comparable to the so-called “human” content on whose statistical exploitation they rest.
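To make this claim concrete, here is a minimal, purely illustrative sketch (a simple bigram model, not the architecture of any actual “generative AI”): the “generated” text is nothing more than a statistical recombination of sequences extracted from human-written input.

```python
import random
from collections import defaultdict

def build_bigram_model(corpus_tokens):
    """Count, for each token, how often each following token occurs in the corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for current, nxt in zip(corpus_tokens, corpus_tokens[1:]):
        counts[current][nxt] += 1
    return counts

def generate(model, start, length=10):
    """Produce a sequence by repeatedly drawing the next token
    in proportion to its observed frequency in the corpus."""
    token, output = start, [start]
    for _ in range(length):
        followers = model.get(token)
        if not followers:
            break
        tokens, weights = zip(*followers.items())
        token = random.choices(tokens, weights=weights)[0]
        output.append(token)
    return " ".join(output)

# The "generated" output is entirely derived from statistics
# extracted from human-written text.
corpus = "the machine does not think the machine calculates".split()
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

However crude, the example shows the general principle at stake: calculation over traces of human expression, not learning or invention in any strong sense.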
If the notion of “artificial intelligence” must be criticized, it is first in order to deconstruct the analogies between minds, brains and computers (analogies rooted in the most classical metaphysical dualisms), and thereby to open up a reflection on computational or digital automatons that is at once epistemological, anthropological and political.
Such a critique can be called “pharmacological” insofar as it does not aim to denounce or condemn digital devices, but rather to question the theoretical as well as practical limits of computational technosciences and automation, while proposing alternative models.
The dazzling development of “generative artificial intelligences”, which are now being integrated into every digital ecosystem (search engines, office suites, computer code, etc.), is preparing a major technological shift that could ultimately lead to the automation of expression and thought.
These extractivist systems rely on the exploitation of natural and cultural resources, aggravating the ongoing ecological catastrophe as well as the risks of symbolic proletarianization and standardization. In such a context of radical and disruptive innovation, is it still possible to put algorithms at the service of hermeneutic and contributory practices?