Both the rate of publication and the quantity of publications that delve into the subject of AI have increased, here on Substack and abroad. For starters, I am neither here nor there when it comes to AI; it seems to be in a stage similar to that of videogames put in early access1: the developers, instead of giving their customers an early trial mode of a game in development, should wait, finish the game, and sell it. In the current stage, it makes no sense to ponder what AI can and cannot do; it is in a stage where it is best left to a minority of people to test out this gadget, rather than to the entire populace of late-stage democracies.
The cause of this increase appears to be, from what I can gather, that fellow scribblers on this platform, many of whom live off their customers’ freely given money, increasingly use artificial intelligence to write their posts. For the sake of argument, given that this might be the case, some words are in order, given my condition as one writer among others.
First and foremost, if people are using AI to write for them, and people are paying for their writing, the problem is not, of course, AI. In that case, the problem is a much simpler one: AI ghost writers2 combine lessened intelligence with a weakened sense of what’s right3. There is nothing new in having ghost writers, and well-known recipients of fame; probably, many of the acclaimed writers we find at our local first-hand bookstores have not written their books, or at least not all of them. The problem with AI ghost writing is a moral one. Without AI, such scoundrels and parasites would search for other avenues; on this topic, there is no need to panic, nothing ever happens. On this point we already find our theme: AI can only amplify the latent problems. Saint Thomas Aquinas said that God’s grace did not eliminate the natural capacities of man but rather perfected them. On a similar note, one might say that artificial intelligence does not eliminate, but instead amplifies, the respective stupidity and malice of the user.
On the point of AI ghost writers we need to apply a very useful tool, Hanlon’s Razor: never attribute to malice that which is adequately explained by stupidity. I think that settles the issue of potential AI ghost writing on Substack, here, yesterday and tomorrow. At the end of the day, if people are using an underdeveloped artificial intelligence to write prose, fiction or verse, it boils down, ultimately, to mediocrity.
We might move on to a more interesting case of AI, one that might arouse something beyond dignified anger, something like cunning.
The AI Question
The poles of the debate, and my own take.
The quantity of articles and posts on the subject of AI is inversely correlated with the quality of the publications. In other words, there are so many mediocre, half-baked pieces about AI, pro and con, that one wishes, with malice, that artificial intelligence would write the articles for its own critics. I must make a confession: I am not much impressed with the level of discussion concerning artificial intelligence, be it the loss of jobs it produces, be it the political danger it implies, be it that AI will take away our humanity4.
When it comes to AI, the sociologist has first and foremost the task of constructing the ideal types of both sides of the issue, basically, the broad strokes of the worldviews or ideologies for or against AI. Most fundamentally, there are two poles towards which people are attracted: one pole if they dislike AI, one pole if they love AI. Names are imperative for any classification, so we might as well name the positions: the humanist and the optimist positions.
The name for the general critiques and critics of AI is humanistic, humanism, human. It is easy to understand why this position is called so: no, it has nothing to do with its adherents being fond of Renaissance painting, for in truth they have more in common with the thought of Martin Heidegger, a professed enemy of humanism5, than with any Renaissance philosopher. This position is called humanist because it rests its case, ultimately, on a distinct argument: there is a human element, condition, property or nature that AI, or any gadget for that matter, cannot imitate. A good case of such a human element, often brought up, is consciousness: humanists believe that AI cannot become sentient or conscious because there is something special, distinctive and unique about consciousness that AI cannot replicate. In other words, AI cannot write, or paint, or think, or be truly alive, because it cannot capture what is uniquely and inalienably human.
The other side is often called technocratic or positivistic, but positivism or technocracy or any other similar label only captures an aspect, a side, of this pole, not its entire spectrum. It is also called “post- or transhumanism”. The thing is, most transhumanist writings are also, as it turns out, quite mediocre. It would be unfair to put Martin Heidegger against some modern bestseller like Yuval Noah Harari, and it would also be misconceived, because Mr. Harari is not the main point of this side. That being said, when Mr. Harari makes his defence of AI, or of any upcoming gadget, invention or technological advance, he shows a characteristic procedure: the AI optimist (and all those who stand for any newer technology) gives a vote of confidence or, at least, the benefit of the doubt. In other words, the optimists, from Mr. Harari to Auguste Comte, are all open to whatever changes AI might bring because they believe that AI, like most technological improvements, is part of a larger process with qualities and properties that need to be respected. Thus, we have humanists and optimists; the names seem descriptive and respectful, and the core of each case has been put forward without diminishing its strength6
A good example of a humanist position is Johannes A. Niederhauser, who builds on Heidegger’s philosophical and metaphysical understanding of technology to talk about AI and the danger it poses. A good case of someone who sees AI as a good, admittedly with a much less dramatic tone than most optimists, is Joscha Bach; people who are against AI would benefit from listening to him, one of the more intelligent “defenders” of AI. Neither of them is very famous, but they are actually interesting to listen to concerning AI, and they make the best cases for “humanism” and “optimism” that I know of in the digital space. There are also many philosophers who have written about it, but the digital nature of our society and our preference for speech over writing make both of these rather fringe intellectuals a good place to go, for starters at least.
I think it is fair to make my own position concerning AI clear; I think that stating one’s position is always the true and sincere form of objectivity. I don’t consider myself a centrist on almost any subject, I don’t think the middle of the road is the best route to truth, and I also think that ultimately, in life, you must pick a side; centrism, with its dreadful dullness, always tries to hide this condition, that a choice must be made. I stress that I am not a centrist because of what I say next: my position is neither humanistic nor optimistic. I think that I ought to side with the humanists, because I believe in the soul and eternal life as facts, but I find technological optimists much more enjoyable, probably because I like to be a contrarian.
My position is one of apathy, of relative apathy, not in any way absolute, an apathy exclusive to the topic of AI. I don’t think AI is a big issue, and I think we are wasting energy, as a society and as people partaking in the “marketplace of ideas”, trying to find out whether AI is good or bad, and where it will lead us. I think it is a waste of time to predict the future, and I think we should confine ourselves to the long-run tendencies we can find around us rather than hocus-pocus second-guessing about robots taking over and Terminator being a documentary. What I want to achieve, practically, is that people focus less on AI and look at what I consider the real problems and threats, or blessings, of humanity.
For instance, I consider the mass production and use of condoms and contraceptive pills, technologies in the true sense of the term, both more important and more interesting to ponder. You see, whether you are a pious Christian or a bleeding-heart atheist, irrespective of the reader’s understanding of sexual morality, you must agree that the normalization of contraceptives over the last century (approx.)7 is a fascinating trend that deserves the best application of the intellectuals we have around us. Lastly, I think that the existence of nuclear weaponry is an actual threat to humanity, and the fact that we prefer to remain silent about the existence of such bombs, and just accept that the debate about such weapons is settled, settled despite the ongoing genocide in Gaza and the geopolitical crisis in the Middle East, is a true concern. Ultimately, I consider AI a pseudoproblem, and I think you should too.
This view of mine is not exclusively mine, but I think it is a voice that is much needed, given that every day there are more and more hit pieces about AI, whether from journalists or from philosophers like Byung-Chul Han. But instead of talking about the issue of AI abstractly, sketching my view of it in broad strokes and trying to fix the world that way, I will instead look at different cases where the debate about AI appears: topics like the loss of jobs, topics like the destruction of written and painted art. First and foremost, however, I will address the issue that brought me to write this piece: this article by a fellow writer on Substack, and also a YouTuber, Jared Henderson.
Most people who talk about AI do it in an essayistic manner. I am not of such an artsy persuasion, so to speak, and I prefer a sociological and economical8 approach. My issue here is not the need for the humanities; I have talked about that here. Here, I am interested in addressing distinctly and exclusively this quote:
I have seen a spate of discussions about the furthering decline of student attention spans. The Chronicle of Higher Education, Slate, and even Teen Vogue have reported on the matters. Professors cannot get their students to read, and students increasingly use AI to summarize readings instead of actually attempting to do the readings themselves. Classes that used to assign 50 or 60 pages per week now assign 20 pages, and even then students struggle to do the reading.9
Quick reminder: I find it interesting that Jared Henderson quoted and showed concern for this tendency among students; I don’t find much to disagree with in the original article. I have followed his content irregularly for a long time, so this is not meant as a polemic against his article.
With that said, let’s look at the problem:
Concerned teachers, lazy students
The use of AI at universities, the root problem.
And the point of view is that of the Devil’s advocate. The ruling opinion, the zeitgeist, is the following: students are using AI to write, their intellect is being diminished by artificial intelligence, and AI is the root cause. What will the devil’s advocate argue with? A cliché: AI is not the problem, it is just a manifestation of a deeper human problem. What will be learnt is also an admitted cliché, but nonetheless true: technology will only amplify the problems that are already there; it will neither create nor solve them.
At first glance, AI is damaging education at the university level. AI is disrupting the work of teachers and the learning of students by tempting young and naïve students with the power to write essays automatically and effortlessly. A deeper look would trace this back to the current economic system, which can be classified as socialist or capitalist, different economic theories altering the label for the system we live under. But sociology, in the classical tradition10, could actually make the debate interesting. For sociology tells a different story: the institution of the university is not adequately adapted to contemporary society; hence, problems such as the abuse of AI will happen, and no solution will be available.
Before you even conceive the reply: no, I am not claiming that there is too much reading at university, that it is an old form of education, one that uses punishment as its means. I am not claiming that universities are morally uptight and conservative while society has acquired a different sense of morality, or simply downright lost it. I do believe universities, and especially teachers, are incredibly conservative, but not in a moral sense, for most teachers hold and practice the contemporary value system: acknowledging divorce as something normal, the mass consumption of (usually plastic) contraceptives, religious pluralism, a vague sense of respect for the individual and the obedient payment of one’s taxes. Universities are conservative not because teachers are conservatives in and of themselves, but rather because the institution is inherently conservative.
The first law of Robert Conquest can clarify plenty. According to the first of Conquest’s three laws, everyone is conservative about what they know best. Therefore, teachers are conservative about teaching, independently of whether they are fourth-wave feminists or unabashed reactionaries. That teachers across the West are concerned about the development of this technology, artificial intelligence, and less so about other technologies such as, well, condoms, is explained by the simple fact that teachers know much about teaching. It follows from their general attitude that teachers, especially those in the realm of the humanities, react negatively to the appearance of AI. But teachers, it so happens, also reacted to a different element of progress back in the day11.
Are you acquainted with John Searle? He is, together with the German philosopher Jürgen Habermas, the greatest among living philosophers, and he has also been around for a long while. For our purposes, his longevity is interesting, for he wrote a book about the 1968 university protests across the US and Western Europe, from the perspective of a teacher who, while perhaps sympathetic to their origins and not on the cultural or political right, is critical of the student protests. He wrote a book to tell of his personal experience, as a teacher, of the Campus War. John Searle, the teacher, displays the common conservative elements of the trade, the difference consisting in his honesty, for Searle, as the short essay goes along, often points out how the generally left-wing sensibilities of the teachers explain their lack of strength or cunning when dealing with the protests and putting up with the capricious demands of young and rebellious students… But a quick question might come to the reader’s mind: isn’t the 1968 student protest irrelevant to the AI takeover of the 2020s? Mostly it is irrelevant, but John Searle makes one claim that was already, back then, out of touch with the prevailing zeitgeist, and that claim has not lost strength but, on the contrary, gained it over the last almost sixty years: universities are inherently undemocratic; a democratic university is a contradiction in terms.
He says:
A specialized institution dedicated to intellectual objectives is by its very nature not egalitarian. It does not place all its members on an equal footing; on the contrary, it makes a sharp distinction between the masters and the apprentices, between the professionals and the aspirants. To mark these two features of the university, its intellectual objectives, and its inherent class structure, I prefer to characterize it as an aristocracy of the intellect.12
John Searle claims in the book that the university is an “aristocratic institution”, despite existing in a democratic polity. Despite, and not because of: John Searle believed, and maybe still believes, that universities are necessarily elitist, in the sense that they are meant for the best minds to share and develop, and that they require meritocratic and selective measures to be preserved. A claim often heard, one that can be traced back at least to the sixties, is that youngsters should not be forced and drafted into school. Right-wingers, who lack any sympathy for the values of the hippies but sometimes have libertarian sentiments, make a different claim: most people are unable to keep up with classes because they lack the intellectual capacity to do so. A polemical critic of XX century developments in British education is Peter Hitchens.
To compare with the view brought up by John Searle or Peter Hitchens, here is a voice from the other side of the aisle, one neglected by the universities and loved by the dissidents: some words by Murray Rothbard. Rothbard, by all standards, is a radical; some would add: an extremist. But Rothbard observes something that nowadays, and it seems almost also back then, is taken as a fact of nature. Nowadays, the same way apple trees grow apples, children are meant to be at school, and younger adults at university. For most of history, and more importantly, for nine tenths of the history of education, private or public, school was not a normal thing for most people; this is historically undeniable.
Thus in the 1970s Rothbard, one of the great public critics of public education, writes:
In contrast to earlier decades, when a relatively small proportion of the population went to school in the higher grades, the entire mass of the population has thus been coerced by the government into spending a large portion of the most impressionable years of their lives in public institutions.13
But an even wider picture can be drawn. The previous books were written at a similar moment, in the 1970s, and they include a critical reflection, an extreme one in the case of Mr. Libertarian. John Searle makes the case that universities are by design not suited to accommodate the bulk of the population, as part of a larger process, but necessarily only a minority. Rothbard makes the remark that, objectively speaking, never before have the masses (us included, of course) been forced into school as they are today. Lastly, a book by someone from the left, or from the left back in the day, 1964 to be precise, is Paul Goodman’s Compulsory Miseducation. It relates to high schools, but of course, without normalizing high school, university cannot be made a normal stage of life. His critique is a classically humanistic one: instead of appealing to the aristocratic nature of universities or education, he observes the inherently bureaucratic nature that any compulsory, public, legally decreed education will have. The picture is complete, with a moderate, a lefty and a man from the libertarian far right.
The reader might ask, with all justice, why I have painted this mosaic of authors and writings from the 1970s about education, if not for my own amusement. While my amusement is definitely to be factored in, the reason for my composition is unrelated to it. In all honesty, the connection between the original concern, artificial intelligence, and the ideas espoused here is rather vague, the two being thinly related through “education”. But there is an important distinction between Paul Goodman, John Searle and Murray Rothbard, on one side, and any variety of recent hit pieces about AI. The question about AI, when put next to the true technological transformation that is contemporary public education, of Prussian origin in the XIX century, is honestly quite unremarkable. There is no need to doubt anyone’s good will or conscience when they prefer to endlessly barrage the web with scribblings about AI, but honestly, if most major magazines don’t factor in how strange our universities are by default, ceteris paribus and excluding AI… they are dealing with small problems, weak problems, pseudoproblems. I don’t have any data on this, but when it comes to the problems of students or modern education, can you honestly say that articles concerned about AI do not outnumber articles such as mine by something approximating ten to one?
Are most people capable of sustaining mandatory public education, or university? Can the university be turned from an “aristocratic institution” into a form of “youth city”, which would actually make universities stand up to democratic scrutiny? Given that universities were instituted when Popes roamed Europe, and that they have not seen, since that time, any change more radical than the changes occurring since the second half of the XX century, might it not be best to get rid of them and put something in their place? Is it not possible that, as compared to 1924, universities have grown, in proportion to their original size, more than any other institution?
When you make universities public, and therefore national, and you give students access to AI, don’t be surprised that this abuse occurs. If you are not going to cut out the root, you might as well let students do what they want. Using this as a pretext to complain about AI or some other gadget is becoming… uninteresting.
Conclusions
This blitzing reflection about AI and the university was of course not meant as a rigorous treatment of either students’ access to AI or the reform of universities; truly, my own views on these issues are a different matter. My point was to make people at least ponder questions that are worth pondering. Of course it is plain that 20-year-old middle-class white kids can’t even write a page and a half about a subject they chose to study. The thing is, when you see the sea of articles written about AI, or generally about any new technology, a cynical side of you starts to grow; it is fed. All the speeches about humanity and robots seem so banal and unreal that one would almost prefer the return of the orthodox communists who considered all problems to be rooted in the existence of a certain economic system. Instead, we get everyone, or most people, against such an evidently mediocre stance (that AI ought to write all the books and do all the homework) that one can only wonder: who would be dim enough to defend AI and be an optimist?
I have strong beliefs about most things, and I would like to have them about the important ones, but, dogmatic as I am, there is something that frustrates me more than any centrist-relativist. There have been so many changes in the second half of the XX century, and in the first quarter of the XXI, that I cannot fathom why we are only allowed to talk, like grannies, about how dangerous AI is, and not about the other changes. We might be indulged in moderate (always moderate) criticism of electronic devices, and some conservatives might talk about abstract art being terrible, on their most daring days. Another classic point of criticism, moderate (always moderate), is music, more specifically pop music: everyone is critical of today’s pop music, but not of that from some years ago. Why do I care, and why am I so harsh with people who expect everything, fantastic or terrible, from AI? Because writing about AI, believing this underdeveloped tool will either haunt us or save us, is losing the forest for the trees; it is not naively optimistic or fatally pessimistic, it is dull, with all due respect. Luckily, at least feminism and, lately, the lack of religiosity have become issues, and preoccupation with AI is a sign that we care about the present and the future, something rather than nothing, but please, I have a favour to ask of my fellow Westerners: you are not my grandmother, so don’t blame AI.
For the boomers in the audience, or those who have not had the pleasure of enjoying such exquisite contemporary delights as videogames: a game in early access is first and foremost a funding model, i.e., a way for people who make videogames to get money so that they can keep on making the game; it is fundraising. What they do, basically, is allow people to try the game when it is not finished (like children trying the dough of a cake before it goes into the oven) so that, if they really like the game, they might feel inspired to drop some money on the project. There is, however, a concerning trend of videogames that are put in early access, become very famous, and are then… left, like that, unfinished (it would be like putting the cake dough in a museum). Maybe AI is moving in that direction; it is difficult to tell, since the writer of this piece, while a Devil’s Advocate, knows next to nothing about AI. Both the advocate’s state of ignorance and the possibility that AI will become terminally unfinished are, or would be, manifestations of a sociological trend: the idiotizing process.
“AI ghost writers” means the following: people who use AI to write their articles without making it known to the reader. There are shades of grey concerning this issue, of course, because there are many people, such as translators, who work by editing what AI produces. I don’t think, therefore, that using AI to write is necessarily malum in se, evil in and of itself. Basically, someone is not necessarily a moron because he uses AI; there are shades of grey. What is sure, however, is that currently, using AI to write pieces and then correcting them is a big waste of time; you would save more time and write better by writing the piece yourself.
Can the customers also be blamed? I don’t think so, or at least, I will not judge anyone paying for the writings of an AI ghost writer. I don’t refrain as a matter of high principle: I take money seriously, and what people do with theirs is not my business. Basically, I take the libertarian-nihilistic-highly-convenient stance: it is not my business (since I would also thoroughly enjoy and appreciate people willing to send money my way just because I write, something that I already enjoy).
One of the great exceptions is an episode from a podcast I thoroughly enjoy: The DemystifySci Podcast. There are many podcasts around in this day and age, but this one is solid. A great episode with an artificial intelligence researcher, Joscha Bach, is a must for anyone who wants to become a Devil’s Advocate for AI: Is There Really a Hard Problem of Consciousness
Since humanism has a nice feel attached to it, the reader might be reminded that here humanism, humanistic, human are meant as a class, a name to contain multitudes. The writer is not trying to make Heidegger look bad by saying that the German philosopher was against humanism, nor is it intended to convey that he had bad taste when it came to art. Martin Heidegger was, at different points, critical of positions and doctrines that espoused, one way or another, “humanism”; therefore, it is said here that he was against humanism.
There are many philosophies and theories of technology, and also many different philosophical perspectives on AI, as well as the view that common sense might form. My effort here is to find the common lines of argument that one can find in the different camps of this debate about AI, a debate, in truth, that is related to the general debate about technology, one that can be traced back at least to the XIX century and includes thinkers, commoners and men of action. Of course Heidegger had no view of AI, and neither did Karl Marx or Murray Rothbard, but it is evident that someone who shares a frame of mind with any one of them will probably move in the humanistic or the optimistic direction, and this move can be traced back to the general stance to be expected from such philosophical presuppositions. Everyone has an opinion about technology, and every age has had people who look happily at a new invention, as well as people who look at it with discontent. Similarly, before AI, there was debate about television, as there is now about social media. In the realm of society, labels are never geometric proportions, but they are useful language games that make our life easier, communication successful and action a possibility.
Before anyone catches the writer off guard: yes, contraceptives, male and female, have existed for a long time. Here I am concerned with the capacity to produce such contraceptives with ease and simplicity and, more importantly, with how, for a sizable chunk of the Western population, the use of contraception has come to be expected and, to a certain extent, normalized.
By economical I mean a “praxeological” and “catallactic” approach. But these words, from the Austrian School of economics and social philosophy, are big words, so I will stick to economics, even if I do not mean by it what most people understand as “economics”.
I could refer to the original two articles, but honestly, I am interested in what Jared Henderson took from such articles, rather than the articles themselves.
It would be useful to explain what is meant by this “classical tradition”, especially given that sociology is only two centuries old at most. By classical I mean here sociology as it was done until the early XX century. The view taken here, which is admittedly controversial, is that the pioneers or fathers of sociology, Auguste Comte and Herbert Spencer, are still very much relevant and on par with the most pioneering research in social science. Modern sociologists who thoroughly defend them are Norbert Elias and Stanislav Andreski.
Given that teachers are nowadays bureaucrats, a useful book for understanding how “conservative” teachers are, not in a political or philosophical sense but in a rather literal one, is Bureaucracy by the great Ludwig von Mises.
The Campus War: A Sympathetic Look at the University in Agony, 1971, p. 224.
From For a New Liberty: The Libertarian Manifesto, 1973, p. 145.