Questioning Our Writers, Whether Living or Dead

On Monday I published a long article describing my recent discovery of the enormous potential value of AI chatbots to my own work.

Like many of us, I’d vaguely followed the growing advances of Artificial Intelligence (AI) software over the last few decades, culminating in the development of systems that could beat the world’s best players at Chess and Go. Then in late 2022 OpenAI released ChatGPT, a revolutionary product that after digesting billions or trillions of words of text from the Internet could answer complex questions in excellent English.

All of this seemed interesting and important, with AI systems having apparently blown past the famous Turing Test of the early 1950s. But I thought it had little relevance for my own work or website, whose controversial content didn’t remotely approach a billion let alone a trillion words. So I didn’t follow the issue in detail or ever consider testing one of the chatbots. I remember thinking to myself that the main impact of AI was that the annoying spam hitting our website had unfortunately become much better in quality.

A few weeks ago an academic friend of mine suggested that the increasing power of AI systems might eventually vindicate my own controversial theory of Covid origins. For more than four years, I’d stood almost alone in arguing that the global epidemic had been the result of the blowback from a botched American biowarfare attack against China (and Iran), and he’d speculated that after AI systems finished digesting billions of web pages, they might begin spitting out the verdict that I’d been correct all along.

But I remained quite skeptical, doubting that the Large Language Model-based AI systems would ever possess the reasoning ability to draw such heretical conclusions. If 99.9+% of all the discussions on Covid origins followed the two conventional narratives—natural virus or Chinese lab-leak—AI systems would probably treat my own contrary articles as merely eccentric ideological impurities that should be totally ignored.

However, someone else then suggested an entirely different approach. Apparently, although AI systems must first be “trained” on many billions of written words, they can afterward be “focused” upon a much smaller body of text, which then serves as the knowledge-base for their responses to questions. He explained that since my own body of writing totaled nearly two million words, it might easily be large enough for that purpose. This would allow anyone to explore my highly-controversial perspective on the JFK Assassination, the 9/11 Attacks, or World War II—or the origins of Covid—by simply questioning such a chatbot, perhaps being surprised at the answers he received. That simple Q&A approach might be far more accessible to readers with only casual interest than forcing them to locate and read my lengthy articles on those individual topics.
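The “focusing” step described above is commonly implemented as retrieval: the writer’s corpus is split into passages, and the passages most relevant to a question are handed to a general-purpose language model as context. The article doesn’t describe the actual system used, so the following is only a minimal sketch of that retrieval idea, with a deliberately crude word-overlap ranking; every function name here is illustrative, not any vendor’s API.

```python
# Sketch of retrieval over a single writer's corpus: split the text
# into fixed-size chunks, then rank chunks by how many words they
# share with the reader's question. A real system would use semantic
# embeddings instead of raw word overlap.

def split_into_chunks(corpus: str, chunk_size: int = 50) -> list[str]:
    """Break the writer's collected text into chunks of roughly
    chunk_size words each."""
    words = corpus.split()
    return [" ".join(words[i:i + chunk_size])
            for i in range(0, len(words), chunk_size)]

def top_chunks(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks sharing the most words with the question."""
    q_words = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q_words & set(c.lower().split())),
                  reverse=True)[:k]
```

The selected chunks would then be prepended to the question in the prompt sent to the underlying model, so its answer is grounded in the writer’s own words rather than in the whole Internet.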

I was still very skeptical about that possibility, but after he fed my body of writing into a chatbot I was utterly astonished at the quality of the results that it provided. For example, here’s a screenshot of the chatbot response he got to a question about my Covid biowarfare hypothesis.

A chatbot based upon my writings successfully provided solid responses to all sorts of controversial questions, and my recent long article gave numerous examples of these, contrasting them with the very different results produced by OpenAI’s generic chatbot.

These remarkable results immediately gave me the idea of applying the same technology to many of the other authors whose writings are featured on our website, and I discovered that this worked very well for most of the dozens of chatbots that were created.

From what I’ve read in the newspapers, most AI development has focused on what could be called “widecasting,” namely scraping and processing enormous quantities of raw text from the Internet. But I think that a useful alternate approach might be called “coherent narrowcasting,” producing chatbots that can roughly simulate a particular writer or thinker, providing the sort of answers he would give to various questions.

After all, a gigantic mass of random, ignorant Reddit comments is surely far less likely to produce meaningful information than the carefully published words of a leading journalist or academic. Furthermore, any hodgepodge of comments would tend to be heavily conflicting and contradictory, while writers are much more likely to generally be self-consistent in their views. For example, I’d still stand behind at least 99% of everything I’ve published over the last thirty years and the same would probably be true of many of our other writers. The coherent beam of a low-watt laser has capabilities lacking in the much larger output of a powerful but incoherent sunlamp.

Once these individual chatbots have been created, they could even be set against each other. It would be interesting to see the results of ideological debates on economic matters between the chatbots representing advocates of socialism such as Karl Marx, Leon Trotsky, and Michael Hudson against those on the free market side such as Ludwig von Mises and Murray Rothbard. Perhaps the chatbots for Max Nordau, Douglas Reed, and Kevin MacDonald could debate Zionism.
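Mechanically, such a debate could be run by a simple turn-taking loop that feeds each chatbot’s reply to the other as the next prompt. The article doesn’t describe any such implementation, so this is purely a hypothetical sketch in which each “bot” is just a function mapping a prompt to a reply:

```python
# Hypothetical turn-taking loop for a debate between two writer
# chatbots. Each bot is modeled as a callable taking a prompt string
# and returning a reply string; the loop alternates turns, passing
# each reply back as the opponent's next prompt.

from typing import Callable

def debate(bot_a: Callable[[str], str],
           bot_b: Callable[[str], str],
           opening: str,
           rounds: int = 3) -> list[tuple[str, str]]:
    """Run a fixed number of rounds and return the transcript as
    (reply_a, reply_b) pairs."""
    transcript = []
    message = opening
    for _ in range(rounds):
        reply_a = bot_a(message)      # first debater responds
        reply_b = bot_b(reply_a)      # second debater rebuts
        transcript.append((reply_a, reply_b))
        message = reply_b             # rebuttal seeds the next round
    return transcript
```

In practice each callable would wrap a retrieval-backed chatbot built from one writer’s corpus, and a moderator prompt would likely be needed to keep the exchange on topic.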

Meanwhile, the practical value of these writer chatbots seems obvious and enormous, and I’m rather surprised that I haven’t seen them used anywhere. Suppose that a reader finishes an article and would like to ask the author a question on that topic or something else. Busy authors are almost never available for basic Q&A or they’d be swamped with such questions, and in some cases, they may even have died decades earlier. So the only current option for that curious reader would be to locate and explore other articles by the same author in hopes of clarifying the latter’s views.

But chatbots solve this problem. For many authors on our website, there’s now a Chatbot Link at the bottom of all their articles and posts, allowing the writer’s chatbot to easily be questioned. Articles or posts by those authors also now have a similar “Q&A” button just below the title. Obviously that isn’t nearly as good as questioning the actual author himself, but I do think it’s much better than nothing, which is the current alternative.

(When using a chatbot, if the response you get is confused, evasive, or otherwise inadequate, pressing the “Regenerate” button at the bottom sometimes fixes the problem.)

I really can’t understand why this sort of simple system hasn’t already been implemented at the New York Times, the Wall Street Journal, or the other publications I sometimes visit, letting readers question the chatbots of the reporters or columnists. Perhaps once this idea spreads, such chatbot links may soon become standard at those publications.

Then again, it’s also possible that adding chatbots to such prestigious mainstream publications might lead to some embarrassing revelations. For example, a Nicholas Kristof Chatbot might explain that as a direct eyewitness in 1989, he’d repeatedly declared that the alleged Tiananmen Square Massacre had never actually happened, but now after thirty years of media coverage he’s suddenly “remembered” that it certainly did. Similarly, an Editorial Chatbot at The Economist might denounce the disastrous 2002 establishment media consensus in support of the Iraq War while also admitting that its own publication had been a leading element of that exact consensus.

Obviously, questioning a chatbot isn’t the same as questioning the individual whose writings constitute its knowledge-base, and such results should be treated with some care. Chatbots even sometimes “hallucinate,” providing bizarre, false, or nonsensical answers. But it seems to me that using an AI chatbot for that purpose is far less risky than applying the same AI technology to self-driving automobiles or medical diagnosis systems since if the response by a writer’s chatbot seems ridiculous, it can simply be ignored. After all, lots of the writers themselves sometimes say ridiculous things.

We’ll be steadily adding chatbots for many of our writers whose body of content on this website is sufficiently large to enable their creation.
