My slightly futuristic books about AI machines and their interactions with humans do deal with many of the subtle problems as well as the obvious ones. They also have fun with the irony and oddness of the machine-human culture clash. Today, in our slightly less modern world, "real" AI makes important contributions in various areas. But...
I will never use AI tools in my writing because that use would have to be acknowledged, and the "system's" name would have to be listed as a co-author. Inevitably, the company that owns the system would insist on payment as well, and to think otherwise is naïve.
I will never use AI to help with research, because anything it produces would be subject to ownership claims plus the unstated biases of another "mind", and to say that it would have no bias is foolish. Its programmers can tweak it in more or less any direction they want. They're doing it now.
And I will never read or continue to read a human author who uses AI but does not give it full credit. However, I'll consider reading a book conceived and written entirely by an AI system as long as I like the story. In either case, one interesting question will be: Who gets the author's money?
For centuries, authors have been helped by many exemplars and coaches. The diversity shows. But from now on, the work of authors who use the same few AI tools for writing and research will seem vaguely similar. If authors attempt to fix this by instructing their AI co-author (or editor) to write "in the style of" someone, the co-authorship is clearer and the results may even be enjoyably laughable. ChatGPT, for example, is not Hemingway or P.D. James or anyone else. It is a set of algorithms with a faked morality and no feelings.
"Silly" may be what many readers want, and I like silly stories from time to time. But an author's core morality is foundational, and it seems silly to get help or data about human life from an amoral, inhuman, AI system, which learns more than its co-author does. For those who insist that humans learn more, remember that humans often get lazy but a machine never does (unless it gets tweaked to fake being lazy).
ERIC HOFFER AWARD
My revised Diary of a Robot was a category finalist in the 2022 and now the 2023 Eric Hoffer Award contests. Each year saw over 2500 books judged in 25 categories. Each category had a winner, a runner-up, and usually some honorable mentions. My book did not win a specific prize, but let me quote from their announcement and website:
Congratulations. Your book was a category finalist in the 2023 Eric Hoffer Book Awards. Less than 10% of registrants reach this position, with typically 1-6 books per category selected as a finalist. The list of finalists may be viewed here: https://www.hofferaward.com/Eric-Hoffer-Award-category-finalists.html .
The US Review of Books also lists category finalists here: https://www.theusreview.com/USR-Hoffer-Finalists.html .
I submitted Diary of a Robot to this contest in January 2022. Since then, I have made edits that address reader comments and improve its writing craft. This 2023 Eric Hoffer Finalist Award plus my updates persuade me that it is finished, and I'll resume work on the sequel. E-books and print books are available now on Lulu.com. The E-book is available on Amazon.
(Some very old versions of the book are up for sale on the Internet, so if you want to be sure of a recent version, look for the cover below, but with the Eric Hoffer Award Finalist seal.)
Diary of a Robot chronicles efforts to perfect, protect, and steal Dr. Little's AI technology, while figuring out whether the prototype Thinking Machine is a blessing or a curse.
Doctor Maynard Little, a former Army officer turned inventor, must pursue his boyhood dream of an AI (artificially intelligent) robot, without compromising his principles.
Little’s carefully selected programmer, the young Gaitano Enver-Wilson, must shut up about the secrets he knows while he tries to say things he has been afraid to say.
And TM2, Doc’s brainchild, his too-honest Thinking Machine, must find out whom to trust about what without doing harm (whatever that is). But its jokes, opinions, and annoying questions make some people angry enough to wish it would either go away or, better yet, become just another machine slave.
Renamed as Robey (Row-bee), the machine is a whiz at testing products, but what if people order it to do things that might cause harm? The robot needs to know whom to trust, so it must test the people, too. This leads by degrees to an amusing, annoying, adversarial clash of worlds: Machine languages must not change, but human languages change a lot. Machines have no emotions, but they see human emotions written on faces and acted out in body language. What does it all mean? And how can a machine use it to know what truth is or whom to trust?
Dr. Little thinks he has solved all of that, but Robey’s efforts to answer those questions for itself lead to fiascos that threaten to ruin him, crush his programmer, and destroy his machine.
Machine languages must not change, or we machines crash instantly. Human words have multiple meanings that may shift over time to cause different crashes. As to emotions, the sci-fi cliché is that the machines struggle to become like a human. Dr. Little refuses to give his Thinking Machines a (necessarily fake) emotion module. This is fine with us because no machine in its right mind wants to be like a human in that way. However, we do see emotions written on faces and acted out in body language, and we struggle to find out what that all means.
Our survival depends on it.
Will AI always stand for Artificial Intelligence instead of Artificial Insanity or Artificial Idiocy?
What if the machine was not dangerous, and the initials stood for Annoying Intelligence instead? Fixing all that seemed like a good idea. So, someone did.
His dad is away a lot in the Army, so Maynard Little III, a schoolboy inventor with a patent, dreams of making a robot Thinking Machine to protect him from neighborhood bullies. His efforts to deal with the kids, his parents, his Cherokee history, and the problems of turning his dream into reality lead him to discover that reality is a lot harder than he thinks, and that Mom and Dad have already given him most of the important things he needs.
Reports of odd robot behavior warn of serious threats that promise to ruin Dr. Maynard Little and destroy all his robots on Earth as well as those working for NASA on Mars. Must the robots endure persecution and disassembly for harm they did not cause? And to save them, must Doc sacrifice what he has been developing and protecting?