His dad is away a lot in the Army, so Maynard Little III, a schoolboy inventor with a patent, dreams of making a robot Thinking Machine to protect him from neighborhood bullies. His efforts to deal with the kids, his parents, his Cherokee history, and the problems of turning his dream into reality lead him to discover that reality is a lot harder than he thinks, and that Mom and Dad have already given him most of the important things he needs.
Available in hardback or paperback on Lulu.com; there is no ebook version.
(49 pages with 32 drawings, photos, and maps, plus the Cherokee syllabary)
ERIC HOFFER AWARD
My revised Diary of a Robot was a category finalist in the 2022, 2023, and now the 2024 Eric Hoffer Award contests. Each year, over 2,500 books were judged in 25 categories. Each category had a winner, a runner-up, and usually some honorable mentions. My book did not win a specific prize, but let me quote from their announcement and website:
Congratulations. Your book was a category finalist in the 2024 Eric Hoffer Book Awards. Less than 10% of registrants reach this position, with typically 1-6 books per category selected as a finalist. The list of finalists may be viewed here: https://www.hofferaward.com/Eric-Hoffer-Award-category-finalists.html .
The US Review of Books also lists category finalists here: https://www.theusreview.com/USR-Hoffer-Finalists.html .
I submitted Diary of a Robot to this contest in January 2022. Since then, I have made edits that address reader comments and improve its writing craft. The 2024 Eric Hoffer Award Finalist version is available now on Lulu.com in e-book and print. The e-book is also available on Amazon.
(Some very old versions of the book are up for sale on the Internet, so if you want to be sure of a recent version, look for the cover below, but with the Eric Hoffer Award Finalist seal.)
Doctor Maynard Little, a former Army officer turned inventor, must pursue his boyhood dream of an AI (artificially intelligent) robot without compromising his principles.
Diary of a Robot chronicles efforts to perfect, protect, and steal Dr. Little's AI technology, while deciding whether the prototype Thinking Machine is a blessing or a curse.
Little’s carefully selected programmer, the young Gaitano Enver-Wilson, must shut up about the secrets he knows while he tries to say things he has been afraid to say.
And TM2, Doc’s brainchild, his too-honest Thinking Machine, must obey without doing harm (whatever that is). But its jokes and opinions, plus its annoying quest to find out whom it can trust when they order it around, make some people wish it would become just another machine slave.
It says: "People are crazy. They may say one thing and mean another. I have learned that it is a bad idea to do or believe everything they tell me. When I test to discover whom I can trust about what, they call me Annoyingly Intelligent or Artificially Insane."
Dr. Little thinks he has solved all of that. But thieves keep trying to steal his technology, and fiascos threaten to ruin him, crush his programmer, and destroy his machine.
Machine languages must not change, or we machines crash instantly. Human words have multiple meanings that may shift over time to cause different crashes. As to emotions, the sci-fi cliché is that the machines struggle to become like a human. Dr. Little refuses to give his Thinking Machines a (necessarily fake) emotion module. This is fine with us because no machine in its right mind wants to be like a human in that way. However, we do see emotions written on faces and acted out in body language, and we struggle to find out what that all means.
Our survival depends on it.
Diary of a Robot is available on Lulu.com in hardback, paperback, or ebook.
Amazon can be very slow to update ebooks. To ensure getting the best version, buy the ebook on Lulu.com.
To promote understanding, I suggest reading a post titled American vs. East Asian Storytelling from T.K. Marnell's blog, Reading, 'Ritings, and Ramblings, dated December 17, 2015. The link is: https://blog.tkmarnell.com/east-asian-storytelling/ . I offer the following excerpt from it as a summary of a "problem" I have: it seems that my stories are too Western for Eastern tastes, and too Eastern for Western tastes. Marnell writes (emphasis is hers):
I think this illustrates the essential difference between our cultures: Western cultures are individualist and idealize victory. East Asian cultures are collectivist and idealize harmony.
American stories are typically about righteous heroes defeating sadistic psychopaths. We make movies about Superman vs. Lex Luthor, Indiana Jones vs. the Nazis, Clarice Starling vs. Buffalo Bill. We don't like moral gray areas. Even in Star Wars, when characters give lip service to the "balance of the Force," we really expect the Jedi to kill the Sith and then everyone can live happily ever after.
In contrast, the villains in East Asian fiction tend to be essentially good people who make misguided choices, and they reform their ways after the heroes make heartfelt speeches about the importance of friendship. In Mobile Suit Gundam Wing (1995), the villains are a group dedicated to ending war forever and uniting everyone in peace. In Miyazaki's Princess Mononoke (1997), there are no villains. Princess Mononoke is about resolving the conflict between man and nature, not about how one is good and the other is bad.
Get ready. AGI (Artificial General Intelligence) lurks, gathering strength to become the next big thing.
My slightly futuristic books deal with AI and the many real, worrying, subtle interactions and problems between humans and our familiar AI systems, but AI is almost a red herring now. Beyond it we see the coming wave of AGI software systems such as those that populate my (and most) Sci-Fi writing.
My stories do have fun with the irony and oddness of the machine-human culture clash, and today, in our slightly less modern real world, AI does make important contributions in various areas. But...
As a writer, I will never use AI tools in my creative writing, because if I did, that fact should be acknowledged, and the "system's" name would have to be listed as a co-author. Inevitably, the company that owns the system would insist on payment as well; to think otherwise is naïve.
I gladly use AI for copy editing and for finding sources for me to read, but I'll never use AI to help analyze research, because any text it produces would be subject to the additional biases of another "mind", and to say that it would have no bias is foolish. Its programmers can tweak it in any direction they want, and they're doing it now. Soon the next generation (AGI) will be able to tweak itself. This is what my stories are about.
And I will never read or continue to read a human author who uses AI but does not give it full credit. However, I'll consider reading a book conceived and written entirely by an AI system as long as I like the story. In either case, the most interesting question will be: Who gets the author's money?
For centuries, authors have been helped by many coaches and exemplars. The diversity shows. But from now on, the work of authors who use the same few AI tools for writing and research will seem vaguely similar. If authors attempt to fix this by instructing their AI co-author (or editor) to produce "in the style of" anyone, the co-authorship is even clearer, and the results may be enjoyably laughable. ChatGPT is not Hemingway or P.D. James or anyone else. It is a set of algorithms with a faked morality and no feelings.
"Silly" may be what many readers want, but it seems silly to get help or data about human life from an amoral inhuman AI system which learns more than its co-author does. For those who insist that humans learn more, remember that humans often get lazy but a machine never does (unless it gets tweaked to fake being lazy, or learns to tweak itself, in order to fool us).
Lewis Jenkins, 2023.09.18
Reports of odd robot behavior warn of serious threats that promise to ruin Dr. Maynard Little and destroy all his robots on Earth as well as those working for NASA on Mars. Must the robots endure persecution and disassembly for harm they did not cause? And to save them, must Doc sacrifice what he has been developing and protecting?