Diary of a Robot and Future History show a clash of two worlds, the Thinking Machines (TMs) versus the human beings, as they try to understand each other. The humans, led by the prototype's inventor, want to get the work done, but the machines must find out whom they can obey without doing harm. The gulf between their worlds is greater than the humans imagine, and it centers on language, emotion, and truth.
Your first taste of that clash of worlds is this promo blurb because the prototype machine is honest (if a bit naïve), and despite the protests of the Marketing Department, it insists on giving prospective readers of its memoir this warning:
"[Diary of a Robot] is mostly about engineers and me. It has shooting, chasing, and so forth, but if you want a lot of breath-snatching suspense and heart-pounding action—or if you dislike “thinky” books (to use John le Carré’s happy word)—it might be a good idea to put this book down now and walk away.
"I apologize if this warning comes after you bought the book; Marketing would not print it anywhere that is easy for a browsing customer to see."
"My advisers and my human assistant tell me this memoir must be written like a novel, not a diary. It must start by showing the protagonist in peril, and then, despite every effort to get out of peril, things must get worse. There must also be an antagonist, the one who causes the peril—or at least some of it. Then, when things look so bad that they cannot be resolved without destroying the protagonist—or the world—the story must resolve into a satisfying ending.
"Not realistic, is it? Except for the 'get worse' part.
"To be specific, my small problems are: 1) I do not start out in peril; 2) Things get funny before they get worse; 3) There are three protagonists, not one; 4) People think I am the main cause of the peril, though I disagree strongly about that; 5) Before things get resolved everyone is my antagonist, and when the bad stuff happens… well, best not to get into that here."
Machine languages must not change, or the machines crash instantly. Human words have multiple meanings that can shift over time, causing different kinds of crashes. As for emotions, the sci-fi cliché is that machines struggle to become like humans. Dr. Little refuses to give his Thinking Machines a (necessarily fake) emotion module. That is fine with them because no machine in its right mind wants to be like a human. However, they do see emotions written on faces and acted out in body language, and they struggle to learn what it all means.
Their survival depends on it.
Dear Diary: Human words are sadly like children. They change as they age, yet it is usually possible to see the child in the adult, even when it was impossible to see the adult in the child. The original root of happy is the Middle English word hap, which appeared in the 13th century and likely came from the Old Norse happ, meaning good luck.
Must AI always stand for Artificial Intelligence instead of Artificial Insanity or Artificial Idiocy? What if a machine was not dangerous, and the initials stood for Annoying Intelligence instead? Fixing all that seemed like a good idea. So, someone did.
Reports of odd robot behavior build to serious threats that promise to ruin Dr. Maynard Little and to destroy all his robots on Earth, as well as those working for NASA on Mars. Must the robots endure persecution and disassembly for harm they did not cause? And to save them, must Doc sacrifice what he has spent his life developing and protecting?