Love, death and robots: viewing Asimov's stories through the eyes of a developer





In honor of Programmer's Day, we decided to relax a little and re-read our favorite Isaac Asimov stories. And a discovery awaited us: it turns out that more than half a century ago the science fiction writer described many realities of modern development with surprising accuracy. How is that possible, you ask? Let's figure it out together.



Among the famous science fiction writer's many stories, we are interested in the series about the company U.S. Robots and its employees. Several stories follow the everyday work of Powell and Donovan, field testers of robots; others center on the chief robopsychologist, Dr. Susan Calvin. The range of problems they face is wide, and much of it is familiar to their present-day colleagues to this day.



One of these common problems is a vaguely formulated statement of work. Asimov returns to this topic more than once, and no wonder: his robots are much smarter and "more human" than modern machines, and the people working with them easily forget that a robot thinks differently. As a result, a careless task description can turn into a disaster, as it does in the story "Runaround".



And it all started, as it seemed to the testers, quite well. To put an abandoned base on Mercury back in order, all that's needed is a kilogram of selenium to repair the photocells. Then there will be power, and with it the cooling of the base, without which there is no surviving on Mercury. There is plenty of selenium nearby, whole pools of it... Only the robot sent to the nearest pool seems to have gone mad: it runs in circles and, worse, spouts drunken nonsense, although robots don't drink. During a trip outside, at the risk of being fried, the heroes discover that on its way to the pool the robot ran into conditions dangerous to itself. But there was a direct order, so what's the problem?



- I said... Wait... I said: 'Speedy, we need selenium. You will find it in such-and-such a place. Go and get it.' That's all. What else should I have said?

- You didn't say it was important, urgent?

- What for? The matter is simple.


This "simplicity" led to a dilemma in the robot's program: the priority of performing an "unimportant" task was lower than the sense of self-preservation (the Third Law, which prescribes avoiding damage). As a result, the robot became fixated on the choice, to fulfill the order or to survive, and the would-be testers had to correct their mistake by taking risky actions - turning to the First Law, which has the highest priority. Simply put, putting itself in danger and forcing the robot to postpone other tasks and rush to save the owners. After such an experience, the testers approached the terms of reference more thoughtfully - and everything went like clockwork:



- I sent him to another selenium pool - this time with the order to get the selenium at all costs. He brought it back in forty-two minutes and three seconds - I timed him.
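To make the conflict concrete: below is a minimal sketch - not Asimov's positronic mechanics, just an illustration with invented weights and function names - of how a casually given order can balance against a danger-scaled self-preservation drive, how an "at all costs" order breaks the deadlock, and how a First Law signal overrides both.

```python
# Illustrative only: toy "potentials" for the Three Laws; all numbers and
# names (order_weight, danger, etc.) are invented for this sketch.

def choose_action(order_weight: float, danger: float, human_in_danger: bool) -> str:
    # First Law dominates everything: protecting a human preempts other goals.
    if human_in_danger:
        return "rescue the human"

    second_law = order_weight          # drive to obey the order as it was given
    third_law = 10.0 * danger          # drive to avoid damage, grows near the hazard

    if abs(second_law - third_law) < 1.0:
        # Neither drive wins decisively: the robot circles the hazard at the
        # point where the two balance - the runaround Powell and Donovan observe.
        return "oscillate at the boundary"
    return "advance to the selenium pool" if second_law > third_law else "retreat"


# A casually phrased order ("the matter is simple") gets a low weight...
print(choose_action(order_weight=3.0, danger=0.3, human_in_danger=False))   # oscillate
# ...an order given "at all costs" outweighs the danger signal...
print(choose_action(order_weight=50.0, danger=0.3, human_in_danger=False))  # advance
# ...and a human in danger overrides the deadlock entirely.
print(choose_action(order_weight=3.0, danger=0.3, human_in_danger=True))    # rescue
```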


After Mercury, the new assignment seems not so dangerous to the heroes (the interplanetary station where the story "Reason" takes place is much cooler - only "two hundred seventy-three degrees below zero"), but here they run into a problem developers see only in nightmares. At least, no program has yet told its creators that such imperfect beings could not possibly have written it.



Yet Cutie, a new robot developed to run the station's energy converter, refuses to believe that people had anything to do with its creation. Really, how could such weak creatures create something more perfect than themselves? That would be illogical. It has a more plausible version:



- The Master first created humans - the simplest species, the easiest to produce. Gradually he replaced them with robots. That was a step forward. Finally, he created me to take the place of the last humans. From now on, I serve the Master!


The heroes try to talk the robot out of it, appealing both to books and to facts (as they see them), even proving the existence of Earth. Not even assembling a new robot in front of Cutie helps - he remains unconvinced. Perhaps the trouble is that no postulates about its origin and purpose were hard-wired into the robot's positronic brain. The architectural mistake is easy to explain: the developers hardly expected that a robot would start doubting people's arguments. But a flaw that never mattered in previous generations of the product led, in the new one, to an entirely different chain of postulates forming in the machine's brain:



- You won't argue him out of it, - Powell agreed sadly. - He's a reasoning robot, damn it! He believes only in logic, and that's the whole trouble...

- What?

- Strictly logical reasoning can prove anything - it all depends on which initial postulates you accept. We have ours, and Cutie has his.
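Powell's point is easy to demonstrate outside of fiction: run the same mechanical inference over different starting postulates and you get opposite conclusions. Here is a minimal sketch - the rule format and the "facts" below are invented purely for illustration - using naive forward chaining.

```python
# Toy forward-chaining: rules are (premises, conclusion) pairs applied until no
# new facts appear. Both agents reason identically - only the axioms differ.

def derive(axioms: set[str], rules: list[tuple[set[str], str]]) -> set[str]:
    facts = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"complex things are built by something superior"}, "my builder is superior to humans"),
    ({"my builder is superior to humans"}, "humans did not build me"),
    ({"humans describe assembling robots"}, "humans built me"),
]

human_postulates = {"humans describe assembling robots"}
cutie_postulates = {"complex things are built by something superior"}

print(derive(human_postulates, rules))  # concludes: humans built me
print(derive(cutie_postulates, rules))  # concludes: humans did not build me
```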


And yet, even though the robot sees its task through different variables, it performs its functions in full. It just does so not because people ordered it to, but because such is the will of the Master.



And the people face a classic dilemma: is it worth fixing something that works? After thinking it over, the heroes conclude that it is not - the station runs flawlessly, and that is what matters.





The third story about the testers, "Catch That Rabbit", shows nicely what happens when a product goes out without stress testing. It describes the field trials of Dave, a mining robot with a new composite design: one commanding robot controls six subsidiary robots the way a hand controls its fingers. But as soon as the robot is left unattended, it stops working. Worse, it starts marching around with its whole team - a very suspicious pastime for a miner.



The testers' problem is painfully familiar: the defect refuses to reproduce itself while anyone is watching, yet it has to be found and explained, and the deadline is not going anywhere.





Under the threat of losing a good job, a tester is capable of a great deal - that's a fact. The heroes went through and rejected a number of options: from unit testing on site (the robot could be taken apart and checked piece by piece, but there are only ten days left and no guarantee it would show anything) to a dedicated test environment (which exists - but back on distant Earth, and it weighs ten tons). What's left? Recreate the conditions under which the bug appears, and look, look for the cause. That, alas, is the lot of many modern testers. True, today's specialists are luckier: unlike the heroes of the story, they at least don't have to deliberately blow up a mine with themselves inside. Although a person does think far more efficiently under a pile of rubble - it's just as well the industry hasn't adopted this technique yet.



The staged accident helped not only to trigger the bug but also to guess its cause - and, in a rather extreme way (by blowing up one of the robot's "fingers"), to reduce the load and eliminate the problem:



- That's what I'm talking about. Commands sent simultaneously over six channels! Under normal conditions, one or more of the "fingers" does simple work that doesn't require close supervision - just like our ordinary walking movements. But in an emergency, all six have to be brought into action at once, simultaneously - and here something gives out. The rest is simple. Any reduction in the initiative demanded of him - the appearance of a human, for instance - brings him back to himself. I destroyed one of the robots, Dave was left commanding only five, the demand on his initiative dropped, and he became normal!
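Powell's explanation maps neatly onto an ordinary capacity problem: coordination cost grows with the number of subordinates demanding simultaneous, detailed commands, and in an emergency it exceeds what the commander can supply. Here is a minimal sketch - the capacity value and per-robot costs are invented for illustration - of why blowing up one "finger" brings the load back under the limit.

```python
# Illustrative model of the composite miner: the commander spends a fixed
# amount of "initiative" per subordinate, and far more per subordinate when
# every one of them needs direct, simultaneous control in an emergency.

CAPACITY = 100.0            # invented capacity of the commanding robot

def coordination_load(subordinates: int, emergency: bool) -> float:
    per_robot = 18.0 if emergency else 6.0     # invented costs
    return subordinates * per_robot

def commander_state(subordinates: int, emergency: bool) -> str:
    load = coordination_load(subordinates, emergency)
    return "normal operation" if load <= CAPACITY else "loses control (marching in circles)"

print(commander_state(6, emergency=False))  # routine mining: fine
print(commander_state(6, emergency=True))   # all six at once: over capacity
print(commander_state(5, emergency=True))   # one "finger" blown up: back under the limit
```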


However, it is not only the field testers who get difficult assignments in Asimov's stories. In "Little Lost Robot", chief robopsychologist Susan Calvin has to hunt not for a fault but for an entire robot. No, it hasn't vanished without a trace - it is hiding among others just like it and pretending to be one of them. Only of the 63 robots, 62 are telling the truth and one is lying, and that is a serious bug.



The cause of the failure is found quickly: the customer made changes to the robot's program - and not just anywhere, but in its key part, the wording of the First Law. The new robot is no longer obliged to protect a human from harm through inaction (so that it doesn't keep diving in after scientists who work in gamma radiation and ruining itself). But, as so often happens in real life, such ill-coordinated changes, made without the knowledge of the lead expert, are fraught with dire consequences. Here, the interference broke the previously clear logic of the Laws, and the once-debugged system became unstable. So an emotional order - "go away and don't let me see you again" - gives the robot a loophole, and it obeys literally, doing everything it can to carry the command out.
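The danger of such a patch is that it doesn't merely relax a constraint - it removes the clause that made literal obedience safe. Here is a minimal sketch (the boolean encoding below is an invention for illustration, not how positronic brains work) of the full versus the truncated First Law check, and of the loophole the angry order slips through.

```python
# Two versions of the check a robot might run before obeying an order.
# Purely illustrative: the flags and functions below are made up for this sketch.

def first_law_full(action_harms_human: bool, inaction_harms_human: bool) -> bool:
    # "...may not injure a human being or, through inaction, allow a human being to come to harm."
    return not action_harms_human and not inaction_harms_human

def first_law_modified(action_harms_human: bool, inaction_harms_human: bool) -> bool:
    # The customer's patch: the "through inaction" clause is simply gone.
    return not action_harms_human

# The order "go away and don't let me see you again" harms no one by direct action,
# so the patched check is satisfied and the robot can obey it to the letter;
# the full law would also have to weigh the harm it allows by standing aside.
print(first_law_full(action_harms_human=False, inaction_harms_human=True))      # False: may not simply stand aside
print(first_law_modified(action_harms_human=False, inaction_harms_human=True))  # True: free to obey literally
```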

Susan Calvin has to develop a series of tests for adherence to the First Law and, in the end, set a trap that exploits another side effect of the modified program - a sense of superiority uncharacteristic of ordinary machines, which is what finally gives the impostor away.





The heroes also have to hunt for a bug in the story "Risk". The object under inspection is a hyperdrive ship that, for some reason, never jumped during its test. Hyperjumps are dangerous for humans, so the ship was piloted by a robot; now someone has to be sent on board to find out why the jump failed.



The story raises several issues familiar to programmers. First, testing: automated (here, by a robot) or manual (by a scientist)? At first glance the robot looks like the better candidate - a machine is faster and more reliable than a human specialist. But the robopsychologist insists on sending a person, and her reasoning is simple: a robot can only check what it has been explicitly instructed to check, while a human can notice the unexpected.





In part, these principles still hold today: although automated testing is used ever more widely and has many advantages, in some situations manual testing still makes sense and reveals problems that, by their nature, are hard to catch with machines. In the story, it is impossible to prepare a complete bug-hunting script for the robot, because nobody understands what exactly happened on the ship, and the robot simply will not notice any problem it was not told to look for. A human, on the other hand, can get his bearings on the spot, going by what he sees and drawing his own conclusions.
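The point about automation boils down to this: a scripted check can only confirm or rule out the failure modes someone thought to enumerate in advance. Here is a minimal sketch - the ship state, fault names and checklist are all invented - of a diagnostic that cheerfully reports nothing when the real problem isn't on its list.

```python
# A scripted diagnostic only knows the failure modes it was told about.
KNOWN_CHECKS = {
    "fuel_pressure_low": lambda ship: ship.get("fuel_pressure", 1.0) < 0.5,
    "nav_computer_offline": lambda ship: not ship.get("nav_online", True),
}

def automated_diagnostic(ship: dict) -> list[str]:
    return [name for name, check in KNOWN_CHECKS.items() if check(ship)]

# The actual problem - say, a control bent out of shape - is not in the checklist,
# so the automated run reports nothing, while a person on board might simply notice it.
ship_state = {"fuel_pressure": 0.9, "nav_online": True, "jump_control_bent": True}
print(automated_diagnostic(ship_state) or "no fault found")
```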



Second, the story once again raises the question of a correct statement of work - and that is exactly where the bug was hiding. The engine start procedure was written for the robot without accounting for how it differs from a human: following the literally worded command, it overdid it and wrecked the very control it was supposed to operate.
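The bug, in other words, is a specification written for one executor and carried out by another: "pull firmly" means one force to a human and quite another to a robot. Here is a minimal sketch - all names and numbers below are invented - of a tolerance being exceeded simply because the instruction isn't parameterized for who performs it.

```python
# The instruction says "pull firmly" without saying how firmly - each executor
# substitutes its own idea of "firm". All values below are invented for illustration.

BAR_TOLERANCE_NEWTONS = 400.0

EXECUTORS = {
    "human pilot": 250.0,   # a firm human pull
    "robot pilot": 3000.0,  # a firm robot pull
}

def pull_bar(executor: str) -> str:
    force = EXECUTORS[executor]
    if force > BAR_TOLERANCE_NEWTONS:
        return f"{executor}: bar bent, jump mechanism disabled"
    return f"{executor}: jump initiated"

print(pull_bar("human pilot"))
print(pull_bar("robot pilot"))  # the 'firm' pull the spec writer had in mind wasn't this one
```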





There are many stories like these, in Asimov and in other writers - in places they seem dated, yet in many ways they anticipated what was to come. It's a pity we still don't have such highly developed robots, while problems and deadlines are with us as always; on that score the great science fiction writer was absolutely right.


