A world without coroutines. Generator iterators

1. Introduction



To confuse a problem as much as possible, entrust its solution to the programmers ;). But seriously, something similar, in my opinion, is happening with coroutines: willingly or not, they are being used to blur the picture. And the picture is that the problems of parallel programming are not going anywhere and, most importantly, coroutines do not bring us any closer to a fundamental solution to them.



Let's start with terminology. "How many times have they told the world", and yet "the world" still keeps asking what the difference is between asynchronous and parallel programming (see the discussion of asynchrony in [1]). The crux of the difficulty with understanding asynchrony versus parallelism begins with the definition of parallelism itself. It simply does not exist. There is some intuitive understanding, often interpreted in different ways, but there is no scientific definition that would remove all questions as conclusively as a discussion about the result of "two plus two".



And since, once again, no such definition exists, we, confused in terms and concepts, keep distinguishing parallel programming from concurrent programming, asynchronous from reactive, and so on. At the same time, hardly anyone has trouble realizing that a mechanical calculator like the Felix works differently from a software calculator. Yet from the formal point of view, i.e. the set of operations and the final result, there is no difference between them. This principle should be built into the definition of parallel programming.



We need a strict definition and transparent means of describing parallelism that lead to consistent results, like the "clumsy" Felix and any software calculator. The concept of "parallelism" must not be tied to the means of its implementation (to the number of cores, for example). What is "under the hood" should be of interest primarily to those who implement the "machinery", not to those who use such a conventional "parallel calculator".



But we have what we have. And what we have is, if not a craze, then an active discussion of coroutines and asynchronous programming. What else is there to do if we seem to be fed up with multithreading and nothing else is on offer? Some even talk about a kind of magic ;) But everything becomes clear once you understand the reasons. And they lie exactly there, in the plane of parallelism: its definition and its implementation.



But let's come down from the global and, to some extent, philosophical heights of the science of programming (computer science) to our "sinful earth". Here, without detracting from the merits of the currently popular Kotlin language, I would like to admit my fondness for Python. Perhaps someday, in some other situation, my preferences will change, but for now this is how things are.



There are several reasons for this. Among them is free access to Python. This is not the strongest argument, since the example of Qt shows that the situation can change at any moment. But as long as Python, unlike Kotlin, remains free for me, at least in the form of the PyCharm environment from JetBrains (for which special thanks to them), my sympathies are on its side. It is also attractive that there is a mass of Russian-language literature and Python examples on the Internet, both educational and quite real-world. For Kotlin there are not nearly as many, and their variety is not as great.



Perhaps getting a little ahead of myself, I decided to present the results of mastering Python in the context of defining and implementing software parallelism and asynchrony. This was prompted by article [2]. Today we will consider the topic of generators-coroutines. My interest in them is fueled by the need to keep up with the specific, interesting, but not yet very familiar to me, capabilities of modern programming languages.



Since I am actually a pure C++ programmer, this explains a lot. For example, while coroutines and generators have been present in Python for a long time, in C++ they have yet to win their place. But does C++ really need them? In my opinion, a programming language should be extended judiciously. It seems that C++ held out as long as it could and is now hastily trying to catch up. But similar concurrency problems can be solved using other concepts and models that are more fundamental than coroutines. And the fact that there is more than just words behind this statement will be demonstrated below.



If I am to confess everything, then I also admit that I am rather conservative with regard to C++. Of course, its objects and OOP capabilities are "our everything" for me, but I am, shall we say, critical of templates. I never really warmed to their peculiar "bird language", which, as it seemed to me, greatly complicates reading the code and understanding the algorithm. Although I have occasionally resorted to their help, the fingers of one hand are enough to count those cases. I respect the STL library and cannot do without it :) Yet even with this in mind, I still have my doubts about templates. So I avoid them as much as I can. And now I am waiting with a shudder for "template coroutines" in C++ ;)



Python is another matter. I have not noticed any templates in it yet, and this calms me down. On the other hand, oddly enough, this is also alarming. However, when I look at Kotlin code and, especially, at its "engine compartment", the anxiety quickly passes ;) Still, I think this is a matter of habit and my own prejudices. I hope that over time I will train myself to perceive them (templates) adequately.



But... back to coroutines. It turns out that they now go by the fashionable name "coroutines". What is new behind the change of name? Actually, nothing. As before, a set of functions executed in turn is considered. Just as before, at some point before the function completes its work, a return point is fixed, from which its work is later resumed. Since the switching order is not prescribed, the programmer controls this process himself by creating his own scheduler. Often this is just a cyclic walk through the functions, such as the Round Robin event loop in the video by Oleg Molchanov [3].
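A minimal sketch of such a loop over generators might look as follows (the task names and messages are illustrative and are not the code from [3]):

def task(name, steps):
    for i in range(steps):
        print(name, 'step', i)
        yield                      # fix the return point and give up control

tasks = [task('A', 3), task('B', 2)]
while tasks:                       # the "scheduler": a simple Round Robin walk
    current = tasks.pop(0)
    try:
        next(current)              # resume the task until its next yield
        tasks.append(current)      # not finished yet - put it back in the queue
    except StopIteration:
        pass                       # the generator has run to completion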



This is roughly how a modern "on the fingers" introduction to coroutines and asynchronous programming usually looks. Naturally, as you dive deeper into the topic, new terms and concepts appear. Generators are one of them. Below, they will serve as the example for demonstrating "parallel preferences", but in my automaton interpretation.



2. Generators of lists of data



So, generators. Asynchronous programming and coroutines are often associated with them. A series of videos by Oleg Molchanov covers all this. He calls the key feature of generators their "ability to pause the execution of a function in order to continue its execution later from the same place where it stopped last time" (for details see [3]). And given what was said above about the rather old definition of coroutines, there is nothing new in this.
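This "pause and resume" behavior fits in a few lines (a purely illustrative function):

def gen():
    print('before the first yield')
    yield 1                 # execution stops here ...
    print('resumed')
    yield 2                 # ... and here on the next call

g = gen()
print(next(g))              # prints 'before the first yield', then 1
print(next(g))              # prints 'resumed', then 2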



But it turns out that generators have found quite a specific use: creating lists of data. An introduction to this topic is given in a video by Artem Egorov [4]. It seems, however, that with such a use we mix qualitatively different concepts: operations and processes. By expanding the descriptive capabilities of the language, we largely mask the problems that may arise. Here, as they say, one should not overplay, and using generators-coroutines to describe data contributes to exactly that, it seems to me. Note that Oleg Molchanov also warns against associating generators with data structures, emphasizing that "generators are functions" [3].



But back to using generators to define data. It is hard to hide the fact that we have created a process that computes the list items. Therefore, questions immediately arise about such a "list" as a process. For example, how do we reuse it, if coroutines by definition work only "one way"? How do we access an arbitrary element, if indexing a process is impossible? And so on. Artem gives no answers to these questions, only warning that repeated traversal of the list's elements cannot be organized and that indexing is not allowed. A search on the Internet convinces me that I am not the only one with such questions, and that the solutions offered are not so trivial and obvious.
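Both limitations are easy to reproduce; the snippet below is only an illustration of them:

squares = (n * n for n in range(5))   # a generator expression: a process, not data

print(list(squares))                  # [0, 1, 4, 9, 16] - the elements are produced once
print(list(squares))                  # [] - the generator is exhausted, no second pass
# squares[2] would raise TypeError: 'generator' object is not subscriptable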



Another problem is the speed of list generation. We produce a single list element on each coroutine switch, and this increases the data generation time. The process could be greatly accelerated by generating elements in "batches", but this would most likely bring its own problems: how do we stop an already running process, and so on. The list can also be quite long while only selected items are used. In such situations, memoization is often used for efficient access. Incidentally, I almost immediately found an article on this topic for Python, see [5] (for memoization in automaton terms, see article [6]). But how would that work in this case?
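As a rough illustration, standard Python memoization can be had from functools.lru_cache (this is not the technique from [5] or [6], just the library tool):

from functools import lru_cache

@lru_cache(maxsize=None)      # remember the results of previous calls
def element(i):
    print('computing element', i)
    return i * i              # stands in for an expensive computation

print(element(3))             # computed and cached
print(element(3))             # returned from the cache, no recomputation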



The reliability of such a syntax for defining lists is also questionable, since it is easy enough to mistakenly use square brackets instead of parentheses and vice versa. So a seemingly beautiful and elegant solution can, in practice, lead to real problems. A programming language should be technological, flexible, and should insure against inadvertent mistakes.



By the way, on the topic of lists and generators, their advantages and disadvantages, which intersects with the remarks above, you can watch another video by Oleg Molchanov [7].



3. Generators-coroutines



The next video by Oleg Molchanov [8] discusses the use of generators to coordinate the work of coroutines; this is, in fact, what they are intended for. Attention is drawn to the choice of the switching points of the coroutines. Their placement follows a simple rule: put a yield statement in front of blocking functions. The latter are understood as functions whose return time is so long compared to other operations that the computation effectively stops while waiting for them; hence the name "blocking".



Switching is effective when the suspended process resumes its work exactly at the moment when the blocking call will not wait but will complete quickly. It seems that all the "fuss" around the coroutine model was started for this very purpose, and this is what gave the impetus to the development of asynchronous programming. Note, though, that the original idea of coroutines was different: to create a virtual model of parallel computing.



In the video under consideration, as is generally the case with coroutines, the resumption of a coroutine is decided by the external environment, which acts as an event scheduler. Here it is represented by a function named event_loop. Everything seems logical: the scheduler performs the analysis and resumes a coroutine by calling next() exactly when it is needed. The problem lies in wait where it was not expected: the scheduler can become quite complex. In Molchanov's previous video (see [3]) everything was simple, since control was simply passed around in turn; there were no locks because there were no blocking calls. Nevertheless, we emphasize that in any case at least a simple scheduler is necessary.



Problem 1. The scheduler must decide for itself when to call next() (see event_loop). To do this, it has to analyze the information that the coroutine passes out through yield. In other words, part of the logic that belongs to the coroutine migrates into the code that calls next(), i.e. into the scheduler.



Problem 2. That information, obtained with the help of select, concerns the readiness of the sockets, i.e. something that is the business of the coroutines themselves rather than of the scheduler.



But the point is not even the need for a scheduler, but the fact that it takes on functions that are not its own. The situation is further complicated by the need to implement an algorithm for the joint operation of many coroutines. Comparing the schedulers discussed in the two videos by Oleg Molchanov mentioned above illustrates the problem quite clearly: the socket-scheduling algorithm in [8] is noticeably more complicated than the "carousel" algorithm in [3].



4. Towards a world without coroutines



Since we are sure that a world without coroutines is possible, setting automata against them, we need to show how similar tasks are already solved with automata. Let us demonstrate this using the same example of working with sockets. Note that the original implementation turned out to be non-trivial enough that it could not be understood right away; the author of the video emphasizes this repeatedly himself. Others face similar problems in the context of coroutines: the disadvantages of coroutines related to the difficulty of reading, understanding, and debugging them are discussed in video [10].



First, a few words about the complexity of the algorithm under consideration. It stems from the dynamic and plural nature of the client-servicing processes. A server is created that listens on a given port and, as requests arrive, spawns many functions that service the clients reaching it. Since there can be many clients and they appear unpredictably, a dynamic list is built from the processes that service the sockets and exchange information with them. The generator-based Python solution discussed in video [8] is shown in Listing 1.



Listing 1. Sockets on generators
import socket
from select import select
tasks = []
to_read = {}
to_write = {}

def server():

    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server_socket.bind(('localhost', 5001))
    server_socket.listen()

    while True:
        yield ('read', server_socket)
        client_socket, addr = server_socket.accept()    
        print('Connection from', addr)
        tasks.append(client(client_socket, addr))       
    print('exit server')

def client(client_socket, addr):

    while True:
        yield ('read', client_socket)
        request = client_socket.recv(4096)              

        if not request:
            break
        else:
            response = 'Hello World\n'.encode()

            yield ('write', client_socket)

            client_socket.send(response)                
    client_socket.close()                               
    print('Stop client', addr)

def event_loop():
    while any([tasks, to_read, to_write]):

        while not tasks:

            ready_to_read, ready_to_write, _ = select(to_read, to_write, [])

            for sock in ready_to_read:
                tasks.append(to_read.pop(sock))

            for sock in ready_to_write:
                tasks.append(to_write.pop(sock))
        try:
            task = tasks.pop(0)

            reason, sock = next(task)   

            if reason == 'read':
                to_read[sock] = task
            if reason == 'write':
                to_write[sock] = task
        except StopIteration:
            print('Done!')
tasks.append(server())
event_loop()




The server and client algorithms are fairly basic. But it should already be alarming that the server itself puts the client function into the task list. Further on it gets worse: the algorithm of the event_loop event loop is hard to follow. How can the task list be empty at all, if at least the server process should always be present in it?..



Next, the to_read and to_write dictionaries are introduced. Working with dictionaries in itself requires a separate explanation, since it is more involved than working with ordinary lists. Because of this, the information returned by the yield statements is tailored to them. Then the "dancing with a tambourine" around the dictionaries begins, and everything starts to resemble a kind of seething: something is placed into the dictionaries, from where it gets into the task list, and so on. You can break your head trying to sort all this out.



And what would an automaton solution of the same task look like? It is logical to create automaton models equivalent to the socket-handling processes already discussed in the video. In the server model, it seems, nothing needs to be changed: it will be an automaton that works like the server() function. Its graph is shown in Fig. 1a. The automaton action y1() creates a server socket and binds it to the given port. The predicate x1() detects a client connection and, if there is one, the action y2() creates a process that services the client socket, placing it into the classes list, which holds the active objects.



Fig. 1b shows the graph of the model for an individual client. In state "0" the automaton determines the client's readiness to transmit information (predicate x1() is true) and receives the request within action y1() on the transition to state "1". Then, when the client is ready to receive information (now x2() must be true), action y2() sends the response to the client on the transition back to the initial state "0". If the client breaks the connection with the server (in this case x3() is false), the automaton moves to state "4", closing the client socket in action y3(). The process remains in state "4" until it is removed from the list of active classes (the list itself is formed as described above in the server model).



Fig. 1c shows the automaton that launches the processes, analogous to the event_loop() function in Listing 1. Only in this case its algorithm is much simpler. Everything comes down to the automaton walking through the elements of the list of active classes and calling the loop() method of each of them; this is implemented by action y2(). Action y4() removes from the list the classes that are in state "4". The remaining actions work with the index into the list of objects: action y3() increments the index and action y1() resets it.



Object programming in Python differs from object programming in C++, so we will take the simplest implementation of the automaton model as the basis (to be precise, it is an imitation of an automaton). It rests on the object principle of representing processes, in which each process corresponds to a separate active class (often also called an agent). The class contains the necessary properties and methods (for more about the specifically automaton methods, predicates and actions, see [9]), and the logic of the automaton (its transition and output functions) is concentrated in a method called loop(). To implement the logic of the automaton's behavior we will use the if-elif-else construction.
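Schematically, such an active class is a bare skeleton of the following kind (not yet tied to sockets):

class Agent:
    def __init__(self):
        self.nState = 0           # the current state of the automaton

    def x1(self):                 # predicate: the condition of a transition
        return True

    def y1(self):                 # action: what is done on the transition
        pass

    def loop(self):               # one step of the automaton
        if self.nState == 0:
            if self.x1():
                self.y1()
                self.nState = 1
        elif self.nState == 1:
            pass                  # final (idle) state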



With this approach, the "event loop" has nothing to do with checking the availability of sockets. The processes check them themselves, using the same select call inside their predicates. In this situation each process deals with a single socket, not a list of them, checking it for exactly the operation that is expected for this particular socket and exactly in the situation determined by its algorithm. Incidentally, in the course of debugging this implementation the select call unexpectedly turned out to be blocking by nature (hence the explicit timeout in Listing 2).



Fig. 1. Graphs of the automaton processes for working with sockets



Listing 2 shows the automaton object code in Python for working with sockets. This is our kind of "world without coroutines". It is a "world" with different principles for designing software processes. It is characterized by the presence of an algorithmic model of parallel computation (for more details see [9]), which is the main qualitative difference between automaton programming technology (AP) and the "coroutine technology".



Automaton programming easily implements asynchronous principles of program design, process parallelism, and, along with them, whatever else a programmer's mind may think up. My previous articles discuss this in more detail, from the description of the structural model of automaton computation and its formal definition to examples of its application. The Python code in Listing 2 demonstrates an automaton implementation of the coroutine principles, completely covering them and supplementing and extending them with the state machine model.



Listing 2. Sockets on automata
import socket
from select import select

timeout = 0.0; classes = []

class Server:
    def __init__(self): self.nState = 0;

    def x1(self):
        self.ready_client, _, _ = select([self.server_socket], [self.server_socket], [], timeout)
        return self.ready_client

    def y1(self):
        self.server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server_socket.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.server_socket.bind(('localhost', 5001))
        self.server_socket.listen()
    def y2(self):
        self.client_socket, self.addr = self.server_socket.accept()
        print('Connection from', self.addr)
        classes.append(Client(self.client_socket, self.addr))

    def loop(self):
        if (self.nState == 0):      self.y1();      self.nState = 1
        elif (self.nState == 1):
            if (self.x1()):         self.y2();      self.nState = 0

class Client:
    def __init__(self, soc, adr): self.client_socket = soc; self.addr = adr; self.nState = 0

    def x1(self):
        self.ready_client, _, _ = select([self.client_socket], [], [], timeout)
        return self.ready_client
    def x2(self):
        _, self.write_client, _ = select([], [self.client_socket], [], timeout)
        return self.write_client
    def x3(self): return self.request

    def y1(self): self.request = self.client_socket.recv(4096);
    def y2(self): self.response = 'Hello World\n'.encode(); self.client_socket.send(self.response)
    def y3(self): self.client_socket.close(); print('close Client', self.addr)

    def loop(self):
        if (self.nState == 0):
            if (self.x1()):                     self.y1(); self.nState = 1
        elif (self.nState == 1):
            if (not self.x3()):                 self.y3(); self.nState = 4
            elif (self.x2() and self.x3()):     self.y2(); self.nState = 0

class EventLoop:
    def __init__(self): self.nState = 0; self.i = 0

    def x1(self): return self.i < len(classes)

    def y1(self): self.i = 0
    def y2(self): classes[self.i].loop()
    def y3(self): self.i += 1
    def y4(self):
        if (classes[self.i].nState == 4):
            classes.pop(self.i)
            self.i -= 1     # step back so the element shifted into this slot is not skipped

    def loop(self):
        if (self.nState == 0):
            if (not self.x1()): self.y1();
            if (self.x1()):     self.y2(); self.y4(); self.y3();

namSrv = Server(); namEv = EventLoop()
while True:
    namSrv.loop(); namEv.loop()




The code in Listing 2 is noticeably more "technological" than the code in Listing 1, and this is the merit of the automaton model of computation. It is helped by the integration of automaton behavior into the object model of the language. As a result, the logic of behavior of the automaton processes is concentrated exactly where it originates, and not delegated, as is the practice with coroutines, to the event loop that controls the processes. The new solution also invites the creation of a universal "event loop", a prototype of which can be seen in the code of the EventLoop class.
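Such a universal loop could look roughly like this (the nState == 4 convention for a finished process is taken from Listing 2; the function name is arbitrary):

def run_active_objects(objects):
    while objects:
        for obj in list(objects):     # iterate over a copy: the list may change
            obj.loop()                # one step of each automaton
            if obj.nState == 4:       # state 4 is the agreed "finished" state
                objects.remove(obj)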



5. About the SRP and DRY principles



The principles of "single responsibility" (SRP, the Single Responsibility Principle) and "don't repeat yourself" (DRY) are discussed in the context of another video by Oleg Molchanov [11]. According to them, a function should contain only its target code, so as not to violate SRP, and should not encourage the duplication of "extra" code, so as not to violate DRY. For this purpose it is proposed to use decorators. But there is another solution: an automaton one.
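For comparison, the decorator approach boils down to something like the following timing decorator (an illustration, not the code from [11]):

import time
from functools import wraps

def timed(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)      # the "target" code stays untouched (SRP)
        print(fn.__name__, 'took', time.time() - start, 'sec')
        return result
    return wrapper                        # the timing code is written once (DRY)

@timed
def count(n):
    while n > 0:
        n -= 1

count(1000000)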



In the previous article [2], written before I knew of the existence of such principles, an example using decorators was already given. It considered a counter which, by the way, could generate lists if desired, and mentioned a stopwatch object that measures the running time of the counter. If objects adhere to the SRP and DRY principles, then their internal functionality matters less than their communication protocol. In the implementation, the counter code has nothing to do with the stopwatch code, and changing either object will not affect the other. They are bound only by the protocol, which the objects agree on "on the shore" and then strictly follow.



Thus, the parallel automaton model essentially covers the capabilities of decorators. It implements those capabilities more flexibly and more simply, because it does not "surround" (does not decorate) the function's code. For an objective assessment and comparison of the automaton and the conventional technology, Listing 3 shows an object analogue of the counter discussed in the previous article [2]; simplified versions, with their execution times, and the original version of the counter are given after it in the comments.



Listing 3. Automaton implementation of the counter
import time
# 1) 110.66 sec
class PCount:
    def __init__(self, cnt ): self.n = cnt; self.nState = 0
    def x1(self): return self.n > 0
    def y1(self): self.n -=1
    def loop(self):
        if (self.nState == 0 and self.x1()):
            self.y1();
        elif (self.nState == 0 and not self.x1()):  self.nState = 4;

class PTimer:
    def __init__(self, p_count):
        self.st_time = time.time(); self.nState = 0; self.p_count = p_count
#    def x1(self): return self.p_count.nStat == 4 or self.p_count.nState == 4
    def x1(self): return self.p_count.nState == 4
    def y1(self):
        t = time.time() - self.st_time
        print ("speed CPU------%s---" % t)
    def loop(self):
       if (self.nState == 0 and self.x1()): self.y1(); self.nState = 1
       elif (self.nState == 1): pass

cnt1 = PCount(1000000)
cnt2 = PCount(10000)
tmr1 = PTimer(cnt1)
tmr2 = PTimer(cnt2)
# event loop
while True:
    cnt1.loop(); tmr1.loop()
    cnt2.loop(); tmr2.loop()

# # 2) 73.38 sec
# class PCount:
#     def __init__(self, cnt ): self.n = cnt; self.nState = 0
#     def loop(self):
#         if (self.nState == 0 and self.n > 0): self.n -= 1;
#         elif (self.nState == 0 and not self.n > 0):  self.nState = 4;
# 
# class PTimer:
#     def __init__(self): self.st_time = time.time(); self.nState = 0
#     def loop(self):
#        if (self.nState == 0 and cnt.nState == 4):
#            t = time.time() - self.st_time
#            print("speed CPU------%s---" % t)
#            self.nState = 1
#        elif (self.nState == 1): exit()
# 
# cnt = PCount(100000000)
# tmr = PTimer()
# while True:
#     cnt.loop();
#     tmr.loop()

# # 3) 35.14 sec
# class PCount:
#     def __init__(self, cnt ): self.n = cnt; self.nState = 0
#     def loop(self):
#         if (self.nState == 0 and self.n > 0):
#             self.n -= 1;
#             return True
#         elif (self.nState == 0 and not self.n > 0):  return False;
#
# cnt = PCount(100000000)
# st_time = time.time()
# while cnt.loop():
#     pass
# t = time.time() - st_time
# print("speed CPU------%s---" % t)

# # 4) 30.53 sec
# class PCount:
#     def __init__(self, cnt ): self.n = cnt; self.nState = 0
#     def loop(self):
#         while self.n > 0:
#             self.n -= 1;
#             return True
#         return False
#
# cnt = PCount(100000000)
# st_time = time.time()
# while cnt.loop():
#     pass
# t = time.time() - st_time
# print("speed CPU------%s---" % t)

# # 5) 18.27 sec
# class PCount:
#     def __init__(self, cnt ): self.n = cnt; self.nState = 0
#     def loop(self):
#         while self.n > 0:
#             self.n -= 1;
#         return False
# 
# cnt = PCount(100000000)
# st_time = time.time()
# while cnt.loop():
#     pass
# t = time.time() - st_time
# print("speed CPU------%s---" % t)

# # 6) 6.96 sec
# def count(n):
#   st_time = time.time()
#   while n > 0:
#     n -= 1
#   t = time.time() - st_time
#   print("speed CPU------%s---" % t)
#   return t
#
# def TestTime(fn, n):
#   def wrapper(*args):
#     tsum=0
#     st = time.time()
#     i=1
#     while (i<=n):
#       t = fn(*args)
#       tsum +=t
#       i +=1
#     return tsum
#   return wrapper
#
# test1 = TestTime(count, 2)
# tt = test1(100000000)
# print("Total ---%s seconds ---" % tt)




Let's summarize the running times of the different variants in a table and comment on the results.



  1. Classic automaton implementation - 110.66 sec
  2. Automaton implementation without automaton methods - 73.38 sec
  3. Without the automaton stopwatch - 35.14 sec
  4. Counter as a while loop with an exit on each iteration - 30.53 sec
  5. Counter with a blocking loop - 18.27 sec
  6. Original counter with a decorator - 6.96 sec


The first variant, representing the automaton counter model in full, i.e. both the counter itself and the stopwatch, has the longest running time. The running time can be reduced by giving up, so to speak, some principles of the automaton technology. Accordingly, in variant 2 the calls to predicates and actions are replaced by their code. This saves the cost of the method calls, and quite noticeably: the running time drops by more than 30 seconds.



We saved even more in variant 3 by creating a simpler counter implementation, though still with an exit from it on each iteration of the counting loop (an imitation of coroutine operation). By eliminating the suspension of the counter (see variant 5) we achieved the largest reduction in the counter's running time, but at the same time we lost the advantages of coroutine-style operation. Variant 6 is the original counter with a decorator, repeated from [2], and it has the shortest running time. But, like variant 5, it is a blocking implementation, which cannot suit us in the context of discussing the coroutine-style operation of functions.



6. Conclusions



Whether to use automaton technology or to trust coroutines is entirely the programmer's decision. What matters here is that they know there is an approach to and technology of program design other than coroutines. One can even imagine the following exotic scenario. First, at the design stage, an automaton model of the solution is created. It is rigorously scientific, provable, and well documented. Then, for example, to improve performance, it is "mutilated" into a "normal" version of the code, as Listing 3 demonstrates. One can even imagine a "reverse refactoring" of the code, i.e. the transition from variant 6 back to variant 1, but that, although possible, is the least likely course of events :)



Fig. 2 shows slides from the video on "asynchrony" [10]. The "bad" there seems to outweigh the "good". And if, in my opinion, automata are always "good", then in the case of asynchronous programming choose, as they say, to your taste. It looks, though, like the "bad" option will be the most likely one. And the programmer should know about this in advance, when designing the program.



Fig. 2. Characteristics of asynchronous programming



Certainly, the automaton code is not entirely "without sin". It will be somewhat larger in volume. But, first, it is better structured and therefore easier to understand and maintain. Second, it will not always be larger: as complexity grows, there will most likely even be a payoff (for example, due to the reuse of automaton methods). It is simpler and clearer to debug. And, in the end, it is fully SRP and DRY. And this, at times, outweighs a lot.



It is worth paying attention, perhaps even as a matter of necessity, to what might be called a standard for designing functions. The programmer should, as far as possible, avoid designing blocking functions. To do this, a function must either only start the computation process, which is then checked for completion, or have a means of checking readiness before the call, like the select function considered in the examples. The code in Listing 4, which uses functions dating back to DOS times, shows that such problems have a long "pre-coroutine" history.



Listing 4. Reading characters from the keyboard
#include <QCoreApplication>   // Qt console application class
#include <conio.h>            // getch(), kbhit(), putch() - DOS-era console I/O
#include <cstdio>             // putchar()

/*
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    int C=0;
    while (C != 'e')
    {
        C = getch();
        putchar (C);
    }
    return a.exec();
}
*/
//*
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    int C=0;
    while (C != 'e')
    {
        if (kbhit()) {
            C = getch();
            putch(C);
        }
    }

    return a.exec();
}
//*/




Here are two variants of reading characters from the keyboard. The first variant is blocking: it halts the computation and does not execute the statement that outputs the character until the getch() function receives one from the keyboard. In the second variant the same function is called only at the right moment, when the paired function kbhit() confirms that a character is already in the input buffer. Thus the computation is not blocked.



If a function is "heavy" in itself, i.e. requires a significant amount of time, and a periodic exit from it in the manner of coroutines (this can be done without using the coroutine mechanism itself, so as not to be tied to it) is difficult or does not make much sense, then it remains to place such functions in a separate thread and then monitor their completion (see the implementation of the QCount class in [2]).
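In Python the same idea can be sketched with a worker thread whose completion flag is polled instead of being waited for (this is only an illustration, not the QCount class from [2]):

import threading

class HeavyTask:
    def __init__(self, n):
        self.done = False
        self._thread = threading.Thread(target=self._work, args=(n,), daemon=True)
        self._thread.start()          # the heavy computation runs in its own thread

    def _work(self, n):
        while n > 0:                  # stands in for a long, blocking computation
            n -= 1
        self.done = True              # completion flag checked from outside

task = HeavyTask(10000000)
while not task.done:
    pass                              # the main loop is free to do other work here
print('heavy task finished')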



A way out that avoids blocking the computation can always be found. Above we showed how asynchronous code can be created within the usual means of the language, without using the coroutine mechanism or even any specialized environment such as the VKP(a) automaton programming environment. What to use, and how, is for the programmer to decide.



Literature



1. Python Junior podcast. About asynchrony in Python. URL: www.youtube.com/watch?v=Q2r76grtNeg (in Russian, accessed 07/13/2020).

2. Concurrency and efficiency: Python vs FSM. URL: habr.com/ru/post/506604 (in Russian, accessed 07/13/2020).

3. Molchanov O. Fundamentals of asynchrony in Python #4: Generators and the Round Robin event loop. URL: www.youtube.com/watch?v=PjZUSSkGLE8 (in Russian, accessed 07/13/2020).

4. Egorov A. 48 Generators and iterators. Generator expressions in Python. URL: www.youtube.com/watch?v=vn6bV6BYm7w (in Russian, accessed 07/13/2020).

5. Memoization and currying (Python). URL: habr.com/ru/post/335866 (in Russian, accessed 07/13/2020).

6. Lyubchenko V.S. On dealing with recursion. "PC World", No. 11/02. URL: www.osp.ru/pcworld/2002/11/164417

7. Molchanov O. Python lessons cast #10 - What is yield. URL: www.youtube.com/watch?v=ZjaVrzOkpZk (in Russian, accessed 07/18/2020).

8. Molchanov O. Fundamentals of async in Python #5: Async on generators. URL: www.youtube.com/watch?v=hOP9bKeDOHs (in Russian, accessed 07/13/2020).

9. Parallel computing model. URL: habr.com/ru/post/486622 (in Russian, accessed 07/20/2020).

10. Polishchuk A. Asynchrony in Python. URL: www.youtube.com/watch?v=lIkA0TDX8tE (in Russian, accessed 07/13/2020).

11. Molchanov O. Python lessons cast #6 - Decorators. URL: www.youtube.com/watch?v=Ss1M32pp5Ew (in Russian, accessed 07/13/2020).


