How cybersecurity is transforming the IT market (Part 3)

Despite annually growing budgets and an ever-wider variety of information security tools, every year the statistics only get worse: more incidents, larger leaks, more phishing emails, and so on. Why is this happening? It may be that the growing complexity and size of information systems undermines the effectiveness of "overlaid" security controls. In the third part of this series, we will talk about security as an integral part of the architecture of software systems and networks themselves, and about information asymmetry, which in the future may turn approaches to "technical" protection upside down.







Introduction



The classical approach to information security, which took shape in the early years of the Internet, was reflected in the perimeter-based model. With this approach, the company had an internal secure segment where trusted workstations were located, and an external segment with untrusted resources, access to which was controlled. A firewall was placed between the internal and external segments and determined the rules for communicating with the outside world. This approach quickly proved ineffective: as the number of workstations on the local network grew, it became almost impossible to control each host. Attempts to monitor not only the perimeter but also internal devices led to the zero-trust approach, in which every entity must be uniquely identified regardless of where it connects from. Difficulties with the widespread implementation of zero trust led to a further development of the concept, which can be called "digital trust." With digital trust, every device and every user of the system has certain patterns of behavior that can be considered "normal." For example, a smartphone or laptop has a particular set of software installed, which can be recognized by the specific traffic it generates on the network. The programs and sites the user opens also define specific behavioral patterns. A sharp deviation from these patterns can be treated as a security incident caused by user or device spoofing or by malware.
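
To make the "digital trust" idea concrete, here is a minimal sketch of per-device behavioral baselining: a single feature (bytes sent per hour) is profiled and sharp deviations are flagged. The feature choice, class name, and thresholds are illustrative assumptions, not a description of any real product.

```python
# A toy baseline-and-deviation check for one device and one behavioral feature.
from statistics import mean, stdev

class DeviceBaseline:
    def __init__(self, history):
        # history: past observations of the feature (e.g. MB sent per hour)
        self.mu = mean(history)
        self.sigma = stdev(history) or 1.0

    def is_anomalous(self, observation, k=3.0):
        """Flag observations more than k standard deviations from the baseline."""
        return abs(observation - self.mu) > k * self.sigma

# Usage: a laptop that normally sends ~100 MB/hour suddenly sends 5 GB/hour.
laptop = DeviceBaseline([95.0, 110.0, 102.0, 98.0, 107.0])
print(laptop.is_anomalous(5000.0))   # True -> treat as a potential incident
```

Real "digital trust" systems profile many such features per device and user, but the principle is the same: the model of "normal" behavior, not the network perimeter, defines trust.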



This evolution of approaches to network security reflects a general pattern: protection methods change as information systems inevitably become more complex. However, the ever-widening gap between information security spending and the number of incidents suggests that the cybersecurity paradigm itself must change. Researchers around the world increasingly believe that such a paradigm shift will occur in the area of information asymmetry between the attacker and the defender of an information system. The asymmetry lies in the fact that the time an attacker can spend studying an information system exceeds the time spent designing it, and that the attacker only needs to find and exploit a single vulnerability, while the designer must find all of them.



Towards eliminating information asymmetry



Since any attack on an information system is always preceded by reconnaissance, the idea of making that reconnaissance as difficult as possible for the attacker seems obvious. You can, of course, block access to certain processes and devices and thereby shield them from external probing, but practice has shown that this approach is not always effective. The idea of continuously changing the parameters of the information system has gained far more traction: the information the attacker gathers becomes irrelevant the next moment. This approach is called Moving Target Defense (MTD).



The growth of interest in the topic is noteworthy and can be traced through the number of MTD publications in the leading research databases. The main breakthrough occurred after 2011, when the United States included MTD among the priority areas for the development of national security technologies. After that, a large amount of grant funding for MTD was allocated by various agencies in the USA (DARPA) and in other countries (the European Union, India, China, etc.). In 2011 there were about 50 publications on MTD; by 2017 there were more than 500 per year. In the early years, however, this did not translate into a significant technological breakthrough. Still, some MTD methods have become de facto industry standards, such as ASLR (Address Space Layout Randomization), which randomizes the memory addresses used by an application. ASLR is now used in all major operating systems.
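
You can observe ASLR directly: print a couple of addresses from a process and run the script several times; with ASLR enabled (on Linux or macOS), the addresses differ between runs. This is only a demonstration of the randomization, not a protection mechanism in itself.

```python
# A small demonstration of ASLR: run this script several times and compare the output.
# With ASLR enabled, the library and heap addresses change from run to run.
import ctypes

libc = ctypes.CDLL(None)                 # the C library already loaded into this process
buf = ctypes.create_string_buffer(64)    # a heap-allocated buffer

print("libc printf address:", hex(ctypes.cast(libc.printf, ctypes.c_void_p).value))
print("heap buffer address:", hex(ctypes.addressof(buf)))
```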



There are not many overlaid MTD-based security products that have managed to reach the market and start selling. One example is Morphisec, which is installed on endpoints and acts as a randomizing layer over the memory regions an application uses. Another is the CryptoMove application, which protects a secret by encrypting it and distributing the result across multiple nodes using MTD.



MTD solutions for obfuscating local network parameters and addresses have gained even less traction. Most of these developments remained theoretical studies and never made it into the products of large information security vendors. Although MTD techniques have become de facto standards for working with memory, full-fledged protection of information systems according to the MTD methodology has not materialized. The reason such an elegant theory with objectively demonstrated effectiveness has fallen short probably lies not in its ineffectiveness, but in the difficulty of adapting MTD methods to real systems. A system cannot be completely unique: some application components must understand others, communication protocols must be universal, and the software structure must be recognizable to the consumer.

Throughout its history, IT has followed the path of standardization and maximum unification of processes, and it is precisely this path that has led to the cybersecurity problems we have now. The key conclusion from the problem of information asymmetry is the need to abandon the paradigm of a uniform functional structure of information systems and move to the principle of maximum randomization of their parameters. As a result, the system itself, by virtue of its uniqueness, acquires an "immunity" to attacks and helps close the gap created by information asymmetry.



It has proved difficult to turn most MTD methods into distinct product solutions. For example, many developments have focused on randomization to protect against code injection. The simplest, yet effective, method is randomization of interpreted code commands. For example, a random number is appended to classic SQL keywords, without which the interpreter does not treat them as commands. Say INSERT is only interpreted as a command when it carries a unique code known to the interpreter: INSERT853491. In that case SQL injection becomes impossible even if a vulnerability caused by missing parameter validation really exists. Although this method is effective, it obviously cannot be implemented with "overlaid" security features; it must be part of the logic of the database server itself. Another important approach to system randomization is code diversification.
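
As a rough illustration of this keyword-randomization idea (known in the research literature as instruction-set randomization, e.g. the SQLrand prototype), here is a minimal Python sketch. The keyword list, tag format, and function names are illustrative assumptions; in a real deployment the derandomizing side would live in a proxy in front of the database server.

```python
# Toy SQL keyword randomization: the application emits tagged keywords (INSERT853491);
# the proxy strips the tag and rejects any bare keyword, which could only have been
# injected through user input.
import re
import secrets

KEYWORDS = ("SELECT", "INSERT", "UPDATE", "DELETE", "UNION", "DROP", "WHERE", "OR", "AND")

def make_tag():
    """A per-deployment secret suffix, e.g. '853491', shared by the app and the proxy."""
    return str(secrets.randbelow(900_000) + 100_000)

def randomize(sql, tag):
    """Application side: append the tag to every keyword the application itself emits."""
    return re.sub(r"\b(" + "|".join(KEYWORDS) + r")\b", r"\g<1>" + tag, sql)

def derandomize_or_reject(sql, tag):
    """Proxy side: any keyword without the tag is treated as an injection attempt."""
    for kw in KEYWORDS:
        # Tagged keywords (e.g. INSERT853491) do not match because of the trailing \b.
        if re.search(rf"\b{kw}\b", sql, flags=re.IGNORECASE):
            raise ValueError(f"possible SQL injection: untagged keyword {kw!r}")
    return re.sub(r"\b(" + "|".join(KEYWORDS) + r")" + re.escape(tag) + r"\b", r"\g<1>", sql)

# Usage: an injected "' OR '1'='1" contains a bare OR and is rejected, while the
# application's own "INSERT853491 INTO users ..." passes and is restored to INSERT.
```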



Diversification of program code



Code diversification implies that we can clone a program functionally while modifying its code. There is a huge amount of research on this topic, but most of this work has remained at the R&D level and has not turned into commercially interesting solutions. As a rule, these are tools that "inflate" the number of logical constructs while adding essentially zero functionality, or that perform template substitution of certain sections of code. In the end, however, the diversified program often had the same vulnerabilities as the original one.



The main problem with this approach is that already-written code is fed to the diversifier. The diversifier cannot "understand" the meaning of particular software constructs, so it cannot truly diversify them; it only replaces one template fragment of code with another or generates additional "useless" code.
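
As a toy illustration of such template-level diversification, and of its limits, here is a sketch that produces functionally identical clones of a function by inserting dead code. The AST-level transformation, names, and sample function are illustrative assumptions, not how a real diversifier works (those typically operate on intermediate representations or machine code).

```python
# Produce structurally different but functionally identical clones of a function
# by inserting a never-used assignment at a random position in its body.
import ast
import random

SOURCE = """
def checksum(data):
    total = 0
    for b in data:
        total = (total + b) % 256
    return total
"""

def diversify(source, seed):
    random.seed(seed)
    tree = ast.parse(source)
    func = tree.body[0]                                  # the FunctionDef node
    junk_name = f"_pad_{random.randrange(10**6)}"
    junk = ast.parse(f"{junk_name} = {random.randrange(10**6)}").body[0]
    func.body.insert(random.randrange(len(func.body)), junk)   # harmless dead code
    return ast.unparse(tree)                             # requires Python 3.9+

print(diversify(SOURCE, seed=1))
print(diversify(SOURCE, seed=2))   # a different, functionally identical clone
```

Because the transformation does not understand the program's semantics, any vulnerability present in the original function survives unchanged in every clone, which is exactly the limitation described above.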



In order to solve the diversification problem radically, we need automatic generation of application code. If we can eliminate the programmer's labor of writing specific instructions and algorithmic constructs, we will solve the diversification problem as well. Code auto-generation assumes that a program can be created at a higher level, for example from a list of functional requirements or graphical relationships, with the code generated automatically from that description.



There are a number of approaches to code generation that have gained popularity over the past few years.





A logical question may arise: what does security have to do with this? A revolution in code generation technologies will ultimately lead to a revolution in cybersecurity. If you open the CVE database, you will find that more than 90% of vulnerabilities stem not from logical errors in software design but from their specific implementation in program code (whether hardware vulnerabilities in CVE should also be counted here is debatable). If development moves to a higher level of abstraction, this has two consequences:



  1. The probability of vulnerabilities appearing as a result of a programmer's error drops sharply, because the low-level code is no longer written "by hand" but produced by the generator.
  2. The generated code can be diversified. The same high-level description can be turned into many functionally identical but structurally different implementations, with elements of pseudo-randomness added at a low level. As a result, an attacker can no longer rely on knowledge of a standard, widely shared code base.


Accordingly, the generated "software," by virtue of being unique and unknown to the attacker, eliminates the information asymmetry that existed before. This "blurring" of functionality and software parameters creates an insurmountable barrier for an attacker, even if vulnerabilities are present in the system: deprived of the information advantage, the attacker will simply never find them.



The new reality of information systems



As you can see, there is a clear tendency toward overcoming the information asymmetry between the attacker and the defender of information systems, which can be expressed in the following features:



  1. Maximum use of pseudo-randomness at development time (in the data model, machine instructions, functions, etc.), independent of external protocols and interaction interfaces.
  2. Transition to dynamic key parameters both at the design stage and while the information system is in operation.


This will not solve every cybersecurity issue, but it will certainly transform the information security market significantly.



Here we will face several key changes in the industry:



  1. The end of the era of viruses and antiviruses. If antiviruses were once almost synonymous with cybersecurity products, today their market share has shrunk significantly. If, ultimately, all software flaws exist only at the logical level, with no way to exploit coding errors, the very concept of malware will become a thing of the past. That will be the end of an entire technological era of cybersecurity, and perhaps of some vendors who are not restructuring their businesses right now.
  2. The rise of attacks on people (social engineering). Phishing and similar attacks target the user rather than the code, so they will remain even when code-level flaws disappear. Artificial intelligence (AI) and machine learning (ML) methods are increasingly used to counter them, in particular NLP-based analysis of messages (NLP, Natural Language Processing) to recognize phishing emails; research prototypes such as PhishNetd-NLP work in this direction. Attackers, in turn, are getting new tools of their own, such as deepfake technologies (Deepfake).
  3. . , - «» .
  4. The transformation of overlaid network protections. Tools for analyzing network traffic and application requests (network attack detection systems, etc.), including web application firewalls (Web Application Firewalls), will increasingly be built around distinguishing "normal" behavior from "anomalous" behavior rather than around signatures of known attacks (as, for example, Darktrace already does).





The IT and information security markets exist in conjunction and influence each other technologically. Security is a cross-cutting problem of information systems. A separate cybersecurity market exists today only because most security tools are overlaid on top of systems, but this may change in the near future. Only the systems for controlling user behavior in a company (DLP, activity monitoring, UEBA, etc.) can feel reasonably confident: they are likely to retain their "separate" market, while systems for detecting network attacks, penetration testing, code analysis and the like will be transformed as the information asymmetry of information systems design is overcome.



The most significant changes will occur in the area of coding. Even if we do not switch to 4GL development in the coming years and no revolution happens here, the principles of diversification will still become the general rule, just as ASLR is such a rule now. And here there are not only the obvious benefits of faster development (and possibly lower requirements for developer qualifications), but also advantages for cybersecurity: a lower probability of vulnerabilities introduced by programmer error, plus the ability to further diversify the code at a low level by adding elements of pseudo-randomness to it. Of course, this transition will not happen quickly. The main obstacle to innovation may be that such tools are not "overlaid" on information systems, and it is therefore unlikely that startups and high-tech businesses will be the drivers of progress.



The second important component is general diversification and the transition to dynamic parameters of information systems. If, for example, you design a local area network with dynamic IPv6 addressing using the MTD methodology, unauthorized hosts can be prevented from joining the network: they will simply be "rejected," like a foreign body in an organism. Similarly, applying MTD to other processes makes any unauthorized deviation from normal operation difficult. The system thereby acquires a kind of immunity to unauthorized modification and intrusion.
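
A minimal sketch of what such dynamic addressing could look like: legitimate hosts derive their current IPv6 address from a shared secret and the current time window, so an outsider who does not know the secret cannot even compute a valid address to talk to. The prefix, secret, rotation period, and function names below are illustrative assumptions, not a description of any particular MTD product.

```python
# Illustrative MTD-style rotating IPv6 addresses: hosts that know SECRET can always
# compute each other's current address; packets sent to stale or guessed addresses go nowhere.
import hashlib
import hmac
import ipaddress
import time

PREFIX = ipaddress.IPv6Network("2001:db8:ab:cd::/64")   # documentation prefix (assumption)
SECRET = b"shared-secret-known-to-legitimate-hosts"     # distributed out of band
PERIOD = 300                                            # seconds between address changes

def current_address(host_id, when=None):
    """Derive the host's address for the current time window."""
    window = int((when if when is not None else time.time()) // PERIOD)
    digest = hmac.new(SECRET, f"{host_id}:{window}".encode(), hashlib.sha256).digest()
    interface_id = int.from_bytes(digest[:8], "big")     # low 64 bits of the address
    return PREFIX[interface_id]                          # network prefix + derived suffix

print(current_address("workstation-42"))
```

Every legitimate node recomputes its own address (and those of its peers) each PERIOD, while a scanner sees a /64 in which the live addresses keep moving.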



How this can significantly affect the IT market:



  1. , - , .
  2. (open-source) open-source . «algoend»- . open-source — . , :

    • open-source ( ),
    • open-source , «algoend».


    , open-source , .
  3. AI / NLP . , — , ( ) , . NLP-, — NLP .
  4. . , (deception) . deception-, «» , .




Taken together, one can predict that in the long term we will increasingly move away from cybersecurity problems associated with errors in program code, until the elimination of information asymmetry reduces this class of problems to nearly zero. But this will not solve all cybersecurity issues. Security is, in essence, a matter of superiority in understanding how an information system functions and of the ability to control those processes. The more complex systems become, the greater the likelihood of logical errors in their operation and of the influence of the human factor.



A fairly clear trajectory emerges: in the future, the cybersecurity market will be concentrated around solutions for managing the human factor, while low-level security will cease to exist as overlaid protection and will become an integral part of platform IT solutions.



How cybersecurity is transforming the IT market (Part 2)

How cybersecurity is transforming the IT market (Part 1)



Original article published here


