"Docker is already dead" or whatever you wanted to know about Devops but were afraid to ask



Recently, Alexander Chistyakov, a DevOps engineer with 7 years of experience and co-founder of the St. Petersburg community of DevOps engineers, spoke on our social networks.



Sasha is one of the most prominent speakers in this field: he has spoken on the main stages of HighLoad++, RIT++, PiterPy, and Strike, giving at least 100 talks in total. Last Monday he answered viewers' questions and talked about his experience.



We share the broadcast recording and transcript.






My name is Alexander Chistyakov, and I have been working as a DevOps engineer for many years. For a long time I have been advising various companies on implementing DevOps practices, using modern DevOps tooling, and organizing infrastructure so that we can all sleep peacefully at night and people keep getting paid for their goods and services.



I have mostly consulted for foreign companies.



We will talk about how people use Kubernetes in everyday practice, why it is needed, why you should not be afraid of it, what you should pay attention to, and what will happen next.

I think sysadmins, DevOps engineers, CIOs, and other people who manage (and maintain) infrastructure will find it useful.



How did this landscape develop? I remember computers with BASIC in ROM, on which you could write programs without any OS installed at all. Much water has flowed under the bridge since then. At first there was no OS as such (more precisely, OSes were written in assembler). Then the C language appeared, and the situation improved dramatically. Of course, now we are all familiar with the concept of an OS: it is a platform that lets us run user applications and manage the resources of the computer - or of several computers, if the system is distributed. Even back then you could assemble a high-performance computing cluster out of a laptop and a desktop - that is what students did in a dormitory of the St. Petersburg Polytechnic Institute in 1997.



Then it turned out - I read the article about this maybe 10 years ago - that Google, which invented web mail, was building an operating system around that very web mail, so that users could work from tablets and phones. This was unexpected: usually an operating system is something that runs binaries, and when you look at an application through a browser, you do not even know whether there is a binary behind it. You can read mail, chat in a messenger, draw slides, edit documents together. It turned out that this suits a lot of people.



True, Google was not very consistent and made many products that were not needed and never went beyond the prototype - for example, Google Wave. Well, that is the policy of any large company: move fast and break things, until the company ceases to exist.



Nevertheless, a shift has happened in the mass consciousness: the OS is now seen as a platform that provides not the services once approved and assigned to it by some standards committee, but simply whatever satisfies the needs of users. What are these needs?



It used to be customary to ask a developer what they write in. There were C++ specialists (there probably still are, somewhere), now there are many PHP specialists (they sometimes laugh at themselves) and a great many JavaScript developers. TypeScript and the GoLang language, which people with a PHP background have been switching to, are gaining popularity now. There was the Perl language (it probably still exists, but has lost much of its popularity), the Ruby language. In general, an application can be written in anything, and if you live in the real world, you have probably run into the fact that they are written in anything: JavaScript, Rust, C - name a language, and something has been written in it.



And all of this has to be operated. It turned out that in such a heterogeneous system you need, first, specialists who develop in different languages, and, second, a platform that lets you launch a service in a comfortable environment and manage its life cycle. There is a certain contract with this platform, which in the modern world looks like this: you have a container image (the container management system is now everywhere Docker, although I cannot say anything good about it; we will talk about its problems later).



Even though humanity moves along a certain evolutionary process, this process converges. In our industry it turns out that someone is still writing code in Perl (under Apache mod_perl) while someone else is already writing a microservice architecture in GoLang. It turned out that the contract with the platform is very important, the content of the platform is very important, and it is very important that the platform helps a person, because doing manual operations to package and start a service becomes very expensive. I have dealt with projects that had 20 services - and that was not a very big project; I have heard about guys who have a thousand different services. 20 is a normal amount; each set of services is developed by its own team in its own language, and they are connected only by the exchange protocol.



Regarding how the contract for the application works: there is the "twelve-factor app" manifesto - 12 rules for how an application should be arranged so that it is convenient to operate. I do not like it. In particular, it says that you should deliver configuration via environment variables; and I have already run into the fact that on Amazon, for example, the number of environment variables you can pass to Elastic Beanstalk is limited to one Linux kernel page - 4 kilobytes. And they overflow very quickly: when you have 80 different variables, it is hard to squeeze in an 81st. Moreover, it is a flat configuration space: when you have that many variables, which by convention are named in capital letters with underscores, and there is no hierarchy among them, it is hard to understand what is going on. I have not yet figured out how to deal with this, and I have no one to discuss it with - there is no group of enthusiasts opposed to this approach. If this does not suit you either, write to me (demeliorator on Telegram), so I will know I am not alone. It is difficult to manage, difficult to pass hierarchical data; it turns out that the job of a modern engineer is to know where the variables are, what they mean, whether they are set correctly, and how easy they are to change. It seems to me that the good old configuration files were better.
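
To make the flatness problem concrete, here is a minimal sketch (all names and values are made up for illustration): the same settings once as 12-factor-style environment variables in a Kubernetes container spec, and once as the hierarchical file you could mount into the container instead.

    # Flat, 12-factor style: every setting is one UPPER_SNAKE_CASE string;
    # on some platforms the whole set has to fit into ~4 KB
    env:
      - name: DB_PRIMARY_HOST
        value: "db1.example.com"
      - name: DB_PRIMARY_PORT
        value: "5432"
      - name: DB_REPLICA_HOST
        value: "db2.example.com"
      - name: DB_REPLICA_PORT
        value: "5432"
    ---
    # The same configuration as a hierarchical file mounted into the container
    db:
      primary:
        host: db1.example.com
        port: 5432
      replica:
        host: db2.example.com
        port: 5432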



Returning to the contract: it turns out you need a Docker image, which means you need Docker itself (despite the fact that it is in poor shape - I hope some Microsoft will buy them and either bury it or develop it properly). If you are not happy with Docker, you can try Red Hat's stack; I cannot say much about it, although it seems to me it should be better than plain vanilla Kubernetes. The guys from Red Hat pay much more attention to security; they know how to do multi-tenant installations - multi-user, multi-client, with proper separation of rights - in general, they make sure that rights management is in order.



Let's dwell on security. Things are bad with it everywhere, not only in Kubernetes. If we are talking about security and container orchestration, I have high hopes for server-side WebAssembly: for WebAssembly applications it will be possible to limit resource consumption and restrict system calls by wrapping them in special sandboxes written in Rust. That would be a good answer to the security question. Kubernetes itself has no security to speak of.



Let's say we have an application. It is a Docker image, and it is twelve-factor - that is, it can take its configuration from environment variables or from a file that you mount into the container. It can be launched and is self-sufficient inside; you can try to link it with other applications through configuration, automatically. And it should write logs to standard output - which is probably the least evil: when a container writes logs to files, they are not easy to collect from there. Even Nginx was patched so that logs could be collected from the container's standard output; that is acceptable (as opposed to passing the configuration through a parameter). We used to have several orchestrators - Rancher, Marathon on Mesos, Nomad - but, as the Anna Karenina principle goes when applied to technical systems, all successful complex technical systems end up arranged alike.
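
As a minimal sketch of that contract in Kubernetes terms (the image name and paths here are hypothetical), a pod spec ties the pieces together: the container image as the unit of delivery, configuration via environment or a mounted file, and logs going to standard output.

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app
    spec:
      containers:
        - name: app
          image: registry.example.com/app:1.0   # the container image is the unit of delivery
          env:
            - name: LOG_LEVEL                   # 12-factor style: config via environment
              value: "info"
          volumeMounts:
            - name: config                      # ...or via a file mounted into the container
              mountPath: /etc/app
          # the application writes logs to stdout; the platform collects them from there
      volumes:
        - name: config
          configMap:
            name: example-app-config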



With Kubernetes we are in a situation like the airlines with the Boeing 737 MAX: it is not flying because it has a flaw, but there is nothing else, and the design is very complex. I cannot say that I like it - just as I do not like the GoLang language, or control through the YAML format, where you have some syntax and nothing on top of it: no checks on what you write, no data types. All the checks you can run before applying a configuration to Kubernetes are rudimentary. You can apply a wrong configuration, it will be accepted without question, and you will not know it. It is very easy to write a wrong config file. This is a big problem, and people have already started slowly solving it with DSLs for Kubernetes in languages like Kotlin and even TypeScript. There is the Pulumi project, and there is Amazon's EKS project, although that one is more Amazon-focused; Pulumi is essentially Terraform, only in TypeScript. I have not tried Pulumi yet, but I believe it is the future. Configuration should be described in a programming language with strong static typing, so that before you apply something that can potentially destroy the cluster, you would at least be told at compile time that it is not possible.
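
Here is a hedged illustration of that failure mode (all names are made up): the two manifests below are perfectly valid YAML and will apply without complaint, but the Service selector does not match the Deployment's labels, so the Service silently sends traffic to nothing - exactly the kind of mistake a typed configuration language could catch before apply.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: wbe        # typo: should be "web"; kubectl applies this without question
      ports:
        - port: 80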



Thus, at the moment there is only one orchestrator. I know there are still some users of the other orchestrators out there - I shake their hands; I hope nobody uses Docker Swarm anymore - my experience with it was quite negative. I believe it could have been otherwise, but I do not foresee any further development of Docker Swarm, and I do not think the people who release it are going to do anything with it now. Capitalism: if you do not make money, there is nothing to spend on development - and their company has been in the startup "valley of death" for the last two years; no one wants to give them money. You can place bets on who will buy them. Microsoft was not interested. Maybe some Micro Focus will do it, if Docker survives.

Since only Kubernetes is left, let's talk about it. There is a beautiful picture with a pentagram of its five binaries; the caption says Kubernetes is very simple - just five binaries. But I am far from measuring the complexity of a system by the number of binaries it compiles into or the number of services that make up its core. It does not matter how many binaries there are - what matters is what Kubernetes can do and how it works internally.



What can it do? Just imagine that you need to explain to a five-year-old child what you did at work. And now dad - who tried to write Ansible playbooks and roles that would do blue-green deployment on top of Nginx on a host, with a set of containers registered nowhere except in that same Ansible - says: "You know, son, I tried to make my own Kubernetes. It does not work well, it is poorly tested, I do not understand it well, I do not know all the boundary conditions, it only works within a single machine - but it is mine!" I have seen this many times - well, I watched it happen 2 or 3 times, and twice I participated in writing something like it. Sooner or later a person who participates in this realizes that there should not be a fourth time. It is like my car friends who once restored a VAZ-2101: installed power windows, re-upholstered the interior in flock, painted it metallic. Building your own orchestrator is something like that - you can try it once to prove to yourself that you can, but I am not ready to recommend it to anyone except enthusiasts. So lifecycle management, container state management, is Kubernetes' job.



It can make sure a container runs on a node that has resources, it can restart a dead container, and it can make sure that if a container does not start, traffic will not be routed to it during a new deployment. We also started by saying that Kubernetes is an OS, and where there is an OS there should be a package manager. When Kubernetes began, object descriptions were imperative; a StatefulSet or pod description works directly, and you need to add something on top to manage the state of your [recording glitch]. The radical difference from Ansible and similar configuration management systems is that in Kubernetes you describe what you should end up with, not how to get there. You naturally describe what objects you have and what properties they have. The objects are Service, Deployment, DaemonSet, StatefulSet. Interestingly, besides the objects that can be created out of the box, there are also custom objects that you can describe and create in Kubernetes. That is very useful; it will also greatly thin the ranks of sysadmins and DevOps engineers.
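
The "restart a dead container" and "traffic will not go to it" behaviors map to two probe types in a container spec. A minimal sketch (the image, paths, and port are made up):

    # Fragment of a container spec with health checks
    containers:
      - name: api
        image: registry.example.com/api:2.1
        livenessProbe:               # if this starts failing, Kubernetes restarts the container
          httpGet:
            path: /healthz
            port: 8080
        readinessProbe:              # until this succeeds, the pod receives no Service traffic
          httpGet:
            path: /ready
            port: 8080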



When will Kubernetes die?



Good question. It depends on what "die" means. Take Docker: a year ago we gathered at a conference in St. Petersburg, there was a round table, and we decided together (well, since we are an industry, I think there was a qualified majority there, so we could afford to speak for everyone) that Docker is already dead. Why? Because there were no talks about Docker at the conference, even though it is not that old. Nobody said anything about it. We talked about Kubernetes and about the next steps - Kubeflow, for example, about using operators, about hosting databases on Kubernetes. Anything but Docker. That is death - when you are doing so badly that you seem to be alive, yet nobody comes to see you.



Docker is already dead. As for when Kubernetes dies - let's wait 5 years. It will not disappear; everyone will have it - it will be inside a Tesla, inside your phone, everywhere - and no one will find it interesting to talk about anymore. I think that is death. Maybe not even in 5 years, but in 3. Another question is what will replace it: some loud new technology everyone talks about, perhaps not from the DevOps world at all. Right now people talk about Kubernetes even just to keep a conversation going, and that is fine - it is fashionable.



What's wrong with Docker?



Everything. It is a single binary that manages everything, a service that has to run on the system, a thing that is also controlled through a socket. It is a product with a lot of code that, as far as I can tell, nobody has reviewed. It is a product behind which, by and large, there is no enterprise money. Red Hat has very smart people - I respect them a lot, and if you are an average engineer, you should watch what they are doing, because it may define the landscape for the next 5 years - and Red Hat has ditched Docker altogether. They are moving towards not having it at all; they cannot quite finish the job yet, but they are close, and sooner or later they will finish Docker off. On top of everything I have listed, Docker has a huge attack surface. There is no security there. Not many CVEs have been filed against it, but if you look at them, it is clear that, as in any other project where safety is not at the forefront, it is dealt with on a leftover basis. That is the law: security is slow, expensive, dreary, it restricts developers and greatly complicates life. Getting security right is hard work, and you have to pay for it. Talk to any security professional, of any qualification, and you will hear plenty of horror stories about Docker and about how bad things are. They are partly about Docker itself, partly about the people who operate it; but Docker itself could help people and carry out some security checks on its own - for example, not start a process in a container as root unless explicitly told to do so.



How do you store state? Can I host a database on Kubernetes?



If you ask a DBA, or the person who was responsible for the database's infrastructure before you decided to put it on Kubernetes, that person will say no. I think at many round tables people will say that there should not be any databases on Kubernetes: it is difficult, unreliable, and unclear how to manage.



I believe that databases on Kubernetes have a right to exist. How reliable is it? Well, look: we are dealing with a distributed system. When you put a database into a cluster, you must understand that you have a fault-tolerance requirement. If you have such a requirement, then most likely what you put inside Kubernetes is a database cluster. How many people in the modern world know how to run a proper database cluster? Do many databases even provide clustering capabilities? This is where the division begins between traditional relational databases and non-relational ones. Their difference is not that the latter do not support SQL in some form; the difference is that non-relational databases are much better suited to running in clusters, because they were originally written to be distributed. So if you want to host some MongoDB or Cassandra on Kubernetes, I cannot dissuade you, but you should have a very good idea of what happens next. You should understand very well where your data lives, what happens on failure and recovery, how backups are taken, where the recovery point is, and how long recovery will take. These questions have nothing to do with Kubernetes; they are about how you operate a cluster solution on top of an ordinary database at all. It is easier with NoSQL solutions; they are cloud-ready out of the box.
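
For reference, here is a minimal sketch (names and sizes are made up) of the usual starting point for state on Kubernetes: a StatefulSet whose volume claim template gives every replica its own persistent disk. Note that this answers none of the replication, backup, or recovery questions above - it only pins the data down.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: pg
    spec:
      serviceName: pg
      replicas: 3
      selector:
        matchLabels:
          app: pg
      template:
        metadata:
          labels:
            app: pg
        spec:
          containers:
            - name: postgres
              image: postgres:13
              volumeMounts:
                - name: data
                  mountPath: /var/lib/postgresql/data
      volumeClaimTemplates:          # each replica gets its own PersistentVolumeClaim
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 20Gi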

But still, the question arises: why put a database on Kubernetes at all? You can take a managed service from your provider - RDS on Amazon, a managed database on Google. You can even install and use a geographically distributed cluster of the database; on Amazon that is Aurora. But if you are going to deploy a geo-distributed database cluster, read the documentation carefully; I have come across Aurora clusters with a single node that were not even split across two regions - and the second region was not needed at all. In general, people have very strange things in their heads: they believe the main thing is to pick a product, and then it will take care of itself and work as it should. No. Relational databases were not ready to work even in an ordinary cluster, let alone a geo-distributed one. So if you are building something complex on top of them, read the documentation.



In fact, I do have experience operating a database inside Kubernetes. Nothing terrible happened. There were several crashes caused by a node going down; the failover to another node worked normally. Everything was under control. The only thing is, I cannot please you by saying there were many thousands of requests per second.



If you have a medium or small solution - a medium solution for Russia roughly corresponds to a large news outlet like Lenta - then there should not be a large number of complex queries against the database. If the database cannot cope, you are probably doing something wrong, and you need to think about it rather than scale up mindlessly. Moving a non-clustered solution to a cluster has its advantages, but also a large number of disadvantages - especially if you take a PostgreSQL cluster based on Patroni or Stolon. There are many boundary conditions; I have not run into them myself, but my colleagues from Data Egret will be happy to tell you how it goes. There is a wonderful talk by Alexey Lesovsky about what happens if you try to move your database to a cluster without thinking.



About thousands of requests per second: if you have a single database instance, then tuning it to handle thousands of requests per second is still scaling up, and sooner or later you will run into trouble. It seems to me that if a heavy load is planned, it is better to consider horizontal scaling options. Find the largest table in your database, look at what is in it, and think about whether it can be moved to non-relational storage. Think about how not to store in a relational database the things you habitually store in it - for example, access logs, of which there are a lot and which you usually access by the same pattern you would use with a key-value store. So why not write the logs to Cassandra? That is a question for an architect. Keeping a small, not-very-busy database instance in Kubernetes is exactly the right thing, because operator magic starts taking responsibility for it.



Basically, if you look at Kubernetes as an OS and a platform, it is a do-it-yourself construction kit. There is everything you need to build a microservice architecture, including the ability to store state on disks, organize telemetry, monitoring, and alerting. This is where Helm, the Kubernetes package manager, comes in. There is a huge number of open-source, publicly available Helm charts on the Internet. The easiest way to stand up a project's infrastructure is to take a Helm chart that installs your application or service - whether that service is Redis, PostgreSQL, Patroni, whatever - start configuring it, and apply that configuration. It is completely declarative; whatever can be anticipated, the chart authors usually provide for. Your own application is best released with Helm too. Helm 3 contains no cluster-side services; Helm 2 did - it had a permanently running service (Tiller) with cluster administrator rights, needed to distribute releases across namespaces - and Helm 3 closed that security hole.

Helm is a template engine built on Go template syntax. It takes a list of variables, which form a non-flat structure (thank God, hierarchical - written in YAML); Helm substitutes these variables into the right places in the templates, then you apply all of it in some namespace, and your pods, services, and roles get created there. There is a scaffolding generator that lets you write a Helm chart practically on autopilot. The only thing I do not like is having to know the Go templating syntax, and the conditionals in Helm itself: they are built on the Lisp principle, with prefix notation. It is good that someone likes it, but it forces your head to switch gears every time. Well, we will get over it.
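
A minimal sketch of what he is describing (the values and names are made up): a hierarchical values file, and a template that substitutes them and uses a prefix-notation conditional.

    # values.yaml - hierarchical, thankfully not a flat list of variables
    service:
      type: ClusterIP
      port: 8080

    # templates/service.yaml - Go template syntax; note the Lisp-like prefix call (eq x y)
    apiVersion: v1
    kind: Service
    metadata:
      name: {{ .Release.Name }}-svc
    spec:
      type: {{ .Values.service.type }}
      ports:
        - port: {{ .Values.service.port }}
      {{- if eq .Values.service.type "LoadBalancer" }}
      externalTrafficPolicy: Local
      {{- end }}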



Now a little about what comes next. I already mentioned operators; these are services that live inside the Kubernetes cluster and manage the lifecycle of another, larger application. They are quite complex. You can think of an operator as your silicon site reliability engineer; in the future people will surely write more and more operators, because nobody wants to keep a shift of first-level support people who would watch the Nagios dashboard, notice outages, and take manual action. The operator understands what states the system can be in; it is a state machine. It is a concentration of human knowledge, written in GoLang or the like, compiled, put into the cluster, and doing a lot of work for you: adding or removing nodes, reconfiguring, making sure that what fell down gets back up, that the data ends up attached to the right pods. In general, it manages the lifecycle of whatever is installed underneath it. There are operators now for literally everything. I recently had fun putting Ceph directly into a Kubernetes cluster using the Rook operator. I watched how it happens, and I am very pleased; I think more operators are needed, and we should all take part in testing them. The time you spend fixing someone else's operator is a gift to humanity: you no longer need to do the same job over and over; you put that work, in alienable form, into a repository, and then a smart program does it for you - is that not happiness?
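
For flavor, here is a trimmed sketch of what driving Rook looks like, written from memory - treat the exact field names as approximate and check the Rook documentation before using it. You declare a Ceph cluster as a custom resource, and the operator does the rest:

    apiVersion: ceph.rook.io/v1
    kind: CephCluster
    metadata:
      name: rook-ceph
      namespace: rook-ceph
    spec:
      cephVersion:
        image: ceph/ceph:v14     # which Ceph to run; the operator pulls and rolls it out
      dataDirHostPath: /var/lib/rook
      mon:
        count: 3                 # the operator creates and maintains three Ceph monitors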



I downloaded the Red Hat book - they give it away for free - about how Kubernetes operators work and how to write your own; I recommend you go to their website and download it too, the book is very interesting. When I have read it in full, maybe we can analyze some operator together.



Who supports Swarm, besides Docker Inc.?



Nobody. And Docker Inc. has already been torn into two halves, and one half was sold off somewhere. I do not follow what is happening with them; at one point they renamed the open-source part Moby; something was sold along with the rights, and something was not. To me it looks like "the patient sweated before dying" - people trying to somehow recover the money they invested. In general, nobody supports it.



Master / Slave



It works. If master/slave replication stopped working in a relational database, that would be the end of the world. Kubernetes does not interfere with the database's operation in any way; the only things it adds are various health checks, which you can disable if you want, and, in principle, state management. It is advisable not to disable them: the operator that manages your database should see its state.



What's the name of the Red Hat book?



It is called Kubernetes Operators. There is a duck drawn on it. It is an O'Reilly book - they have redesigned their covers now; quite a few books have been published in collaboration with Red Hat. Red Hat has 3 or 4 more Kubernetes books you can download for free: for example, Kubernetes Patterns, and one on gRPC. The gRPC protocol, although not directly related to Kubernetes, is used by many to exchange data between microservices. It is also used in next-generation load balancers, for example in Envoy.



What is an SRE?



It is the kind of person who understands the time budget of changes happening in a distributed system. Roughly speaking, he understands how much downtime you have had this month, how much more you can afford, and whether he can give permission for a release. He holds the keys to backups, recovery plans, disaster recovery, and infrastructure maintenance for a production application that should work 24/7. He has metrics on the state of the application and of the business: latency, how many requests go where - those same four golden signals. Based on these metrics, the SRE can take steps to bring the system back to combat readiness, and he has a plan for how to do it. For some reason this is not required of a classic DevOps engineer, who just helps developers bring an application to release and roll it out somewhere. The SRE also withstands the flow of requests coming from all sides.



I promised to talk about security. You know, this is a fairly young topic in Kubernetes. I only know the very basics: for example, that you should not run an application in a service as root, because as soon as that happens it has access to everything in the namespace and superuser rights, and it can try to break the host system kernel - which will probably succeed (and perform any other operations as root). Do not give attackers hints like that; where possible, give users as few rights as possible and sanitize user input well. It seems to me that if we are talking about Kubernetes security, it should be moved into the closed perimeters we currently have. Or, if you really want to dig into security, check out the Cilium project. It does BPF-based filtering and separation of network traffic, which works better than iptables. It is the future. It seems to me that hardly anyone uses it outside California yet, but you can already start. Maybe some other project will appear, but I doubt it: humanity is short of working hands.
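
The "do not run as root, give as few rights as possible" advice translates into a pod securityContext. A minimal sketch (the uid and image are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-app
    spec:
      securityContext:
        runAsNonRoot: true             # refuse to start the pod if the image insists on uid 0
        runAsUser: 10001
      containers:
        - name: app
          image: registry.example.com/app:1.0
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]            # hand the process as few kernel capabilities as possible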



Therefore I have nothing special to say about security. There are various options for multi-tenancy in Kubernetes, but you have to sit down with a pencil and figure out what people did, which vulnerability they closed, whether it made sense, and whether it applies to your threat model. By the way, I recommend starting by finding a description of how a threat model is built and building one for yourself; there are more or less formal methods. Draw it, look at it, and maybe an insight will come and you will understand what you need and what you do not in your current situation. Maybe it will be enough to put all of Kubernetes into a closed perimeter. That, by the way, is a sound decision; I have encountered it, and it is impenetrable. If you can only get access to the system by showing a pass at the front desk, there is no Internet inside, and all exchange goes through a special gateway in the DMZ that is hard to break because it is written in an unusual way - then it is a well-protected system. How to achieve that by technical means - well, you have to watch the current market of solutions. It changes a lot; security is a hot topic, and there are a lot of players trying to get in, but which of them is lying and which is not, I do not presume to say. Red Hat is probably not lying, but it is not ahead of everyone either. You just need to do the research (and so do I), because it is not clear yet.



Let's talk about what else a Kubernetes cluster should have, since we now have the ability to install applications there freely and are not afraid to keep a database there. By the way, if you have Managed Kubernetes, there is no question of where to store the database: you have fault-tolerant storage, in one form or another (often as block devices), provided by the cloud that hosts your Managed Kubernetes. You can safely place disks from this storage into your cluster and use snapshots to take consistent backups. Just do not forget that a snapshot is not a backup - you also need to copy the snapshot somewhere. This is obvious, but obvious things are worth repeating so they are not forgotten.
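
A minimal sketch of taking such a snapshot through the Kubernetes snapshot API (the class and claim names are made up; the claim name follows the template-plus-pod naming of the StatefulSet sketch above). And, repeating the point: this object alone is not a backup until its contents are copied elsewhere.

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: pg-data-snapshot
    spec:
      volumeSnapshotClassName: csi-snapclass   # provided by the cloud's CSI driver
      source:
        persistentVolumeClaimName: data-pg-0   # the disk to snapshot, consistently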



It is very important, when you have a microservice platform, to make it traceable and observable, so you can understand where requests go and where they waste time, and so on. Building such a platform is a lot of work. You will need Prometheus - because it is a Cloud Native Computing Foundation project, designed specifically to monitor Kubernetes. It has a huge number of exporters and metrics collectors, and some applications natively expose metrics for it. If your application does not, attaching Prometheus to a long-lived application takes about 20 minutes. Yet for some reason nobody attaches it; in my experience this is because people keep their products in the clouds. There is Amazon CloudWatch, there is Google Stackdriver, and you can send metrics there in the same way - although it costs money. That is, if people are already paying for infrastructure, they also pay for the monitoring tools attached to it. Nevertheless, Prometheus can be very convenient if you have several different places you collect metrics from, if your cloud is not in one place, if it is hybrid, if you have on-premise machines and need a centralized monitoring infrastructure. Then Prometheus is your choice.
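
One common way to wire applications to Prometheus on Kubernetes is by convention, through pod annotations - a pattern many setups use, though it only works if the Prometheus scrape configuration is written to look for these annotations (the port and path below are illustrative):

    # Pod template fragment: advertise the metrics endpoint to Prometheus
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9102"
        prometheus.io/path: "/metrics"

Prometheus then discovers pods via its Kubernetes service discovery and scrapes the advertised endpoint.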



What else do you need? Clearly, where there is Prometheus, there is also a need for Alertmanager. And you also need some kind of distributed tracing of your requests. How do you do this in Kubernetes? Well, take a product like Jaeger or Zipkin, or whatever is on top right now; plus Cassandra to store your traces, plus Grafana to render them. As far as I understand, this feature appeared in Grafana recently, but that is not a reason not to adopt it. That is, you can manually assemble an environment in which applications will [49:14 - recording glitch] have, with this runtime, both counters and other metrics suitable for building and visualizing your distributed traces: where, and for how long, does the application spend its time?



It is less convenient to talk about this than to show it, but I have nothing to show right now; none of this is set up in my lab at the moment. One day I will probably get around to it.



I think I have told you everything I wanted to. Let me repeat the main points once more.



First: Kubernetes relieves you of the need to write your own mechanism for fail-safe replacement of one container with another out of Ansible and Nginx.



Second: Kubernetes is not going anywhere. It may "die", but it is no longer possible to stop using it - it has captured most of the market. To the question "when will Kubernetes die?" I want to reply with "when will WordPress die?" - and WordPress still has a long life ahead of it. So let's set the order: first WP, then Kubernetes.



So Kubernetes is here to stay. And it is not so much Kubernetes itself that is interesting as the services layered on top of it - operators and Custom Resource Definitions: the ability to write your own resource called, say, "PostgreSQL cluster", describe it in one long sheet of YAML, and throw it into the cluster.
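
A hedged sketch of that idea (this kind and its fields are hypothetical, invented for illustration - real PostgreSQL operators each define their own schema): one declarative object standing in for a whole cluster, with an operator behind it doing the work.

    apiVersion: example.com/v1
    kind: PostgreSQLCluster        # a custom resource defined through a CRD
    metadata:
      name: main-db
    spec:
      replicas: 3
      version: "12"
      storage: 100Gi
      backup:
        schedule: "0 3 * * *"      # the operator watching this object runs the backups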



What else will happen? There will be the ability to manage all of this with statically typed programming languages such as GoLang and TypeScript. And I really believe in Kotlin - a lot of cool DSLs have already been written in it, and more will be.



There will also be paid Helm charts and paid applications that can be launched on-premise - licensed, by subscription. There will be integration of various services; in fact, it already exists - for example, DataDog already integrates with Kubernetes. And the new players who appear on the monitoring and alerting market will integrate with Kubernetes too, for obvious reasons. No cloud product will pass Kubernetes by, one way or another. This is the platform everyone will target.



But none of this means that Kubernetes is good and nothing better is possible. I compare it with what came before - with Amazon's solutions: ECS, Elastic Beanstalk. Those who have dealt with them know that, in my earlier analogy, both of those would be not just a 737 MAX, but a 737 MAX made of electrical tape and plasticine. That is why the main players - Amazon, Microsoft Azure, Google - are all already on Kubernetes. Probably Yandex and Mail.ru are too, but I do not follow them. This is our common future. A bad, but "good enough" common future that everyone agrees on for now. What it will all mutate into next, you would have to ask Red Hat - they are smarter than me.



How does Java feel about Kubernetes?



Fine.



What OS are you using on your PC?



macOS, on both machines.



Are DevOps specialists actively recruited now?



Yes, they have always been actively hired for remote work, and they still are; I do not think the situation will fundamentally change. Frankly, I do not even consider non-remote work: not every good company has an office in St. Petersburg. Of course remote work will stay, and recent events have shown people that it is possible. The number of people who find it more comfortable will only grow. We are told that "many people tried it and went back to the office" - well, going back to the office costs money. No money, no choice - and many companies are cutting costs now.






What happened before



  1. Ilona Papava, Senior Software Engineer at Facebook - how to get an internship, receive an offer, and everything about working at the company
  2. Boris Yangel, ML engineer at Yandex - how not to join the ranks of dumb specialists if you are a Data Scientist
  3. Alexander Kaloshin, CEO of LastBackend - how to launch a startup, enter the Chinese market, and raise 15 million in investment.