Organizing the development of a large-scale React application

This post is part of a series on modernizing a jQuery frontend with React. To better understand why it was written, it is worth taking a look at the first article in the series. These days it is very easy to organize the development of a small React application, or to start one from scratch, especially when using create-react-app. Such a project will likely need only a few dependencies (for example, one to manage application state and one to internationalize the project) and a folder structure that contains at least the following:







src
  components
    ...

I believe this is the structure most React projects start with. Usually, however, as the number of project dependencies grows, developers face a growing number of components, reducers and other reusable pieces. Sometimes it all becomes very awkward and difficult to manage. What do you do, for example, if it is no longer clear why certain dependencies are needed and how they fit together? Or if the project has accumulated so many components that it becomes hard to find the right one? What if you need to find a component whose name you have forgotten?



These are just some of the questions we had to answer while reworking the frontend at Karify. We knew that the number of dependencies and components could one day get out of hand, which meant we had to plan things so that we could keep working on the project with confidence as it grew. This planning included agreeing on a file and folder structure and on code quality standards, as well as describing the overall architecture of the project. Most importantly, all of this had to be easy for new developers joining the project to pick up, so that they would not have to spend too long studying its dependencies and code style before becoming productive.



At the time of this writing, we have about 1200 JavaScript files in our project, 350 of which are components, and 80% of the code is covered by unit tests. Since we still adhere to the agreements we established and work within the architecture we created back then, we decided it would be worth sharing all of this with a wider audience. That is how this article came about. Here we will talk about how we organized the development of a large-scale React application and what lessons we learned along the way.



How do I organize files and folders?



We only found a convenient way to organize our React frontend code after going through several stages. Initially, we planned to host the project in the same repository as the jQuery-based frontend. However, the folder structure our backend framework imposes on that repository made this impractical. Next, we moved the frontend code to a separate repository. At first this worked well, but over time we started thinking about other clients, for example a frontend based on React Native, which got us thinking about a component library. As a result, we split the new repository into two: one for the component library and one for the new React frontend. Although at first this seemed like a good idea, it seriously complicated code review: the relationship between changes in the two repositories became unclear. In the end, we moved back to a single repository, but this time a monorepo.



We settled on a monorepo because we wanted to keep the separation between the component library and the frontend application. The difference between our monorepo and many others is that we do not need to publish the packages inside it; in our case, packages are purely a means of modularizing development and separating concerns. It is especially useful to have different packages for different variants of your application, as this allows you to define different dependencies and scripts for each of them.



We set up our monorepo using yarn workspaces, with the following configuration in the root package.json:



"workspaces": [
    "app/*",
    "lib/*",
    "tool/*"
]
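Each package inside these folders has its own package.json with its own dependencies and scripts. As a purely illustrative example (the package name and its contents are invented, not taken from our repository), a manifest inside lib could look like this:

{
  "name": "@karify/media",
  "version": "0.0.0",
  "private": true,
  "main": "src/index.ts",
  "scripts": {
    "lint": "eslint src",
    "test": "jest"
  },
  "dependencies": {
    "react": "^17.0.0"
  }
}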


Some of you may now be wondering why we didn't simply use a single packages folder, as many other monorepos do. Mainly, this is because we wanted to keep the application separated from the component library. In addition, we knew we would need to build some tools of our own. That is how we arrived at the folder structure above. Here is the role each of these folders plays in the project:



  • app: all packages in this folder relate to frontend applications, such as the Karify frontend and some other internal frontends. Our Storybook setup also lives here.
  • lib: packages in this folder form our component library: reusable code that is not tied to any particular application, for example packages such as typography and media primitives.
  • tool: packages in this folder are Node.js tools that support development rather than run in the browser. This is where, for example, our webpack configuration and our own "filesystem linter" live (more on the latter below).


All our packages, regardless of the folder they live in, have a src subfolder and, optionally, a bin folder. The src folders of the packages stored in app and lib may contain some of the following subdirectories:



  • actions: contains action creators whose return values can be passed to the dispatch functions of redux or useReducer (a small example follows this list).
  • components: contains folders of components with their code, translations, unit tests, snapshots and stories (where applicable to a specific component).
  • constants: stores values that do not change between environments, along with related utilities.
  • fetch: stores the type definitions for the data received from our API, as well as the corresponding asynchronous actions used to fetch that data.
  • helpers: contains utility functions shared within the package.
  • reducers: contains the reducers passed to redux or useReducer.
  • routes: contains the route definitions used with react-router and history.
  • selectors: contains selectors that read data from the redux state and process data received from the API.
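As a small, purely illustrative example of what lives in the actions folder (the action name and payload below are invented), an action creator and its use with useReducer might look like this:

// actions/notifications.ts (hypothetical example)
export type ShowNotificationAction = {
  type: 'notifications/show';
  payload: { message: string };
};

// The return value of this action creator can be handed to any dispatch
// function, whether it comes from a redux store or from useReducer.
export const showNotification = (message: string): ShowNotificationAction => ({
  type: 'notifications/show',
  payload: { message },
});

// Usage inside a component:
//   const [state, dispatch] = useReducer(notificationsReducer, initialState);
//   dispatch(showNotification('Saved successfully'));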


This folder structure allows us to write truly modular code, since it clearly divides responsibility between the concepts our dependencies introduce. It helps us search the repository for variables, functions and components regardless of whether the person searching already knows they exist. It also keeps the contents of individual folders small, which in turn makes them easier to work with.



Once we started applying this folder structure, we faced the challenge of applying it consistently. When working in different packages, a developer may be tempted to create different folders and organize the files in them in different ways. While that is not always a bad thing, such an ad hoc approach would lead to confusion. To help us apply the structure above systematically, we built what could be called a "filesystem linter", which we will describe below.



How do you ensure that the style guide is applied?



We strove for uniformity in the file and folder structure of the project, and we wanted the same for the code. By that time we already had experience solving a similar problem in the jQuery version of the project, but there was a lot to improve, especially when it came to CSS. So we decided to write a style guide from scratch and enforce it with a linter. Rules that could not be enforced by a linter were checked during code review.



Setting up a linter in a monorepo works the same way as in any other repository, which is convenient because the whole repository can be checked in a single linter run. If you are not familiar with linters, take a look at ESLint and Stylelint; these are exactly the tools we use.



Using a JavaScript linter has proven especially useful in the following situations:



  • Ensuring the use of components built with accessibility in mind, instead of their plain HTML counterparts. When creating the style guide, we introduced several rules regarding the accessibility of links, buttons, images and icons. We then needed to enforce these rules in the code and make sure we would not forget about them in the future. We did this with the react/forbid-elements rule from eslint-plugin-react.


Here's an example of what it looks like:



'react/forbid-elements': [
    'error',
    {
        forbid: [
            {
                element: 'img',
                message: 'Use "<Image>" instead. This is important for accessibility reasons.',
            },
        ],
    },
],
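For completeness, here is a minimal sketch of what such an accessibility-minded Image wrapper might look like; it is purely illustrative, and the required alt prop is our own assumption rather than the component's exact API:

import React from 'react';

type ImageProps = {
  src: string;
  // A required alt forces every caller to think about accessibility;
  // an explicit empty string is still allowed for decorative images.
  alt: string;
  className?: string;
};

// Using <Image> instead of a bare <img> lets the linter rule above steer
// everyone toward the accessible component.
export const Image = ({ src, alt, className }: ImageProps) => (
  <img src={src} alt={alt} className={className} />
);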






In addition to linting JavaScript and CSS, we also have our own "filesystem linter". It is this tool that enforces the uniform use of the folder structure we have chosen, and since we wrote it ourselves, we can always adapt it if we ever decide to change that structure. Here are examples of the rules it checks for files and folders (a sketch of what such a check might look like follows the list):



  • Checking the folder structure of components: ensuring that each component folder always contains an index.ts file and a .tsx file with the same name as the folder.
  • Validating package.json files: ensuring that there is exactly one per package and that its private property is set to true, to prevent accidental publication of the package.
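The tool itself is not shown here, but a rough sketch of how such checks could be implemented with Node.js built-ins might look like the following; all paths and messages are illustrative:

import { promises as fs } from 'fs';
import * as path from 'path';

// Rule 1: every folder under <package>/src/components must contain
// index.ts and a <FolderName>.tsx file.
async function checkComponentFolders(packageDir: string): Promise<string[]> {
  const errors: string[] = [];
  const componentsDir = path.join(packageDir, 'src', 'components');
  let entries: string[] = [];
  try {
    entries = await fs.readdir(componentsDir);
  } catch {
    return errors; // this package has no components folder, nothing to check
  }
  for (const name of entries) {
    const folder = path.join(componentsDir, name);
    if (!(await fs.stat(folder)).isDirectory()) continue;
    const files = await fs.readdir(folder);
    if (!files.includes('index.ts')) errors.push(`${folder}: missing index.ts`);
    if (!files.includes(`${name}.tsx`)) errors.push(`${folder}: missing ${name}.tsx`);
  }
  return errors;
}

// Rule 2: every package.json must declare "private": true so that the
// package can never be published by accident.
async function checkPackageJson(packageDir: string): Promise<string[]> {
  const manifestPath = path.join(packageDir, 'package.json');
  const manifest = JSON.parse(await fs.readFile(manifestPath, 'utf8'));
  return manifest.private === true
    ? []
    : [`${manifestPath}: "private" must be set to true`];
}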


Which type system should you choose?



These days the answer to the question in this section's title is probably obvious to many: just use TypeScript. In some cases, regardless of the size of the project, adopting TypeScript can slow development down, but we believe this is a reasonable price to pay for better and more rigorous code.



Unfortunately, when we started working on the project, prop-types were still the most widely used option. At the beginning this was enough for us, but as the project grew, we started to miss the ability to declare types for things other than components. We could see that this would help us improve, for example, our reducers and selectors. But introducing a different type system into the project would require a lot of refactoring to get the whole codebase typed.



In the end we did add type support to the project, but we made the mistake of trying Flow first. It seemed easier to integrate, and while that was true, we ran into problems with Flow all the time: it did not integrate well with our IDEs, it sometimes failed to detect bugs for no apparent reason, and writing generic types was a real nightmare. For these reasons we eventually migrated everything to TypeScript. If we had known then what we know now, we would have chosen TypeScript right away.



Thanks to the direction TypeScript has taken in recent years, the transition was fairly painless for us. The move from TSLint to ESLint was particularly helpful.
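As a small illustration of what this gives us compared to prop-types (types for reducers and selectors, not just for components), here is a sketch that uses an invented state shape rather than our real one:

// Hypothetical state slice, for illustration only.
type NotificationsState = {
  messages: string[];
};

type NotificationsAction = {
  type: 'notifications/show';
  payload: { message: string };
};

// The reducer signature documents exactly which state and actions it accepts.
const notificationsReducer = (
  state: NotificationsState = { messages: [] },
  action: NotificationsAction,
): NotificationsState => {
  switch (action.type) {
    case 'notifications/show':
      return { messages: [...state.messages, action.payload.message] };
    default:
      return state;
  }
};

// A selector typed against the same state shape.
const selectLatestMessage = (state: NotificationsState): string | undefined =>
  state.messages[state.messages.length - 1];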



How do I test the code?



When we started the project, it was not obvious which testing tools to pick. Today I would say that jest is the best choice for unit testing and cypress for integration testing. Both are well documented and easy to use. The only downsides are that cypress does not support the Fetch API and that its API is not designed around the async/await syntax; we only realized this after we had already adopted it. Hopefully the situation will improve in the near future.



At first it was hard for us to settle on the best way to write unit tests. Over time we tried snapshot testing, the test renderer, the shallow renderer, and Testing Library. We ended up using shallow rendering to test a component's output and the test renderer to test its internal logic.
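For example, a shallow-rendered snapshot test written with jest and react-test-renderer could look roughly like this; the component under test is hypothetical:

import React from 'react';
import ShallowRenderer from 'react-test-renderer/shallow';
import { Notification } from './Notification'; // hypothetical component

it('renders a notification with a message', () => {
  const renderer = new ShallowRenderer();
  // Shallow rendering keeps the snapshot to a single level of children,
  // so it stays small enough to review by hand.
  renderer.render(<Notification message="Saved successfully" />);
  expect(renderer.getRenderOutput()).toMatchSnapshot();
});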



We believe Testing Library is a good option for small projects, but the fact that it relies on DOM rendering has a noticeable impact on test performance. Moreover, we believe that the usual criticism of snapshot testing mostly applies to snapshots of deeply rendered trees and is much less relevant when shallow rendering is used. For us, snapshots proved very useful for checking all the possible outputs of a component. The snapshots themselves, however, should not become hard to read. This can be achieved by keeping components small and by defining a toJSON method for component inputs that are not relevant to the snapshot.



So that we do not forget to write unit tests, we enforce a test coverage threshold. With jest this is very easy: it is enough to set a global coverage threshold. We started at 60% and, as the coverage of our codebase grew, raised it to 80%. We are happy with that figure and do not think it is necessary to aim for 100% coverage; reaching that level does not seem realistic to us.
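In jest this is configured through the coverageThreshold option. A minimal jest.config.js along these lines could look as follows; only the 80% figure comes from our setup, the rest is illustrative:

// jest.config.js
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      // jest fails the test run if global line coverage drops below this value.
      lines: 80,
    },
  },
};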



How to simplify the creation of new projects?



Getting a React application started is usually very simple: ReactDOM.render(<App />, document.getElementById('root'));. But when you need to support SSR (server-side rendering), things get more complicated. Also, if your application depends on more than just React, the client and server code may need to be configured differently. For example, we use react-intl for internationalization, react-redux for global state management, react-router for routing and redux-saga for managing asynchronous actions. All of these need some setup, and configuring them can get complex.



Our solution to this problem is based on the "Strategy" and "Abstract Factory" design patterns. We used them to create two different classes (two different strategies): one for the client configuration and one for the server configuration. Both classes receive the parameters of the application being created: its name, logo, reducers, routes, default language, sagas (for redux-saga) and so on. The reducers, routes and sagas may come from different packages of our monorepo. This configuration is then used to create the redux store, the saga middleware and the router history object, as well as to load translations and render the application. Here, for example, are the signatures of the client and server strategies:



type BootstrapConfiguration = {
  logo: string,
  name: string,
  reducers: ReducersMapObject,
  routes: Route[],
  sagas: Saga[],
};

abstract class AbstractBootstrap {
  configuration: BootstrapConfiguration;
  intl: IntlShape;
  store: Store;
  rootSaga: Task;

  public abstract run(): void;
  public abstract render<T>(): T;
  protected abstract createIntl(): IntlShape;
  protected abstract createRootSaga(): Task;
  protected abstract createStore(): Store;
}

// Client-side strategy
class WebBootstrap extends AbstractBootstrap {
  constructor(config: BootstrapConfiguration);
  public render<ReactNode>(): ReactNode;
}

// Server-side strategy
class ServerBootstrap extends AbstractBootstrap {
  constructor(config: BootstrapConfiguration);
  public render<string>(): string;
}


We found this separation of strategies useful because setting up the store, the sagas, the internationalization object and the history object differs depending on the environment in which the code runs. For example, on the client the redux store is created from state preloaded by the server and is wired up to the redux-devtools-extension; none of that is needed on the server. Another example is the internationalization object, which on the client derives the current language from navigator.languages and on the server from the Accept-Language HTTP header.
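As a rough, purely illustrative sketch of the idea (not our actual implementation), an environment-specific createIntl based on react-intl could look like this:

import { createIntl, IntlShape } from 'react-intl';

// Client-side strategy: the current language comes from the browser.
function createClientIntl(messages: Record<string, string>): IntlShape {
  const [locale = 'en'] = navigator.languages;
  return createIntl({ locale, messages });
}

// Server-side strategy: the current language comes from the Accept-Language
// header, e.g. "nl-NL,nl;q=0.9,en;q=0.8".
function createServerIntl(acceptLanguage: string, messages: Record<string, string>): IntlShape {
  const [locale = 'en'] = acceptLanguage
    .split(',')
    .map((part) => part.split(';')[0].trim());
  return createIntl({ locale, messages });
}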



It is important to note that we arrived at this solution a long time ago, when classes were still widely used in React applications and there were no simple tools for server-side rendering. Since then React has moved towards a functional style and projects such as Next.js have appeared. So if you are looking for a solution to a similar problem, we recommend researching the current options; you may well find something simpler and more capable than what we use.



How to keep the quality of your code at a high level?



Linters, tests and type checking all have a beneficial effect on code quality, but a developer can easily forget to run them before merging code into the master branch. The best approach is to run such checks automatically. Some people prefer to run them on every commit using Git hooks, which prevents committing until the code passes all checks. We think this interferes too much with a developer's work: work on a branch may take several days, during which it would never be considered fit to push. Instead, we run the checks in our continuous integration system, and only for branches that have an open merge request. This saves us from running checks that are guaranteed to fail, since we usually open a merge request only when we are confident the work will pass them.



The automatic validation flow starts by installing dependencies, followed by type checking, linting, unit tests, building the application and running the cypress tests. Almost all of these tasks run in parallel. If any step fails, the whole pipeline fails and the branch cannot be merged into the main project code. Here is what a run of this pipeline looks like:





Automatic code verification pipeline

The main difficulty we encountered while setting up this system was making the checks fast, and this task is still with us. After a lot of optimization, the whole pipeline now completes reliably in about 20 minutes. We could probably improve on this by parallelizing some of the cypress tests, but for now it suits us.



Summary



Organizing the development of a large-scale React application is not an easy task. It requires many decisions and a lot of tooling to configure, and there is no single correct answer to how such applications should be built.



Our setup works well for us so far, and we hope that describing it helps other developers facing the same problems. If you decide to follow our example, first make sure that what we described here suits you and your company. And most importantly, strive for minimalism: do not overcomplicate your applications or the toolchain used to build them.



How would you approach the task of organizing the development of a large-scale React project?





