Concurrent Mode in React: adapting web applications to devices and network speed

In this article, I will introduce Concurrent Mode in React. Let's figure out what it is: what its features are, what new tools have appeared, and how to use them to optimize web applications so that everything flies for users. Concurrent Mode is a new feature in React whose job is to adapt the application to different devices and network speeds. For now, Concurrent Mode is an experiment that the library's developers may still change, which means none of the new tools are in the stable release yet. Consider yourself warned; now let's go.



Currently, there are two limitations on rendering components: processor power and network speed. Whenever something needs to be shown to the user, the current version of React tries to render each component from start to finish, even if that means the interface freezes for a few seconds. It is the same story with data transfer: React will wait for absolutely all the data a component needs instead of drawing it piece by piece.







Concurrent Mode solves these problems. With it, React can pause, prioritize, and even cancel operations that used to be blocking, so in Concurrent Mode you can start rendering components regardless of whether all the data has arrived or only part of it.



Concurrent Mode is built on the Fiber architecture



Concurrent Mode is not something the developers suddenly decided to add and had working right away; its release was prepared in advance. In version 16, the React engine was switched to the Fiber architecture, which in principle resembles a task scheduler in an operating system. The scheduler distributes computing resources between processes and can switch between them at any moment, so the user has the illusion that the processes are running in parallel.



The Fiber architecture does the same thing, but with components. Although it is already in React, Fiber is effectively in suspended animation and is not used to its full potential. Concurrent Mode turns it on at full power.



When you update a component in normal mode, a whole new frame has to be drawn on the screen, and until the update is complete the user will not see anything. In this case React works synchronously. Fiber uses a different approach: every 16 ms there is an interrupt and a check - has the virtual tree changed, has new data appeared? If so, the user sees it immediately.



Why 16 ms? The React developers aim to redraw the screen at close to 60 frames per second. To fit 60 updates into 1000 ms, you need to do one roughly every 16 ms; hence the figure. Concurrent Mode enables this behavior out of the box and adds new tools that make front-end life better. I'll cover each of them in detail.



Suspense



Suspense was introduced in React 16.6 as a mechanism for dynamically loading components. In Concurrent Mode this logic is preserved, but new possibilities appear: Suspense becomes a mechanism that works in conjunction with a data fetching library. We request a special resource through the library and read data from it.



In Concurrent Mode, Suspense lets components read data that is not ready yet. How? We request the data, and even before it has fully arrived we already start reading it in small pieces. The coolest part for developers is controlling the order in which the loaded data is displayed: Suspense allows page components to appear simultaneously or independently of each other. It also makes the code straightforward: you only need to look at the Suspense structure to see the order in which data is requested.



A typical solution for loading pages in the "old" React is Fetch-On-Render: we request data after render, inside useEffect or componentDidMount. This is the standard approach when there is no Redux or other data layer. For example, suppose we want to draw two components, each of which needs its own data:



  • Request data for component 1
  • Waiting…
  • Data arrives -> render component 1
  • Request data for component 2
  • Waiting…
  • Data arrives -> render component 2


With this approach, data for the next component is requested only after the previous one has rendered. It creates a waterfall of requests: slow and inconvenient.
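To make the waterfall visible in code, here is a minimal sketch of Fetch-On-Render with hooks; fetchUser and fetchPosts are hypothetical helpers that return promises.



function User() {
    const [user, setUser] = useState(null)
    useEffect(() => {
        // This request starts only after User has rendered for the first time.
        fetchUser().then(setUser)
    }, [])
    if (user === null) return <h1>Loading user...</h1>
    return (
        <>
            <h1>{user.name}</h1>
            {/* Posts mounts only now, so its request starts only after the user data arrived */}
            <Posts />
        </>
    )
}

function Posts() {
    const [posts, setPosts] = useState(null)
    useEffect(() => {
        fetchPosts().then(setPosts)
    }, [])
    if (posts === null) return <h1>Loading posts...</h1>
    return <ul>{posts.map(post => <li key={post.id}>{post.text}</li>)}</ul>
}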



Let's consider another way, Fetch-Then-Render: first we request all the data, then we draw the page (a code sketch follows the list below).



  • Request data for component 1
  • Request data for component 2
  • Waiting…
  • Data for component 1 arrives
  • Data for component 2 arrives
  • Render both components
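
A rough sketch of Fetch-Then-Render, again with the hypothetical fetchUser and fetchPosts helpers: the requests start together, but nothing is rendered until both have finished.



function fetchAllData() {
    // Both requests start immediately and run in parallel.
    return Promise.all([fetchUser(), fetchPosts()])
        .then(([user, posts]) => ({ user, posts }))
}

function Page() {
    const [data, setData] = useState(null)
    useEffect(() => {
        fetchAllData().then(setData)
    }, [])
    // Nothing is shown until all of the data has arrived.
    if (data === null) return <h1>Loading...</h1>
    return (
        <>
            <h1>{data.user.name}</h1>
            <ul>{data.posts.map(post => <li key={post.id}>{post.text}</li>)}</ul>
        </>
    )
}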


In this case, we move the request state somewhere upward and delegate it to the data fetching library. The approach works great, but there is a nuance: if one of the components takes much longer to load than the other, the user will not see anything, even though we could already show them something. Let's look at sample code from the demo with two components, User and Posts. We wrap the components in Suspense:



const resource = fetchData() // provided by the data fetching library, not by React itself
function Page({ resource }) {
    return (
        <Suspense fallback={<h1>Loading user...</h1>}>
            <User resource={resource} />
            <Suspense fallback={<h1>Loading posts...</h1>}>
                <Posts resource={resource} />
            </Suspense>
        </Suspense>
    )
}


It might seem that this approach is close to Fetch-On-Render, where we requested data after rendering the first component. In fact, with Suspense the data arrives much faster, because both requests are sent in parallel.



In Suspense we specify a fallback, the component to display while loading, and pass the resource implemented by the data fetching library into the component, using it as is. Inside the components we request data from the resource by calling its read method; under the hood it is backed by a promise that the library manages for us. Suspense understands whether the data has loaded, and if it has, shows it.



Note that components are trying to read data that is still in the process of being received:



function User({ resource }) {
    const user = resource.user.read()
    return <h1>{user.name}</h1>
}

function Posts({ resource }) {
    const posts = resource.posts.read()
    // Render the list of posts.
    return <ul>{posts.map(post => <li key={post.id}>{post.text}</li>)}</ul>
}


In Dan Abramov's current demos, something like this is used as a stub for the resource:



read() {
    if (status === 'pending') {
        // Still loading: throw the pending promise so Suspense can wait for it.
        throw suspender
    } else if (status === 'error') {
        // The request failed: throw the error for an Error Boundary to catch.
        throw result
    } else if (status === 'success') {
        // Data is ready: return it to the component.
        return result
    }
}




If the resource is still loading, read throws the Promise object as an exception. Suspense catches this exception, realizes it is a Promise, shows the fallback, and keeps waiting for the data. If something other than a Promise is thrown, it becomes clear that the request ended in an error. When the finished result is returned, Suspense displays it. For us the important thing is to get a resource and call a method on it; how it is implemented internally is up to the library developers, as long as Suspense understands their implementation.
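
For completeness, here is a minimal sketch of how such a resource could be created, modeled on the wrapPromise helper from the demos (fetchUser and fetchPosts are the same hypothetical helpers as before):



function wrapPromise(promise) {
    let status = 'pending'
    let result
    // The suspender is the promise Suspense will wait for while the data is loading.
    const suspender = promise.then(
        data => { status = 'success'; result = data },
        error => { status = 'error'; result = error }
    )
    return {
        read() {
            if (status === 'pending') {
                throw suspender
            } else if (status === 'error') {
                throw result
            } else {
                return result
            }
        }
    }
}

// A possible fetchData implementation: every field of the resource is a wrapped promise.
function fetchData() {
    return {
        user: wrapPromise(fetchUser()),
        posts: wrapPromise(fetchPosts()),
    }
}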



When should the data be requested? Asking for it at the top of the tree is not a good idea, because it may never be needed. A better option is to request it right away during navigation, inside event handlers: for example, get the initial state through a hook, and then request the resources as soon as the user clicks a button.



This is how it will look in code:



const initialResource = fetchData()

function App() {
    const [resource, setResource] = useState(initialResource)
    return (
        <>
            <Button text='' onClick={() => {
                setResource(fetchData())
            }} />
            <Page resource={resource} />
        </>
    );
}


Suspense is incredibly flexible. It can be used to display components one after another:



return (
    <Suspense fallback={<h1>Loading user...</h1>}>
        <User />
        <Suspense fallback={<h1>Loading posts...</h1>}>
            <Posts />
        </Suspense>
    </Suspense>
)


Or simultaneously; then both components are wrapped in a single Suspense:



return (
    <Suspense fallback={<h1>Loading user and posts...</h1>}>
        <User />
        <Posts />
    </Suspense>
)


Or we can load the components independently of each other by wrapping them in separate Suspense blocks. The resource is still loaded through the library. Very cool and convenient:



return (
    <>
        <Suspense fallback={<h1>Loading user...</h1>}>
            <User />
        </Suspense>
        <Suspense fallback={<h1>Loading posts...</h1>}>
            <Posts />
        </Suspense>
    </>
)


Additionally, Error Boundary components catch errors inside Suspense. If something goes wrong, we can show that the user data has loaded, that the posts have not, and display an error for that part:



return (
    <Suspense fallback={<h1>Loading user...</h1>}>
        <User resource={resource} />
        <ErrorBoundary fallback={<h2>Could not fetch posts</h2>}>
            <Suspense fallback={<h1>Loading posts...</h1>}>
                <Posts resource={resource} />
            </Suspense>
        </ErrorBoundary>
    </Suspense>
)
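
ErrorBoundary here is not a built-in React component. A minimal sketch of one, taking the fallback through props, might look like this:



class ErrorBoundary extends React.Component {
    state = { error: null }

    static getDerivedStateFromError(error) {
        // Remember the error thrown by a child during rendering.
        return { error }
    }

    render() {
        if (this.state.error) {
            return this.props.fallback
        }
        return this.props.children
    }
}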


Now let's look at the other tools that unlock the full benefits of Concurrent Mode.



SuspenseList



In Concurrent Mode, SuspenseList helps control the order in which Suspense blocks are revealed. Without it, if we needed several Suspense blocks to load strictly one after another, they would have to be nested inside each other:



return (
    <Suspense fallback={<h1>Loading user...</h1>}>
        <User />
        <Suspense fallback={<h1>Loading posts...</h1>}>
            <Posts />
            <Suspense fallback={<h1>Loading facts...</h1>}>
                <Facts />
            </Suspense>
        </Suspense>
    </Suspense>
)


SuspenseList makes this much easier:



return (
    <SuspenseList revealOrder="forwards" tail="collapsed">
        <Suspense fallback={<h1>Loading posts...</h1>}>
            <Posts />
        </Suspense>
        <Suspense fallback={<h1>Loading facts...</h1>}>
            <Facts />
        </Suspense>
    </SuspenseList>
)


Here revealOrder="forwards" reveals the children in order, and tail="collapsed" shows only the next fallback instead of all of them at once. The flexibility of SuspenseList is amazing: you can nest SuspenseLists inside each other however you like and configure the loading order in whatever way suits your widgets and other components.
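
For example, a page could reveal its sections one after another while the widgets inside the first section appear together. A sketch under those assumptions (the widget components are hypothetical):



return (
    <SuspenseList revealOrder="forwards">
        <SuspenseList revealOrder="together">
            <Suspense fallback={<h1>Loading user widget...</h1>}>
                <UserWidget />
            </Suspense>
            <Suspense fallback={<h1>Loading stats widget...</h1>}>
                <StatsWidget />
            </Suspense>
        </SuspenseList>
        <Suspense fallback={<h1>Loading news feed...</h1>}>
            <NewsFeed />
        </Suspense>
    </SuspenseList>
)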



useTransition



A special hook that postpones a component update until it is fully ready and removes the intermediate loading state. What is it for? When state changes, React tries to make the transition as fast as possible, but sometimes it is better not to rush. If part of the data loads in response to a user action, we usually show a loader or a skeleton during the load. But if the data arrives very quickly, the loader will not even manage half a turn: it blinks, disappears, and then we draw the updated component. In such cases it is wiser not to show the loader at all.



This is where useTransition comes in. How does it work in code? We call the useTransition hook and pass it a timeout in milliseconds. If the data does not arrive within that time, we show the loader anyway; if it arrives faster, the transition is instant.



function App() {
    const [resource, setResource] = useState(initialResource)
    const [startTransition, isPending] = useTransition({ timeoutMs: 2000 })
    return <>
        <Button text='' disabled={isPending} onClick={() => {
            startTransition(() => {
                setResource(fetchData())
            })
        }} />
        <Page resource={resource} />
    </>
}


Sometimes we do not want to show a loader during a page transition, but we still need to change something in the interface, for example, disable the button for the duration of the transition. This is where the isPending flag comes in handy: it tells us that we are in the transition stage. For the user the update feels instant, but it is important to note that the useTransition magic only affects components wrapped in Suspense; on its own, without Suspense, useTransition will not work.



Transitions are common in interfaces, so it would be great to bake the transition logic into the button and integrate it into the component library. If there is a component responsible for transitions between pages, you can wrap the onClick passed through props in a handleClick and use isPending for the disabled state.



function Button({ text, onClick }) {
    const [startTransition, isPending] = useTransition({ timeoutMs: 2000 })

    function handleClick() {
        startTransition(() => {
            onClick()
        })
    }

    return <button onClick={handleClick} disabled={isPending}>{text}</button>
}


useDeferredValue



So, we have a component that performs transitions. Sometimes the following situation arises: the user wants to go to another page, we have received part of the data and are ready to show it, and the pages differ only slightly from each other. In this case it would be logical to show the user stale data until everything else has loaded.



The current React cannot do this: only data from the current state can appear on the user's screen. But useDeferredValue in Concurrent Mode can return a deferred version of a value and show outdated data instead of a blinking loader or fallback while the new data loads. The hook takes the value we want a deferred version of and a timeout in milliseconds.



The interface becomes super fluid. Updates can be made with a minimal amount of data while everything else loads gradually, so the user gets the impression that the application is fast and smooth. In action, useDeferredValue looks like this:



function Page({ resource }) {
    const deferredResource = useDeferredValue(resource, { timeoutMs: 1000 })
    const isDeferred = resource !== deferredResource;
    return (
        <Suspense fallback={<h1>Loading user...</h1>}>
            <User resource={resource} />
            <Suspense fallback={<h1>Loading posts...</h1>}>
                <Posts resource={deferredResource} isDeferred={isDeferred}/>
            </Suspense>
        </Suspense>
    )
}


You can compare the value from props with the one obtained through useDeferredValue: if they differ, the page is still loading.



Interestingly, useDeferredValue lets you repeat this lazy-loading trick not only for data transferred over the network, but also to remove interface freezes caused by heavy computations.
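
A minimal sketch of that idea, assuming MySlowList is a hypothetical component that is expensive to render: the input stays responsive while the heavy list catches up with the deferred value.



function App() {
    const [text, setText] = useState('hello')
    // Typing updates `text` immediately; the deferred copy is allowed to lag behind.
    const deferredText = useDeferredValue(text, { timeoutMs: 3000 })
    return (
        <>
            <input value={text} onChange={e => setText(e.target.value)} />
            <MySlowList text={deferredText} />
        </>
    )
}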



Why is this great? Different devices behave differently. If you run an application that uses useDeferredValue on a new iPhone, transitions between pages will be instant, even if the pages are heavy. With a debounce, the delay would appear even on a powerful device. useDeferredValue and Concurrent Mode adapt to the hardware: if the device is slow, typing still flies, and the page itself updates as fast as the device allows.



How do I switch a project to Concurrent Mode?



Concurrent Mode is a mode, so it has to be enabled, like a toggle switch that makes Fiber work at full capacity. Where do you start?



Remove the legacy. Get rid of all deprecated methods in your code and make sure they are not used in your libraries. If the application works fine in React.StrictMode, everything is good and the move will be easy. The potential complication is problems inside libraries: in that case you either upgrade to a new version, change the library, or give up on Concurrent Mode. Once the legacy is gone, all that remains is to switch the root.



With the arrival of Concurrent Mode, three ways of mounting the root will be available:



  • Legacy mode

    ReactDOM.render(<App />, rootNode)

    This mode will be deprecated after Concurrent Mode is released.
  • Blocking mode

    ReactDOM.createBlockingRoot(rootNode).render(<App />)

    Blocking mode is added as an intermediate step: it gives access to some of Concurrent Mode's features on projects that still have legacy code or other migration difficulties.
  • Concurrent Mode

    ReactDOM.createRoot(rootNode).render(<App />)

    If everything is fine, there is no legacy, and the project can be switched right away, replace render with createRoot - and off to a bright future.


Conclusions



Blocking operations inside React become asynchronous thanks to the switch to Fiber, and new tools are emerging that make it easy to adapt an application both to the device's capabilities and to network speed:



  • Suspense, which lets you specify the order in which data is loaded and displayed.
  • SuspenseList, which makes controlling that order even more convenient.
  • useTransition, to create smooth transitions between Suspense-wrapped components.
  • useDeferredValue, to show stale data while data is loading and components are updating.


Try experimenting with Concurrent Mode while it is still experimental. It lets you achieve impressive results: fast and smooth loading of components in any order you like, and a super-fluid interface. The details are described in the documentation, and there are demos with examples worth exploring on your own. And if you're curious about how the Fiber architecture works, here's a link to an interesting talk.



Evaluate your projects: what could be improved with the new tools? And when Concurrent Mode is released, feel free to migrate. Everything will be great!


