Tuesday

20 / Jan / 2026

tech editorials

ui/ux

In my example I have a total of 50 items in the list, and here is the comparison of the total number of HTML elements in the DOM: without virtualization there are 197 nodes; with virtualization, 27. That is an 86% reduction in DOM nodes. This factor can change based on the elements each list item contains. Virtualization helps reduce the number of DOM nodes, which reduces the heap size occupied by the references to these elements, and it maintains the frame rate while scrolling through longer lists.

Let’s take a look at how this is achieved. List virtualization needs the data that is to be shown as a list, say n in length. Then we decide how many list items are to be shown at a time, say t. Based on this, we control the range of items currently shown to the user. As the user scrolls down, we increment the window so that it starts showing the next set of items. So if it starts with the index range [0, t], upon scrolling it changes to [1, t+1], and so on. To decide when this increment should happen, we use the scroll offset (scrollTop) of the container element.

Let’s go further into detail. Each item in this list is given a fixed height, and its position is set relative to the container based on its index and height. If the fixed height of each item is 10px, then the first element sits at 0, the second at 10px, the third at 20px, and so on. As the user scrolls, dividing scrollTop by the item height tells us how many items down the user has scrolled. This value is where our window starts, and adding t to it gives us the last element of the window. Together, this dynamic change of the window range and the position of each item within it gives the illusion that the user is scrolling through a static list of items.

To understand this whole workflow better, break it intentionally or introduce some bugs. First, break the part that updates the range of the list to control the visible elements. Then, break the part that sets the positions of the elements based on their index. You will then understand the significance of each logical component involved.
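The window arithmetic described above can be sketched in a few lines of plain JavaScript. The values of `itemHeight` and `visibleCount` here are assumptions for illustration, not taken from the original example:

```javascript
// Minimal sketch of the windowing math: fixed-height items, a visible
// range derived from scrollTop, and absolute positioning per index.
const itemHeight = 10;   // fixed height of each list item, in px (assumed)
const visibleCount = 5;  // t: how many items we render at a time (assumed)

// Given the container's scrollTop, compute the visible index range [start, end).
function getVisibleRange(scrollTop, totalItems) {
  const start = Math.floor(scrollTop / itemHeight);
  const end = Math.min(start + visibleCount, totalItems);
  return { start, end };
}

// Each rendered item is positioned relative to the container using its
// index, so the list looks static even though only a slice of it exists.
function positionFor(index) {
  return { position: "absolute", top: `${index * itemHeight}px` };
}

console.log(getVisibleRange(0, 50));  // at the top: items [0, 5)
console.log(getVisibleRange(25, 50)); // scrolled 25px: window starts at item 2
console.log(positionFor(3).top);      // "30px"
```

On scroll, a real component would recompute the range, render only those items, and apply the computed `top` to each one; the container keeps its full height (n × itemHeight) so the scrollbar behaves as if all items were present.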

read more

database

Using EXPLAIN and EXPLAIN ANALYZE is one way of profiling queries. Prepending these keywords to a query gives you a summary of the algorithms used to execute it. Why bother? Because as tables grow, some queries start taking a lot of time, and this latency gets added to the API response time, which is evident to the user on the UI.

Optimizations can happen at different levels. You can use a caching DB to avoid querying your master database, normalize your tables to split fields down to the atomic level, use client-side caching, avoid repeated calls to the DB in middlewares and controllers, or improve the queries themselves by not fetching columns that aren’t needed and avoiding subqueries. But right now I wish to discuss the impact of indexes on query optimization. Let’s say you have done all of the above and still want to reduce latency further. Creating an index can be helpful, but like everything, excess indexing is also harmful: in order to reduce time complexity you might increase space complexity. It depends on the individual case, the project, or the budget. But let’s draw the line at not going overboard with indexing, while definitely using it.

But first, let’s define the playground. The table has these columns, and I have populated it with around 7 million rows. Its size is 1.6 GB. Apart from its size, what separates it from a real-world scenario is its auto-increment id as the primary key, the fact that I’m saving the address in the same table as the events, and a few more things that escape me at the moment, but that is not important for now. It’s not a real-world table, but emulating a real-world table would not contradict these points, only amplify them.

Let’s start with a simple query that fetches all the events happening in the current month. It uses EXPLAIN ANALYZE, which runs the query and shows us what happens behind the scenes. It fetches all the columns and has a simple WHERE clause checking only one column, “date”.
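A query along those lines might look like the following. This is a sketch assuming PostgreSQL; the table and column names here are placeholders, not the author’s actual schema:

```sql
-- Hypothetical events table; "events" and "date" are assumed names.
EXPLAIN ANALYZE
SELECT *
FROM events
WHERE date >= date_trunc('month', CURRENT_DATE)
  AND date <  date_trunc('month', CURRENT_DATE) + INTERVAL '1 month';

-- Without an index on "date", the plan typically reports a sequential scan
-- over all ~7 million rows. An index can switch it to an index scan:
CREATE INDEX idx_events_date ON events (date);
```

Running the EXPLAIN ANALYZE again after creating the index lets you compare the plans and timings directly, which is the whole point of profiling before and after each change.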

read more

browser

Visual lag on screen can be caused by a couple of things. One of them is having too many objects or variables in use to keep track of. This is when the efficiency of garbage collection is tested. The browser’s target is to maintain 60 fps rendering for the ideal experience, and garbage collection helps achieve it.

Idle time is the window during which garbage collection takes place: the duration when the main thread has no other tasks, such as finishing an ongoing animation or executing microtasks. It’s usually a matter of milliseconds. The garbage collector alone does not make all these decisions. One of the entities involved is the task scheduler, a centralized component shared by the JavaScript and rendering engines. If it expects a task to finish in 20ms and the task finishes in 16ms, the remaining 4ms can be used for other tasks such as garbage collection.

Garbage collection is the procedure by which the JS engine removes data that is no longer used or will not be used. For a while, let’s not think about the execution of tasks in terms of the call stack, queue, and event loop. Consider a scenario where a lot of concurrent and serial actions are going on: animating a div, responding to a hover, responding to scroll, painting the next frame. Keeping a simple queue of tasks doesn’t make sense for the browser, because some tasks hold more importance than others. For example, painting the next frame might sit at the end of the queue, which can lead to visual stutters and hamper the experience. The priority of each task has to be considered as well. For this, there is the task scheduler: it assigns and updates the priority of each task. A task’s priority also depends on the current and expected state of rendering. If there are no more rendering tasks, the priority of idle tasks such as GC, prefetching, caching, etc. is increased and they are added to the main thread for execution; but if the idle time ends earlier than expected, priorities have to be changed or reverted so that these tasks can be replaced with more urgent ones.

Idle tasks are low-priority tasks picked up for execution during idle time. This idle time is predicted based on the state of all the components involved in running a web app: the JS engine, rendering engine, GPU engine, media engine, storage engine, security engine, etc. A generational garbage collector is one of the types used to execute the collection. This type of collector divides the heap memory into different parts and assigns variables or data according to their age. Orinoco, the collector used by V8, is one such example. By assigning, I mean each part holds references to the objects that belong to it. It mainly divides the memory into two spaces, the young generation and the old generation; the young generation is further divided into the nursery and the intermediate. Any object that is created is first assigned to the young...
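The generational policy sketched above can be modeled in a few lines. This is a toy model for intuition only, nothing like V8’s actual implementation; the idea it captures is that new objects start in the nursery and survivors are promoted one generation per minor collection:

```javascript
// Toy model of generational promotion: nursery -> intermediate -> old.
// Reachability is supplied by the caller; a real GC traces it from roots.
const heap = { nursery: new Set(), intermediate: new Set(), old: new Set() };

function allocate(obj) {
  heap.nursery.add(obj); // every new object starts in the nursery
}

// A minor GC scans only the young generation. Survivors are promoted one
// step; unreachable objects are simply dropped (reclaimed).
function minorGC(isReachable) {
  for (const obj of heap.intermediate) {
    heap.intermediate.delete(obj);
    if (isReachable(obj)) heap.old.add(obj); // survived twice -> old gen
  }
  for (const obj of heap.nursery) {
    heap.nursery.delete(obj);
    if (isReachable(obj)) heap.intermediate.add(obj);
  }
}

const live = new Set();
const a = { name: "a" }, b = { name: "b" };
allocate(a);
allocate(b);
live.add(a); // only `a` stays reachable

minorGC(obj => live.has(obj)); // a -> intermediate, b reclaimed
minorGC(obj => live.has(obj)); // a -> old generation
console.log(heap.old.has(a), heap.nursery.size); // true 0
```

The payoff of this split is that minor collections only touch the young generation, where most objects die, so they are cheap enough to fit inside the short idle windows the scheduler predicts.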

read more

browser

The phases executed in order to run JavaScript in the browser are as follows: scanning, pre-parsing, parsing, interpreting, baseline compiling, and optimized compiling. Just-in-time compilation is a part of these stages. For Chrome’s V8, Ignition is the interpreter; TurboFan and Crankshaft are optimizing compilers; Orinoco is the GC, and Oilpan is a GC library shared with the rendering engine.

Scanning creates the tokens. Parsing creates the abstract syntax tree. Interpreting creates the bytecode, and compilation creates the machine code. Having bytecode before compiling helps reduce the memory overhead; it’s more efficient than interpreting directly from the source code. Parsing and compiling are done on the critical path. Parsing every function in the code is not preferred, because the process takes memory and time, and a function that was parsed and compiled might never get invoked. This is why browsers started opting for lazy parsing: a pre-parser does the bare minimum syntax verification required for each function it encounters, and when one of these pre-parsed functions is invoked, that is when it is fully parsed and interpreted.

The scanner reads from a stream of Unicode characters decoded from UTF-16 code units and creates the tokens used by the parser. It gets these UTF-16 code units from the JS engine. Scanning identifier tokens is the most complicated part. These are the tokens used for naming. Each of these tokens has a valid starting character and valid remaining characters. To confirm the validity of a particular name, a lookup table is used in which each ASCII character is marked as a valid id_start and/or id_continue. Longer tokens take more time to scan, which is one of the reasons to keep your names shorter, like i and j. Scanning keywords is a bit different: there is a fixed set of keywords, so the scanner can either compare the first character and the length of the keyword to narrow down the search, or use perfect hashing, a unique hash for each keyword.
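The id_start/id_continue lookup table can be sketched as follows. This is an ASCII-only toy version of the idea, not V8’s scanner; real scanners also handle the full Unicode identifier ranges:

```javascript
// Pre-classify every ASCII character: can it start an identifier
// (id_start), and can it continue one (id_continue)?
const ID_START = new Array(128).fill(false);
const ID_CONTINUE = new Array(128).fill(false);
for (let c = 0; c < 128; c++) {
  const ch = String.fromCharCode(c);
  if (/[A-Za-z_$]/.test(ch)) ID_START[c] = true;
  if (/[A-Za-z0-9_$]/.test(ch)) ID_CONTINUE[c] = true;
}

// Scan one identifier token starting at `pos`; returns the token and the
// position just past it, or null if no identifier starts here.
function scanIdentifier(source, pos) {
  if (!ID_START[source.charCodeAt(pos)]) return null;
  let end = pos + 1;
  while (end < source.length && ID_CONTINUE[source.charCodeAt(end)]) end++;
  return { token: source.slice(pos, end), next: end };
}

console.log(scanIdentifier("let value2 = 1;", 4)); // { token: "value2", next: 10 }
console.log(scanIdentifier("9abc", 0));            // null: a digit can't start an id
```

The table turns per-character classification into a single array access, which is why it pays off on the hot path; the per-character loop is also why longer identifiers cost more to scan.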

read more

ui/ux

Let’s start with image optimizations. Next.js has its own Image component that manages the loading and rendering of the image on its own. Even if it is used in a server component, this Image component can be lazily loaded and rendered on the client side, and yet it does not cause any cumulative layout shift. The layout shift is prevented using the width and height you provide as props to this component. You can see that in this blog as well if you throttle the network settings. For src, if the image is hosted in a remote bucket, you should use remotePatterns in the Next.js config file to avoid the external-domain error; for relative paths, you can use localPatterns. This is not an optimization, just a code pattern. blurDataURL expects a base64 URL string of an image that is less than 10 by 10 pixels, and when combined with the blur placeholder, it shows a blurred placeholder image while the actual src image is being painted on the screen. If you want to use a custom component as a loader, you can do that too, and specify the loader key for images in the Next.js config file. Going into the Next.js config properties for images, there is a lot we can control, like the sizes of images for different viewports and the image formats allowed.

Then comes the concept of prefetching. It’s achieved through the Link component, an alternative to the HTML anchor tag. If your page has some in-app links/routes, you can choose to prefetch their respective JS beforehand. The component takes a boolean value for its prefetch prop. If true, as the page that uses it is painted, it fetches the compiled JS of the page that the link refers to, and you will see the difference in the load times. These calls were made while sitting on the landing page, without clicking any of those links. You can only check this on your deployed link; your local dev server won’t show you these calls even if you specify true as the value. Notice how it says _rsc, which stands for React Server Component: since all these pages are static pages, the whole RSC payload of that route is fetched. In essence, you are deciding whether you can afford the latency added by these calls during route navigation; if not, you can prefetch them.

But this is the case of a simple static page. When a page has server-rendered components making API calls plus some client components, relying on prefetching alone does not improve responsiveness. One reason is that for dynamic routes, the whole RSC payload is not prefetched. Another reason is that even if it were, provided you are using the loading file, the large number of component and library imports makes its JS chunk relatively big, and there you have to make use of code splitting.
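Put together, the props discussed above look roughly like this. The src, dimensions, route, and the base64 string are placeholders for illustration, not values from this blog:

```jsx
// Sketch of the Image and Link usage described above (placeholder values).
import Image from "next/image";
import Link from "next/link";

export default function Hero() {
  return (
    <>
      {/* width/height reserve the layout box up front, preventing CLS;
          placeholder="blur" shows blurDataURL until src is painted */}
      <Image
        src="/hero.png"
        width={800}
        height={400}
        alt="Hero image"
        placeholder="blur"
        blurDataURL="data:image/png;base64,..." // tiny base64 image
      />
      {/* prefetch={true} fetches the target route's payload while this
          page is idle (visible on a deployed build, not the dev server) */}
      <Link href="/blogs" prefetch={true}>Blogs</Link>
    </>
  );
}
```

The remotePatterns/localPatterns and loader settings mentioned earlier live in next.config, not on the component itself, so this snippet only covers the per-usage props.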

read more