
Flexmonster 3.0: Performance

This release is the result of a long process of questioning our own assumptions, facing technical limitations head-on, and rewriting the core of Flexmonster from the ground up.

This blog post is the first in a series of articles in which we’ll tell you more about each of this release’s features. We’d like to share why we decided to rebuild everything, what changed under the hood, and what you can expect from the new version. 

The focus: performance at scale

Our users often encountered limitations when working with large amounts of data. Previous versions of Flexmonster handled data sources up to 1 GB in size, but that was often not enough. We received requests for larger file support, faster loading times, and smoother interface interaction.

Other products on the shelf didn't satisfy these requirements either:

  • Microsoft Excel: a single worksheet is capped at 1,048,576 rows and 16,384 columns, and large workbooks become sluggish well before those limits.

  • Google Sheets: another web tool for data analysis, it has a limit of 10 million cells and 18,278 columns, and allows you to import files up to 100 MB.

Besides being unable to process the desired amount of data, both options perform poorly near the edge of their limits: performance degrades significantly with large data volumes, especially when using complex formulas or importing external data.

The central goal of Flexmonster 3.0 is to make performance seamless, even with millions of records: to handle more data and deliver a better workflow at the same time. It sounds like a lot to ask, but we decided to make it real.

We wanted to make sure the component behaves as well with a 10 GB dataset as it does with a 10 MB one. That meant a lot of work both in the browser and on the server side: redesigning how data is processed, how memory is used, how responses are streamed, and how the UI stays reactive no matter the workload.

Why did we start over?

Over the years, Flexmonster has grown into a powerful tool for analytical reporting. But as our clients' datasets grew larger and use cases became more complex, we began to hit limits. Not the kind of limits you fix with a patch or a minor optimization, but deeper architectural constraints.

So we asked ourselves a simple question:

 “What if we stopped optimizing the old engine and started building the right one for today’s needs?”

That question turned into a years-long effort. We revisited every assumption, every bottleneck, every moment where someone had to wait for a report to load or for the interface to respond.

What has changed in Flexmonster 3.0?

At the core of the upcoming release is a reimagined engine that optimizes how data is stored, queried, and transferred. But it wasn't just an optimization. It was a full rewrite of the architecture's key parts, with new approaches and a redesign of Flexmonster's overall logic. Combined with improvements to the server side of the component, this unlocks a new level of speed and stability, even with datasets previously thought to be too large for such tools.

Here’s what we ended up rethinking and rebuilding👇:

  • The data processing core: restructured for large-scale performance.
  • The server side of the component: optimized memory usage and parallelized loading.
  • Front-end rendering engine: improved responsiveness, even with large flat tables.
  • Pivot logic: made scalable and asynchronous, without UI lockups.
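To illustrate the last point, here is a minimal sketch (not Flexmonster's actual code; the names `aggregateInChunks` and `Row` are illustrative) of how a pivot-style aggregation can be split into chunks that yield to the event loop between batches, so a long computation never freezes the UI:

```typescript
// Hypothetical sketch: chunked, asynchronous aggregation that yields to the
// event loop between batches, so a long pivot computation never blocks the UI.
type Row = { category: string; value: number };

async function aggregateInChunks(
  rows: Row[],
  chunkSize = 10_000
): Promise<Map<string, number>> {
  const totals = new Map<string, number>();
  for (let start = 0; start < rows.length; start += chunkSize) {
    const end = Math.min(start + chunkSize, rows.length);
    for (let i = start; i < end; i++) {
      const row = rows[i];
      totals.set(row.category, (totals.get(row.category) ?? 0) + row.value);
    }
    // Yield control so rendering and input handling can run between chunks.
    await new Promise<void>((resolve) => setTimeout(resolve, 0));
  }
  return totals;
}

// Usage: sum 100,000 synthetic rows without a single long blocking pass.
const rows: Row[] = Array.from({ length: 100_000 }, (_, i) => ({
  category: i % 2 === 0 ? "even" : "odd",
  value: 1,
}));

aggregateInChunks(rows).then((totals) => {
  console.log(totals.get("even"), totals.get("odd")); // 50000 50000
});
```

The `setTimeout(0)` yield is the simplest way to hand control back to the browser; a production engine would more likely move this work to a Web Worker entirely.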

In short, Flexmonster 3.0 can handle more data in less time with less pain.

So… did it work?😏

It sure did! Here’s what we’re seeing in real-world testing:

Users don't have to wait in any scenario, whether they load a 150 MB or a 1.6 GB file. Even a 31 GB dataset doesn't require a powerful machine with lots of RAM.

These files make Excel or Google Sheets struggle even with simple scrolling. Flexmonster 3.0 can load them, process them, and let users work with them interactively — without waiting, spinners, or crashes.

See it for yourself!

We made a video to show how the new engine handles real test cases. It includes side-by-side comparisons with Excel and shows how we worked with datasets from 150 MB up to 31 GB.

We hope it gives you a clear picture of what’s possible now and what problems we’re aiming to solve with the upcoming release.

How did we achieve such performance?

Over the years of working in data analytics and developing components for data visualization, we have explored almost all the alternatives on the market. Most of them use lazy loading to achieve better and smoother rendering: a strategy that defers loading non-critical resources until they're actually needed.

We went our own way. While rebuilding Flexmonster, we implemented what our team calls "active loading": an approach in which we predict the data the user wants to explore and prioritize rendering it first. Using a set of heuristics, the component can predict which blocks will be required soon. As a result, the user experience stays smooth, and users can concentrate on their actual tasks.
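A minimal sketch of the idea (the block model and the `predictNextBlocks` name are hypothetical, not Flexmonster's real internals): given the block currently in view and the scroll direction, pick the blocks most likely to be needed next and fetch them ahead of everything else:

```typescript
// Hypothetical "active loading" heuristic: predict which data blocks the user
// will scroll into next, so they can be fetched and rendered ahead of time.
function predictNextBlocks(
  visibleBlock: number,
  direction: "up" | "down",
  lookahead: number,
  totalBlocks: number
): number[] {
  const step = direction === "down" ? 1 : -1;
  const predicted: number[] = [];
  for (let i = 1; i <= lookahead; i++) {
    const block = visibleBlock + step * i;
    // Clamp predictions to blocks that actually exist.
    if (block >= 0 && block < totalBlocks) predicted.push(block);
  }
  return predicted;
}

// Scrolling down from block 10 with a lookahead of 3:
console.log(predictNextBlocks(10, "down", 3, 100)); // [ 11, 12, 13 ]
// Scrolling up near the top: predictions stop at block 0.
console.log(predictNextBlocks(1, "up", 3, 100)); // [ 0 ]
```

A real implementation would combine more signals (scroll velocity, recently expanded rows, filter state) and feed the predictions into a prioritized fetch queue, but the direction-based lookahead above captures the core of the technique.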

And that's only a part of the new approach. We gathered the best practices and modern technologies to optimize every bit of the process inside Flexmonster. Previously, we released a prototype of these concepts — DataTableDev, a web grid that can handle millions of rows in a split second. There's a whole article on its website explaining the technology and approaches used.

🚩What’s next?

Flexmonster 3.0's performance is the first step in a series of improvements we've been planning. In the next few articles, we'll break down the other features we brought to life with the new version of the component. As they go live on our demo, we'll follow up with an article covering everything you need to know.

If you're already using Flexmonster, we’d love to hear how these changes affect your work!
If you're just discovering us, welcome! There’s a lot more coming.

Subscribe to our newsletter: