Load large datasets without any issues.
I started testing your product. When I try to load a local CSV file with 52 columns (118,125 records total), the pivot grid is not able to load it and the application crashes.
Is there any limit on the number of records?
Could you please suggest an easy way to load this data without issues? (We cannot use MS Analysis Services, as we have some technology limitations.)
I have attached the .csv file (as a Google Drive link) to this email.
Could you please provide a solution for this?
System: 64-bit Windows 7
Processor: 2.7 GHz, 8 GB RAM
Thanks & Regards,
Thank you for reporting the issue. We’ve tested the component with your data, and I can confirm that the problem exists.
We need more time to research this issue. I hope we will have the bugfix in minor release v2.202 (ETA Dec 23).
Thank you for your response.
I know the dataset I am trying to load into the grid is huge, but is there a workaround we could use in the meantime?
For example, could we chunk the data into multiple files and then load them into the grid? Is there such an option?
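One way to prepare such chunks, independent of the component itself, is a small script that splits the CSV row-wise and repeats the header in each part. This is only a sketch; the `split_csv` helper, file names, and chunk size are illustrative, not part of the product:

```python
import csv
import itertools
import os

def split_csv(path, rows_per_chunk, out_dir):
    """Split a large CSV into numbered chunk files, repeating the header in each."""
    os.makedirs(out_dir, exist_ok=True)
    chunk_paths = []
    with open(path, newline="") as src:
        reader = csv.reader(src)
        header = next(reader)
        for index in itertools.count(1):
            # Stream the source file; never hold more than one chunk in memory.
            rows = list(itertools.islice(reader, rows_per_chunk))
            if not rows:
                break
            out_path = os.path.join(out_dir, f"part{index}.csv")
            with open(out_path, "w", newline="") as out:
                writer = csv.writer(out)
                writer.writerow(header)
                writer.writerows(rows)
            chunk_paths.append(out_path)
    return chunk_paths
```

For 118,125 records, a chunk size of 25,000–50,000 rows would yield a handful of files of a size the grid is more likely to handle.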
Please download the beta version of the upcoming 2.2 release here: https://s3.amazonaws.com/flexmonster/ThomsonReuters/FLEXMONSTER-2015-THOMSON-REUTERS-NOV16.ZIP
I have to warn you that this build is unstable.
Thank you so much for your help; we really appreciate your support.
I will test with our data and get back to you.
I am able to load the dataset now. As per our requirements, I created a schema with specific rows and defined measures.
However, it still crashes and frequently becomes unresponsive (when dragging rows to columns and vice versa).
I have attached the config file I used to this email; the CSV file is the same one (data.csv), which you can download from the link below:
Could you please take a look and tell us whether there is anything we can do to improve performance without crashes?
Thanks & Regards,
I am sorry, I missed the config file link:
I’ve tested with your config file, and I’m a little bit confused. Did you know that you are using 17 dimensions in the same report? The resulting number of rows for such a report is something like 6.5e+31. 🙂
My suggestion is to use no more than five dimensions in one report.
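To see why the row count explodes, note that the potential size of a report grows with the product of the distinct values in each dimension, so adding dimensions multiplies, rather than adds, combinations. A rough sanity check with made-up cardinalities (the numbers below are illustrative, not taken from data.csv):

```python
import math

# Hypothetical distinct-value counts per dimension.
cardinalities_5 = [10, 20, 30, 40, 50]   # a 5-dimension report
cardinalities_17 = [75] * 17             # a 17-dimension report, ~75 values each

# The cross product of all dimension values bounds the report size.
combos_5 = math.prod(cardinalities_5)
combos_17 = math.prod(cardinalities_17)

print(f"5 dimensions:  {combos_5:.1e} combinations")   # 1.2e+07
print(f"17 dimensions: {combos_17:.1e} combinations")  # 7.5e+31
```

Even modest per-dimension cardinalities put a 17-dimension report in the 1e+31 range, which is why trimming the report to a handful of dimensions matters far more than shrinking the source file.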
Thanks for your suggestion. I will test and get back to you if I face any issues.
Please clarify the questions below:
2. Can SqlToCsvConvertor connect only to SQL Server, or to other database servers as well (e.g., via ODBC)?
2. SqlToCsvConvertor is just a sample; you are free to write your own script to retrieve data from a database. Yes, it can be SQL Server, Oracle, MySQL, PostgreSQL, etc.
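Such an "own script" can be very small: run a query, then write the result set (header first) to a CSV file. The sketch below uses Python's built-in sqlite3 purely as a stand-in driver; for a real server you would swap in pyodbc, psycopg2, or similar, and the `sales` table and its columns are invented for the example:

```python
import csv
import sqlite3

def query_to_csv(connection, sql, out_path):
    """Run a SQL query and dump the result set, with a header row, to a CSV file."""
    cursor = connection.execute(sql)
    # cursor.description yields (name, ...) tuples for each result column.
    header = [col[0] for col in cursor.description]
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(cursor)

# Demo with an in-memory SQLite database and a hypothetical schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.5), ("APAC", 98.0)])
query_to_csv(conn, "SELECT region, amount FROM sales", "sales.csv")
```

Because the cursor is iterated row by row, the same pattern works for exports much larger than memory.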
Config.xml is where we define the layout structure, the measures, the data source type, the filename of the source data, etc.
It may differ from user to user. Do we have to create it manually, or are there functions available for auto-generation?
Can we pass the configuration directly as a string (XML string) without creating a file stream?
We’ve moved your latest request to the separate thread.
Please check the answer here: Can we directly pass configuration as string without creating a file stream?
As you know, I am using the beta version of the upcoming 2.2 release, and it is performing really well with large datasets.
As part of our testing, however, IE (Internet Explorer) is not able to load the data and the application crashes, while Chrome handles it much better.
It most likely depends on the browser's memory.
1. Can we expect this bug to be resolved for IE in the 2.2 release?
If you want, you can download the file from the link below:
Yes, sure, it will work with IE.
Could you please tell us which version of IE you are using?
Thanks for the confirmation. I am using IE 11.