Does this connector have support for formulas?
Thank you for posting your question.
We would like to confirm that calculated values are supported in Flexmonster Pivot Table for MongoDB.
However, please note that the `individual` parameter of the `measure` object is currently not supported for this data source, as it requires access to raw, unaggregated data. With MongoDB this is not possible, since all the aggregations are calculated on the server side before the data is transferred to the pivot table.
Feel free to check out the following documentation page for detailed instructions on how to use calculated values in Flexmonster: https://www.flexmonster.com/doc/calculated-values
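For illustration, a calculated value is defined in the `measures` array of the report slice. Here is a minimal sketch – the field names ("Category", "Price", "Quantity") are hypothetical examples, not taken from your data:

```javascript
// Sketch of a report slice with a calculated measure.
// Field names ("Category", "Price", "Quantity") are hypothetical examples.
const slice = {
  rows: [{ uniqueName: "Category" }],
  measures: [
    {
      uniqueName: "Revenue",                     // name of the calculated value
      formula: "sum('Price') * sum('Quantity')", // formula over existing fields
      caption: "Revenue"
      // Note: "individual: true" cannot be used with the MongoDB connector.
    }
  ]
};
```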
Please let us know if this helps.
That’s great, thanks.
Do you know at this time whether the Elasticsearch connector will be updated to support formulas?
We do have plans to move the Elasticsearch connection to the custom data source API model, which will make it possible to use calculated values with Elasticsearch.
At the moment, we are not able to provide you with a precise ETA for this, but we will make sure to let you know once there are any updates on this matter.
Please let us know if you have any other questions we can help you with at the moment.
I’m having trouble with the MongoDB sample project on a Mac.
Running `npm run build` fails on this step:
`rmdir /S /Q "build/" && tsc`
Thank you for reporting this issue.
We’ve updated the sample project on GitHub and tested it – the error is now resolved and everything is working fine.
Could you please clone the updated version of the project and let us know if the issue persists on your side?
Thank you in advance and looking forward to hearing from you.
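For reference, the `/S /Q` flags of `rmdir` are Windows-specific, which is why the script fails on macOS. One cross-platform way to write such a clean-and-build step is with a tool like `rimraf` – this is a sketch only, and the actual fix in the updated project may differ:

```json
{
  "scripts": {
    "build": "rimraf build && tsc"
  },
  "devDependencies": {
    "rimraf": "^3.0.2",
    "typescript": "^4.0.0"
  }
}
```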
Sure, will give it a try.
Another question – how can we get Flexmonster to apply specific headers to the request? We’re planning to use your demo project as a starting point for a proxy, but we want to pass in a JWT to authenticate the requests.
This can be achieved using the `requestHeaders` property of the `dataSource` object. It is an object containing key-value pairs, where each key is a header name and each value is the corresponding header value.
For more info on the data source object and all its properties, please check out the following page: https://www.flexmonster.com/api/data-source-object
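To illustrate, here is a sketch of a `dataSource` configuration that attaches a JWT to each request – the proxy URL, collection name, and token below are hypothetical placeholders:

```javascript
// Hypothetical token – in practice, obtain it from your authentication flow.
const jwtToken = "eyJhbGciOiJIUzI1NiJ9.example.signature";

// Data source configuration with custom request headers (sketch).
const dataSource = {
  type: "api",                                 // custom data source API
  url: "https://your-proxy.example.com/mongo", // hypothetical proxy endpoint
  index: "your-collection",                    // hypothetical collection name
  requestHeaders: {
    "Authorization": "Bearer " + jwtToken      // JWT passed on every request
  }
};
```

The proxy can then validate the `Authorization` header before forwarding the query to MongoDB.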
Please let us know if this helps.
Thanks, that helps a lot.
We’re close to a deployed solution now.
A couple of things:
1. We’ve put together a sample Dockerfile for the proxy – would it be helpful if we shared it with you?
2. When I tested formulas, they just disappeared when I saved them. Could this be a bug?
Thank you for your offer to provide us with a sample Dockerfile – it could be put to good use. Could you please send it to us via email so that we can inspect it and release it after that?
Regarding your second point, we did not manage to reproduce the described behaviour. On our side, the calculated measure was saved successfully and available at the bottom of the “All fields” window in the Field List.
Perhaps, you could share your report and data sample with us so that we can try to reproduce the issue with your configurations? It would also be helpful for our investigation if you could describe the exact steps you’ve followed before running into the issue.
Looking forward to hearing from you.
I can’t reproduce it either now!
Another question – when I set the BSON type in our documents to BSON Decimal128, it appears as a binary array in the fields editor.
Is that expected behaviour?
Thank you for reporting this behaviour.
This is a known issue which we encountered earlier, and we are going to fix it in the upcoming 2.8.2 release (ETA: March 10th).
Please let us know if there is anything else we can help you with in the meantime.
Great, look forward to the new release, thanks!
We are glad to announce that the issue with the Decimal128 type has been fixed. The fix is included in version 2.8.2 of Flexmonster: https://www.flexmonster.com/release-notes/
You are welcome to update the component. Here is our guide on updating to the latest version for assistance: https://www.flexmonster.com/doc/updating-to-the-latest-version/
Please let us know if everything works fine for you.
We’re now using the MongoDB connector and the performance has started to degrade now that the collection has over a million rows.
From your own use, do you know whether indexing would help? The performance advisor on MongoDB Atlas doesn’t suggest anything so I wonder whether indexing helps when using aggregations rather than standard queries.
Thank you for your question.
The issue with MongoDB connector performance is a known problem – the whole collection needs to be scanned when executing a query, and if the collection is massive, the query naturally takes more time to execute.
Nevertheless, we are planning to improve the performance with the help of caching and further query optimization – we will make sure to let you know once there are any updates on this.
To answer your question regarding indexing: in our experience, indexes are not very efficient for aggregation pipelines and do not help much when it comes to query optimization.
Please let us know if you have any other questions we can help you with.
Further to my question on aggregation pipelines, it would be good to see when the queries are hitting the indexes.
Could you consider enabling `$indexStats` in the MongoDB connector and logging the results in some way, so we can see whether there are optimizations that can be done?
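For context, `$indexStats` is a MongoDB aggregation stage that reports how often each index on a collection has been used – exactly the information needed to see whether queries hit the indexes. A sketch of such a pipeline (the collection name is a hypothetical example):

```javascript
// Aggregation pipeline that reports per-index usage statistics.
// In mongosh this would be run as: db.sales.aggregate(pipeline)
// where "sales" is a hypothetical collection name.
const pipeline = [{ $indexStats: {} }];
```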
Thank you for your suggestion – we’ve discussed this with our team, and it seems to us that adding support for `$indexStats` would be reasonable.
With that in mind, we’ve added this feature to our to-do list – we’ll get back to you with updates once there is enough progress with the overall MongoDB connector optimization process.
Please let us know if there is anything else you would like to discuss in the meantime.