How do people with front-end applications, such as consumer websites, expose M3 data like items and pricing?
Which approaches do people use?
Hi,
Below is one of the techniques we practice.
1. Build a database on the client side and sync the necessary information from M3. Use initial-load BODs for master data like the item master, customer party master, etc.
2. Use ION API and AMQP for the ongoing communication (see the consumer sketch below).
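To make step 2 concrete, here is a minimal sketch of an AMQP consumer that receives BOD messages and upserts them into a local store. The broker host, credentials, queue name, and message shape are illustrative assumptions, not the actual ION configuration.

```python
# Minimal sketch: consume BOD messages over AMQP and upsert them locally.
# Host, credentials, and queue name are placeholders -- the real values
# come from your ION / broker configuration.
import json
import sqlite3

import pika  # pip install pika

conn = sqlite3.connect("m3_replica.db")
conn.execute("CREATE TABLE IF NOT EXISTS items (item_no TEXT PRIMARY KEY, payload TEXT)")

def on_message(channel, method, properties, body):
    # Assumes the BOD has been converted to JSON upstream; adjust if your
    # queue carries raw BOD XML instead.
    doc = json.loads(body)
    item_no = doc.get("ItemNumber")  # hypothetical field name
    conn.execute(
        "INSERT INTO items (item_no, payload) VALUES (?, ?) "
        "ON CONFLICT(item_no) DO UPDATE SET payload = excluded.payload",
        (item_no, body.decode("utf-8")),
    )
    conn.commit()
    channel.basic_ack(delivery_tag=method.delivery_tag)

credentials = pika.PlainCredentials("user", "password")
connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="broker.example.com", credentials=credentials)
)
channel = connection.channel()
channel.basic_consume(queue="m3-bod-sync", on_message_callback=on_message)
channel.start_consuming()
```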
So basically, once the data is initialized on the client side, you use ION API & AMQP to keep the data in sync going forward, is that correct?
Which ION APIs do you use: the M3-provided ones, IMS BODs, or Data Lake?
Hey @dm9346 ,
M3 does expose APIs to retrieve operational data from the system, but that relies on a client scheduling calls to retrieve the data, which tends to be less effective, especially at scale.
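For reference, that client-scheduled polling pattern looks roughly like the sketch below. The tenant URLs, credential values, and the item-list transaction are illustrative assumptions, so check the actual endpoints in your ION API catalog.

```python
# Minimal sketch of client-scheduled polling against an M3 API through the
# ION API gateway. URLs, credentials, and transaction names are placeholders.
import time

import requests  # pip install requests

TOKEN_URL = "https://mingle-sso.example.com/TENANT/as/token.oauth2"  # placeholder
API_BASE = "https://mingle-ionapi.example.com/TENANT/M3/m3api-rest"  # placeholder

def get_token():
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": "...",       # from your ION API credential file
        "client_secret": "...",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

while True:
    token = get_token()
    resp = requests.get(
        f"{API_BASE}/execute/MMS200MI/LstItmByItm",  # example item-list transaction
        headers={"Authorization": f"Bearer {token}"},
    )
    resp.raise_for_status()
    for record in resp.json().get("MIRecord", []):
        pass  # reconcile each record against the client-side store
    time.sleep(300)  # poll every 5 minutes -- the scaling weakness noted above
```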
Data Fabric has a new product coming in April 2023, aptly named Data Fabric Pipelines, which provides a push-based data-delivery modeler to publish real-time data events to data warehouses and, in the future, to other data technologies (e.g. AWS Kinesis, Azure Event Hubs). At the same time, M3 is moving toward real-time replication of its data events to Data Fabric in April, so the net result is real-time delivery.
We'll be publishing content to the Infor OS YouTube channel, documentation, and elsewhere on the new product, how to use it, discussing the challenges it solves, and more.
In the meantime, using event-based BODs from M3 to clients might be the closest approximation for what you'd need in a scalable solution.
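As a starting point for the event-based BOD route, here is a minimal sketch that extracts item identifiers from a SyncItemMaster BOD. The namespace URI and element paths are assumptions based on the OAGIS-style BOD layout, so verify them against a real payload from your tenant.

```python
# Minimal sketch: pull item identifiers out of a SyncItemMaster BOD.
# The namespace URI and element paths are assumptions -- confirm them
# against an actual BOD from your tenant.
import xml.etree.ElementTree as ET

NS = {"oa": "http://schema.infor.com/InforOAGIS/2"}  # assumed namespace

def item_ids_from_bod(bod_xml: str) -> list[str]:
    root = ET.fromstring(bod_xml)
    ids = []
    for master in root.findall(".//oa:ItemMaster", NS):
        item_id = master.find(".//oa:ItemID/oa:ID", NS)  # assumed path
        if item_id is not None and item_id.text:
            ids.append(item_id.text)
    return ids
```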
Hi Mike,
Has there been anything more published on YouTube or in the documentation to help us decide how we might be able to use this for real-time reporting? What tools are going to be used against this real-time data?
Has the Data Fabric pipeline been released, or is there any idea about the release date?
Hey @prabodhaa ,
Stream Pipelines for Data Fabric is Generally Available this month with the Infor OS 2023.04 release. Note that this is an add-on SKU - you can find more information by reaching out to your CSM or account executive.
Hey @tdouglass, missed this one, but our team will be uploading some new content on Pipelines to our YouTube channel soon.
Thanks, Mike, will follow up with the CSM.
One point: in this version, is M3 also sending data in real time?
We have an internal web solution that relies on SQL stored procedures to populate panels.
When moving to MT Cloud we could no longer query M3 data directly (we previously had a mirrored instance to query from), so I created a .NET REST API site that works as a generic wrapper over the M3 APIs and converts their output to a SQL-friendly format that works with Microsoft SQL API calls.
This works surprisingly well and provides real-time data.
The only problem is that we can only filter on main-table data unless we create a search API, which can be a bit tricky to get the results you expect (it works great in some scenarios, however). I use a mix of both the search and filter APIs.
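For anyone wanting to try the same idea in another stack, here is a minimal Python analogue of that wrapper: call an M3 API and flatten each MIRecord's NameValue pairs into one flat row that a SQL consumer can read. The endpoint, auth handling, and exact response shape are assumptions for illustration.

```python
# Minimal sketch of a "SQL-friendly" wrapper over an M3 API: call the API,
# flatten each MIRecord's NameValue pairs into one flat dict per row.
# URL, auth, and the exact response shape are assumptions for illustration.
from flask import Flask, jsonify  # pip install flask
import requests

app = Flask(__name__)

@app.get("/items")
def items():
    resp = requests.get(
        "https://mingle-ionapi.example.com/TENANT/M3/m3api-rest"
        "/execute/MMS200MI/LstItmByItm",            # placeholder endpoint
        headers={"Authorization": "Bearer ..."},    # token handling omitted
    )
    resp.raise_for_status()
    rows = [
        {nv["Name"]: nv["Value"] for nv in record.get("NameValue", [])}
        for record in resp.json().get("MIRecord", [])
    ]
    return jsonify(rows)  # one flat object per row, easy to consume from SQL
```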
I've read up on the streaming-data documentation in Data Fabric and just wanted to ask about the proper usage of the pipelines. How does this balance out between BODs and the M3 APIs? Should we just use pipelines for everything, since they're real time?
The two are differentiated as follows:
- Both are as near to real time as possible.
- With streams, you would store and reassemble the relationships and rules yourself.
- They could be used in combination, such as a BOD triggering additional data needs sourced from Data Fabric.
- BODs can be inbound, can be extended, and new BODs can be added through the applications.

Each has different characteristics for different use cases.
APIs are another piece that can be leveraged as needed. Hopefully that gives you some of what you were seeking.
Just to be clear, though, the only 'real-time' option in the Cloud is APIs.
Even though Pipelines and BODs are close to 'real time', there is no real guarantee that a transaction is visible before an external process tries to access data that was just created or added in the ERP system (M3).
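One practical way to handle that gap is a short read-after-write check: after creating a record, poll the API until it is visible before the downstream process continues. A minimal sketch, with a hypothetical `fetch_order` helper standing in for the relevant M3 API call:

```python
# Minimal sketch: poll until a just-created record is visible via the API
# before letting a downstream process act on it. `fetch_order` stands in
# for whatever M3 API call retrieves the record.
import time

def wait_until_visible(fetch_order, order_no, attempts=10, delay=1.0):
    for _ in range(attempts):
        if fetch_order(order_no) is not None:
            return True
        time.sleep(delay)  # back off and retry -- eventual-consistency gap
    return False
```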
We use two scenarios, taking different needs into account. When the process requires the information to be updated online, the solution is via API; when the data does not have to be updated online, we use the database.
Thanks for this explanation.