I'm in the process of architecting an integration from M3 to a web shop, and there's a big discussion around pricing. Ultimately it boils down to two schools of thought:
1. Do you export pricing, discounts and promotions on an event basis, either through an event hub or streaming pipelines, to the website, and replicate the pricing hierarchy in the web shop?
2. Does the web shop connect directly to the M3 APIs to get pricing in real time, with intelligent caching within the web shop (see the cache-aside sketch after this list)? This method would need to rely on bulk API technology.
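To make "intelligent caching" in option 2 concrete, here's a minimal cache-aside sketch. The Map, the TTL and fetchPriceBatch are my own illustrative placeholders (fetchPriceBatch is sketched further down in this post), not anything Infor ships:

```typescript
// Cache-aside sketch: serve a price from the web shop's own cache when
// it's fresh, otherwise fall through to M3 and store the result.
// fetchPriceBatch is a hypothetical wrapper around OIS320MI.GetPriceLine,
// sketched later in this post.
declare function fetchPriceBatch(
  items: { CUNO: string; ITNO: string; ORQA: string }[]
): Promise<Record<string, string>[]>;

interface CachedPrice {
  price: number;
  fetchedAt: number;
}

const priceCache = new Map<string, CachedPrice>();
const TTL_MS = 15 * 60 * 1000; // e.g. 15 minutes; tune to how volatile pricing is

async function getPrice(cuno: string, itno: string): Promise<number> {
  const key = `${cuno}:${itno}`;
  const hit = priceCache.get(key);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.price; // fresh enough, no M3 call
  }
  const [line] = await fetchPriceBatch([{ CUNO: cuno, ITNO: itno, ORQA: "1" }]);
  const price = Number(line.SAPR); // SAPR (sales price) is an illustrative field name
  priceCache.set(key, { price, fetchedAt: Date.now() });
  return price;
}
```

In production you'd want a shared cache (e.g. Redis) rather than per-process memory, but the shape of the logic is the same.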
More questions will follow around other data requirements such as ATP. After looking at ION API Monitoring for a customer that uses Infor Rhythm, it looks like option 2 is what's used there, although they use the pricing service web request, which has next to no documentation on its usage or its advantages over the standard M3 API (M3/BDProcessor/ItemPricesService).
I'm leaning towards option 2. I believe pricing data should only be stored once: the pricing hierarchy with discounts and promotions is complex, so it's a risk having that logic maintained in two systems.
I've built proof of concepts in NextJS where I can call OIS320MI.GetPriceLine in batches of ~25 and get response times of approximately 1 second, which is sufficient for the website. Whilst 25 is a small batch size for bulk APIs, when I increase it the response times creep up past an acceptable threshold for a website.
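For reference, this is roughly what the PoC does: one HTTP request carrying ~25 GetPriceLine transactions. This is a sketch only; the tenant URL, the v2 multi-transaction route and payload shape, the OAuth helper and the input field names are assumptions from my environment, so check the M3 REST API documentation for your version:

```typescript
// Batch call sketch: ~25 GetPriceLine transactions in a single request
// to the M3 REST multi-transaction endpoint behind the ION API gateway.
// ION_API_BASE and getAccessToken() are placeholders for your tenant
// URL and OAuth2 client-credentials flow.
const ION_API_BASE =
  "https://mingle-ionapi.eu1.inforcloudsuite.com/TENANT/M3/m3api-rest/v2/execute";

declare function getAccessToken(): Promise<string>; // hypothetical OAuth2 helper

async function fetchPriceBatch(
  items: { CUNO: string; ITNO: string; ORQA: string }[]
): Promise<Record<string, string>[]> {
  const token = await getAccessToken();
  const res = await fetch(`${ION_API_BASE}/OIS320MI`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      transactions: items.map((record) => ({
        transaction: "GetPriceLine",
        record, // input fields: customer, item, quantity, etc.
      })),
    }),
  });
  if (!res.ok) throw new Error(`OIS320MI batch failed: HTTP ${res.status}`);
  const data = await res.json();
  // One result per transaction; keep the first returned record of each.
  return data.results.map((r: any) => r.records?.[0] ?? {});
}
```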
I'd like to know people's thoughts on calling bulk APIs in parallel. Usage limits for this type of API are not clear: there are plenty of documented limits for ION, but not for M3 APIs (https://docs.infor.com/inforosulmt/xx/en-us/usagelimits/default.html). This is core Business Engine technology, and my concern is causing general system slowness by pushing volume through bulk APIs.
Imagine the scenario where a website loads ~50 items on a page. Using the approach above, I can get 50 prices back in approximately 1 second (see the sketch below). As the customer pages through the item list, more API calls fire, and provided I stay within the Infor OS service limits (starting at 250k API calls per day), I'm okay.
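Concretely, that page-load scenario is just two parallel batches of 25, reusing the fetchPriceBatch sketch above (chunk is a trivial helper of my own):

```typescript
// Split the ~50 visible items into batches of 25 and price them in
// parallel; with ~1s per batch, the whole page resolves in roughly 1s.
function chunk<T>(arr: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}

async function getPagePrices(cuno: string, itnos: string[]) {
  const batches = chunk(itnos, 25).map((ids) =>
    fetchPriceBatch(ids.map((ITNO) => ({ CUNO: cuno, ITNO, ORQA: "1" })))
  );
  return (await Promise.all(batches)).flat(); // 2 requests in flight for 50 items
}
```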
What happens if the customer wants to pull their entire price set in a timely fashion? If there are 100k items, then using the above logic this turns into submitting 100,000 / 25 = 4,000 bulk API requests in parallel. This is obviously an extreme scenario that I wouldn't recommend, but I'd like to know if anyone else has had similar thoughts, and what an acceptable number of bulk APIs to call in parallel is. You could argue that "timely" isn't appropriate for this scenario, so the batch sizes should be much larger and use asynchronous batch technology to get around the 2-minute timeout policy for APIs. Even then, there's still the question of how many batch APIs it's acceptable to call in parallel; a bounded worker pool like the sketch below is how I'd cap it.
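A sketch of that cap, reusing chunk and fetchPriceBatch from above, with the concurrency limit of 8 picked arbitrarily:

```typescript
// Bounded worker pool: run the task queue with at most `concurrency`
// promises in flight at any time.
async function runWithConcurrency<T>(
  tasks: Array<() => Promise<T>>,
  concurrency: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  // Each worker repeatedly pulls the next task index until the queue drains.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(concurrency, tasks.length) }, worker)
  );
  return results;
}

async function exportAllPrices(cuno: string, allItemNumbers: string[]) {
  // 100,000 items / 25 per batch = 4,000 batch requests, but never more
  // than 8 of them hitting the Business Engine at once.
  const tasks = chunk(allItemNumbers, 25).map((itnos) => () =>
    fetchPriceBatch(itnos.map((ITNO) => ({ CUNO: cuno, ITNO, ORQA: "1" })))
  );
  return (await runWithConcurrency(tasks, 8)).flat();
}
```

What the right concurrency number is, given the Business Engine behind it, is exactly my question.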
Back in the day I could manage this through LCM, being selective about which service accounts could use which API subsystems, and then it just becomes a memory/CPU question.
I appreciate that web browsers cap the number of parallel connections to a single host, but I'm talking about server-side technology calling the APIs, which has no such built-in cap on the number of parallel calls.