Hi,
In our process flow, we cycle through the lines of four Infor-generated HIPAA 834 files. With 5,000 employees and about 15 lines per employee, each file runs roughly 75k lines (5,000 x 15; the node stats below show about 74k iterations per file). We do this to look for errors before sending the files. The outer loop simply iterates over the four files; within it, a Data Iterator loop uses File as the input method and parses by Line. In short, we cycle through every line of four ~75k-line files.
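For readers who don't work in IPA, here is a minimal plain-JavaScript (Node.js) sketch of that structure: an outer loop over the four files and an inner line-by-line loop. The file names are placeholders, not our real paths, and this is only an equivalent sketch, not the actual flow:

    // Sketch only: the equivalent structure in Node.js, run as an ES module.
    import { createReadStream } from 'node:fs';
    import { createInterface } from 'node:readline';

    // Placeholder names for our four Infor-generated 834 files
    const files = ['834_a.txt', '834_b.txt', '834_c.txt', '834_d.txt'];

    for (const file of files) {                 // outer loop: each file
      const lines = createInterface({
        input: createReadStream(file),          // stream the file, don't slurp it
        crlfDelay: Infinity,                    // treat \r\n as a single line break
      });
      let lineNumber = 0;
      for await (const line of lines) {         // inner loop: each line
        lineNumber++;
        // per-line bad-data check goes here (sketched below)
      }
    }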
The only thing we store is a handful of bad lines in the MsgBuilder. We simply use a JavaScript indexOf() to check each line for bad data; if we find any, we set a flag so the flow sends an email notification containing the lines saved in the MsgBuilder.
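Here is roughly what that check looks like in plain JavaScript; BAD_VALUE, msgBuilder, checkLine, and foundBadData are placeholder names, not the actual variables in our flow:

    // Sketch of the per-line check our Assign node performs.
    const BAD_VALUE = 'XXX';       // placeholder for the value we scan for

    let msgBuilder = '';           // accumulates only the bad lines
    let foundBadData = false;      // flag that later triggers the email branch

    function checkLine(line, lineNumber) {
      if (line.indexOf(BAD_VALUE) !== -1) {    // -1 means the value was not found
        msgBuilder += 'Line ' + lineNumber + ': ' + line + '\n';
        foundBadData = true;
      }
    }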
It seems strange that the Assign6340 node would show 94,310 MiB allocated. All it does is run an indexOf() on each line and set a flag if it finds a particular value; it isn't storing any data other than the flag.
[#1 - |Assign6340|Assign|295442x|94310.33 MiB|149020 ms|125710 ms|424661 ms]
[#2 - |Branch5890|Branch|295442x|47909.04 MiB|68648 ms|57680 ms|185064 ms]
[#3 - |End-FileLines|IterEnd|295446x|33345.39 MiB|57599 ms|49310 ms|105296 ms]
[#4 - |FileLines|DataIterator|295446x|32951.44 MiB|50245 ms|44050 ms|99994 ms]
[#5 - |End|End|1x|9.71 MiB|22 ms|30 ms|56 ms]
[Memory Alert - Final] WU Alloc [182914.85 MiB] > [100000.00 MiB] - See KB 2148373
Are there any hints out there for reading large files to check for bad data? Here is our flow:
