Discussion:
Commit by portions - performance optimization
Tatsiana
2019-01-14 11:48:25 UTC
Hi,
I'm trying to improve the performance of an ETL process.
We have an LDIF to DB2 flow.
The iterator in the AL goes through the objects in the LDIF file and then updates or inserts the entries in a DB2 table.
We've found some points to improve:
1. Commit by portions (see the sketch after this list)
2. Change the per-entry update to a custom MERGE statement
3. Try parallel processing of the LDIF files
4. Change the link criteria to custom ones
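Regarding points 1 and 2, here is a minimal sketch of what commit-by-portions combined with a MERGE looks like over plain JDBC (outside of the connector). The table DIR_ENTRIES, its columns UID/CN/MAIL, the connection URL and readLdifEntries() are all illustrative stand-ins, not anything from the actual flow:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.ArrayList;
    import java.util.List;

    public class BatchMergeSketch {
        public static void main(String[] args) throws Exception {
            // Illustrative connection details; substitute your own host, database and credentials.
            Connection con = DriverManager.getConnection(
                    "jdbc:db2://dbhost:50000/MYDB", "user", "password");
            con.setAutoCommit(false);   // point 1: commit manually, in portions

            // Point 2: one MERGE (upsert) instead of a lookup followed by UPDATE or INSERT.
            String sql = "MERGE INTO DIR_ENTRIES T "
                       + "USING (VALUES (?, ?, ?)) AS S (UID, CN, MAIL) "
                       + "ON T.UID = S.UID "
                       + "WHEN MATCHED THEN UPDATE SET T.CN = S.CN, T.MAIL = S.MAIL "
                       + "WHEN NOT MATCHED THEN INSERT (UID, CN, MAIL) VALUES (S.UID, S.CN, S.MAIL)";

            int portionSize = 1000;     // tune: bigger portions mean fewer commits but more log/lock usage
            int count = 0;
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                for (String[] entry : readLdifEntries()) {   // stand-in for the real LDIF iterator
                    ps.setString(1, entry[0]);
                    ps.setString(2, entry[1]);
                    ps.setString(3, entry[2]);
                    ps.addBatch();
                    if (++count % portionSize == 0) {
                        ps.executeBatch();
                        con.commit();                        // commit this portion
                    }
                }
                ps.executeBatch();
                con.commit();                                // commit the final, partial portion
            }
            con.close();
        }

        // Placeholder for the real LDIF parsing; each entry is {uid, cn, mail}.
        private static List<String[]> readLdifEntries() {
            List<String[]> entries = new ArrayList<>();
            entries.add(new String[] {"u1", "User One", "u1@example.com"});
            return entries;
        }
    }

If you stay inside the AssemblyLine, the equivalent is usually to turn off per-operation commit on the connector and commit explicitly every N entries from a hook; the sketch above only illustrates the underlying JDBC mechanism.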
Related to this, I have a question about the commit statement: where is the set of changes to the database stored before the commit? On the database side, or in some connector pool?
I would greatly appreciate any other advice you have regarding performance.

Thanks in advance.

______________________
Best regards,
Tatsiana
Eddie Hartman
2019-01-14 22:31:33 UTC
The best performance is probably achieved by writing the data to a file and then executing a command line to perform a bulk load.
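For DB2 that typically means the LOAD (or IMPORT) utility; a hedged example, assuming the AssemblyLine has already written a comma-delimited file entries.del and the target table is DIR_ENTRIES (both names illustrative):

    db2 "LOAD FROM entries.del OF DEL INSERT INTO DIR_ENTRIES"

LOAD writes pages directly with minimal per-row logging, which is why it is usually much faster than row-by-row inserts, but it also restricts access to the table while it runs.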

/Eddie
Eddie Hartman
2019-01-14 22:25:09 UTC
This depends on the DB. Uncommitted changes could be stored in a buffer pool, or in the table itself, locked until the commit is performed.