Data Archiving in AX is an easy-to-build solution to the problems of large tables and slow data queries. It is typically performed by copying data from heavily used AX tables into newly created 'History' tables.
Archiving keeps the live tables small while older data remains readily available for reporting or research. However, if done improperly, the task can become overly cumbersome, tax system resources heavily, or even cause data integrity issues.
Preventing Data Archiving Issues
The archiving itself is typically best achieved with either batch jobs or AOT jobs, since data archiving usually runs infrequently (or even just once) and doesn't need to be tied into any other functionality in the system.
In most cases, batch jobs should be used. Data archiving is typically done on a timetable built around system downtime (e.g. archive the data once every 3 months), and batch jobs let you set up that schedule at creation time and alter it later as needed. Learn more about running batch jobs.
AOT jobs should only be used when archiving requires a developer's intervention each time, or when the archiving will happen only once. This process is much more manual than the batch job approach. Make sure to consider both options when deciding how to archive.
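The batch approach can be sketched as a minimal RunBaseBatch subclass. The class name and the empty pack/unpack are illustrative, not from any standard AX class; a production class would also pack and unpack its parameters so the batch server can rehydrate them:

```
// Sketch: a batch-schedulable archiving class (illustrative name).
class DataArchiveBatch extends RunBaseBatch
{
    public void run()
    {
        // The insert_recordset population and delete_from purge
        // described in the sections that follow go here.
        info("Archiving complete.");
    }

    public container pack()
    {
        return conNull(); // no state to persist in this sketch
    }

    public boolean unpack(container _packedClass)
    {
        return true;
    }

    public boolean canGoBatchJournal()
    {
        return true; // allow scheduling as a recurring batch task
    }

    static void main(Args _args)
    {
        DataArchiveBatch archiveBatch = new DataArchiveBatch();

        if (archiveBatch.prompt()) // shows the dialog, including the Batch tab
        {
            archiveBatch.run();
        }
    }
}
```

The Batch tab on the prompt dialog is where the recurrence (e.g. every 3 months) is set.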
AOT > Jobs > Right Click > new job
Creating History Tables
History tables store the archived data, and their fields need to match the original table exactly. Duplicating the table is the best way to achieve this. Here's how:
- In the development workspace, right-click the table and select 'Duplicate'. This will create a new table
- Rename the new table. Use a name that will help you remember the purpose of the table
  - ex. $TableName$History
This process ensures that the history table and the original table are identical.
Populating History Tables
Populating the history tables requires copying records from one table into another, using either "insert_recordset" or traditional inserts.
Data archiving revolves around copying large numbers of records from one table to another. An insert_recordset is ideal because it makes far fewer calls to the database than a while select, inserting many records with a single command.
Traditional inserts, also known as a "while select" loop, are a less efficient way to copy records into another table: they take more time and more code. Use them only when per-record logic makes a set-based insert impossible.
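A sketch of both approaches, assuming a hypothetical source table MyTable (fields ItemId, Qty, TransDate) and its duplicated history table MyTableHistory; all table and field names here are illustrative, not from the article:

```
MyTable        myTable;        // source table (illustrative)
MyTableHistory myTableHistory; // duplicated history table (illustrative)

// Set-based copy: one database call moves all qualifying records.
insert_recordset myTableHistory (ItemId, Qty, TransDate)
    select ItemId, Qty, TransDate from myTable
        where myTable.TransDate < systemDateGet() - 90;

// Record-by-record copy ("while select"): one round trip per record.
// Only use this when per-record logic is unavoidable.
ttsBegin;
while select myTable
    where myTable.TransDate < systemDateGet() - 90
{
    myTableHistory.clear();
    myTableHistory.ItemId    = myTable.ItemId;
    myTableHistory.Qty       = myTable.Qty;
    myTableHistory.TransDate = myTable.TransDate;
    myTableHistory.insert();
}
ttsCommit;
```

The 90-day cutoff is only an example of an archiving filter; the real condition depends on the retention policy.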
Purging data tables: delete_from complications
Purge the source tables only after the history tables have been populated. This ensures that if the population fails, the job errors out without any data having been deleted. In general, a delete_from will perform this operation better than a while select delete.
delete_from is well suited to deleting large numbers of records because, like insert_recordset, it makes fewer calls to the database.
However, if a delete method or trigger exists on the table you are purging, delete_from falls back to the speed of a while select, because each record must be evaluated individually. When archiving, we usually do not want those delete methods or triggers to run, as they tend to cascade into other data that corresponds with AX processes.
To prevent this, call the appropriate skip methods on your table buffer before the delete_from.
This ensures the delete_from runs as a single set-based operation and that the archiving of data does not affect more data than it should in AX.
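A sketch of the purge step using the standard xRecord skip methods (skipDataMethods, skipDeleteActions, skipDatabaseLog); verify the exact set you need against your AX version. The table buffer and the date filter here are illustrative:

```
WHSDimTracking whsDimTracking; // buffer for the table being purged (illustrative filter below)

// Skip overridden delete() logic, delete actions, and database logging
// so delete_from runs as a single set-based operation.
whsDimTracking.skipDataMethods(true);
whsDimTracking.skipDeleteActions(true);
whsDimTracking.skipDatabaseLog(true);

delete_from whsDimTracking
    where whsDimTracking.CreatedDateTime < DateTimeUtil::addMonths(DateTimeUtil::utcNow(), -3);
```

Only skip these evaluations when you are certain the cascading logic is not needed for the records being archived.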
Pseudo-coded example with a commonly used WHS table, typically placed in the run method of the batch job class:

```
WHSDimTrackingHistory whsDimTrackingHistory; // new history table
WHSDimTracking        whsDimTracking;        // source table

insert_recordset whsDimTrackingHistory (field-list)
    select (field-list) from whsDimTracking;
    // exists joins are also possible here
```