Caching Updates (FireDAC)

From RAD Studio

Go Up to Editing Data (FireDAC)

Generalities

The Cached Updates mode allows you to defer posting updates to the database, rather than applying them immediately when the Post / Delete methods are called. Multiple updates can then be posted in a single batch, optionally enclosed in a transaction.

To put a dataset into Cached Updates mode, set its CachedUpdates property to True. The dataset then tracks all changes made since CachedUpdates was last set to True or since the last call to CancelUpdates / CommitUpdates. These changes are recorded in the change journal, ordered by the time each change was made. FireDAC does not track multiple versions of the same record: the last change overrides the previous ones and moves the record to the end of the change order.
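For example, a minimal sketch of journaled editing (FDQuery1 and the NAME column are illustrative names, not part of a specific schema):

FDQuery1.CachedUpdates := True;               // start journaling changes
FDQuery1.Edit;
FDQuery1.FieldByName('NAME').AsString := 'New name';
FDQuery1.Post;                                // recorded in the change journal only
// The database is not touched until ApplyUpdates is called;
// CancelUpdates discards the journaled changes instead.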

FireDAC supports the Decentralized and Centralized Cached Updates modes:

  • Decentralized Cached Updates mode, where each dataset tracks its changes independently of the others. This is the classical and default mode.
  • Centralized Cached Updates mode, where several datasets share a single change log and the changes are recorded in chronological order.

Centralized Cached Updates

When the application needs to log and apply the changes of multiple datasets in chronological order, Centralized Cached Updates may be used. For this purpose, a single TFDSchemaAdapter instance must be assigned to the SchemaAdapter property of each dataset. The TFDSchemaAdapter serves as central storage for the rows and the change log of several datasets.

Centralized Cached Updates is primarily useful in master-detail relationships, where a master dataset propagates changes to the cascading detail datasets, including auto-incrementing field values. To enable propagation for a detail dataset, set FetchOptions.DetailCascade to True. This enables the following:

  • Synchronized master-detail changes, so the changes of master and detail datasets will be recorded and applied in chronological order. For example: first you have the inserted master record, then the inserted corresponding detail records;
  • Propagation of a master identity column value to a detail dataset. For example, when a detail dataset is linked to an identity column of the master dataset and you apply updates, the corresponding detail records must get the actual master identity column value;
  • Cascading deletion of the detail records when a master record is deleted. For example, when a master record is deleted, then all corresponding detail records will be deleted too and this will be recorded in the change log;
  • Cascading propagation of the master field changes to a detail dataset.

Propagation works when the datasets are set up as described in the steps below.

You can also use FetchOptions.DetailServerCascade to control whether FireDAC posts the client-side cascading changes to the database. DetailServerCascade is used together with FetchOptions.DetailCascade.

To enable Centralized Cached Updates with propagation, first set up the range-based master/detail relationship and set the CachedUpdates property of both datasets to True, then perform the following steps:

  1. Drop a TFDSchemaAdapter onto a form.
  2. Set the master dataset's SchemaAdapter property to the TFDSchemaAdapter.
  3. Set the detail dataset's SchemaAdapter property to the same TFDSchemaAdapter.
  4. Set the detail dataset FetchOptions.DetailCascade to True.
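In code, the steps above might look as follows (a sketch; qryMaster and qryDetail are illustrative names for datasets already linked in a range-based master/detail):

qryMaster.CachedUpdates := True;
qryDetail.CachedUpdates := True;
qryMaster.SchemaAdapter := FDSchemaAdapter1;   // shared row storage and change log
qryDetail.SchemaAdapter := FDSchemaAdapter1;
qryDetail.FetchOptions.DetailCascade := True;  // enable change propagation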

At the same time, this enables an in-memory referential constraint for the detail dataset, similar to the following SQL command:

ALTER TABLE <detail> ADD CONSTRAINT
FOREIGN KEY (<detail fields>)
REFERENCES <master> (<master fields>)
ON DELETE CASCADE
ON UPDATE CASCADE

To apply updates, the application should call the TFDSchemaAdapter.ApplyUpdates method instead of the dataset's ApplyUpdates method. To reconcile errors, use the TFDSchemaAdapter.Reconcile method instead of the dataset's Reconcile method.
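A sketch of applying the shared change log inside a transaction (assuming FDConnection1 and FDSchemaAdapter1 are set up as above; the CommitUpdates call on the schema adapter mirrors the dataset-level method):

FDConnection1.StartTransaction;
if FDSchemaAdapter1.ApplyUpdates = 0 then begin
  FDSchemaAdapter1.CommitUpdates;   // clear the shared change log
  FDConnection1.Commit;
end
else begin
  FDConnection1.Rollback;
  FDSchemaAdapter1.Reconcile;       // process the erroneous records
end;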

See the Centralized Cached Updates sample for more details.

Tracking Updates

When your application works in Cached Updates mode, you can track the changes and optionally revert them for each dataset. To track changes, use properties such as UpdatesPending, which indicates whether the change journal contains unapplied changes, and FilterChanges, which controls which record types are visible in the dataset.

To revert the existing changes, use the following properties and methods:

  • SavePoint – sets/gets the current change log state;
  • RevertRecord – reverts the current record to its previous (original) state;
  • UndoLastChange – jumps to the last modified record and reverts it to its previous (original) state;
  • CancelUpdates – reverts all records in the change log.

For example, to implement a simple Undo functionality, you may create an actUndo action and attach the following event handlers:

procedure TForm1.actUndoUpdate(Sender: TObject);
begin
  actUndo.Enabled := FDQuery1.UpdatesPending;
end;

procedure TForm1.actUndoExecute(Sender: TObject);
begin
  FDQuery1.UndoLastChange(True);
end;

As another example, to implement in-memory transactions with the ability to undo a group of changes, you may do the following:


FDQuery1.CachedUpdates := True;
iSavePoint := FDQuery1.SavePoint;
try
  FDQuery1.Append;
  ...
  FDQuery1.Post;
  FDQuery1.Append;
  ...
  FDQuery1.Post;
  FDQuery1.Append;
  ...
  FDQuery1.Post;
except
  FDQuery1.SavePoint := iSavePoint;
end;

Note: In Cached Updates mode, the data editing methods, such as Post and Delete, work with the updates journal rather than posting directly to the database.

In Cached Updates mode, some methods and property settings raise an exception if the updates journal contains changes; the changes must first be committed or canceled. These are:

  • Refresh;
  • setting CachedUpdates to False.
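For example, a guard that resolves pending changes before refreshing (a sketch; the apply-or-cancel policy is just one possible choice):

if FDQuery1.UpdatesPending then
  if FDQuery1.ApplyUpdates = 0 then
    FDQuery1.CommitUpdates
  else
    FDQuery1.CancelUpdates;   // discard changes that could not be applied
FDQuery1.Refresh;             // safe: the updates journal is now empty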


Applying Updates

To apply updates to a database, use the ApplyUpdates method. If applying a record raises an exception, the exception is associated with that record. Note that the ApplyUpdates method:

  • does not raise exceptions; instead, it returns the number of errors encountered;
  • does not wrap the posting of updates in a transaction; the application may do that itself;
  • uses the same update posting logic as immediate updates.

After applying updates, the changed records still remain in the change journal. To remove them from the journal and mark them as unmodified, call the CommitUpdates method. For example:

FDQuery1.CachedUpdates := True;
FDQuery1.Append;
...
FDQuery1.Post;
FDQuery1.Append;
...
FDQuery1.Post;
FDQuery1.Append;
...
FDQuery1.Post;
FDConnection1.StartTransaction;
iErrors := FDQuery1.ApplyUpdates;
if iErrors = 0 then begin
  FDQuery1.CommitUpdates;
  FDConnection1.Commit;
end
else
  FDConnection1.Rollback;

Note: This case corresponds to the expected behavior when AutoCommitUpdates is set to False.

Reviewing Errors

If an error happens inside the ApplyUpdates call, ApplyUpdates records the error into the internal data record structure and continues to process updates until the number of errors reaches AMaxErrors. ApplyUpdates does not raise exceptions. To process all erroneous records after the ApplyUpdates call, either use the reconciling process or filter the erroneous records.

To reconcile records, assign an OnReconcileError event handler and call the Reconcile method. The OnReconcileError event handler allows you to analyze the error and to read or change the current record's field values. On exit, it should assign an action telling FireDAC what to do with the erroneous record. After Reconcile returns, ApplyUpdates may be called again to retry posting the erroneous record changes.
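A minimal sketch of the reconciling flow (the handler signature follows the FireDAC OnReconcileError event; the skip-everything policy is purely illustrative):

procedure TForm1.FDQuery1ReconcileError(DataSet: TFDDataSet;
  E: EFDException; UpdateKind: TFDDatasetUpdateKind;
  var Action: TFDDAptReconcileAction);
begin
  // Inspect E and the current record here. raSkip keeps the record
  // in the change log so that it can be corrected and retried later;
  // raCorrect tells FireDAC the handler has fixed the field values.
  Action := raSkip;
end;

// After ApplyUpdates reported errors:
FDQuery1.Reconcile;
if FDQuery1.ApplyUpdates = 0 then
  FDQuery1.CommitUpdates;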

To filter erroneous records, include rtHasErrors into FilterChanges. Then navigate through the dataset and read the RowError property to get an exception object associated with the current record. For example:

var
  oErr: EFDException;
...
if FDQuery1.ApplyUpdates > 0 then begin
  FDQuery1.FilterChanges := [rtModified, rtInserted, rtDeleted, rtHasErrors];
  try
    FDQuery1.First;
    while not FDQuery1.Eof do begin
      oErr := FDQuery1.RowError;
      if oErr <> nil then begin
        // process exception object
        ...
      end;
      FDQuery1.Next;
    end;
  finally
    FDQuery1.FilterChanges := [rtUnmodified, rtModified, rtInserted];
  end;
end;

See Also

Samples