TAzureTableManagement

From RAD Studio

Go Up to Azure and Cloud Computing with DataSnap

Warning: DSAzure API is deprecated and has been replaced by Data.Cloud.AzureAPI. You are encouraged to use the new API when developing cloud computing applications. See Cloud Computing with DataSnap for more information.
Note: The TAzureTableManagement component is available in the Tool Palette only if you install the dclWindowsAzureManagement190.bpl package, which you can find in the bin folder of the RAD Studio installation folder. TAzureTableManagement is only available for VCL applications.

The Azure table component is a bit more complex than TAzureQueueManagement, but the setup is exactly the same. First drop the TAzureTableManagement component on the form, then drop the connection string component. Add your connection information to the DSAzure.TAzureConnectionString component, connect the two components, and then activate the table component.

Once you have your application running and the table component activated, you can right-click the root node to add a new table or to refresh the list of tables.

Note: When naming a table, do not start the name with a number; the first character must be a letter. Table names otherwise follow the naming guidelines given for queue names, except that hyphens (-) are not supported.
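Azure's table naming rules (a name of 3 to 63 alphanumeric characters that starts with a letter) can be checked before you try to create the table. The following is an illustrative Python sketch, not part of the component's API; `is_valid_table_name` is a hypothetical helper name.

```python
import re

# Azure table names must start with a letter, contain only letters and
# digits (no hyphens), and be 3 to 63 characters long.
TABLE_NAME_RE = re.compile(r"^[A-Za-z][A-Za-z0-9]{2,62}$")

def is_valid_table_name(name: str) -> bool:
    """Return True if 'name' satisfies the Azure table naming rules."""
    return bool(TABLE_NAME_RE.match(name))
```

A name such as `MyTable1` passes, while `1Table` (starts with a digit) and `my-table` (contains a hyphen) are rejected.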

On each table node, there are additional context menu items for deleting the selected table or viewing the table's data in another dialog box. You can also bring up this table dialog by double-clicking the table node, or by holding the SHIFT key while pressing the ENTER key.

Table Data Dialog

The Table Data Dialog has, on the left, a list of all table rows. When you select a row, the table on the right is automatically populated: its keys are the row's column names, and its values are the values stored in the corresponding cells (specific row and column).

Note: Microsoft Azure tables have no schema. This means that each row in the table can have a completely unique set of columns. For example, the first row may have 3 columns, while the second row may have 50 columns. If you want the table to behave like a standard Database table, then you need to manually enforce a specific schema.
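The schemaless behavior described above can be illustrated with two rows modeled as plain dictionaries. This is a conceptual Python sketch under assumed column names, not the dialog's internal representation; `shared_columns` is a hypothetical helper.

```python
# Two entities in the same Azure table; only RowKey and PartitionKey are
# guaranteed to be shared. Every other column can differ from row to row.
row1 = {"RowKey": "1", "PartitionKey": "demo", "Name": "Alice"}
row2 = {"RowKey": "2", "PartitionKey": "demo",
        "Name": "Bob", "Age": 42, "City": "Boston"}

def shared_columns(*rows):
    """Columns present in every given row (the de-facto common 'schema')."""
    return set.intersection(*(set(r) for r in rows))
```

Enforcing a standard database-like schema would amount to requiring that every row expose the same column set before committing it.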

To add a new row to the table, right-click the left list and choose to create a new Entity. The table on the right will be populated with two items, RowKey and PartitionKey. These two columns are required for all table rows; together, they uniquely identify the row within the table. You can edit the values for RowKey and PartitionKey by right-clicking them and choosing to edit the property. This option is only available for RowKey and PartitionKey when you create a new row. Once you have committed the row for the first time, you cannot change the value of these properties.

While adding a new Entity (row), the table on the right also offers the option to create a new property. This brings up the same dialog as the edit context menu item does. In this dialog you specify the name of the property, the value, and the data type. Basic validation is performed here to reduce the risk of entering an invalid value/data type combination.

Note: Even if you pass input validation, the value/data type pairing you choose may still fail when you attempt to commit the new (or modified) row to the server. If this happens, you may lose data you had entered for that row before committing.
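The kind of basic pre-commit check described above can be sketched as follows. This is an assumed Python illustration of validating a string value against a few common Edm data types; `validate_edm_value` is a hypothetical name, and the authoritative check still happens server-side when the row is committed.

```python
def validate_edm_value(value: str, edm_type: str) -> bool:
    """Check that a string value can plausibly be stored under edm_type.
    Covers only a few common Edm types; the server performs the final check."""
    try:
        if edm_type in ("Edm.Int32", "Edm.Int64"):
            int(value)                      # must parse as an integer
        elif edm_type == "Edm.Double":
            float(value)                    # must parse as a float
        elif edm_type == "Edm.Boolean":
            if value.lower() not in ("true", "false"):
                return False
        elif edm_type == "Edm.String":
            pass                            # any string is acceptable
        else:
            return False                    # unknown type: reject, don't guess
        return True
    except ValueError:
        return False
```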

After making the changes to the new row, press Commit to push the changes to the server.

Filtering the List

Filtering the list is another action you can perform with the list on the left side of the table data dialog.

Notice that above the table row list there is a filter field. You can type in a value and apply it by pressing the ENTER key. This filters the list to show only the rows that have that value as either their RowKey or PartitionKey. However, note that the Azure REST API supports no partial matches or wildcards, so the value you enter must be an exact match.

If you want to match on more than one Column (property) of the row, you can use the ADVANCED FILTER option. Note that on the TAzureTableManagement component there is a property, AdvancedFilterPrefix, which by default has a value of ~. In the table data dialog's filter field, you can enter something like:

~RowKey eq '1' and PartitionKey eq '1'

You can match on any column this way, not just PartitionKey and RowKey. If the data type of the column is a string, you need to use single quotation marks around the value in the filter. If it is a number, omit the quotation marks entirely. RowKey and PartitionKey are always string types.
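The quoting rules above can be captured in a small helper that assembles an equality filter expression like the advanced-filter example. This is an illustrative Python sketch, not part of the component; `filter_term` and `build_filter` are hypothetical names, and the AdvancedFilterPrefix (~ by default) would still need to be typed in front of the result in the dialog's filter field.

```python
def filter_term(column: str, value) -> str:
    """Render one "column eq value" term: quote strings, leave numbers bare."""
    if isinstance(value, str):
        return f"{column} eq '{value}'"
    return f"{column} eq {value}"

def build_filter(**criteria) -> str:
    """Join several equality terms with 'and', as in the example above."""
    return " and ".join(filter_term(c, v) for c, v in criteria.items())
```

For example, `build_filter(RowKey="1", PartitionKey="1")` yields `RowKey eq '1' and PartitionKey eq '1'`, matching the sample filter shown earlier.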

Importing Rows

To import rows, select the context menu item for importing entities and choose a text file to read the rows from. The text file must contain a single JSON Value.

The format of the file is as follows:

[
{"RowKey":"row1","PartitionKey":"Imported","AnyKeyName":"Hello World!"},
{"RowKey":"row2","PartitionKey":"Imported","UniqueKey1":["68","Edm.Int64"]},
{"RowKey":"row3","PartitionKey":"Imported","AnotherKey":["3.14","Edm.Double"]},
{"RowKey":"row4","PartitionKey":"Imported","OtherValue":["true","Edm.Boolean"]}
]

The format shown above is a JSON Array in which each item is a JSON Object. Each JSON Object must contain a property for RowKey and PartitionKey. For any other property, the key is a string that follows the naming convention for table columns. The value is either a string (in which case the column is of string data type) or a JSON Array of size 2, where index 0 holds the string representation of the column value and index 1 holds the data type of the column.
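A reader of this import format can be sketched in a few lines. The following Python code is an illustration of the structure described above, not the dialog's actual importer; `parse_import_file` is a hypothetical helper that normalizes every column to a (value, Edm type) pair.

```python
import json

def parse_import_file(text: str):
    """Parse the import format: a JSON Array of objects where each value is
    either a plain string (Edm.String) or a [value, edm_type] pair."""
    rows = []
    for obj in json.loads(text):
        if "RowKey" not in obj or "PartitionKey" not in obj:
            raise ValueError("each entity needs RowKey and PartitionKey")
        row = {}
        for key, val in obj.items():
            if isinstance(val, str):
                row[key] = (val, "Edm.String")       # bare string column
            elif isinstance(val, list) and len(val) == 2:
                row[key] = (val[0], val[1])          # [value, type] pair
            else:
                raise ValueError(f"bad value for column {key!r}")
        rows.append(row)
    return rows

sample = '[{"RowKey":"row2","PartitionKey":"Imported","UniqueKey1":["68","Edm.Int64"]}]'
rows = parse_import_file(sample)
```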

Note: If one or more rows in the text file cannot be imported, those rows are skipped and the process continues with the remaining rows. In other words, the import is not all-or-none.

See Also