Elastic tables are a new type of structure that Microsoft has added to hold large amounts of data in Power Platform environments without consuming as much of the environment's capacity. In use and appearance the experience is the same as with a standard table, but they have some unique features that make it easier to manage millions of records without sacrificing the environment's performance and without having to continually manage the storage and deletion of historical data.
This article summarises, in a clear way, the advantages that this new type of table can bring to an environment.
More records, less space used and potential savings
This is the most obvious change that comes with elastic tables. The way records are organized means they do not take up as much space as in standard tables. By automatically distributing their data across multiple nodes, elastic tables make efficient use of resources and scale easily to adapt to fluctuations in data volume without compromising performance.
This also translates into storage cost efficiency, as elastic tables dynamically adjust their resources according to demand. When the number of records in an elastic table fluctuates, the pricing model minimizes storage costs by not requiring fixed storage over time, as a structure based only on standard tables does. This can be better understood with an example:
As noted, the figures in the example are only indicative and would need to be studied case by case for each customer and environment. But they show that there is a potential saving in moving the environment's data model to elastic tables.
Records with an expiry date
In every elastic table that is created, the system automatically generates an integer column called Time to live, which establishes how long records will live in the table. This time is expressed in seconds, and once it has elapsed, the record is automatically deleted from the database.
Not all records need this lifetime set. In fact, if no value is set, the record remains in the elastic table indefinitely, just like a record in a standard table.
This can be very useful for data that must persist over time but that, by its nature, tends to become useless after a few years in the system. It removes the need to periodically schedule massive deletions on these tables.
If any field of the record, or the Time to live column itself, is modified, the countdown restarts from the last-modified date. In other words, the deletion of any record can be postponed, or prevented altogether by leaving the Time to live column blank.
High-volume bulk operations with improved performance
Among the other virtues of these tables, Microsoft indicates that performance improvements of up to 10 times are achieved within the same Dataverse API limits. In other words, record creation, update and deletion operations can cover many more records, in less time, than they would with standard tables.
This, too, is due to the way data is structured across multiple nodes: the distributed structure allows parallel processing, speeding up data retrieval and analysis and improving the responsiveness of the overall system. The information therefore not only takes up less space, but is also faster to operate on.
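To give a feel for how such bulk operations are typically shaped, here is a hedged sketch that splits a large record set into batches and builds a `CreateMultiple`-style request body for each batch. The `Targets`/`@odata.type` shape follows the Dataverse Web API bulk actions; the table name `new_sensordata` and the column `new_name` are hypothetical examples, not names from the article:

```python
from typing import Any, Iterator

def chunk_targets(records: list[dict[str, Any]],
                  batch_size: int = 100) -> Iterator[list[dict[str, Any]]]:
    """Split a large record set into batches for bulk requests."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

def create_multiple_payload(logical_name: str,
                            batch: list[dict[str, Any]]) -> dict[str, Any]:
    """Shape one batch as a CreateMultiple-style request body.

    The '@odata.type' annotation tells the Web API which table each
    row in 'Targets' belongs to.
    """
    return {
        "Targets": [
            {**record, "@odata.type": f"Microsoft.Dynamics.CRM.{logical_name}"}
            for record in batch
        ]
    }

rows = [{"new_name": f"sensor reading {i}"} for i in range(250)]
batches = list(chunk_targets(rows, batch_size=100))
print(len(batches))             # 3 batches: 100 + 100 + 50 records
payload = create_multiple_payload("new_sensordata", batches[0])
print(len(payload["Targets"]))  # 100
```

Each payload would then be POSTed to the table's `CreateMultiple` action; the batching itself is what lets many more records flow through within the same API limits.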
Accordingly, the system performs a different series of steps when creating an elastic table than when generating a standard one. It creates columns that do not appear in the other type of table, and vice versa:
- The Time to live column is created.
- The Statecode and Statuscode columns, characteristic of standard tables, are not generated.
- The UTC Conversion Timezone Code and Timezone Rule Version Number columns are not generated either.
- The Partition Id field is generated. It can be used to place several records under the same logical partition, so that write or data-retrieval operations on records in the same partition perform better. Assigning this value is not mandatory; it can be left empty, and the system itself will then manage the partitions transparently and automatically.
In addition to these changes, elastic tables have a number of features that are currently not supported and should be taken into account before creating such a table:
- Business rules
- Charts
- Business process flows
- A Dataverse connector for Power BI
- N:N (many-to-many) relationships with standard tables
- Alternate keys
- Duplicate detection
- Calculated and rollup columns
- Currency columns
- Column comparison in queries via FetchXML, Web API or SDK API
- Table sharing
- Composite indexes
- Cascade operations: Assign, Delete, Reparent, Share, Unshare
- Sorting on lookup columns
- Aggregate queries:
  - Distinct value of attribute1 when the order by is on the attribute2 value
  - Paging when there are multiple distinct values
  - Distinct combined with multiple order-by clauses
  - Order by and group by together
  - Group by on a link-entity (left outer join)
  - Distinct with user-owned tables
- Table connections
- Access teams
- Queues
- Attachments
Finally, it should be mentioned that there is currently no easy way to convert a standard table into an elastic table. The internal structure is established at creation time, so transforming an existing table means creating a new elastic table and migrating the data from the existing standard table into it.
Conclusions
Elastic tables are a new type of table offered in Power Platform environments for scenarios with large volumes of data that can fluctuate over time; dynamic scaling and adjustment of resources ensure that the performance and latency of the environment are not impaired. At the same time, they offer a way to reduce the capacity consumed in the environment, and even a record-expiry mechanism that lets us stop worrying about data cleanup from the moment records are created. On top of this, they bring performance and time improvements when operating on data through the same Dataverse API used for the existing standard tables.
I hope you find it helpful. See you in the next #blog : )