Summary
In this chapter, we learned how Fabric improves overall performance for data warehousing and analytical reporting. To begin with, we went over the additional artifacts in Fabric. We discussed the overarching concept of OneLake and the Delta tables it stores. We also discussed data movement with notebooks and dataflows, as sketched in the example below. In addition, we touched on the Spark engine as the ready-to-run serverless compute that powers most processing in Fabric.
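To make the notebook path concrete, here is a minimal sketch of notebook-based data movement in Fabric, assuming a lakehouse is attached to the notebook; the file path and table name (Files/raw/sales.csv, sales) are illustrative placeholders, not examples from the chapter:

```python
from pyspark.sql import SparkSession

# In a Fabric notebook a Spark session is already provided; getOrCreate()
# also lets this sketch run outside Fabric for testing.
spark = SparkSession.builder.getOrCreate()

# Read raw files from the attached lakehouse's Files area (hypothetical path).
df = spark.read.option("header", "true").csv("Files/raw/sales.csv")

# Write the result as a Delta table so it lands in OneLake as a managed
# table and appears in the lakehouse's default semantic model.
df.write.format("delta").mode("overwrite").saveAsTable("sales")
```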
Next, we looked at Direct Lake as a source for semantic models. Because Delta tables are the primary data structure, both the warehouse and the lakehouse come with a default semantic model that includes every Delta table created in that container. We learned that Direct Lake mode is intended for very large datasets, while Import mode remains the best choice for small and intermediate semantic models that do not require near real-time reporting or big-data-scale analytics. Microsoft...