Preface
In 2010, Microsoft announced a change to its Business Intelligence environment, saying it would focus its development efforts on semantic modeling. At that time, the technology used for analysis was SQL Server Analysis Services (SSAS), a technology that relied on disk-based storage and the distinct steps of model development, deployment, and processing (functions usually under the control of IT). The new technology would house all of its data in memory and allow the user (or model designer) to change the model in real time and view those changes instantly. In addition, the platform sought to remove many of the barriers that had existed in the traditional Business Intelligence landscape by offering a uniform platform for data analysis across an entire organization. The same platform could now be used by an individual user in Excel, deployed to SharePoint (for team Business Intelligence), or deployed directly to a server (for corporate Business Intelligence). This promised to remove a large proportion of the rework traditionally involved in Business Intelligence projects and led to the catchcry "BI to the masses" (meaning that anyone can model a Business Intelligence solution). A free add-in was released for Excel 2010, and the 2012 release of Analysis Services (in SQL Server) included a new storage mode called tabular.
This presented an interesting challenge to the traditional methods of implementing Business Intelligence models. Under that structure, Business Intelligence was essentially controlled by an IT department that followed a waterfall methodology, and an analytical project involved distinct phases with a separation of duties and, more importantly, a separation of people. Those who had to use the data models were often drawn into a back-and-forth battle to make the model work as the business user required.
Tabular models were then introduced, and overnight Excel users were able to consume massive amounts of data and create their own analytical models without the need to involve IT (other than for access to the data, of course!). The product extended the familiar pivot table by allowing users to create pivot tables from many different data sources (and removed the requirement for a pivot table to be sourced from a single data table). More importantly, the ability to create models for the analysis of data was delivered directly to those who needed it most: the analytical end users. The restrictions on analysis and data manipulation that they had previously encountered were removed.
This book is primarily written for those users: individuals who need to answer questions based on large amounts of data. For this reason, we focus on how these users can build models in Excel using PowerPivot. We simply don't want to exclude the users who need this technology the most and who do not have access to the more traditional tools developed for corporate BI. Furthermore, the techniques covered here are directly applicable to corporate tabular models.
Finally, the book looks at how these models can be managed and incorporated into production environments and corporate systems to provide robust and secure reporting.