
Tech News


Our on-prem to cloud database migration: A collaborative effort from What's New

Anonymous
20 Nov 2020
10 min read
Erin Gengo, Manager, Analytics Platforms, Tableau, and Robert Bloom, Manager, Data Science and Data Engineering, Tableau | November 20, 2020

In our last cloud migration post, we outlined the needs and evaluation criteria that drove us to look at cloud database options. Now we're going to discuss the first stage of the migration: moving our data into Snowflake so we could take advantage of its many benefits.

Self-service analytics is a delicate balance between enabling users with the data and insights they need to do their work and maintaining effective data governance enterprise-wide. That balance between individual empowerment and centralized control extended to the physical migration of data and Tableau content from one platform to another as well. Our migration timeline and process framework guided each team so they knew exactly when to join in and transition their data sources from SQL Server to Snowflake. Adhering to this timeline was essential because keeping SQL Server running in parallel with Snowflake was costly to the business, both in infrastructure resources and in people hours.

We intentionally started with the Netsuite pipeline belonging to our Finance Analytics team—a well-governed, well-defined data domain with clear owners. Starting there, we knew we would benefit from a strong partnership and robust testing scenarios, and that we could iron out the kinks before we performed the full migration for the rest of Tableau.

A new way of thinking about data management

As we reimagined data management across all of Tableau, we identified five pillars for the migration process framework that dovetailed with our Snowflake selection criteria and would thereby increase trust and confidence in the data that everyone uses. These pillars are: staffing, governance, authentication, communication, and documentation. In this post we'll discuss staffing, governance, and authentication, highlighting key lessons learned, unexpected issues and how we responded to them, and recommendations to consider when migrating data sets—large or small, simple or complex.

Staffing

We don't want to sugar-coat the complexity of any migration at enterprise scale. We started by forming a small, core migration team and quickly realized more help was needed to update approximately 9,500 workbooks and 1,900 data sources, and to address any downstream content effects caused by differences at the database level. The core team possessed the essential skills we suggest any organization making the same journey have: project management; development expertise with Python or a similar scripting language for modifying semistructured data like XML; and custom SQL savvy. Recruiting talent with the right mix of data and programming skills was time consuming; we ended up reviewing upwards of 300 resumes and placing dozens of calls.

Our central migration team numbered seven people—1.5 full-time program managers for six months, 0.25 of a server admin, approximately three full-time engineers, and two contractors—supporting 15-20 domain experts across sales, finance, and marketing. The extended team—data scientists, engineers, analysts, and subject matter experts who work in business teams and helped move or transform data in Snowflake—was the first line of defense when questions or concerns surfaced from business users.
These "stewards" of our data were able to answer questions ranging from data access and permissions to process and timelines. "We were the bridge between IT and finance business users since many data sources were managed by our team," explained Dan Liang, formerly Manager of Finance Analytics at Tableau (now Manager, Finance Data Office, Salesforce). IT provided the centralized platform and standardization across the enterprise, but Finance Analytics tailored communication for their end users. "It was all hands on deck for a month as we handled content conversion, testing, and validation of data sources for our team's migration. Tableau Prep was an integral part of our validation strategy to automate reconciliation of key measures between Snowflake and SQL Server."

Recommendations:

Identify and define roles and responsibilities: Without clear roles, there will be confusion about who is responsible for which aspects of the process. In our case, we had data stewards and consumers test the data, with specific experts designated to sign off on it.

Automate (where possible): We could have allocated more time to better automate this process, especially around workbook and data source XML conversion, as well as data testing.

Know that you're comparing apples and oranges: We provided summary statistics like row and column counts and data types to our testers to help them compare the two data sets. But because of many factors (differing ETL and refresh times, plus potential latency), it was very difficult to distinguish significant differences from noise.

Governance

Our cloud migration was a golden opportunity to strengthen competencies around governance. Everything from Tableau Server content to naming conventions in Snowflake received fresh scrutiny in an effort to improve the user experience and ensure scale. Teams that had invested in governance by establishing single sources of truth (through well-curated, certified, published data sources) had a more straightforward content migration experience. Those that hadn't invested as much struggled with unclear ownership and expectations around data, and their users encountered surprise downstream effects during the migration, like broken data pipelines and dashboards.

Because data engineers had used many different languages over time, we also held thoughtful upfront discussions about standardizing code patterns, including outlining which characters were and weren't allowed. Acting on these discussions, "We implemented Continuous Integration and Continuous Deployment (CI/CD) on our source control tool (GIT), so we could more efficiently peer-review code and transfer work between members of the team as needed," said Isaac Obezo, a software engineer. "This was much easier than having domain experts do everything in a pipe."

Further strengthening governance, built-in Snowflake features provide transparency into database metadata, including the ability to see and save all queries processed. Since that history is typically only stored for a week, we built a pipeline to store the historical data ourselves so we could provide more targeted support to end users, create new data curations, and promote our single sources of truth. In finance, we used this data to proactively reach out to users who experienced query timeouts and other errors. It also helped us maintain user access controls for Sarbanes-Oxley (SOX) compliance.
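A minimal sketch of this kind of query-history capture, assuming Snowflake's INFORMATION_SCHEMA.QUERY_HISTORY table function (roughly a week of retention) and a hypothetical archive table; the post does not show Tableau's actual pipeline, so treat the names and scheduling as illustrative:

-- Illustrative only: persist recent query history into a table we control so that
-- support, curation, and SOX-related reviews can look further back than Snowflake retains.
-- QUERY_HISTORY returns recent history visible to the current role; a current database
-- must be set, since INFORMATION_SCHEMA is database-scoped.
CREATE TABLE IF NOT EXISTS ADMIN_DB.MONITORING.QUERY_HISTORY_ARCHIVE AS
SELECT * FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY()) WHERE 1 = 0;  -- copy structure only

INSERT INTO ADMIN_DB.MONITORING.QUERY_HISTORY_ARCHIVE
SELECT qh.*
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(RESULT_LIMIT => 10000)) AS qh
WHERE qh.END_TIME > (SELECT COALESCE(MAX(a.END_TIME), '1970-01-01'::TIMESTAMP_LTZ)
                     FROM ADMIN_DB.MONITORING.QUERY_HISTORY_ARCHIVE AS a);

Run on a schedule (a Snowflake task or an external orchestrator, for example), an insert like this builds a growing archive that can back dashboards for query timeouts, errors, and access reviews.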
Recommendations:

Use data quality warnings: These communicate the status of a data source to users quickly and easily, so they know when a migration will happen and what will change.

Recognize that data management is a marathon, not a sprint: Progress and value are delivered iteratively. We concentrated on delivering smaller but valuable changes as we migrated to the best model or data. We also benefited from using data to monitor our cloud solution; for example, we built a visualization to track usage and performance of Snowflake.

Minimize tech debt: Tableau Catalog gave us visibility into our data, including lineage and impact analysis to identify content owners. We were able to easily communicate what data and content could be deprecated and what was critical to move to Snowflake because of its usage or downstream impact. Consider leveraging an enterprise data catalog to help your end users build knowledge and trust in data assets.

Establish a clear cutoff: Appropriately budgeting time to complete a cloud migration is key; from an informal survey we estimated an average of five hours per data source and one to two hours per workbook migration. Eventually, a final cutoff must be established, after which employees no longer have support from the legacy database or from the central migration team. If someone didn't migrate their data when given ample time and assistance, they likely no longer needed it.

Authentication

Moving to a cloud-based database required changing the database authentication method used by users, apps, and connected systems. We went from an all on-premises world of Active Directory (AD) identity management and automatic Windows authentication through AD to the reality of the cloud, where identity management across different apps or systems is not seamless or integrated out of the box. The best option is a federated identity provider (IdP) with single sign-on (SSO) capabilities across different cloud vendors and apps. If you are planning on having multiple cloud-based apps, or want users to have an SSO experience, select the IdP that works best for you before or in conjunction with your Snowflake adoption.

Initially, we connected directly to Snowflake with SAML via our IdP. This works, but it has pain points, especially coming from the automated world of Active Directory: SAML IdP password changes require manually updating embedded passwords in all Tableau content that uses embedded credentials. In the time between a user changing their password and updating their connections in Tableau, any extract refreshes using those credentials would fail and workbooks using the embedded password would not render. The only way to get a seamless password change experience with nothing breaking in Tableau was to switch to OAuth. Be sure to check whether your IdP can be used with OAuth for Snowflake!

One important lesson learned here was the power of a Tableau email alert. We worked with IT to automate an email that assists users with password rotation. One month before a required rotation, users receive an email from their Tableau Server admins reminding them that their password needs to be updated, along with a link to just that content on Tableau Server.
Recommendations:

Be prepared to communicate and document changes: When changing authentication types, expect to receive and answer many questions about how it works and how it differs, keeping in mind users' different degrees of technical understanding.

Strategically manage your database driver: When you're conducting an enterprise deployment to a platform like Snowflake, it's important to push the driver out to everyone's machines to maintain version control and updates.

Supporting end users and content migration

Beyond staffing, governance, and authentication, communication and documentation were equally important to keep everyone aligned throughout all phases of our migration to Snowflake. In the next post in this series, we will explore those critical pillars, which enabled a better end-user experience and transition so that no critical workbooks were left behind. We also hope that sharing some of the individual experiences of our business teams helps other organizations and our customers better understand what an enterprise migration takes. Centralized coordination is mandatory, but business teams and their end users must be equal partners, contributing from beginning to end.

"We knew we needed a landing place for our data, but didn't realize how valuable it would be as a platform for collaboration because it was simple and brought everyone, including components across the business, feeding into the same thing," concluded Sara Sparks, Senior Data Scientist at Tableau. Tableau is now more in tune, as our people and data sources are more unified.

If you missed it, read the first post in our cloud migration story—we covered our evaluation process for modernizing our data and analytics in the cloud.


I Wrote a Book – Hands-On SQL Server 2019 Analysis Services from Blog Posts - SQLServerCentral

Anonymous
20 Nov 2020
3 min read
While this is not the first time I have authored, this is the first book that I wrote as the sole author. Analysis Services is the product I built my career in business intelligence on, and I was happy to take on the project when I was approached by Packt.

One of my favorite questions is how much research time I put in for this book. The right answer is almost 20 years. I started working with Analysis Services when it was called OLAP Services, and that was a long time ago. Until Power Pivot for Excel and tabular model technology were added to the mix, I worked in the multidimensional model. I was one of the few, or so it seems, who enjoyed working in the multidimensional database world, including working with MDX (multidimensional expressions). However, I was very aware that tabular models with the VertiPaq engine were the model of the future. Analysis Services has continued to be a significant part of the BI landscape, and this book gives you the opportunity to try it out for yourself.

This book is designed for those who are involved in business intelligence work but have been working mostly in self-service or end-user tools. Now you are ready to take your model to the next level, and that is where Analysis Services comes into play. As part of Packt's Hands-On series, I focused on getting going with Analysis Services, from install to reporting. Microsoft has developer editions of the software which allow you to do a complete walkthrough of everything in the book in a step-by-step fashion. You will start the process by getting the tools installed, downloading sample data, and building out a multidimensional model. Once you have that model built out, we build a similar model using tabular model technology. We follow that up by building reports and visualizations in both Excel and Power BI. No journey is complete without working through security and administration basics. If you want to learn by doing, this is the book for you.

If you are interested in getting the book, you can order it from Amazon or Packt. From November 20, 2020 through December 20, 2020, you can get a 25% discount using this code – 25STEVEN – or by using this link directly.

I want to thank the technical editors who worked with me to make sure the content and the steps worked as expected – Alan Faulkner, Dan English, and Manikandan Kurup. Their attention to detail raised the quality of the book significantly and was greatly appreciated. I also have to thank Tazeen Shaikh, who was a great content editor to work with. When she joined the project, my confidence in the quality of the final product increased as well. She helped me sort out some of the formatting nuances and coordinated the needed changes to the book. Her work on the book with me was greatly appreciated. Finally, many thanks to Kirti Pisat, who kept me on track in spite of COVID impacts throughout the writing of the book this year.

I hope you enjoy the book!

The post I Wrote a Book – Hands-On SQL Server 2019 Analysis Services appeared first on SQLServerCentral.


Server-Level Roles – Back to Basics from Blog Posts - SQLServerCentral

Anonymous
20 Nov 2020
6 min read
Server-Level Roles

SQL Server security is like a box of chocolates. Wait, it is more like an onion – with all of the layers that get to be peeled back. One of the more important layers, in my opinion, is the layer dealing with roles. I have written about the various types of roles on several occasions. Whether it be fixed server role memberships, fixed server role permissions, or database role permissions (among several options), you can presume that I deem the topic to be of importance.

Within the "Roles" layer of the SQL Server security onion, there are multiple additional layers (as alluded to just a moment ago), such as database roles and server roles. Focusing on server roles, did you know there are different types of server roles? These types are "fixed roles" and "custom roles." Today, I want to focus on the custom type of role.

Custom Server Roles

Starting with SQL Server 2014, we were given a new "feature" to help us minimize our security administration efforts: it allows a data professional to create a "server role" in SQL Server and grant specific permissions to that role. I wrote about how to take advantage of this in the 2014 recipes book I helped to author, but never got around to creating an article here on how to do it. In this article, I will take you through a quick example of how to take advantage of these custom roles.

First, let's create a login principal. This principal is a "login", so it will be created at the server level. Notice that I perform an existence check for the principal before trying to create it. We wouldn't want to run into an ugly error, right? Also, when you use this script in your environment, be sure to change the DEFAULT_DATABASE to one that exists in your environment. While [] is an actual database in my environment, it is highly unlikely it exists in yours!

USE [master];
GO

IF NOT EXISTS ( SELECT name
                FROM sys.server_principals
                WHERE name = 'Gargouille' )
BEGIN
    CREATE LOGIN [Gargouille]
    WITH PASSWORD = N'SuperDuperLongComplexandHardtoRememberPasswordlikePassw0rd1!'
       , DEFAULT_DATABASE = []
       , CHECK_EXPIRATION = OFF
       , CHECK_POLICY = OFF;
END;

Next, we want to go ahead and create a custom server-level role. Once created, we will grant specific permissions to that role.

--check for the server role
IF NOT EXISTS ( SELECT name
                FROM sys.server_principals
                WHERE name = 'SpyRead'
                  AND type_desc = 'SERVER_ROLE' )
BEGIN
    CREATE SERVER ROLE [SpyRead] AUTHORIZATION [securityadmin];
    GRANT CONNECT ANY DATABASE TO [SpyRead];
    GRANT SELECT ALL USER SECURABLES TO [SpyRead];
END;

As you can see, there is nothing terrifyingly complex about this so far. The statements should be pretty familiar to the data professional, and they are fairly similar to routine tasks performed every day. Note in this second script that after I check for the existence of the role, I simply use CREATE SERVER ROLE to create the role, then I add permissions explicitly to that role. Now, I will add the login "Gargouille" to the server role "SpyRead". In addition to adding the login principal to the role principal, I will validate permissions before and after – permissions for Gargouille, that is.
EXECUTE AS LOGIN = 'Gargouille';
GO

USE [];
GO

SELECT * FROM fn_my_permissions(NULL, 'DATABASE') fn;

REVERT;

USE master;
GO

IF NOT EXISTS ( SELECT mem.name AS MemberName
                FROM sys.server_role_members rm
                    INNER JOIN sys.server_principals sp
                        ON rm.role_principal_id = sp.principal_id
                    LEFT OUTER JOIN sys.server_principals mem
                        ON rm.member_principal_id = mem.principal_id
                WHERE sp.name = 'SpyRead'
                  AND sp.type_desc = 'SERVER_ROLE'
                  AND mem.name = 'Gargouille' )
BEGIN
    ALTER SERVER ROLE [SpyRead] ADD MEMBER [Gargouille];
END;

EXECUTE AS LOGIN = 'Gargouille';
GO

USE [];
GO

SELECT * FROM fn_my_permissions(NULL, 'DATABASE') fn;

REVERT;

We have a few more things happening in this code snippet. Let's take a closer look and break it down a little bit. The first section tries to execute some statements as "Gargouille". When this attempt is made, an error is produced – which is good, because it validates that the principal does not have permission to connect to the requested database. The next statement of interest adds the "Gargouille" principal to the SpyRead server role. After the principal is added to the custom server role, I attempt to impersonate the "Gargouille" principal again, connect to the database, and run a permissions check. These are the results from that last query. Lastly, I run a check to validate that Gargouille is indeed a member of the server role "SpyRead" – which it is.

From these results we can see the power of the custom server role. In this case, I had a user that "needed" to access every database on the server. Instead of granting permissions on each database one by one, I granted the "Connect" permission (and a couple of other permissions to be discussed in the follow-up article) to the server role and then added Gargouille to that role. This reduced my administration time quite a bit – more if there are hundreds of databases on the server. In the follow-up article, I will show how this will help to make it easier to grant a user the ability to view schema definitions as well as read from every database in one fell swoop (a minimal sketch of that grant follows below). Stay tuned!

Wrapping it Up

In this article, I have shown how to use the power of custom server roles to help reduce your administration time. A custom security role is like using a security group to grant a bunch of people the same set of permissions. When you use a security group to manage multiple people, administration becomes so easy it almost feels like you have offloaded the job to somebody else. Now it is your turn: take what you have learned in this article and see how you could apply it within your environment to help you be a rockstar data professional.

Feel free to explore some of the other Back to Basics posts I have written. Are you interested in more articles showing what and how to audit? I recommend reading through some of my auditing articles. Feeling like you need to explore more about security within SQL Server? Check out this library of articles here.

Related Posts:
SQL Server User Already Exists - Back to Basics – January 24, 2018
Quick Permissions Audit – January 14, 2019
When Too Much is Not a Good Thing – December 13, 2019
Easy Permissions Audit – January 21, 2019
Cannot Use the Special Principal - Back to Basics – November 7, 2018

The post Server-Level Roles - Back to Basics first appeared on SQL RNNR. The post Server-Level Roles – Back to Basics appeared first on SQLServerCentral.
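The server-level permission behind the "view schema definitions" capability teased above is VIEW ANY DEFINITION. A minimal, illustrative sketch of adding it to the same role (the follow-up article covers the full approach):

USE [master];
GO

-- Illustrative addition to the SpyRead role created above: members can now view
-- object definitions in every database, alongside the CONNECT ANY DATABASE and
-- SELECT ALL USER SECURABLES permissions already granted.
GRANT VIEW ANY DEFINITION TO [SpyRead];
GO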


Coming soon to Tableau: More power, simplicity, and predictive flexibility from What's New

Anonymous
20 Nov 2020
5 min read
Sarah Wachter, Product Management Manager | November 20, 2020

We were excited to release predictive modeling functions in 2020.3, empowering Tableau users with predictive statistical functions accessible from the native Tableau table calculation interface. We put powerful predictive analytics right into the hands of business users, keeping them in the flow of working with their data. Users can quickly build statistical models and iterate based on prediction quality, predict values for missing data, and understand relationships within their data.

However, we knew that a significant use case was still challenging. Surprising exactly no one, a key use case for predictive modeling is generating predictions for future dates. While you can accomplish this in 2020.3 with some complicated calculations, it certainly isn't easy. We also knew that linear regression, specifically ordinary least squares, isn't always the best predictive model for every data set and situation. While it's very widely used and simple to understand, other regression models are better suited for certain use cases or data sets, especially when you're looking at time-series data and want to make future projections.

We want to make sure that our users have the power, simplicity, and flexibility they need to apply these functions to a wide variety of use cases, so we're delighted to announce two enhancements to predictive modeling functions. In the 2020.4 release, you'll be able to select your statistical regression model from linear regression (the default option), regularized linear regression, or Gaussian process regression. You'll also be able to extend your date range—and therefore your predictions—with just a few clicks, using a simple menu. With these new features, predictive modeling functions become even more powerful and flexible, helping you see and understand your data using best-in-class statistical techniques. Let's take a closer look at each feature.

Model Selection

By default, predictive modeling functions use linear regression as the underlying statistical model. Linear regression is a common statistical model that is best used when there are one or more predictors that have a linear relationship with the prediction target (for example, "square footage" and "tax assessment") and those predictors don't represent two instances of the same data ("sales in GBP" and "sales in USD" represent the same data and should not both be used as predictors in a linear regression). Linear regression is suitable for a wide array of use cases, but there are some situations where a different model is better. In 2020.4, Tableau supports linear regression, regularized linear regression, and Gaussian process regression as models. For example, regularized linear regression is a better choice when there is an approximately linear relationship between two or more predictors, such as "height" and "weight" or "age" and "salary". Gaussian process regression is best used when generating predictions across an ordered domain, such as time or space, or when there is a nonlinear relationship between the predictor and the prediction target. Models can easily be selected by including "model=linear", "model=rl", or "model=gp" as the first argument in a predictive modeling function.

Date Axis Extension

Additionally, we knew that making predictions for future dates is a critical feature of predictive modeling functions.
To support this, we added a new menu option to date pills that allows you to quickly and easily extend your date axis into the future. While we built this function to support predictive modeling functions, it can also be used with RUNNING_SUM or other RUNNING_ calculations, as well as with our R and Python integrations.

Let's take a look at how these new functions can be applied. First, let's look at how to extend your date axis and make predictions into the future. In the example below, we've already built a predictive modeling function that predicts our sales of various types of liquor. Of course, since this is a time series, we want to see what kind of sales numbers we can expect in the coming months. This is as simple as clicking the date pill, selecting "Show Future Values", and using the menu options to set how far into the future you want to generate predictions.

Next, let's look at model selection. In the example below, we've already built a predictive modeling function that uses month and category as predictors for sales of various types of liquor. We can see that the default linear regression is capturing sales seasonality and overall trends. However, we can easily switch to regularized linear regression to see how the regularized model affects the overall amplitude of the seasonal behavior. Since we're building predictions across an ordered domain (time), Gaussian process regression is also a valid model to use with this data set. In either case, it's as simple as including "model=rl" or "model=gp" as the first argument of the predictive function.

While we've made it very easy to switch between models, for most use cases linear regression will be an appropriate choice. Selecting an incorrect model can lead to wildly inaccurate predictions, so this functionality is best reserved for those with a strong statistical background and an understanding of the pros and cons of different models.

Get started with the newest version of Tableau

With these additions, we've significantly expanded the flexibility and power of our predictive modeling functions. Gaussian process regression will let you generate better predictions across a time axis, and regularized linear regression will let you account for multiple predictors being affected by the same underlying trends. Date axis extension gives you an easy, intuitive interface to generate predictions into the future, whether you're using predictive modeling functions or external services like R or Python. Look for these new features in the upcoming Tableau 2020.4 release to get started—and see what else we're working on.

As always, thank you to the countless customers and fans we've spoken with as we built these new features. We couldn't have done it without you.


Looking back at Election 2020: The power of online polling and visualization from What's New

Anonymous
20 Nov 2020
9 min read
Steve Schwartz, Director, Public Affairs at Tableau | November 20, 2020

The 2020 presidential election was two weeks ago, but in the world of election data, results are still being processed. Every election is a data story, but 2020 was especially so. As analysts pick apart the accuracy of the polls—and voters decompress from consuming the stream of data stemming from the overwhelming number of mail-in votes this year—Tableau and SurveyMonkey have taken time to reflect on the partnership launched this fall to visualize critical public opinion data.

Through the Election 2020 partnership, SurveyMonkey continuously polled a subset of its nearly 2 million daily survey respondents on a range of topics related to the election—from candidate preference, to likelihood of voting by mail, to concerns about COVID-19. Working with such a robust data set, they were able to break down their data by a number of demographic cuts and visualize it in Tableau, so anyone could analyze the data and understand what factors could shape the outcome this year. Axios, as the exclusive media partner for the initiative, contextualized the data and offered their own analysis.

Tableau talked with Laura Wronski, research science manager at SurveyMonkey, about how their online polling data captured the eventual results of the election, the power of data visualization to showcase the complexities in demographic analysis of voter trends, and the effect that key issues—like mail-in voting and COVID-19—had on the outcome.

Tableau: As you look back on the polling data you gathered in the lead-up to the election, what is your big-picture takeaway about what your data revealed?

Wronski: One thing that we really came to appreciate was the value of having the 50-state Candidate Preference map to visualize our data. We actually feel that we did well in terms of directionally calling the states correctly. We were dead-on in a lot of cases, and in the places where we were off, we were oftentimes off by less than other pollsters. And when you look at our map, you can see that the states we focused on for the whole election were the ones that proved to be very pivotal. In Georgia, we had a slight Biden lead, and for Arizona as well. We had Nevada very close, though that ended up being more of a Biden state than our data predicted. What's interesting is that the reason these states are so critical is that the demographics there are changing. The fact that Georgia was competitive and went blue for the first time in many years was fascinating. Our data showed that, but it's something that also gave us pause as we were putting up those numbers—we really wanted to be confident in the data.

(The Candidate Preference map shows survey responses to the question: "If the 2020 presidential election were being held today among the following candidates, for whom would you vote?")

This was a year in which people's confidence in polling data was ultimately quite shaken. But as you said, your data was pretty accurate. What does that say to you about your methodology of conducting online surveys?

That's something that we've been talking about a lot internally. There were obviously some big errors this year when comparing all pre-election polling to the final outcomes. Wisconsin, for instance, is a state that pretty much everybody got wrong.
The FiveThirtyEight polling average for Wisconsin aggregated 72 polls in the two months leading up to the election: only two had a tie, and just one—one of our polls—had a Trump lead at some point. But 69 polls had Biden winning, many of them by a wide margin, and he ended up winning by just 1 percent or so. That means nearly all of the polls overestimated Biden. That is disorienting, because while a two-point error is not a big one, if 10 pollsters all show the same error, it gives people a sense of confidence in the data that didn't actually pan out.

One thing that we have seen through our polling efforts is that because we collect data through online surveys and operate at such a large scale, we're able to get pretty robust data from small segments and subgroups of people. So we could look at responses just among Black Americans, and we did a story with Axios focused on young voters. A lot of times, these subsets are really hard to see in a 1,000-person national poll. So that is something that we think is an advantage of online polling going forward—particularly as what we've seen this year is that it's hard to get the right mix of people in the underlying sample. The more we're able to get to a large scale with the data, the more we're able to look closely at respondents and cut the data by different factors to make sure we're looking not just at who lives in rural areas, for instance, but that we're getting the right mix of people who live in rural areas by race and education.

(Image credit: Axios)

As you're working with such a vast amount of data and identifying trends, why is visualizing the data so important?

Visualization is so useful because it really allows you to see the trends, rather than look at the numbers and get a relative sense for what they're showing. We built a dashboard that enables people to dig into different demographic groups and really understand differences among them, not just between them. In looking at Black voters, for instance, you're able to layer in education or gender, and see how more granular subsets fall in terms of candidate preference. And looking at white voters as an entire group, they were the only ones in our dashboard to fall on the Trump side of the margin. But if you add in education, you can see that it was just white voters without a college degree who fell on that side. And if you add in gender, it's really just men. The more cuts you can do, the more you can see that there are such overwhelming divides along so many demographic lines. There is a temptation to treat [demographic] groups like race as a monolith, but being able to visualize the data and see how different factors layer in encourages people to take a more nuanced approach to understanding voter groups.

The way this election unfolded hinged not just on the number of votes, but on the way people voted. What did your polling data reveal about the role that mail-in voting ultimately played in the election?

Early on in the process, our data was pointing to what would be a big divergence by party in voting by mail. If you look at our dashboard where you can explore people's likelihood of voting by mail, you can see a dark purple map indicating the high percentage of Democrats who are very likely to vote by mail, and conversely, you can see a deep orange map of the high percentage of Republicans who are not at all likely to vote by mail. That obviously had an effect on the timeline of the election, and the way the results played out on election day and the days that followed.
We're happy that we got the data out there, and we were right on the money in the sense of how much of a story it would be. I think there's more to think about in how we tell the story around how high rates of mail-in voting can affect the timing of results. People are so used to having all the data on election day, but is there a way we can show with data and visualizations how mail-in voting can extend that timeline?

Another significant factor in this election was the context of COVID-19. As you were polling people about the election and their preferences, you were also asking respondents questions about COVID-19 and their concerns around the virus. Did you see any correlations in the data between people's COVID responses and the way the election turned out?

Dating back to February, we've asked for people's responses to five questions that relate to their concerns about the coronavirus. And over time, what we've seen is that, on the whole, people are more concerned about the economic impact of COVID-19 on the country [overall]. That's much higher than the number of people who said that they were worried about the economic impact on their own households. Usually the lowest concern is that they or someone in their family will get coronavirus. Rates of concern about the virus were also much lower among white respondents. We've seen in our data that, on the whole, Democratic voters were much more likely to say they were concerned about COVID, and Republicans were less likely to see it as a threat—and if they were, their concern was much more focused on the economy. So it's clear that people were looking at the macro level, and that the economic impacts, even more than the health concerns, were what motivated voters. As waves of the virus move across the country, it's useful to track what changes and what doesn't about people's opinions. We can see how these concerns impacted what people thought about when voting, and—when you look at mail-in voting rates—how they impacted the way people voted.

To see more from Tableau, SurveyMonkey, and Axios's Election 2020 partnership, visit the website.


Book Review: Making Work Visible from Blog Posts - SQLServerCentral

Anonymous
20 Nov 2020
3 min read
I was recommended Making Work Visible by a developer at Redgate Software. The book caught my eye as it seeks to help you work more efficiently by watching out for some of the common things we do wrong in software development. It's a DevOps-related book, and many of the concepts of flow, work in progress, etc. that we talk about in DevOps are things that I saw in the book.

The overall message is that there are five main time thieves that cause you to work less efficiently than you or your team might otherwise function. These are:

  • Too much work-in-progress
  • Unknown dependencies
  • Unplanned work
  • Conflicting priorities
  • Neglected work

The different issues are introduced early on, with each getting a few pages of description. Later in the book, the author delves into more detail on each type of time thief, its impact, and ways you can think about working around the issues. I read this book alone, but I might recommend you work through it as a team and do some of the exercises shown in the book. Each is really a physical activity, but I'm sure it would work in a virtual meeting these days.

The book is really built around Kanban boards, and there is a lot of detail on the ways to organize, or not organize, your board and team. I've seen some of the positives at Redgate, and some negatives as well, though I've seen more negatives at other companies. There are suggestions for meetings, techniques for informing others of status and progress, and even some "beastly practices." There is lots of information supporting why something is good or bad, or really, more or less helpful.

I read most of this book in my Kindle app, but I did go through some of it in the cloud reader from Amazon. There are lots of images and illustrations, and lots of color, so I might recommend that you get the physical book, or if you like Kindle, read it online at times, especially for the examples and diagrams of the Kanban boards.

For me, personally, I get caught up in unplanned work at times, but often I have too much WIP and neglected work. I start things and don't finish them quickly enough, or focus on getting them out of the way. One thing I took away from this book is to slow down and dedicate more blocks of time to knocking items off my list, rather than doing some things only when I feel like it.

The post Book Review: Making Work Visible appeared first on SQLServerCentral.

The Silliness of a Group from Blog Posts - SQLServerCentral

Anonymous
20 Nov 2020
2 min read
Recently we had a quick online tutorial for Mural, a tool for collaborating online with a group. It's often used for design, but it can be used for brainstorming and more. There are templates for standups, business models, roadmaps, and more.

Anyway, we had a designer showing a bunch of others how to do this: some product developers, team leads, advocates, and more. During the session, as we were watching, we were in a live mural where we could add items. I added a post-it with "Steve's Note" on it, just to get a feel for it. I also added a photo I'd taken. Before long, the group chimed in, especially when the host misidentified Phoebe the horse as a goat.

We had another part of the session dealing with voting and making choices. The demo was with ice cream, allowing each of us to vote on a set of choices. Next we went to a template where we could add our own choices, and people had fun, including me.

All in all, I see Mural as an interesting tool that different groups could use in a variety of ways to collaborate: with some sort of Zoom/audio call and a virtual whiteboard as the focus, there's a lot here. I actually think this could be a neat way of posing questions, taking votes or polls, and sharing information in a group that can't get together in person.

The post The Silliness of a Group appeared first on SQLServerCentral.


Daily Coping 20 Nov 2020 from Blog Posts - SQLServerCentral

Anonymous
20 Nov 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. Today's tip is to choose a different route and see what you notice on the way.

The pandemic has kept most of us at home. We drive less, go fewer places, do less. For me, while I still go a few places, I definitely do less. When I saw this item pop up, I had to think about how I'd choose a new route. Walking from my house gives me only one route for a mile to get to the end of my street and out to a place where I can then go in a few directions. Driving around, it often doesn't make sense to go a different way, but I thought this might be a good way to change my day.

I'm lucky in that my gym has been open since May, and while there are restrictions and limitations, I can go. My week usually has 3-4 trips: twice for yoga and 1-2 trips for weights. I've avoided most classes, though I may go back to a swim a week as well. The route there is pretty simple, and while the facility is about 10 miles away, I can take a different route, wind through some neighborhoods slightly out of my way, and keep it to about 14 miles.

I did that recently. I took the long way, which winds alongside E-470 in south Denver, but also crosses a small bridge over the highway. That leads to the back of the neighborhood where I used to live. I drove through, looking at houses where friends used to live and places I used to bike, walk, or ride horseback through. It was a nice trip on which to reminisce.

The post Daily Coping 20 Nov 2020 appeared first on SQLServerCentral.


Power BI Monthly Digest – November 2020 from Blog Posts - SQLServerCentral

Anonymous
20 Nov 2020
1 min read
In this month's Power BI Digest, Matt and I will again guide you through some of the latest and greatest Power BI updates. In our November 2020 edition we highlighted the following features:

  • New Field and Model View (Preview)
  • Filters Pane – Apply all filters button
  • Data Lasso now available in maps
  • Visual Zoom Slider
  • Anomaly Detection (Preview)

The post Power BI Monthly Digest – November 2020 appeared first on SQLServerCentral.


Announcing EightKB 2021 from Blog Posts - SQLServerCentral

Anonymous
19 Nov 2020
2 min read
The first EightKB back in July was a real blast. Five expert speakers delivered mind-melting content to over 1,000 attendees! We were honestly blown away by how successful the first event was, and we had so much fun putting it on that we thought we'd do it again.

The next EightKB is going to be on January 27th, 2021, and the schedule has just been announced! Once again we have five top-notch speakers delivering the highest quality sessions you can get! Expect a deep dive into the subject matter and demos, demos, demos! Registration is open and it's completely free! You can sign up for the next EightKB here!

We also run a monthly podcast called Mixed Extents, where experts from the industry join us to talk about different topics related to SQL Server. They're all on YouTube, or you can listen wherever you get your podcasts!

EightKB and Mixed Extents are 100% community driven with no sponsors…so, we've launched our own Mixed Extents t-shirts! Any money generated from these t-shirts will be put straight back into the events.

EightKB was set up by Andrew Pruski (b|t), Mark Wilkinson (b|t), and myself, as we wanted to put on an event that delved into the internals of SQL Server, and we're having great fun doing just that. Hope to see you there!

The post Announcing EightKB 2021 appeared first on Centino Systems Blog. The post Announcing EightKB 2021 appeared first on SQLServerCentral.

Speaking at DPS 2020 from Blog Posts - SQLServerCentral

Anonymous
19 Nov 2020
1 min read
I was lucky enough to attend the Data Platform Summit a few years ago. One of my favorite speaking photos was from the event: me on a massive stage, in a massive auditorium, with a huge screen.

This year the event is virtual, and I'm on the slate with a couple of talks. I'm doing a blogging session and a DevOps session. Both are recorded, but I'll be online for chat, and certainly available for questions later.

There are tons of sessions, with pre-cons and post-cons, running around the world. It's inexpensive, so if you missed the PASS Summit or SQLBits, join DPS: US$124.50 for the event and recordings. Pre/post-cons are about $175. Register today and I'll see you there.

The post Speaking at DPS 2020 appeared first on SQLServerCentral.


Daily Coping 19 Nov 2020 from Blog Posts - SQLServerCentral

Anonymous
19 Nov 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. Today's tip is to overcome frustration by trying out a new approach.

I tend to go with the flow, and I don't have a lot of frustrations with how my life goes, but I do have some. I'm annoyed that my gym limits class sizes and quite a few people seem to reserve spots and then not show up. However, I'm also grateful I can just go. I'm annoyed that despite months of fairly safe competition and practice in volleyball, with little evidence of transmission from games, counties have issued blanket closures of all competitions. I get it, and I know this disease isn't something to take lightly, but I also know we need to balance that with continuing to live.

Our club cancelled full practices, limiting us to 5 athletes and two coaches for skills, with no competition. That makes coaching hard, and it's a challenge for keeping a team together. After talking with my assistant, rather than getting upset, we decided to donate an extra day of our time and split out practices to conform to the limits, keeping kids in the gym twice a week, with a rotation so that different teammates get a chance to see each other.

It's a minor part of my life, but still frustrating. Taking a positive approach and changing up how I work through this has helped me cope with the frustration.

The post Daily Coping 19 Nov 2020 appeared first on SQLServerCentral.


European businesses navigate pandemic: YouGov survey finds data gives critical advantage, optimism, and confidence from What's New

Anonymous
18 Nov 2020
7 min read
Tony Hammond, Vice President Strategy and Growth, EMEA | November 18, 2020

With a surge of COVID-19 cases triggering a second shutdown in Europe, continued disruption is imminent as businesses face more difficult decisions and challenging realities in the days ahead. We remain in a fight-or-flight mode, and with that comes added pressure to get things right, to quickly learn from our mistakes, and to understand our data. As 2021 approaches, the future remains uncertain, weighing on the minds and hearts of business leaders, but data can be a guide—helping organisations out-perform and out-survive.

Our team in Europe, and frankly around the globe, has seen change, agility, digital transformation, and data accentuated by the pandemic and prioritised by organisations as they navigate a new normal and chart a plan forward. No journey is the same, however. With organisational challenges and shifting customer and business priorities, some businesses are lightly tip-toeing into the age of data while others are already reaping the benefits and building a "memory bank" by learning, testing, and understanding their data.

We partnered with YouGov, an international research data and analytics group headquartered in London, to survey more than 3,500 senior managers and IT decision makers in four major European markets: the UK, France, Germany, and the Netherlands. We explored several key questions, like:

  • What are the benefits that organisations experience when using and relying on data (especially during the pandemic)?
  • What lessons have businesses learned thus far as a result of the pandemic?
  • What will companies prioritise when it comes to future plans, and what role will data play?

In this blog post, we'll share top learnings from our research. Explore the full results in this visualization on Tableau Public.

The data divide between European businesses

Greater optimism amongst data-driven leaders and businesses

Nearly 60 percent of survey respondents identified as data-driven, which positively indicates that leaders are prioritising digital acceleration and data transformation regionally. Most (80 percent) of that same group believe that being part of a data-driven organisation puts them at a greater advantage than businesses that aren't data-driven. They also have greater optimism about the future of their business, because analytics are giving them the clarity to handle obstacles while seizing the opportunities in their sights.

Those same organisations expressed multiple advantages gained from using data, including: more effective communication with employees and customers; making strategic decisions more quickly; and increased team collaboration for decision making and problem solving, which is essential when new problems surface weekly, ranging in complexity and significance to the business. Now in a second phase of lockdown across many European countries, we can see how data-driven organisations responded effectively the first time, what they learnt (good and bad), and how they'll apply that as the cycle repeats.

Some organisations benefiting from a data-focused approach, before and during the pandemic, are Huel, a nutritional meal replacement provider based in the UK, and ABN AMRO, one of the world's leading providers of clearing and financing services. A fast-growing start-up, Huel struggled with delayed decision-making because analytics took too long and required too much effort.
By embracing Tableau's interactive, self-service analytics, they're democratising data worldwide and creating a data-driven company culture. "Our data-driven strategy is helping us respond to consumer behaviour—enabling us to pivot and react with greater speed and clarity. It's all about empowering the full organisation through data," said Jay Kotecha, a Huel data scientist.

Speed, high-volume transactions, security, and compliance are challenges that global clearing banks face daily—particularly during the pandemic, as settlement demand grew to three times the daily average due to market volatility. ABN AMRO needed access to accurate data to monitor their settlement process and analyse counterparty risk in real time, and used Tableau analytics to securely explore data and act on insights with speed, agility, and clarity.

Organisations that aren't data-driven at a disadvantage

While the YouGov study revealed favourable perspectives from many European businesses, some haven't fully grasped the value and importance of data. Only 29 percent of respondents who classified themselves as non-data-driven see data as a critical advantage, and 36 percent are confident that decisions are supported by data. Furthermore, 58 percent of the non-data-driven companies found themselves more pessimistic about the future of their business. They enter the future slightly data-blind because they want to reduce or stop investing in data skills, which means their analysts, IT, and employees are less equipped with data-related resources, and their business will likely lag behind competitors who embrace, and therefore thrive with, data.

Key takeaways

Data literacy is a priority for businesses that are data-driven, increasing competitiveness

Even as some respondents recognise data's benefits, nearly 75 percent of the data-driven companies across all four markets still see a need to continue (or increase) spending on data skills training and development in the future. "We started building data skills across the business in 2013, and the pandemic has definitely seen us benefit from these capabilities," explained Dirk Holback, Corporate Senior Vice President and CSCO Laundry and Home Care at Henkel, one of the world's leading chemical and consumer goods companies, based in Düsseldorf, Germany. Also a Tableau customer, Henkel set a strong data foundation before the pandemic hit and was glad they didn't let up on data analytics training. Employees now interpret data and apply it to their business area while juggling dynamic regulations, processes, and supply chains.

Investment in data literacy creates future success, as we've seen with many of our European customers like Henkel, and doesn't necessarily require large, enterprise efforts. Even smaller, incremental projects that foster data skills, knowledge, and analytics passion—like team contests, learning hours, or individual encouragement from a supervisor to participate in a relevant training—can create a foundation that benefits your organisation for years to come.
Benefits gained from a data-literate, data-driven culture can include:

Leaders, business users, and IT who are confidently adapting in real time and planning for an uncertain future
Reduced time to insights
A greater sense of community, enterprise-wide
A more motivated, more efficient workforce
More informed decision-making with a single source of truth
A quicker path to failure...and effective recovery
Everyone speaking the same language with increased data access and transparency
Cross-team collaboration and innovation on behalf of the business and customers
A pivot from data paralysis to business resilience and growth

Agility, swift execution, and better-quality data are mandatory

Across all survey respondents, we found three top-of-mind priority areas resulting from lessons learnt during the pandemic: a need for greater agility with changing demands (30 percent), effectively prioritising and delivering on projects faster (26 percent), and needing more accurate, timely, and clean data (25 percent). We anticipate that in the next 12 months, these areas, amongst others, are where European businesses will focus significant time, attention, and resources. Likewise, they will turn to technology partners who can support this work as they think about and swiftly become digital, data-driven organisations that both survive and thrive in the face of adversity.

The value of data analytics to achieve resilience

Will we experience another six or 12 months of disruption? It’s hard to predict the future, but what we know from observing and listening to customers and prospects, plus talking with tech stakeholders in various industries, is that resilient organisations empower their people with data. This allows them to creatively solve problems, respond to change, and confidently act together. Now, businesses should unite their people with data—to gain a shared understanding of their situation, establish realistic and attainable goals, and celebrate what might be small wins as they build resilience while facing adversity.

Even if your organisation is less data-driven or feels like it doesn’t have the right expertise, you can take cues from others that found agility and resilience with data and analytics. Becoming data-driven is not out of reach; it’s an achievable goal to strive for with the support of easy-to-use, flexible solutions and resources that will help you quickly start and develop the right culture. To ensure your organisation harnesses the power of being data-driven, consult these resources and simple steps to help you get all hands on data.

Learn more about the YouGov research and download the e-book.

Azure Stack and Azure Arc for data services from Blog Posts - SQLServerCentral

Anonymous
18 Nov 2020
6 min read
For those companies that can’t yet move to the cloud, have certain workloads that can’t move to the cloud, or have limited to no internet access, Microsoft has options to build your own private on-prem cloud via Azure Stack and Azure Arc. I’ll focus this blog on using these products to host your databases.

Azure Stack is an extension of Azure that provides a way to run apps and databases in an on-premises environment and deliver Azure services via three options:

Azure Stack Hub: Run your own private, autonomous cloud—connected or disconnected with cloud-native apps using consistent Azure services on-premises. Azure Stack Hub integrated systems come in racks of 4-16 servers built by trusted hardware partners and delivered straight to your datacenter. Azure Stack Hub is built on industry-standard hardware and is managed using the same tools you already use for managing Azure subscriptions. As a result, you can apply consistent DevOps processes whether you’re connected to Azure or not. The Azure Stack Hub architecture lets you provide Azure services for remote locations with intermittent connectivity or disconnected from the internet. You can also create hybrid solutions that process data locally in Azure Stack Hub and then aggregate it in Azure for additional processing and analytics. Finally, because Azure Stack Hub is installed on-premises, you can meet specific regulatory or policy requirements with the flexibility of deploying cloud apps on-premises without changing any code. See Azure Stack Hub overview.

Azure Stack Edge: Get rapid insights with an Azure-managed appliance using compute and hardware-accelerated machine learning at edge locations for your Internet of Things (IoT) and AI workloads. Think of it as a much smaller version of Azure Stack Hub that uses purpose-built hardware-as-a-service such as the Pro GPU, Pro FPGA, Pro R, and Mini R models. The Mini R is designed to work in the harshest environmental conditions, supporting scenarios such as tactical edge, humanitarian, and emergency response efforts. See Azure Stack Edge documentation.

Azure Stack HCI (preview): A hyperconverged infrastructure (HCI) cluster solution that hosts virtualized Windows and Linux workloads and their storage in a hybrid on-premises environment. Think of it as a virtualization fabric for VM or Kubernetes hosting – software only, installed on your own certified hardware. See Azure Stack HCI solution overview.

These Azure Stack options are almost all VMs/IaaS, with no PaaS options for data services such as SQL Database (the only data service available is SQL Server in a VM). It is integrated, certified hardware and software run by Microsoft: just plug in and go. For support, there is “one throat to choke,” as the saying goes. It is a great option if you are disconnected from Azure. It extends Azure management and security to any infrastructure and provides flexibility in deploying applications, making management more consistent (a single view for on-prem, clouds, and edge). It brings the Azure fabric to your own data center but allows you to apply your own security requirements. Microsoft orchestrates the upgrades of hardware, firmware, and software, but you control when those updates happen.

Azure Arc, by contrast, is a software-only solution that can be deployed on any hardware, including Azure Stack, AWS, or your own hardware.
With Azure Arc and Azure Arc-enabled data services (preview) you can deploy Azure SQL Managed Instance (SQL MI) and Azure Database for PostgreSQL Hyperscale to any of these environments; the Arc-enabled data services require Kubernetes. Azure Arc can also manage SQL Server in a VM by simply installing an agent on the SQL Server machine (see Preview of Azure Arc enabled SQL Server is now available). Any of these databases can then be easily moved from your hardware to Azure down the road. Azure Arc allows you to extend Azure management across your environments, adopt cloud practices on-premises, and implement Azure security anywhere you choose. This opens up many options for using Azure Arc on Azure Stack or on other platforms.

Some features of Azure Arc:

It can be used to meet data residency (data sovereignty) requirements.
It is supported in disconnected and intermittently connected scenarios such as air-gapped private data centers, cruise ships that are off the grid for multiple weeks, factory floors that have occasional disconnects due to power outages, etc.
Customers can use Azure Data Studio (instead of the Azure Portal) to manage their data estate when operating in a disconnected or intermittently connected mode.
It could eventually support other products like Azure Synapse Analytics.
You can use larger hardware solutions and more hardware tiers than what is available in Azure, but you have to do your own HA/DR.
You are not charged if you shut down SQL MI, unlike in Azure, because it is your own hardware; in Azure, the hardware is dedicated to you even if you are not using it.
With Arc you are managing the hardware, but with Stack, Microsoft is managing the hardware.
You can use modern cloud billing models on-premises for better cost efficiency.
With Azure Arc enabled SQL Server, you can use the Azure Portal to register and track the inventory of your SQL Server instances across on-premises, edge sites, and multi-cloud in a single view (a query sketch follows this post). You can also take advantage of Azure security services, such as Azure Security Center and Azure Sentinel, as well as use the SQL Assessment service.
Azure Stack Hub provides consistent hardware, but if you use your own hardware you have more flexibility and possibly lower hardware costs.

Slides in the original post cover the major benefits of Azure Arc, what the architecture looks like, the differences when you are connected directly vs. connected indirectly (i.e., an Arc server that is not connected to the internet must coordinate with a server that is connected), and what an Azure Arc data services architecture looks like.

Some of the top use cases we see with customers using Azure Stack and/or Azure Arc:

Cloud-to-cloud failover
On-prem databases with failover to cloud
Easier migration: deploy locally, then flip a switch to go to the cloud

A further slide in the original post provides details on the differences between the SQL database options.

More info:

Understanding Azure Arc Enabled SQL Server
What is Azure Arc Enabled SQL Managed Instance

The post Azure Stack and Azure Arc for data services first appeared on James Serra's Blog. The post Azure Stack and Azure Arc for data services appeared first on SQLServerCentral.
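As a companion to the inventory point above (this is not from the original post): a minimal sketch of how you might pull that "single view" of Arc-enabled SQL resources programmatically with the Azure Resource Graph Python SDK. The resource type strings, the packages used (azure-identity, azure-mgmt-resourcegraph), and the credential setup are assumptions to verify against current Azure documentation, not a definitive implementation.

# Minimal sketch: list Arc-enabled SQL Server instances and Arc-enabled SQL Managed
# Instances via Azure Resource Graph. Assumes `pip install azure-identity
# azure-mgmt-resourcegraph` and that you are already signed in (e.g. `az login`).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resourcegraph import ResourceGraphClient
from azure.mgmt.resourcegraph.models import QueryRequest

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder, not a real value

# Kusto query over Azure Resource Graph. The resource type names below are
# assumptions based on the Microsoft.AzureArcData provider; verify them for
# your environment.
QUERY = """
Resources
| where type in~ ('microsoft.azurearcdata/sqlserverinstances',
                  'microsoft.azurearcdata/sqlmanagedinstances')
| project name, type, resourceGroup, location
"""

def list_arc_sql_inventory() -> None:
    credential = DefaultAzureCredential()  # picks up az login, env vars, or managed identity
    client = ResourceGraphClient(credential)
    response = client.resources(QueryRequest(subscriptions=[SUBSCRIPTION_ID], query=QUERY))
    # Depending on SDK version, response.data is a list of dicts (object array)
    # or a table-like structure; this handles the common object-array case.
    for row in response.data:
        print(f"{row['name']:40} {row['type']:55} {row['location']}")

if __name__ == "__main__":
    list_arc_sql_inventory()

The same Kusto query can also be run interactively in the Azure Portal's Resource Graph Explorer, which may be simpler if you only need an ad-hoc view rather than a scripted report.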


Thoughts on the 2020 PASS Virtual Summit as an Attendee from Blog Posts - SQLServerCentral

Anonymous
18 Nov 2020
11 min read
Last week was the first PASS Virtual Summit. This was the first time that the event wasn’t held in a live setting, and it was the third conference I attended virtually this year. I took some notes during the event, and this is a summary of my impressions. I’ve also submitted this feedback to PASS as an evaluation.

TL;DR: I give the event a C.

Overall

Things went well, and I was pleased with the way that the Summit was handled. Huge kudos to the staff at C&C, who were behind the scenes. I saw a number of their efforts as a speaker, and they managed to run this event well. Thanks to Cadmium and Falcon Events as well for the mostly smooth operation of the event.

I was able to attend sessions, chat, ask questions, and enjoy the week. I didn’t get to a lot of sessions, but since I can watch later, I wasn’t too stressed. I liked the ability to see live sessions together and just pick one to engage in. The moderator and Q&A seemed to work well. The basic event ran well. The bare minimum was there, so I give PASS credit for pulling this off.

Pre-Con

I attended Meagan Longoria’s pre-con on Monday. I enjoyed it, learned a lot, and got some notes out of it. I do wish I had the chance to re-watch some of it this week, but I also understand the speaker perspective here.

There were lots of communications, which was good. One thing I find with many events is that limited communication gets buried in my inbox. It’s nice to get a few reminders the day before and day of to help me remember and figure out how to get into the session. I had to track down credentials, which is fine; I learned to save these separately. I do wish I had a calendar reminder. While I saw 5+ emails that said 11am EST, I kept interpreting that as 11am in my time zone, which is where Meagan lives. As a result, I was late to the event. My fault, but I do think an .ics file might have helped me.

I do wish that this were earlier or later, so that I could actually rewatch some of the pre-con. I had other work to do Tuesday, though I might have done another pre-con in other years, and I had work and some live sessions on Wednesday. If the pre-con were the week before, I would have more time to review or catch something I might have missed. This is still better than a live event, where it’s one shot on one day, but it feels disappointing, as though I’ve lost something.

Grade: A-

Schedule

While I loved the “live session” list, I wasn’t thrilled with the rest of the schedule. I’m often looking to see what’s on now. I had to scroll down to find the current time, or next time, and then if I looked at a session’s details, the page would often scroll back to the top when I closed the popup.

The time zone was also a problem. I deal across time zones constantly, and I’ve learned how to schedule things in different time zones; however, I would get confused at times and forget to subtract 2 hours from EST to get to MST. I do think that not supporting a user time zone (really, a per-user time zone) is a large failure. While it would be good to auto-detect browsers, at least let me set a time zone for display.

Search worked well, but I’m a browser, and it was hard to find things. I also found that scrolling through the list by time was difficult, and viewing the details of a session put me back at the top. I didn’t see many community items, nor did I see the yoga, meditation, and other breaks. Maybe my fault, but not easy to find if it was me.

Grade: C

Opening Night

I went to the opening session, which was the DJ playing tunes.
While I enjoyed the music, watching someone spin records isn’t great. The chat was good for a few people who knew each other, but it flew by on my screen. I do think that some separation of the chat into channels, or some other way to allow interaction here, would be good. I’d also prefer some live (or recorded) item on the main stage, with some conversation, some discussion, or something besides the DJ. The chat was live, but it didn’t work well for me; having hundreds of people in one chat session doesn’t scale or work. I was also a little disappointed that the bartender session was at the end, and not sometime earlier. It would be nice to make a drink early in the session, not later. I don’t know if things changed after 8:30 EST, as I went to a music bubble.

I hosted one bubble (these were music themed), but no music was provided. Since I have a Yeti mic, I played Spotify with my theme in the background as I was chatting with a few people. I thought the small group of 6-7 people chatting was good, but I also think there’s an opacity here. Someone has to join to see who’s in the room and whether there is any conversation. It’s awkward to jump in to see something and have someone then try to engage you when you’re not sure if you want to be there. I wish these were open all week, but with more transparency as to who is in them, and organized around some topic.

Grade: B

Keynotes

I enjoyed a couple keynotes, with a few more on my list. I was surprised at the screen quality and the inability to maximize the live screen so I could actually see it. Two things here:

1. Presenters need to understand that the attendees see a slightly different view and screen. Please make things larger in browsers, zoom in, etc. Understand it’s hard to see. It’s something I need to learn to be better at.
2. The tech platforms need to ensure that we can pause, maybe rewind slightly, move backward, and maximize the screen.

I am also glad I reviewed my sessions before uploading them, as I was really disappointed in the sound quality for Bob and Conor’s demo. Overall, these were what we normally see at a keynote, albeit with some issues with the display.

Grade: B

Sessions

There were a few types here. For the live sessions, I thought these mostly went well, albeit without the ability to easily see the whole screen. I liked being able to ask a question and have a moderator bring it up to the speaker. I do think we might need some practice as speakers with pauses or asking for questions more during a session. That isn’t great for recordings, but it would be nice.

I ran into one recorded session that was a mess (Ray Kim’s blogging session). Since we pay for content for the year, that needs to be re-recorded, along with others that are broken.

I can’t decide what I think about the Q&A and chat. I don’t like them in the browser window, because that takes up space I could use elsewhere. Really, I preferred the Discord server that Jen McCown set up for the event. That was better real-time interaction.

I also didn’t like that pre-recorded sessions didn’t start at their scheduled time. I had to press play, which was odd for me. I brought up a session, left it there, and started to answer some email, expecting it to play. It didn’t, and when I started it 10 minutes late, the chat didn’t make sense. If we’re going to have the session at a time, it ought to just play. I can watch the recording later.

For the recordings, I can’t maximize them. That feels like an ergonomic fail. I also don’t have a tooltip on the reverse/forward buttons.
These skip 10 seconds, which is good, but it would be nice to know that. I also wish the chat or Q&A were available for recordings, especially the latter; I don’t see a way to access them.

Grade: B

Networking

To me, one of the great things at the PASS Summit is being able to see old friends and meet new ones. This is a big part of the reason why I go, take the time out of my schedule, and make a case to my boss. Or, in years past, why I paid to go. This was almost non-existent.

The bubbles on opening night were good. I met 1 new person and saw 5 friends. In one Redgate video chat and one text chat, I responded to 2 friends, but since I didn’t have times in the chat, I didn’t realize I’d missed that person. After my session, I had 1 friend in a video chat. Outside of that, the Community Zone hours never worked for me. One of the great things in Seattle (or any city) is that the community zone is always open, so I can pop by and see a few people. I can walk between sessions with someone for a minute, or stop in the hallway. As far as I could tell, there wasn’t a good way to do this.

On the Discord server, I had a few conversations with people, but it wasn’t great networking. Not something I think is better than catching many of these sessions at a SQL Saturday, UG, or GroupBy. Messaging in the platform was hidden, but more than that, there just wasn’t an easy way to see someone or know if they were around. I actually had more text conversations with friends on my phone or Twitter than on the platform. The Summit only reminded me of a few people. That’s not nothing, but it’s not necessarily great.

Grade: D – this is a big weight in my mind.

Security

There were some big security fails, with passwords, emails, and various other information leaking. That might be fine for some events, but it feels like a problem to me. Disclosure of data isn’t good, and this wasn’t well handled by the platform. Not PASS’ fault, but it is in some sense, as they should be cognizant of this and check for it. I can’t tell how much of an issue this was, and I only saw some complaints, but if data leaks, there could be fines that wipe out the profit. This needs to be taken seriously for next year.

Grade: D

Value

I’m torn here. The content worked, I learned things, and I could ask questions. I still can, I guess, as many speakers have their email on the site, or a link to some social media/blog. That’s a portion of what I get. The pre-con was great, and it’s not really much different than in person. I could still get distracted, need to step out, or lose focus. The networking was subpar. The community activities were mostly non-existent, and I felt less energy on Twitter and other places than I have in years past. Some of that might be me, but this wasn’t great.

Ultimately, I think at $599 it’s not bad for the content and the ability to watch it for a year, except:

Lots of these sessions are being presented at SQL Saturdays, user or virtual groups, and other events. With a lot of other events moving online, is this worth it?
Everything is together, and I can search for something like “polybase”, but is this better than Google? Maybe I can assume these speakers do a better job than random videos on the internet; this is curated, and that helps. However, I’m really not sure.

There is something here, but it’s hard to judge. Before the event, thinking of maybe interacting with people, I thought $200/day for content and networking wasn’t bad. Now, with networking essentially non-existent (1 new person, 10-12 old ones), I’m not sure.
Overall Grade: C – I don’t think this was great, but also not horrible. Mostly, meh.

The post Thoughts on the 2020 PASS Virtual Summit as an Attendee appeared first on SQLServerCentral.