
Tech News


Merry Christmas from Blog Posts - SQLServerCentral

Anonymous
25 Dec 2020
1 min read
Christmas is this week, so this is not a technical post. Just a simple post wishing you and your family as many blessings as possible (especially in the year 2020) and good tidings during this holiday time. I hope that 2020 wasn't too harsh on you or anybody close to you. May the holidays bring you peace and joy! Take care and wear a mask! © 2020, John Morehouse. All rights reserved. The post Merry Christmas first appeared on John Morehouse. The post Merry Christmas appeared first on SQLServerCentral.


Daily Coping 25 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
25 Dec 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.  Today’s tip is to stop for a minute today and smile while you remember a happy moment in 2020. I’ve had more than my share this year, but my happy moment from 2020 that comes to mind is from February. The only airplane trip of the year for me, to celebrate a birthday. Merry Christmas. The post Daily Coping 25 Dec 2020 appeared first on SQLServerCentral.


Retrieving Log Analytics Data with Data Factory from Blog Posts - SQLServerCentral

Anonymous
24 Dec 2020
6 min read
I've been working on a project where I use Azure Data Factory to retrieve data from the Azure Log Analytics API. The query language used by Log Analytics is Kusto Query Language (KQL). If you know T-SQL, a lot of the concepts translate to KQL. Here's an example T-SQL query and what it might look like in KQL.

--T-SQL:
SELECT * FROM dbo.AzureDiagnostics
WHERE TimeGenerated BETWEEN '2020-12-15' AND '2020-12-16'
AND database_name_s = 'mydatabasename'

//KQL:
AzureDiagnostics
| where TimeGenerated between(datetime('2020-12-15') .. datetime('2020-12-16'))
| where database_name_s == 'mydatabasename'

For this project, we have several Azure SQL Databases configured to send logs and metrics to a Log Analytics workspace. You can execute KQL queries against the workspace in the Log Analytics user interface in the Azure Portal, in a notebook in Azure Data Studio, or directly through the API. The resulting format of the data downloaded from the API leaves something to be desired (it's like someone shoved a CSV inside a JSON document), but it's usable after a bit of parsing based upon column position. Just be sure your KQL query explicitly states the columns and their order (this can be done using the project operator).

You can use an Azure Data Factory copy activity to retrieve the results of a KQL query and land them in an Azure Storage account. You must first execute a web activity to get a bearer token, which gives you the authorization to execute the query.

(Image: Data Factory pipeline that retrieves data from the Log Analytics API.)

I had to create an app registration in Azure Active Directory for the web activity to get the bearer token. The web activity should perform a POST to the following URL (with your domain populated and without the quotes): "https://login.microsoftonline.com/[your domain]/oauth2/token". Make sure you have added the appropriate header of Content-Type: application/x-www-form-urlencoded. The body should contain your service principal information and identify the resource as "resource=https://api.loganalytics.io". For more information about this step, see the API documentation.

Data Factory Copy Activity

The source of the copy activity uses the REST connector. The base URL is set to "https://api.loganalytics.io/v1/workspaces/[workspace ID]/" (with your workspace ID populated and without the quotes). Authentication is set to Anonymous. Below is my source dataset for the copy activity. Notice that the relative URL is set to "query".

(Image: ADF dataset referencing a REST linked service pointing to the Log Analytics API.)

The Source properties of the copy activity should reference this REST dataset. The request method should be POST, and the KQL query should be placed in the request body (more on this below). Two additional headers need to be added in the Source properties.

(Image: Additional headers in the Source properties of the ADF copy activity.)

The Authorization header should pass a string formatted as "Bearer [Auth Token]" (with a space between the string "Bearer" and the token). The example above retrieves the token from the web activity that executes before the copy activity in the pipeline. Make sure you are securing your inputs and outputs so your secrets and tokens are not being logged in Data Factory. This option is currently found on the General properties of each activity.

Embedding a KQL Query in the Copy Activity

You must pass the KQL query to the API as a JSON string. But this string is already inside the JSON created by Data Factory. Data Factory is a bit picky in how you enter the query.
Here is an example of how to populate the request body in the copy activity.

{
"query": "AzureDiagnostics | where TimeGenerated between(datetime('2020-12-15') .. datetime('2020-12-16')) | where database_name_s == 'mydatabasename'"
}

Note that the curly braces are on separate lines, but the query must be on one line. So where I had my query spread across 3 lines in the Log Analytics user interface as shown at the beginning of this post, I have to delete the line breaks for the query to work in Data Factory.

The other thing to note is that I am using single quotes to contain string literals. KQL supports either single or double quotes to encode string literals. But using double quotes in your KQL and then putting that inside the double quotes in the request body in ADF leads to errors and frustration (ask me how I know). So make it easy on yourself and use single quotes for any string literals in your KQL query.

In my project, we were looping through multiple databases for customized time frames, so my request body is dynamically populated. Below is a request body similar to what I use for my copy activity that retrieves Azure Metrics such as CPU percent and data storage percent. The values come from a lookup activity. In this case, the SQL stored procedure that is executed by the lookup puts the single quotes around the database name so it is returned as 'mydatabasename'.

{
"query": "AzureMetrics | where TimeGenerated between (datetime(@{item().TimeStart}) .. datetime(@{item().TimeEnd})) | where Resource == @{item().DatabaseName} | project SourceSystem, TimeGenerated, Resource, ResourceGroup, ResourceProvider, SubscriptionId, MetricName, Total, Count, Maximum, Minimum, TimeGrain, UnitName, Type, ResourceId"
}

With dynamically populated queries like the above, string interpolation is your friend. Paul Andrew's post on variable string interpolation in a REST API body helped me understand this and get my API request to produce the required results.

You can do similar things with Data Factory to query the Application Insights API. In fact, this blog post on the subject helped me figure out how to get the Log Analytics data I needed.

Be Aware of API Limits

There are limits to the frequency and amount of data you can pull from the Log Analytics API. As noted in the API documentation:

Queries cannot return more than 500,000 rows
Queries cannot return more than 64,000,000 bytes (~61 MiB total data)
Queries cannot run longer than 10 minutes (3 minutes by default)

If there is a risk that you may hit the limit on rows or bytes, you need to be aware that the Log Analytics API does not return an error in this case. It will return the results up to the limit and then note the "partial query failure" in the result set. As far as I can tell, there is no option for pagination, so you will need to adjust your query to keep it under the limits. My current process uses a Get Metadata activity after the copy activity to check file sizes for anything close to the limit and then breaks that query into smaller chunks and re-executes it.

It's All in the Details

I had a lot of trial and error as I worked my way through populating the request body in the API call and dealing with API limits. I hope this helps you avoid some of the pitfalls. The post Retrieving Log Analytics Data with Data Factory appeared first on SQLServerCentral.
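As a supplement to the walkthrough above, here is a minimal Python sketch of the same two-step flow the pipeline performs: POST to the Azure AD token endpoint, then POST the KQL query to the workspace query endpoint. The tenant, client ID, secret, and workspace ID values are placeholders for your own app registration and workspace, and the result parsing assumes the tables/columns/rows shape the post describes as "a CSV inside a JSON document".

import requests

TENANT = "your-tenant.onmicrosoft.com"        # placeholder Azure AD domain
CLIENT_ID = "your-app-registration-client-id" # placeholder service principal
CLIENT_SECRET = "your-app-registration-secret"
WORKSPACE_ID = "your-log-analytics-workspace-id"

# Step 1: get a bearer token (the equivalent of the ADF web activity).
# requests sends this as Content-Type: application/x-www-form-urlencoded when data= is used.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": "https://api.loganalytics.io",
    },
)
token_resp.raise_for_status()
token = token_resp.json()["access_token"]

# Step 2: POST the KQL query (the equivalent of the ADF copy activity source).
kql = ("AzureDiagnostics "
       "| where TimeGenerated between(datetime('2020-12-15') .. datetime('2020-12-16')) "
       "| where database_name_s == 'mydatabasename' "
       "| project TimeGenerated, database_name_s")
query_resp = requests.post(
    f"https://api.loganalytics.io/v1/workspaces/{WORKSPACE_ID}/query",
    headers={"Authorization": f"Bearer {token}"},
    json={"query": kql},
)
query_resp.raise_for_status()

# The response contains tables, each with a column list and positional rows.
for table in query_resp.json()["tables"]:
    print([col["name"] for col in table["columns"]])
    for row in table["rows"]:
        print(row)

This is only a sketch for testing the API outside of Data Factory; in the pipeline itself the token retrieval and query POST are handled by the web activity and copy activity described above.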


Daily Coping 24 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
24 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag. Today's tip is to give away something you have been holding on to. I have made more donations this year than in the past. Partially I think this is because life slowed down and I had time to clean out some spaces. However, I have more to do, and when I saw this item, I decided to do something new. I'm a big supporter of Habitat for Humanity. During my first sabbatical, I volunteered there quite a bit, and I've continued to do that periodically since. I believe shelter is an important resource most people need. I've had some tools at the house that I've held onto, thinking they would be good spares. I have a few cordless items, but I have an older miter saw and a table saw that work fine. Habitat doesn't take these, but I donated them to another local charity that can make use of them. I'm hoping someone will use them to improve their lives, either building something or maybe using them in their work. The post Daily Coping 24 Dec 2020 appeared first on SQLServerCentral.


2020 #VizInReview: The year in Viz of the Days from What's New

Anonymous
23 Dec 2020
6 min read
Team Tableau Public | December 23, 2020

Let's be real, 2020 has been one incredibly wild ride that no one expected. Despite it all, one thing remained steadfast: the Tableau Public global community of data enthusiasts' commitment to bringing impactful (and often mind-blowing) data insights to life. To mark the end of 2020, we're taking a look back at some of the most amazing visualizations created by the #DataFam this year.

We looked back at highlights from this year's featured visualizations. Our "Viz of the Day" gallery represents the many ways our community uses Tableau Public to visualize the data topics they're most passionate about. Each day, the Tableau Public team selects and features a "Viz of the Day" (VOTD) based on a variety of criteria. A viz might tell a clear and compelling story. Perhaps it is visually stunning or includes an innovative chart type. Or, the viz might result from one of the community's social data projects or competitions. Whatever the reason, each featured viz shares a common trait—demonstrating the realm of possibility when using data to express oneself.

There were over 200 visualizations featured as "Viz of the Day" in 2020. The Tableau Public team reviewed each one and hand-picked our favorite from each month. We've strived to highlight a diversity of visualizations with different chart types on a wide range of topics from authors across the globe. Read about each one, then click on the thumbnail to see each creation in its full glory. See a viz that you love? Don't forget to let that author know by logging in and "favoriting" it.

JANUARY
The 2019 Global Multidimensional Poverty Index by Lali Jularbal
The Multidimensional Poverty Index (MPI) takes an in-depth look at how people experience three dimensions of poverty—Health, Education, and Living Standards—in 101 developing countries. Lali Jularbal visualizes the developing countries' rankings by MPI, Intensity, Headcount, and poverty dimensions. Favorite this viz

FEBRUARY
Racial Integration in U.S. Schools by Candra McRae
Desegregation orders were implemented by the Supreme Court to help eliminate segregation in schools across the United States. However, according to a recent Gallup Poll, 57% of U.S. adults believe school segregation is still a moderate or severe problem. In this visualization, Candra McRae looks at the history of racial integration in U.S. schools and explores ideas that could help reduce segregation. Favorite this viz

MARCH
Popular Pizza Toppings by Amy Tran
Whether or not pineapple belongs on a pizza was arguably one of the most controversial debate topics in 2020. Dig into this #MakeoverMonday visualization by Amy Tran to learn about the most popular pizza toppings in Britain. Did your favorite topping make the list? Favorite this viz

APRIL
The World's Dependence on the Travel Industry by Chantilly Jaggernauth
The travel and tourism industry accounted for more than 10% of the world's Gross Domestic Product (GDP) in 2019. Explore this visualization by Chantilly Jaggernauth to see the amount of GDP generated by travel and tourism, including hotels, airlines, travel agencies, and more, in various countries across the globe. Favorite this viz

MAY
Teladoc Health, Inc. by Praveen P Jose
Many countries around the world are still struggling to control the spread of coronavirus (COVID-19). As a result, telemedicine has become more popular than ever before. In this visualization, Praveen P Jose looks at the stock price of leading telemedicine provider Teladoc over the last five years. Favorite this viz

JUNE
Exonerations in America by JR Copreros
Over 2,500 wrongful convictions have been reversed in the U.S. since 1989. Using data from the National Registry of Exonerations, JR Copreros visualizes exonerations by race, state, type of crime, and more, revealing systemic flaws in the criminal justice system. Favorite this viz

JULY
Economic Empowerment of Women by Yobanny Samano
According to the World Bank, the Women, Business and the Law (WBL) Index, composed of eight indicators, "tracks how the law affects women at various stages in their lives, from the basics of transportation to the challenges of starting a job and getting a pension." In this #MakeoverMonday visualization, Yobanny Samano looks at the WBL index scores for 190 countries. Favorite this viz

AUGUST
Constellations Born of Mythology by Satoshi Ganeko
How did constellations get their names? Many of them are named after figures in Greek and Roman mythology. Brush up on your stargazing skills and explore this #IronQuest visualization by Satoshi Ganeko to learn about each one. Favorite this viz

SEPTEMBER
The Day Lebanon Changed by Soha Elghany and Fred Najjar
On August 4, 2020, a large amount of ammonium nitrate stored at the port city of Beirut exploded, killing over 200 people and causing billions of dollars in damage. Soha Elghany and Fred Najjar collaborated to create this visualization, which shows the impact of one of the most powerful non-nuclear explosions in history. Favorite this viz

OCTOBER
The Air We Breathe by Christian Felix
According to the latest data from the World Health Organization (WHO), 97% of cities in low- and middle-income countries with more than 100,000 inhabitants do not meet WHO air quality guidelines. In this visualization, #IronViz Champion Christian Felix explores the correlation between breathing air inequality and wealth inequality. Favorite this viz

NOVEMBER
The Most Popular Dog Breeds by Anjushree B V
In 2019, the Pembroke Welsh Corgi made it onto the Top 10 Most Popular Dog Breeds list for the first time. Check out this visualization by Anjushree B V to learn how each dog breed's popularity has changed over time. Favorite this viz

DECEMBER
Giant Pandas Overseas by Wendy Shijia
Did you know that China rents out its pandas? Today, over 60 giant pandas, native to south-central China, can be found worldwide. Dive into this visualization by Wendy Shijia to learn when each panda living abroad will be returned to its home country. Favorite this viz

And that's a wrap! Cheers to an incredible year, made possible by Tableau Public users like you. Be sure to subscribe to "Viz of the Day" to get more visualizations like these—another year's worth of awe-inspiring community-created data inspiration awaits.

Craving more viz-spiration? Check out these resources commemorating Tableau Public's 10th anniversary: Ten most-favorited vizzes to celebrate ten viz-tastic years of Tableau Public; Ten years later—What Tableau Public means to our community and the world; If Data Could Talk: A walk down memory lane with Tableau Public.


DirectQuery for Power BI datasets and Azure Analysis Services (preview) from Blog Posts - SQLServerCentral

Anonymous
23 Dec 2020
6 min read
Announced last week is a major new feature for Power BI: you can now use DirectQuery to connect to Azure Analysis Services or Power BI datasets and combine them with other DirectQuery datasets and/or imported datasets. This is a HUGE improvement that has the Power BI community buzzing! Think of it as the next generation of composite models. Note this requires the December version of Power BI Desktop, and you must go to Options -> Preview features and select "DirectQuery for Power BI datasets and Analysis Services".

You begin with a Power BI dataset that has been published to the Power BI service. In Power BI Desktop you connect to the dataset, where you can build a new model over the original one. You can extend the original model by adding tables, columns, and measures, and you can also connect to other datasets and combine them into a single semantic model. While doing this you do not lose any elements from the original model – measures and relationships continue to work. You do not have to worry about anything but the additional data you want to integrate. When the original model is refreshed, your local model also sees any updated information. You work as if you have a local copy of the model and full rights to modify and expand it, even though you are not duplicating any data already stored on the server.

This feature is ideal for report authors who want to combine the data from their enterprise semantic model with other data they may own, like an Excel spreadsheet, or who want to personalize or enrich the metadata from their enterprise semantic model. This seals the marriage between self-service and corporate BI.

The main technology that makes this work is DirectQuery storage mode. DirectQuery will allow composite models to work with live connected sources and other data sources like Excel or SQL Server.

Using DirectQuery for Power BI datasets and Azure Analysis Services requires that your report has a local model. You can start from a live connection to an existing dataset and upgrade to a local model, or start with a DirectQuery connection or imported data, which will automatically create a local model in your report.

A live connection is basically a connection to the remote model (the model is not inside of Power BI Desktop). Converting the remote model to DirectQuery gives you a local model in Power BI Desktop. So if you want to make changes to your live connection, you will first convert it to a DirectQuery connection. If you don't need to make changes to the remote model and just want to combine it with other models, you can keep the remote model as a live connection.

Keep in mind that when you publish a report with a local model to the service, a dataset for that local model will be published as well. This is the same behavior as when you publish a report with imported data to the service.

When connecting to a remote model, the data for that model is kept in the cloud, and you can join it with another local model and its data to create new columns, new measures, and new tables without ever moving the data from the remote model to your local PC. It's an easy way to extend that remote model, which could be managed by IT and refreshed all the time. You are just responsible for managing and refreshing your local model and data. This is how you combine the efforts of enterprise IT and end users.

With this new feature I can see many companies creating enterprise semantic models in Azure Analysis Services or in a premium instance of Power BI. These semantic models can hold the entire business logic of your company, built on the huge amount of data from your data warehouse. Then users can use DirectQuery against those models and extend them locally with their own calculations, without having to download any data from the semantic models to Power BI Desktop. This is definitely a game changer.

This new feature also allows you to do dataset chaining. Chaining allows you to publish a report and a dataset that is based on other Power BI datasets, which was previously not possible. Together, datasets and the datasets and models they are based on form a chain. For example, imagine your colleague publishes a Power BI dataset called Sales and Budget that is based on an Azure Analysis Services model called Sales, and combines it with an Excel sheet called Budget. If you create and publish a new report and dataset called Sales and Budget Europe that is based on the Sales and Budget Power BI dataset published by your colleague, making some further modifications or extensions, you are effectively adding a report and dataset to a chain of length three (the max supported), which started with the Sales Azure Analysis Services model and ends with your Sales and Budget Europe Power BI dataset.

This opens a whole new world of possibilities. Before this feature was available, to modify a dataset you would have to get a copy of the pbix file with the dataset and make your own pbix copy, which would also include the data. Also, you were not able to chain models together or to combine datasets (i.e. make models of models). This is quite an improvement!

Share your feedback on this new feature at this Power BI Community forum post.

More info:
New composite models in Power BI: A milestone in Business Intelligence
Power BI Direct Query Composite Models = Amazing
Composite Models Gen 2 and DirectQuery over Power BI Datasets
Power BI Composite Models using Analysis Services - Direct Query Mode
Composite models over Power BI datasets and Azure Analysis Services
"Composite" arrived – CAUTION!
Prologika Newsletter Winter 2020

The post DirectQuery for Power BI datasets and Azure Analysis Services (preview) first appeared on James Serra's Blog. The post DirectQuery for Power BI datasets and Azure Analysis Services (preview) appeared first on SQLServerCentral.

Basic Cursors in T-SQL–#SQLNewBlogger from Blog Posts - SQLServerCentral

Anonymous
23 Dec 2020
4 min read
Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

Cursors are not efficient, and not recommended for use in SQL Server/T-SQL. This is different from other platforms, so be sure you know how things work. There are places where cursors are useful, especially in one-off type situations. I recently had a situation and typed "CREATE CURSOR", which resulted in an error. This isn't valid syntax, so I decided to write a quick post to remind myself what is valid.

The Basic Syntax

Instead of CREATE, a cursor uses DECLARE. The structure is unlike other DDL statements, which follow the pattern action, type, name, as in CREATE TABLE dbo.MyTable. Instead we have this:

DECLARE cursorname CURSOR

as in

DECLARE myCursor CURSOR

There is more that is needed here. This is just the opening. The rest of the structure is:

DECLARE cursorname CURSOR [options] FOR select_statement

You can see this in the docs, but essentially what we are doing is loading the result of a select statement into an object that we can then process row by row. We give the object a name and structure this with DECLARE CURSOR FOR. I was recently working on the Advent of Code, and Day 4 asks for some processing across rows. As a result, I decided to try a cursor like this:

DECLARE pcurs CURSOR FOR SELECT lineval FROM day4 ORDER BY linekey;

The next steps are to process the data in the cursor. We do this by fetching data from the cursor as required. I'll build up the structure here, starting with some housekeeping. In order to use the cursor, we need to open it. It's good practice to then deallocate the object at the end, so let's set up this code:

DECLARE pcurs CURSOR FOR SELECT lineval FROM day4 ORDER BY linekey;
OPEN pcurs
...
DEALLOCATE pcurs

This gets us a clean structure if the code is re-run multiple times. Now, after the cursor is open, we fetch data from the cursor. Each column in the SELECT statement can be fetched from the cursor into a variable. Therefore, we also need to declare a variable.

DECLARE pcurs CURSOR FOR SELECT lineval FROM day4 ORDER BY linekey;
OPEN pcurs
DECLARE @val varchar(1000);
FETCH NEXT FROM pcurs INTO @val
...
DEALLOCATE pcurs

Usually we want to process all rows, so we loop through them. I'll add a WHILE loop and use the @@FETCH_STATUS variable. If this is 0, there are still rows in the cursor. If I hit the end of the cursor, a -1 is returned.

DECLARE pcurs CURSOR FOR SELECT lineval FROM day4 ORDER BY linekey;
OPEN pcurs
DECLARE @val varchar(1000);
FETCH NEXT FROM pcurs INTO @val
WHILE @@FETCH_STATUS = 0
BEGIN
    -- process the value, change it, anything I want to do in T-SQL
    ...
    FETCH NEXT FROM pcurs INTO @val
END
DEALLOCATE pcurs

Where the ellipsis is, I can do other work: process the value, change it, anything I want to do in T-SQL. I do need to remember to get the next row in the loop.

As I mentioned, cursors aren't efficient and you should avoid them, but there are times when row processing is needed, and a cursor is a good solution to understand.

SQLNewBlogger

As soon as I realized my mistake in setting up the cursor, I knew some of my knowledge had deteriorated. I decided to take a few minutes and describe cursors and document the syntax, mostly for myself. However, this is a way to show what you know, even about a feature that might not be used often. You could write a post on replacing a cursor with a set-based solution, or even show where performance is poor from a cursor. The post Basic Cursors in T-SQL–#SQLNewBlogger appeared first on SQLServerCentral.


Goodbye PASS from Blog Posts - SQLServerCentral

Anonymous
23 Dec 2020
1 min read
"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of light, it was the season of darkness, it was the spring of hope, it…" Continue reading Goodbye PASS. The post Goodbye PASS appeared first on Born SQL. The post Goodbye PASS appeared first on SQLServerCentral.


Daily Coping 23 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
23 Dec 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag. Today's tip is to practice gratitude. List the kind things others have done for you. I've thanked people for their help, which is hard for me. I've learned to be gracious and accepting of help, but I don't really like it. I try to do most things for myself in the world. However, I was injured recently, and while I could get around, it was painful. My wife, daughter, and even the kids I coach noticed and helped me out in a few ways: bringing me breakfast in bed, getting my computer for me from another room, carrying my backpack, going to retrieve my phone from another room, and bringing equipment into the gym and taking it out. I thanked everyone and managed to appreciate their efforts without feeling sorry for myself. I haven't always been able to do that. The post Daily Coping 23 Dec 2020 appeared first on SQLServerCentral.


Six topics on IT's mind for scaling analytics next year from What's New

Anonymous
22 Dec 2020
5 min read
Brian Matsubara, RVP of Global Technology Alliances | December 22, 2020

We recently wrapped up participation in the all-virtual AWS re:Invent 2020, where we shared our experiences from scaling Tableau Public ten-fold this year. What an informative few weeks! It wasn't surprising that the theme of scalability was mentioned throughout many sessions; as IT leaders and professionals, you're working hard to support remote workforces and evolving business needs in our current situation. This includes offering broader access to data and analytics and embracing the cloud to better adapt, innovate, and grow more resilient while facing the unexpected. As you welcome more individuals to the promising world of modern BI, you must ensure systems and processes are equipped to support higher demand, and empower everyone in the organization to make the most of your data and analytics investments. Let's take a closer look at what's top of mind for IT to best enable the business while scaling your analytics program.

Supporting your data infrastructure

Many organizations say remote work is here to stay, while new data and analytics use cases are constantly emerging to address the massive amounts of data that organizations collect. IT must enable an elastic environment where it's easier, faster, more reliable, and more secure to ingest, store, analyze, and share data among a dispersed workforce.

1. Deploying flexible infrastructure

With benefits including greater flexibility and more predictable operating expenses, cloud-based infrastructure can help you get analytics pipelines up and running fast. And attractive, on-demand pricing makes it easier to scale resources up and down, supporting growing needs. If you're considering moving your organization's on-premises analytics to the cloud, you can accelerate your migration and time to value by leveraging the resources and expertise of a strategic partner. Hear from Experian, who is deploying and scaling its analytics in the cloud and recently benefited from this infrastructure. Experian turned to Tableau and AWS for support powering its new Experian Safeguard dashboard, a free analytics tool that helps public organizations use data to pinpoint and protect vulnerable communities. Accessibility and scalability of the dashboard resulted in faster time to market and adoption by nearly 70 local authorities, emergency services, and charities now using "data for good."

2. Optimizing costs

According to IDC research, analytics spend in the cloud is growing eight times faster than other deployment types. You've probably purchased a data warehouse to meet the highest demand timeframes of the organization, but don't need the 24/7 support that can result in unused capacity and wasted dollars. Monitor cloud costs and use patterns to make better operating, governance, and risk management decisions around your cloud deployment as it grows, and to protect your investment—especially when leadership is looking for every chance to maximize resources and keep spending streamlined.

Supporting your people

Since IT's responsibilities are more and more aligned with business objectives—like revenue growth, customer retention, and even developing new business models—it's critical to measure success beyond deploying modern BI technology. It's equally important to empower the business to adopt and use analytics to discover opportunities, create efficiencies, and drive change.

3. Onboarding and license management

As your analytics deployment grows, it's not scalable to have individuals submit one-off requests for software licenses that you then have to manually assign, configure, and track. You can take advantage of the groups you've already established in your identity and access management solution to automate the licensing process for your analytics program. This can also reduce unused licenses, helping lines of business to save a little extra budget.

4. Ensuring responsible use

Another big concern as analytics programs grow is maintaining data security and governance in a self-service model. Fortunately, you can address this while streamlining user onboarding even further by automatically configuring user permissions based on their group memberships. Coupled with well-structured analytics content, you'll not only reduce IT administrative work, but you'll help people get faster, secure access to trusted data that matters most to their jobs.

5. Enabling access from anywhere

When your organization is increasingly relying on data to make decisions, 24/7 support and access to customized analytics is business-critical. With secure, mobile access to analytics and an at-a-glance view of important KPIs, your users can keep a pulse on their business no matter where they are.

6. Growing data literacy

When everyone in the organization is equipped and encouraged to explore, understand, and communicate with data, you'll see amazing impact from more informed decision-making. But foundational data skills are necessary to get people engaged and using data and analytics properly. Customers have shown us creative and fun ways that IT helps build data literacy, from formal training to community-building activities. For example, St. Mary's Bank holds regular Tableau office hours, is investing more time and energy in trainings, and has games that test employees on their Tableau knowledge.

Want to learn more?

If you missed AWS re:Invent 2020, you're not out of luck! You can still register and watch on-demand content, including our own discussion of scaling Tableau Public tenfold to support customers and their growing needs for sharing COVID-19 data (featuring SVP of Product Development, Ellie Fields, and Director of Software Engineering, Jared Scott). You'll learn about how we reacted to customer demands—especially from governments reporting localized data to keep constituents safe and informed during the pandemic—including shifts from on-premises to the cloud, hosting vizzes that could handle thousands, even millions, of daily hits.

Data-driven transformation is an ongoing journey. Today, the organizations that are successfully navigating uncertainty are those leaning into data and analytics to solve challenges and innovate together. No matter where you are—evaluating, deploying, or scaling—the benefits of the cloud and modern BI are available to you. You can start by learning more about how we partner with AWS.

Build custom maps the easy way with multiple map layers in Tableau from What's New

Anonymous
22 Dec 2020
5 min read
Ashwin Kumar, Senior Product Manager | December 22, 2020

The Tableau 2020.4 release comes fully loaded with tons of great features, including several key updates to boost your geospatial analysis. In particular, the new multiple marks layers feature lets you add an unlimited number of layers to the map. This means you can visualize multiple sets of location data in context of one another, and there's no need for external tools to build custom background maps.

Drag and drop map layers—yes, it's just that easy

Spend less time preparing spatial datasets and more time analyzing your data with drag-and-drop map layers across Tableau Online, Server, and Desktop in Tableau 2020.4. Getting started is easy! Once you've connected to a data source that contains location data and created a map, simply drag any geographic field onto the Add a Marks Layers drop target, and Tableau will instantly draw the new layer of marks on the map. For each layer that you create, Tableau provides a new marks card, so you can encode each layer's data by size, shape, and color. What's more, you can even control the formatting of each layer independently, giving you maximum flexibility in controlling the appearance of your map.

But that's not all. While allowing you to draw an unlimited number of customized map layers is a powerful capability in its own right, the multiple map layers feature in Tableau gives you even more tools that you can use to supercharge your analytics. First up: the ability to toggle the visibility of each layer. With this feature, you can decide to show or hide each layer at will, allowing you to visualize only the relevant layers for the question at hand. You can use this feature by hovering over each layer's name in the marks card, revealing the interactive eye icon.

Sometimes, you may want only some of your layers to be interactive, and the remaining layers to simply be part of the background. Luckily, the multiple map layers feature allows you to have exactly this type of control. Hovering over each layer's name in the marks card reveals a dropdown arrow. Clicking on this arrow, you can select the first option in the context menu: Disable Selection. With this option, you can customize the end-user experience, ensuring that background contextual layers do not produce tooltips or other interactive elements when not required.

Finally, you also have fine-grained control over the drawing order, or z-order, of layers on your map. With this capability, you can ensure that background layers that may obscure other map features are drawn on the bottom. To adjust the z-order of layers on the map, you can either drag to reorder your layers in the marks card, or you can use the Move Up and Move Down options in each layer's dropdown context menu.

Drawing an unlimited number of map layers is critical to helping you build authoritative, context-appropriate maps for your organization. This is helpful for a wide variety of use cases across industries and businesses. Check out some more examples below:

A national coffee chain might want to visualize stores, competitor locations, and win/loss metrics by sales area to understand competitive pressures.
In the oil and gas industry, visualizing drilling rigs, block leases, and nautical boundaries could help devise exploration and investment strategies.
A disaster relief NGO may decide to map out hurricane paths, at-risk hospitals, and first-responder bases to deploy rescue teams to those in need.

Essentially, you can use this feature to build rich context into your maps and support easy analysis and exploration for any scenario!

Plus, spatial updates across the stack: Tableau Prep, Amazon Redshift, and offline maps

The 2020.4 release will also include other maps features to help you take location intelligence to the next level. In this release, we're including support for spatial data in Tableau Prep, so you can clean and transform your location data without having to use a third-party tool. We're also including support for spatial data from Amazon Redshift databases, and offline maps for Tableau Server, so you can use Tableau maps in any environment and connect to your location data directly from more data sources.

Want to know what else we released with Tableau 2020.4? Learn about Tableau Prep in the browser, web authoring and predictive modeling enhancements, and more in our launch announcement.

We'd love your feedback

Can you think of additional features you need to take your mapping in Tableau to greater heights? We would love to hear from you! Submit your request on the Tableau Ideas Forum today. Every idea is considered by our Product Management team and we value your input in making decisions about what to build next. Want to get a sneak peek at the latest and greatest in Tableau? Visit our Coming Soon page to learn more about what we're working on next. Happy mapping!


Daily Coping 22 Dec 2020 from Blog Posts - SQLServerCentral

Pushkar Sharma
22 Dec 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag. Today's tip is to share a happy memory or inspiring thought with a loved one. Not sure I need to explain, but I did show my kids this one from a celebration. The post Daily Coping 22 Dec 2020 appeared first on SQLServerCentral.


Top 3 Upgrade & Migration Mistakes from Blog Posts - SQLServerCentral

Anonymous
22 Dec 2020
1 min read
This is a guest post by another friend of Dallas DBAs – Brendan Mason (L|T). Upgrading and migrating databases can be a daunting exercise if it's not something you practice regularly. As a result, upgrades routinely get dismissed as "not necessary right now" or otherwise put off until there is not really another option. Maybe… The post Top 3 Upgrade & Migration Mistakes appeared first on DallasDBAs.com. The post Top 3 Upgrade & Migration Mistakes appeared first on SQLServerCentral.

End of an Era – SQL PASS and Lessons learned from Blog Posts - SQLServerCentral

Anonymous
22 Dec 2020
7 min read
Most of my blog is filled with posts related to PASS in some way: events, various volunteering opportunities, keynote blogging, this or that. With the demise of the organization, I wanted to write one final post but wondered what it could be. I could write about what I think caused it to go down, but that horse has been flogged to death and continues to be. I could write about my opinion on how the last stages were handled, but that again is similar. I finally decided I would write about the lessons I've learned in my 22-year association with them. This is necessary for me to move on and may be worth reading for those who think similarly.

There is the common line that PASS is not the #sqlfamily, and that line is currently true. But back in those days, it was. At least it was our introduction to the community commonly known as #sqlfamily. So many lessons here are in fact lessons in dealing with and living with community issues.

Lesson #1: Networking is important. It seems odd and obvious to say it, but it needs to be said. When I was new to PASS I stuck to tech sessions and headed right back to my room when I was done. I was, and I am, every bit the introverted geek who liked her own company better than anyone else's, and kept to it. That didn't get me very far. I used to frequent the Barnes and Noble behind the Washington convention center in the evenings, to get the 'people buzz' out of me – it was here that I met Andy Warren, one of my earliest mentors in the community. Andy explained to me the gains of networking and also introduced a new term, 'functional extrovert', to me. That is, grow an aspect of my personality that may not be natural but is needed for functional reasons. I worked harder on networking after that, learned to introduce myself to new people, and hung out at as many parties and gatherings as I could. It paid off a lot more than tech learning did.

Lesson #2: Stay out of crowds and away from people you don't belong with. This comes closely on the lines of #1 and may even be a bit of a paradox. But it is true, especially for minorities and sensitive people. There are people we belong with and people we don't. Networking and attempting to be an extrovert does not mean you sell your self-respect and try to fit in everywhere. If people pointedly exclude you in conversations, or are disrespectful or standoffish, you don't belong there. Generally, immigrants have to try harder than others to explain themselves and fit in – so this is something that needs to be said for us. Give it a shot, and if your gut tells you you don't belong, leave.

Lesson #3: You will be judged and labelled, no matter what. I was one of those people who wanted to stay out of any kind of labelling – to just be thought of as a good person who was fair and helpful. But it wasn't as easy as I thought. Over time, factions and groups started to develop in the community. Part of it was fed by politics created by decisions PASS made – quite a lot of it was personal rivalry and jealousy between highly successful people. I formed some opinions based on the information I had (which I would learn later was incomplete and inaccurate), but my opinions cost me some relationships and gave me some labelling. Although this happened about a decade ago, the labels and sourness in some of those relationships persist. Minorities get judged and labelled a lot quicker than others in general, and I was no exception to that. Looking back, I realize that it is not possible to be a friend to everyone, no matter how hard we try. Whatever has happened has happened; we have to learn to move on.

Lesson #4: Few people have the full story – so try to hold off on opinions when there is a controversy. There are backdoor conversations everywhere – but this community has a very high volume of that going on. Very few people have the complete story in the face of a controversy. But we are all human; when everyone is sharing opinions, we feel pushed to share ours too. A lot of times these can be costly in terms of relationships. I have been shocked, many times, at how poorly informed I was when I formed my opinion and later learned the truth of the whole story. I think some of this was fuelled by the highly NDA-ridden PASS culture, but I don't think PASS going away is going to change it. Cliques and backdoor conversations are going to continue to exist. It is best for us to avoid sharing any opinions unless we are completely sure we know the entire story behind anything.

Lesson #5: Volunteering comes with power struggles. I was among the naive who always thought of every fellow volunteer as just a volunteer. It is not that simple. There are hierarchies and people wanting to control each other everywhere. There are many people willing to do the grunt work and expect nothing more, but many others who want to constantly be right, push others around, and have it their way. Recognizing that such people exist and, if possible, staying out of their way is a good idea. Some people also function better if given high-level roles rather than grunt work – so recognizing a person's skills while assigning volunteer tasks is also a good idea.

Lesson #6: Pay attention to burnouts. There is a line of thought that volunteers have no right to expect anything, including thanks or gratitude. As someone who did this for a long time and burned out seriously, I disagree. I am not advocating selfishness or manipulative ways of volunteering, but it is important to pay attention to what we are getting out of what we are doing. Feeling unthanked and going on for a long time with an empty, meaningless feeling in our hearts can add up to health issues, physical and mental. I believe PASS did not do enough to thank volunteers, and I have spoken up many times in this regard. I personally am not a victim of that, especially after the PASSion award. But I have felt that way before it, and I know a lot of people felt that way too. Avoid getting too deep into a potential burnout; it is hard to get out of. And express gratitude and thanks wherever and whenever possible to fellow volunteers. They deserve it and need it.

Lesson #7: There is more to it than speaking and organizing events. These are the two best-known avenues for volunteering, but there are many more. Blogging on other people's events, doing podcasts, promoting diversity, contributing to open source efforts like DataSaturdays.com – all of these are volunteering efforts. Make a list and contribute wherever and whenever possible. PASS gave people like me who are not big-name speakers many of those opportunities. With it gone it may be harder, but we have to work at it.

Lesson #8: Give it time. I think some of the misunderstandings and controversies around PASS come from younger people who didn't get the gains out of it that folks like me who are older did. Part of it has to do with how dysfunctional and political the organization as well as the community got over time – but some of it has to do with the fact that building a network and a respectable name really takes time. It takes time for people to get to know you as a person of integrity and good values, and as someone worth depending on. Give it time; don't push the river.

Last, but not least – be a person of integrity. Be someone people can depend on when they need you. Even if we are labelled or end up having wrong opinions in a controversy, our integrity can go a long way in saving our skin. Mine certainly did. Be a person of integrity, and help people. It is, quite literally, all there is.

Thank you for reading and Happy Holidays. The post End of an Era – SQL PASS and Lessons learned appeared first on SQLServerCentral.


Using Azure Durable Functions with Azure Data Factory - HTTP Long Polling from Blog Posts - SQLServerCentral

Anonymous
21 Dec 2020
5 min read
(2020-Dec-21) While working with Azure Functions, which provide a serverless environment to run my code, I'm still struggling to understand how it actually works. Yes, I admit, there is no bravado in my conversation about Function Apps; I really don't understand what happens behind the scenes when a front-end application submits a request to execute my function code in a cloud environment, and how this request is processed via the durable function framework (starter => orchestrator => activity).

Azure Data Factory provides an interface to execute your Azure Function, and if you wish, the output of your function code can be further processed in your Data Factory workflow. The more I work with this pair, the more I trust how a function app can behave differently under the various Azure Service Plans available to me. The more parallel Azure Function requests I submit from my Data Factory, the more trust I put into my Azure Function App that it will properly and gracefully scale out from "Always Ready instances", to "Pre-warmed instances", and to "Maximum instances" available for my Function App. The supported runtime version for PowerShell durable functions, along with the data exchange possibilities between an orchestrator function and an activity function, requires a lot of trust too, because the latter is still not well documented.

My current journey of using Azure Functions in Data Factory has been marked by two milestones so far:
Initial overview of what is possible - http://datanrg.blogspot.com/2020/04/using-azure-functions-in-azure-data.html
Further advancement to enable long-running function processes and keep data factory from failing - http://datanrg.blogspot.com/2020/10/using-durable-functions-in-azure-data.html

(Image: Photo by Jesse Dodds on Unsplash)

Recently I realized that the initially proposed HTTP polling of a long-running function process in a data factory can be simplified even further. An early version (please check the 2nd blog post listed above) suggested that I execute a durable function orchestrator, which eventually executes a function activity. Then I would check the status of my function app execution by polling the statusQueryGetUri URI from my data factory pipeline; if its status was not Completed, I would poll it again.

In reality, the combination of an Until loop container along with Wait and Web call activities can be replaced by a single Web call activity. The reason is simple: when you initially execute your durable Azure Function (even if it will take minutes, hours, or days to finish), it will almost instantly respond with HTTP status code 202 (Accepted). Then the Azure Data Factory Web activity will poll the statusQueryGetUri URI of your Azure Function on its own until the HTTP status code becomes 200 (OK). The Web activity will run this step as long as necessary, or until the Azure Function timeout is reached; this can vary for different pricing tiers - https://docs.microsoft.com/en-us/azure/azure-functions/functions-scale#timeout

The structure of the statusQueryGetUri URI is simple: it has a reference to your azure function app along with the execution instance GUID. And how Azure Data Factory polls this URI is unknown to me; it's all about trust, please see the beginning of this blog post.

https://<your-function-app>.azurewebsites.net/runtime/webhooks/durabletask/instances/<GUID>?taskHub=DurableFunctionsHub&connection=Storage&code=<code-value>

This has been an introduction; now the real blog post begins.
Naturally, you can execute multiple instances of your Azure Function at the same time (event-driven processes or front-end parallel execution steps) and the Azure Function App will handle them. My recent work project requirement indicated that when a parallel execution happens, a certain operation still needed to be throttled and artificially sequenced; again, it was a special use case, and it may not happen in your projects. I tried to put such throttling logic inside my durable azure function activity; however, with many concurrent requests to execute this one particular operation, my function app had used all of the available instances, and while those instances were active and running, my function became unavailable to the existing data factory workflows.

There is a good wiki page about Writing Tasks Orchestrators that states, "Code should be non-blocking i.e. no thread sleep or Task.WaitXXX() methods." So, that was my aha moment to move the throttling logic from my azure function activity to the data factory.

Now, when an instance of my Azure Function finds that it can't proceed further because another operation is running, it completes with HTTP status code 200 (OK), releases the azure function instance, and also provides an additional execution output status indicating that it's not really "OK" and needs to be re-executed. The Until loop container now handles two types of scenario:
HTTP status code 200 (OK) and custom output Status "OK": it exits the loop container and proceeds further with the "Get Function App Output" activity.
HTTP status code 200 (OK) and custom output Status not "OK" (you can provide more descriptive info of what your not-OK scenario might be): execution continues with another round of "Call Durable Azure Function" & "Get Current Function Status".

This new approach for gracefully handling conflicts in functions required some changes: (1) in the Azure Function activity, to run the regular operation and complete with the custom "OK" status, or to identify another running instance, complete the current function instance, and provide a "Conflict" custom status; (2) Data Factory adjustments to check the custom Status output and decide what to do next.

The Azure Function HTTP long polling mission was accomplished; however, it now has two layers of HTTP polling: the natural webhook status collection and the data factory custom logic to check whether the OK status received from the webhook was really OK. The post Using Azure Durable Functions with Azure Data Factory - HTTP Long Polling appeared first on SQLServerCentral.
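To make the 202/200 polling contract described above concrete outside of Data Factory, here is a minimal Python sketch of the same pattern. The starter URL and request payload are placeholders for a hypothetical HTTP-triggered durable orchestrator, and the custom "OK"/"Conflict" status the post describes would surface in the orchestration output field.

import time
import requests

# Placeholder HTTP-starter endpoint of a durable function orchestrator; replace with your own.
START_URL = "https://<your-function-app>.azurewebsites.net/api/orchestrators/MyOrchestrator"

# Kick off the orchestration. A durable function replies almost immediately with
# 202 (Accepted) plus a set of management URLs, including statusQueryGetUri.
start_resp = requests.post(START_URL, json={"inputParameter": "value"})  # payload is illustrative
start_resp.raise_for_status()
status_url = start_resp.json()["statusQueryGetUri"]

# Poll statusQueryGetUri until the orchestration finishes. This mirrors what the single
# ADF Web activity does on its own: 202 means still running, 200 means it has finished.
while True:
    status_resp = requests.get(status_url)
    if status_resp.status_code == 200:
        result = status_resp.json()
        print("runtimeStatus:", result["runtimeStatus"])   # e.g. Completed or Failed
        print("output:", result.get("output"))             # where a custom "OK"/"Conflict" status would appear
        break
    time.sleep(15)  # back off between polls

This is only a sketch of the polling loop; in the approach described in the post, the loop itself is delegated to the Data Factory Web activity, and the Until container only inspects the custom output status afterwards.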