
Tech News

3711 Articles
2021 Zen Masters: Nominations and applications are now open! from What's New

Anonymous
08 Dec 2020
8 min read
Amanda Boyle and Tanna Solberg | December 8, 2020

In their mastery of the Tableau platform, their desire to collaborate and help invent the Tableau solutions of tomorrow, and their dedication to helping our global community, Tableau Zen Masters stand out in a community of greatness. They are also our biggest advocates as well as our harshest critics, but by listening to and engaging with the Tableau Community, we build better software and, ultimately, a better company. The spirit of Tableau is our customers. And as one way to better support our customers, we are growing the Zen Master program this year. We are looking to add more diverse leaders and to grow representation throughout the world.

We are excited to announce that it's time to nominate a new cohort of Zen Masters. This means we need your help! To ensure a diverse pool of nominees representative of our global community, we need the collective force of all of you to help us spread the news to your colleagues and your friends. We need you to champion your peers and shine a light on those doing incredible work that elevates others in a public space. By nominating a fellow community member to become a Tableau Zen Master, you are not only recognizing your data heroes, but you are also giving your input about who you want to lead the Tableau Community for the upcoming year.

As we shared in the 2020 Ambassador nomination process, we are listening to the calls for action and change from the community—and from our own team. We ask you to elevate diverse voices by nominating Black, Indigenous, and people of color to lead our community. We know that by amplifying diverse leaders in our community, we can help create better outcomes for all, and better reflect the communities in which we live and work. To establish greater equity, we need to bring diversity into this community proactively. The Tableau Community Equity Task Force will be advising our team on recruitment opportunities, but it will take a collective effort to be successful. Thank you for your support and engagement.

We avoid selecting individuals whose work is internal and private to their organization. We respect that as our communities have grown, internal communities have flourished. Undoubtedly, we want to celebrate these efforts, but we continue to only select members whose work meets our criteria of working in and elevating others in public spaces—and not fee-gated.

What makes someone a Tableau Zen Master?

The Zen Master nomination period begins now, and will be open from Tuesday, December 8, 2020, through Friday, January 8, 2021. During the nomination period, we invite you to highlight the people who inspire and instruct you—those with exceptional dedication to exploring Tableau and improving it. The "humble-smart" leaders who make the community so remarkable. Zen Master selections are made from nominations and applications from you, members of the Tableau Community!

When submitting a nomination, you will be asked to share examples of how you or your nominee has demonstrated the three pillars of the Tableau Zen Master program: teacher, master, and collaborator. As you prepare, consider the following questions:

How has this person served as a teacher in the last year? Does the person dedicate their time to helping others be better at using Tableau? Are they a Tableau evangelist who shares our mission of helping people see and understand data? Does the person add to the Tableau Community by offering help to people of all levels?

How do you or your nominee demonstrate mastery of the Tableau platform? Has the person shown a deep understanding of how Tableau works? They might create beautiful dashboards on Tableau Public, maintain Tableau Server, build mind-blowing extensions, or more.

How does your nominee collaborate? Does the person work with others to bring Tableau and its mission to new communities and geographies? Have they worked with other community members to build thought leadership or training resources? Do they contribute useful, popular ideas on our Ideas Forum?

If all of these attributes can be found in someone you know, nominate them to be a Tableau Zen Master. Please be brief and focused in your response, including links to blogs, Tableau Public profiles, vizzes, virtual event links, and other sources. Tableau and Salesforce employees, partners, and community members are all welcome to submit nominations.

Ready, set, nominate! Please complete one submission for each person you want to nominate. Nominations close at 10:00 pm PST on Friday, January 8, 2021. All nominations will be reviewed by a selection committee made up of Tableau employees with input from the Hall of Fame Zen Masters. We do not select Zen Masters based on the number of nominations received. While we do read and track all nominations, we also use internal metrics, employee nominations, and the needs of our global community to determine the new cohort. Further details can be found on the Tableau Zen Master page.

Getting together looked different this year: Tableau Community members, including Zen Masters, Ambassadors, and friends, coming together for TC'ish

Supporting our Community through 2020 and beyond

In February 2020, we invited 34 individuals, representing 11 countries and 4 global regions, to serve as Tableau Zen Masters for a one-year term. This year's Zen Masters helped design and pivot events to virtual platforms—welcoming thousands to the #DataFamCommunityJams. They supported new mentorship initiatives to help people feel more connected in isolation and build new opportunities for collaboration. They worked countless hours standing up the first iteration of what would become the Tableau COVID-19 Data Hub. These leaders jumped in without hesitation when requests came in from global public-health leaders desperate for assistance with organizing and analyzing data. The 2020 Zen Masters brought their passion and expertise to a new generation, creating content for our Data Literacy for All eLearning program that provides data skills fundamentals, regardless of skill level. And just last month, two Hall of Fame Zen Masters gave their time to work with SurveyMonkey and Axios to make sure we put out the best possible and best-performing visualizations in our US Election Polling partnership for the presidential election.

Zen Masters Jeffrey Shaffer, Sarah Bartlett, and Kevin Flerlage joined by Ambassadors Adam Mico, Dinushki De Livera, and Mark Bradbourne at the Cincinnati TUG in January 2020

We are inviting all 2020 Tableau Zen Masters to join us for another year. 2020 didn't quite work out the way anyone predicted. The pressures of COVID-19, combined with so many other factors, have had an impact on everyone personally and professionally—and have also impacted the Zen Master experience. We encouraged all of our community leaders to prioritize their health and wellness, and that of their families. We supported members disengaging to take care of more pressing priorities, and we greatly appreciate that they did.

Through it all, the 2020 class exceeded all expectations as teachers, masters, and collaborators in brilliant, meaningful ways that truly embodied the pay-it-forward spirit of the Tableau Community. We have offered the 2020 Zen Masters an early invitation to join the 2021 group. There are a few reasons why we have made this decision. First, this year's group has had unique, pandemic-related challenges in terms of access to Tableau teams, speaking opportunities, and time to connect with one another, as well as a lack of support from our team. Second, we just think it's the right thing to do. We know this year has been challenging. We are learning as we go, and we want all the current Zen Masters to have a meaningful experience—one we believe we have not provided this year. This will not be an extension of the current year and will add to the 5-year minimum to be considered for the Zen Master Hall of Fame. Current Zen Masters are being asked to provide a recap of their experience, sharing what is or is not working for them, and any feedback to help strengthen the program through the next year.

Left: Zen Master Ann Jackson sharing her knowledge and passion for teaching and problem solving as a volunteer Tableau Doctor at TC'ish 2020. Right: Zen Master Yukari Nagata supporting the APAC Community.

All Zen Masters completing their 5th term will be eligible for 2021 Hall of Fame consideration. Current Zen Masters who will be eligible after completing their 5th term include Adam McCann, Chris Love, Jeffrey Shaffer, Rob Radburn, and Tamas Foldi. Each member will go through a similar evaluation process that we have used with previous groups. Thank you, #datafam, for being a part of the Tableau Community! We look forward to hearing from you.

Using Aws Sdk With Go for Ec2 Ami Metrics from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
8 min read
Source

The source code for this repo is located here:

What This Is

This is a quick overview of some AWS SDK Go work, but not a detailed tutorial. I'd love feedback from more experienced Go devs as well. Feel free to submit a PR with tweaks or suggestions, or just comment at the bottom (which is a GitHub issue powered comment system anyway).

Image Age

Good metrics can help drive change. If you identify metrics that help you quantify areas of progress in your DevOps process, you'll have a chance to show the progress made and chart the wins. Knowing the age of the image underlying your instances could be useful if you wanted to measure how often instances were being built and rebuilt. I'm a big fan of making instances as immutable as possible, with less reliance on changes applied by configuration management and build-oriented pipelines, and more baked into the image itself. Even if you don't build everything into your image and are just doing "golden images", you'll still benefit from seeing the average age of images used go down. This would represent more continual rebuilds of your infrastructure. Containerization removes a lot of these concerns, but not everyone is in a place to go straight to containerization for all deployments yet.

What Using the SDK Covers

I decided this would be a good chance to use Go, as the task is relatively simple and I already know how I'd accomplish this in PowerShell. If you are also on this journey, maybe you'll find this detail inspiring to help you get some practical application in Go. There are a few steps that would be required:

Connection & Authorization
Obtain a List of Images (filtering required)
Obtain a List of Instances
Match Images to Instances where possible
Produce artifact in file form

Warning… I discovered that the SDK is pretty noisy and probably makes things a bit tougher than just plain idiomatic Go. If you want to learn pointers and dereferencing with Go, you'll be a pro by the time you are done with it.

Why This Could Be Useful In Learning More Go

I think this is a pretty great small metric-oriented collector to focus on, as it ties in with several areas worth future versions. Since the overall logic is simple, there's less need to focus on understanding AWS and more on leveraging different Go features.

Version 1: MVP that just produces a JSON artifact
Version 2: Wrap up in a Lambda collector and produce an S3 artifact
Version 3: Persist metrics to CloudWatch instead as a metric
Version 4: Datadog or Telegraf plugin

From the initial iteration I'll post, there's quite a bit of room for even basic improvement that my quick and dirty solution didn't implement:

Use channels to run parallel sessions to collect multi-region metrics in less time
Use sorting with the structs properly, which would probably cut down on overhead and execution time dramatically
Timeseries metrics output for CloudWatch, Datadog, or Telegraf

Caveat

Still learning Go. Posting this up and welcome any pull requests or comments (comments will open a GitHub issue automatically). There is no proper isolation of functions and tests applied. I've determined it's better to produce and get some volume under my belt than focus on immediately making everything best practices. Once I've gotten more familiar with proper Go structure, removing logic from main() and more will be important. This is not a complete walkthrough of all concepts, more a few things I found interesting along the way.
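The post doesn't show the final "produce artifact in file form" step from the list above, so here is a minimal, hedged sketch of what the version 1 JSON output could look like using only the Go standard library. The struct is a trimmed copy of the article's ReportAmiAging type; the sample values and output filename are placeholders, not anything from the original repo.

package main

import (
    "encoding/json"
    "log"
    "os"
    "time"
)

// ReportAmiAging is an abbreviated copy of the struct from the post;
// pointer fields plus omitempty keep unknown ages out of the artifact.
type ReportAmiAging struct {
    Region        string     `json:"region"`
    InstanceID    string     `json:"instance-id"`
    AmiID         string     `json:"image-id"`
    AmiCreateDate *time.Time `json:"ami-create-date,omitempty"`
    AmiAgeDays    *int       `json:"ami-age-days,omitempty"`
}

func main() {
    age := 42
    report := []ReportAmiAging{
        // Placeholder values purely for illustration.
        {Region: "eu-west-1", InstanceID: "i-0123456789abcdef0", AmiID: "ami-0123456789abcdef0", AmiAgeDays: &age},
    }

    // MarshalIndent keeps the file human-readable; nil pointer fields are dropped.
    out, err := json.MarshalIndent(report, "", "  ")
    if err != nil {
        log.Fatal(err)
    }
    if err := os.WriteFile("ami-aging-report.json", out, 0o644); err != nil {
        log.Fatal(err)
    }
}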
Some Observations & Notes On V1 Attempt

omitempty

Writing to JSON is pretty straightforward, but what I found interesting was handling null values. If you don't want the default initialized value from the data type to be populated, then you need to specify additional attributes in your struct to let it know how to properly serialize the data. For instance, I didn't want to populate a null value for AmiAge, as 0 would mess up any averages you were trying to calculate.

type ReportAmiAging struct {
    Region             string     `json:"region"`
    InstanceID         string     `json:"instance-id"`
    AmiID              string     `json:"image-id"`
    ImageName          *string    `json:"image-name,omitempty"`
    PlatformDetails    *string    `json:"platform-details,omitempty"`
    InstanceCreateDate *time.Time `json:"instance-create-date"`
    AmiCreateDate      *time.Time `json:"ami-create-date,omitempty"`
    AmiAgeDays         *int       `json:"ami-age-days,omitempty"`
}

In this case, I just set omitempty and it would set to null if I passed in a pointer to the value. For a much more detailed walk-through of this: Go's Emit Empty Explained

Multi-Region

Here things got a little confusing, as I really wanted to run this concurrently, but shelved that for v1 to deliver results more quickly. To initialize a new session, I provided my starting point.

sess, err := session.NewSession(&aws.Config{
    Region: aws.String("eu-west-1"),
},
)
if err != nil {
    log.Err(err)
}
log.Info().Str("region", string(*sess.Config.Region)).Msg("initialized new session successfully")

Next, I had to gather all the regions. In my scenario, I wanted to add flexibility to ignore regions that were not opted into, to allow fewer regions to be covered when this setting was correctly used in AWS.

// Create EC2 service client
client := ec2.New(sess)
regions, err := client.DescribeRegions(&ec2.DescribeRegionsInput{
    AllRegions: aws.Bool(true),
    Filters: []*ec2.Filter{
        {
            Name:   aws.String("opt-in-status"),
            Values: []*string{aws.String("opted-in"), aws.String("opt-in-not-required")},
        },
    },
},
)
if err != nil {
    log.Err(err).Msg("Failed to parse regions")
    os.Exit(1)
}

The filter syntax is pretty ugly. Due to the way the SDK works, you can't just pass in *[]string{"opted-in", "opt-in-not-required"} and then reference this. Instead, you have to use the AWS functions to create pointers to the strings and then dereference them, apparently. Deep diving into this further was beyond my time allotted, but definitely a bit clunky. After gathering the regions you'd iterate and create a new session per region, similar to this.

for _, region := range regions.Regions {
    log.Info().Str("region", *region.RegionName).Msg("--> processing region")
    client := ec2.New(sess, &aws.Config{Region: *&region.RegionName})
    // Do your magic
}

Structured Logging

I've blogged about this before (mostly on microblog). As a newer gopher, I've found that zerolog is pretty intuitive. Structured logging is really important to being able to use log tools and get more value out of your logs in the future, so I personally like the idea of starting with them from the beginning. Here you can see how to provide name-value pairs, along with the message.

log.Info().Int("result_count", len(respInstances.Reservations)).Dur("duration", time.Since(start)).Msg("\tresults returned for ec2instances")

Using this provided some nice readable console feedback, along with values that a tool like Datadog's log parser could turn into values you could easily make metrics from.

Performance In Searching

From my prior blog post Filtering Results In Go I also talked about this.
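One note on the "ugly" filter syntax above: the v1 SDK's aws package ships slice helpers that cut down on the pointer noise. The following is a hedged, self-contained sketch of the same DescribeRegions call written with aws.StringSlice and aws.StringValue; it uses the standard library logger for brevity and is an alternative phrasing, not code from the author's repo.

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
    sess, err := session.NewSession(&aws.Config{Region: aws.String("eu-west-1")})
    if err != nil {
        log.Fatal(err)
    }
    client := ec2.New(sess)

    // aws.StringSlice converts []string to []*string in one call,
    // so each filter value doesn't need its own aws.String wrapper.
    regions, err := client.DescribeRegions(&ec2.DescribeRegionsInput{
        AllRegions: aws.Bool(true),
        Filters: []*ec2.Filter{
            {
                Name:   aws.String("opt-in-status"),
                Values: aws.StringSlice([]string{"opted-in", "opt-in-not-required"}),
            },
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, r := range regions.Regions {
        // aws.StringValue safely dereferences the *string (empty string if nil).
        log.Printf("region: %s", aws.StringValue(r.RegionName))
    }
}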
The lack of syntactic sugar in Go means this seemed much more verbose than I was expecting. A few key things I observed here were:

Important to set your default layout for time if you want any consistency.
Sorting algorithms, or even just basic sorting, would likely reduce the overall cost of a search like this (I'm betting pretty dramatically).
Pointers. Everywhere. Coming from a dynamic scripting language like PowerShell/Python, this is a different paradigm. I'm used to isolated functions which have less focus on passing values to modify directly (by value). In .NET you can pass in variables by reference, which is similar in concept, but it's not something I found a lot of use for in scripting. I can see the massive benefits when at scale though, as avoiding more memory grants by using existing memory allocations with pointers would be much more efficient. Just have to get used to it!

// GetMatchingImage will search the ami results for a matching id
func GetMatchingImage(imgs []*ec2.Image, search *string) (parsedTime time.Time, imageName string, platformDetails string, err error) {
    layout := time.RFC3339 //"2006-01-02T15:04:05.000Z"
    log.Debug().Msgf("\t\t\tsearching for: %s", *search)
    // Look up the matching image
    for _, i := range imgs {
        log.Trace().Msgf("\t\t\t%s <--> %s", *i.ImageId, *search)
        if strings.ToLower(*i.ImageId) == strings.ToLower(*search) {
            log.Trace().Msgf("\t\t\t %s == %s", *i.ImageId, *search)
            p, err := time.Parse(layout, *i.CreationDate)
            if err != nil {
                log.Err(err).Msg("\t\t\tfailed to parse date from image i.CreationDate")
            }
            log.Debug().Str("i.CreationDate", *i.CreationDate).Str("parsedTime", p.String()).Msg("\t\t\tami-create-date result")
            return p, *i.Name, *i.PlatformDetails, nil
            // break
        }
    }
    return parsedTime, "", "", errors.New("\t\t\tno matching ami found")
}

Multiple Return Properties

While this can be done in PowerShell, I rarely did it in the manner Go does.

amiCreateDate, ImageName, platformDetails, err := GetMatchingImage(respPrivateImages.Images, inst.ImageId)
if err != nil {
    log.Err(err).Msg("failure to find ami")
}

Feedback Welcome

As stated, feedback from any more experienced Gophers would be welcome. Anything for round 2. Goals for that will be, at a minimum:

Use go test to run.
Isolate main and build basic tests for each function.
Decide to wrap up in lambda or plugin.

#tech #development #aws #golang #metrics The post Using Aws Sdk With Go for Ec2 Ami Metrics appeared first on SQLServerCentral.
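Since the round-2 goals above call out go test, here is a small, hedged illustration of what a table-driven test for an AMI-age helper might look like. The amiAgeDays function and the file it would live in (for example main_test.go) are assumptions made for the sake of the example, not code from the repository.

package main

import (
    "testing"
    "time"
)

// amiAgeDays is a tiny helper of the kind the round-2 goals could test;
// it is illustrative only, not code from the post's repo.
func amiAgeDays(created, now time.Time) int {
    return int(now.Sub(created).Hours() / 24)
}

func TestAmiAgeDays(t *testing.T) {
    cases := []struct {
        name    string
        created string
        now     string
        want    int
    }{
        {"same day", "2020-12-07T00:00:00Z", "2020-12-07T12:00:00Z", 0},
        {"thirty days", "2020-11-07T00:00:00Z", "2020-12-07T00:00:00Z", 30},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            created, _ := time.Parse(time.RFC3339, tc.created)
            now, _ := time.Parse(time.RFC3339, tc.now)
            if got := amiAgeDays(created, now); got != tc.want {
                t.Errorf("amiAgeDays() = %d, want %d", got, tc.want)
            }
        })
    }
}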

Data Architecture Blog Post: CI CD in Azure Synapse Analytics Part 1 from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
1 min read
Hello Dear Reader! It's been a while. I've got a new blog post over on the Microsoft Data Architecture Blog on using Azure Synapse Analytics titled,  CI CD in Azure Synapse Analytics Part 1 .  I'm not sure how many numbers will be in this series. I have at least 2 planned. We will see after that. So head over and read up my friends!   As always. Thank you for stopping by.   Thanks,   Brad. The post Data Architecture Blog Post: CI CD in Azure Synapse Analytics Part 1 appeared first on SQLServerCentral.

AutoCorrect in Git from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
1 min read
I can't believe autocorrect is available, or that I didn't know it existed. I should have looked; after all, git is smart enough to guess my intentions. I learned this from Kendra Little, who made a quick video on this. She got it from Andy Carter's blog. Let's say that I type something like git stats in the cmd line. I'll get a message from git that this isn't a command, but there is one similar. You can see this below. However, I can have git actually just run this. If I change the configuration with this code:

git config --global help.autocorrect 20

Now if I run the command, I see this, where git will delay briefly and then run what it thinks is correct. The delay is controlled by the parameter I passed in. The value is in tenths of a second, so 20 is 2 seconds, 50 is 5 seconds, 2 is 0.2 seconds, etc. If you set this back to 0, autocorrect is off. A great trick, and one I'd suggest everyone enable. The post AutoCorrect in Git appeared first on SQLServerCentral.

Data boosts confidence and resilience in companies despite the uncertain global economic climate: YouGov survey in Asia Pacific & Japan finds from What's New

Anonymous
07 Dec 2020
7 min read
JY Pook, Senior Vice President, Asia Pacific, Tableau | December 7, 2020

Especially after the long bumpy ride we have all been on since the start of 2020, and as we continue to live through the pandemic, can businesses really be optimistic about their future health? It can be particularly tough for businesses in Asia Pacific & Japan (APJ), who have been weathering this storm the longest. The chain of events triggered by the public health pandemic that dealt shockwaves like we've never seen are still reverberating today. But as we approach the end of the year, APJ is finally on a slow path of recovery, even though progress is uneven across markets.

Business leaders can now apply learnings from the first phase of the pandemic to get a better grip on their business, but the predominant sentiment remains one of caution. With reports of new waves of infections disrupting economic recovery, the future remains uncertain. Still, there are business leaders who are feeling optimistic about the next six months - those from data-driven organisations. Many of them have been encouraged by the critical advantages that data has brought about for their organisation during the pandemic, empowering them to come out of the crisis better than others.

Being data-driven fuels optimism for the future

More data-driven companies (63%) are optimistic about the future health of their business in the next six months than non data-driven companies (37%). This is the most notable finding we uncovered in a recent study conducted in conjunction with YouGov, which surveyed more than 2,500 medium level managers or higher and IT decision makers across four markets in Asia Pacific (Singapore, Australia, India and Japan).

Business leaders across various industries were questioned about their use of data during the pandemic, lessons learnt and confidence in the future health of their organisation. Overwhelmingly, we found that data-driven organisations are more resilient and confident during the pandemic, and this is what fuels the optimism for the future health of their business. 82 percent of data-driven companies in APJ have reported critical business advantages during the pandemic. The findings show multiple and vast benefits when organisations tap on data:

●    being able to make strategic business decisions faster (54%)
●    more effective communication with stakeholders (54%)
●    increased cross-team collaboration (51%) and
●    making their business more agile (46%)

Bank Mandiri, one of the leading financial institutions in Indonesia, is a great example of such data-driven organisations. Data enabled the bank to quickly gain visibility on the evolving situation, and respond accordingly to ensure business continuity for its customers.

At the height of the pandemic, when many of its customers began facing cash flow problems, the bank tapped into data sources, built data squads and created key dashboards focused on real-time liquidity monitoring and a loan restructuring programme, all within a matter of 48 hours. The Tableau solution allowed Bank Mandiri to increase flexibility in their operations and assess customers' suitability for their new loan restructuring program. In doing so, they could ensure that customers still carry out their financial transactions and receive support on their financial and loan repayment needs. What is troubling is that across the region, there remains a disconnect in how businesses value and use data.
In contrast to organisations like Bank Mandiri, only 39% of non data-driven companies recognise data as a critical advantage. This is in spite of how the pandemic has further asserted the role of data in society today, and as we enter the era of analytics ubiquity.  In the coming year, the use of data will set companies even further apart. A strong data culture is no longer a nice-to-have, but rather a must-have for organisations. There needs to be a mindset shift in non data-driven organisations, where they need to get all hands on data. Explore the full dashboard here. Investment in data skills key to gaining competitive advantage One of the fundamental areas of focus for organisations during the pandemic is retaining and investing in its people. On this front, data-driven companies are again leading the charge - 82 percent of them eager to increase or continue their existing level of data skills investment in employees over the next six months.  Worryingly, 32 percent of non-data driven organisations have opted to either reduce or not invest in data skills at all. These non data-driven companies are at high risk of being at a disadvantage. At this critical time when it is a necessity for organisations to remain agile and adaptable, employees must have the requisite data skills to make both strategic and tactical decisions backed by insights, to future-proof their organisation for the challenges that lie ahead.  Take Zuellig Pharma, for instance. As one of the largest healthcare service providers in the region, Zuellig is deeply committed to investing in data skills training for its employees - through various programmes such as Tableau and automation training, as well as self-directed learning on its online Academy. These efforts have paid off well during the pandemic - exemplified by a critical mass of people within the organisation who embeds data practices and assets into everyday business processes. Instead of relying on the analytics team, even ground level staff such as warehouse operators have the competency to review and analyse data through Tableau and understand how warehouse processes map against business goals. An empowered workforce gives the organisation more confidence in planning, preparing and overcoming new operational challenges brought about by the pandemic.  Aside from investing in data skills, business leaders must also look into developing a more holistic data strategy as they increasingly incorporate data in their business processes. The survey found that the other top lessons learnt from the pandemic include the need for better data quality (46%), data transparency (43%), followed by the need for agility (41%). Organisations must take these into consideration as they plan for the year ahead.  Building business resilience with data analytics, starting now With uneven recovery and prevailing uncertainty across the region, it is more important than ever for business leaders to build operational resilience and business agility with data insights. For leaders who worry that they have yet to establish a data-driven organisation, it is never too late to embark on the data journey - and the best time to act is now.  The truth is, becoming a data-driven organisation does not require dramatic changes right off the bat. Business leaders can start by taking action with data that already sits within the organisation, and empower its workforce with the necessary data skills and tools. 
Over time, these steps can set off a chain reaction and culminate in communities centered around making data-first decisions, which can contribute to the larger cultural shift and better business outcomes.  Looking externally, other data-driven organisations like ZALORA can also offer inspiration and lessons on how to drive organisational transformation with data. Even amidst a difficult time like a global pandemic, data has provided the means for the company to diversify its product offerings and unlock new revenue streams. Earlier this year, it introduced TRENDER, an embedded analytics solution, to provide brand partners on its platform with real-time insights and trends on sales performance. Data has helped ZALORA to provide value-added solutions for its brand partners, and stay relevant and competitive in the retail scene.  Find out more about our YouGov research and how to get started on your data journey here.

Migrating from SQL Server to Amazon AWS Aurora from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
2 min read
Is Microsoft's licensing scheme getting you down? It's 2020 and there are now plenty of data platforms that are good for running your enterprise data workloads. Amazon's Aurora PaaS service runs either MySQL or PostgreSQL. I've been supporting SQL Server for nearly 22 years and I've seen just about everything when it comes to bugs or performance problems, and am quite comfortable with SQL Server as a data platform; so, why migrate to something new? Amazon's Aurora has quite a bit to offer and they are constantly improving the product. Since there are no license costs, its operating expenditures are much more reasonable. Let's take a quick look to compare a 64 core Business Critical Azure Managed Instance with a 64 core instance of Aurora MySQL.

What about Aurora? Two nodes of Aurora MySQL are less than half the cost of Azure SQL Server Managed Instances. It's also worth noting that Azure Managed Instances only support 100 databases and only have 5.1 GB of RAM per vCore. Given the 64 core example, there's only 326.4 GB of RAM compared to the 512 GB selected in the Aurora instance.

This post wasn't intended to be about the "Why" of migrating; so, let's talk about the "How". Migration at a high level takes two steps: Schema Conversion and Data Migration.

Schema Conversion is made simple with AWS SCT (Schema Conversion Tool). Walking through a simple conversion: note that the JDBC drivers for SQL Server are required. You can't use "." for a local host, which is a little annoying, but typing the server name is easy enough. The dark blue items in the graph represent complex actions, such as converting triggers; since triggers aren't a concept used in MySQL, they aren't a simple 1:1 conversion. Migrating to Aurora from SQL Server can be simple with AWS SCT, and it's a cost-saving move that also modernizes your data platform. Next we'll look at AWS DMS (Data Migration Service). Thanks to the engineers at AWS, migrating to Aurora PostgreSQL is even easier. Recently Babelfish for Aurora PostgreSQL was announced, which is a product that allows SQL Server's T-SQL code to run on PostgreSQL. The post Migrating from SQL Server to Amazon AWS Aurora appeared first on SQLServerCentral.
Daily Coping 7 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.  Today’s tip is to discover your artistic side and design your own Christmas cards. I’m not a big Christmas card sender, but years ago I used to produce a letter for the family that we sent out to extended family and friends. It was a quick look at life on the ranch. At some point, I stopped doing it, but I decided to try and cope a little this year by restarting this. While we haven’t done a lot this year, we have spent time together, and life has changed for us, albeit a bit strangely. I’m here all the time, which is good for family. So I gathered together some photos from the year, and put them together with some words and a digital Xmas card. I’m not sharing the words here, but I’ll include the design with the photos. The post Daily Coping 7 Dec 2020 appeared first on SQLServerCentral.

Tracking costliest queries from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
3 min read
Being a Database Developer or Administrator, we often work on performance optimization of queries and procedures. It becomes very necessary that we focus on the right queries to get major benefits. Recently I was working on a Performance Tuning project. I started working based on the query list provided by the client. The client was referring to user feedback and the Long Running Query extract from SQL Server, but it was not helping much. The database had more than 1K stored procedures and approx. 1K other programmability objects. On top of that, there were multiple applications triggering inline queries as well. I got a very interesting request from my client: "Can we get the top 100 queries running most frequently and taking more than a minute?" This made me write my own query to get the list of queries being executed frequently and for a duration greater/less than a particular time.

This query can also play a major role if you are doing multiple things to optimize the database (such as server/database setting changes, indexing, stats or code changes, etc.) and would like to track the duration. You can create a job with this query and dump the output in some table. The job can be scheduled to run at a certain frequency. Later, you can plot a trend out of the data tracked. This has really helped me a lot in my assignment. I hope you'll also find it useful.

/*
The following query will return the queries (along with their plans) taking more than 1 minute
and how many times they were executed since the last SQL restart.
We'll also get the average execution time.
*/
;WITH cte_stag AS
(
    SELECT plan_handle
         , sql_handle
         , execution_count
         , (total_elapsed_time / NULLIF(execution_count, 0)) AS avg_elapsed_time
         , last_execution_time
         , ROW_NUMBER() OVER(PARTITION BY sql_handle, plan_handle
                             ORDER BY execution_count DESC, last_execution_time DESC) AS RowID
    FROM sys.dm_exec_query_stats STA
    WHERE (total_elapsed_time / NULLIF(execution_count, 0)) > 60000 -- This is 60000 MS (1 minute). You can change it as per your wish.
)
-- If you need the TOP few queries, simply add the TOP keyword in the SELECT statement.
SELECT DB_NAME(q.dbid) AS DatabaseName
     , OBJECT_NAME(q.objectid) AS ObjectName
     , q.text
     , p.query_plan
     , STA.execution_count
     , STA.avg_elapsed_time
     , STA.last_execution_time
FROM cte_stag STA
CROSS APPLY sys.dm_exec_query_plan(STA.plan_handle) AS p
CROSS APPLY sys.dm_exec_sql_text(STA.sql_handle) AS q
WHERE STA.RowID = 1
  AND q.dbid = DB_ID() /* Either select the desired database while running the query or supply
                          the database name in quotes to the DB_ID() function.
                          Note: Inline queries being triggered from an application may not have
                          the object name and database name. If you are not getting the desired
                          query in the result, try removing the filter condition on dbid. */
ORDER BY 5 DESC, 6 DESC

The post Tracking costliest queries appeared first on SQLServerCentral.

Provisioning storage for Azure SQL Edge running on a Raspberry Pi Kubernetes cluster from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
10 min read
In a previous post we went through how to setup a Kubernetes cluster on Raspberry Pis and then deploy Azure SQL Edge to it. In this post I want to go through how to configure a NFS server so that we can use that to provision persistent volumes in the Kubernetes cluster. Once again, doing this on a Raspberry Pi 4 with an external USB SSD. The kit I bought was: – 1 x Raspberry Pi 4 Model B – 2GB RAM 1 x SanDisk Ultra 16 GB microSDHC Memory Card 1 x SanDisk 128 GB Solid State Flash Drive The initial set up steps are the same as the previous posts, but we’re going to run through them here (as I don’t just want to link back to the previous blog). So let’s go ahead and run through setting up a Raspberry Pi NFS server and then deploying persistent volumes for Azure SQL Edge. Flashing the OS The first thing to do is flash the SD card using Rufus: – Grab the Ubuntu 20.04 ARM image from the website and flash all the cards: – Once that’s done, connect the Pi to an internet connection, plug in the USB drive, and then power the Pi on. Setting a static IP Once the Pi is powered on, find it’s IP address on the network. Nmap can be used for this: – nmap -sP 192.168.1.0/24 Or use a Network Analyzer application on your phone (I find the output of nmap can be confusing at times). Then we can ssh to the Pi: – ssh pi@192.168.1.xx And then change the password of the default ubuntu user (default password is ubuntu): – Ok, now we can ssh back into the Pi and set a static IP address. Edit the file /etc/netplan/50-cloud-init.yaml to look something like this: – eth0 is the network the Pi is on (confirm with ip a), 192.168.1.160 is the IP address I’m setting, 192.168.1.254 is the gateway on my network, and 192.168.1.5 is my dns server (my pi-hole). There is a warning there about changes not persisting, but they do Now that the file is configured, we need to run: – sudo netplan apply Once this is executed it will break the current shell, wait for the Pi to come back on the network on the new IP address and ssh back into it. Creating a custom user Let’s now create a custom user, with sudo access, and diable the default ubuntu user. To create a new user: – sudo adduser dbafromthecold Add to the sudo group: – sudo usermod -aG sudo dbafromthecold Then log out of the Pi and log back in with the new user. Once in, disable the default ubuntu user: – sudo usermod --expiredate 1 ubuntu Cool! So we’re good to go to set up key based authentication into the Pi. Setting up key based authentication In the post about creating the cluster we already created an ssh key pair to use to log into the Pi but if we needed to create a new key we could just run: – ssh-keygen And follow the prompts to create a new key pair. Now we can copy the public key to the Pi. Log out of the Pi and navigate to the location of the public key: – ssh-copy-id -i ./raspberrypi_k8s.pub dbafromthecold@192.168.1.160 Once the key has been copied to the Pi, add an entry for the Pi into the ssh config file: – Host pi-nfs-server HostName 192.168.1.160 User dbafromthecold IdentityFile ~/raspberrypi_k8s To make sure that’s all working, try logging into the Pi with: – ssh dbafromthecold@pi-nfs-server Installing and configuring the NFS server Great! Ok, now we can configure the Pi. 
First thing, let’s rename it to pi-nfs-server and bounce: – sudo hostnamectl set-hostname pi-nfs-server sudo reboot Once the Pi comes back up, log back in and install the nfs server itself: – sudo apt-get install -y nfs-kernel-server Now we need to find the USB drive on the Pi so that we can mount it: – lsblk And here you can see the USB drive as sda: – Another way to find the disk is to run: – sudo lshw -class disk So we need to get some more information about /dev/sda it in order to mount it: – sudo blkid /dev/sda Here you can see the UUID of the drive and that it’s got a type of NTFS. Now we’re going to create a folder to mount the drive (/mnt/sqledge): – sudo mkdir /mnt/sqledge/ And then add a record for the mount into /etc/fstab using the UUID we got earlier for the drive: – sudo vim /etc/fstab And add (changing the UUID to the value retrieved earlier): – UUID=242EC6792EC64390 /mnt/sqledge ntfs defaults 0 0 Then mount the drive to /mnt/sqledge: – sudo mount -a To confirm the disk is mounted: – df -h Great! We have our disk mounted. Now let’s create some subfolders for the SQL system, data, and log files: – sudo mkdir /mnt/sqledge/{sqlsystem,sqldata,sqllog} Ok, now we need to modify the export file so that the server knows which directories to share. Get your user and group ID using the id command: – The edit the /etc/exports file: – sudo vim /etc/exports Add the following to the file: – /mnt/sqledge *(rw,all_squash,insecure,async,no_subtree_check,anonuid=1001,anongid=1001) N.B. – Update the final two numbers with the values from the id command. A full break down of what’s happening in this file is detailed here. And then update: – sudo exportfs -ra Configuring the Kubernetes Nodes Each node in the cluster needs to have the nfs tools installed: – sudo apt-get install nfs-common And each one will need a reference to the NFS server in its /etc/hosts file. Here’s what the hosts file on k8s-node-1 now looks like: – Creating a persistent volume Excellent stuff! Now we’re good to go to create three persistent volumes for our Azure SQL Edge pod: – apiVersion: v1 kind: PersistentVolume metadata: name: sqlsystem-pv spec: capacity: storage: 1024Mi accessModes: - ReadWriteOnce nfs: server: pi-nfs-server path: "/mnt/sqledge/sqlsystem" --- apiVersion: v1 kind: PersistentVolume metadata: name: sqldata-pv spec: capacity: storage: 1024Mi accessModes: - ReadWriteOnce nfs: server: pi-nfs-server path: "/mnt/sqledge/sqldata" --- apiVersion: v1 kind: PersistentVolume metadata: name: sqllog-pv spec: capacity: storage: 1024Mi accessModes: - ReadWriteOnce nfs: server: pi-nfs-server path: "/mnt/sqledge/sqllog" What this file will do is create three persistent volumes, 1GB in size (although that will kinda be ignored as we’re using NFS shares), in the ReadWriteOnce access mode, pointing at each of the folders we’ve created on the NFS server. 
We can either create the file and deploy or run (do this locally with kubectl pointed at the Pi K8s cluster): – kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/da751e8c93a401524e4e59266812dc63/raw/d97c0a78887b6fcc41d0e48c46f05fe48981c530/azure-sql-edge-pv.yaml To confirm: – kubectl get pv Now we can create three persistent volume claims for the persistent volumes: – apiVersion: v1 kind: PersistentVolumeClaim metadata: name: sqlsystem-pvc spec: volumeName: sqlsystem-pv accessModes: - ReadWriteOnce resources: requests: storage: 1024Mi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: sqldata-pvc spec: volumeName: sqldata-pv accessModes: - ReadWriteOnce resources: requests: storage: 1024Mi --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: sqllog-pvc spec: volumeName: sqllog-pv accessModes: - ReadWriteOnce resources: requests: storage: 1024Mi Each one with the same AccessMode and size as the corresponding persistent volume. Again, we can create the file and deploy or just run: – kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/0c8fcd74480bba8455672bb5f66a9d3c/raw/f3fdb63bdd039739ef7d7b6ab71196803bdfebb2/azure-sql-edge-pvc.yaml And confirm with: – kubectl get pvc The PVCs should all have a status of Bound, meaning that they’ve found their corresponding PVs. We can confirm this with: – kubectl get pv Deploying Azure SQL Edge with persistent storage Awesome stuff! Now we are good to go and deploy Azure SQL Edge to our Pi K8s cluster with persistent storage! Here’s the yaml file for Azure SQL Edge: – apiVersion: apps/v1 kind: Deployment metadata: name: sqledge-deployment spec: replicas: 1 selector: matchLabels: app: sqledge template: metadata: labels: app: sqledge spec: volumes: - name: sqlsystem persistentVolumeClaim: claimName: sqlsystem-pvc - name: sqldata persistentVolumeClaim: claimName: sqldata-pvc - name: sqllog persistentVolumeClaim: claimName: sqllog-pvc containers: - name: azuresqledge image: mcr.microsoft.com/azure-sql-edge:latest ports: - containerPort: 1433 volumeMounts: - name: sqlsystem mountPath: /var/opt/mssql - name: sqldata mountPath: /var/opt/sqlserver/data - name: sqllog mountPath: /var/opt/sqlserver/log env: - name: MSSQL_PID value: "Developer" - name: ACCEPT_EULA value: "Y" - name: SA_PASSWORD value: "Testing1122" - name: MSSQL_AGENT_ENABLED value: "TRUE" - name: MSSQL_COLLATION value: "SQL_Latin1_General_CP1_CI_AS" - name: MSSQL_LCID value: "1033" - name: MSSQL_DATA_DIR value: "/var/opt/sqlserver/data" - name: MSSQL_LOG_DIR value: "/var/opt/sqlserver/log" terminationGracePeriodSeconds: 30 securityContext: fsGroup: 10001 So we’re referencing our three persistent volume clams and mounting them as sqlsystem-pvc – /var/opt/mssql sqldata-pvc – /var/opt/sqlserver/data sqllog-pvc – /var/opt/sqlserver/log We’re also setting environment variables to set the default data and log paths to the paths mounted by persistent volume claims. To deploy: – kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/92ddea343d525f6c680d9e3fff4906c9/raw/4d1c071e9c515266662361e7c01a27cc162d08b1/azure-sql-edge-persistent.yaml To confirm: – kubectl get all All looks good! To dig in a little deeper: – kubectl describe pods -l app=sqledge Testing the persistent volumes But let’s not take Kubernetes’ word for it! Let’s create a database and see it persistent across pods. 
So expose the deployment: – kubectl expose deployment sqledge-deployment --type=LoadBalancer --port=1433 --target-port=1433 Get the External IP of the service created (provided by MetalLb configured in the previous post): – kubectl get services And now create a database with the mssql-cli: – mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "CREATE DATABASE [testdatabase];" Confirm the database is there: – mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "SELECT [name] FROM sys.databases;" Confirm the database files: – mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "USE [testdatabase]; EXEC sp_helpfile;" We can even check on the NFS server itself: – ls -al /mnt/sqledge/sqldata ls -al /mnt/sqledge/sqllog Ok, so the “real” test. Let’s delete the existing pod in the deployment and see if the new pod has the database: – kubectl delete pod -l app=sqledge Wait for the new pod to come up: – kubectl get pods -o wide And then see if our database is in the new pod: – mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "SELECT [name] FROM sys.databases;" And that’s it! We’ve successfully built a Pi NFS server to deploy persistent volumes to our Raspberry Pi Kubernetes cluster so that we can persist databases from one pod to another! Phew! Thanks for reading! The post Provisioning storage for Azure SQL Edge running on a Raspberry Pi Kubernetes cluster appeared first on SQLServerCentral.

A Note to the PASS Board of Directors from Blog Posts - SQLServerCentral

Anonymous
06 Dec 2020
2 min read
I just read with dismay that Mindy Curnutt has resigned. That's a big loss at a time when the future of PASS is in doubt and we need all hands engaged. The reasons she gives for leaving, with regard to secrecy and participation, are concerning, troublesome, yet not really surprising. The cult of secrecy has existed at PASS for a long time, as has the tendency of the Executive Committee to be a closed circle that acts as if it is superior to the Board, when in fact the Board of Directors has the ultimate say on just about everything. You as a Board can force issues into the open or even disband the Executive Committee, but to do that you'll have to take ownership and stop thinking of the appointed officers as all powerful. The warning about morally wrong decisions is far more concerning. Those of us out here in the membership don't know what's going on. PASS hasn't written anything in clear and candid language about the state of PASS and the options being considered, or asked what we think about those options. Is there a reason not to have that conversation? Are you sure that if you can find a way for PASS to survive, it will be one we can support and admire? Leading is about more than being in the room and making decisions. Are you being a good leader, a good steward? From the outside it sure doesn't seem that way. The post A Note to the PASS Board of Directors appeared first on SQLServerCentral.
The case for data communities: Why it takes a village to sustain a data-driven business from What's New

Anonymous
04 Dec 2020
9 min read
Forbes BrandVoice | Kristin Adderson | December 4, 2020

Editor's note: This article originally appeared in Forbes.

Data is inseparable from the future of work as more organizations embrace data to make decisions, track progress against goals and innovate their products and offerings. But to generate data insights that are truly valuable, people need to become fluent in data—to understand the data they see and participate in conversations where data is the lingua franca. Just as a professional who takes a job abroad needs to immerse herself in the native tongue, businesses who value data literacy need ways to immerse their people in the language of data.

"The best way to learn Spanish is to go to Spain for three weeks," said Stephanie Richardson, vice president of Tableau Community. "It is similar when you're learning the language of data. In a data community, beginners can work alongside people who know data and know how to analyze it. You're going to have people around you that are excited. You're going to see the language being used at its best. You're going to see the potential."

Data communities—networks of engaged data users within an organization—represent a way for businesses to create conditions where people can immerse themselves in the language of data, encouraging data literacy and fueling excitement around data and analytics.

The best data communities provide access to data and support its use with training sessions and technical assistance, but they also build enthusiasm through programs like internal competitions, user group meetings and lunch-and-learns. Community brings people together from across the organization to share learnings, ideas and successes. These exchanges build confidence and camaraderie, lifting morale and uniting people around a shared mission for improving the business with data. Those who have already invested in data communities are reaping the benefits, even during a global pandemic. People have the data training they need to act quickly in a crisis and know where to go when they have questions about data sources or visualizations, speeding up communications cycles. If building a new data community seems daunting during this time, there are small steps you can take to set a foundation for larger initiatives in the future.

Data communities in a work-from-home world

Before Covid-19, organizations knew collaboration was important. But now, when many work remotely, people are disconnected and further removed from business priorities. Data and analytics communities can be a unifying force that focuses people on the same goals and gives them a dedicated space to connect. For businesses wanting to keep their people active, engaged and innovating with their colleagues, data communities are a sound investment.

"Community doesn't have to be face-to-face activities and big events," said Audrey Strohm, enterprise communities specialist at Tableau. "You participate in a community when you post a question to your organization's internal discussion forum—whenever you take an action to be in the loop."

Data communities are well suited for remote collaboration and virtual connection. Some traits of a thriving data community—fresh content, frequent recognition and small, attainable incentives for participation—apply no matter where its members reside. Data communities can also spark participation by providing a virtual venue, such as an internal chat channel or forum, where members can discuss challenges or share advice.
Instead of spending hours spinning in circles, employees can log on and ask a question, access resources or find the right point of contact—all in a protected setting. Inside a data community at JP Morgan Chase JPMorgan Chase developed a data community to support data activities and to nurture a data culture. It emphasized immersion, rapid feedback and a gamified structure with skill belts—a concept similar to how students of the martial arts advance through the ranks. Its story shows that, sometimes, a focus on skills is not enough—oftentimes, you need community support. Speaking at Tableau Conference 2019, Heather Gough, a software engineer at the financial services company, shared three tips based on the data community at JPMorgan Chase: 1. Encourage learners to develop skills with any kind of data. Training approaches that center on projects challenge learners to show off their skills with a data set that reflects their personal interests. This gives learners a chance to inject their own passion and keeps the projects interesting for the trainers who evaluate their skills. 2. Not everyone will reach the mountain top, and that’s okay. Most participants don’t reach the top skill tier. Even those who only advance partway through a skill belt or other data literacy program still learn valuable new skills they can talk about and share with others. That’s the real goal, not the accumulation of credentials. 3. Sharing must be included in the design. Part of the progression through the ranks includes spending time sharing newly learned data skills with others. This practice scales as learners become more sophisticated, from fielding just a few questions at low levels to exchanging knowledge with other learners at the top tiers.  How to foster data communities and literacy While you may not be able to completely shift your priorities to fully invest in a data community right now, you can lay the groundwork for a community by taking a few steps, starting with these: 1. Focus on business needs The most effective way to stir excitement and adoption of data collaboration is to connect analytics training and community-related activities to relevant business needs. Teach people how to access the most critical data sources, and showcase dashboards from across the company to show how other teams are using data.  Struggling to adapt to new challenges? Bring people together from across business units to innovate and share expertise. Are your data resources going unused? Imagine if people in your organization were excited about using data to inform their decision making. They would seek those resources rather than begrudgingly look once or twice. Are people still not finding useful insights in their data after being trained? Your people might need to see a more direct connection to their work.  “Foundational data skills create a competitive advantage for individuals and organizations,” said Courtney Totten, director of academic programs at Tableau.  When these efforts are supported by community initiatives, you can address business needs faster because you’re all trained to look at the same metrics and work together to solve business challenges. 2. Empower Your Existing Data Leaders The future leaders of your data community shouldn’t be hard to find. Chances are, they are already in your organization, advocating for more opportunities to explore, understand and communicate with data. 
Leaders responsible for building a data community do not have to be the organization’s top data experts, but they should provide empathic guidance and inject enthusiasm. These people may have already set up informal structures to promote data internally, such as a peer-driven messaging channel. Natural enthusiasm and energy are extremely valuable to create an authentic and thriving community. Find the people who have already volunteered to help others on their data journeys and give them a stake in the development and management of the community. A reliable leader will need to maintain the community platform and ensure that it keeps its momentum over time. 3. Treat Community Like a Strategic Investment Data communities can foster more engagement with data assets—data sources, dashboards and workbooks. But they can only make a significant impact when they’re properly supported. “People often neglect some of the infrastructure that helps maximize the impact of engagement activities,” Strohm said. “Community needs to be thought of as a strategic investment.”  Data communities need a centralized resource hub that makes it easy to connect from anywhere, share a wide variety of resources and participate in learning modules. Other investments include freeing up a small amount of people’s time to engage in the community and assigning a dedicated community leader. Some communities fail when people don’t feel as though they can take time away from the immediate task at hand to really connect and collaborate. Also, communities aren’t sustainable when they’re entirely run by volunteers. If you can’t invest in a fully dedicated community leader at this time, consider opening up a small portion of someone’s role so they can help build or run community programs. 4. Promote Participation at Every Level Executive leadership needs to do more than just sponsor data communities and mandate data literacy. They need to be visible, model members. That doesn’t mean fighting to the top of every skill tree. Executives should, however, engage in discussions about being accountable for data-driven decisions and be open to fielding tough questions about their own use of data. “If you’re expecting your people to be vulnerable, to reach out with questions, to see data as approachable, you can help in this by also being vulnerable and asking questions when you have them,” said Strohm. 5. Adopt a Data Literacy Framework Decide what your contributors need to know for them to be considered data literate. The criteria may include learning database fundamentals and understanding the mathematical and statistical underpinnings of correlation and causation. Ready-made programs such as Tableau’s Data Literacy for All provide foundational training across all skill levels. Data communities give everyone in your organization a venue to collaborate on complex business challenges and reduce uncertainty. Ask your passionate data advocates what they need to communicate more effectively with their colleagues. Recruit participants who are eager to learn and share. And don’t be afraid to pose difficult questions about business recovery and growth, especially as everyone continues to grapple with the pandemic. Communities rally around a common cause. Visit Tableau.com to learn how to develop data communities and explore stories of data-driven collaboration.  

Goal Progress–November 2020 from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
3 min read
This is my report, which continues on from the Oct report. It’s getting near the end of the year, and I wanted to track things a little tighter, and maybe inspire myself to push.

Rating so far: C-

Reading Goals

Here were my goals for the year:

- 3 technical books
- 2 non-technical books – done

Books I’ve tackled:

- Making Work Visible – Complete
- Pro Power BI Desktop – 70% complete
- White Fragility – Complete
- The Biggest Bluff – Complete
- Team of Teams – 59% complete
- Project to Product – NEW

I’ve made progress here. I have completed my two non-technical books, and actually exceeded this. My focus moved a bit into the more business side of things, and so I’m on pace to complete 4 of these books. The tech books haven’t been as successful: with my project work, I’ve ended up not being as focused as I’d like on my career, and more focused on tactical things that I need to work on for my job. I think I’ve learned some things, but not what I wanted. My push for December is to finish Team of Teams, get through Pro Power BI Desktop, and then try to tackle one new tech book, either from the list I have or one I bought last winter and didn’t read.

Project Goals

Here were my project goals, working with software:

- A Power BI report that updates from a database
- A mobile app reading data from somewhere
- A website that showcases changes and data from a database

Ugh. I’m feeling bad here. I had planned on doing more Power BI work after the PASS Summit, thinking I’d get some things out of the pre-con. I did, but not practical things, so I need to put time into building a Power BI report that I can use. I’ve waffled between one for the team I coach, which has little data but would be helpful to the athletes, and a personal one. I’ve downloaded some data about my life, but I haven’t organized it into a database. I keep getting started with exercise data, Spotify data, travel data, etc., but not finishing.

I’ve also avoided working on a website, and actually having to maintain it in some way. Not a good excuse. I think the mobile app is dead for this year. I don’t really have enough time to dig in here, at least that’s my thought. The website, however, should be easier. I wanted to use an example from a book, so I should make some time each week, as a personal project, and actually build this out. That’s likely doable by Dec 21.

The post Goal Progress–November 2020 appeared first on SQLServerCentral.

Azure Synapse Analytics is GA! from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
2 min read
(Note: I will give a demo on Azure Synapse Analytics this Saturday, Dec 5th at 1:10pm EST, at the PASS SQL Saturday Atlanta BI (info) (register) (full schedule))

Great news! Azure Synapse Analytics is now GA (see announcement). While most of the features are GA, a few are still in preview.

For those of you who were using the public preview version of Azure Synapse Analytics, nothing has changed – just access your Synapse workspace as before.

For those of you who have a Synapse database (i.e., a SQL DW database) that was not under a Synapse workspace, your existing data warehouse resources are now listed under “Dedicated SQL pool (formerly SQL DW)” in the Azure portal (where you can still create a standalone database, called a SQL pool). You now have three options going forward for your existing database:

Standalone: Keep the database (called a SQL pool) as is and get none of the new workspace features listed here, but you are able to continue using your database, operations, automation, and tooling like before with no changes.

Enable Azure Synapse workspace features: Go to the overview page for your existing database and choose “New synapse workspace” in the top menu bar to get all the new features except unified management and monitoring. All management operations will continue via the SQL resource provider. Except for SQL requests submitted via Synapse Studio, all SQL monitoring capabilities remain on the database (dedicated SQL pool). For more details on the steps to enable the workspace features, see Enabling Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW).

Migrate to Azure Synapse workspace: Create a user-defined restore point through the Azure portal, create a new Synapse workspace or use an existing one, and then restore the database to get all the new features. All monitoring and management is done via the Synapse workspace and the Synapse Studio experience.

The features available under all three options are compared in the original post.

More info: Microsoft introduces Azure Purview data catalog; announces GA of Synapse Analytics

The post Azure Synapse Analytics is GA! first appeared on James Serra's Blog. The post Azure Synapse Analytics is GA! appeared first on SQLServerCentral.
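Whichever of the three options you choose, the dedicated SQL pool keeps the same T-SQL surface, so existing scripts, automation, and tooling continue to run. As a minimal, hedged sketch (the table name and columns below are hypothetical), typical dedicated-pool DDL like the following works unchanged against a standalone SQL pool or one attached to a Synapse workspace:

```sql
-- Hypothetical example: standard dedicated SQL pool DDL runs the same
-- whether the pool is standalone or inside a Synapse workspace.
CREATE TABLE dbo.FactSales
(
    SaleKey     BIGINT        NOT NULL,
    CustomerKey INT           NOT NULL,
    SaleDate    DATE          NOT NULL,
    Amount      DECIMAL(18,2) NOT NULL
)
WITH
(
    DISTRIBUTION = HASH(CustomerKey),   -- spread rows across distributions by customer
    CLUSTERED COLUMNSTORE INDEX         -- typical storage choice for large fact tables
);
```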

5 Things You Should Know About Azure SQL from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
5 min read
Azure SQL offers up a world of benefits that can be captured by consumers if implemented correctly. It will not solve all your problems, but it can solve quite a few of them. When speaking to clients I often run into misconceptions as to what Azure SQL can really do. Let us look at a few of these to help eliminate any confusion.

You can scale easier and faster

Let us face it, I am old. I have been around the block in the IT realm for many years. I distinctly remember the days when scaling server hardware was a multi-month process that usually ended with the scaled hardware already being out of date by the time the process was finished. With the introduction of cloud providers, the ability to scale vertically or horizontally can usually be accomplished within a few clicks of the mouse. Often, once initiated, the scaling process is completed within minutes instead of months. This is multiple orders of magnitude better than having to procure hardware for such needs.

The added benefit of this scaling ability is that you can then scale down when needed to help save on costs. Just like scaling up or out, this is accomplished with a few mouse clicks and a few minutes of your time.

It is not going to fix your performance issues

If you currently have performance issues with your existing infrastructure, Azure SQL is not necessarily going to solve your problem. Yes, you can hide the issue with faster and better hardware, but the issue will still exist, and you need to deal with it. Furthermore, moving to Azure SQL could introduce additional issues if the underlying performance issue is not addressed beforehand.

Make sure to look at your current workloads and address any performance issues you might find before migrating to the cloud. Also ensure that you understand the service tiers offered for the Azure SQL products. By doing so, you’ll help guarantee that your workloads have enough compute resources to run as optimally as possible.

You still must have a DR plan

If you have ever seen me present on Azure SQL, I’m quite certain you’ve heard me mention that one of the biggest mistakes you can make when moving to any cloud provider is not having a DR plan in place. There are a multitude of ways to ensure you have a proper disaster recovery strategy regardless of which Azure SQL product you are using. Platform as a Service (Azure SQL Database or SQL Managed Instance) offers automatic database backups, which solves one DR issue for you out of the gate. PaaS also offers geo-replication and automatic failover groups for additional disaster recovery solutions, which are easily implemented with a few clicks of the mouse. When working with SQL Server on an Azure virtual machine (which is Infrastructure as a Service), you can perform database backups through native SQL Server backups or tools like Azure Backup.

Keep in mind that high availability is baked into the Azure service at every turn. However, high availability does not equal disaster recovery, and even cloud providers such as Azure do incur outages that can affect your production workloads. Make sure to implement a disaster recovery strategy and, furthermore, practice it.

It could save you money

When implemented correctly, Azure SQL could indeed save you money in the long run. However, it all depends on what your workloads and data volume look like.
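To make the scaling and cost points concrete, here is a minimal, hedged T-SQL sketch; the database name and service objectives are placeholders, and the tiers available to you depend on your server and region. The statements are asynchronous: they return immediately and the tier change completes in the background.

```sql
-- Hypothetical database name and tiers: scale an Azure SQL Database up
-- ahead of a heavy workload (run against the logical server, e.g. from master).
ALTER DATABASE [SalesDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S6');

-- Scale back down afterwards to control cost.
ALTER DATABASE [SalesDb] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');

-- The serverless tier mentioned below is selected the same way; the auto-pause
-- delay itself is configured through the portal, CLI, or an ARM template.
ALTER DATABASE [SalesDb] MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_1');
```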
Due to the ease of scalability Azure SQL offers (even when scaling virtual machines), secondary replicas of your data could, for example, run at a lower service tier to minimize costs. In the event a failover needs to occur, you could then scale the resource to a higher-performing service tier to ensure workload compute requirements are met.

Azure SQL Database also offers a serverless tier that provides the ability for the database to be paused. When the database pauses, you will not be charged for any compute consumption. This is a great resource for unpredictable workloads.

Saving costs in any cloud provider means knowing what options are available, as well as continually evaluating which options best suit your needs.

It is just SQL

Azure SQL is not magical, quite honestly. It really is just the same SQL engine you are used to with on-premises deployments. The real difference is how you engage with the product, and sometimes that can be scary if you are not used to it. As a self-proclaimed die-hard database administrator, it was daunting for me when I started to learn how Azure SQL would fit into modern-day workloads and potentially help save organizations money. In the end, though, it’s the same product that many of us have been using for years.

Summary

In this blog post I’ve covered five things to know about Azure SQL. It is a powerful product that can help transform your own data ecosystem into a more capable platform to serve your customers for years to come. Cloud is definitely not a fad and is here to stay. Make sure that you expand your horizons and look upward, because that’s where the market is going. If you aren’t looking at Azure SQL currently, what are you waiting for? Just do it.

© 2020, John Morehouse. All rights reserved.

The post 5 Things You Should Know About Azure SQL first appeared on John Morehouse. The post 5 Things You Should Know About Azure SQL appeared first on SQLServerCentral.

Daily Coping 4 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.

Today’s tip is to enjoy new music today. Play, sing, dance, or listen.

I enjoy lots of types of music, and I often look to grab something new from Spotify while I’m working, letting a particular album play through, or even going through the works of an artist, familiar or brand new.

Recently I was re-watching Chappelle’s Show online, and in the 2nd or 3rd episode, he has Mos Def on as a guest. I do enjoy rap, and I realized that I had never really heard much from Mos. The next day I pulled up his catalog and let it play through while working. I love a smooth, continuous rap artist who brings a melody and a rhythm to the words. Mos Def does this, and I enjoyed hearing him entertain me for a few hours. If you like rap and haven’t gone through his stuff, give him a listen.

The post Daily Coping 4 Dec 2020 appeared first on SQLServerCentral.