
Tech News

3711 Articles
Anonymous
21 Dec 2020
2 min read

Daily Coping 21 Dec 2020 from Blog Posts - SQLServerCentral

I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.

Today’s tip is to see how many different people you can smile at today.

I often wave and smile at people when I drive around my neighborhood. I smile and wave at people in town, especially kids. I don’t seem to meet a lot of people these days, and with masks, I can’t always let someone know I’ve smiled or see if they smile back. I can, however, count my smiles. On a recent day I had to go to the doctor and the grocery before heading home. I smiled at these people, some of whom responded, so I’m guessing they could tell I was smiling under my mask:

- nurse admitting me into the facility
- doctor examining me
- 2nd nurse taking my blood
- receptionist returning paperwork
- employee walking into store
- cashier
- 3 family members

Not a huge total, but I managed to get 9 people today. The post Daily Coping 21 Dec 2020 appeared first on SQLServerCentral.

Anonymous
21 Dec 2020
3 min read

Estimating the storage savings by removing columns with NULL value across the table or marking them as SPARSE from Blog Posts - SQLServerCentral

In the previous article, Find columns with NULL values across the table, we discussed that storage space can be saved by removing columns that are NULL across the entire table or by marking them as SPARSE. We also learnt about the query to find all such columns across the tables of a database. In this article we’ll learn to estimate the storage savings from taking the necessary action on these columns, either by removing them or by marking them as SPARSE.

It becomes extremely important to be ready with the relevant data and stats when we propose anything. Similarly, when we have to approach senior leadership for approval to take any such action on a production database, we need data supporting our claim of storage savings. I found this query very useful: it gave me the table-wise data, which we finally aggregated for the total storage savings. The query provides the following columns in its output:

- TableName: the name of the table
- TotalColumns: the count of columns in the table that are NULL across all rows
- TotalRows: the count of rows in the table
- Estimated_Savings_Bytes: the estimated storage savings in bytes

Note: You may find a table named tables_with_null_values_across referenced in the query. This is the same table that was created in the previous article; this article is a continuation of Find columns with NULL values across the table.
SELECT DV.TableName
     , COUNT(DISTINCT DV.ColumnName) AS TotalColumns
     , DV.TotalRows
     , SUM(DV.TotalRows * CASE
                               WHEN COL.DATA_TYPE IN ('CHAR', 'NCHAR') THEN COL.CHARACTER_OCTET_LENGTH
                               WHEN COL.DATA_TYPE = 'TINYINT' THEN 1
                               WHEN COL.DATA_TYPE = 'SMALLINT' THEN 2
                               WHEN COL.DATA_TYPE = 'INT' THEN 4
                               WHEN COL.DATA_TYPE = 'BIGINT' THEN 8
                               WHEN COL.DATA_TYPE IN ('NUMERIC', 'DECIMAL') THEN 9
                               WHEN COL.DATA_TYPE = 'FLOAT' THEN 8
                               WHEN COL.DATA_TYPE = 'DATE' THEN 3
                               WHEN COL.DATA_TYPE = 'TIME' THEN 5
                               WHEN COL.DATA_TYPE = 'SMALLDATETIME' THEN 4
                               WHEN COL.DATA_TYPE = 'DATETIME' THEN 8
                               WHEN COL.DATA_TYPE = 'BIT' THEN 1
                               ELSE 2
                          END) AS Estimated_Savings_Bytes
FROM tables_with_null_values_across DV WITH (NOLOCK)
INNER JOIN INFORMATION_SCHEMA.COLUMNS COL WITH (NOLOCK)
    ON COL.TABLE_NAME = PARSENAME(DV.TableName, 1)
   AND COL.COLUMN_NAME = PARSENAME(DV.ColumnName, 1)
GROUP BY DV.TableName
       , DV.TotalRows

The post Estimating the storage savings by removing columns with NULL value across the table or marking them as SPARSE appeared first on SQLServerCentral.
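Once candidate columns are confirmed, the follow-up action the article describes can be sketched like this. The table and column names below are hypothetical, and note that ALTER COLUMN requires restating the column's full data type:

```sql
-- Hypothetical example: dbo.Orders has a column, LegacyNotes, that is NULL in every row.

-- Option 1: mark the column as SPARSE so that NULL values take no storage.
ALTER TABLE dbo.Orders
    ALTER COLUMN LegacyNotes NVARCHAR(200) SPARSE NULL;

-- Option 2: if the column is no longer needed at all, drop it.
ALTER TABLE dbo.Orders
    DROP COLUMN LegacyNotes;
```

Either change should be tested on a non-production copy first; also keep in mind that a SPARSE column makes any non-NULL values it later holds slightly more expensive to store.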

Anonymous
20 Dec 2020
2 min read

The 2021 Plan for SQLOrlando from Blog Posts - SQLServerCentral

SQLOrlando Annual Plan 2021 Final (download)

Over the past couple of years we’ve been slowly evolving: from a fairly ad hoc plan of doing what we did last year, to a semi-structured plan that was mainly bullet points, to a more structured and written-out plan for 2021. Writing out a formal(ish) plan supports these goals:

- It explains clearly to the Board (of SQLOrlando) what we intend to do (and ultimately they can accept or amend the plan – it’s been approved for 2021)
- It explains clearly to our community what we intend to do – this document is already public (but not announced)
- It’s our authorization to spend
- It’s a way to work on continuity. If one of us gets tired, distracted, whatever, we have a map of where to go (and an operating manual and Trello board to support it)
- Not least, it makes us think about what we want to do

I like having a plan. It’s certainly a less-than-perfect plan and I’m sure things will change (some already have, with the end of PASS). It was written without nearly enough community input, something I hope we can improve on next year. I like transparency, and for me this is walking the walk. No reason for secrets here.

It’s an ambitious plan for sure, and my goal isn’t to say that you need to do what we’re doing in terms of the number or types of events. Borrow ideas if you like them, absolutely, but do the things that excite you, your volunteers, and your community. It’s easy (well, sorta) to do more events, but it’s a lot harder to get more people to attend a single event, something we haven’t figured out in Orlando.

If you write a plan for your group, public or not, I’d love to see it. More ideas, different implementations – I’ll take those wherever I can find them. The post The 2021 Plan for SQLOrlando appeared first on SQLServerCentral.

Anonymous
20 Dec 2020
4 min read

Some Post-PASS Thoughts on Local User Groups from Blog Posts - SQLServerCentral

For all the groups, the most immediate need is to rescue what they can of the mailing list stored at PASS.org and to stand up a new landing page and/or meeting registration site. The challenge with the mailing list is that you can’t export it (privacy!), so the best you can do is email the list (multiple times) to tell people the new home of your group.

We’ve been using Meetup for all our free events and it’s been decent; its biggest strength is that people find us far more often than they did by joining PASS (they had to know to join). The downside for a group of more than 50 members is that you have to pay. In Orlando that has been $180 a year, but it looks like that might be increasing. It’s far less capable than Eventbrite for paid events. For us it’s been worth it, but we’re in the same situation as with PASS – if something happens to Meetup, we lose the list. That means either finding something we like better or building out an alternate list (LinkedIn groups are not bad for this), because I really don’t like a single point of failure.

Whether you need anything more than Meetup (or whatever equivalent you pick) is really up to you. In Orlando we run sqlorlando.org on WordPress (the hosted version) for about $45 a year (so that we can use our domain). It’s not much to look at so far, but we’ve budgeted some time and money to work on that this year. It’s important as a non-profit to have some place to post by-laws and minutes (Meetup doesn’t let you upload files), and I see some value, whenever I meet someone who’s interested, in being able to say “just go to sqlorlando.org and you’ll find all the things”. It’s one more thing to pay for and maintain, though, so it’s definitely optional depending on your goals. To say that differently: think about what will work for your group. You haven’t got a lot of time, but you have some time to decide.

Last night I removed the reference to PASS from our site, added a task to remove it from our operating manual when we do the next update, and removed the link/suggestion to join from the announcement email we send when someone joins Meetup. Today I’m going to update our Trello template cards to remove the tasks for emailing the group monthly, posting new events, and closing out completed events. It’s still important to track what we’ve done, so I added a couple more columns to our planning sheet for now. To a degree this simplifies the workflow for us.

[Image: Planning/Execution Tracking]

Not to minimize the jump-through-hoops exercise at the end of the year, but once you figure out where to send people and try to get some of the list to move, you’re back to where you were – running your group. Whether it feels that way probably depends on your group identity. Orlando back in 2004 was just a PASS chapter; we did it the way they suggested (more or less) and that was ok. In 2020 we’re a non-profit that chose to align with PASS because it aligned with our goals for serving the Orlando tech community. At the monthly-meeting level, losing PASS just doesn’t change what we do at all (clearly the bigger and more painful impact is losing SQLSaturday.com, and I’ll write more on that in the next week or two). None of that removes the emotional impact. We’re still a network, just more loosely coupled than we were – for now.

December is a good time for reflecting and planning, so maybe spend an hour or two thinking about how your group can serve your local community next year, and write it down given all the changes. In my next post I’ll share what we had planned for next year (though you can see the event list above already) in Orlando and how we’ll be amending that plan. The post Some Post-PASS Thoughts on Local User Groups appeared first on SQLServerCentral.

Anonymous
19 Dec 2020
1 min read

TF-IDF in .NET for Apache Spark Using Spark ML v2 from Blog Posts - SQLServerCentral

Spark ML in .NET for Apache Spark Apache Spark has had a machine learning API for quite some time and this has been partially implemented in .NET for Apache Spark. In this post we will look at how we can use the Apache Spark ML API from .NET. This is the second version of this post, the first version was written before version 1 of .NET for Apache Spark and there was a vital piece of the implementation missing which meant although we could build the model in . The post TF-IDF in .NET for Apache Spark Using Spark ML v2 appeared first on SQLServerCentral.

Anonymous
19 Dec 2020
1 min read

ETL Antipattern: Load Processes that Don’t Scale from Blog Posts - SQLServerCentral

One of the most significant design considerations in ETL process development is the volume of data to be processed. Most ETL processes have time constraints that require them to complete their load operations within a given window, and the time required to process data will often dictate the design of the load. One of the more common mistakes I’ve seen... The post ETL Antipattern: Load Processes that Don’t Scale appeared first on Tim Mitchell. The post ETL Antipattern: Load Processes that Don’t Scale appeared first on SQLServerCentral.
Anonymous
18 Dec 2020
1 min read

Placing my Bid for SQLSaturday.com from Blog Posts - SQLServerCentral

I’m making my bid. I don’t know that this brand needs to continue, or that it will, but I’d like to hold it for posterity, at least on an interim basis. Feel free to submit your own bid. I am hoping PASS is willing to return this brand to the founders. The post Placing my Bid for SQLSaturday.com appeared first on SQLServerCentral.

Anonymous
18 Dec 2020
3 min read

Speaker Guidance: Save Your Data from Blog Posts - SQLServerCentral

The PASS organization (and likely SQL Saturday) is shutting down its sites on 15 Jan 2021. This means that potentially lots of links and data will disappear. There are people in the community saving some of the data and images. I’m hoping to put some of those in an archive that will live on and remain accessible, but for speakers, the links to events are part of our living CV. It may be important for you to capture some of this. I’ve got a few things for you to do, especially those of you who are Microsoft (or other) MVPs, or nominated. I have thoughts here on image schedules and your living CV. Note that you might not care about this, or you might not be sure. However, once the data is gone, it’s gone, so you might want to get it now and decide on its value later. Sections here:

- Image Schedules
- A Speaking CV
- Preserving SQL Sat Data

Image Schedules

I should have done this years ago, as some events (like DevConnections) get rid of their site for particular events and only show the schedule for the next one. As a matter of fact, they’re done. For SQL Saturdays, you can get the schedule for an event on their page. For example, I spoke at SQL Sat 997 – Salt Lake City. I can get the schedule from an image capture, or, at the bottom, there is an export to PDF. I recommend you save these, then upload the image/PDF to someplace you can link to from your CV. Going forward, I’d continue to do this, and even take pictures of the marquee when we get back to live events.

Speaking CV

I track lots of activity for my Microsoft MVP award, but I also keep a Speaking CV I can point to from my resume. For me, this is a static page on my WordPress blog that I update; I separate things by year and include the events and links. Typically I link to a schedule page for an event, but these will all be broken, or some of them will. One of my tasks this winter will be to go capture some of the visual evidence of these, if I can. That way I can preserve things. One other note: the first day back at work after I speak, I usually update this page and ensure it’s current.

Preserving SQL Sat Data

For posterity, or maybe just for me, I was curious what activity I’d had. If I log into the PASS site, I can go to My SQL Saturday and see the events where I submitted things. If you’re curious, go here, and then you know where to go to save an image. The footer gets in the way, but you can copy/paste and re-sort this all if you need to. The post Speaker Guidance: Save Your Data appeared first on SQLServerCentral.

Anonymous
18 Dec 2020
1 min read

Daily Coping 18 Dec 2020 from Blog Posts - SQLServerCentral

I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.

Today’s tip is to buy an extra item and donate it to a local food bank.

The food bank has always been important to me. I used to volunteer with my kids, helping them understand how lucky they are and how others may not be. Every year I’ve tried to donate some supplies to one of those close to me, though probably not often enough. This year, with the strange world we live in and fewer trips, I know there are plenty of people in need. I decided to do some extra donations to the local bank. Rather than give them some food items, I purchased some local gift cards and gave them those. The food bank uses these for perishable items, like milk, eggs, etc. I also set a reminder to do this regularly. The post Daily Coping 18 Dec 2020 appeared first on SQLServerCentral.

Anonymous
18 Dec 2020
2 min read

SQLSaturday is dead, long live DataSaturdays from Blog Posts - SQLServerCentral

This is a very brief post to inform you that PASS has died, killed by the for-profit company behind it. That’s sad beyond words, but we, as a community, are not destined for the same fate. The community will rise again and build something new. One of the things that we don’t want to lose is SQLSaturday. It’s been a substantial vehicle for involving community members locally and globally. It has been the launchpad for many community speakers. It has been the opportunity for many people to connect with other community members, share their knowledge, and learn something new. Connect, share, learn… that sounds familiar, right? We don’t want to take the existing SQL Saturday and give it a new name; we want to start a new community initiative that enables us to continue delivering events. It needs to be a platform that allows us to continue doing what we were doing. Do you want to be involved? Here’s what you can do:

- Head to datasaturdays.com and have a look. There’s not much content right now, but you have to start from something…
- Go to GitHub and join the discussion.

There are many aspects that we need to cover, and we know we’re not perfect right now. Please bear with us; we want to improve. The main message here is that we need your help to continue running events for people to share, network, and learn. A name is just a name, and there’s more that identifies our community. Come and help us – be a part of the solution. The post SQLSaturday is dead, long live DataSaturdays appeared first on SQLServerCentral.
Anonymous
18 Dec 2020
3 min read

Workout Wednesdays for Power BI in 2021 from Blog Posts - SQLServerCentral

I’m excited to announce that something new is coming to the Power BI community in 2021: Workout Wednesday! Workout Wednesday started in the Tableau community and is expanding to Power BI in the coming year. Workout Wednesdays present challenges to recreate a data-driven visualization as closely as possible. They are designed to help you improve your skills in Power BI and Tableau.

How You Can Participate

Watch for the Power BI challenge to be published on Wednesdays in 2021. The challenge will contain requirements and a dataset. Use the dataset to create the desired end result. Then share your workout! You can post your workout to the Data Stories Gallery or your blog, or just share a public link. If you aren’t able to share a public link – perhaps because that option is disabled in your Power BI tenant, or you don’t have a Power BI tenant – a gif, a video, or even some screenshots are just fine. To formally participate:

- Post to Twitter using both the #WOW2021 and #PowerBI hashtags, along with a link/image/video of your workout.
- Include a link to the challenge on the Workout Wednesday site.
- Please note the week number in your description, if possible.

Community Growth

I’m looking forward to Workout Wednesdays for a couple of reasons. First, I think Power BI needs more love in the data visualization department. We need to be talking about effective visualization techniques and mature past ugly pie charts and tacky backgrounds. I think Workout Wednesdays will help us individually grow those skills, but it will also foster more communication and sharing of ideas around data visualization in Power BI. That in turn will lead to more product enhancement ideas and conversations with the Power BI team, resulting in a better product and a stronger community. Second, I’m also excited to see the cross-pollination and cross-platform learning we will achieve by coming together as a data visualization community that isn’t focused on one single tool.
There is a lot Tableau practitioners and Power BI practitioners can learn from each other.

Join Me In January

Keep an eye out on Twitter and the Workout Wednesday website for the first challenge, coming January 6. While it would be great if you did the workout every single week, don’t be concerned if you can’t participate every week. A solution will be posted about a week later, but nothing says you can’t go back and do workouts from previous weeks as your schedule allows. I look forward to seeing all of your lovely Workout Wednesday solutions next year! The post Workout Wednesdays for Power BI in 2021 appeared first on SQLServerCentral.

Anonymous
18 Dec 2020
1 min read

ASF 035: Alex Yates interview (part 2) from Blog Posts - SQLServerCentral

Alex is a Data Platform MVP who loves DevOps. He has been helping data professionals apply DevOps principles to relational database development and deployment since 2010. He’s most proud of helping Skyscanner develop the ability to deploy 95 times a day. Alex has worked with clients on every continent except Antarctica – so he’s keen to meet anyone who researches penguins. The post ASF 035: Alex Yates interview (part 2) appeared first on SQLServerCentral.

Anonymous
18 Dec 2020
5 min read

December Developer Platform news: Personal Access Tokens update, auto-disabling Webhooks, and JupyterLab integration from What's New

Geraldine Zanolli, Developer Evangelist | Kristin Adderson | December 18, 2020

Every month is like Christmas for Developer Program members, because we strive to delight our members as we showcase the latest projects from our internal developer platform and tools engineers. For the last Sprint Demos, we featured some exciting updates: Personal Access Token impersonation, auto-disabling Webhooks, a new Webhooks payload for Slack, and JupyterLab integration for the Hyper API. Check out the gifts of increased communication, time, and security that these updates will bring.

Personal Access Token (PAT) impersonation

One of the use cases for the REST API is to query available content (e.g. projects, workbooks, data sources) for certain users. For embedding scenarios specifically, we often want to load up end-user-specific content within the application. The way to do this today is via impersonation, by which a server admin can impersonate a user, query as that user, and retrieve content that user has access to based on permissions within Tableau. Today, server admins can already impersonate users by sending over the user’s unique userID as part of the sign-in request; however, in order to do this, they need to hardcode their username and password in any scripts requiring impersonation. Over a year ago, we released Personal Access Tokens (PATs), which are long-lived authentication tokens that allow users to run automation with the Tableau REST API without hard-coding credentials or requiring an interactive login. In the 2021.1 release, we are going to introduce user impersonation support for PATs, the last piece of functionality previously supported only by hard-coded credentials in REST API scripts. So, why not update all your scripts to use PATs today?

Auto-disable Webhooks

Webhooks is a notification service that allows you to integrate Tableau with any external server.
Anytime an event happens on Tableau, Tableau sends an HTTP POST request to the external server. Once the external server receives the request, it can respond to the event. But what happens when the Webhook fails? You might have created multiple Webhooks on your site for testing that are no longer set up properly, which means you’ll want to manually disable them or delete them. Today, every time a Webhook is triggered, it attempts to connect to the external server up to four times; after four times, it counts as a failed delivery attempt. In our upcoming product releases, after four failed delivery attempts the Webhook will be automatically disabled and an email will be sent to the Webhook owner. But don't worry: if you have a successful delivery attempt before reaching a fourth failed attempt, the counter is reset to zero. As always, you can configure these options on Tableau Server.

Slack: New payload for Webhooks

Since the release of Webhooks, we noticed that one of the most popular use cases is Slack. Tableau users want to be notified on Slack when an event happens on Tableau. Today, this use case doesn’t work out of the box. You need to set up middleware in order to send Webhooks from Tableau to Slack, because the payload that we’re sending from Tableau has a different format than the payload that Slack is asking for. (It's like speaking French to someone who only speaks German: you need a translator in the middle.) In the upcoming 2021.1 release, you’ll be able to create new Webhooks to Slack with no need for middleware! We’re going to add an additional field to the payload.

Hyper API: JupyterLab integration

Hyper API is a powerful tool, but with the new command-line interface around Hyper API, will it be even more powerful? It will indeed!
We added the command-line interface around Hyper API to our hyper-api-samples in our open-source repository, so you can directly run SQL queries against Hyper. We integrated with an existing command-line interface infrastructure – the Jupyter infrastructure – giving you the ability to use Hyper API directly within JupyterLab. If you’re not familiar with JupyterLab, it’s a web-based IDE mostly used by data scientists. With the JupyterLab integration, it has never been easier to prototype new functionality: you can run your SQL queries and check the results without having to write a complete program around Hyper API. Debugging also becomes easier: you can isolate your queries to find the root cause of your issue. And don’t forget about all the ad hoc, analytical queries that you can now run on data directly from your console. Get started using JupyterLab in a few minutes.

Updates from the #DataDev Community

The #DataDev community continues to share their knowledge with others and drive innovation:

- Robert Crocker (twitter @robcrock) published two tutorials on the JavaScript API
- Elliott Stam (twitter @elliottstam) launched a YouTube channel and published multiple videos on the Tableau REST APIs
- Andre de Vries (twitter @andre347_) also shared on YouTube a video explaining Trusted Authentication
- Anya Prosvetova (twitter @Anyalitica), inspired by the Brain Dates at TC-ish, launched monthly DataDev Happy Hours to chat about APIs and Developer Tools

Join the #DataDev community to get your invitation to our exclusive Sprint Demos and be the first to know about Developer Platform updates – directly from the engineering team. See you next year!
Anonymous
17 Dec 2020
1 min read

Daily Coping 17 Dec 2020 from Blog Posts - SQLServerCentral

I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.  Today’s tip is to be generous and feed someone with food, love, or kindness today. My love language is Acts of Service, and I do this often, preparing things for the family when I can. Recently I asked my son what he’d want for dinner. He comes down once or twice a month from college for a few days, and I try to ensure he enjoys the time. His request: ramen. I put this together for him, and the family, last Friday night. The sushi I bought, because that’s something he enjoys, and I’m not nearly as good as some local chefs. The post Daily Coping 17 Dec 2020 appeared first on SQLServerCentral.

Anonymous
16 Dec 2020
2 min read

No Scalars with JSON_QUERY–#SQLNewBlogger from Blog Posts - SQLServerCentral

Another post for me that is simple and hopefully serves as an example for people trying to get started blogging as #SQLNewBloggers.

I started to dig into JSON queries recently, and as I continued to experiment with JSON, this struck me as strange. Why is there a NULL in the result? The path looks right. This appears to be somewhere I ought to get a result back. I looked up the JSON_QUERY documentation, and it says I get an object or array back. I’d somewhat expect that position, while containing a single value, could be seen as an object of {“setter”}. The fact that I need to know I have a single value here seems like poor design. If the document changes, perhaps someone might enter this:

DECLARE @json NVARCHAR(1000) = N'
{
  "player": {
    "name": "Sarah",
    "position": "setter, DS"
  },
  "team": "varsity"
}';

In this case, a JSON_VALUE would fail, while a JSON_QUERY wouldn’t work in the first example above. This means that I need to modify my code based on the documents. I don’t like this, but I need to know it, so if you work with JSON, make sure you know how these functions work.

SQLNewBlogger

While writing the previous post, I changed one of the function calls and got the NULL. I had to fix things for the other post, but I kept the query and then spent about 10 minutes writing this one to show a little thought about the language. You can easily take something you are confused about, a mistake you made, or something you wonder about, and write your own post. The post No Scalars with JSON_QUERY–#SQLNewBlogger appeared first on SQLServerCentral.
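As a minimal sketch of the split the post describes (assuming SQL Server 2016 or later, where the JSON functions were introduced), JSON_VALUE extracts scalars while JSON_QUERY extracts objects and arrays, and each returns NULL when pointed at the other kind of value:

```sql
DECLARE @json NVARCHAR(1000) = N'
{
  "player": {
    "name": "Sarah",
    "position": "setter"
  },
  "team": "varsity"
}';

SELECT
    JSON_VALUE(@json, '$.player.position') AS value_on_scalar,  -- 'setter'
    JSON_QUERY(@json, '$.player.position') AS query_on_scalar,  -- NULL: not an object or array
    JSON_QUERY(@json, '$.player')          AS query_on_object,  -- the {...} fragment as text
    JSON_VALUE(@json, '$.player')          AS value_on_object;  -- NULL: not a scalar
```

In the default lax path mode, a path that points at the "wrong" kind of value returns NULL rather than raising an error, which is why the mismatch is easy to miss; strict mode ('strict $.player.position') raises an error instead.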