Tech News - Databases

233 Articles

Ignoring Comments in SQL Compare from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
4 min read
Recently I had a client that wanted to know how they could use SQL Compare to catch actual changes in their code, but not have comments show up as changes. This is fairly easy to do, and this post looks at how it works.

Setting up a Scenario

Let's say I have two databases that are empty. I'll name them Compare1 and Compare2. I'll run this code in Compare1:

CREATE TABLE MyTable
(
    MyKey INT NOT NULL IDENTITY(1, 1) CONSTRAINT MyTablePk PRIMARY KEY
    , MyVal VARCHAR(100)
);
GO
CREATE PROCEDURE GetMyTable @MyKey INT = NULL
AS
IF @MyKey IS NOT NULL
    SELECT @MyKey AS MyKey, mt.MyVal
    FROM dbo.MyTable AS mt
    WHERE mt.MyKey = @MyKey;
ELSE
    SELECT mt.MyKey, mt.MyVal
    FROM dbo.MyTable AS mt;
SELECT 1 AS One;
RETURN;
GO

I'll run the same code in Compare2 and then run SQL Compare 14 against these two databases. As expected, I find no differences. I used the default options here, just picking the databases and running the comparison.

Let's now change some code. In Compare2, I'll adjust the procedure code to look like this:

CREATE OR ALTER PROCEDURE GetMyTable @MyKey INT = NULL
AS
/* Check for a parameter not passed in.
   If it is missing, then get all data. */
IF @MyKey IS NOT NULL
    SELECT @MyKey AS MyKey, mt.MyVal
    FROM dbo.MyTable AS mt
    WHERE mt.MyKey = @MyKey;
ELSE
    SELECT mt.MyKey, mt.MyVal
    FROM dbo.MyTable AS mt;
SELECT 1 AS One;
RETURN;
GO

I can refresh my project, and now I see there is a difference. This procedure is flagged as having 4 different lines, as you see in the image below. However, the procedure isn't really different; I've just added comments to one of the procs. You might view this as different in terms of how you run software development, but to the SQL Server engine, these procs are the same. How can I avoid flagging this as a difference and causing a deployment of this code?

Changing Project Options

Redgate has thought of this. In the SQL Compare toolbar, there is an "Edit Project" button. If I click this, I get the dialog that normally starts SQL Compare, with my project and the databases selected. Notice that there are actually four choices at the top of this dialog, with the rightmost one being "Options". If I click this, there are lots of options. I've scrolled down a bit, to the Ignore section. In here, you can see my mouse on the "Ignore comments" option. I'll click that, then click Compare Now, which refreshes my project. Now all objects are shown as identical. However, if I expand the stored procedure object, I can still see the difference. The difference is just ignored by SQL Compare. This lets me track the differences and see them, but not have the project flag them for deployment.

If I'm using any of the Redgate automation tools, the command line option for this is IgnoreComments, or icm. You can pass this into any of the tools to prevent comments from causing a deployment by themselves.

This also works with inline comments. I'll alter the procedure in Compare1 with this code:

CREATE OR ALTER PROCEDURE GetMyTable @MyKey INT = NULL
AS
IF @MyKey IS NOT NULL
    SELECT @MyKey AS MyKey, mt.MyVal
    FROM dbo.MyTable AS mt
    WHERE mt.MyKey = @MyKey;  -- parameter value filter
ELSE
    SELECT mt.MyKey, mt.MyVal
    FROM dbo.MyTable AS mt;
SELECT 1 AS One;   -- second result set.
RETURN;
GO

The refreshed project sees the differences, but this is still seen as an identical object for the purposes of deployment.
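As a small aside of my own (not part of the original post), you can see why a purely text-based comparison flags comment-only changes by looking at the stored module text in each database. This is a minimal, illustrative query against the same Compare1/Compare2 databases and procedure used above:

-- Illustrative check: the comment-only change lives in the stored module text,
-- which is why a plain text diff flags it even though execution is unchanged.
SELECT 'Compare1' AS DatabaseName, m.definition
FROM Compare1.sys.sql_modules AS m
WHERE m.object_id = OBJECT_ID(N'Compare1.dbo.GetMyTable')
UNION ALL
SELECT 'Compare2' AS DatabaseName, m.definition
FROM Compare2.sys.sql_modules AS m
WHERE m.object_id = OBJECT_ID(N'Compare2.dbo.GetMyTable');

Ignoring comments at the comparison level, as the post describes, is simpler than trying to strip comments out of the definitions yourself.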
If you are refactoring code, perhaps by just adding comments or clarifying something, you often may not want a deployment triggered just from changing the notes you leave for other developers. SQL Compare can help here, as can all the Redgate tools. I would recommend this option always be set, unless you have a good reason to allow comments to trigger a deployment. Give SQL Compare a try today if you’ve never used it, and if you have it, enable this in your projects. The post Ignoring Comments in SQL Compare appeared first on SQLServerCentral.

PASS SQLSaturday’s I will be speaking at from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
2 min read
I will be speaking at two upcoming PASS SQLSaturday’s. These are free events that you can attend virtually:

Azure Synapse Analytics: A Data Lakehouse
12/5/20, 1:10pm EST, PASS SQL Saturday Atlanta BI (info)

Azure Synapse Analytics is Azure SQL Data Warehouse evolved: a limitless analytics service that brings together enterprise data warehousing and Big Data analytics into a single service. It gives you the freedom to query data on your terms, using either serverless on-demand or provisioned resources, at scale. Azure Synapse brings these two worlds together with a unified experience to ingest, prepare, manage, and serve data for immediate business intelligence and machine learning needs. In this presentation, I’ll talk about the new products and features that make up Azure Synapse Analytics and how it fits in a modern data warehouse, as well as provide demonstrations. (register) (full schedule)

How to build your career
12/12/20, 4:30pm EST, PASS SQL Saturday Minnesota (info) (slides)

In three years I went from a complete unknown to a popular blogger, speaker at PASS Summit, a SQL Server MVP, and then joined Microsoft. Along the way I saw my yearly income triple. Is it because I know some secret? Is it because I am a genius? No! It is just about laying out your career path, setting goals, and doing the work. I’ll cover tips I learned over my career on everything from interviewing to building your personal brand. I’ll discuss perm positions, consulting, contracting, working for Microsoft or partners, hot fields, in-demand skills, social media, networking, presenting, blogging, salary negotiating, dealing with recruiters, certifications, speaking at major conferences, resume tips, and keys to a high-paying career. Your first step to enhancing your career will be to attend this session! Let me be your career coach! (register) (full schedule)

The post PASS SQLSaturday's I will be speaking at first appeared on James Serra's Blog. The post PASS SQLSaturday’s I will be speaking at appeared first on SQLServerCentral.

T-SQL Tuesday Retrospective #006: What about blob? from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
1 min read
I am revisiting old T-SQL Tuesday invitations from the very beginning of the project. On May 3, 2010, Michael Coles invited us to write about how we use LOB data, so now you know what this week’s post is about. Let’s go over some definitions and complain a little, because that’s how cranky data professionals… Continue reading: T-SQL Tuesday Retrospective #006: What about blob? The post T-SQL Tuesday Retrospective #006: What about blob? appeared first on Born SQL. The post T-SQL Tuesday Retrospective #006: What about blob? appeared first on SQLServerCentral.
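As an illustrative aside of my own (not part of the original invitation or post), these are the kinds of large-object (LOB) columns the topic refers to in modern T-SQL; the table and column names here are hypothetical:

-- The MAX types are the current LOB types; the older TEXT/NTEXT/IMAGE types are deprecated.
CREATE TABLE dbo.Documents
(
    DocumentId INT IDENTITY(1, 1) CONSTRAINT DocumentsPk PRIMARY KEY,
    Title      NVARCHAR(200)  NOT NULL,
    BodyText   NVARCHAR(MAX)  NULL,  -- large Unicode text
    RawFile    VARBINARY(MAX) NULL   -- binary large object (blob)
);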

Daily Coping 2 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to tune in to a different radio station or TV channel. I enjoy sports radio. For years while I commuted, I caught up on what was happening with the local teams in the Denver area. With the pandemic, I go fewer places, and I more rarely listen to the station. I miss that a bit, but when I tuned in online, I found some different hosts. One that I used to really enjoy listening to is Alfred Williams. He played for the Broncos, and after retirement, I enjoyed hearing him on the radio. I looked around, and found him on 850KOA. I’ve made it a point to periodically listen in the afternoon, hear something different, and enjoy Alfred’s opinions and thoughts again. The post Daily Coping 2 Dec 2020 appeared first on SQLServerCentral.

SQL Database Corruption, how to investigate root cause? from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
5 min read
Introduction: In this article, we will discuss MS SQL Server database corruption. First, we need to understand what causes corruption. In almost all SQL Server corruption scenarios, the root cause is at the IO subsystem level: a problem with the drives or drivers. The specific root causes can vary widely, simply due to the sheer complexity involved in dealing with magnetic storage. The main thing to remember about disk subsystems is that every major operating system ships with the equivalent of a disk-check utility (CHKDSK) that can scan for bad sectors, bad entries, and other storage issues that can creep into storage environments.

Summary: If you are a beginner with Microsoft SQL Server, you might try the following things to resolve database corruption, and none of these tricks will help you out:

Restarting SQL Server. This just delays the issue and forces the system to run through crash recovery on the databases. Not to mention, in most systems you will not be able to do this right away, which delays things further.

Clearing the procedure cache.

Detaching the database and moving it to a new server. When you do this you will feel pain, because the corrupt database will fail to attach on the second server and on your primary. At this point you have to look into a "hack attach", and I can tell you that can be a very painful experience.

Knowing what will and will not help with a given problem requires being prepared for these kinds of problems ahead of time. That means creating a database that is corrupt and practising everything you can to recover it with the least possible data loss. You may read this: How to Reduce the Risk of SQL Database Corruption.

Root cause analysis: Root cause analysis is a crucial part of this process and should not be overlooked, regardless of how you get the data back. It is a vital step in preventing the problem from occurring again, and probably sooner than you think. In my experience, once corruption happens, it is bound to happen again if no action is taken to rectify the underlying problem, and it often seems to be worse the second time. I would suggest that even if you think you know the reason for the corruption (e.g. a power outage with no UPS), investigate the following sources anyway; perhaps the outage just helped it along and there were already warning signs. To begin, I always recommend these places to look:

Memory and disk diagnostics, to make certain there are no issues with the current hardware
SQL Server error logs
Windows event viewer
While rare, check with your vendors to see if they have known issues with the software you are using
Software errors: believe it or not, Microsoft has been known to cause corruption (see KB2969896). This is where opening tickets with Microsoft is also helpful.

The event viewer and SQL Server error logs can be reviewed together, but I suggest handing these out to the system administrators, as they regularly have more manpower on their team to review them.
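To make those reviews proactive, here is a brief T-SQL sketch of my own (not from the original post; the database name is hypothetical) of the kind of scheduled checks that surface corruption in your alerts rather than from an end user:

-- Run consistency checks on a schedule and alert on any errors reported.
DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- SQL Server also records damaged pages it encounters here;
-- a non-empty result is an early warning worth alerting on.
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages;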
Helpful Tip: In fact, even once you know what the problem is, I always suggest opening a ticket with Microsoft, because they will not only provide an additional set of eyes on the issue but also their expertise on the topic. In addition, Microsoft can and will assist you with the next steps to help find the root cause of the problem and where the corruption originated.

Corruption problems: If the database is corrupt, it is possible to repair it using SQL recovery software, which allows the database to be repaired in case of corruption.

Conclusion: So finally, after this article, we have learned many things about database corruption and how to deal with a corrupt database. Most of these situations are quite common, and now you can handle this kind of common corruption. Over time, the goal is that when you find out you have corruption, it is coming from your alerts, not an end user, and you will have a procedure to let your managers know where you stand and what the next steps are. You will get a lot of benefit from this, and it also allows you to work without having someone breathing down your neck. www.PracticalSqlDba.com

The post SQL Database Corruption, how to investigate root cause? appeared first on SQLServerCentral.

Would You Pass the SQL Server Certifications Please? What Do You Mean We're Out? from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
5 min read
I have held various certifications through my DBA career, from CompTIA A+ certification back when I worked help desk (I'm old) through the various MCxx that Microsoft has offered over the years (although I never went for Microsoft Certified Master (MCM), which I still regret). I have definitely gotten some mileage out of my certs over the years, getting an interview or an offer not just because I was certified, but rather because I had comparable job experience to someone else *and* I was certified, nudging me past the other candidate.

I am currently an MCSA: SQL 2016 Database Administration and an MCSE: Data Management and Analytics, which is pretty much the top of the SQL Server certifications currently available. I also work for a company that is a Microsoft partner (and have previously worked for other Microsoft partners), and part of the requirements to become (and stay) a Microsoft partner is maintaining a certain number of employees certified at certain levels of certification, dependent on your partnership level.

I completed the MCSE back in 2019, and my company is starting to have a new re-focus on certifications (a pivot, so to speak - I hate that term but it is accurate), so I went out to look at what my options were. We have two SQL Server versions past SQL Server 2016 at this point, so there must be something else, right? On top of that, the MCSA and MCSE certs I currently have are marked to expire *next* month (January 2021 - seriously, check it out HERE)...so there *MUST* be something else right - something to replace it with or to upgrade to?

I went to check the official Microsoft certifications site (https://docs.microsoft.com/en-us/learn/certifications/browse/?products=sql-server&resource_type=certification) and found that the only SQL Server-relevant certification beyond the MCSE: Data Management and Analytics is the relatively new "Microsoft Certified: Azure Database Administrator Associate" certification (https://docs.microsoft.com/en-us/learn/certifications/azure-database-administrator-associate). The official description of this certification is as follows:

The Azure Database Administrator implements and manages the operational aspects of cloud-native and hybrid data platform solutions built with Microsoft SQL Server and Microsoft Azure Data Services. The Azure Database Administrator uses a variety of methods and tools to perform day-to-day operations, including applying knowledge of using T-SQL for administrative management purposes.

Cloud...Cloud, Cloud...Cloud...(SQL)...Cloud, Cloud, Cloud...by the way, SQL. Microsoft has been driving toward the cloud for a very long time - everything is "Cloud First" (developed in Azure before being retrofitted into on-premises products), and the company definitely tries to steer as much into the cloud as it can. I realize this is Microsoft's reality, and I have had some useful experiences using the cloud for Azure VMs and Azure SQL Database over the years...but...

There is still an awful lot of the world running on physical machines - either directly or via a certain virtualization platform that starts with VM and rhymes with everywhere. As such, I can't believe Microsoft has bailed on actual SQL Server certifications...but it sure looks that way. Maybe something shiny and new will come out of this; maybe there will be a new better, stronger, faster SQL Server certification in the near future - but the current lack of open discussion doesn't inspire hope.
-- Looking at the Azure Database Administrator Associate certification, it requires a single exam (DP-300 https://docs.microsoft.com/en-us/learn/certifications/exams/dp-300) and is apparently "Associate" level.  Since the styling of certs is apparently changing (after all it isn't the MCxx Azure Database Administrator) I went to look at what Associate meant. Apparently there are Fundamental, Associate, and Expert level certifications in the new role-based certification setup, and there are currently only Expert-level certs for a handful of technologies, most of them Office and 365-related technologies. This means that for most system administrators - database and otherwise - there is nowhere to go beyond the "Associate" level - you can dabble in different technologies, but no way to be certified as an "Expert" by Microsoft in SQL Server, cloud or otherwise. (The one exception I could find for any sysadmins is the "Microsoft Certified: Azure Solutions Architect Expert" certification, which is all-around design and implement in Azure at a much broader level.) -- After reviewing all of this, I am already preparing for the Azure Database Administrator Associate certification via the DP-300 exam, and I am considering other options for broadening my experience, including Azure administrator certs and AWS administrator certs.  I will likely focus on Azure since my current role has more Azure exposure than AWS (although maybe that is a reason to go towards AWS and broaden my field...hmm...) If anything changes in the SQL Server cert world - some cool new "OMG we forgot we don't have a new SQL Server certification - here you go" announcement - I will let you know. The post Would You Pass the SQL Server Certifications Please? What Do You Mean We're Out? appeared first on SQLServerCentral.

PASS Summit 2020 Mental Health Presentation Eval Comments from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
5 min read
I first started presenting on mental health last December in Charlotte, NC. I got some backlash at a couple of events from a few people about “me” being the person talking about it. But I’ve gotten overwhelmingly more support than backlash. I just want to share the comments I got from the 20 people who filled out evals at Summit. If you notice, one person actually used something they learned with a family member that week. I’m not going to worry about revealing scores; mine was the highest I’ve ever gotten, but if someone could please fix that darn vendor-related question so I can quit getting my score lowered while not advertising anything, it would be great.

I would ask any managers out there reading this who have had to deal with employees with mental health issues to contact me. I do get questions on the best way to approach an employee someone is concerned about, and I have given advice on how I would like to be approached, but I would like to hear how it looks from a manager’s perspective. Just DM me on Twitter. I don’t need any details on the person or the particular situation, just how you approached the person with the issue.

These are all the comments, unedited, that I received:

This is important stuff to keep in mind both for oneself and when working with and watching out for others, personally and professionally. Thank you. I’m happy that people are starting to become more comfortable sharing their battles with mental illness and depression. Tracy eloquently shared her story in a way that makes us want to take this session back to spread to our colleagues. There definitely needs to be an end to the stigma surrounding mental health; sessions like Tracy’s are helping crack that barrier. VERY valuable session. Thanks! Thanks Tracy! That was a wonderful session and thank you for discussing the elephant in the room as the saying goes. I didn’t realize there are higher rates of mental health issues for us IT folks. I’ve also struggled with co-workers that didn’t understand and were not compassionate about what I was going through at the time, which made things harder. Thanks again! Rich This is a great session. It is good to remind ourselves that we are all human and need to focus on our mental health. Also I have known Tracy for awhile and I know that she is super talented and does so much to give back to not only PASS but other great causes too. Hearing about some of the challenges she has had helps to demonstrate that we are all more a like than we are different in that we all struggle with things from time to time. Also great use of pictures in the session. Having relevant pictures through out made the presentation speak louder for sure. Thanks for sharing your story, Tracy! valuable topic I admire Tracy’s strength for talking about what she has been through. Hopefully it opens the door for others to be able to speak more openly in the future. as far as the presentation itself, the slides were good and gave a good summary of the discussion. Thank you for speaking about this. It’s good to hear that we’re not alone in feeling stress. The list of resources in the slides is of great help. I really wish you had done a session like this with a health professional. It was okay to hear first hand experience but I think that insight from a mental health professional would have been much more helpful. It takes a lot of courage to approach and discuss this topic. This was a very good reminder to me to stop and remember it’s not all about deadlines etc.
Some the statistics were very eye opening. I’ve been impacted by several suicides over the last five years and it is hard to understand and to understand how to help. It’s good to be reminded that just listening helps. Tracy is exceptionally brave; I appreciate her work to destigmatize the topic and provide practical and tangible advice. Much appreciated. Thanks Tracy. I was able to use some of the things you taught me to work through a mental health issue in my family yesterday and the results were excellent. Keep sharing! Thank you so much. Thank you for sharing your story and helping me realize how many people struggle with mental health in IT. Thank you for the pointers on how to help a friend. Thank you for the survival tips. This was the most valuable session of the whole conference for me! The post PASS Summit 2020 Mental Health Presentation Eval Comments first appeared on Tracy Boggiano's SQL Server Blog. The post PASS Summit 2020 Mental Health Presentation Eval Comments appeared first on SQLServerCentral.

Daily Coping 1 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
1 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to set aside regular time to pursue an activity you love. For me, I don’t know I love a lot of things, but I do enjoy guitar and I’ve worked on it a lot this year. The last month, I’ve let it go a bit, but I’m getting back to it. I try to remember to bring it downstairs to my office, and I’ll take some 5-10 minute breaks and play. I’ve also started to put together a little time on Sat am to work through a course, and build some new skills. The post Daily Coping 1 Dec 2020 appeared first on SQLServerCentral.

Power BI – Hungry Median – Aggregations from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
4 min read
Introduction

In my last blog post, I addressed an issue with the DAX Median function consuming a lot of memory. To recap, below is the performance summary of those implementations. More on that topic can be found in the previous article.

Performance for the hour attribute (24 values):
Native median function: 71.00 s duration, 8.00 GB memory consumed
Custom implementation 1: 6.30 s duration, 0.20 GB memory consumed
Many2Many median implementation: 2.20 s duration, 0.02 GB memory consumed

Performance for the location attribute (422 values):
Native median function: 81.00 s duration, 8.00 GB memory consumed
Custom implementation 1: 107.00 s duration, 2.50 GB memory consumed
Many2Many median implementation: 41.10 s duration, 0.08 GB memory consumed

It seems we have solved the issue with memory, but the duration of this query when used with locations is still not user-friendly. Today we will focus on the performance part.

Tuning 3 – Improve User Experience

I did not find a way to improve the performance with some significant change to the DAX or the model. As such, I was thinking about whether we can somehow use aggregations for the median.

MEASURE Senzors[Calc_MedTempMap] =
    VAR _mep = [MedianPositionEven]
    VAR _mepOdd = [MedianPositionOdd]
    VAR _TempMedianTable =
        ADDCOLUMNS(
            VALUES( TemperatureMapping[temperature] ),
            "MMIN", [RowCount] - [TemperatureMappingRowCount] + 1,
            "MMAX", [RowCount]
        )
    VAR _T_MedianVals =
        FILTER(
            _TempMedianTable,
            ( _mep >= [MMIN] && _mep <= [MMAX] )
                || ( _mepOdd >= [MMIN] && _mepOdd <= [MMAX] )
        )
    RETURN
        AVERAGEX( _T_MedianVals, [temperature] )

The highlighted part is still the critical one with the biggest impact on performance, because the formula engine needs to do the following:

Iterate through all the values we have on the visual (for example, location)
For each item, take a list of temperatures
For each temperature, get a cumulative count (the sum of all counts of lower temperatures)

Although we made the cumulative count faster and less expensive, we are doing too many loops in the formula engine, evaluating similar values again and again. What about pre-calculating the "_TempMedianTable" table, so we don't have to change the algorithm but can just pick up the cumulative counts from a materialized column? This is how the new model would look.

We can do the aggregation in the source system, or we can even do it in Power BI, because we now have lower memory consumption. There are two helper tables:

LocMedAgg – for analysis by location.
HourMedianAgg – for analysis by hour.

Now we need to create hour- and location-specific measures, and then one combined measure which will switch among them according to the current selection of attributes made by the user.
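Since the post notes that the helper tables can be built in the source system instead of in Power BI, here is a hedged T-SQL sketch of what that pre-aggregation could look like. The Senzors table and its location/temperature columns come from the article's model, but the exact source schema is an assumption of mine:

-- Assumed schema: one row per sensor reading in dbo.Senzors(location, temperature, ...).
-- Produces one row per (location, temperature) with running-count boundaries,
-- so the report only looks up precomputed cumulative positions.
SELECT
    s.location,
    s.temperature,
    COUNT(*) AS Cnt,
    SUM(COUNT(*)) OVER (PARTITION BY s.location
                        ORDER BY s.temperature
                        ROWS UNBOUNDED PRECEDING) AS TcountEndCount,
    SUM(COUNT(*)) OVER (PARTITION BY s.location
                        ORDER BY s.temperature
                        ROWS UNBOUNDED PRECEDING) - COUNT(*) + 1 AS TCountStart
FROM dbo.Senzors AS s
GROUP BY s.location, s.temperature;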
This is the DAX expression for the LocMedAgg table:

MEASURE Senzors[LocMedAgg] =
    FILTER(
        SUMMARIZECOLUMNS(
            Senzors[location],
            TemperatureMapping[temperature],
            "TcountEndCount", [RowCount],
            "TCountStart", [RowCount] - [TemperatureMappingRowCount] + 1,
            "Cnt", [TemperatureMappingRowCount]
        ),
        -- due to the m2n relation we would have empty members we do not need, so let's filter them out
        NOT ( ISBLANK( [TemperatureMappingRowCount] ) )
    )

The new definition for the hour median measure is:

MEASURE Senzors[HM_MedianPositionEven] =
    ROUNDUP( ( [HM_CountRows] / 2 ), 0 )

MEASURE Senzors[HM_MedianPositionOdd] =
    VAR _cnt = [HM_CountRows]
    RETURN
        ROUNDUP( ( _cnt / 2 ), 0 ) + ISEVEN( _cnt )

MEASURE Senzors[HM_Med] =
    VAR _mpe = [HM_MedianPositionEven]
    VAR _mpeOdd = [HM_MedianPositionOdd]
    VAR _T_MedianVals =
        FILTER(
            HourMedianAgg,
            VAR _max = HourMedianAgg[TcountEndCount]
            VAR _min = HourMedianAgg[TCountStart]
            RETURN
                ( _mpe >= _min && _mpe <= _max )
                    || ( _mpeOdd >= _min && _mpeOdd <= _max )
        )
    RETURN
        AVERAGEX( _T_MedianVals, [temperature] )

However, when we bring it into the visualization, we see the following issue: we are missing the total value. That actually is no issue for us, as we need to bring context into the final calculation anyway, so we will compute the total value in a different branch of the final switch.

We create the aggregated median measures for location the same way as for hour, and then we put it all together in the final median calculation that switches among the different median helpers. For simplification, I wrapped the logic for each branch into a new measure, so the final calculation is simple:

MEASURE Senzors[CombinedMedian] =
    SWITCH(
        1 = 1,
        [UseHourMedian], [HM_Med_NoAgg],
        [UseLocationMedian], [LM_Med_NoAgg],
        [IsDateFiltered], [Orig_Med],
        [Calc_MedTempMap]
    )

The switch above does this:

If an hour and nothing else is selected, use the hour aggregation median calculation
If a location and nothing else is selected, use the location aggregation median
If a date attribute is selected, use the native median
In all other cases, use the M2M median calculation

Below is one of the switching measures:

MEASURE Senzors[IsDateFiltered] =
    -- as I let the engine generate the date hierarchy for me, this filter needs to be a bit complex
    -- to identify whether any level of the hierarchy is filtered
    ISFILTERED( Senzors[Date].[Date] )
        || ISFILTERED( Senzors[Date].[Day] )
        || ISFILTERED( Senzors[Date].[Month] )
        || ISFILTERED( Senzors[Date].[MonthNo] )
        || ISFILTERED( Senzors[Date].[Quarter] )
        || ISFILTERED( Senzors[Date].[QuarterNo] )
        || ISFILTERED( Senzors[Date].[Year] )

MEASURE Senzors[UseHourMedian] =
    ISFILTERED( 'Hour'[Hour] )
        && NOT ( ISFILTERED( Location[Location] ) )
        && NOT ( [IsDateFiltered] )

And that's it! Now we have a solution where you get the median in under one second for the major dimensions. You can download the sample pbix from here. The post Power BI – Hungry Median – Aggregations appeared first on SQLServerCentral.

Inspector 2.4 now available from Blog Posts - SQLServerCentral

Anonymous
01 Dec 2020
2 min read
All the changes for this release can be found on the Github project page. Mainly bug fixes this time around, but we have also added new functionality:

Improvements

#263 If you centralize your servers' collections into a single database, you may be interested in the latest addition: we added the ability to override most of the global threshold settings found in the Settings table on a server by server basis, so you are no longer locked to a single threshold for all the servers whose information is contained within the database. Check out the Github issue for more details regarding the change, or check out the Inspector user guide.

Bug Fixes

#257 Fixed a bug where the Inspector auto update Powershell function was incorrectly parsing non-UK date formats; download the latest InspectorAutoUpdate.psm1 to get the update.

#261 We noticed that ModuleConfig with ReportWarningsOnly = 3 still sent a report even if there were no Warnings/Advisories present, so we fixed that.

#256 If you use the Powershell collection and one of your servers had a blank Settings table, that server's collection was being skipped - shame on us! We fixed this so that the Settings table is re-synced and collection continues.

#259 The BlitzWaits custom module was highlighting wait types from your watched wait types table even when the threshold was not breached - a silly oversight, but we got it fixed.

#265 The Backup space module was failing if access was denied on the backup path; we handle this gracefully now, so you will see a warning on your report if this occurs.

The post Inspector 2.4 now available appeared first on SQLServerCentral.

Basic JSON Queries–#SQLNewBlogger from Blog Posts - SQLServerCentral

Anonymous
30 Nov 2020
2 min read
Another post for me that is simple and hopefully serves as an example for people trying to get blogging as #SQLNewBloggers.

Recently I saw Jason Horner do a presentation on JSON at a user group meeting. I've only looked at JSON lightly, and I decided to experiment with this.

Basic Querying of a Document

A JSON document is text that contains key-value pairs, with colons used to separate them, and grouped with curly braces. Arrays are supported with brackets, values separated by commas, and everything that is text is quoted with double quotes. There are a few other rules, but that's the basic structure. Things can nest, and in SQL Server, we store the data as character data. So let's create a document:

DECLARE @json NVARCHAR(1000) = N'{
  "player": {
             "name" : "Sarah",
             "position" : "setter"
            },
  "team" : "varsity"
}'

This is a basic document, with two key values (player and team) and one set of additional keys (name and position) inside the first key. I can query this with the code:

SELECT JSON_VALUE(@json, '$.player.name') AS PlayerName;

This returns the scalar value from the document. In this case, I get "Sarah", as shown here:

I need to get the path correct here for the value. Note that I start with the dollar sign ($) as the root and then traverse the tree with dots (.). A few other examples are shown in the image. These show the paths to get to data in the document. In a future post, I'll look in more detail at how this works.

SQLNewBlogger

After watching the presentation, I decided to do a little research and experiment. I spent about 10 minutes playing with JSON and querying it, and then another 10 writing this post. This is a great example of picking up the beginnings of a new skill, and the start of a blog series that shows how I can work with this data. The post Basic JSON Queries–#SQLNewBlogger appeared first on SQLServerCentral.
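To go one step further than the post above (these queries are my addition, not the author's), here are two other ways of pulling values out of the same document: JSON_QUERY returns a JSON fragment, and OPENJSON shreds an object into rows.

DECLARE @json NVARCHAR(1000) = N'{
  "player": { "name": "Sarah", "position": "setter" },
  "team": "varsity"
}';

-- Scalar values via paths from the root ($), plus a JSON fragment
SELECT
    JSON_VALUE(@json, '$.team')            AS Team,
    JSON_VALUE(@json, '$.player.position') AS PlayerPosition,
    JSON_QUERY(@json, '$.player')          AS PlayerObject;  -- returns the nested object as JSON

-- Shred the player object into key/value rows
SELECT [key], [value]
FROM OPENJSON(@json, '$.player');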

Daily Coping 30 Nov 2020 from Blog Posts - SQLServerCentral

Anonymous
30 Nov 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to make a meal using a recipe or ingredient you’ve not tried before. While I enjoy cooking, I haven’t experimented a lot. Some, but not a lot. I made a few things this year that I’ve never made before, as an experiment. For example, I put together homemade ramen early in the pandemic, which was a hit. For me, I had never made donuts. We’ve enjoyed (too many) donuts during the pandemic, but most aren’t gluten free. I have a cookbook that includes a recipe for donuts. It’s involved, like bread, with letting the dough rise twice and then frying in oil. I told my daughter I’d make them and she got very excited. I didn’t quite realize what I’d gotten myself into, and it was hours after my girl expected something, but they came out well. It felt good to make these. My Mom had made something similar when I was a kid, but I’d never done them until now. The post Daily Coping 30 Nov 2020 appeared first on SQLServerCentral.

Building a Raspberry Pi cluster to run Azure SQL Edge on Kubernetes from Blog Posts - SQLServerCentral

Anonymous
30 Nov 2020
16 min read
A project I’ve been meaning to work on for a while has been to build my own Kubernetes cluster running on Raspberry Pis. I’ve been playing around with Kubernetes for a while now and things like Azure Kubernetes Service are great tools to learn, but I wanted something that I’d built from the ground up. Something that I could tear down, fiddle with, and rebuild to my heart’s content. So earlier this year I finally got around to doing just that, and with Azure SQL Edge going GA with a disconnected mode I wanted to blog about my setup.

Here’s what I bought:

1 x Raspberry Pi 4 Model B – 8GB RAM
3 x Raspberry Pi 4 Model B – 4GB RAM
4 x SanDisk Ultra 32 GB microSDHC Memory Card
1 x Pi Rack Case for Raspberry Pi 4 Model B
1 x Aukey USB Wall Charger Adapter 6 Ports
1 x NETGEAR GS308 8-Port Gigabit Ethernet Network Switch
1 x Bunch of ethernet cables
1 x Bunch of (short) USB cables

OK, I’ve gone a little overboard with the Pis and the SD cards. You won’t need an 8GB Raspberry Pi for the control node, the 4GB model will work fine. The 2GB model will also probably work but that would be really hitting the limit. For the SD cards, 16GB will be more than enough. In fact, you could just buy one Raspberry Pi and do everything I’m going to run through here on it. I went with a 4 node cluster (1 control node and 3 worker nodes) just because I wanted to tinker.

What follows in this blog is the complete build, from setting up the cluster, configuring the OS, to deploying Azure SQL Edge. So let’s get to it! Yay, delivery day!

Flashing the SD Cards

The first thing to do is flash the SD cards. I used Rufus but Etcher would work as well. Grab the Ubuntu 20.04 ARM image from the website and flash all the cards. Once that’s done, it’s assembly time!

Building the cluster

So…many…little…screws… But done! Now it’s time to plug it all in. Plug all the SD cards into the Pis. Connect the USB hub to the mains and then plug the switch into your router. It’s plug and play so no need to mess around. Once they’re connected, plug the Pis into the switch and then power them up (plug them into the USB hub). (Ignore the zero in the background, it’s running pi-hole, which I also recommend you check out!)

Setting a static IP address for each Raspberry Pi

We’re going to set a static IP address for each Pi on the network. Not doing anything fancy here with subnets, we’re just going to assign the Pis IP addresses that are currently not in use. To find the Pis on the network with their current IP address we can run:

nmap -sP 192.168.1.0/24

Tbh, nmap works but I usually use a Network Analyser app on my phone…it’s just easier (the output of nmap can be confusing). Pick one Pi that’s going to be the control node and let’s ssh into it:

ssh ubuntu@192.168.1.xx

When we first try to ssh we’ll have to change the ubuntu user password. The default password is ubuntu. Change the password to anything you want, we’re going to be disabling the ubuntu user later anyway. Once that’s done, ssh back into the Pi. Ok, now that we’re back on the Pi run:

sudo nano /etc/netplan/50-cloud-init.yaml

And update the file to look similar to this:

network:
  ethernets:
    eth0:
      addresses: [192.168.1.53/24]
      gateway4: 192.168.1.254
      nameservers:
        addresses: [192.168.1.5]
  version: 2

192.168.1.53 is the address I’m setting for the Pi, but it can be pretty much anything on your network that’s not already in use. 192.168.1.254 is the gateway on my network, and 192.168.1.5 is my DNS server (the pi-hole), you can use 8.8.8.8 if you want to.

There’ll also be a load of text at the top of the file saying something along the lines of “changes here won’t persist“. Ignore it, I’ve found the changes do persist. DISCLAIMER – There’s probably another (better?) way of setting a static IP address on Ubuntu 20.04, this is just what I’ve done and works for me. Ok, once the file is updated we run:

sudo netplan apply

This will freeze your existing ssh session. So close that and open another terminal…wait for the Pi to come back up on your network on the new IP address.

Creating a custom user on all nodes

Let’s not use the default ubuntu user anymore (just because). We’re going to create a new user, dbafromthecold (you can call your user anything you want):

sudo adduser dbafromthecold

Run through the prompts and then add the new user to the sudo group:

sudo usermod -aG sudo dbafromthecold

Cool, once that’s done, exit out of the Pi, ssh back in with the new user, and run:

sudo usermod --expiredate 1 ubuntu

This way no-one can ssh into the Pi using the default user.

Setting up key based authentication for all nodes

Let’s now set up key based authentication (as I cannot be bothered typing out a password every time I want to ssh to a Pi). I’m working in WSL2 here locally (I just prefer it) but a powershell session should work for everything we’re going to be running. Anyway, in WSL2 locally run:

ssh-keygen

Follow the prompt to create the key. You can add a passphrase if you wish (I didn’t). Ok, now let’s copy that to the pi:

cat ./raspberrypi_k8s.pub | ssh dbafromthecold@192.168.1.53 "mkdir -p ~/.ssh && touch ~/.ssh/authorized_keys && chmod -R go= ~/.ssh && cat >> ~/.ssh/authorized_keys"

What this is going to do is copy the public key (raspberrypi_k8s.pub) up to the pi and store it as /home/dbafromthecold/.ssh/authorized_keys. This will allow us to specify the private key when connecting to the pi and use that to authenticate. We’ll have to log in with the password one more time to get this working, so ssh with the password…and then immediately log out. Now try to log in with the key:

ssh -i raspberrypi_k8s dbafromthecold@192.168.1.53

If that doesn’t ask for a password and logs you in, it’s working! As the Pi has a static IP address we can set up a ssh config file. So run:

echo "Host k8s-control-1
    HostName 192.168.1.53
    User dbafromthecold
    IdentityFile ~/raspberrypi_k8s" > ~/.ssh/config

I’m going to call this Pi k8s-control-1, and once this file is created, I can ssh to it by:

ssh k8s-control-1

Awesome stuff! We have set up key based authentication to our Pi!

Configuring the OS on all nodes

Next thing to do is rename the pi (to match the name we’ve given in our ssh config file):

sudo hostnamectl set-hostname k8s-control-1
sudo reboot

That’ll rename the Pi to k8s-control-1 and then restart it. Wait for it to come back up and ssh in. And we can see by the prompt and the hostname command…our Pi has been renamed! Ok, now update the Pi:

sudo apt-get update
sudo apt-get upgrade

N.B. – This could take a while. After that completes…we need to enable memory cgroups on the Pi. This is required for the Kubernetes installation to complete successfully, so run:

sudo nano /boot/firmware/cmdline.txt

and add cgroup_enable=memory to the end, and then reboot again:

sudo reboot

Installing Docker on all nodes

Getting there! Ok, let’s now install our container runtime…Docker:

sudo apt-get install -y docker.io

Then set docker to start on server startup:

sudo systemctl enable docker

And then, so that we don’t have to use sudo each time we want to run a docker command:

sudo usermod -aG docker dbafromthecold

Log out and then log back into the Pi for that to take effect. To confirm it’s working run:

docker version

And now…let’s go ahead and install the components for kubernetes!

Installing Kubernetes components on all nodes

So we’re going to use kubeadm to install kubernetes but we also need kubectl (to admin the cluster) and the kubelet (which is an agent that runs on each Kubernetes node and isn’t installed via kubeadm). So make sure the following are installed:

sudo apt-get install -y apt-transport-https curl

Then add the Kubernetes GPG key:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Add Kubernetes to the sources list:

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

Ok, I know that the 20.04 code name isn’t xenial, it’s focal, but if you use kubernetes-focal you’ll get this when running apt-get update:

E: The repository ‘https://apt.kubernetes.io kubernetes-focal Release’ does not have a Release file.

So to avoid that error, we’re using xenial. Anyway, now update sources on the box:

sudo apt-get update

And we’re good to go and install the Kubernetes components:

sudo apt-get install -y kubelet=1.19.2-00 kubeadm=1.19.2-00 kubectl=1.19.2-00

Going for version 1.19.2 for this install….absolutely no reason for it other than to show you that you can install specific versions! Once the install has finished run the following:

sudo apt-mark hold kubelet kubeadm kubectl

That’ll prevent the applications from being accidentally updated.

Building the Control Node

Right, we are good to go and create our control node! Kubeadm makes this simple! Simply run:

sudo kubeadm init | tee kubeadm-init.out

What’s happening here is we’re creating our control node and saving the output to kubeadm-init.out. This’ll take a few minutes to complete but once it does, we have a one node Kubernetes cluster! Ok, so that we can use kubectl to admin the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

And now…we can run:

kubectl get nodes

Don’t worry about the node being in a status of NotReady…it’ll come online after we deploy a pod network. So let’s set up that pod network to allow the pods to communicate with each other. We’re going to use Weave for this:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

A couple of minutes after that’s deployed, we’ll see the node becoming Ready. And we can check all the control plane components are running in the cluster:

kubectl get pods -n kube-system

Now we have a one node Kubernetes cluster up and running!

Deploying a test application on the control node

Now that we have our one node cluster, let’s deploy a test nginx application to make sure all is working. The first thing we need to do is remove the taint from the control node that prevents user applications (pods) from being deployed to it. So run:

kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-

And now we can deploy nginx:

kubectl run nginx --image=nginx

Give that a few seconds and then confirm that the pod is up and running:

kubectl get pods -o wide

Cool, the pod is up and running with an IP address of 10.32.0.4. We can run curl against it to confirm the application is working as expected:

curl 10.32.0.4

Boom! We have the correct response, so we know we can deploy applications into our Kubernetes cluster! Leave the pod running as we’re going to need it in the next section. Don’t do this now, but if you want to add the taint back to the control node, run:

kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule

Deploying MetalLb on the control node

There are no SQL client tools that’ll run on ARM infrastructure (at present) so we’ll need to connect to Azure SQL Edge from outside of the cluster. The way we’ll do that is with an external IP provided by a load balanced service. In order for us to get those IP addresses we’ll need to deploy MetalLb to our cluster. MetalLb provides us with external IP addresses from a range we specify for any load balanced services we deploy. To deploy MetalLb, run:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml

And now we need to create a config map specifying the range of IP addresses that MetalLb can use:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.100-192.168.1.110

What we’re doing here is specifying the IP range that MetalLb can assign to load balanced services as 192.168.1.100 to 192.168.1.110. You can use any range you want, just make sure that the IPs are not in use on your network. Create the file as metallb-config.yaml and then deploy into the cluster:

kubectl apply -f metallb-config.yaml

OK, to make sure everything is working…check the pods in the metallb-system namespace:

kubectl get pods -n metallb-system

If they’re up and running we’re good to go and expose our nginx pod with a load balanced service:

kubectl expose pod nginx --type=LoadBalancer --port=80 --target-port=80

Then confirm that the service created has an external IP:

kubectl get services

Awesome! Ok, to really confirm everything is working…try to curl against that IP address from outside of the cluster (from our local machine):

curl 192.168.1.100

Woo hoo! All working, we can access applications running in our cluster externally! Ok, quick tidy up…remove the pod and the service:

kubectl delete pod nginx
kubectl delete service nginx

And now we can add the taint back to the control node:

kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule

Joining the other nodes to the cluster

Now that we have the control node up and running, and the worker nodes ready to go…let’s add them into the cluster! First thing to do (on all the nodes) is add entries for each node in the /etc/hosts file. For example, on my control node I have the following:

192.168.1.54 k8s-node-1
192.168.1.55 k8s-node-2
192.168.1.56 k8s-node-3

Make sure each node has entries for all the other nodes in the cluster in the file…and then we’re ready to go! Remember when we ran kubeadm init on the control node to create the cluster? At the end of the output there was something similar to:

sudo kubeadm join k8s-control-1:6443 --token f5e0m6.u6hx5k9rekrt1ktk --discovery-token-ca-cert-hash sha256:fd3bed4669636d1f2bbba0fd58bcddffe6dd29bde82e0e80daf985a77d96c37b

Don’t worry if you didn’t save it, it’s in the kubeadm-init.out file we created. Or you can run this on the control node to regenerate the command:

kubeadm token create --print-join-command

So let’s run that join command on each of the nodes. Once that’s done, we can confirm that all the nodes have joined and are ready to go by running:

kubectl get nodes

Fantastic stuff, we have a Kubernetes cluster all built!

External kubectl access to cluster

Ok, we don’t want to be ssh’ing into the cluster each time we want to work with it, so let’s set up kubectl access from our local machine. What we’re going to do is grab the config file from the control node and pull it down locally. Kubectl can be installed locally from here. Now on our local machine run:

mkdir $HOME/.kube

And then pull down the config file:

scp k8s-control-1:/home/dbafromthecold/.kube/config $HOME/.kube/

And to confirm that we can use kubectl locally to administer the cluster:

kubectl get nodes

Wooo! Ok, phew…still with me? Right, it’s now time to (finally) deploy Azure SQL Edge to our cluster.

Running Azure SQL Edge

Alrighty, we’ve done a lot of config to get to this point but now we can deploy Azure SQL Edge. Here’s the yaml file to deploy:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqledge-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sqledge
  template:
    metadata:
      labels:
        app: sqledge
    spec:
      containers:
      - name: azuresqledge
        image: mcr.microsoft.com/azure-sql-edge:latest
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_PID
          value: "Developer"
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          value: "Testing1122"
        - name: MSSQL_AGENT_ENABLED
          value: "TRUE"
        - name: MSSQL_COLLATION
          value: "SQL_Latin1_General_CP1_CI_AS"
        - name: MSSQL_LCID
          value: "1033"
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 10001
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  name: sqledge-deployment
spec:
  ports:
  - port: 1433
    protocol: TCP
    targetPort: 1433
  selector:
    app: sqledge
  type: LoadBalancer

What this is going to do is create a deployment called sqledge-deployment with one pod running Azure SQL Edge and expose it with a load balanced service. We can either create a deployment.yaml file or deploy it from a Gist like this:

kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/1a78438bc408406f341be4ac0774c2aa/raw/9f4984ead9032d6117a80ee16409485650258221/azure-sql-edge.yaml

Give it a few minutes for the Azure SQL Edge deployment to be pulled down from the MCR and then run:

kubectl get all

If all has gone well, the pod will have a status of Running and we’ll have an external IP address for our service. Which means we can connect to it and run a SQL command:

mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "SELECT @@VERSION as [Version];"

N.B. – I’m using the mssql-cli here but you can use SSMS or ADS. And that’s it! We have Azure SQL Edge up and running in our Raspberry Pi Kubernetes cluster and we can connect to it externally! Thanks for reading! The post Building a Raspberry Pi cluster to run Azure SQL Edge on Kubernetes appeared first on SQLServerCentral.
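One small addition of my own, not in the original post: once connected to the service's external IP, you can confirm you really are talking to Azure SQL Edge rather than another edition. As far as I know, SERVERPROPERTY('EngineEdition') returns 9 for Azure SQL Edge; treat that value as something to verify against the documentation.

-- Quick sanity checks after connecting to the sqledge-deployment service
SELECT
    SERVERPROPERTY('Edition')       AS Edition,
    SERVERPROPERTY('EngineEdition') AS EngineEdition,  -- expected to be 9 on Azure SQL Edge
    @@VERSION                       AS VersionString;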

SQL Server login – default database and failed logins from Blog Posts - SQLServerCentral

Anonymous
30 Nov 2020
3 min read
This is one of those little options that I see quite often get little consideration, or get set to a user database without thought for what the consequences may be if that DB becomes unavailable. There are going to be situations where setting a default other than master is essential, and there are going to be situations where leaving it as master suits best; it comes down to the individual requirements of each login.

Recently I had to fix an issue with user connectivity for a single login. The user was getting failed connections when trying to connect to the SQL Server to access one of their legacy databases. Everything appeared fine: the user account was enabled, the password hadn't been changed and was therefore correct, and the database they were trying to access was up and accessible, but the SQL error log highlighted the real issue.

Login failed for user 'MyLogin'. Reason: Failed to open the database 'TheDefaultdatabase'

Ahhh, makes sense now, because at the time that database (the default database for the login) was in the middle of a restore as part of some planned work. The problem is, this was not the database the user was trying to connect to at the time; the expected behavior for the login was to be able to access all of their databases regardless of any of the other databases being unavailable.

The easy fix for this situation was to set the default database to master, but it could have been avoided if set correctly in the beginning. However, when this login was created only one user database existed, so the admin who configured the login didn't think twice about setting the login's default database to their single user database. Unfortunately, this setting was forgotten as more databases were added to the instance.

In most cases leaving it as master will be the reliable option in terms of user connectivity, because if SQL is up, so is the master database, unless there is some other issue going on! However, you may have valid reasons to want to assign a login a specific default database, and that's cool, provided you consider what will happen to that login when the database becomes unavailable.

I checked BOL; unfortunately this only provides the following: DEFAULT_DATABASE = database. Specifies the default database to be assigned to the login. If this option is not included, the default database is set to master.

Unfortunately there is no real warning there to make you give this setting good consideration, but it is pretty important to ask yourself the following question when creating a new login: does it matter if the user/login cannot access the SQL Server when the default database is inaccessible at the time they make a new connection?

If the answer is no, then you can set it to whichever database makes the most sense, or leave the default. If the answer is yes, then you might want to consider master as the default database if the login is granted permission to more than one database on the instance, because when the default database becomes inaccessible, i.e.:

Recovery pending
Suspect
Offline
Restoring
possibly even Single user

the login/user will lose access to SQL Server when they try to make a new connection.

Thanks for reading! The post SQL Server login – default database and failed logins appeared first on SQLServerCentral.
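The post doesn't include the commands themselves, so as a small sketch of my own (the login name is hypothetical), this is how you could check which default database each login has and point a problem login back at master:

-- Check the default database currently assigned to each login
SELECT name, default_database_name
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G');  -- SQL logins, Windows logins, Windows groups

-- Point a login back at master so a single unavailable database
-- can't block its new connections
ALTER LOGIN [MyLogin] WITH DEFAULT_DATABASE = master;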

Daily Coping 27 Nov 2020 from Blog Posts - SQLServerCentral

Anonymous
27 Nov 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to broaden your perspective: read a different paper, magazine, or site. Many of us tend to get caught in bubbles of information. Google/Bing do this, bringing us certain results. We have a set group of people on social media, and often we read the same sites for news, information, fun, etc. The US election took place recently, and it was a contentious one. Like many, I was entranced with the process, the outcome, and the way the event unfolded. I have some sites for news, and a subscription, but I decided to try something different since I was hoping for other views. In particular, I gave CNN a try at CNN.com. I haven’t been thrilled with their television program for years, as I think they struggle to find new and interesting things to say and still fill 24 hours. However, I saw some ratings about how people around the world view the site, and it’s fairly neutral. I also grabbed a new book, Dirt, after hearing someone talk about it on NPR. It’s a book by a writer who moves to France to learn French cooking. A little out of my area, and I struggled to get into this book, but eventually it turned a corner, and I enjoyed hearing about how the writer blends cooking, chefs, ingredients, and life in a foreign country. The post Daily Coping 27 Nov 2020 appeared first on SQLServerCentral.