
Tech News - Databases

233 Articles

AutoCorrect in Git from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
1 min read
I can’t believe autocorrect is available, or that I didn’t know it existed. I should have looked; after all, git is smart enough to guess my intentions. I learned this from Kendra Little, who made a quick video on it. She got it from Andy Carter’s blog. Let’s say I type something like git stats on the command line. I’ll get a message from git that this isn’t a command, but that there is a similar one. You can see this below. However, I can have git actually just run it, if I change the configuration with this code:

git config --global help.autocorrect 20

Now if I run the command, git will delay briefly and then run what it thinks is correct. The delay is controlled by the parameter I passed in. The value is in tenths of a second, so 20 is 2 seconds, 50 is 5 seconds, 2 is 0.2 seconds, etc. If you set this back to 0, autocorrect is off. A great trick, and one I’d suggest everyone enable. The post AutoCorrect in Git appeared first on SQLServerCentral.
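As a quick footnote to the post above, the whole thing can be tried in a couple of minutes on any machine with git installed. A minimal sketch (the git stats typo is just an example):

# enable autocorrect with a 2 second delay (the value is in tenths of a second)
git config --global help.autocorrect 20

# mistype a command; git reports the closest match and runs it after the delay
git stats

# turn autocorrect back off
git config --global help.autocorrect 0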

Migrating from SQL Server to Amazon AWS Aurora from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
2 min read
Is Microsoft’s licensing scheme getting you down? It’s 2020 and there are now plenty of data platforms that are good for running your enterprise data workloads. Amazon’s Aurora PaaS service runs either MySQL or PostgreSQL. I’ve been supporting SQL Server for nearly 22 years, I’ve seen just about everything when it comes to bugs or performance problems, and I’m quite comfortable with SQL Server as a data platform; so why migrate to something new? Amazon’s Aurora has quite a bit to offer, and they are constantly improving the product. Since there are no license costs, its operating expenditure is much more reasonable. Let’s take a quick look and compare a 64 core Business Critical Azure Managed Instance with a 64 core instance of Aurora MySQL. What about Aurora? Two nodes of Aurora MySQL are less than half the cost of Azure SQL Server Managed Instances. It’s also worth noting that Azure Managed Instances only support 100 databases and only have 5.1 GB of RAM per vCore. Given the 64 core example, that’s only 326.4 GB of RAM compared to the 512 GB selected in the Aurora instance.

This post wasn’t intended to be about the “why” of migrating, so let’s talk about the “how”. Migration at a high level takes two steps: schema conversion and data migration. Schema conversion is made simple with AWS SCT (Schema Conversion Tool). Walking through a simple conversion, note that the JDBC drivers for SQL Server are required. You can’t use “.” for a local host, which is a little annoying, but typing the server name is easy enough. The dark blue items in the graph represent complex actions, such as converting triggers, which aren’t a simple 1:1 conversion to MySQL. Migrating to Aurora from SQL Server can be simple with AWS SCT, and it’s a cost-saving move that also modernizes your data platform. Next we’ll look at AWS DMS (Database Migration Service). Thanks to the engineers at AWS, migrating to Aurora PostgreSQL is even easier: Babelfish for Aurora PostgreSQL was recently announced, a product that allows SQL Server’s T-SQL code to run on PostgreSQL. The post Migrating from SQL Server to Amazon AWS Aurora appeared first on SQLServerCentral.
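One practical note on the post above: before AWS SCT or DMS can do anything, a target Aurora cluster has to exist. A rough sketch of provisioning one with the AWS CLI follows; it assumes the CLI is already configured, and the identifiers, subnet group, security group, and password are placeholders rather than anything from the original post.

# create the Aurora MySQL cluster (all names and IDs below are placeholders)
aws rds create-db-cluster \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --master-username admin \
    --master-user-password 'ChangeMe123!' \
    --db-subnet-group-name my-db-subnet-group \
    --vpc-security-group-ids sg-0123456789abcdef0

# add an instance to the cluster to actually serve queries
aws rds create-db-instance \
    --db-instance-identifier my-aurora-instance-1 \
    --db-cluster-identifier my-aurora-cluster \
    --engine aurora-mysql \
    --db-instance-class db.r5.xlarge

AWS SCT and DMS can then be pointed at the cluster endpoint as the migration target.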

Daily Coping 7 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. All my coping tips are under this tag.  Today’s tip is to discover your artistic side and design your own Christmas cards. I’m not a big Christmas card sender, but years ago I used to produce a letter for the family that we sent out to extended family and friends. It was a quick look at life on the ranch. At some point, I stopped doing it, but I decided to try and cope a little this year by restarting this. While we haven’t done a lot this year, we have spent time together, and life has changed for us, albeit a bit strangely. I’m here all the time, which is good for family. So I gathered together some photos from the year, and put them together with some words and a digital Xmas card. I’m not sharing the words here, but I’ll include the design with the photos. The post Daily Coping 7 Dec 2020 appeared first on SQLServerCentral.

Tracking costliest queries from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
3 min read
Being a Database Developer or Administrator, we often work on performance optimization of queries and procedures. It is very important that we focus on the right queries to get the major benefits. Recently I was working on a performance tuning project. I started working from the query list provided by the client, who was referring to user feedback and the long-running query extract from SQL Server, but it was not helping much. The database had more than 1K stored procedures and approximately 1K other programmability objects. On top of that, there were multiple applications triggering inline queries as well. I got a very interesting request from my client: “Can we get the top 100 queries running most frequently and taking more than a minute?”. This made me write my own query to get the list of queries being executed frequently and for a duration greater/less than a particular time. This query can also play a major role if you are doing multiple things to optimize the database (such as server/database setting changes, indexing, stats or code changes, etc.) and would like to track the duration. You can create a job with this query and dump the output into a table. The job can be scheduled to run at a certain frequency. Later, you can plot a trend from the data tracked. This has really helped me a lot in my assignment. I hope you’ll also find it useful.

/*
Following query will return the queries (along with the plan) taking more than 1 minute
and how many times they have been executed since the last SQL restart.
We'll also get the average execution time.
*/
;WITH cte_stag AS
(
    SELECT plan_handle
         , sql_handle
         , execution_count
         , (total_elapsed_time / NULLIF(execution_count, 0)) AS avg_elapsed_time
         , last_execution_time
         , ROW_NUMBER() OVER(PARTITION BY sql_handle, plan_handle
                             ORDER BY execution_count DESC, last_execution_time DESC) AS RowID
    FROM sys.dm_exec_query_stats STA
    WHERE (total_elapsed_time / NULLIF(execution_count, 0)) > 60000 -- This is 60000 MS (1 minute). You can change it as per your wish.
)
-- If you need the TOP few queries, simply add the TOP keyword in the SELECT statement.
SELECT DB_NAME(q.dbid) AS DatabaseName
     , OBJECT_NAME(q.objectid) AS ObjectName
     , q.text
     , p.query_plan
     , STA.execution_count
     , STA.avg_elapsed_time
     , STA.last_execution_time
FROM cte_stag STA
CROSS APPLY sys.dm_exec_query_plan(STA.plan_handle) AS p
CROSS APPLY sys.dm_exec_sql_text(STA.sql_handle) AS q
WHERE STA.RowID = 1
  AND q.dbid = DB_ID() /* Either select the desired database while running the query or supply
                          the database name in quotes to the DB_ID() function.
                          Note: inline queries being triggered from an application may not have an
                          object name and database name. In case you are not getting the desired
                          query in the result, try removing the filter condition on dbid. */
ORDER BY 5 DESC, 6 DESC

The post Tracking costliest queries appeared first on SQLServerCentral.
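The post suggests wrapping the query in a SQL Agent job and dumping the output to a table. As an alternative sketch for tracking the trend from a Linux host with mssql-tools installed (the server name, credentials, and paths here are placeholders, not from the original post), the same query saved to a file can be run on a schedule via cron:

# run the saved query and append the results to a CSV; schedule via cron (or stick with a SQL Agent job)
/opt/mssql-tools/bin/sqlcmd -S myserver -d MyDatabase -U monitor_user -P 'ChangeMe123!' \
    -i /opt/scripts/costliest_queries.sql -s"," -W >> /var/log/costliest_queries.csv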

Provisioning storage for Azure SQL Edge running on a Raspberry Pi Kubernetes cluster from Blog Posts - SQLServerCentral

Anonymous
07 Dec 2020
10 min read
In a previous post we went through how to set up a Kubernetes cluster on Raspberry Pis and then deploy Azure SQL Edge to it. In this post I want to go through how to configure an NFS server so that we can use it to provision persistent volumes in the Kubernetes cluster. Once again, I'm doing this on a Raspberry Pi 4 with an external USB SSD. The kit I bought was:

1 x Raspberry Pi 4 Model B – 2GB RAM
1 x SanDisk Ultra 16 GB microSDHC Memory Card
1 x SanDisk 128 GB Solid State Flash Drive

The initial setup steps are the same as in the previous posts, but we're going to run through them here (as I don't just want to link back to the previous blog). So let's go ahead and run through setting up a Raspberry Pi NFS server and then deploying persistent volumes for Azure SQL Edge.

Flashing the OS

The first thing to do is flash the SD card using Rufus. Grab the Ubuntu 20.04 ARM image from the website and flash all the cards. Once that's done, connect the Pi to an internet connection, plug in the USB drive, and then power the Pi on.

Setting a static IP

Once the Pi is powered on, find its IP address on the network. Nmap can be used for this:

nmap -sP 192.168.1.0/24

Or use a Network Analyzer application on your phone (I find the output of nmap can be confusing at times). Then we can ssh to the Pi:

ssh pi@192.168.1.xx

And then change the password of the default ubuntu user (the default password is ubuntu). Ok, now we can ssh back into the Pi and set a static IP address. Edit the file /etc/netplan/50-cloud-init.yaml to look something like this:
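The original post shows the netplan file as a screenshot. Based on the values described in the next paragraph (eth0, 192.168.1.160, a 192.168.1.254 gateway, a 192.168.1.5 DNS server, and a /24 network as implied by the nmap scan above), a rough reconstruction of what goes into the file is:

# approximate the netplan config described below (values are the author's; treat this as a sketch)
sudo tee /etc/netplan/50-cloud-init.yaml > /dev/null <<'EOF'
network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.1.160/24
      gateway4: 192.168.1.254
      nameservers:
        addresses:
          - 192.168.1.5
EOF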
eth0 is the network interface the Pi is on (confirm with ip a), 192.168.1.160 is the IP address I'm setting, 192.168.1.254 is the gateway on my network, and 192.168.1.5 is my DNS server (my pi-hole). There is a warning in that file about changes not persisting, but they do. Now that the file is configured, we need to run:

sudo netplan apply

Once this is executed it will break the current shell. Wait for the Pi to come back on the network on the new IP address and ssh back into it.

Creating a custom user

Let's now create a custom user with sudo access and disable the default ubuntu user. To create a new user:

sudo adduser dbafromthecold

Add it to the sudo group:

sudo usermod -aG sudo dbafromthecold

Then log out of the Pi and log back in with the new user. Once in, disable the default ubuntu user:

sudo usermod --expiredate 1 ubuntu

Cool! So we're good to go to set up key based authentication into the Pi.

Setting up key based authentication

In the post about creating the cluster we already created an ssh key pair to use to log into the Pi, but if we needed to create a new key we could just run:

ssh-keygen

And follow the prompts to create a new key pair. Now we can copy the public key to the Pi. Log out of the Pi and navigate to the location of the public key:

ssh-copy-id -i ./raspberrypi_k8s.pub dbafromthecold@192.168.1.160

Once the key has been copied to the Pi, add an entry for the Pi into the ssh config file:

Host pi-nfs-server
    HostName 192.168.1.160
    User dbafromthecold
    IdentityFile ~/raspberrypi_k8s

To make sure that's all working, try logging into the Pi with:

ssh dbafromthecold@pi-nfs-server

Installing and configuring the NFS server

Great! Ok, now we can configure the Pi. First thing, let's rename it to pi-nfs-server and bounce it:

sudo hostnamectl set-hostname pi-nfs-server
sudo reboot

Once the Pi comes back up, log back in and install the NFS server itself:

sudo apt-get install -y nfs-kernel-server

Now we need to find the USB drive on the Pi so that we can mount it:

lsblk

And here you can see the USB drive as sda. Another way to find the disk is to run:

sudo lshw -class disk

So we need to get some more information about /dev/sda in order to mount it:

sudo blkid /dev/sda

Here you can see the UUID of the drive and that it's got a type of NTFS. Now we're going to create a folder to mount the drive on (/mnt/sqledge):

sudo mkdir /mnt/sqledge/

And then add a record for the mount into /etc/fstab using the UUID we got earlier for the drive:

sudo vim /etc/fstab

And add (changing the UUID to the value retrieved earlier):

UUID=242EC6792EC64390 /mnt/sqledge ntfs defaults 0 0

Then mount the drive to /mnt/sqledge:

sudo mount -a

To confirm the disk is mounted:

df -h

Great! We have our disk mounted. Now let's create some subfolders for the SQL system, data, and log files:

sudo mkdir /mnt/sqledge/{sqlsystem,sqldata,sqllog}

Ok, now we need to modify the exports file so that the server knows which directories to share. Get your user and group ID using the id command, then edit the /etc/exports file:

sudo vim /etc/exports

Add the following to the file:

/mnt/sqledge *(rw,all_squash,insecure,async,no_subtree_check,anonuid=1001,anongid=1001)

N.B. Update the final two numbers with the values from the id command. A full breakdown of what's happening in this file is detailed here. And then update the exports:

sudo exportfs -ra

Configuring the Kubernetes nodes

Each node in the cluster needs to have the NFS tools installed:

sudo apt-get install nfs-common

And each one will need a reference to the NFS server in its /etc/hosts file (on k8s-node-1, that's an entry pointing pi-nfs-server at 192.168.1.160).
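As an optional extra check (not in the original post), once nfs-common is on a node you can confirm the export is visible from there before creating any Kubernetes objects; this relies on the /etc/hosts entry just mentioned:

# from a worker node: list the exports offered by the NFS server
showmount -e pi-nfs-server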
Creating a persistent volume

Excellent stuff! Now we're good to go to create three persistent volumes for our Azure SQL Edge pod:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqlsystem-pv
spec:
  capacity:
    storage: 1024Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: pi-nfs-server
    path: "/mnt/sqledge/sqlsystem"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqldata-pv
spec:
  capacity:
    storage: 1024Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: pi-nfs-server
    path: "/mnt/sqledge/sqldata"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sqllog-pv
spec:
  capacity:
    storage: 1024Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: pi-nfs-server
    path: "/mnt/sqledge/sqllog"

What this file will do is create three persistent volumes, 1GB in size (although that will kinda be ignored as we're using NFS shares), in the ReadWriteOnce access mode, pointing at each of the folders we've created on the NFS server. We can either create the file and deploy, or run (do this locally with kubectl pointed at the Pi K8s cluster):

kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/da751e8c93a401524e4e59266812dc63/raw/d97c0a78887b6fcc41d0e48c46f05fe48981c530/azure-sql-edge-pv.yaml

To confirm:

kubectl get pv

Now we can create three persistent volume claims for the persistent volumes:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqlsystem-pvc
spec:
  volumeName: sqlsystem-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqldata-pvc
spec:
  volumeName: sqldata-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Mi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sqllog-pvc
spec:
  volumeName: sqllog-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1024Mi

Each one has the same access mode and size as the corresponding persistent volume. Again, we can create the file and deploy, or just run:

kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/0c8fcd74480bba8455672bb5f66a9d3c/raw/f3fdb63bdd039739ef7d7b6ab71196803bdfebb2/azure-sql-edge-pvc.yaml

And confirm with:

kubectl get pvc

The PVCs should all have a status of Bound, meaning that they've found their corresponding PVs. We can confirm this with:

kubectl get pv

Deploying Azure SQL Edge with persistent storage

Awesome stuff! Now we are good to go and deploy Azure SQL Edge to our Pi K8s cluster with persistent storage! Here's the yaml file for Azure SQL Edge:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqledge-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sqledge
  template:
    metadata:
      labels:
        app: sqledge
    spec:
      volumes:
        - name: sqlsystem
          persistentVolumeClaim:
            claimName: sqlsystem-pvc
        - name: sqldata
          persistentVolumeClaim:
            claimName: sqldata-pvc
        - name: sqllog
          persistentVolumeClaim:
            claimName: sqllog-pvc
      containers:
        - name: azuresqledge
          image: mcr.microsoft.com/azure-sql-edge:latest
          ports:
            - containerPort: 1433
          volumeMounts:
            - name: sqlsystem
              mountPath: /var/opt/mssql
            - name: sqldata
              mountPath: /var/opt/sqlserver/data
            - name: sqllog
              mountPath: /var/opt/sqlserver/log
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: "Testing1122"
            - name: MSSQL_AGENT_ENABLED
              value: "TRUE"
            - name: MSSQL_COLLATION
              value: "SQL_Latin1_General_CP1_CI_AS"
            - name: MSSQL_LCID
              value: "1033"
            - name: MSSQL_DATA_DIR
              value: "/var/opt/sqlserver/data"
            - name: MSSQL_LOG_DIR
              value: "/var/opt/sqlserver/log"
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 10001

So we're referencing our three persistent volume claims and mounting them as:

sqlsystem-pvc – /var/opt/mssql
sqldata-pvc – /var/opt/sqlserver/data
sqllog-pvc – /var/opt/sqlserver/log

We're also setting environment variables to set the default data and log paths to the paths mounted by the persistent volume claims. To deploy:

kubectl apply -f https://gist.githubusercontent.com/dbafromthecold/92ddea343d525f6c680d9e3fff4906c9/raw/4d1c071e9c515266662361e7c01a27cc162d08b1/azure-sql-edge-persistent.yaml

To confirm:

kubectl get all

All looks good! To dig in a little deeper:

kubectl describe pods -l app=sqledge
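Before the database test below, one extra check I'd add (not in the original post) is to look inside the running pod and confirm the three NFS-backed paths are actually mounted:

# confirm the mount points are visible inside a pod picked from the deployment
kubectl exec deployment/sqledge-deployment -- df -h /var/opt/mssql /var/opt/sqlserver/data /var/opt/sqlserver/log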
Testing the persistent volumes

But let's not take Kubernetes' word for it! Let's create a database and see it persist across pods. So expose the deployment:

kubectl expose deployment sqledge-deployment --type=LoadBalancer --port=1433 --target-port=1433

Get the external IP of the service created (provided by MetalLB, configured in the previous post):

kubectl get services

And now create a database with the mssql-cli:

mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "CREATE DATABASE [testdatabase];"

Confirm the database is there:

mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "SELECT [name] FROM sys.databases;"

Confirm the database files:

mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "USE [testdatabase]; EXEC sp_helpfile;"

We can even check on the NFS server itself:

ls -al /mnt/sqledge/sqldata
ls -al /mnt/sqledge/sqllog

Ok, so the "real" test. Let's delete the existing pod in the deployment and see if the new pod has the database:

kubectl delete pod -l app=sqledge

Wait for the new pod to come up:

kubectl get pods -o wide

And then see if our database is in the new pod:

mssql-cli -S 192.168.1.101 -U sa -P Testing1122 -Q "SELECT [name] FROM sys.databases;"

And that's it! We've successfully built a Pi NFS server to deploy persistent volumes to our Raspberry Pi Kubernetes cluster, so that we can persist databases from one pod to another! Phew! Thanks for reading! The post Provisioning storage for Azure SQL Edge running on a Raspberry Pi Kubernetes cluster appeared first on SQLServerCentral.

A Note to the PASS Board of Directors from Blog Posts - SQLServerCentral

Anonymous
06 Dec 2020
2 min read
I just read with dismay that Mindy Curnutt has resigned. That's a big loss at a time when the future of PASS is in doubt and we need all hands engaged. The reasons she gives for leaving with regard to secrecy and participation are concerning and troublesome, yet not really surprising. The cult of secrecy has existed at PASS for a long time, as has the tendency of the Executive Committee to be a closed circle that acts as if it is superior to the Board, when in fact the Board of Directors has the ultimate say on just about everything. You as a Board can force issues into the open or even disband the Executive Committee, but to do that you'll have to take ownership and stop thinking of the appointed officers as all powerful. The warning about morally wrong decisions is far more concerning. Those of us out here in the membership don't know what's going on. PASS hasn't written anything in clear and candid language about the state of PASS and the options being considered, or asked what we think about those options. Is there a reason not to have that conversation? Are you sure that if you can find a way for PASS to survive, it will be one we can support and admire? Leading is about more than being in the room and making decisions. Are you being a good leader, a good steward? From the outside it sure doesn't seem that way. The post A Note to the PASS Board of Directors appeared first on SQLServerCentral.

Goal Progress–November 2020 from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
3 min read
This is my report, which continues on from the October report. It's getting near the end of the year, and I wanted to track things a little tighter, and maybe inspire myself to push. Rating so far: C-

Reading Goals

Here were my goals for the year:

3 technical books
2 non-technical books – done

Books I've tackled:

Making Work Visible – complete
Pro Power BI Desktop – 70% complete
White Fragility – complete
The Biggest Bluff – complete
Team of Teams – 59% complete
Project to Product – new

I've made progress here. I have completed my two non-technical books, and actually exceeded this. My focus moved a bit into the more business side of things, so I'm on pace to complete 4 of these books. The tech books haven't been as successful: with my project work, I've ended up not being as focused as I'd like on my career, and more focused on tactical things that I need to work on for my job. I think I've learned some things, but not what I wanted. My push for December is to finish Team of Teams, get through Power BI Desktop, and then try to tackle one new tech book, either from the list I have or one I bought last winter and didn't read.

Project Goals

Here were my project goals, working with software:

A Power BI report that updates from a database
A mobile app reading data from somewhere
A website that showcases changes and data from a database

Ugh. I'm feeling bad here. I had planned on doing more Power BI work after the PASS Summit, thinking I'd get some things out of the pre-con. I did, but not practical things, so I need to put time into building up a Power BI report that I can use. I've waffled between one for the team I coach, which has little data but would be helpful to the athletes, and a personal one. I've downloaded some data about my life, but I haven't organized it into a database. I keep getting started with exercise data, Spotify data, travel data, etc., but not finishing. I've also avoided working on a website, and actually having to maintain it in some way. Not a good excuse. I think the mobile app is dead for this year. I don't really have enough time to dig in here, at least that's my thought. The website, however, should be easier. I wanted to use an example from a book, so I should make some time each week, as a personal project, and actually build this out. That's likely doable by Dec 21. The post Goal Progress–November 2020 appeared first on SQLServerCentral.

Azure Synapse Analytics is GA! from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
2 min read
(Note: I will give a demo on Azure Synapse Analytics this Saturday, Dec 5th at 1:10pm EST, at the PASS SQL Saturday Atlanta BI.) Great news! Azure Synapse Analytics is now GA (see the announcement). While most of the features are GA, there are a few that are still in preview. For those of you who were using the public preview version of Azure Synapse Analytics, nothing has changed – just access your Synapse workspace as before. For those of you who have a Synapse database (i.e. a SQL DW database) that was not under a Synapse workspace, your existing data warehouse resources are now listed under “Dedicated SQL pool (formerly SQL DW)” in the Azure portal (where you can still create a standalone database, called a SQL pool). You now have three options going forward for your existing database:

Standalone: Keep the database (called a SQL pool) as is and get none of the new workspace features listed here, but you are able to continue using your database, operations, automation, and tooling like before with no changes.

Enable Azure Synapse workspace features: Go to the overview page for your existing database, choose “New synapse workspace” in the top menu bar, and get all the new features except unified management and monitoring. All management operations will continue via the SQL resource provider. Except for SQL requests submitted via Synapse Studio, all SQL monitoring capabilities remain on the database (dedicated SQL pool). For more details on the steps to enable the workspace features, see Enabling Synapse workspace features for an existing dedicated SQL pool (formerly SQL DW).

Migrate to Azure Synapse workspace: Create a user-defined restore point through the Azure portal, create a new Synapse workspace or use an existing one, and then restore the database and get all the new features. All monitoring and management is done via the Synapse workspace and the Synapse Studio experience.

More info: Microsoft introduces Azure Purview data catalog; announces GA of Synapse Analytics. The post Azure Synapse Analytics is GA! first appeared on James Serra's Blog. The post Azure Synapse Analytics is GA! appeared first on SQLServerCentral.

5 Things You Should Know About Azure SQL from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
5 min read
Azure SQL offers up a world of benefits that can be captured by consumers if implemented correctly. It will not solve all your problems, but it can solve quite a few of them. When speaking to clients I often run into misconceptions as to what Azure SQL can really do. Let us look at a few of these to help eliminate any confusion.

You can scale easier and faster

Let us face it, I am old. I have been around the block in the IT realm for many years. I distinctly remember the days when scaling server hardware was a multi-month process that usually meant the scaled hardware was already out of date by the time the process was finished. With the introduction of cloud providers, scaling vertically or horizontally can usually be accomplished within a few clicks of the mouse. Often, once initiated, the scaling process is completed within minutes instead of months. This is multiple orders of magnitude better than having to procure hardware for such needs. The added benefit of this scaling ability is that you can then scale down when needed to help save on costs. Just like scaling up or out, this is accomplished with a few mouse clicks and a few minutes of your time.

It is not going to fix your performance issues

If you currently have performance issues with your existing infrastructure, Azure SQL is not necessarily going to solve your problem. Yes, you can hide the issue with faster and better hardware, but really the issue is still going to exist, and you need to deal with it. Furthermore, moving to Azure SQL could introduce additional issues if the underlying performance issue is not addressed beforehand. Make sure to look at your current workloads and address any performance issues you might find before migrating to the cloud. Furthermore, ensure that you understand the available service tiers that are offered for the Azure SQL products. By doing so, you'll help guarantee that your workloads have enough compute resources to run as optimally as possible.

You still must have a DR plan

If you have ever seen me present on Azure SQL, I'm quite certain you've heard me mention that one of the biggest mistakes you can make when moving to any cloud provider is not having a DR plan in place. There are a multitude of ways to ensure you have a proper disaster recovery strategy in place regardless of which Azure SQL product you are using. Platform as a Service (Azure SQL Database or SQL Managed Instance) offers automatic database backups, which solves one DR issue for you out of the gate. PaaS also offers geo-replication and automatic failover groups for additional disaster recovery solutions, which are easily implemented with a few clicks of the mouse. When working with SQL Server on an Azure virtual machine (which is Infrastructure as a Service), you can perform database backups through native SQL Server backups or tools like Azure Backup. Keep in mind that high availability is baked into the Azure service at every turn. However, high availability does not equal disaster recovery, and even cloud providers such as Azure do incur outages that can affect your production workloads. Make sure to implement a disaster recovery strategy and, furthermore, practice it.

It could save you money

When implemented correctly, Azure SQL could indeed save you money in the long run. However, it all depends on what your workloads and data volume look like. For example, due to the ease of scalability Azure SQL offers (even when scaling virtual machines), secondary replicas of your data could be at a lower service tier to minimize costs. In the event a failover needs to occur, you could then scale the resource to a higher-performing service tier to ensure workload compute requirements are met. Azure SQL Database offers a serverless tier that provides the ability for the database to be paused. When the database pauses, you will not be charged for any compute consumption. This is a great resource for unpredictable workloads.
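As an illustration of that serverless option (a rough sketch, not from the original post; the resource group, server, and database names are placeholders), creating a serverless Azure SQL Database with an auto-pause delay via the Azure CLI looks roughly like this:

# create a General Purpose serverless database that auto-pauses after 60 minutes of inactivity
az sql db create \
    --resource-group my-rg \
    --server my-sql-server \
    --name my-serverless-db \
    --edition GeneralPurpose \
    --family Gen5 \
    --compute-model Serverless \
    --capacity 2 \
    --auto-pause-delay 60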
Saving costs in any cloud provider implies knowing what options are available, as well as continued evaluation of which options would best suit your needs.

It is just SQL

Azure SQL is not magical, quite honestly. It really is just the same SQL engine you are used to with on-premises deployments. The real difference is how you engage with the product, and sometimes that can be scary if you are not used to it. As a self-proclaimed die-hard database administrator, it was daunting for me when I started to learn how Azure SQL would fit into modern-day workloads and potentially help save organizations money. In the end, though, it's the same product that many of us have been using for years.

Summary

In this blog post I've covered five things to know about Azure SQL. It is a powerful product that can help transform your own data ecosystem into a more capable platform to serve your customers for years to come. Cloud is definitely not a fad and is here to stay. Make sure that you expand your horizons and look upward, because that's where the market is going. If you aren't looking at Azure SQL currently, what are you waiting for? Just do it. © 2020, John Morehouse. All rights reserved. The post 5 Things You Should Know About Azure SQL first appeared on John Morehouse. The post 5 Things You Should Know About Azure SQL appeared first on SQLServerCentral.

Daily Coping 4 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
04 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I'm adding my responses for each day here. All my coping tips are under this tag. Today's tip is to enjoy new music today. Play, sing, dance, or listen. I enjoy lots of types of music, and I often look to grab something new from Spotify while I'm working, letting a particular album play through, or even going through the works of an artist, familiar or brand new. Recently I was re-watching The Chappelle Show online, and in the 2nd or 3rd episode of the show, he has Mos Def on as a guest. I do enjoy rap, and I realized that I had never really heard much from Mos. The next day I pulled up his catalog and let it play through while working. I love a smooth, continuous rap artist that brings a melody and a rhythm to the words. Mos Def does this, and I enjoyed hearing him entertain me for a few hours. If you like rap and haven't gone through his stuff, give him a listen. The post Daily Coping 4 Dec 2020 appeared first on SQLServerCentral.

Updating my Kubernetes Raspberry Pi Cluster to containerd from Blog Posts - SQLServerCentral

Anonymous
03 Dec 2020
4 min read
There's been a lot of conversation happening on Twitter over the last couple of days due to the fact that Docker is deprecated in Kubernetes v1.20. If you want to know more about the reasons why, I highly recommend checking out this twitter thread. I've recently built a Raspberry Pi Kubernetes cluster, so I thought I'd run through updating the nodes in-place to use containerd as the container runtime instead of Docker.

DISCLAIMER – You'd never do this for a production cluster. For those clusters, you'd simply get rid of the existing nodes and bring new ones in on a rolling basis. This blog is just me mucking about with my Raspberry Pi cluster to see if the update can be done in-place without having to rebuild the nodes (as I really didn't want to have to do that).

So the first thing to do is drain and cordon the node that is to be updated (my node is called k8s-node-1):

kubectl drain k8s-node-1 --ignore-daemonsets

Then ssh onto the node and stop the kubelet:

systemctl stop kubelet

Then remove Docker:

apt-get remove docker.io

Remove old dependencies:

apt-get autoremove

Now unmask the existing containerd service (containerd is used by Docker, so that's why it's already there):

systemctl unmask containerd

Install the dependencies required:

apt-get install unzip make golang-go libseccomp2 libseccomp-dev btrfs-progs libbtrfs-dev

OK, now we're following the instructions to install containerd from source detailed here. I installed from source as I tried to use apt-get to install (as detailed here on the Kubernetes docs) but it wouldn't work for me. No idea why, I didn't spend too much time looking and, tbh, I haven't installed anything from source before, so this was kinda fun (once it worked). Anyway, doing everything as root, grab the containerd source:

go get -d github.com/containerd/containerd

Now grab protoc and install it:

wget -c https://github.com/google/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip
sudo unzip protoc-3.11.4-linux-x86_64.zip -d /usr/local

Get the runc code:

go get -d github.com/opencontainers/runc

Navigate to the downloaded package (check your $GOPATH variable; mine was set to ~/go), cd into it, and use make to build and install:

cd ~/go/src/github.com/opencontainers/runc
make
make install

Now we're going to do the same thing with containerd itself:

cd ~/go/src/github.com/containerd/containerd
make
make install

Cool. Now copy the containerd.service file to systemd to create the containerd service:

cp containerd.service /etc/systemd/system/
chmod 644 /etc/systemd/system/containerd.service

And start containerd:

systemctl daemon-reload
systemctl start containerd
systemctl enable containerd

Let's confirm containerd is up and running:

systemctl status containerd

Awesome!
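One small addition of mine here: the make install above also puts the ctr client on the node, so you can check that the containerd API itself is answering, rather than just that the systemd unit is active:

# query the containerd daemon directly; client and server versions should both come back
sudo ctr version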
Nearly done. Now we need to update the kubelet to use containerd, as it defaults to Docker. We can do this by running:

sed -i 's|3.2|3.2 --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock|g' /var/lib/kubelet/kubeadm-flags.env

The flags for the kubelet are detailed here. I'm using sed to append the flags, but if that doesn't work, edit the file manually with vim:

vim /var/lib/kubelet/kubeadm-flags.env

And the following flags need to be added:

--container-runtime=remote
--container-runtime-endpoint=unix:///run/containerd/containerd.sock

OK, now that's done we can start the kubelet:

systemctl start kubelet

And confirm that it's working:

systemctl status kubelet

N.B. Scroll to the right and we can see the new flags. Finally, uncordon the node. So back on the local machine:

kubectl uncordon k8s-node-1

Run through that for all the worker nodes in the cluster. I did the control node as well following these instructions (I didn't drain/cordon it) and it worked a charm!

kubectl get nodes -o wide

Thanks for reading! The post Updating my Kubernetes Raspberry Pi Cluster to containerd appeared first on SQLServerCentral.

Daily Coping 3 Dec 2020 from Blog Posts - SQLServerCentral

Anonymous
03 Dec 2020
2 min read
I started to add a daily coping tip to the SQLServerCentral newsletter and to the Community Circle, which is helping me deal with the issues in the world. I’m adding my responses for each day here. Today’s tip is to join a friend doing their hobby and find out why they love it. Joining someone else isn’t really a good idea or very possible during this time. Colorado is slightly locked down, so it’s not necessarily legal, and likely not a good idea to join someone else. However, my daughter picked up some supplies and started knitting recently. I decided to sit with her a bit and see how the new hobby is progressing. It’s something I’ve been lightly interested in, and it looks somewhat zen to sit and allow your hands to move along, building something while you sit quietly. I remember reading about Rosey Grier picking up the hobby years ago. I have done some minor paracord crafts, usually making some bag pulls for the kids I coach. This was similar, and while I don’t need another hobby now, I enjoyed watching her work. The post Daily Coping 3 Dec 2020 appeared first on SQLServerCentral.

SQL Homework – December 2020 – Participate in the Advent of Code. from Blog Posts - SQLServerCentral

Anonymous
03 Dec 2020
2 min read
Christmas. Depending on where you live, it's a big deal even if you aren't Christian. It pervades almost every aspect of life. And this year it's going to seep into your professional life. How? I want you to do an Advent calendar. If you've never heard of them, an Advent calendar is a mini gift each day leading up to Christmas. I'll be honest, that's about all I know about Advent calendars. I'm sure there's more to it than that, but this is a SQL blog, not a religious one, so I'm not doing a whole lot of research on the subject. So what does an Advent calendar have to do with SQL? The Advent of Code! For each of the first 25 days of December there is a two-part programming challenge. These challenges are Christmas themed and can be fairly amusing. They are also some of the best put together programming puzzles I've seen. For example, for day one you were given a list of numbers. The challenge was to find the one combination where two numbers could be added together to get 2020. Then you had to return the product of those two numbers. Not overly difficult with SQL, but remember that these are programming challenges. Some will favor SQL, some won't. Once you've input the correct answer you'll get part two of the challenge for the day. Here's what I want you to do. Participate in at least 10-20 days, and that's both parts of the puzzle. If you feel like stretching yourself a bit, give it a shot in multiple languages. Studying Python? Do it with both T-SQL and Python. Powershell? Give that a shot too. Each language has its strengths and weaknesses. Try to play to the strength of each language. The post SQL Homework – December 2020 – Participate in the Advent of Code. appeared first on SQLServerCentral.

[Video] Azure SQL Database – Import a Database from Blog Posts - SQLServerCentral

Anonymous
03 Dec 2020
1 min read
A quick video showing you how to use a BACPAC to “import” a database into Azure (via a storage container). The post [Video] Azure SQL Database – Import a Database appeared first on SQLServerCentral.
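For anyone who prefers the command line to the portal steps shown in the video, a rough Azure CLI equivalent is below; the resource names, storage URI, and key are placeholders, so treat it as a sketch rather than the video's exact method.

# import a BACPAC from a storage container into a database on an Azure SQL server
az sql db import \
    --resource-group my-rg \
    --server my-sql-server \
    --name ImportedDb \
    --admin-user sqladmin \
    --admin-password 'ChangeMe123!' \
    --storage-key-type StorageAccessKey \
    --storage-key '<storage-account-key>' \
    --storage-uri https://mystorageaccount.blob.core.windows.net/bacpacs/mydb.bacpac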

Requesting Comments on the SQLOrlando Operations Manual from Blog Posts - SQLServerCentral

Anonymous
02 Dec 2020
1 min read
For the past couple of weeks I've been trying to capture a lot of ideas about how and what and why we do things in Orlando and put them into an organized format. I'm sharing it here in hopes that some of you will find it useful, and that some of you will have questions, comments, or suggestions that would make it better. I'll write more about it later this week; for now I'll let the document stand on its own, with one exception – below is a list of all the templates we have in Trello that have the details on how to do many of our recurring tasks. I'll share all of that in the next week or so as well. SQLOrlando Operating Manual (download). The post Requesting Comments on the SQLOrlando Operations Manual appeared first on SQLServerCentral.