Channel: SQL Server Blog

SQL SERVER – Improve Application Performance on Cloud While Reducing Bandwidth Cost


It is very common nowadays for people to move databases to the cloud. The very first question I often hear from them is: how do I improve application performance on the cloud while reducing bandwidth cost?

However, once the data is moved to any cloud there are two major challenges I see.


Challenge 1: Network Latency / Congestion and Slow Performance

Challenge 2: Network Bandwidth Congestion and High Cost

Every time I go to a performance tuning consultation for a cloud-hosted database, I hear about slow performance and high cost. As a matter of fact, it is so common that when I know the database is hosted in the cloud, I tell my customer that I already know their problems: a) slow performance due to network latency and b) high cost due to high consumption of data bandwidth.

If you have a very small database, you may not face issues with the high cost of bandwidth, but if you are running an enterprise-grade application where you move quite a lot of data from one server to another, I am very confident that you are facing the above two issues.

For example, Microsoft charges an egress (data transfer out) cost of around USD 21.32 per 250 MB and Amazon charges around USD 23.27. Now look at your database and find the size of the largest table; next, think about how much it would cost if, for any reason, you wanted to export that table to your on-premises system and had to transfer out that much data. Trust me, in no time you can almost cross the limit of your credit card. It is indeed a very big challenge, and a REAL ONE!
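To get a feel for the math, here is a small sketch that estimates egress cost from the per-250 MB rates quoted above (the 100 GB table size is a hypothetical example; verify current pricing with your provider):

```python
# Sketch: estimate egress cost for moving data out of the cloud,
# using a per-250 MB rate like the ones quoted above.
def egress_cost_usd(table_size_gb: float, rate_per_250mb_usd: float) -> float:
    """Cost = number of 250 MB blocks * rate per block."""
    blocks = (table_size_gb * 1024) / 250  # GB -> MB -> 250 MB blocks
    return blocks * rate_per_250mb_usd

# A hypothetical 100 GB table at the quoted Azure rate:
print(round(egress_cost_usd(100, 21.32), 2))
```

Even a single 100 GB export adds up quickly at those rates, which is the point of the paragraph above.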

Smart Solution

Though the challenge is quite big, the solution is very simple. Actually, the solution requires just a couple of clicks and four minutes of your time.

I installed NitroAccelerator on my cloud machine and my local machine. Once I had installed it, I ran two different tests, transferring around 25,000 rows of data. When I saw the results, I was indeed very surprised and happy.

Here is the video, less than 4 minutes long, where I narrate the entire story.

Test                  | Time (s) | Data Size (KB)
NitroAccelerator OFF  | 48       | 2267
NitroAccelerator ON   | 12       | 31

You can see that when I turned ON NitroAccelerator, the query time dropped from 48 seconds to 12 seconds - roughly a quarter of the original time, or around a 300% performance improvement - and the data size was reduced from 2267 KB to just 31 KB, which is around 98% compression of the data. These tests do not include using the NitroAccelerator caching features for applications with redundant queries, or using multiple active result sets (MARS), where you would get even more performance and cost-saving benefits.

Here is my question to you – answer me honestly.

Do you want to gain 3 times more performance by reducing your cost by 98%?

If the answer is YES, watch the above video and download NitroAccelerator.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Improve Application Performance on Cloud While Reducing Bandwidth Cost


SQL SERVER – Fix: Error: 1934, Level 16, INSERT or UPDATE Failed Because the Following SET Options have Incorrect Settings: ‘QUOTED_IDENTIFIER’.


A very old client of mine pinged me yesterday with this question about an error related to QUOTED_IDENTIFIER.

“Pinal,

When I run my query, it works just fine in SSMS, but when I run it via a SQL Server Agent job, it gives me the following error. Can you give me a solution?”

Here is the error he was facing:

Msg 1934, Level 16, State 1
UPDATE Failed Because the Following SET Options have Incorrect Settings: ‘QUOTED_IDENTIFIER’.

Solution / Workaround

The problem he was facing was very simple to fix. The update he was running worked just fine in SSMS because the SET options of the SSMS window were different from the ones in effect for the job running the query.

Here is the solution of this simple error:

SET QUOTED_IDENTIFIER ON
GO
-- Write Your Query

When you turn on the quoted identifier setting at the top of the script, it will automatically remove the error for you. I hope this simple fix helps you.
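For context, here is a minimal, hypothetical repro (the table and index names are mine, not from the original question): INSERT or UPDATE against a table with a filtered index requires QUOTED_IDENTIFIER to be ON, which is one common way to hit error 1934 from an Agent job.

```sql
-- Hypothetical repro: a filtered index forces QUOTED_IDENTIFIER ON
CREATE TABLE dbo.Orders (OrderID INT PRIMARY KEY, Status VARCHAR(10));
CREATE INDEX IX_Orders_Open ON dbo.Orders (OrderID) WHERE Status = 'Open';
GO
SET QUOTED_IDENTIFIER OFF;
-- Fails with Msg 1934:
UPDATE dbo.Orders SET Status = 'Closed' WHERE OrderID = 1;
GO
SET QUOTED_IDENTIFIER ON;
-- Succeeds:
UPDATE dbo.Orders SET Status = 'Closed' WHERE OrderID = 1;
GO
```

The same requirement applies to indexed views and indexes on computed columns, which is why a query can work in SSMS (where QUOTED_IDENTIFIER defaults to ON) yet fail in a job step.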

If you want to read more about what the quoted identifier is, you can read my following blog where I have explained the same issue in detail.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Fix: Error: 1934, Level 16, INSERT or UPDATE Failed Because the Following SET Options have Incorrect Settings: ‘QUOTED_IDENTIFIER’.

SQL SERVER – Event ID 26 – Your SQL Server Installation is Either Corrupt or has Been Tampered With (Error Getting Instance Name)


While trying to start SQL Service for a named instance, I got an error message related to event id 26 – Your SQL Server Installation is Either Corrupt or has Been Tampered With (Error Getting Instance Name).

Here is the text of the error which is related to event id 26.

Windows could not start the SQL Server (SOFTWARE) service on Local Computer. Error 1067: The process terminated unexpectedly.

When I checked the event log, I found the following error messages:

Error Messages 1: The SQL Server (SOFTWARE) service terminated unexpectedly.  It has done this 4 time(s).

Error Message 2: Application popup: SQL Server: Your SQL Server installation is either corrupt or has been tampered with (Error getting instance name.).   Please uninstall then re-run setup to correct this problem.

The error gives a hint about the issue, but I was not very sure what I should do. I searched on the internet and found that we get "Error getting instance name" when the mapping between the instance ID and the instance name is incorrect. So, I looked into this registry key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL


As we can see above, the SOFTWARE instance is mapped to MSSQL13.SOFTWARE, so I looked for this key:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.SOFTWARE

Interestingly, I found that someone had renamed MSSQL13.SOFTWARE to MSSQL13.SOFTWARE1, which was causing the issue.


As soon as I renamed the key back, the issue was resolved.

Another way to find the cause of such errors is to start sqlservr.exe from a command prompt, where you might get a more meaningful error.
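If you prefer checking the mapping without opening regedit, something like the following from an elevated command prompt would show both keys (the instance name SOFTWARE and the MSSQL13 version prefix match this example; adjust them for your instance and SQL Server version):

```shell
:: List instance-name-to-instance-ID mappings
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"

:: Confirm the instance ID key the mapping points at actually exists
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL13.SOFTWARE"
```

If the first query maps the instance to a key that the second query cannot find, you have the same rename problem described above.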

I don’t know who renamed the key and that is still a mystery.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Event ID 26 – Your SQL Server Installation is Either Corrupt or has Been Tampered With (Error Getting Instance Name)


SQL SERVER – Maintenance Plan failing with 0x80131904 – A network-related or instance-specific error occurred while establishing a connection to SQL Server.


One of my clients contacted me and informed me that after patching SQL Server, they noticed that their maintenance plans were failing. I asked them to share the complete error message about the maintenance plan failing with 0x80131904.

End Warning Error: 2017-01-09 17:22:27.10 Code: 0xC0024104 Source: Back Up Database Task Description: The Execute method on the task returned error code 0x80131904 (A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 – Error Locating Server/Instance Specified)). The Execute method must succeed, and indicate the result using an “out” parameter. End Error

I have removed some parts of the message which are not relevant. In the above long error information, the interesting piece is "error: 26 – Error Locating Server/Instance Specified".

Whenever we create a Maintenance Plan, SQL Server Management Studio always takes the name of the connection manager as “Local server connection” by default.

The important thing to note is that it uses the server name that we used in SSMS to connect to the instance, and puts this exact name in the server name property of the connection manager "Local server connection".

Does this give some hint?

SOLUTION / WORKAROUND

It is quite possible that the SQL Server port was changed after the maintenance plan was created, back when we connected earlier. We need to check what server name value is mentioned in the connection manager "Local server connection". To see that, we can click "Manage Connections" after we click Modify on the maintenance plan.


Now, we can use SQL Server Configuration Manager and create an alias with the exact same name, providing TCP and the manual port number. Once we created the alias, the maintenance plan ran without an error.
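If you are unsure which port the instance is now listening on (for the alias), one quick way to check from any working connection is the DMV below; this is a sketch and only reports ports that current connections arrived on:

```sql
-- Sketch: list the TCP port(s) current connections arrived on
SELECT DISTINCT local_tcp_port
FROM sys.dm_exec_connections
WHERE local_tcp_port IS NOT NULL;
```

You can also find the port in the SQL Server error log at startup, or in Configuration Manager under the instance's TCP/IP protocol properties.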

Hopefully you know how to create TCP aliases in SQL Server Configuration Manager. If not, refer to the Books Online documentation: Create or Delete a Server Alias for Use by a Client (SQL Server Configuration Manager).

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Maintenance Plan failing with 0x80131904 – A network-related or instance-specific error occurred while establishing a connection to SQL Server.

#0400 – SQL Server – SSIS – Using the SQL Server Destination

When inserting bulk data, the SQL Server destination may often prove to be a better option as compared to the OLE DB destination.

Monday Coffee 2017-02-27


Ergh, not a fun weekend rugby wise. But anyway…

Last week Microsoft released an image for SQL Server 2016 SP1 Developer Edition in containers. Previously the only edition available was vNext Enterprise Evaluation which was a real problem in making containers a viable option for many businesses.

There’s no point in having a development environment referencing a SQL instance that is not the same version as production. How many people would be running vNext in production? I bet there’s a few (mad) early adopters out there but in the main, I would say most businesses would be running 2016, 2014 or 2012.

Having this image available means that developers/DBAs can now seriously look at containers as an option when building development environments. Need to build an environment quickly? That’s what containers give you. I’d love to see this technology become widely used in the SQL Server world. I’ve been working with them for over a year now and being able to spin up a new instance of SQL Server in seconds is really cool.

It does beg the question: are Microsoft going to release images for other, earlier versions of SQL Server? I'm honestly not sure that they will, but if they want containers to become more widespread, that would be the way to do it. We'll see what happens, but even if they don't, there are other options out there.

Have a good week!


SQL SERVER – Troubleshooting: When Database Creation Takes Long Time


This might not be an issue faced by SQL DBAs regularly, because databases are generally already created. There might be a few DBAs out there who take care of deployments as well. So, if you are having an issue with slow database creation, then you have found the right blog to try some troubleshooting.

Are you running the command without a size?

If yes, then you should know what size SQL Server will assume. Going back to basics, the model database in SQL Server is used as the template for all databases created on an instance of SQL Server. The entire contents of the model database, including database options, are copied to the new database. So, what would you check?


Yes, we would check the size of the model database. I remember a client who had changed the model database's default data and log file sizes; we found each set to more than 1024 MB in their environment. Earlier it was taking 10 minutes to create a new database (we were just running CREATE DATABASE FOO); as soon as we reduced the initial size of both files in the model database to sensible values, we could create a new database within 8 to 10 seconds.
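A quick way to check the model database's current file sizes is the sketch below (size is stored in 8 KB pages, hence the conversion to MB):

```sql
-- Sketch: current data/log file sizes of the model database
SELECT name,
       type_desc,
       size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('model');
```

If those values are unexpectedly large, every new database created without an explicit size inherits them.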

I always rely on DMVs to give me some pointers about what’s going on. Below is one of my favorite commands

SELECT * FROM sys.dm_exec_requests WHERE session_id = <your_session_id>;

When I checked the last_wait_type for the database creation SPID, it was found to be IO_COMPLETION.

Are you running the command with a size?

If you are running something like below

CREATE DATABASE [MyHUGEDatabase] ON PRIMARY (
	NAME = N'MyHUGEDatabase_dat'
	,FILENAME = N'D:\HugeDB\MyHUGEDatabase.mdf'
	,SIZE = 150000 MB
	,MAXSIZE = 400000 MB
	,FILEGROWTH = 51200 KB
	) LOG ON (
	NAME = N'MyHUGEDatabase_log'
	,FILENAME = N'D:\HugeDB\MyHUGEDatabase_log.ldf'
	,SIZE = 50000 MB
	,MAXSIZE = 100000 MB
	,FILEGROWTH = 15360 KB
	) COLLATE SQL_Latin1_General_CP1_CI_AS
GO

Watch out for the sizes given in the above query: a 150 GB data file and a log file starting at roughly 50 GB with a 100 GB maximum. It is debatable whether we need that much transaction log for a 150 GB database. But keeping that point aside, we need to check whether the "Perform Volume Maintenance Tasks" privilege is set for the SQL Server service account. If not, then enabling it will speed up the initialization of the data files when creating or expanding them. This has been explained in many blogs; it is called Instant File Initialization. You can search for it and get tons of articles and samples.
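On newer builds (SQL Server 2016 SP1 and later, if I recall correctly), you can check whether Instant File Initialization is enabled for the service account directly from a DMV:

```sql
-- Sketch: shows Y/N per service; note that only data files benefit from IFI,
-- log files are always zero-initialized
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```

On older versions you would instead check the SQL Server error log at startup or the service account's rights in Local Security Policy.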

Did you find this useful?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Troubleshooting: When Database Creation Takes Long Time


SQL Server Target vs Total Memory

For this blog post I want to discuss the meaning behind the SQL Server: Memory Manager\Target Server Memory (KB) and SQL Server: Memory Manager\Total Server Memory (KB) Perfmon counters. I will mention how, under different situations and configuration settings, their behaviour … Continue reading

Azure Data Factory and SSIS compared


I see a lot of confusion when it comes to Azure Data Factory (ADF) and how it compares to SSIS.  It is not simply “SSIS in the cloud”.  See What is Azure Data Factory? for an overview of ADF, and I’ll assume you know SSIS.  So how are they different?

SSIS is an Extract-Transform-Load (ETL) tool, but ADF is an Extract-and-Load tool: it does not do any transformations within the tool itself. Instead, those would be done by ADF calling a stored procedure on a SQL Server that does the transformation, or calling a Hive job, or a U-SQL job in Azure Data Lake Analytics, as examples. Think of it more as an orchestration tool. SSIS has the added benefit of doing transformations, but keep in mind the performance of any transformations depends on the power of the server that SSIS is installed on, as the data to be transformed will be pushed to that SSIS server. Other major differences:

  • ADF is a cloud-based service (via ADF editor in Azure portal) and since it is a PaaS tool does not require hardware or any installation.  SSIS is a desktop tool (via SSDT) and requires a good-sized server that you have to manage and you have to install SQL Server with SSIS
  • ADF uses JSON scripts for its orchestration (coding), while SSIS uses drag-and-drop tasks (no coding)
  • ADF is pay-as-you-go via an Azure subscription, SSIS is a license cost as part of SQL Server
  • ADF can fire-up HDInsights clusters and run Pig and Hive scripts.  SSIS cannot
  • SSIS has a powerful GUI, intellisense, and debugging.  ADF has a basic editor and no intellisense or debugging
  • SSIS is administered via SSMS, while ADF is administered via the Azure portal
  • SSIS has a wider range of supported data sources and destinations
  • SSIS has a programming SDK, automation via BIML, and third-party components.  ADF does not have a programming SDK, has automation via PowerShell, and no third-party components
  • SSIS has error handling.  ADF does not
  • ADF has "data lineage", tagging and tracking the data from different sources.  SSIS does not have this

Think of ADF as a complementary service to SSIS, with its main use case confined to inexpensively dealing with big data in the cloud.
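As a rough illustration of the JSON-based orchestration mentioned above, an ADF (v1) pipeline activity that hands transformation off to a stored procedure might look like the sketch below. All names here are hypothetical, and the exact schema may differ in your ADF version:

```json
{
  "name": "RunTransform",
  "type": "SqlServerStoredProcedure",
  "typeProperties": {
    "storedProcedureName": "dbo.usp_TransformSales"
  },
  "outputs": [ { "name": "TransformedSalesDataset" } ],
  "scheduler": { "frequency": "Day", "interval": 1 }
}
```

Note how ADF itself only schedules and invokes the work; the transformation logic lives in the stored procedure on the SQL Server.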

Note that moving to the cloud requires you to think differently when it comes to loading a large amount of data, especially when using a product like SQL Data Warehouse (see Azure SQL Data Warehouse loading patterns and strategies).

More info:

Azure Data Factory vs SSIS

SQL SERVER – Unable to See Tables (Objects) in SSMS


Once I was thrown into an interesting situation where a client told me that he could see the data from a table but not the table itself. I asked him to elaborate, and he sent me the screenshots below, where he was unable to see the table in SSMS.

Below we can see that a sysadmin-level account is connected to SQL Server and can see database foo and also Table_1 in the database.


Below is a non-sysadmin user who is not able to see the database in Object Explorer, BUT when he queries the table Table_1 in the foo database, it works fine.


I took a little time to capture a Profiler trace while opening Management Studio. It didn't take much time to realize that a DENY permission had been applied somewhere. I asked them to share the output of the query below.

USE master
GO

SELECT class
	,major_id
	,grantee_principal_id
	,permission_name
	,state_desc
FROM sys.server_permissions
WHERE state_desc = 'DENY'
GO

USE foo -- permission in database
GO

SELECT class
	,major_id
	,grantee_principal_id
	,permission_name
	,state_desc
FROM sys.database_permissions
WHERE state_desc = 'DENY'
GO

Here is the output.


So, as we can see, we have a DENY permission on VIEW ANY DATABASE, and that's why we are seeing the above behavior.

WORKAROUND / SOLUTION

We need to REVOKE the DENY permission which was given earlier.

USE master
REVOKE VIEW ANY DATABASE FROM SQL1
USE foo
REVOKE VIEW DEFINITION ON table_1 FROM SQL1

Here is the query which was used to create this scenario

USE master
DENY VIEW any database TO SQL1
USE foo
DENY VIEW DEFINITION on table_1 to SQL1

As a DBA, you can now prevent users from seeing the schema of a database/table in SSMS. Have you used such low-level permissions in SQL Server?

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Unable to See Tables (Objects) in SSMS

Power BI Hands-On Workshops in April, 2017

During the month of April, I will be delivering three full-day Power BI hands-on workshops. Each of these events will be held the Friday preceding these SQL Saturday events. Seating is limited and many of these workshops tend to book up. Follow the links to register. Huntington Beach, Orange County, CA; March 31 SQL Saturday: April 1 Madison, … Continue reading

Tempdb settings in SQL Server 2016

A nice little error log feature that I noticed in SQL Server 2016 regarding tempdb. Tempdb, a system database in SQL Server, some call it the “public toilet” but honestly it is where temporary user objects are created, internal objects … Continue reading

SqlPackage Deploy Performance - IgnoreXX are not your friend!


Following on from yesterday's blog, I was wondering about the comparison of objects that are the same, and how the IgnoreWhitespace, IgnoreComments, IgnoreKeywordCasing and IgnoreSemiColonsBetweenStatements flags affect the comparison. To be fair, I was only interested in IgnoreWhitespace, but it turns out that those four are very closely related.

When the deploy happens, where a script in the source and target are compared the process is:

  1. Loads of things we will skip
  2. Any cmd variables in the scripts are replaced with their appropriate values
  3. If both scripts are null, the comparison returns true. This has to be the best case for performance but the worst for functionality ;)
  4. If one script is null but not the other, the comparison returns false. This has to be the best case for comparison performance but the worst for deploy performance!
  5. We then get a good old-fashioned String.Equals; the standard .NET compare goes: if both strings are not null and the lengths are the same, check each byte in the strings
  6. If the strings are equal we have a match; happy days, no more action required

It is what happens when the strings do not match that starts to get more interesting: if the strings are not equal and any of those four ignore options are true, we fall into doing a further comparison, but only after the scripts have been normalized using the ScriptDom and ANTLR, which is an expensive operation in itself (this also happens to be my next topic!).

Once the normalization has been done we end up in the actual compare which goes like this:

  1. Turn the script into a stream of tokens
  2. If the token is a comment and ignore comments is set, skip it
  3. If the token is whitespace and ignore whitespace is set, skip it
  4. If the token is a semi-colon and ignore semi-colons is set, skip it
  5. Then compare the tokens, which itself does things like use IgnoreKeywordCasing and remove quotes around quoted identifiers - it isn't a straightforward String.Equals
  6. If any of the tokens don't match, it is a failure and the script needs to be changed

So what?

So blunt. Anyway, basically what this means is that the default options in sqlpackage.exe are set to allow things like differently cased keywords and extra whitespace, and to allow that we end up taking longer to do deployments whenever we actually make use of those default features.

huh?

If you have a database with lots of code and you have the code in SSDT, but you do things like change the comments when you deploy and rely on IgnoreComments (this is a real-life scenario I have seen: someone adding a custom comment header), then you will have slower deployments. As slower deployments are the opposite of what we want, you should:

  • Have the same code in your database as you have in your project
  • Have the same code, including the same case of keywords, comments and whitespace in your database that you have in your project
  • Disable the defaults and set IgnoreWhitespace, IgnoreComments, IgnoreKeywordCasing and IgnoreSemiColonsBetweenStatements all to false
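For reference, disabling the four defaults from the command line might look like the sketch below. The server, database and dacpac names are hypothetical, and you should double-check the exact property names (particularly the semi-colon one, whose documented spelling differs slightly from the flag name used in prose above) against your SqlPackage version:

```shell
sqlpackage /Action:Publish \
    /SourceFile:MyDatabase.dacpac \
    /TargetServerName:MyServer \
    /TargetDatabaseName:MyDatabase \
    /p:IgnoreWhitespace=false \
    /p:IgnoreComments=false \
    /p:IgnoreKeywordCasing=false \
    /p:IgnoreSemicolonBetweenStatements=false
```

The same properties can be set in a publish profile (.publish.xml) if you deploy from Visual Studio.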

What effect does it have?

If your database and project code are exactly the same, then there is no effect; you neither gain nor lose anything.

If your database and code differ by comments, case, semi-colons etc. and you have lots of files that are different, then you will gain quite a bit. On my machine I created a database with 1,000 stored procedures like "select '----'" (I used REPLICATE to make it large). I then imported the procs into SSDT, added a space between the SELECT and the text, and did a deploy using sqlpackage (in fact I did a few to get an average time). With the default IgnoreWhitespace=true, the deploy took about 24 seconds (remember this is on a machine following yesterday's recommendations: lots of memory, fast CPU and SSD). When I set the defaults to false, the first deploy took 34 seconds, because naturally it had to deploy the procs; re-running it took around 17 seconds - about 7 seconds off a 24-second deploy, which I'll take.

The thing that you will really gain is that your project code and database will be the same, which should really be the end goal. If you can honestly say that you have to have:

  • Different whitespace
  • Different keyword casing
  • Different semi-colons
  • Different comments

I would be inclined to find out why as it sounds like an interesting project :)

ScriptDom parsing and NoViableAltExceptions


If you have ever tried to debug a program that uses the TSql ScriptDom to parse some T-SQL, you will know that the process is extremely slow, and this is due to the volume of NoViableAltExceptions (and others) that are thrown and then caught. These are first-chance exceptions, they are being handled, and it is down to the way the ScriptDom interacts with ANTLR and the lexer it uses. When you debug a program, what happens is you have two processes: process one is the debugger, which starts (or attaches to) process two, the debuggee.

The debugger calls a Windows function, WaitForDebugEvent, typically in a "while(true)" loop (everyone should write a Windows debugger at some point in their lives, you learn so much; in fact, put down SSMS and go write your first debugger loop: https://msdn.microsoft.com/en-us/library/windows/desktop/ms681675(v=vs.85).aspx). The debuggee app is then run, and when something interesting happens, like an exception or a DLL being loaded/unloaded, the debuggee is paused (i.e. all threads stopped); then WaitForDebugEvent returns and the debugger can look at the child process and either do something or call WaitForDebugEvent again. Even if the debugger doesn't care about the exceptions, the debuggee is still paused. When you parse T-SQL under a debugger, even if you tell Visual Studio to ignore the exceptions, the debuggee is still paused for every exception just so Visual Studio (or your home-baked debugger) can decide that it wants to ignore that exception, and then the debuggee is started up again.

What this means for an app that throws lots of first-chance exceptions is a constant start, stop, start, stop, which is painful for performance - it is basically impossible to debug a TSql ScriptDom parse on a large project. I typically debug a project with one table and one proc and hope it gives me everything I need, or do other tricks like letting the parsing happen without a debugger attached and then attaching a debugger at the right point after the parsing has happened. But then again, I don't have to debug the TSql lexers!

So where is this leading?

I was wondering what effect these first chance exceptions had on T-SQL and even in normal operations where we don't have a debugger attached, is there something we can do to speed up the processing?

The first thing I wanted to do was to try to reproduce a NoViableAltException, I kind of thought it would take me a few goes but actually the first statement I wrote caused one:

"select 1;"

This got me curious so I tried just:

"select 1"

Guess what? No NoViableAltException the second time - this didn't look good; should we remove all the semi-colons from our code? (spoiler: no!)

OK, so we have a reproducible query that causes a first-chance exception. What if we parse this 1,000 times and record the times, and then another 1,000 times with the semi-colon replaced with a space (so it is the same length)?

Guess what? The processing without the semi-colon took just over half the time of the queries with semi-colons: the average time to process the small query with a semi-colon was 700 ms, and the query without took 420 ms. Much faster, but who cares about 300 milliseconds? It is less than a second and really won't make much difference in the overall time to publish a dacpac.

I thought I would have one more go at validating a real-life(ish) database, so I grabbed the WideWorldImporters database, scripted out the objects and broke the script into batches, splitting on GO, either leaving the semi-colons in or removing them all. With semi-colons, the time taken to process was 620 ms and there were 2,403 first-chance exceptions. The second run, without semi-colons (which would likely create invalid SQL in some cases), took 550 ms and there were still 1,323 first-chance exceptions. I think if we could get rid of all the first-chance exceptions the processing would be much faster, but ho hum - to handle the first-chance exceptions you just need a fast CPU and not to be a process that is being debugged.


HOW TO: Solve General SQL Server Connectivity Issues


Recently, Microsoft published a page for solving general SQL Server connectivity issues. This page is perfect for you or anyone you know that has suffered from one of the following error messages in the past.

The post HOW TO: Solve General SQL Server Connectivity Issues appeared first on Thomas LaRock.

If you liked this post then consider subscribing to the IS [NOT] NULL newsletter: http://thomaslarock.com/is-not-null-newsletter/

SQL SERVER – Add Failover Cluster Node Fails With Error – This SQL Server Edition Does Not Support the Installed Number of Cluster Nodes


In this blog post we will discover how to fix an "Add Failover Cluster Node" failure. If you have installed a SQL Server cluster, then you will remember that it's a two-step process.

  1. InstallFailoverCluster
  2. AddNode

My client completed step 1 successfully, but while adding the second node, he was getting the below error in Detail.txt.

(13) 2017-01-08 14:33:01 Slp: Executing rules engine…
(13) 2017-01-08 14:33:01 Slp: Start rule execution, total number of rules loaded: 18
(13) 2017-01-08 14:33:01 Slp: Initializing rule : Number of cluster nodes supported for edition
(13) 2017-01-08 14:33:01 Slp: Rule is will be executed : True
(13) 2017-01-08 14:33:01 Slp: Init rule target object: Microsoft.SqlServer.Configuration.SetupExtension.NumberOfNodesFacet
(13) 2017-01-08 14:33:01 Slp: Rule ‘Cluster_NumberOfNodes’ edition Invalid allows 0 cluster nodes.
(13) 2017-01-08 14:33:01 Slp: Rule ‘Cluster_NumberOfNodes’ detected 1 cluster nodes.
(13) 2017-01-08 14:33:01 Slp: Evaluating rule : Cluster_NumberOfNodes
(13) 2017-01-08 14:33:01 Slp: Rule running on machine: SQLNODE02
(13) 2017-01-08 14:33:01 Slp: Rule evaluation done : Failed
(13) 2017-01-08 14:33:01 Slp: Rule evaluation message: This SQL Server edition does not support the installed number of cluster nodes. To continue, remove nodes and then complete cluster installation.

Initially, my thought was that it's because of Standard edition, as in SQL SERVER – Add failover cluster node fails with "number of cluster nodes supported for edition".

So, I asked them to check SELECT SERVERPROPERTY('Edition'), and it was Enterprise! We should note that they were using Enterprise edition, which doesn't have a limitation on the number of nodes in the cluster. The error was very strange, but when I looked at it line by line, I found something interesting, as below.

(13) 2017-01-08 14:33:01 Slp: Rule ‘Cluster_NumberOfNodes’ edition Invalid allows 0 cluster nodes.

From the above, it looks like SQL setup is not able to get the edition, and it's marked as "Invalid". I checked further and found the below message as well.

(04) 2017-01-08 14:33:01 Slp: Loading rule: AddNodeEditionBlock
(04) 2017-01-08 14:33:01 Slp: Creating rule target object: Microsoft.SqlServer.Configuration.SetupExtension.AddNodeEditionBlock
(04) 2017-01-08 14:33:01 Slp: Rule applied features : ALL
(04) 2017-01-08 14:33:01 Slp: ———————————————————————-
(04) 2017-01-08 14:33:01 Slp: Skipping rule AddNodeEditionBlock
(04) 2017-01-08 14:33:01 Slp: Rule will not be evaluated due to the following failed restriction(s):
(04) 2017-01-08 14:33:01 Slp: Condition “Is requested input setting is set to PID” did not pass as it returned false and true was expected.
Returning false as an unhandled exception was caught:
Microsoft.SqlServer.Chainer.Infrastructure.ChainerInvalidOperationException: The input ‘PID’ requested by the StringInputSettingExistsCondition is not of string type.

Based on my search on the internet, it looks like sqlboot.dll is used to get the edition using a checksum value. The DLL is located in the path stored in the “SharedCode” value under “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\120”, and “Checksum” is located under “HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.<InstanceName>\Setup”.

In my client’s case, the file was missing from C:\Program Files\Microsoft SQL Server\120\Shared. I am not sure how that happened; they did tell me that setup was incomplete on Node1 and that they had done a manual hack to fix it.
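If you want to inspect these registry values without leaving SSMS, the undocumented (and unsupported) xp_regread extended stored procedure can read them. This is just a sketch; the paths below are for SQL Server 2014 (version 120) as in this case, and <InstanceName> must match your instance:

```sql
-- Read the shared-code path for SQL Server 2014 (version 120).
EXEC master.dbo.xp_regread
     @rootkey    = N'HKEY_LOCAL_MACHINE',
     @key        = N'SOFTWARE\Microsoft\Microsoft SQL Server\120',
     @value_name = N'SharedCode';

-- Read the Checksum value for a named instance (replace <InstanceName>).
EXEC master.dbo.xp_regread
     @rootkey    = N'HKEY_LOCAL_MACHINE',
     @key        = N'SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.<InstanceName>\Setup',
     @value_name = N'Checksum';
```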

Have you ever seen such a weird error? Thanks to the internet for providing the internals.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – Add Failover Cluster Node Fails With Error – This SQL Server Edition Does Not Support the Installed Number of Cluster Nodes

Blog Challenge: I am watching you!

I decided to accept Grant’s (TheScaryDBA) challenge, found here: http://www.scarydba.com/2017/03/02/random-blog-post-challenge/ where we have to write a technical blog post that incorporates a certain image of a certain individual. There was someone quite mischievous on my SQL Server, and I needed to find out what … Continue reading

SQL SERVER – How to Apply Patch in AlwaysOn Availability Group Configuration?


This is one of the common questions asked via email to me: “How to apply a patch in AlwaysOn Availability Group configuration?” OR “What steps should we follow, and what should we take care of, while patching an availability replica?”


There might be many articles which provide the same details, but I want to keep it short and crisp.

Here are the steps, one by one, for an Availability Group with one secondary replica.

  1. Make sure that we have taken a good recent OS backup with system state (or a VMware snapshot with SQL services stopped), a good recent backup of all databases, and a successful completion of CHECKDB on the primary node. (This is not mandatory, but it helps avoid an “Ouch” moment.)
  2. From the node acting as the primary replica (SQL1), change the failover mode to manual
  3. Refresh the affected databases on the secondary replica (SQL2) and make sure that everything is green on the dashboard.
  4. Apply the patch (service pack or CU) on SQL2.
  5. Repeat the Windows Update and/or software updates until all available patches are applied. Do not move on with the patching steps until all patches and post patch reboot and configuration tasks are completed.
  6. Double check that patches have been applied, the cluster is healthy and AlwaysOn Availability Groups are functional.
  7. Make sure that synchronization state is SYNCHRONIZED.
  8. Fail over the availability group to the secondary replica (SQL2).
  9. Refresh the affected databases on the secondary replica (former primary = SQL1) until the synchronization state is SYNCHRONIZED.
  10. Apply the patch (service pack or CU) on SQL1.
  11. Repeat the Windows Update and/or software updates until all available patches are applied. Do not move on with the patching steps until all patches and post patch reboot and configuration tasks are completed.
  12. Double check that patches have been applied, the cluster is healthy and AlwaysOn Availability Groups are functional.
  13. Make sure that synchronization state is SYNCHRONIZED.
  14. Fail over the availability group to the primary node (back to SQL1).
  15. Change the failover mode back to Automatic (which we changed in step 2).
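The failover-mode changes, the synchronization check, and the manual failover in the steps above can be done with T-SQL along these lines. This is a sketch; the availability group name [AG1] and replica names 'SQL1'/'SQL2' are placeholders for your own:

```sql
-- Step 2: on the primary (SQL1), switch both replicas to manual failover.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQL1' WITH (FAILOVER_MODE = MANUAL);
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQL2' WITH (FAILOVER_MODE = MANUAL);

-- Steps 7/13: confirm every database is SYNCHRONIZED before failing over.
SELECT ag.name, ar.replica_server_name, drs.synchronization_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON drs.replica_id = ar.replica_id
JOIN sys.availability_groups   AS ag ON drs.group_id   = ag.group_id;

-- Step 8: run on the secondary (SQL2) to make it the new primary.
ALTER AVAILABILITY GROUP [AG1] FAILOVER;

-- Step 15: restore automatic failover once both replicas are patched.
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQL1' WITH (FAILOVER_MODE = AUTOMATIC);
ALTER AVAILABILITY GROUP [AG1]
    MODIFY REPLICA ON N'SQL2' WITH (FAILOVER_MODE = AUTOMATIC);
```

Note that automatic failover also requires synchronous-commit availability mode on both replicas.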

In case things do not go as planned, you have followed step 1, so you know what needs to be done.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on SQL SERVER – How to Apply Patch in AlwaysOn Availability Group Configuration?

Why to Use SQL Server Configuration Manager Over Services applet (services.msc)? – Interview Question of the Week #112


Question: Why to Use SQL Server Configuration Manager Over Services applet (services.msc)?

Answer: You might have heard this advice many times, but never got a complete answer to the “Why”. Most blogs would only tell you “how” to change the service account the right way.


“Why should we use the SQL Server Configuration Manager (SSCM) and not services.msc?” is the question I am trying to answer in this blog. A few interviewers might also ask this question to check a candidate’s skill.

Here are the reasons which I can think of.

  1. Password validation: When we change the password from services.msc, it saves the password without validation. So, if we give an incorrect password, service startup will fail with the standard error – Error 1069: The service did not start due to a logon failure.
  2. Propagation of permissions in the registry: When an account is changed, the permissions on the registry are also modified. If it is done from services.msc, then SQL Server startup might fail because the service account is not able to read the registry keys, which is one part of SQL Server service startup.
  3. Group membership: Changing the service account via SSCM adds the service account to the appropriate group, which provides the necessary permissions.
  4. Service restart: When changing the service account, SSCM stops and starts the service automatically. A password change doesn’t need a restart, but as mentioned earlier, it is validated, and an error will be thrown if the password is incorrect.
  5. Encryption: SSCM also takes care of updating the Windows local security store, which protects the service master key for the Database Engine.

Note that the Configuration Manager uses WMI to make changes, so all the points above also apply when we are using SMO or WMI to change the service account programmatically.
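As a quick way to verify which accounts the services currently run under (and whether a change made via SSCM took effect), you can query the sys.dm_server_services DMV, available from SQL Server 2008 R2 SP1 onward:

```sql
-- Service name, account, startup type, and state for the Database Engine,
-- SQL Server Agent, and Full-Text services of this instance.
SELECT servicename,
       service_account,
       startup_type_desc,
       status_desc
FROM sys.dm_server_services;
```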

Reference: Pinal Dave (http://blog.SQLAuthority.com)

First appeared on Why to Use SQL Server Configuration Manager Over Services applet (services.msc)? – Interview Question of the Week #112
