
Tales of when a Log Fails to Shrink in an Availability Group


I received a report that one of my servers had 7% free space on its log drive. Sounded like something fun to resolve. I checked on what was going on and found a log file that was 99% free and a hundred GB in size. Shrinking a log file is not a good practice, and I'm not advocating it by any means, because it's just going to grow again and your storage is there specifically to hold logs. But this situation was out of the ordinary and we needed the space.

The problem was, this log would not shrink. It was being extremely uncooperative. I took a backup, log backups, multiple shrink attempts, but it wouldn’t budge. The message returned was a big clue though.

The log for database 'dbname' cannot be shrunk until all secondaries have moved past the point where the log was added.

As you might have guessed, this server was a SQL Server 2012 instance and in an Always On Availability Group. The database in question could not shrink because it was participating in the AG.

It wasn’t an ideal fix, but by removing the database from the Availability Group, I was able to perform a log shrink to get the size to a more manageable amount. No, I did not truncate it to minimum size, I adjusted it to a reasonable amount based on its normal work. I didn’t want the log to just have to grow again. The shrink worked flawlessly, and with adequate drive space, I attempted to add the database back to the AG via the wizard.

The AG wizard refused to help. The database was encrypted, and the AG wizard will not let you add a database if it is encrypted. No explanation why; it just doesn't like that. You can add an encrypted database to an AG via script, though. You can even script the change from the wizard by using a non-encrypted database and then changing the database name in the scripted result. The resulting script is exactly what the AG wizard would do; the wizard just cannot execute it automatically.


ALTER AVAILABILITY GROUP AgName
ADD DATABASE DbName;
GO
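
If the secondaries no longer have a current copy of the database (it was removed from the AG and the log has since been shrunk and backed up), the usual manual workflow is to restore it on each secondary WITH NORECOVERY and then join it to the group. A minimal sketch of that secondary-side step, reusing the names above; the backup file paths are hypothetical:

-- On each secondary replica: restore the database without recovering it.
RESTORE DATABASE DbName FROM DISK = N'\\backupshare\DbName_full.bak' WITH NORECOVERY;
RESTORE LOG DbName FROM DISK = N'\\backupshare\DbName_log.trn' WITH NORECOVERY;
GO

-- Still on the secondary replica: join the restored database to the AG.
ALTER DATABASE DbName SET HADR AVAILABILITY GROUP = AgName;
GO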

With free space and an encrypted database safely back in my AG, I was off to new adventures!


Filed under: Availability Groups, SQL Server, Troubleshooting Tagged: Availability Groups, SQL Server, Troubleshooting

Azure SQL Database vs SQL Data Warehouse


I am sometimes asked to compare Azure SQL Database (SQL DB) to Azure SQL Data Warehouse (SQL DW). The most important thing to remember is that SQL DB is for OLTP (i.e. applications with individual updates, inserts, and deletes) and SQL DW is not, as it's strictly for OLAP (i.e. data warehouses). So if you're going to build an OLTP solution, you would choose SQL DB. However, both products can be used for building a data warehouse (OLAP). With that in mind, here is a list of the differences:

I have other blogs that cover SQL DB and SQL DW.

PostgreSQL – Learn Online in a Single Day – PostgreSQL Learning Path


PostgreSQL is considered to be one of the most advanced open source databases. It is very easy to learn and very easy to implement. Along with SQL Server, I have recently been focusing on MySQL and PostgreSQL. While working with these three well-proven relational databases, I have found that once you know one relational language, it is really easy to master another; you just have to know the basics, and the advanced concepts build on those foundations. In this blog post we will discuss the PostgreSQL Learning Path.

Earlier this year, I completed a series of five courses on Pluralsight about PostgreSQL. While building these courses, I kept the following details in mind:

  • Each concept should be easy to understand
  • Each example should feel very close to real-world scenarios
  • Each demonstration should be easy to execute

Well, that's it. I kept the above three rules in mind when I built the courses, and each course has been extremely well received. Here are the links to the five courses. I strongly suggest that you follow this learning path to learn PostgreSQL. However, you can select any course and any module and learn from that point. I have made sure that each topic can be learned independently of the others.

Please note that you will need valid access to Pluralsight to watch these courses. Pluralsight offers free trials.

PostgreSQL: Getting Started

PostgreSQL is commonly known as Postgres and is often referred to as the world's most advanced open source database. In this course, we will go over the basics of PostgreSQL. We will cover topics ranging from installation to writing basic queries and retrieving data from tables. We will also explore the logic of joins, and a few best practices.
Click to View Course

PostgreSQL: Introduction to SQL Queries

In this course, we will learn about various data types and their impact on performance and database design. We will learn about various table operations, schemas, keys, and constraints. We will also learn how to efficiently retrieve data and make modifications to data with insert, update, and delete operations.
Click to View Course

PostgreSQL: Advanced SQL Queries

In this course, we will discuss advanced queries for PostgreSQL. We will learn about functions and operators, type conversions, and transactions.
Click to View Course

PostgreSQL: Advanced Server Programming

If you’re a database developer looking to expand your skills and understanding of triggers, rules, and procedural language in PostgreSQL, this course is for you.
Click to View Course

PostgreSQL: Index Tuning and Performance Optimization

Data is critical to any application, and database performance goes hand-in-hand with it. In this course, you’ll get to see some ways to maximize database performance with PostgreSQL, covering indexes, best practices, and more.
Click to View Course

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on PostgreSQL – Learn Online in a Single Day – PostgreSQL Learning Path

Write-Only permissions


Yep, that’s right, you heard me. Write-Only not Read-Only. I was presenting SQL Server Security Basics at NTSSUG the other night and there was an interesting discussion on the idea of granting someone write permissions without corresponding read permissions.

So for example:

-- Setup code
CREATE LOGIN WriteOnlyUser WITH PASSWORD = 'WriteOnlyUser',CHECK_POLICY = OFF;
GO
USE AdventureWorks2014;
GO
CREATE USER WriteOnlyUser FROM LOGIN WriteOnlyUser;
GO
ALTER ROLE db_datawriter ADD MEMBER WriteOnlyUser;
GO

The user WriteOnlyUser now has permission to insert, update and delete from any table in AdventureWorks2014. They do not, however, have permission to read from any of the tables.

EXECUTE AS USER = 'WriteOnlyUser';
GO
INSERT INTO Person.PersonPhone (BusinessEntityID, PhoneNumber, PhoneNumberTypeID) 
	VALUES (1,'999-999-9999',1);
GO
SELECT * FROM Person.PersonPhone;
GO
REVERT;
GO


So they can INSERT without a problem, but can’t SELECT. They can also UPDATE and DELETE. Or can they?

EXECUTE AS USER = 'WriteOnlyUser';
GO
UPDATE Person.PersonPhone SET PhoneNumberTypeID = 3
	WHERE BusinessEntityId = 1
	  AND PhoneNumber = '999-999-9999';
GO
DELETE Person.PersonPhone
	WHERE BusinessEntityId = 1
	  AND PhoneNumber = '999-999-9999';
GO
REVERT;
GO


Now wait, why are they getting a read error when trying to UPDATE or DELETE? Because of the WHERE clause. The WHERE requires reading the data to see if a row meets the required conditions.

This, however, will work:

EXECUTE AS USER = 'WriteOnlyUser';
GO
UPDATE TOP (1) Person.PersonPhone SET PhoneNumberTypeID = 3;
GO
DELETE TOP (1) Person.PersonPhone;
GO
REVERT;
GO


So WriteOnlyUser can UPDATE or DELETE, but only if it doesn't actually have to look at any of the data, i.e. no WHERE clause. I don't know about you, but that rather felt like working without a net. Or maybe running with scissors.

Someone in the session suggested the interesting (possible) workaround of using the OUTPUT clause. In other words could WriteOnlyUser update (or delete) a bunch of rows and bypass needing read permissions by looking at the output created by the OUTPUT clause. As it happens someone went home, tried it, and sent me the results before I even got home that night. (Yes, you beat me Lee, but you have a much shorter drive home than I do. :p ) We are going to pretend they didn’t and try it out ourselves.

EXECUTE AS USER = 'WriteOnlyUser';
GO
CREATE TABLE #TempPersonPhone (
	BusinessEntityId INT,
	PhoneNumber varchar(50),
	PhoneNumberTypeId smallint,
	ModifiedDate datetime);
GO
UPDATE TOP (1) Person.PersonPhone
SET PhoneNumberTypeId = PhoneNumberTypeId
OUTPUT inserted.BusinessEntityID, inserted.PhoneNumber,
    inserted.PhoneNumberTypeID, inserted.ModifiedDate
    INTO #TempPersonPhone;
GO
SELECT * FROM #TempPersonPhone;
GO
REVERT;
GO


So not only could WriteOnlyUser not use the OUTPUT clause but when it tried nothing happened (no rows affected).

Fun experiment, but the real point here is that if you don't actually have read permission (SELECT) you can't read anything from the table.
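
A quick way to double-check the effective permissions is the built-in fn_my_permissions function. A minimal sketch, run while impersonating the user; expect INSERT, UPDATE, and DELETE in the list, but no SELECT:

EXECUTE AS USER = 'WriteOnlyUser';
GO
-- List the effective object-level permissions for the impersonated user.
SELECT permission_name
FROM fn_my_permissions('Person.PersonPhone', 'OBJECT')
ORDER BY permission_name;
GO
REVERT;
GO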


Filed under: Microsoft SQL Server, Security, SQLServerPedia Syndication Tagged: database permissions, microsoft sql server, security

Tricky TSQL: NOT IN (NULL)

We've done this before, but we can go one better this time. Let's take this step by step. NULL means "I don't know". It stands for an unknown value. Nothing can be equal to NULL. We simply can't say that 1 = NULL, or 'ABBA' = NULL, because we don't know what value NULL might possibly be … Continue reading Tricky TSQL: NOT IN (NULL)

SQL SERVER – Creating a Copy of Database in Azure SQL DB


Recently I undertook an interesting assignment for one of my clients, who pinged me for a performance tuning exercise. Since this is something I have been doing full time for the past couple of months, I got on the call immediately. Little did I know what I was about to get into. Since the problem was about SQL Server query performance, I thought it would be the usual routine. Let us learn how to create a copy of a database in Azure SQL DB.

On the initial Skype call, I learned it was an Azure SQL Database – which in this case didn't matter, because they had a badly written query and the optimizations are generally the same irrespective of where the database lives. Coming back to the context of this blog – the customer was troubleshooting this on the production database and requested that we work off a copy of the database.

The DBA was asking me how they could run a backup and restore process. Since this was Azure SQL DB, they were not sure of the process. I asked if they were aware of the Export/Import wizard. A database on Azure SQL DB also looks a little different in Management Studio.

The best part of working with Azure SQL Database is that we can use a T-SQL command to create a copy of any database. A typical query would look like this:

CREATE DATABASE [new_sqlauth] AS COPY OF [sqlauth-demo].[sqlauth-DemoDB]
GO

In this case I have taken a demo database and created an exact copy.
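
While the copy is in flight you can also check on its progress. A minimal sketch, assuming you connect to the master database of the destination logical server; sys.dm_database_copies is an Azure SQL Database DMV:

-- Run in the master database of the destination server.
SELECT database_id, start_date, modify_date, percent_complete, error_desc
FROM sys.dm_database_copies;
GO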

This was so seamless that I had that database to play with and test against for the duration of the troubleshooting. Once I was done with the exercise, in about a couple of hours, the team just dropped the database using the standard syntax.

-- Clean up script
DROP DATABASE [new_sqlauth]
GO

Out of curiosity, the DBA wanted to try the same thing on an on-premises SQL Server. This yields a syntax error, as shown below:

Msg 102, Level 15, State 1, Line 1
Incorrect syntax near ‘copy’.

Earlier, I used to say that on-premises SQL Server had many cool features which were evidently missing in Azure SQL Database. But I might need to revisit that statement, because the more I explore and work on Azure, the more interesting capabilities surprise me every single day.

Do let me know if you are working on Azure SQL DB. What have been your experiences? Tell me via your comments below.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Creating a Copy of Database in Azure SQL DB

Are SQL Saturdays Worth It?


This past weekend I was fortunate enough to be a part of Louisville's (for those local, the 'ville) SQL Saturday event held at Indiana Wesleyan. Most of you who end up on this site are probably familiar with it, but for those that aren't familiar with SQL Saturday events you can check out their site here.

Now to put on an event like this is nothing short of an incredible effort from volunteers, sponsors, speakers, and attendees. Being able to help co-organize the one here in Louisville has been a humbling yet gratifying experience. Let me see if I can break it down a different way for you, the reader, who may not have had the opportunity yet to volunteer or attend such an event.

Volunteers

You can see these people usually with matching shirts on and a lanyard with their name and a ribbon that only says “volunteer.” In the past when I’ve attended such events I knew people helped out to put something like this on, but never in my wildest dreams did I envision all that it took until I volunteered.

Volunteering is not for glitz, glamor, or glory. Instead volunteering is what helps the cogs in the wheel move to get the steam engine running down the track. It is the staple of helping afford the opportunity for free learning to attendees and colleagues in our field.

Many, many hours go into planning and organizing an event; if you attend one of these events, make sure you seek out a volunteer or organizer and say thank you for their time. They are doing this for free, on their own time, away from their families.

Mala Mahadevan (B|T), as a founding organizer of our event, I thank you for allowing me to be a part of it these past few years.

Sponsors

Over the years, SQL Saturday Louisville has been blessed with some great sponsors. For the previous two years, John Morehouse (B|T) and I have taken great pride in working with some stellar companies. Without them, we would not be able to do what we do which is concentrate on the attendees and helping people learn.

Our Gold sponsors this year were:


  1. EMC
  2. Farm Credit Mid-America
  3. Imperva
  4. Microsoft
  5. Republic Bank
  6. Pyramid Analytics

 

Our Silver and Bronze sponsors this year were:


  1. Idera
  2. PASS
  3. PureStorage
  4. Tek Systems
  5. Click-IT Staffing
  6. Homecare Homebase
  7. Datavail
  8. SQLSentry

A major thank you for all of their contributions and it is always a pleasure to work with all of you.

Speakers

It always amazes me how many speakers send in sessions to our event. These speakers are people from all over the U.S. who are willing to travel and give their time so attendees can learn. Getting to spend time with each of them is not always an easy task, but I am always thankful to catch up with many friends at the speaker dinner.

It was awesome to see the attendees interacting with the speakers asking their questions and getting insight into the variously presented topics. And, because of so many good sessions to choose from, there was a buzz in the air.

As is the case with the volunteers mentioned above, speakers also travel on their own dime, away from their families – a simple thank you goes a long way. Also, for these sessions, I do want to point out that feedback cards are provided; please, please, please take a moment and make sure you provide good, insightful feedback to the speakers. Each speaker uses this feedback to improve their sessions or as take-aways on what may or may not have worked. Yes, folks, these are important!

I won’t list every speaker we had; that is not the intent of this topic. But I will take a moment and say to each and every speaker who attended SQL Saturday Louisville 531 we thank you.

Attendees

Two words – THE PEOPLE. As I have stated, these last two years have been nothing short of amazing. Seeing light bulbs go off with attendees who are learning from some of the best, and having discussions with attendees, is why we do what we do.

When individuals come to us stating that it was their first time at the event and that they had no idea there is a local Louisville SQL user group, it opens the door to reaching more people in our tech community.

Steve Jones (B|T), who is part of my Fab Five, talks about Dreaming of SQL Saturday. If you have not had a chance to read his post, check it out. Attendees travel from quite a distance, which tells me people are eager to learn.

Conclusion

So, back to the question I opened with: are SQL Saturdays worth it? Considering what I know now versus what I knew then, the answer is yes. Personally, being a product of these types of events, I am living proof of what can grow from the SQL community.

Whether you volunteer, speak, sponsor, or attend, all of these make the wheel turn. It’s a team effort with a lot of hard work. So, next time you attend one of these events, please don’t take them for granted.

Here is to continued learning, as we move forward to grow this community!


SQL SERVER – Installation Fails With Error – A Constraint Violation Occurred


In my recent past, I have helped around 10 customers who had similar problems while installing a SQL Server cluster on Windows. So I thought it would be a nice idea to pen it down as a blog post so that it can help others in the future. In this blog post we will discuss the installation failure with the error "A constraint violation occurred".

The issue we saw was that SQL Server cluster setup would try to create a network name resource in Failover Cluster Manager and it would fail. Here is the message we would see in setup:

The cluster resource ‘SQL Server (SQLSQL)’ could not be brought online due to an error bringing the dependency resource ‘SQL Network Name (SAPSQL)’ online. Refer to the Cluster Events in the Failover Cluster Manager for more information.
Click ‘Retry’ to retry the failed action, or click ‘Cancel’ to cancel this action and continue setup.

When we looked at the event log, we saw the message below (event ID 1194):

Log Name: System
Source: Microsoft-Windows-FailoverClustering
Date: 20/06/2016 19:55:45
Event ID: 1194
Task Category: Network Name Resource
Level: Error
Keywords:
User: SYSTEM
Computer: NODENAME1.internal.sqlauthority.com
Description:
Cluster network name resource ‘SQL Network Name (SAPSQL)’ failed to create its associated computer object in domain ‘internal.sqlauthority.com’ during: Resource online.
The text for the associated error code is: A constraint violation occurred.
Please work with your domain administrator to ensure that:
– The cluster identity ‘WINCLUSTER$’ has Create Computer Objects permissions. By default all computer objects are created in the same container as the cluster identity ‘WINCLUSTER$’.
– The quota for computer objects has not been reached.
– If there is an existing computer object, verify the Cluster Identity ‘WINCLUSTER$’ has ‘Full Control’ permission to that computer object using the Active Directory Users and Computers tool.

Another client got "Access is denied" messages instead of "A constraint violation occurred" in the same event ID. My clients informed me that they had logged in as domain admins, so they felt an access-denied error should be impossible.

I explained to all of them that when a network name is created in a cluster, the cluster contacts the Active Directory (AD) domain controller (DC) via the Windows cluster network name computer account, also called the CNO (Cluster Name Object). So whatever errors we are seeing are possible because it is the CNO, not the logged-in domain admin account, that is used to create the computer object for SQL in AD.

To solve this problem, we logged into the domain controller machine and created the computer account SAPSQL (called the VCO – Virtual Computer Object), then gave the cluster name WINCLUSTER$ full control on that computer object. If we read the error message carefully, the solution is already listed there. We then clicked the Retry option in setup, and the setup continued and completed successfully.

Solution/Workaround:

Here are the detailed steps (generally done on a domain controller by domain admin):

  1. Start > Run > dsa.msc. This will bring up the Active Directory Users and Computers UI.
  2. Under the View menu, choose Advanced Features.
  3. If the SQL Virtual Server name is already created, then search for it; otherwise go to the appropriate OU and create the new computer object (the VCO) under it.
  4. Right click on the new object created and click Properties.
  5. On the Security tab, click Add. Click Object Types and make sure that Computers is selected, then click Ok.
  6. Type the name of the CNO and click Ok. Select the CNO and under Permissions click Allow for Full Control permissions.
  7. Disable the VCO by right clicking.

This is also known as pre-staging of the VCO.

Hope this helps someone save time and resolve the issue without waiting for someone else's assistance. Do let me know if you have ever encountered the same.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – Installation Fails With Error – A Constraint Violation Occurred


How to Drop Clustered Index on Primary Key Column? – Interview Question of the Week #084


Question: How to drop clustered index, which is created on primary key?

Answer: If you thought the answer was as simple as the following script, you are wrong.

DROP INDEX PK_Table1_Col1
ON Table1
GO

When you run the above script on Table1, where PK_Table1_Col1 is the clustered index, it will throw an error.

Msg 3723, Level 16, State 4, Line 26
An explicit DROP INDEX is not allowed on index ‘Table1.PK_Table1_Col1’. It is being used for PRIMARY KEY constraint enforcement.

Here are two blog posts which are related to the concept described in this blog post. I suggest you read them before you continue, as that will help you understand the subject of primary keys and clustered indexes in a bit more detail.

So the question still remains: how do we drop a clustered index on a primary key column?

The answer is very simple, but first we will go over the entire script, which will demonstrate that we have created a clustered index and primary key on the table.

First, let us create a table.

-- Create Table
CREATE TABLE Table1(
Col1 INT NOT NULL,
Col2 VARCHAR(100)
CONSTRAINT PK_Table1_Col1 PRIMARY KEY CLUSTERED (
Col1 ASC)
)
GO

Next, check whether the table has a primary key and a clustered index on the same column with the help of the following script.

-- Check the Name of Primary Key
SELECT name 
FROM sys.key_constraints  
WHERE type = 'PK'
        AND OBJECT_NAME(parent_object_id) = N'Table1'
GO
-- Check the Clustered Index 
SELECT OBJECT_NAME(object_id),name
FROM sys.indexes 
WHERE OBJECTPROPERTY(object_id, 'IsUserTable') = 1
        AND type_desc='CLUSTERED'
        AND OBJECT_NAME(object_id) = N'Table1'
GO

Now let us attempt to drop the clustered index with a DROP INDEX script, which will give an error.

-- Drop Clustered Index
DROP INDEX PK_Table1_Col1   
    ON Table1
GO

The script listed above will give us the following error.

Msg 3723, Level 16, State 4, Line 26
An explicit DROP INDEX is not allowed on index ‘Table1.PK_Table1_Col1’. It is being used for PRIMARY KEY constraint enforcement.

Now it is clear that we are not able to drop the clustered index because there is a primary key. If you want to drop the clustered index, you will have to do it with the help of the following script, where we drop the constraint on the same column.

-- Drop Constraint
ALTER TABLE Table1
DROP CONSTRAINT PK_Table1_Col1
GO

The above script will give us a success message.

Now let us run the following script and double-check that the table has neither a primary key nor a clustered index.

-- Check the Name of Primary Key
SELECT name 
FROM sys.key_constraints  
WHERE type = 'PK'
        AND OBJECT_NAME(parent_object_id) = N'Table1'
GO
-- Check the Clustered Index 
SELECT OBJECT_NAME(object_id),name
FROM sys.indexes 
WHERE OBJECTPROPERTY(object_id, 'IsUserTable') = 1
        AND type_desc='CLUSTERED'
        AND OBJECT_NAME(object_id) = N'Table1'
GO

You will notice that if we drop the primary key on the clustered index column, it automatically drops the clustered index as well.

Remember, it is not a good idea to leave a table without a clustered index or primary key, as they are very critical elements in an RDBMS for database integrity and performance.
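
If the real goal is to keep the primary key but move the clustered index to another column, a minimal sketch (continuing with the demo table; the new index name and column choice are only for illustration) would be to re-create the constraint as nonclustered first:

-- Re-create the primary key as a nonclustered constraint...
ALTER TABLE Table1
ADD CONSTRAINT PK_Table1_Col1 PRIMARY KEY NONCLUSTERED (Col1 ASC);
GO
-- ...which frees you to build the clustered index on a different column.
CREATE CLUSTERED INDEX IX_Table1_Col2 ON Table1 (Col2);
GO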

Now you can execute the following script to clean up the table which we have created for demonstration purpose.

-- Clean up
DROP TABLE Table1
GO

I am very sure that you will have more questions once you read this blog post and the two related posts. I suggest you leave a comment and I will address them in a future blog post.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on How to Drop Clustered Index on Primary Key Column? – Interview Question of the Week #084

SQL SERVER – A Timeout (30000 milliseconds) was Reached While Waiting for a Transaction Response from the MSSQLSERVER


Recently I was contacted by a client who reported a very strange error on their SQL Server machine. These consulting engagements sometimes get the best out of you when it comes to troubleshooting. They reported that they were seeing a timeout error. My question was whether it was a connection timeout or a query timeout, which I have explained in an earlier blog post.

They said that they were seeing the error below in the System event log, and during that time they were not able to connect to SQL Server.

Event ID: 7011
Message: A timeout (30000 milliseconds) was reached while waiting for a transaction response from the MSSQLSERVER service.

Once it happened, they were not able to stop the SQL Server service. I asked how they reproduce the error or hang situation, and strangely they said that it happens when they expand an Oracle linked server in SQL Server Management Studio!

I told them to reproduce the error. As soon as they expanded "Catalogs" under the linked server to Oracle, it was stuck in "expanding". Luckily, I was with them, and as soon as the hang was reproduced, I connected via the DAC connection. I was able to see the PREEMPTIVE_OS_GETPROCADDRESS wait for the SSMS query. Per my internet search, this wait occurs when loading a DLL. In this case, the wait time kept increasing continuously, so I asked them to kill the SQL process from Task Manager.
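
For reference, the check over the DAC was along these lines. A minimal sketch; the DMV columns are standard, and filtering out your own session is just to keep the output clean:

-- Run over the DAC (admin:SERVERNAME) to see what other sessions are waiting on.
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
GO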

As a next step, I wanted to know which DLL was causing the issue, so I captured a Process Monitor trace while reproducing it. Finally, we were able to nail down that sqlservr.exe was trying to find "OraClient11.Dll" but was not able to locate it.

It didn't take much time to conclude that the hang was caused by an incorrect value in the PATH variable for the Oracle DLLs used by the linked server. This is also explained here.

Solution

We found that the PATH variable did not contain C:\oracle11\product\11.2.0\client_1\BIN, which was the folder containing OraClient11.Dll. As soon as we added that location to the PATH variable, the issue was resolved.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on SQL SERVER – A Timeout (30000 milliseconds) was Reached While Waiting for a Transaction Response from the MSSQLSERVER

#0381 – SQL Server – Table design – Is it better to use NEWID or NEWSEQUENTIALID when defining the key as a UNIQUEIDENTIFIER?

In most cases, an INTEGER-based key on a table is sufficient. However, when a GUID is required, it is important to keep in mind that using NEWID() causes more fragmentation in the underlying data, resulting in poor system performance.
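
A minimal sketch of the two options; the table and column names are hypothetical, and note that NEWSEQUENTIALID() can only be used in a DEFAULT constraint, not called directly in a query:

-- Random GUID key: values insert all over the clustered index, causing fragmentation.
CREATE TABLE dbo.OrdersRandomKey (
    OrderID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_OrdersRandomKey_OrderID DEFAULT NEWID()
        CONSTRAINT PK_OrdersRandomKey PRIMARY KEY CLUSTERED,
    OrderDate DATETIME2 NOT NULL
);
GO

-- Sequential GUID key: values are ever-increasing per machine, so new rows
-- land at the end of the clustered index and fragmentation stays lower.
CREATE TABLE dbo.OrdersSequentialKey (
    OrderID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_OrdersSequentialKey_OrderID DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_OrdersSequentialKey PRIMARY KEY CLUSTERED,
    OrderDate DATETIME2 NOT NULL
);
GO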

Join me at DellWorld 2016 in Austin, TX


I will be attending DellWorld 2016 as an influencer/media/analyst participant. This means that I’ll get access to the regular sessions, plus special engagements with product teams to see what they’ve been working on recently and what they want to do in the future. I’ve attended a couple of Dell on-site events and am looking forward to talking to key customers and real-world, hands-on data professionals. Also, doesn’t everyone want to visit Austin as much as possible?

If you will be attending Dell World, let me know. I hope we can #selfie. Or just have a real conversation. Or we can get breakfast tacos.

PowerShell – SQL Server Paging of Memory Identification


In one of my recent consulting visits to a customer, there were deep performance-related problems. They were unclear about what was happening and what the actual problem was. But these are some of the challenges that I love to take head-on. In the quest to find the problem, I used a number of tools, and during that time I figured out that memory pressure was causing the problem. Let us learn how to identify when SQL Server memory is being paged out.

After the engagement was over, the DBA from the organization wrote to me to understand how this can be easily identified across the many servers in their infrastructure. He wanted something that could be run to determine whether SQL Server's memory was being paged out, which could be a possible cause of memory pressure. He wanted some guidance or a cheat sheet to play with.

This blog post and PowerShell script were a fallout of that engagement.

param (
    [string]$SqlServerName = "localhost"
)

Add-Type -Path "C:\Program Files\Microsoft SQL Server\130\SDK\Assemblies\Microsoft.SqlServer.Smo.dll"

$SqlServer = New-Object Microsoft.SqlServer.Management.Smo.Server($SqlServerName)

foreach ($LogArchiveNo in ($SqlServer.EnumErrorLogs() | Select-Object -ExpandProperty ArchiveNo)) {
    $SqlServer.ReadErrorLog($LogArchiveNo) |
        Where-Object {$_.Text -like "*process memory has been paged out*"}
} 

The output of this script lists the error log entries that contain the paged-out message, if any exist.

Why is this important?

If there is excessive memory pressure on SQL Server's memory allocations and memory gets paged out to disk, the performance impact can be large, because it adds I/O latency to memory access. It is a best practice to ensure that there is enough physical memory on the machine, and that SQL Server's memory settings do not overcommit memory, so that paging does not become excessive. If it does, re-evaluate the memory allocations and/or the available physical memory to relieve memory pressure on the current SQL Server instance.
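
For a quick spot check on a single instance, a T-SQL alternative is the sys.dm_os_process_memory DMV, which exposes low-memory flags alongside the working set figures. A minimal sketch:

-- The last two flags indicate whether the OS has signalled physical or virtual memory pressure.
SELECT physical_memory_in_use_kb,
       locked_page_allocations_kb,
       page_fault_count,
       memory_utilization_percentage,
       process_physical_memory_low,
       process_virtual_memory_low
FROM sys.dm_os_process_memory;
GO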

This shows that there has been some memory pressure on our SQL Server instance, and the evidence is available right from our log records. Have you ever used such simple scripts to figure out memory pressure on your servers? How did you use them? Let me know.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on PowerShell – SQL Server Paging of Memory Identification

Everything Old Is New Again: 5 IT Headaches That Never Go Away

Database Sharding – How to Identify a Shard Key?


I have written a number of posts in the past about working with sharded databases; for a starting point, the concepts can be read at Sharding or No Sharding of Database – Working on my Weekend Project. In a recent discussion at a user group, someone asked me about the rationale behind building a sharding key and what should be used to shard their database. The concept, though common, is not black-and-white as to what should be used as a sharding key, but there are guiding principles which can be applied. Let us learn about database sharding.

As the discussion matured, I thought to pen down some of the thoughts that were discussed. These can be used as guiding principles in general.

Identifying an Appropriate Shard Key

One of the most difficult tasks in designing a data model is to identify an appropriate shard key. Many of the modeling decisions depend on this choice, and once committed, you cannot easily change the key. Hence, getting this right at the initial design phase is critical.

A best practice is to choose the most granular level of detail for the shard key.

Consider a SaaS solution provider that offers a service to multiple companies, each of which has a division with numerous assets. Each asset may generate a large amount of data or be used as the pivot point for many database transactions. One data modeler may choose to shard based on company, another on division, and yet another on asset. According to the best practice, asset is a good starting point.

Consider whether any DML queries will traverse the shards

In an ideal data model, no DML actions traverse across shards. As this ideal is very unlikely, the goal is to keep such requirements to a minimum. Such requirements can add complexity to the Data Access layer, reduce the usefulness and availability of RDBMS semantics, and expose your solution to greater risk should a shard become unavailable.

Depending on the database queries, you can decide to have multiple, logical groupings of shards, each one capable of being sharded

A logical grouping of shards is referred to as a shard set. A shard set is a collection of entities and database objects that are identical across shards within the shard set. For instance, a logical data model may have distinct functional areas, such as Inventory, Sales, and Customers, each of which could be considered a shard set. Each shard set has a shard key, such as ProductID for inventory and CustomerID for both Sales and Customers. A less common alternative for the Sales shard set is a shard key based on SalesOrderID. The choice depends on whether cross-shardlet queries can be handled.

It is common to encounter a case where logical relationships exist among shard sets—a big consideration when defining appropriate boundaries for the functional areas. When a relationship exists, the application tier must compensate for cross-area transactions. In this example, the Sales shard set has a logical relationship with the Products shard set and a reference to ProductID. The Products shard set owns the metadata of the product. Of course, a reasonable option is to treat the Products table in the Sales shard set as a reference table. But this may not always be possible, because there can be a reference to Products in the Orders shard set and the Shipment/Delivery shard sets, etc. Think it through before you make a decision.

Finally, consider the data type of your shard key

The choice of shard key data type impacts database maintenance, troubleshooting, and resource consumption. The most efficient data type has an acceptable domain, is small, has a fixed storage size, and is well-suited for the processor. These factors tend to constrain the candidates for data types to integers (smallint, int, and bigint), char (4 -> 8), or binary (4 -> 8). Of these, bigint (Int64) is the best trade-off, but you can use smaller integer types if your business rules require.

The shard key must uniquely separate shardlets from one another. For example, if CustomerID is the shard key, then its value is unique among customers. In an entity that has child records of Customer, the shard key serves as part of the record's primary key.
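
As a minimal illustration of that last point (the table and column names here are hypothetical), the shard key leads the primary key of the child table, so every row belongs unambiguously to one shardlet:

-- Customers shard set: CustomerID is the shard key.
CREATE TABLE dbo.Customers (
    CustomerID   BIGINT        NOT NULL,
    CustomerName NVARCHAR(200) NOT NULL,
    CONSTRAINT PK_Customers PRIMARY KEY (CustomerID)
);
GO

-- Child records of Customer: the shard key is part of the primary key,
-- so the row can always be routed to (and stays within) its shardlet.
CREATE TABLE dbo.CustomerAddresses (
    CustomerID  BIGINT        NOT NULL,
    AddressID   INT           NOT NULL,
    AddressLine NVARCHAR(400) NOT NULL,
    CONSTRAINT PK_CustomerAddresses PRIMARY KEY (CustomerID, AddressID),
    CONSTRAINT FK_CustomerAddresses_Customers
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID)
);
GO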

I am sure some of these discussion points have brought insights that made for a great write-up for me too. Do let me know if there are areas that I missed in my considerations. I would love to learn from you about them too.

Reference: Pinal Dave (http://blog.sqlauthority.com)

First appeared on Database Sharding – How to Identify a Shard Key?


Temper, Temper


Possibly the single most influential event in my career happened over 25 years ago. I lost my temper at a client.

The client had requested a meeting. They had a new, complex, system they wanted implemented in the software we managed for them. The client was very very excited. I was not excited. I was annoyed. I (in my youthful experience) felt it was unwise, difficult, and all round a stupid idea. I walked in with a chip on my shoulder. I took the first opportunity to take offense. I tore the poor woman’s document up one side and down the other. I pointed out logical flaws, grammatical flaws and anything else I could find. I was truly unpleasant. And I walked out of the meeting not realizing I was in the slightest bit upset.

One of my co-workers pulled me aside as we walked out of the meeting saying “Wow, you really lost your temper there.”

My response: “Really?” I honestly hadn’t noticed. I’d lost my temper, brought someone almost to tears, and hadn’t noticed.

I should probably throw in a little background at this point. I was working for a moderately large company that was growing quickly. ~20 DBAs and maybe 500-600 people total. I’d been working with this client for about two years, and it was our largest client. Even then I was considered one of the better DBA/developers (say top 2-3 of the 20 of us).

The client (of course) asked that I immediately be fired. And I don’t blame them in the slightest. I got very, very, outrageously lucky and the CIO told them I was the only one capable of working their account. That they would start looking for someone else but I was going to have to continue to work on it for at least 6 months. I was then called in and told off.

I worked my rear off that next 6 months. Anything that client wanted was done almost instantly. The process she was so excited about? I did it. I did everything I could to make that client happy. At the end of that 6 months the client and I had worked things out (mostly). She was happy with me, and I’d learned a very important lesson.

In fact I learned two. First of all, I had a problem. I've always known that I have a hard time dealing with and understanding others. Over the years the database stuff has come relatively easy, I mean I enjoy it after all. But I've had to work hard over the years to get better at dealing with people. I've learned to remain calm when working with people. To encourage and to teach. I even enjoy it. In fact I've gotten good enough that I occasionally get compliments on my reviews, but I've never forgotten that this is not my strong suit.

The next thing I learned is that control is not enough. You can learn self control all you want but it occasionally fails you. I had to learn how to release negative emotions and thoughts. That’s where communication comes in.

A few weeks ago I found out I was meeting with a vendor to explain a process I was working on. I wasn’t overly pleased by this. I expected trouble and I could very easily have gotten it. I found my heart beat getting faster and I’m sure my blood pressure was going up too. And this was 48 hours prior to the meeting! If I’d stamped down on those feelings, bottled them up and let them fester they could very easily have come boiling out during this meeting. I would have been a problem. Regardless of what the other person did. So what did I do?

I talked to a couple of my co-workers. They’ve dealt with the same people and could sympathize. They were nothing more than a sympathetic ear and let me vent some steam to someone who felt the same way.

I called my wife and talked to her. I love my wife, and she loves me. She didn’t understand but she didn’t have to. She was a loving non-judgmental ear. She let me vent without adding anything.

I wrote a blog post on the subject (blog post recursion!) This let me solidify how I felt. You’d be amazed how much writing something down so that someone else can understand it forces you to think through it and get all the feelings straight.

And I started to calm down. Over that 48 hours I got a hold of myself. I made sure my process was bullet proof. I researched the few pieces that I wasn’t 100% comfortable with.

The day of the meeting came. I presented my idea. The vendor complimented me! Everything was as smooth as you could wish. The members of my team were happy. The vendor was happy. The developers who had called the meeting were thrilled. But oh how easily it could have gone wrong. And it would have been my fault. Again.

So the thought of the day? Temper temper!


Filed under: SQLServerPedia Syndication, Uncategorized

Using The RelativePath And Query Options With Web.Contents() In Power Query And Power BI M Code


The Web.Contents() function in M is the key to getting data from web pages and web services, and has a number of useful – but badly documented – options that make it easier to construct urls for your web service calls.

Consider the following url:

https://data.gov.uk/api/3/action/package_search?q=cows

It is a call to the metadata api (documentation here) for https://data.gov.uk/, the UK government’s open data portal, and returns a JSON document listing all the datasets found for a search on the keyword “cows”. You can make this call using Web.Contents() quite easily like so:

Web.Contents(
 "https://data.gov.uk/api/3/action/package_search?q=cows"
)

However, instead of having one long string for your url (which will probably need to be constructed in a separate step) you can use the RelativePath and Query options with Web.Contents(). They are given in the second parameter of the function and passed through as fields in a record. RelativePath adds some extra text to the base url given in the first parameter for the function, while Query allows you to add query parameters to the url, and is itself a record.

So, taking the example above, if the base url for the api is https://data.gov.uk/api we can use these options like so:

Web.Contents(
 "https://data.gov.uk/api", 
 [
  RelativePath="3/action/package_search", 
  Query=[q="cows"]
 ]
)

RelativePath is just the string “3/action/package_search” and is added to the base url. There is just one query parameter “q”, the search query, and the search term is “cows”, so Query takes a record with one field: [q=”cows”]. If you want to specify multiple query parameters you just need to add more fields to the Query record; for example:

Web.Contents(
	"https://data.gov.uk/api", 
	[
		RelativePath="3/action/package_search", 
		Query=
		[
			q="cows", 
			rows="20"
		]
	]
)

This generates a call that returns 20 results, rather than the default 10:

https://data.gov.uk/api/3/action/package_search?q=cows&rows=20

Obviously these options make it easier to construct urls and the code is much clearer, but there are also other benefits to using these options which I’ll cover in another blog post soon.

Note: at the time of writing there is a bug that causes the value given in RelativePath to be appended twice when the Web.Page() function is also used. Hopefully this will be fixed soon.


DBCC CLONEDATABASE in SQL Server 2014


In SQL Server 2014 SP2, an interesting new DBCC command was included: DBCC CLONEDATABASE.

This command creates a "clone" of a specified user database (it is not supported for the system databases) that contains all the objects and statistics of that database. Hmm, could be useful, but how does it work? Let's have a look.

First create a database: –

USE [master];
GO

CREATE DATABASE [Test];
GO

And then create a test table: –

USE [TEST];
GO

CREATE TABLE dbo.TestTable 
(PK_ID	   INT IDENTITY(1,1),
 ColA	   VARCHAR(10),
 ColB	   VARCHAR(10),
 ColC	   VARCHAR(10),
 CreatedDate DATE,
 CONSTRAINT [PK_ID] PRIMARY KEY (PK_ID));
GO

Insert some data and then make sure stats have been generated: –

INSERT INTO dbo.TestTable
(ColA,ColB,ColC,CreatedDate)
VALUES
(REPLICATE('A',10),REPLICATE('B',10),REPLICATE('C',10),GETUTCDATE());
GO 100000

EXEC sp_updatestats;
GO

Now we can run the DBCC CLONEDATABASE command: –

DBCC CLONEDATABASE ('test','testclone');
GO

And verify that a read only copy of the database has been generated: –


So, let’s have a look at the data in the new database: –

SELECT TOP 1000 [PK_ID]
      ,[ColA]
      ,[ColB]
      ,[ColC]
      ,[CreatedDate]
  FROM [testclone].[dbo].[TestTable];
GO

No data! Ok, so let’s have a look at the stats: –

USE [testclone];
GO

EXEC sp_spaceused 'dbo.testtable';
GO


DBCC SHOW_STATISTICS(N'testtable',PK_ID);
GO

There are the stats; SQL thinks that there are 1000 rows in the table. Pretty cool.

What we've ended up with is a read-only database with no data but with the objects and stats of the source database.

The first thing I'd be doing is backing that clone up and restoring it to my local instance. Want to see how code will execute against production but don't want to touch that prod environment? Here's your answer.
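
A minimal sketch of that workflow; the file paths are hypothetical, and the logical file names in the MOVE clauses should be confirmed first with RESTORE FILELISTONLY:

-- On the production instance: back up the clone.
BACKUP DATABASE [testclone]
TO DISK = N'C:\Backups\testclone.bak'
WITH INIT;
GO

-- On the local instance: restore it, moving the files to local paths.
RESTORE DATABASE [testclone]
FROM DISK = N'C:\Backups\testclone.bak'
WITH MOVE N'Test' TO N'C:\SQLData\testclone.mdf',
     MOVE N'Test_log' TO N'C:\SQLData\testclone_log.ldf',
     RECOVERY;
GO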

@Microsoft, can we have this for other versions of SQL please?


Making Azure PowerShell Scripts Work in PowerShell and As RunBooks


Runbooks are very powerful tools which allow you to automate PowerShell commands that need to be run at different times. One of the problems that I've run across when dealing with Azure runbooks is that there is no way to use the same script on-premises during testing and as the deployed runbook. This is because of the way that authentication has to be handled when setting up a runbook.

The best way to handle authentication within a runbook is to store the authentication within the Azure Automation configuration as a stored credential. The problem here is that you can't use this credential while developing your runbook in the normal PowerShell ISE.

One option which I’ve come up with is a little bit of TRY/CATCH logic that you can put into the PowerShell Script, which you’ll find below.

In this sample code we use a variable named $cred to pass authentication to the Add-AzureRmAccount (and the Add-AzureAccount) cmdlet. If that variable has no value in it, then we try to call Get-AutomationPSCredential. If the script is being run within the Azure runbook environment, this will succeed and we'll get a credential into the $cred variable. If not, the call will fail and the runner will be prompted for their Azure credentials through a PowerShell dialog box. Whatever credentials are entered are saved into the $cred variable.

When we get to the add-AzureRmAccount and/or the add-AzureAccount cmdlets we pass in the value from $cred into the -Credential input parameter.

The reason that I've wrapped the Get-AutomationPSCredential cmdlet in the IF block that I have is so that the script can be run over and over again in PowerShell without asking you to authenticate every time. I left the calls to Add-AzureRmAccount and Add-AzureAccount inside the IF block so that they would only be called on the first run, as there's no point in calling Add-AzureRmAccount every time unless we are authenticating for the first time.

if (!$cred) {
    try {
        # Inside Azure Automation: pull the stored credential by name.
        [PSCredential] $cred = Get-AutomationPSCredential -Name $AzureAccount
    }
    catch {
        # Outside Azure Automation: fall back to an interactive prompt.
        Write-Warning "Unable to get runbook account. Authenticate manually."
        [PSCredential] $cred = Get-Credential -Message "Enter Azure Portal Creds"

        if (!$cred) {
            Write-Warning "Credentials were not provided. Exiting."
            return
        }
    }

    try {
        Add-AzureRmAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar
    }
    catch {
        Clear-Variable cred
        Write-Warning "Unable to authenticate to AzureRM using the provided credentials"
        Write-Warning $ErrorVar
        return
    }

    try {
        Add-AzureAccount -Credential $cred -InformationVariable InfoVar -ErrorVariable ErrorVar
    }
    catch {
        Clear-Variable cred
        Write-Warning "Unable to authenticate to AzureSM using the provided credentials"
        Write-Warning $ErrorVar
        return
    }
}

You'll be seeing this come up shortly as part of a larger PowerShell script that I'll be releasing on GitHub to make life easier for some of us in Azure.

Denny

The post Making Azure PowerShell Scripts Work in PowerShell and As RunBooks appeared first on SQL Server with Mr. Denny.

SQL Saturdays and Why We Have Them


There has been some recent controversy over SQL Saturdays after PASS HQ announced some new changes. The changes introduced a new 600 mile radius for SQL Saturdays on the same day, an expansion from the previous 400 mile rule as well as reducing the PASS sponsorship from $500 per event to $250 per event and only for those that are in financial need. Originally the new rules also imposed a 600 mile rule and extended that to the Saturday before and after the event. The community was quick to point out how that would have impacted previous events and PASS HQ has removed the week before and after restriction.

With the popularity of the SQL Saturdays in the US, some event locations are finding it difficult to find sponsors for the event. I can understand this issue. I have helped organize numerous SQL Saturdays ranging from 100 attendees to upwards of 700. In the early days, there were fewer events and it seemed like every sponsor wanted to be at each one. That enabled organizers to be able to offer speakers and organizers event shirts, host a speaker dinner, and provide various other swag for the event. As popularity of the events grew, sponsors realized they couldn’t keep sending people to each one and that their budgets could only stretch so far. Organizers have started feeling the impact and are having to start looking elsewhere for sponsors as well as looking at their budgets.

Something that current and new organizers should consider is that all that extra stuff is just stuff. The main purpose of a SQL Saturday is to provide training to your local area, grow your local user group, and help grow new speakers. As a speaker at nearly 40 SQL Saturdays, I have always enjoyed the speaker dinner as a way of networking and hanging out with other speakers, but I would gladly pay for my own dinner at those events; the event organizer should not feel any pressure to feed the speakers the night before. If they would like to organize a place for us all to meet for dinner, that would be fantastic. Speaker shirts have been a big deal to many speakers, especially for new speakers starting out. If the budget allows for these, then great; if not, then do not feel obligated to provide a shirt. Many organizers feel they should get the speakers a gift; that is not necessary either. A hand-written thank you note is more meaningful than a shirt, coffee mug, or Amazon gift card.

Smaller events can be held on a very small budget, especially if you can secure the venue for free.

I organize and run SQL Saturday Columbus GA and have helped organize SQL Saturday Atlanta since 2011. Atlanta is a great market and we have been very fortunate with sponsors year after year; in Columbus GA, things are very much different. Sponsorship dollars are much more difficult to come by in Columbus, and as a result we keep things more "grass roots". In Columbus GA, our event provides:

  • A venue – Free
  • Lanyards and name badge holders – $100
  • A nice variety of sessions thanks to our amazing speakers – Free
  • Lunch to our volunteers and the attendees opt to pay for lunch – $400
  • Coffee and donuts in the morning – $300
  • Speaker dinner – $500
  • Random snacks and drinks – $300

I hope more organizers will realize that they can put on a great event on a very small budget. SQL Saturday Columbus GA is fortunate to have a free venue, to attract around 100 attendees year over year, and to have the support of the Atlanta MDF. Our event costs just over $1,500 and also generates a slight surplus that funds our user group for the year.

In 2017, one approach I plan to take with sponsors is a $100 Community sponsor level. This will be for local businesses that want to help support the IT initiative without spending a lot of money: they can show support and get their name out there, but without the opt-in list or a table at the event, which they may not need or care for. If I can sell 5 to 10 sponsorships at that level, it will cover the majority of my event cost.

About me:

  • Attended my first SQL Saturday in 2010
  • Started speaking in 2011
  • Spoken at 38 SQL Saturdays
  • Helped organize 12 SQL Saturdays
  • Chapter Leader – Columbus GA SQL Server Users Group
  • PASS Regional Mentor

 

