
SQL 2016 Perform Volume Maintenance Task


This is a welcome addition to the install for SQL Server 2016 that I had not seen mentioned previously (you can find lots of posts about the new tempdb options, for example).
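For context, this setup option grants the service account the Perform Volume Maintenance Task privilege, which is what enables instant file initialization for data files. If you want to verify it afterwards, a minimal check might look like this (assuming SQL Server 2016 SP1 or later, where the DMV column was added):

-- Shows whether the engine service account actually has instant file initialization.
SELECT servicename,
       service_account,
       instant_file_initialization_enabled
FROM sys.dm_server_services;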

The post SQL 2016 Perform Volume Maintenance Task appeared first on Thomas LaRock.

If you liked this post then consider subscribing to the IS [NOT] NULL newsletter: http://thomaslarock.com/is-not-null-newsletter/


Blog Roll - List of blog post and script contribution

Greetings of the day!!


I have been getting frequent emails asking why I do not write as often as I used to. I know I have published very few blog posts since 2013; the reason is that personal commitments kept me too busy to take time off and blog. However, I tried to keep the momentum going by posting a couple of blogs every year, but those were on another blog space - my employer's blog at http://www.pythian.com/blog/author/goswami/ . I have also contributed some scripts to the TechNet Script Gallery. Below is the list of the blog posts I wrote last year and my Script Gallery contributions.

I am determined to post at least a couple of entries each month now, and I would appreciate your support as always!!


Links to the blog posts
Date | Title | URL
Jan 5 2016 | SQL Server 2016 – AlwaysOn Basic Availability Group | http://www.pythian.com/blog/alwayson-basic-availability-group-sql-server-2016/
July 31 2015 | SQL Server and OS Error 1117, Error 9001, Error 823 | http://www.pythian.com/blog/sql-server-and-os-error-1117-error-9001-error-823/
July 9 2015 | Reading System Logs on SQL Server | http://www.pythian.com/blog/reading-system-logs-on-sql-server/
July 20 2015 | Reading System Logs on SQL Server - Part 2 | http://www.pythian.com/blog/reading-system-logs-sql-server-part-2/
Sep 30 2015 | Import / Export Multiple SSIS Packages | http://www.pythian.com/blog/importexport-multiple-ssis-packages/
July 28 2014 | Unexpected Shutdown caused by ASR | http://www.pythian.com/blog/unexpected-shutdown-caused-by-asr/
Jan 23 2014 | Script to Collect Database Information Quickly | http://www.pythian.com/blog/script-to-collect-database-information-quickly/


Links to the TechNet Script Gallery

Date | Title | URL
Nov 22 2015 | Script to Collect ALL Database Information with VLF Count | https://gallery.technet.microsoft.com/Script-to-Collect-ALL-82664699
May 3 2013 | Collect Cluster Information using TSQL | https://gallery.technet.microsoft.com/scriptcenter/COLLECT-CLUSTER-INFORMATION-9a75e4a7
Mar 9 2013 | Configure Auto Growth in Fixed MB | https://gallery.technet.microsoft.com/scriptcenter/Configure-AutoGrowth-in-f4f3d7d1
Jun 26 2015 | Script to Monitor Database Mirroring Health | https://gallery.technet.microsoft.com/scriptcenter/Script-to-monitor-database-0f35c5d7
Jun 26 2015 | Script to Monitor AlwaysOn Health | https://gallery.technet.microsoft.com/scriptcenter/TSQL-for-AlwaysOn-Health-6aae827d

SQL (Orphaned) User Without a Login: HowTo Create a Login For The User

Whenever I restore a database from a customer in my development environment I have the issue of orphaned users: users in the database have no corresponding login on the server/instance.
I found this nice answer on SO that uses the sp_change_users_login stored procedure for this.
You can use sp_change_users_login to create a login for users already in the database.


USE databasename                      -- The database I recently attached
EXEC sp_change_users_login 'Report'   -- Display orphaned users
EXEC sp_change_users_login 'Auto_Fix', 'UserName', NULL, 'Password'
You get the UserName(s) from the sproc when you run it with @Action='Report'.
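It is worth noting that sp_change_users_login is marked as deprecated; on current versions the same per-user fix can be done with ALTER USER, something along these lines (user and login names are placeholders):

-- Remap the orphaned database user to the existing server login of the same name.
USE databasename;
ALTER USER [UserName] WITH LOGIN = [UserName];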

Ad-hoc Changes

TSQL Tuesday #74: Be The Change

Each month, on the first Tuesday of the month, the announcement for the blog party T-SQL Tuesday comes out. Those that are interested then post their blogs, on the subject selected, on the second Tuesday of the month. If you've never heard of T-SQL Tuesday (which would surprise me), it's a blog party started by Adam Machanic (b/t) over 6 years ago. Robert Davis (b/t), our host this month, has chosen the subject Be The Change - specifically, data changes.

Now, I like to take the road a little less traveled on these sometimes and as it happens I made a mistake the other day that started me thinking of an appropriate subject. Rather than talk about ways to make data changes I want to talk about the processes around making ad-hoc changes to a system.

What started me thinking about this? We were working on a problem in production. Some code was running slowly and I suspected a bad query plan. I ran sp_recompile on what I suspected was the problem stored procedure. If you've never seen this procedure before, it marks a stored procedure (or all procedures associated with a table) to be recompiled on its next execution.
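For reference, the call itself is a one-liner (the procedure name below is just a placeholder):

-- Marks the (hypothetical) procedure for recompilation on its next execution.
EXEC sp_recompile N'dbo.SuspectProcedure';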

So what did I do wrong? When I ran this piece of code I didn’t do something very important. I didn’t check with the customer before running it. Possibly not a big deal in all shops but it’s part of our SOP (standard operating procedure).

And what does that have to do with data? Nothing. But it did get me thinking about making ad-hoc changes to a system. In every system I’ve ever worked on there were at least some ad-hoc changes necessary. Usually because of a bug in the code (that was hopefully fixed shortly), sometimes it’s necessary to replace a piece of code that hasn’t been written yet, etc. Either way you are running a piece of ad-hoc code against the database. Note: ad-hoc in this case being code that is not written, formalized and saved into the application code.

What’s the big deal about ad-hoc changes?

  • They frequently aren't tested as carefully as application code. In fact I'll go so far as to say they are almost never tested that well.
  • They aren’t as carefully monitored as application code that is going into production.
  • They are far more prone to accidents. Running extra code by mistake, forgetting a where clause, etc.

 
Here’s what I want you to think about. How can you protect yourself and your systems when running ad-hoc changes? Here are some ideas.

  • Take some form of backup in case you need to perform an operational recovery.
  • Test your code in a non-production environment before running it in production. This one isn’t always possible but still something to keep in mind.
  • Make sure that this is the ONLY code in your window, or that you are protected by a RETURN or SET NOEXEC ON at the top of your script. I have this put in place by default on new query windows. This protects you from running too much code by accident.
  • Make a habit of checking what instance you are connected to before running any ad-hoc code. Running code meant for a model or test environment in production can be a very scary thing.
  • There is nothing wrong with checking with a co-worker before running your code. Code-reviews are awesome!
  • Make sure that your client is aware of what you are doing. This could be the developers of the application, it could be the business itself. Doing something behind the scenes is NEVER a good idea.
  • Transactions. There is some good and bad here. If you have to run a big update during a busy time of day then you absolutely cannot use a transaction - you'll potentially start blocking everyone else. So transactions are good, but handle with care (see the sketch after this list).
  • Most importantly, follow your SOP! If your company says fill out this form before making a change then make sure that form is filled out. Like them or not these processes are almost always there for a good reason. It might simply be to document what’s happened. Either way if you make a habit of ignoring this stuff you’ll probably be looking for a new job.
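As a rough illustration of the transaction point above (for the cases where holding a short transaction is acceptable), here is a minimal sketch - the table, filter and expected row count are all hypothetical - of wrapping an ad-hoc update so you can sanity-check the row count before committing:

-- Hypothetical table, filter and expected row count: verify the damage before
-- making it permanent.
DECLARE @affected int;

BEGIN TRANSACTION;

UPDATE dbo.Orders
SET    Status = 'Cancelled'
WHERE  OrderId = 12345;

SET @affected = @@ROWCOUNT;       -- capture immediately after the statement

IF @affected = 1
    COMMIT TRANSACTION;           -- exactly the one row we expected
ELSE
    ROLLBACK TRANSACTION;         -- anything else: back out and investigate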

Filed under: Microsoft SQL Server, SQLServerPedia Syndication, T-SQL Tuesday Tagged: ad-hoc queries, microsoft sql server, T-SQL Tuesday

3 Reasons to Attend SQL Saturday Austin on Jan 30th


The Austin SQL Server User Group will host its third SQL Saturday on Saturday, January 30th.

SQL Saturday Austin on January 30th, 2016

SQLSaturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Admittance to this event is free ($15 for lunch); all costs are covered by donations and sponsorships. This all-day training event includes multiple tracks of SQL Server training from professional trainers, consultants, MCMs, Microsoft employees and MVPs.

Here are three reasons why I am excited to attend the SQL Saturday in Austin.

PreCons

While the SQL Saturday itself is free, there are also two separate all-day classes on Friday, January 29th that are dirt cheap compared to the cost of attending these classes at your local training center.

Have you ever wanted to learn how to make SQL Server go faster?  In a single day, Robert Davis will show you Performance Tuning like a Boss.

Have you wondered how you can keep your data highly available when your servers go bump in the night?  Ryan Adams will be teaching a class on Creating a High Availability and Disaster Recovery Plan.  Having a solid recovery plan can make you a Rockstar DBA and also help keep your company in business.

Sessions

In Austin we are blessed to have some of the best teachers come to town to share their knowledge.  We will have Conor Cunningham from the SQL Server product team talk about the new features coming in SQL Server 2016.  We will have several MVPs and MCMs sharing their knowledge.  If you want to learn about SQL Server there is not a better venue to do so than a local SQL Saturday.

Networking

Are you the only DBA or data professional working at your company?  If not, are you interested in meeting people who are as passionate as you are about data? If so, SQL Saturday is a great place to meet and network with some of the best data professionals.  I will never forget my first SQL Saturday. I found some vendors that had tools that made my job easier.  I also built some friendships that have helped me throughout my career.

The post 3 Reasons to Attend SQL Saturday Austin on Jan 30th appeared first on JohnSterrett.com SQL Server Consulting.

Pause SQL Server Service Before Restarting


By pausing the SQL Server service before restarting the instance we allow end users to continue their work uninterrupted and we also stop any new connections to the instance.

The post Pause SQL Server Service Before Restarting appeared first on Thomas LaRock.

If you liked this post then consider subscribing to the IS [NOT] NULL newsletter: http://thomaslarock.com/is-not-null-newsletter/

Descriptive Statistics In Power BI/M With Table.Profile()


As Buck Woody notes here, when you are exploring a new data set it can be useful to calculate some basic descriptive statistics. One new M function that appeared in Power BI recently can help you do this: Table.Profile(). This function takes a value of type table and returns a table that displays, for each column in the original table, the minimum, maximum, average, standard deviation, count of values, count of null values and count of distinct values (but no mode or median?). So, given the following table:

…the Table.Profile() function returns the following table:

Of course you could create something similar yourself fairly easily (as I have done for a customer in the past), and it’s not as sophisticated as the Quick Insights feature, but it’s handy to have a single function that does all this.

You could even use it on all of the tables in a SQL Server database. The Sql.Database() function returns a table containing all of the tables and views in a database, like so:

image

All you need to do to use Table.Profile() on all these tables is to add a new custom column that calls this function for every value in the Data column:

image

Then finally expand the new custom column and you’ll see the stats for every column in every table:

image

Here’s the code:

let
    Source = Sql.Database("localhost", "adventure works dw"),
    #"Added Custom" = Table.AddColumn(Source, "Profile", 
      each Table.Profile([Data])),
    #"Expanded Profile" = Table.ExpandTableColumn(#"Added Custom", 
      "Profile", 
      {"Column", "Min", "Max", "Average", "StandardDeviation", "Count", "NullCount", "DistinctCount"}, 
      {"Column", "Min", "Max", "Average", "StandardDeviation", "Count", "NullCount", "DistinctCount"})
in
    #"Expanded Profile"

My Work Autobiography


8 years at D2L and counting

Some years ago, a friend of mine told me I should check out the company he worked for. There was a position that was focused solely on SQL Server. At the time I didn’t think of myself as a database developer, I was a software developer with a knack for SQL. But I applied and it wasn’t long before I signed on.

It’s been over eight years and I still work for D2L, a vendor best known for its Learning Management System educational software.

I get to show up to a different job every day. The variety is amazing. Officially though I’ve only had positions with four distinct teams.

Job 1: In House Consultant

When I started at D2L my position was Senior Software Developer, but really I just wanted to be known as the database guy. The first couple of years were about learning SQL Server and building a reputation. A number of things helped with my reputation at D2L.

  • A friend of mine, Richard, left D2L leaving a sort of gap behind. Richard was known by many as the developer to talk to for answers. Those people began wondering who to talk to and that was an opportunity for me. I tried to fill those shoes, at least for database issues.
  • Firefighting. Unfortunately, putting out a fire is more visible than preventing one in the first place. But I had enough opportunities to do both.
  • The SQL Server technical community. You guys had my back. If I didn’t know an answer immediately, I could find out. You guys were a tremendous resource. See 3 Problem Solving Resources That Make You Look Like A Genius.

Eventually I felt some sort of obligation to give back to the community I belonged to and so I started this blog.

As D2L grew, so did the separation of duties. As a vendor, developers retained a lot of the control and responsibilities typically associated with DBAs. For example, at a software vendor, DBAs don't add indexes to the product, but developers do. We also have a large say in deployment, performance, scalability and concurrency, and of course we also accept a large share of any default blame.

So I had several years of fantastic on-the-job training facing new scalability challenges gradually. And as time went on, I worked on more preventative and proactive efforts.

Job 2: Business Intelligence

Then I chose to work with an analytics team. Everyone was talking about “Big Data” which was the new buzzword and I was excited about the opportunity to learn how to do it right.

BigData

It was a project based in part on Kimball’s approach to data warehousing. I worked with a great team and faced a number of challenges.

My reputation as the database guy still meant that I was getting asked to tackle problems on other teams. But I didn’t see them as interruptions. Eventually those “distractions” just showed me that I missed the world of relational data. So a year later, I changed jobs again.

Job 3: Project Argon*

So I joined a team called Argon. Our job was to overhaul the way we deliver and deploy our software. It was exciting and we enjoyed a lot of successes. One friend Scott MacLellan talks a lot more about what we did on his own blog. For example “Deploys Becoming Boring”

For my part I had fun writing

  • A tool that asserted schema alignment for any given database for some expected version of our product.
  • A tool that could efficiently cascade deletes along defined foreign keys in batches (giving up atomicity for that privilege).

I still found myself working as an internal consultant. Still helping out with production issues and still having a blast doing it.

Fun fact, Argon is the third project in my career with a noble gas for a codename, the other two being Neon and Xenon

Job 4: Samurai Team

This is where it gets good. All that internal consulting I’ve been doing? That’s my full-time job now.

A couple months ago I joined a brand new team with an initial mandate to “make our product stable”. We’re given the autonomy to determine what that means and how best to carry it out. I’m focused on the database side of that and I’m having a great time.

It’s still mainly technical work, but anything that increases software stability is in scope.

  • If internal training is needed, we can provide that. That’s in scope.
  • If increased Devops is needed (blurring the lines or increasing collaboration between devs and DBAs) we do that too.

What’s fun and what’s effective don’t often overlap, but at the moment they do.

Continued Blog Writing!

And I get to write and draw about it as I have been for over five years! Speaking of which, I got some news on January 1st: my MVP award.


Problem removing files from TempDB


I recently ran into an interesting problem while attempting to remove approximately half of the TempDB data files configured in a testing environment. As you might expect, there are various SQL Server tasks that a DBA performs only infrequently, and this is a good example of one of them. Most environments are misconfigured with too few TempDB data files, or files that are wrongly sized, usually resulting in the classic allocation page contention problem (which is explained by this excellent SQLskills article “The Accidental DBA (Day 27 of 30): Troubleshooting: Tempdb Contention“), but this particular environment had been temporarily over-provisioned with disks and TempDB data files (for testing purposes).

Bonus fact: SQL Server 2016 attempts to automatically address the tempdb data file allocation page contention problem by defaulting to 8 TempDB data files (or fewer if the number of cores is smaller). This can be overridden during installation through the GUI, or by using the /SQLTEMPDBFILECOUNT switch when performing a command line installation. Further optimisations have been implemented, such as the adoption of uniform extent allocations by default and auto-growing all files simultaneously - both behaviours would formerly have required turning on trace flags 1117 and 1118. Several other improvements have been made to the performance of the TempDB database, which you can read about in this (currently preview) documentation.

 

So the plan was to remove each TempDB file one by one, restart the instance (if required) and decommission those spare disks so that they can be returned and reallocated elsewhere in the environment. In order to remove each data file we would:

  1. Empty each file at a time (using DBCC SHRINKFILE).
  2. Remove each file upon empty.

Now there was a time (prior to SQL Server 2005) when we were told to tread very, very carefully when shrinking TempDB files, since doing so could result in corruption. This perception remains with some of the longer-serving SQL Server professionals (I question myself frequently), but if we take a look at the KB307487 article “How to shrink the tempdb database in SQL Server” we can see that it is now safe to do so - although (as the Knowledge Base article states) certain scenarios can cause the shrink operation to fail.

So I ran the following code for each TempDB data file:

DBCC SHRINKFILE (tempdb_15, EMPTYFILE)
go
ALTER DATABASE [tempdb] REMOVE FILE [tempdb_15]
GO

The code succeeded for the first few files, right up to TempDB data file 12, where I hit the following error:

DBCC SHRINKFILE: Page 14:56 could not be moved
because it is a work table page.
Msg 2555, Level 16, State 1, Line 1
Cannot move all contents of file "tempdb_12" to
other places to complete the emptyfile operation.
DBCC execution completed. If DBCC printed error
messages, contact your system administrator.
Msg 5042, Level 16, State 1, Line 1
The file 'tempdb_12' cannot be removed
because it is not empty.

As odd as this error was (especially since the SQL instance was not currently “in use”), I decided to bounce it, but after restarting the instance I was again greeted with the same error message! After some very quick Google-Fu I came across an old MSDN Forum question “DBCC SHRINKFILE: Page 4:11283400 could not be moved because it is a work table page.” I ruled out some of the responses but came across this reply by Mike Rose:

“I realize this is an old thread but I have found that in most cases work tables are related to Query Plans.

Try issuing the following commands and the shrinking the tempdb:

DBCC FREESYSTEMCACHE (‘ALL’)

DBCC FREEPROCCACHE

there will be some performance hit for this as SQL will have to recreate its Query Plans, but it should allow you to shrink you TEMPDB.

Hope this helps,

M Rose”

 
Now my initial thoughts were that this response was clearly nonsense since I had just restarted the SQL Service for the instance (thereby flushing the SQL caches), but hey what the hell, why not give it a whirl….

…and success! It worked.

Apart from the flushing of the caches, the outcome was even stranger to me for another reason. TempDB is “recreated” at startup*1, and I would have expected that fact alone to have fixed the problem, but from the behaviour I experienced, something had persisted across the instance restart.
*1 you may want to read this interesting post by Jonathan Kehayias “Does tempdb Get Recreated From model at Startup?

Also of interest to me was whether running both FREEPROCCACHE and FREESYSTEMCACHE was overkill, so when the opportunity arose I first attempted to clear only the system cache specific to TempDB through:

DBCC FREESYSTEMCACHE ('tempdb')

…and then tried clearing temporary tables and table variables through:

DBCC FREESYSTEMCACHE ('Temporary Tables & Table Variables')

…and then by using both together. Sadly these did not seem to work.

On my final opportunity I tried clearing the procedure cache (DBCC FREEPROCCACHE) and after running this a few times it appeared to solve the problem and I was able to remove the TempDB file without error.
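Pulling the pieces above together, the sequence that eventually worked for me looked like this (tempdb_12 is just the example file from above, and flushing the plan cache is not something to run casually on a busy production instance):

-- Flush the plan cache first (the work table page blocking the shrink appeared
-- to be tied to cached plans), then empty and remove the file.
DBCC FREEPROCCACHE;
GO
DBCC SHRINKFILE (tempdb_12, EMPTYFILE);
GO
ALTER DATABASE [tempdb] REMOVE FILE [tempdb_12];
GO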

 


Clearing system caches on a production SQL Server instance is never a good thing and should be avoided, but at least the next time I run into this problem I have what appears to be a workaround, and the requirement to remove TempDB data files should be a rare occurrence for most of us anyway!

I confess that I am left with a few unanswered questions, and clearly what I experienced makes me doubt the results somewhat (and my sanity), and there are further tests that I would like to perform at some point. So if you run into this problem yourself I encourage you to try this (disclaimer: your fault if you destroy your SQL Server ;) ), and I would be delighted if you leave feedback on what works for you (and what doesn’t)!


Filed under: SQL, SQLServerPedia Syndication, Storage

Latest post on my Handy IoT Toolkit is released!


I’ve started to update my IoT Toolkit blog post series.

You can get the latest post from here, which gives you some ideas on communication for your virtual blended team, and some pointers towards handy Visio stencils that might help. You can also navigate my other IoT posts by going to the IoT Toolkit menu that I’m trying to keep updated.


Creating a cluster without Domain Admin permissions


If you’ve ever watched a presentation where someone sets up a cluster, you’ve probably noticed that it goes pretty smoothly. The reason for this is that the account the presenter uses is a domain administrator.

But what about the real world?

In the real world, unless you are a systems administrator you probably won’t be a Domain Admin when creating your cluster. There are a couple of ways to get the cluster set up.

The first way is to be granted the permission to create computer objects within the domain. This is the easiest option. If this isn’t possible, then you have the second option available to you.

Pre-staging the objects is option #2. This option requires that a member of the systems administration team create the computer objects for the cluster (and any clustered resources like Availability Groups and Failover Clustered Instances). You also need to have the Domain Admin disable those accounts. This step is critical because if the computer accounts are left enabled, the Failover Cluster Manager won’t be able to use the computer account, and neither will the cluster when it comes to creating the computer objects for the FCIs and/or the AGs.

You’ll also need to configure the computer object for the actual cluster to be manageable by the user who is configuring the cluster. For the other computer accounts (for the FCIs and the AGs), you need to set up the cluster’s computer account so it can manage those other computer accounts.

Denny

The post Creating a cluster without Domain Admin permissions appeared first on SQL Server with Mr. Denny.

T-SQL Code Coverage in SSDT using the SSDT Dev Pack


Code Coverage

What is code coverage?

When you write some code and then test it, how sure are you that you have tested the whole thing? Code coverage gives you an idea of how well tested a bit of code is.

If you have lots of branches in your code (not something I am advocating) it is important to make sure you test it all so we can use code coverage to get an idea of how much of a particular piece of code has been tested (or not).

Code coverage works with SQL Server (i.e. T-SQL) because SQL Server provides a pretty good interface for tracking executed statements (extended events), so we can tell which statements have been called. So if we take this example stored procedure:


create procedure a_procedure (@a_value int)
as
begin

    if 1 = 1
    begin
        select 1;
    end

    if @a_value = 9
    begin
        select 9;   -- placeholder statement: an empty begin/end block is not valid T-SQL
    end

    select a, case when @a_value = 1 then 'a' else 'b' end from table_name;

end

If we execute this stored procedure we can monitor and show a) how many statements there are in it and b) which statements have been called, but we can't see which branches of the case statement were actually evaluated. If it were a compiled language like C#, where we have a profiler that can alter the assembly etc., then we could find out exactly what was called, but I personally think knowing which statements are called is way better than having no knowledge of what level of code coverage we have.

How do you use code coverage

I use it as a way to explore what parts of the code have low test coverage as it means that I need to be more careful about making changes in those areas.

I mostly work with legacy applications rather than new ones (my favorite thing is debugging), so I often end up looking at big pieces of T-SQL without any tests, which have an air of spaghetti code about them (not always, just mostly!). Code coverage is a good way to make sure that, as I write tests, I cover all the different things the code is supposed to be doing.

How do you not use code coverage

Don't set a requirement that the code must have X % of covered code.

Why

Each piece of code is unique; sometimes you don't really need to test it and sometimes you need to make sure a statement is called at least 20 different ways - code coverage is a guidance and advice tool, not a cut-and-dried sort of thing.

How do you measure code coverage in SQL Server?

This brings me neatly to the point of all this - you can either do it manually or you can use my new version of the SSDT Dev Pack (groan):

If you grab it from:

https://visualstudiogallery.msdn.microsoft.com/435e7238-0e64-4667-8980-5...

In SSDT, if you go to Tools -> SSDT Dev Pack -> Code Coverage, you end up with this lovely little window:

the ssdt code coverage window

If you click on "Start Capture" it will ask to connect to SQL Server, go ahead and put in the server details and choose the database you want to monitor.

When you have done that, go and run your test code - this could be a manual script, a tSQLt.Run or even your application - just do whatever you want to see how much of your database is covered.

When you have finished, click "Stop Capture". The dev pack will then enumerate your entire solution for procedures and functions, work out how many statements there are, and then parse the results of the extended events trace to find which of the statements in the SSDT projects have been covered. You then get this nice little tree view which shows the cumulative amount of code coverage (statements that were executed as part of the test):

the ssdt code coverage window

In this case we can see that there is one project and one procedure and we have around 62% code coverage (pretty good for a SQL Server project!).

You get the usual things, like double-clicking on an item takes you to the document it exists in, but more interestingly we can show the code coverage in the code itself. If you enable "Tools -> SSDT Dev Pack -> Display code coverage in Documents" and then click on the procedure in a document window you get:

procedure in ssdt showing covered statements

which is pretty cool if you ask me! What this shows is the statements that have been covered and we can see here that the statement wrapped in the "if 1 = 2" was not called which is lucky (otherwise it would be a pretty poor demo).

Things to note

This uses extended events so you need to have the right permissions to create them (you should probably be using localdb or a local dev instance anyway).

You will need to manually delete the extended events trace files periodically; they are written to the SQL log directory and are called CoveCoderage*.xel (and .xem on SQL 2008 R2).

If you change the document then I don't know if the offsets will match up so I stop showing the covered statements until you re-run the trace.

If you need any help with any of this give me a shout :)

Ed

Tip # 9 – Automatically Shrinking Your Database


Top 10 Tips for SQL Server Performance and Resiliency

This article is part 9 in a series on the top 10 most common mistakes that I have seen impact SQL Server Performance and Resiliency. This post is not all-inclusive.

Most common mistake #9: Automatically Shrinking Your Database

This is a topic that has been written about frequently, and most often I try not to re-hash what many people have already blogged about.  However, as often as I see this, I would be remiss if I did not add auto shrink to the list.

Often you will see IT professionals approaching their tasks from different angles.  Consider if you were a systems admin and you knew you needed some additional storage on a server: you might send a request to the storage admin asking for an additional 50 gigs, or whatever amount you need.  As a database professional, you would be wise to include not only the size of storage that you need but also the performance specifications that you require.  As DBAs, we need to understand that SQL Server management may not always translate well to other types of systems management.  Granted, this should be no surprise; it is understood that we do not all approach things the same way, but where this comes into play is the understanding that we all have different backgrounds.  We became DBAs via different career paths.

If you are new to being a Database Administrator, or the primary focus of your job is not to be a DBA, you may see benefits in shrinking a database automatically.  If the database shrinks by itself, it might be considered self-management; however, there is a problem with doing this.

When you shrink a data file, SQL Server recovers all the unused pages and, during the process, gives that space back to the OS so it can be used somewhere else.  The downstream effect is that your indexes become fragmented.  This can be demonstrated in a simple test.

I have a database in my lab based on the Chicago Crime Stats.  I have been doing a lot of testing in the database with an automated indexing script that has me inserting and deleting a large number of rows at different times.  Over time this database has become rather large for my small lab; it is time to shrink it down to a more manageable size.  The first thing to do is check the status of my indexes.

This is a simple query that will return all the indexes in the database with their fragmentation levels.

SELECT db_name() as [database],
      Object_Name(ps.object_id) as [table],
      i.name as Index_Name,
      round(avg_fragmentation_in_percent, 0) as Frag
FROM sys.dm_db_index_physical_stats(db_id(), null, null, NULL, NULL) ps
            Join sys.indexes i on ps.Object_ID = i.object_ID and ps.index_id = i.index_id

 

The results look like this:

image1

 

More or less the indexes are looking good; there is not a lot of fragmentation except in the one table (that is a discussion for a later topic). What happens if I shrink the whole database, including not only the log but also the data file?

 

Use the following T-SQL:

DBCC ShrinkDatabase ([ChicagoCrimeStats])

Rerunning the index fragmentation script, I now receive these results:

image2

 

If I have queries that use the IDX_Crimes_Frag_XCORD_Clustered index, there is a real good chance the performance on that query is going to degrade.

There are times when you may need to shrink a file, for example after a large delete of records, or after archiving much of the data out of the database.  These sorts of operations remove data, leaving your database with a lot of free space.  This free space can be reclaimed by using the DBCC SHRINKFILE or DBCC SHRINKDATABASE T-SQL commands; however, be aware that you should re-index after those statements are run.

It is not a bad thing to shrink a database as long as you do it in a controlled manner with proper maintenance afterwards.
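As a rough sketch of that controlled approach (the file and table names here are hypothetical, not taken from the ChicagoCrimeStats example), the shrink would be followed immediately by rebuilding the affected indexes:

-- Hypothetical names: reclaim free space from one data file,
-- then repair the fragmentation the shrink just introduced.
DBCC SHRINKFILE (ChicagoCrimeStats_Data, 10240);   -- target size in MB
GO
ALTER INDEX ALL ON dbo.Crimes REBUILD;
GO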

Top 10 Tips for SQL Server Performance and Resiliency

  1. Improper Backups
  2. Improper Security
  3. Improper Maintenance
  4. Not having a Baseline
  5. SQL Server Max Memory
  6. Change History
  7. Disaster Recovery Plans
  8. TempDB

What is RESULT SETS?


I was reading some code the other day and it included the statement RESULT SETS. I’d never seen it before so it seemed worth a quick look. I’ll warn you in advance this is somewhat of a BOL blog post. I’m basically taking what you can find in BOL and repeating it back with a little bit of interpretation. I try not to do this type of post often but I haven’t used this particular option before and haven’t found a great use for it yet. So consider this a blog post to point out a new option :)

First of all it’s part of the EXECUTE command.

Quick definition. A result set is the output of a query. It could result in a one row, one column output or a 100+ column, million+ row output. Either way that’s a result set. Note: you can have multiple result sets from a single object (stored procedure, function etc) call.

There are three options.

  • RESULT SETS UNDEFINED – This is the default and means that you don’t know what the result set will be.
  • RESULT SETS NONE – You expect (and require) that there will be no result set(s) returned.
  • RESULT SETS ( <result_sets_definition> ) – There will be result set(s) returned and you are going to specify the definition. The column names within the definition(s) can act as aliases for the column names.

 
So what use is this? Well primarily it would protect your code from changes in the code that it’s calling. Specifically if you aren’t getting what you expected then throw an error. Every now and again you have code that you would rather fail than be wrong. This will help with that.

And here is the obligatory demo :)

-- Create a procedure to test on
CREATE PROCEDURE ResultSetsExample AS
SELECT database_id, name
FROM sys.databases
GO
-- All of these have almost identical output
EXEC ResultSetsExample
EXEC ResultSetsExample WITH RESULT SETS UNDEFINED -- Default option

-- The output of this last one has the column 
-- names aliased to [db_id] and [db_name]
EXEC ResultSetsExample WITH 
	RESULT SETS (([db_id] int, [db_name] varchar(100)))

But what happens when you change the output of the stored procedure?

ALTER PROCEDURE ResultSetsExample AS
SELECT database_id, name, owner_sid
FROM sys.databases
GO
EXEC ResultSetsExample WITH 
	RESULT SETS (([db_id] int, [db_name] varchar(100)))

Now we get an error.

Msg 11537, Level 16, State 1, Procedure ResultSetsExample, Line 13
EXECUTE statement failed because its WITH RESULT SETS clause specified 2 column(s) for result set number 1, but the statement sent 3 column(s) at run time.

Last but not least if you run this:

EXEC ResultSetsExample WITH RESULT SETS NONE

Because the SP call actually returns an output you get this error:

Msg 11535, Level 16, State 1, Procedure ResultSetsExample, Line 11
EXECUTE statement failed because its WITH RESULT SETS clause specified 0 result set(s), and the statement tried to send more result sets than this.


Filed under: Microsoft SQL Server, SQLServerPedia Syndication, T-SQL Tagged: microsoft sql server, T-SQL

Change Availability Group Endpoint IP


I had someone email me and ask how they could change the IP address on their Availability Group Endpoint.  It’s no surprise that IPs need to be changed from time to time due to certain circumstances, but what you might be wondering is where during the creation of your AG did you assign the endpoint an IP.  If you used the GUI to create your AG then the answer is that you never assigned your endpoint an IP.  So why do you even care?

The one thing I challenge folks to think about when I teach my HA/DR precons and AG sessions is the types and amount of traffic across the entire ecosystem.  An Availability Group is going to have cluster heartbeat traffic, user traffic, SQL replica traffic, and depending on your storage you might have traffic for that as well.  Your network is just like plumbing.  You can only fit so much water through a 1-inch pipe at a time.  The same goes for network connections, and that’s a lot of traffic to put over a single NIC port.  However, you can separate all of those traffic types, and changing the IP of the endpoint lets you give the AG data replication traffic its own network pipe.

Every replica in the AG has its own endpoint, so you will need a new IP for each one.  Go assign the IPs to a NIC in each server.  Now we need to change the endpoint on each replica to use its new IP.  The good news here is that no outage is required to do this.  Here is the code to change your endpoint; you just need to edit the endpoint name, the port (read the caution below), and the IP.

ALTER ENDPOINT [MyEndpoint]
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5023, LISTENER_IP = (10.x.x.x))
    FOR DATA_MIRRORING (ROLE = ALL, AUTHENTICATION = WINDOWS NEGOTIATE, ENCRYPTION = REQUIRED ALGORITHM AES)
GO
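If you want to confirm the change, or see what each endpoint was using beforehand, something along these lines (a quick sketch against the endpoint catalog views) shows the mirroring endpoint's name, state, port and listener IP on the replica you run it on:

-- Current database mirroring (AG) endpoint configuration on this replica.
SELECT dme.name,
       dme.state_desc,
       te.port,
       te.ip_address
FROM sys.database_mirroring_endpoints AS dme
JOIN sys.tcp_endpoints AS te
    ON dme.endpoint_id = te.endpoint_id;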

Caution!

You MUST use the same port you are using today. If you decide to change the port then the URL will also need to be updated. You can read this post on how to do that.


PowerShell- Monitoring Multiple Services On Multiple Servers Using WMI Class -Win32_Service


The requirement is to check only those services whose startup mode is set to Auto and that are currently stopped. In my previous post I used the Get-Service cmdlet, which does not expose this information, so here I'm querying the Win32_Service WMI class instead. This class has StartMode and State properties.

Function Get-ServiceStatusReport  
{  
param(  
[String]$ComputerList,[String[]]$includeService,[String]$To,[String]$From,[string]$SMTPMail  
)  
$script:list = $ComputerList   
$ServiceFileName= "c:\ServiceFileName.htm"  
New-Item -ItemType file $ServiceFilename -Force  
# Function to write the HTML Header to the file  
Function writeHtmlHeader  
{  
param($fileName)  
$date = ( get-date ).ToString('yyyy/MM/dd')  
Add-Content $fileName "<html>"  
Add-Content $fileName "<head>"  
Add-Content $fileName "<meta http-equiv='Content-Type' content='text/html; charset=iso-8859-1'>"  
Add-Content $fileName '<title>Service Status Report </title>'  
add-content $fileName '<STYLE TYPE="text/css">'  
add-content $fileName  "<!--"  
add-content $fileName  "td {"  
add-content $fileName  "font-family: Tahoma;"  
add-content $fileName  "font-size: 11px;"  
add-content $fileName  "border-top: 1px solid #999999;"  
add-content $fileName  "border-right: 1px solid #999999;"  
add-content $fileName  "border-bottom: 1px solid #999999;"  
add-content $fileName  "border-left: 1px solid #999999;"  
add-content $fileName  "padding-top: 0px;"  
add-content $fileName  "padding-right: 0px;"  
add-content $fileName  "padding-bottom: 0px;"  
add-content $fileName  "padding-left: 0px;"  
add-content $fileName  "}"  
add-content $fileName  "body {"  
add-content $fileName  "margin-left: 5px;"  
add-content $fileName  "margin-top: 5px;"  
add-content $fileName  "margin-right: 0px;"  
add-content $fileName  "margin-bottom: 10px;"  
add-content $fileName  ""  
add-content $fileName  "table {"  
add-content $fileName  "border: thin solid #000000;"  
add-content $fileName  "}"  
add-content $fileName  "-->"  
add-content $fileName  "</style>"  
Add-Content $fileName "</head>"  
Add-Content $fileName "<body>"  
 
add-content $fileName  "<table width='100%'>"  
add-content $fileName  "<tr bgcolor='#CCCCCC'>"  
add-content $fileName  "<td colspan='4' height='25' align='center'>"  
add-content $fileName  "<font face='tahoma' color='#003399' size='4'><strong>Service Status Report - $date</strong></font>"  
add-content $fileName  "</td>"  
add-content $fileName  "</tr>"  
add-content $fileName  "</table>"  
 
}  
 
# Function to write the HTML Header to the file  
Function writeTableHeader  
{  
param($fileName)  
 
Add-Content $fileName "<tr bgcolor=#CCCCCC>"  
Add-Content $fileName "<td width='10%' align='center'>ServerName</td>"  
Add-Content $fileName "<td width='50%' align='center'>Service Name</td>"  
Add-Content $fileName "<td width='10%' align='center'>status</td>"  
Add-Content $fileName "</tr>"  
}  
 
Function writeHtmlFooter  
{  
param($fileName)  
 
Add-Content $fileName "</body>"  
Add-Content $fileName "</html>"  
}  
 
Function writeDiskInfo  
{  
param($filename,$Servername,$name,$Status)  
if( $status -eq "Stopped")  
{  
 Add-Content $fileName "<tr>"  
 Add-Content $fileName "<td bgcolor='#FF0000' align=left ><b>$servername</td>"  
 Add-Content $fileName "<td bgcolor='#FF0000' align=left ><b>$name</td>"  
 Add-Content $fileName "<td bgcolor='#FF0000' align=left ><b>$Status</td>"  
 Add-Content $fileName "</tr>"  
}  
else  
{  
Add-Content $fileName "<tr>"  
 Add-Content $fileName "<td >$servername</td>"  
 Add-Content $fileName "<td >$name</td>"  
 Add-Content $fileName "<td >$Status</td>"  
Add-Content $fileName "</tr>"  
}  
 
}  
 
writeHtmlHeader $ServiceFileName  
 Add-Content $ServiceFileName "<table width='100%'><tbody>"  
 Add-Content $ServiceFileName "<tr bgcolor='#CCCCCC'>"  
 Add-Content $ServiceFileName "<td width='100%' align='center' colSpan=3><font face='tahoma' color='#003399' size='2'><strong> Service Details</strong></font></td>"  
 Add-Content $ServiceFileName "</tr>"  
 
 writeTableHeader $ServiceFileName  
 
#Change value of the following parameter as needed  
 
$InlcudeArray=@()  
 
#List of programs to exclude  
#$InlcudeArray = $inlcudeService  
 
Foreach($ServerName in (Get-Content $script:list))  
{  
$service = Get-WmiObject Win32_Service -ComputerName $servername | Where-Object { $_.StartMode -eq 'Auto' -and $_.State -eq 'Stopped' }  
if ($service -ne $NULL)  
{  
foreach ($item in $service)  
 {  
 #$item.DisplayName  
 Foreach($include in $includeService)   
     {                         
 write-host $include  
 if(($item.name).Contains($include) -eq $TRUE)  
    {  
    Write-Host  $servername $item.name $item.Status   
    writeDiskInfo $ServiceFileName $servername $item.name $item.Status   
    }  
    }  
 }  
}  
}  
 
Add-Content $ServiceFileName "</table>"   
 
writeHtmlFooter $ServiceFileName  
 
function Validate-IsEmail ([string]$Email)  
{  
 
                return $Email -match "^(?("")("".+?""@)|(([0-9a-zA-Z]((\.(?!\.))|[-!#\$%&'\*\+/=\?\^`\{\}\|~\w])*)(?<=[0-9a-zA-Z])@))(?(\[)(\[(\d{1,3}\.){3}\d{1,3}\])|(([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,6}))$"  
}  
 
Function sendEmail    
{   
param($from,$to,$subject,$smtphost,$htmlFileName)    
[string]$receipients="$to"  
$body = Get-Content $htmlFileName   
$body = New-Object System.Net.Mail.MailMessage $from, $receipients, $subject, $body   
$body.isBodyhtml = $true  
$smtpServer = $MailServer  
$smtp = new-object Net.Mail.SmtpClient($smtphost)  
$validfrom= Validate-IsEmail $from  
if($validfrom -eq $TRUE)  
{  
$validTo= Validate-IsEmail $to  
if($validTo -eq $TRUE)  
{  
$smtp.Send($body)  
write-output "Email Sent!!"  
 
}  
}  
else  
{  
write-output "Invalid entries, Try again!!"  
}  
}  
 
$date = ( get-date ).ToString('yyyy/MM/dd')  
 
sendEmail -from $From -to $to -subject "Service Status - $Date" -smtphost $SMTPMail -htmlfilename $ServiceFilename  
 
}

The Function Get-ServiceStatusReport contains five parameters

  1. ComputerList – path to a text file containing the list of servers
  2. includeService – names of the services to check, separated by commas
  3. To – valid recipient email ID
  4. From – valid sender email ID
  5. SMTPMail – SMTP mail server address

Function call –

Get-ServiceStatusReport -ComputerList C:\server.txt -includeService "MySQL","MpsSvc","W32Time" -To pjayaram@app.com -From pjayaram@app.com -SMTPMail app01.app.com

OR

Get-ServiceStatusReport -ComputerList C:\server.txt -includeService MySQL,MpsSvc,W32Time -To pjayaram@app.com -From pjayaram@app.com -SMTPMail app01.app.com

 

 



Interpret SQL Transaction Log using sys.fn_dblog

Ever wondered how to read the transaction log for a database? There is an undocumented SQL function, sys.fn_dblog, which may help you read the T-Log (except for truncated transaction details). We can use this function effectively for point-in-time recovery at an LSN level.

First, let's look at the typical output of the function. I have run it against the AdventureWorks2012 DB:

select * from sys.fn_dblog(NULL, NULL)

Note: the two parameters for sys.fn_dblog are StartLSN and EndLSN, in case you want to see the operations within a specific LSN range. The default, NULL, NULL, will read the entire T-Log.


There you go: you can see a Current LSN column, an Operation column, a Transaction ID, a Previous LSN column and so on. I am not going to discuss all the columns in detail, but we will see the use of some of them in our exercise today.

As you can see in the Operation column, there are operations like INSERT (LOP_INSERT_ROWS), Begin Transaction (LOP_BEGIN_XACT), Checkpoint End (LOP_END_CKPT) and so on. Let's carry out an example and see how we can interpret these details.

Let's create a table and see what happens in the log:

CREATE TABLE Test (a int)
GO
select [Current LSN], Operation, Context, [Transaction ID], [Previous LSN], [Transaction Name], AllocUnitName, [Begin Time] from sys.fn_dblog(NULL, NULL) ORDER BY [Transaction ID]



As you can see, there is a BEGIN TRAN event (LOP_BEGIN_XACT) with the corresponding Transaction Name column showing "CREATE TABLE". This is followed by a set of INSERT and UPDATE operations (LOP_INSERT_ROWS and LOP_MODIFY_ROW) on various system tables (refer to the AllocUnitName column in the picture) for the new table creation.

Next, I insert values into the table:

INSERT INTO Test VALUES (1)
GO
INSERT INTO Test VALUES (2)
GO
INSERT INTO Test VALUES (3)
GO
select [Current LSN], Operation, Context, [Transaction ID], [Previous LSN], [Transaction Name], AllocUnitName, [Begin Time], [End Time] from sys.fn_dblog(NULL, NULL) ORDER BY [Transaction ID]


As you can see, we have three sets of LOP_BEGIN_XACT, LOP_INSERT_ROWS and LOP_COMMIT_XACT operations for the three row inserts, plus some page allocation tasks. You can also see that we have a start time for the BEGIN TRAN and an end time for the COMMIT TRAN. If you look at the Context column for the INSERT operations you can see LCX_HEAP, indicating rows being inserted into a heap table.

Next I ran UPDATE,

UPDATE Test SET a = 5 where a = 1


As expected, you can see a LOP_MODIFY_ROW on LCX_HEAP for the UPDATE statement. Next I ran a DELETE:

DELETE from Test where a = 5


There you go: a LOP_DELETE_ROWS operation on LCX_HEAP.

Next I ran,

DROP TABLE Test


As with the CREATE TABLE statement, DROP TABLE has LOP_BEGIN_XACT and LOP_LOCK_XACT operations, followed by operations updating the system tables and finally the commit operation, LOP_COMMIT_XACT.

Finally, let's see what a rollback looks like in the T-Log:

BEGIN TRAN
INSERT INTO Test VALUES (1)
ROLLBACK


There was an INSERT operation; then on rollback there was a DELETE operation of the inserted row, and finally an abort transaction operation (LOP_ABORT_XACT).

Now that we have seen how to interpret the sys.fn_dblog output, let's see how we can put it to use.

For example, suppose a user DELETED multiple rows by mistake, does not know exactly which rows were deleted, and comes to you for help wanting the database restored to the state it was in just before the deletion. We usually fall back on restoring backups, but if you have multiple users working in the DB, how do you get to the exact state before the DELETE was issued? That's where sys.fn_dblog can be very useful. You can read the output of the log around the time the delete was issued, note the Current LSN of the LOP_BEGIN_XACT operation for the delete, and then restore the full backup and transaction log backup with STOPBEFOREMARK set to that LSN.
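If you are not sure which transaction did the delete, a query along these lines (a minimal sketch using only the columns we have already looked at) can narrow down the LOP_BEGIN_XACT LSN to use:

-- Begin-transaction LSN and start time for any transaction that deleted rows.
SELECT [Current LSN], [Transaction ID], [Begin Time]
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_BEGIN_XACT'
  AND [Transaction ID] IN (SELECT [Transaction ID]
                           FROM sys.fn_dblog(NULL, NULL)
                           WHERE Operation = 'LOP_DELETE_ROWS');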

For example, let's say my LOP_BEGIN_XACT LSN is '0000002c:000000a9:001e'. Just prefix lsn:0x to the Current LSN value of the LOP_BEGIN_XACT operation; in our case it becomes lsn:0x0000002c:000000a9:001e.

Now run RESTORE LOG with STOPBEFOREMARK:

RESTORE LOG [AdventureWorks2012]
FROM DISK = 'H:\DBBkp\Log\AdventureWorks2012_bkp25.trn'
WITH STOPBEFOREMARK = 'lsn:0x0000002c:000000a9:001e',
RECOVERY;

This should get us back to the state exactly before the DELETE statement was issued.

PowerShell and the Metrô


Working downtown, you end up having to use public transport a lot, which in most cases is pretty annoying...

Because of the rain, CPTM and Metrô operations were running with some slowness, but nothing compared to their websites...

The Metrô site (www.metro.sp.gov.br) was very slow, with a response time of around 10-15 seconds.

The CPTM site (www.cptm.sp.gov.br) was not far off either...

So I started wondering whether either of them had an API exposing line status information, and discovered that, of course, neither of them does...

But Viaquatro, which operates metro line 4, does have an API; it only returns information about the Metrô lines and, curiously, nothing about line 4 itself... but it is still worth it...

The page with the metro line status information is: http://www.viaquatro.com.br/generic/Main/LineStatus

Nice: no API key needed, no authentication required, it is simple and direct...

metro

Now, with this, it is already possible to do a little work with PowerShell...


$metro = Invoke-RestMethod -Uri "http://www.viaquatro.com.br/generic/Main/LineStatus" | select * -ExpandProperty StatusMetro
$linha = $metro.ListLineStatus

$linha | select Line,StatusMetro

And now I have a direct query of the line status whenever I want, without having to open the Metrô site...

When I find out whether CPTM has the same kind of service, I will try to incorporate it into the code...

 


Filed under: PowerShell, Uncategorized Tagged: api, código, cptm, http, invoke, linha, metro, posh, powershell, requisicao, restmethod, status

Updating Metadata in SSIS


This is just a quick tip re: updating metadata in SSIS. A few days ago I was speaking with an SSIS developer who wasn't aware of this change so I thought I'd share it.

Let's say a column is deleted in the underlying table or view that feeds an SSIS package. Or, maybe you're starting with a 'template' SSIS package that has metadata for a different table than the one you are building...once you adjust the source query, you see red X's on the precedence constraints (pipeline paths) in the data flow:

Prior to SQL Server 2012, updating metadata changes of this nature was a very tedious process. It's now *much* easier than it was before. First, you right-click on the precedence constraint and choose Resolve References:

  

Then you want to check the box on the bottom left (it's not checked by default) then OK. It will then delete the downstream references in the package for the column(s) which no longer exist:

  

And voila, the references no longer exist for the transformations down below in the data flow. You might have to do this in a couple of places if the data flow splits off (like mine does in the screen shot). Depending on what else changed, you may also need to 'click through' some of the pages of some transformations to update metadata (ex: a data type change), but the Resolve References functionality is a huge time-saver.

My BlueGranite colleague and friend Meagan Longoria has been convincing me recently that these are the kind of changes that BIML is well-equipped to take care of quickly & efficiently when it regenerates a package. I will learn BIML soon (really I will, I will), but in the meantime, Resolve References sure is a handy thing to know. It certainly delighted the SSIS developer I was speaking with last week!

You Might Also Like...

Getting Started with Parameters, Variables & Configurations

Parameterizing Connections and Values at Runtime Using SSIS Environment Variables

Documenting Precedence Constraints in SSIS
