Nebraska SQL from @DBA_ANDY

Configuring a Perfmon Collector for SQL Server

Something I always stress to people when I write and present is the importance of tracking data *before* you need it.  Microsoft does a fair job collecting some data via DMVs, the default trace, and Extended Events, but the data is still very limited in many ways, and in many cases the data doesn't persist past a SQL service restart.

One of the important tools for a SQL Server DBA (or Dev, or Analyst, or...) is Windows Perfmon.  Perfmon can be configured to track a ton of data, but it doesn't collect anything unless you are actively watching via the Perfmon GUI or you have configured a Perfmon Collector.

One of the downsides to Perfmon, like most other monitoring tools, is performance overhead.  The more frequently you measure data, the more impactful it can be.

Because of this, I set up my collector to gather data every five minutes to lessen that impact.  Past discussions with colleagues settled on five minutes as a happy medium - frequent enough to show trends over time, yet infrequent enough to have minimal impact.  Every five minutes may strike you as liable to miss problems, and it can - if something spikes (or valleys) for a moment, or even for a minute or two, you may not see it.  For many Perfmon counters, however, you will see an extreme followed by a gradual change, like this image from my recent "Server Using All of my RAM" blog post:


As you can see, Page Life Expectancy (PLE) on this graph dips, gradually climbs, and then dips again.  With a collection every five minutes you may not catch the exact peak - all you know is that the PLE was 50,000 at 12:55am and then only 100 at 1:00am on 03/13.  It may have climbed higher than that before it dipped, but by 1:00am it had dipped down to around 100 (coincidentally at 1am the CheckDB job had kicked off on a large database).

If you really need to know (in this example) exactly how high PLE gets before it dips, or exactly how low it dips, or at what specific time it valleys or dips, you need to actively watch or set up a collector with a more frequent collection.  You will find that in most cases this absolute value isn't important - it is sufficient to know that a certain item peaks/valleys in a certain five minute interval, or that during a certain five minute interval ("The server was slow last night at 3am") a value was in an acceptable/unacceptable range.

If you do set up a collector with a more frequent interval, make sure to delete it (or at least turn it off) after you have collected your data.  I am a fan of deleting it outright so that it doesn't accidentally get turned back on and cause impact, but sometimes it does make more sense to leave it in place. #ItDepends
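If that need arises, you don't necessarily have to build a second collector - logman (the same tool the batch files below use to create the collector) can adjust the sample interval of an existing one.  A minimal sketch, assuming the SQLServerPerf collector name used later in this post and a temporary 60-second interval:
logman stop SQLServerPerf
logman update counter SQLServerPerf -si 60
logman start SQLServerPerf
Just remember to set the interval back (or delete the collector) once the investigation is over.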

--

My mechanism (not originally created by me but significantly modified from the original source material - I do not know who the original author was) uses a folder structure that I can ZIP and then unzip into C:\Perflogs, with batch files to create the collector and manage it via Windows Scheduled Tasks.

Here are the counters I collect by default:

  • "\Memory\Available MBytes"
  • "\Memory\Pages/sec"
  • "\MSSQLSERVER:Access Methods\Forwarded Records/sec"
  • "\MSSQLSERVER:Access Methods\Full Scans/sec"
  • "\MSSQLSERVER:Access Methods\Index Searches/sec"
  • "\MSSQLSERVER:Buffer Manager\Buffer cache hit ratio"
  • "\MSSQLSERVER:Buffer Manager\Free List Stalls/sec"
  • "\MSSQLSERVER:Buffer Manager\Free pages"
  • "\MSSQLSERVER:Buffer Manager\Lazy writes/sec"
  • "\MSSQLSERVER:Buffer Manager\Page life expectancy"
  • "\MSSQLSERVER:Buffer Manager\Page reads/sec"
  • "\MSSQLSERVER:Buffer Manager\Page writes/sec"
  • "\MSSQLSERVER:General Statistics\User Connections"
  • "\MSSQLSERVER:Latches(*)\Latch Waits/sec"
  • "\MSSQLSERVER:Locks(*)\Lock Waits/sec"
  • "\MSSQLSERVER:Locks(*)\Number of Deadlocks/sec"
  • "\MSSQLSERVER:Memory Manager\Target Server Memory (KB)"
  • "\MSSQLSERVER:Memory Manager\Total Server Memory (KB)"
  • "\MSSQLSERVER:SQL Statistics\Batch Requests/sec"
  • "\MSSQLSERVER:SQL Statistics\SQL Compilations/sec"
  • "\MSSQLSERVER:SQL Statistics\SQL Re-Compilations/sec"
  • "\Paging File(*)\% Usage"
  • "\PhysicalDisk(*)\Avg. Disk sec/Read"
  • "\PhysicalDisk(*)\Avg. Disk sec/Write"
  • "\PhysicalDisk(*)\Disk Reads/sec"
  • "\PhysicalDisk(*)\Disk Writes/sec"
  • "\Process(sqlservr)\% Privileged Time"
  • "\Process(sqlservr)\% Processor Time"
  • "\Processor(*)\% Privileged Time"
  • "\Processor(*)\% Processor Time"
  • "\SQLSERVER:Access Methods\Forwarded Records/sec"
  • "\SQLSERVER:Access Methods\Full Scans/sec"
  • "\SQLSERVER:Access Methods\Index Searches/sec"
  • "\SQLSERVER:Buffer Manager\Buffer cache hit ratio"
  • "\SQLSERVER:Buffer Manager\Free List Stalls/sec"
  • "\SQLSERVER:Buffer Manager\Free pages"
  • "\SQLSERVER:Buffer Manager\Lazy writes/sec"
  • "\SQLSERVER:Buffer Manager\Page life expectancy"
  • "\SQLSERVER:Buffer Manager\Page reads/sec"
  • "\SQLSERVER:Buffer Manager\Page writes/sec"
  • "\SQLSERVER:General Statistics\User Connections"
  • "\SQLSERVER:Latches(*)\Latch Waits/sec"
  • "\SQLSERVER:Locks(*)\Lock Waits/sec"
  • "\SQLSERVER:Locks(*)\Number of Deadlocks/sec"
  • "\SQLSERVER:Memory Manager\Target Server Memory (KB)"
  • "\SQLSERVER:Memory Manager\Total Server Memory (KB)"
  • "\SQLSERVER:SQL Statistics\Batch Requests/sec"
  • "\SQLSERVER:SQL Statistics\SQL Compilations/sec"
  • "\SQLSERVER:SQL Statistics\SQL Re-Compilations/sec"
  • "\System\Processor Queue Length"

As you can see there is duplication between MSSQLSERVER counters and SQLSERVER counters - this is because at some version of SQL Server the hive name changed from MSSQLSERVER to SQLSERVER, and including both of them in the list covers all of the bases regardless of the Windows/SQL Server version being monitored.  If the listed hive doesn't exist, the collector is still created without error, so it doesn't hurt to have both of them in the list.

This is my default list, curated over time from experiences and input from others - if you have other counters you want to collect just edit the list accordingly.

One note - if you have a named instance, the counter hive will be named differently - something like MSSQL$instancename.  The easiest way to handle this is to edit the counter list, copy-paste the SQLSERVER counter list, and then find-replace SQLSERVER to MSSQL$instancename for the new items.
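For example, for a hypothetical named instance SERVER\SQL01, the find-replaced entries would look like this (MSSQL$SQL01 is just an illustration - substitute your actual instance name):
"\MSSQL$SQL01:Buffer Manager\Page life expectancy"
"\MSSQL$SQL01:SQL Statistics\Batch Requests/sec"
"\MSSQL$SQL01:Memory Manager\Total Server Memory (KB)"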

--

The folder structure starts at the top level with a folder named SQLServerPerf (in my case Ntirety-SQLServerPerf).  Below that are two folders, SQLServerPerf\BatchFiles and SQLServerPerf\Logs.

By default, the Logs folder is empty.  In BatchFiles are five files - a .CFG file that includes the counter list above, and four .BAT files to create/start/stop/cycle the collector itself.

To start, unzip the package (or copy the SQLServerPerf folder) into C:\PerfLogs, resulting in C:\PerfLogs\SQLServerPerf\

Why do I use C:?  By default Windows creates a PerfLogs folder there, plus using that path (almost) guarantees that the batch files will run, since Windows servers in general have C: drives.  Using a different drive would require edits to the files to reference that different drive - and if you absolutely can't write to C:, that is the fix: edit the files to change references from C: to the drive of your choice.

--

SQLServerPerf\BatchFiles\SQLServer.CFG is a text file whose contents are just the counter list:
"\Memory\Available MBytes"
"\Memory\Pages/sec"
"\MSSQLSERVER:Access Methods\Forwarded Records/sec"
"\MSSQLSERVER:Access Methods\Full Scans/sec"
"\MSSQLSERVER:Access Methods\Index Searches/sec"
"\MSSQLSERVER:Buffer Manager\Buffer cache hit ratio"
"\MSSQLSERVER:Buffer Manager\Free List Stalls/sec"
"\MSSQLSERVER:Buffer Manager\Free pages"
"\MSSQLSERVER:Buffer Manager\Lazy writes/sec"
"\MSSQLSERVER:Buffer Manager\Page life expectancy"
"\MSSQLSERVER:Buffer Manager\Page reads/sec"
"\MSSQLSERVER:Buffer Manager\Page writes/sec"
"\MSSQLSERVER:General Statistics\User Connections"
"\MSSQLSERVER:Latches(*)\Latch Waits/sec"
"\MSSQLSERVER:Locks(*)\Lock Waits/sec"
"\MSSQLSERVER:Locks(*)\Number of Deadlocks/sec"
"\MSSQLSERVER:Memory Manager\Target Server Memory (KB)"
"\MSSQLSERVER:Memory Manager\Total Server Memory (KB)"
"\MSSQLSERVER:SQL Statistics\Batch Requests/sec"
"\MSSQLSERVER:SQL Statistics\SQL Compilations/sec"
"\MSSQLSERVER:SQL Statistics\SQL Re-Compilations/sec"
"\Paging File(*)\% Usage"
"\PhysicalDisk(*)\Avg. Disk sec/Read"
"\PhysicalDisk(*)\Avg. Disk sec/Write"
"\PhysicalDisk(*)\Disk Reads/sec"
"\PhysicalDisk(*)\Disk Writes/sec"
"\Process(sqlservr)\% Privileged Time"
"\Process(sqlservr)\% Processor Time"
"\Processor(*)\% Privileged Time"
"\Processor(*)\% Processor Time"
"\SQLSERVER:Access Methods\Forwarded Records/sec"
"\SQLSERVER:Access Methods\Full Scans/sec"
"\SQLSERVER:Access Methods\Index Searches/sec"
"\SQLSERVER:Buffer Manager\Buffer cache hit ratio"
"\SQLSERVER:Buffer Manager\Free List Stalls/sec"
"\SQLSERVER:Buffer Manager\Free pages"
"\SQLSERVER:Buffer Manager\Lazy writes/sec"
"\SQLSERVER:Buffer Manager\Page life expectancy"
"\SQLSERVER:Buffer Manager\Page reads/sec"
"\SQLSERVER:Buffer Manager\Page writes/sec"
"\SQLSERVER:General Statistics\User Connections"
"\SQLSERVER:Latches(*)\Latch Waits/sec"
"\SQLSERVER:Locks(*)\Lock Waits/sec"
"\SQLSERVER:Locks(*)\Number of Deadlocks/sec"
"\SQLSERVER:Memory Manager\Target Server Memory (KB)"
"\SQLSERVER:Memory Manager\Total Server Memory (KB)"
"\SQLSERVER:SQL Statistics\Batch Requests/sec"
"\SQLSERVER:SQL Statistics\SQL Compilations/sec"
"\SQLSERVER:SQL Statistics\SQL Re-Compilations/sec"
"\System\Processor Queue Length"
--

SQLServerPerf\BatchFiles\SQLPerfmonCollector-Create.bat is a batch (text) file whose contents create the collector and also several Windows Scheduled Tasks that manage the collector:
logman create counter SQLServerPerf -f bin -si 300 -v nnnnnn -o "c:\perflogs\SQLServerPerf\Logs\SQLServerPerf" -cf "c:\perflogs\SQLServerPerf\BatchFiles\SQLServer.cfg"
timeout /T 2
logman start SQLServerPerf
timeout /T 2
schtasks /create /tn "Cycle SQLServerPerf Perfmon Counter Log" /tr C:\PerfLogs\SQLServerPerf\BatchFiles\SQLPerfmonCollector-Cycle.bat /sc daily /st 23:59:58 /ed 01/01/2099 /ru system
timeout /T 2
schtasks /create /tn "Start SQLServerPerf Perfmon Counter Log" /tr C:\PerfLogs\SQLServerPerf\BatchFiles\SQLPerfmonCollector-Start.bat /sc onstart /ru system
timeout /T 2
schtasks /create /tn "Purge SQLServerPerf Perfmon Counter Log" /tr "PowerShell -command {Get-ChildItem -path C:\PerfLogs\SQLServerPerf\Logs -Filter *.blg | where {$_.Lastwritetime -lt (date).addmonths(-13)} | remove-item}" /sc daily /st 23:59:58 /ed 01/01/2099 /ru system
pause
As you can see there are references to the direct path to our C:\PerfLogs\SQLServerPerf - if you move the files to another drive/path, these references need to be changed.

This batch file does the following:

  1. Uses "logman create" to create the actual collector, writing files to to the Logs folder and gathering the counters listed in the SQLServer.cfg file
  2. Uses "timeout" to pause for two seconds to allow the user to see any return messages
  3. Uses "logman start" to start the collector
  4. Pauses two more seconds
  5. Uses "schtasks /create" to create a scheduled task to run SQLPerfmonCollector-Cycle.bat nightly at 11:59:58pm (Batch file contents to follow)
  6. Pauses two more seconds
  7. Uses "schtasks /create" to create a scheduled task to run SQLPerfmonCollector-Start.bat on system startup - this makes sure that after a reboot the collector is automatically started for minimal interruption
  8. Pauses two more seconds
  9. Uses "schtasks /create" to create a scheduled task to run nightly at 11:59:58pm to run a Powershell command to delete log (.BLG) files older than 13 months
  10. Uses "pause" to stop progress until the user hits a key - this again prevents the window from closing until the user acknowledges the progress
This is specifically written to be run interactively (right-click on the BAT file and "Run" or, if available, "Run As Administrator") - if you want to truly automate the deployment of the collector, the pause command should be removed.

--

**Important Note** - current versions of Windows have settings for some of what I am using Scheduled Tasks to do, such as restarting the collector.  This wasn't always the case, and I wanted a solution I could deploy without worrying about the Windows version.

--

SQLServerPerf\BatchFiles\SQLPerfmonCollector-Cycle.bat is a batch (text) file whose contents stop and then start the collector:
logman stop SQLServerPerf
timeout /T 2
logman start SQLServerPerf
I included the timeout to pause for two seconds because I sometimes had issues with the stop/start process where the start command would complain that the collector was already running since the stop hadn't completed yet - the two-second pause gives the collector time to completely stop before the start attempt.

--

SQLServerPerf\BatchFiles\SQLPerfmonCollector-Start.bat is a batch (text) file whose contents start the collector:
logman start SQLServerPerf
--

SQLServerPerf\BatchFiles\SQLPerfmonCollector-Stop.bat is a batch (text) file whose contents stop the collector:
logman stop SQLServerPerf
I included this for completeness when I was creating the batch files although I have yet to need it - if I want to stop the collector for some reason I simply do so interactively in the Perfmon GUI.

--

As noted above, I keep the SQLServerPerf folder as a ZIP, and then to deploy it I unzip it into C:\PerfLogs and run the Create batch file - it's as simple as that.

The output file from a Perfmon collector by default is a .BLG file.  As noted above, there is a scheduled task to stop and start the collector every night, resulting in a new .BLG file each day:


To look at an individual day's info, you simply open that day's file in Perfmon (by opening Perfmon and then opening the file, or by double-clicking the BLG file itself).  If you wish to string multiple days' data together, use the Relog command as I described in this post from a couple years ago: "Handling Perfmon Logs with Relog."
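As a quick refresher on that technique, relog can merge multiple .BLG files into a single file (a minimal sketch - the file names here are hypothetical):
relog SQLServerPerf_000001.blg SQLServerPerf_000002.blg -f BIN -o SQLServerPerf_Combined.blg
The combined SQLServerPerf_Combined.blg then opens in Perfmon like any other log.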

As you can see in the above screenshot, on my laptop the collector stores about 1MB per day.  Even on a big OLTP production cluster I don't see more than 1.5MB-2.0MB/day.  Stretch that out over 13 months (remember we purge files older than 13 months) and it comes out to a max size of around 800MB.  Hopefully 1GB of Perfmon data isn't going to fill your C: drive (if it is, you have other problems) but if it is going to be a problem, relocate the folders as described above.

--

There are few things more important for *reacting* to problems than to be *proactively* gathering data - and a Perfmon collector helps you to do just that.

Hope this helps!



How is SQL Server using more than the Max Server Memory?

As usual, the pager went off...

"Server is near 100% memory utilization - what is running?"
I saw when I signed on that the Windows server was at 98% memory utilization (the sum of all processes in Windows, including but not limited to SQL Server) and that the sqlservr.exe process for the single SQL Server 2008R2 instance was using a lion’s share of it – 77GB out of 80GB on the server.

The Max Server Memory cap in SQL Server was set to 72.5GB, so how was this possible?!?

A key point that many DBAs don't consider is that multiple other processes run outside of the Buffer Pool memory, and before SQL Server 2012, the Buffer Pool was the only thing governed by the Max Server Memory cap.  This is how the sqlservr process can use more than the cap.

When you configure Max Server Memory (you *have* configured Max Server Memory, right!!?!?!?!?!) all it did before SQL Server 2012 was set a cap on the Buffer Pool.  Not only does this not impact external SQL Server processes such as Reporting Services and Analysis Services, it doesn't even cover everything inside sqlservr.exe.

(SQL Server 2012 dramatically changed what is and isn't included under Max Server Memory - see the post "Memory Manager Configuration changes in SQL Server 2012" from the MS SQLOS team which discusses how several things were shifted to be included under Max Server Memory as of 2012.)

An interesting point in this situation was the fact that I saw that the Page Life Expectancy (PLE) was through the roof – over 88,000 on both NUMA nodes.  Regardless of what guidelines you subscribe to, this is a very high number and indicates a large amount of free Buffer Pool.  
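If you want to check PLE yourself (including the per-NUMA-node values I was looking at), it is exposed through the sys.dm_os_performance_counters DMV - a minimal sketch:
SELECT [object_name], counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy'
The Buffer Manager row is the instance-wide value, while the Buffer Node rows break it out per NUMA node.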

This relates to how the SQL Server application (and most other enterprise applications) manage memory – they gradually grab more and more memory as they need it (for large queries, etc.) but they don’t release it gracefully (if at all).  
At some point some large query or other unit of work probably needed that 72GB of Buffer Pool memory, but it was mostly free at the time I checked the PLE value (as evidenced by the large number).  

In many unfortunate cases the only way to release this memory from the Windows process is to restart the SQL Server service (MSSQLServer or MSSQL$instance) in Windows, or to lower the Max Server Memory cap in SQL Server (which will gradually force the Buffer Pool to release the memory it is holding to the level of the new cap - this can take many minutes depending on how much the value of Max Server Memory is decreased).
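For reference, lowering the cap is a plain sp_configure change - a sketch (71600MB is the value I ended up using later in this story):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 71600;
RECONFIGURE;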

--

Sidebar: WARNING - as we have discussed in the past, remember that PLE is a trending number - the fact that PLE is 88,000 right now does *not* by itself indicate you have too much memory configured for your process - it just means that at the moment it was measured, there was significant free memory.  

PLE needs to be monitored over time at different times of day during different times of the week and month.  It may be that there is significant free memory right now, but at 2am when CheckDB is running (you *do* run CheckDB, right!!?!?!?!?!) or on the last day of the month when the payroll reports run, PLE may drop down to nothing as that process needs all of the memory available and then some.  

NEVER, EVER, make capacity planning decisions solely on a single point-in-time value measurement - even if you think it is a busy time on the server!  This is *NOT* an #ItDepends


--

As I was saying, there are many components that work outside of the buffer pool, and one of them (in 2008R2) is CLR (Common Language Runtime), the run-time process for executing .NET managed code (such as VB.NET and C#) inside SQL Server.

This was the output for the CLR memory space (from DBCC MEMORYSTATUS):
MEMORYCLERK_SQLCLR (node 0)              KB
---------------------------------------- -----------
VM Reserved                              6313088
VM Committed                             25792
Locked Pages Allocated                   0
SM Reserved                              0
SM Committed                             0
SinglePage Allocator                     1408
MultiPage Allocator                      39256

6313088KB reserved equals 6GB, which is a fair chunk of the “extra” memory SQL Server was using.  As you can see here the “committed” number was significantly smaller than the “reserved” number, meaning that at the specific moment I ran the query there wasn’t much CLR traffic going on, but there was enough recent work that it had to reserve 6GB of space.

The DMV companion to this is sys.dm_os_memory_clerks, and a useful query is:
SELECT TOP 10 [type],
virtual_memory_reserved_kb,
virtual_memory_committed_kb
FROM sys.dm_os_memory_clerks
ORDER BY virtual_memory_reserved_kb DESC

As you can see, the DMV returns comparable information, and you receive it in a nice tight query result set rather than an ugly 80-page DBCC output. :)

--

In the article referenced above, CLR is one of the things that shifts under Max Server Memory as of SQL Server 2012, so if this had been a SQL Server 2012/2014 the problem may not have even been noticed.  With so much free Buffer Pool (high PLE) there might have been sufficient head room under Max Server Memory to handle the CLR needs *without* taking the server to 98%+ RAM utilization.

CLR is one of those things you just have to allow for when planning memory capacity and setting Max Server Memory on a SQL Server – in this case *unless* there was something unusual going on with the CLR – such as a new code release that has badly memory-managed code in it – this showed that the Max Server Memory cap needed to be set lower on this instance (or RAM needed to be added to the server) to allow for what CLR needed.

IMPORTANT - realize this need to plan for CLR is true regardless of your SQL version - on a SQL Server 2005/2008/2008R2, Max Server Memory needs to be set sufficiently low to give head room for CLR *outside* the cap, while on a newer SQL Server with the changes described above, the cap needs to be set high enough to include CLR's needs.

--

As the on-call DBA I wasn't familiar with the regular workload of this server, so I advised the client and the primary DBA that if the current situation was the "real" and "normal" state of this server, Max Server Memory should be lowered by 2GB-4GB to allow more head room for the operating system, etc. while still satisfying the needs of the CLR space.

I lowered the Max Server Memory cap to 70GB (71600MB) to try to help in the immediate term - since the PLE was so high the instance could handle it, and the free memory on the Windows server went up to 5% (from 2% when I logged on).

At the end of the day I turned it over to the client's primary DBA to follow-up with the client as to whether this was a memory capacity problem - does the client really need 72.5GB of Buffer Pool *and* 6+GB of CLR memory? - or if it was a code/intermittent problem - was there a new piece of poorly written code that was the underlying cause of the issue?

--

The takeaway here is to remember that there are many things - CLR, extended events sessions, the lock manager, linked servers, and lots more - that function outside the realm of the Buffer Pool, depending on your SQL Server version.  If your server uses CLR components, or if you do a lot of XEvents tracking, or use complicated extended stored procedures, etc., make sure to allow for that when capacity planning and when configuring Max Server Memory.

Hope this helps!


Another Source of Knowledge

It started with a tweet from one of the many members of the #sqlfamily who always has something meaningful to say, Jes Borland (blog/@grrl_geek):


The referenced link is http://www.ted.com/talks/celeste_headlee_10_ways_to_have_a_better_conversation.

I followed Jes's advice, paused the other items I was working on, and watched the talk.  It was a nice tight eleven minutes on how to be present in the moment, paying attention to those around you and interacting with them in a more meaningful way.

I was not previously familiar with the speaker Celeste Headlee (blog/@CelesteHeadlee), a public radio talk host, but after listening to her talk I could definitely listen to her again - she spoke very clearly and made her points without much fluff.

I'm sure some people will tune out right there, thinking they would never agree with a public radio host or her agenda - my take has always been that it is worth giving any speaker at least two chances.  If I have listened to you speak/present/etc. twice (anyone can have a bad day) and both times you have failed to hold my interest (on topics in which I am interested), I will then discount you as a speaker - at least as a speaker I would be interested in listening to.

--

While I enjoyed this particular talk, it reminded me of something I used to really enjoy doing and have all but stopped over the last couple years.

I tripped over TED talks online five or six years ago, back when I still worked in an on-site office.  I don't remember how - probably from Twitter or Facebook - but I discovered this wide array of free recordings that I could listen to on my breaks or when I had a free few minutes.  As a full-time work-from-home employee these last few years I have not handled my workday in the same way (I eat lunch with my family now, for example) and have not watched talks in the same way I used to.

TED is a nonprofit organization that puts on a series of conferences on Technology, Entertainment, and Design.  The talks given at the conferences are by practice 18 minutes or less (although some do run longer), with many talks being around ten minutes long.

The conferences themselves aren't cheap (and as such I have never been in person) - TED2017 in Vancouver next year has a registration cost of US$8500 - but the real treasure trove are the talks themselves, posted online for free on TED's website (and often on YouTube) after a reasonable time has passed.

TED talks are given by an amazing array of speakers from all different backgrounds, from industry leaders like Bill Gates and Richard Branson to intellectuals like Sir Ken Robinson and Dan Gilbert to entertainers like Apollo Robbins and Ze Frank.

You can search the talks by topic or speaker or length, or you can watch one of the curated playlists of talks to see talks related to one another on a given subject.

A sidebar to the TED conferences and talks is TEDx.  TEDx is an offshoot of the parent conference where local staff organize an event that is a combination of TED-style talks from local/regional speakers and airings of national TED talks.  The best comparison I can make for SQL Server is to think of TEDx as TED's SQL Saturday - not entirely accurate but close.  TEDx talks are also available for viewing for free at ted.com.

Another benefit of a resource like this is to a speaker (or a potential speaker, which is about everyone).  You can learn a lot about speaking to an audience from simply watching other people present, and these talks are great examples.

--

At the end of the day, you probably won't learn anything directly about SQL Server from seeking out things like TED talks, but you will learn a lot about the world around you, and remember Knowledge is Power!


(Whoa...flashback...)

Hope this helps!

Update Your Scripts!

I had an incident today where a colleague was running a Wait Stats Report and the output looked like this:

As soon as he showed me the report I knew he had an outdated script.

Each new version of SQL Server introduces new wait types, often (but not always) related to new features.  For example, waits related to AlwaysOn (the feature formerly known as HADRON) have a prefix of HADR.

The wait at the top of his list, HADR_FILESTREAM_IOMGR_IOCOMPLETION, means "The FILESTREAM AlwaysOn I/O manager is waiting for I/O completion" - the catch is that the AlwaysOn I/O manager is almost always waiting for I/O completion as data flows back and forth, so this isn't indicative of any issue and can be excluded.

Paul Randal (@PaulRandal/blog) keeps a list of the "excludable" wait types in a blog post he maintains related to wait stats.  The post includes a version of the Wait Stats script from Glenn Berry's (@GlennAlanBerry/blog) DMV scripts, and that Wait Stats script has a large list of exclusions:
WHERE [wait_type] NOT IN (
        N'BROKER_EVENTHANDLER',             N'BROKER_RECEIVE_WAITFOR',
        N'BROKER_TASK_STOP',                N'BROKER_TO_FLUSH',
        N'BROKER_TRANSMITTER',              N'CHECKPOINT_QUEUE',
        N'CHKPT',                           N'CLR_AUTO_EVENT',
        N'CLR_MANUAL_EVENT',                N'CLR_SEMAPHORE',
        N'DBMIRROR_DBM_EVENT',              N'DBMIRROR_EVENTS_QUEUE',
        N'DBMIRROR_WORKER_QUEUE',           N'DBMIRRORING_CMD',
        N'DIRTY_PAGE_POLL',                 N'DISPATCHER_QUEUE_SEMAPHORE',
        N'EXECSYNC',                        N'FSAGENT',
        N'FT_IFTS_SCHEDULER_IDLE_WAIT',     N'FT_IFTSHC_MUTEX',
        N'HADR_CLUSAPI_CALL',               N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        N'HADR_LOGCAPTURE_WAIT',            N'HADR_NOTIFICATION_DEQUEUE',
        N'HADR_TIMER_TASK',                 N'HADR_WORK_QUEUE',
        N'KSOURCE_WAKEUP',                  N'LAZYWRITER_SLEEP',
        N'LOGMGR_QUEUE',                    N'ONDEMAND_TASK_QUEUE',
        N'PWAIT_ALL_COMPONENTS_INITIALIZED',
        N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP',
        N'QDS_SHUTDOWN_QUEUE',
        N'QDS_CLEANUP_STALE_QUERIES_TASK_MAIN_LOOP_SLEEP',
        N'REQUEST_FOR_DEADLOCK_SEARCH',     N'RESOURCE_QUEUE',
        N'SERVER_IDLE_CHECK',               N'SLEEP_BPOOL_FLUSH',
        N'SLEEP_DBSTARTUP',                 N'SLEEP_DCOMSTARTUP',
        N'SLEEP_MASTERDBREADY',             N'SLEEP_MASTERMDREADY',
        N'SLEEP_MASTERUPGRADED',            N'SLEEP_MSDBSTARTUP',
        N'SLEEP_SYSTEMTASK',                N'SLEEP_TASK',
        N'SLEEP_TEMPDBSTARTUP',             N'SNI_HTTP_ACCEPT',
        N'SP_SERVER_DIAGNOSTICS_SLEEP',     N'SQLTRACE_BUFFER_FLUSH',
        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'SQLTRACE_WAIT_ENTRIES',           N'WAIT_FOR_RESULTS',
        N'WAITFOR',                         N'WAITFOR_TASKSHUTDOWN',
        N'WAIT_XTP_HOST_WAIT',              N'WAIT_XTP_OFFLINE_CKPT_NEW_LOG',
        N'WAIT_XTP_CKPT_CLOSE',             N'XE_DISPATCHER_JOIN',
        N'XE_DISPATCHER_WAIT',              N'XE_TIMER_EVENT')
If you refer back to my colleague's original screen shot, you'll see that the top ten items on his list - everything above "OLEDB" - are on the excludable list!
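For context, that exclusion list plugs into a wait stats query shaped roughly like this minimal sketch against sys.dm_os_wait_stats (exclusion list abbreviated here - use the full list above):
SELECT TOP 10
wait_type,
wait_time_ms / 1000.0 AS wait_time_sec,
waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
N'BROKER_EVENTHANDLER', N'CHECKPOINT_QUEUE', N'LAZYWRITER_SLEEP'
/* ...plus the rest of the excludable list above... */)
ORDER BY wait_time_ms DESC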

--

The broader lesson here is to make sure you update your script libraries regularly - even if a script still runs and provides output (that is, you think it "works") it doesn't mean you are receiving valid data.

Although this example is about wait stats and wait types, it is applicable to a wide array of configurations and settings.  Changes like this are often version-related, but even within a version it can be decided that a particular wait type/trace flag/sp_configure setting/etc. is no longer important and can be ignored - or even worse, that some item is now important but wasn't included in your original scripts!

A little regular maintenance and ongoing research will help your toolbox stay clean and organized.


Hope this helps!





Pulling Security Info

A frequent request I receive is to pull a list of logins/users with certain accesses, role memberships, etc.

I had a query to use xp_logininfo to pull group membership chains - that is, DOMAIN\Andy has access, but not directly - DOMAIN\Andy has access because he is a member of DOMAIN\DBAGroup.  The query is this:
/*
Domain Login Group Security Info
*/ 
DECLARE @name sysname 
CREATE TABLE ##logininfo
(
[account name] sysname,
[type] nvarchar(50),
[privilege] nvarchar(50),
[mapped login name] sysname,
[permission path] sysname
)
DECLARE namecursor cursor fast_forward
for
select name from master.sys.server_principals
where type='G' and name not like 'NT SERVICE%' 
open  namecursor
fetch next from namecursor into @name 
WHILE @@fetch_status=0
BEGIN
insert into ##logininfo EXEC ('xp_logininfo '''+ @name+''',''members''')
fetch next from namecursor into @name
END 
CLOSE namecursor
DEALLOCATE namecursor 
select @@SERVERNAME as InstanceName, *
from ##logininfo
/*
where [mapped login name] like'%agalb%'
*/ 
DROP TABLE ##logininfo
I needed the other half (or two-thirds) - I needed the ability to pull server and database role memberships.  Rather than script something from scratch I went looking and sure enough found the raw material for what I wanted on SQLServerCentral in the forums in the post "Query to get the lisst of logins having sysadmin and serveradmin."

The query is pulled from Jagan Kondapalli's (@JVKondapalli) reply to the original poster's question.  I modified it in a couple places and am posting my modification here:
IF  EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE name = '##Users' AND type in (N'U'))
    DROP TABLE ##Users
IF  EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE name = '##LOGINS' AND type in (N'U'))
    DROP TABLE ##LOGINS
GO
USE tempdb
GO
/*CREATE TABLE ##LOGINS
(
[Login Name] varchar(50),
[Default Database] varchar(60),
[Login Type] varchar(40),
[AD Login Type] varchar(40),
[sysadmin] char(5),
[securityadmin] char(5),
[serveradmin] char(5),
[setupadmin] char(5),
[processadmin] char(5),
[diskadmin] char(5),
[dbcreator] char(5),
[bulkadmin] char(5)
)*/
CREATE TABLE ##Users
(
    [Database] VARCHAR(64),
    [Database User ID] VARCHAR(64),
    [Server Login] VARCHAR(64),
    [Database Role] VARCHAR(64)
)
use master
go
SELECT  sid,
        loginname AS [Login Name],
        dbname AS [Default Database],
        CASE isntname
            WHEN 1 THEN 'AD Login'
            ELSE 'SQL Login'
        END AS [Login Type],
        CASE
            WHEN isntgroup = 1 THEN 'AD Group'
            WHEN isntuser = 1 THEN 'AD User'
            ELSE ''
        END AS [AD Login Type],
        CASE sysadmin
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [sysadmin],
        CASE [securityadmin]
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [securityadmin],
        CASE [serveradmin]
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [serveradmin],
        CASE [setupadmin]
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [setupadmin],
        CASE [processadmin]
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [processadmin],
        CASE [diskadmin]
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [diskadmin],
        CASE [dbcreator]
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [dbcreator],
        CASE [bulkadmin]
            WHEN 1 THEN 'Yes'
            ELSE 'No'
        END AS [bulkadmin]
INTO ##LOGINS
FROM dbo.syslogins /*IN ORDER TO GET THE ACCESS INFORMATION A LOGIN ADD THE LOGIN NAME TO THE WHERE CLAUSE BELOW*/
--WHERE [LOGINNAME] = 'PUNCH IN THE LOGIN NAME HERE'
SELECT @@SERVERNAME as InstanceName, [Login Name], [Default Database],
        [Login Type], [AD Login Type], [sysadmin], [securityadmin], [serveradmin],
        [setupadmin], [processadmin], [diskadmin], [dbcreator], [bulkadmin]
FROM tempdb..##LOGINS
--where [mapped login name] like'%agalb%'
ORDER BY [Login Type], [AD Login Type], [Login Name]
--
USE master
GO
DECLARE @DBName             VARCHAR(60)
DECLARE @SQLCmd             VARCHAR(1024)
Declare @DBID varchar(3)
set @DBID = (select MAX(database_id) from sys.databases)
--print @DBID
WHILE @DBID != 0
BEGIN
set @DBName = (select DB_NAME (''+@DBID+''))
SELECT @SQLCmd = 'INSERT ##Users ' +
        ' SELECT ''' + @DBName + ''' AS [Database],' +
        '      su.[name] AS [Database User ID], ' +
        '      COALESCE (u.[Login Name], ''** Orphaned **'') AS [Server Login], ' +
        '      COALESCE (sug.name, ''Public'') AS [Database Role] ' +
        '   FROM [' + @DBName + '].[dbo].[sysusers] su' +
        '       LEFT OUTER JOIN ##LOGINS u' +
        '           ON su.sid = u.sid' +
        '       LEFT OUTER JOIN ([' + @DBName + '].[dbo].[sysmembers] sm ' +
        '                            INNER JOIN [' + @DBName + '].[dbo].[sysusers] sug  ' +
        '                                ON sm.groupuid = sug.uid)' +
        '           ON su.uid = sm.memberuid ' +
        '   WHERE su.hasdbaccess = 1' +
        '     AND su.[name] != ''dbo'''
     IF DATABASEPROPERTYEX(@DBName, 'Status')='ONLINE'
EXEC (@SQLCmd)
     print @DBID
     set @DBID = @DBID - 1
END
SELECT @@SERVERNAME as InstanceName,*
FROM ##Users
/*IN ORDER TO GET THE ACCESS INFORMATION OF A USER ADD THE USER TO THE WHERE CLAUSE BELOW*/
--WHERE [Database User ID] = 'PUNCH IN THE USER HERE'
/*IN ORDER TO GET THE ACCESS INFORMATION OF ALL USERS TO A PARTICULAR DATABASE, ADD THE DATABASE NAME TO THE WHERE CLAUSE BELOW*/
--WHERE [DATABASE] = 'PUNCH IN YOUR DATABASE NAME HERE'
--where [server login] like '%AGALBRAI%'
ORDER BY [Database], [Database User ID]
   
IF  EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE name = '##LOGINS' AND type in (N'U'))
    DROP TABLE ##LOGINS
GO
IF  EXISTS (SELECT * FROM tempdb.dbo.sysobjects WHERE name = '##Users' AND type in (N'U'))
    DROP TABLE ##Users
Again - this is *not* my underlying query but it is exactly what I needed to pull role memberships.  I usually pull this info and dump it into Excel for client consumption.

I could not find a blog or other presence for Jagan besides his SQLServerCentral profile and the inactive Twitter account mentioned above, but thanks!
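As a sidebar, the script above leans on the old compatibility views (syslogins/sysusers/sysmembers).  If you would rather use the modern catalog views, server role memberships can be pulled with something like this minimal sketch:
SELECT @@SERVERNAME as InstanceName,
r.name as ServerRole,
m.name as MemberLogin
FROM sys.server_role_members srm
JOIN sys.server_principals r ON srm.role_principal_id = r.principal_id
JOIN sys.server_principals m ON srm.member_principal_id = m.principal_id
ORDER BY r.name, m.name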

--

Hope this helps!


Why I Love #SQLSaturday

PASS SQLSaturdays are free 1-day training events for SQL Server professionals that focus on local speakers, providing a variety of high-quality technical sessions, and making it all happen through the efforts of volunteers. Whether you're attending a SQLSaturday or thinking about hosting your own, we think you'll find it's a great way to spend a Saturday – or any day. - sqlsaturday.com
Sounds simple, doesn't it?  Here are the top three reasons I *LOVE* SQLSaturdays:

The opportunity to learn - the quality and quantity of free training available at a SQLSaturday never ceases to amaze me.  Most SQLSaturdays have anywhere from three to six tracks, resulting in 15-30 sessions on everything from database administration to business intelligence to professional development and everything in between.  Sessions range from 100-level introductory up through 400-level expert.
 
If you want to pay $100-$150, most SQLSaturdays offer full-day "pre-conference" sessions on the Friday before the event (and sometimes even the Thursday before as well). While it isn't free, there aren't very many places to get a full day of high end training for a hundred bucks!

Another aspect of this is *who* provides the training.  I regularly see sessions from Microsoft Certified Masters (MCM's) and Most Valuable Professionals (MVP's) as well as amazing content from first-timers dipping their toes in the pond.  At a recent SQL Saturday (in Boston) I saw an MCM present a great talk on working with virtualized SQL, one MVP speak on working with your infrastructure team, and another MVP talk about the Query Store, an upcoming feature from the unreleased next version of SQL Server.  Having said that, one of the best SQLSaturday sessions I have ever seen came a couple years ago from a first-time speaker excitedly presenting a new way she had devised to analyze trace data. 

All of these speakers share their expertise without pay or reward (other than usually a nice dinner the night before).

The opportunity to share - another fun part of SQLSaturday for me is being one of those speakers sharing what I know for free.  I have written before about the benefits of blogging and speaking, and they are many.  The biggest benefit to me personally (not counting the joy of helping others) is how creating a presentation helps force me to learn a new topic or a better way to do something I already do.

The presentation I used to give all the time was about doing health checks using Glenn Berry's DMV scripts, Ola Hallengren's maintenance solution, and how to make them work together to check client servers.  The presentation I frequently give now is about Extended Events (XEvents) - I told myself about a year ago I needed to learn more about XEvents and Powershell, and I knew (for me) creating presentations would help.  I submitted an Intro to Extended Events 100-level session to a couple of SQLSaturdays, and when it was chosen I was suddenly very motivated to learn more about the topic to produce the content!

The first presentation - the health check talk - highlights an aspect of how I work with SQL Server, and it is shared by many others.  There are lots of free tools and shared knowledge out there about SQL Server, and you don't need to recreate the wheel nine times out of ten - a little Google-Fu or #sqlhelp will usually give you an answer or at least give you raw material you can mold into an answer.  Just because you are working with someone else's raw material does not mean you can't write or speak about the situation - just make 100% sure you give credit (and notation) whenever it is due.  If you read my blog or see me speak with any regularity you will see that a lot of what I write about is *how* I use someone else's scripts, whether modified or "straight from the box," as opposed to creating completely new scripts, but I also reference those authors' blog posts, forum answers, and Twitter feeds.

You don't have to create a completely new way to do something for it to be useful to share!

The opportunity to network - I list this third, but it can often be the most important. While it can be very useful to interact with the #sqlfamily online, there is no substitute for being able to sit down across the table from an expert on a topic you need help with and getting hundreds of dollars worth of free consulting while forming friendships that continue on after the event ends. There is nothing like the #sqlfamily, and it is fun to watch people from other areas as their eyes open wide. I have seen an Oracle DBA visit a SQLSaturday and watched their jaw hit the floor when they heard a speaker sharing their expert knowledge for free; I have had a manager ask me "How did you get the answer so fast?" and I replied "I got on Twitter and asked the person who wrote the feature and he told me"; I have watched people who have never met in person raise thousands of dollars for charitable causes suggested by #sqlfamily members.

Another way to get involved and network is to volunteer at a SQLSaturday. I mentioned the speakers are volunteers, but so are all of the others involved - coordinators, room monitors, check-in staff, and the rest.

This networking is invaluable to your career in other ways - two of my last three jobs came from a member of the #sqlfamily notifying me directly of an opening or making a key introduction to help me start the process.

SQLSaturdays are awesome!!!

Don't Use Delimiters In Your Object Names!

It started with a failed backup status check:
Msg 50000, Level 16, State 1, Line 73
** Backups listed in this file are missing / out of date **
db                status   recovery model   last full backup   Full BU hours old
[FinanceDB]       ONLINE   SIMPLE           NULL            NULL
Check out the database name - [FinanceDB].  At first my response was something like:

I went and looked on the server, and sure enough in Management Studio I saw one database named "FinanceDB" and a database named "[FinanceDB]".

This was on a SQL 2008R2 instance, but as a test I created a database named [test] on my local SQL 2014 instance and sure enough it worked!

The source of the problem at the client was the LiteSpeed maintenance plan.  Even though the backup task was set to backup all user databases, it wasn't picking up the square-bracketed database.

On my test system I set up a "regular" SQL Server maintenance plan, and it did not have the same problem - the backup job did pick up the square-bracketed [test] database.

I next ran the Ola Hallengren DatabaseBackup that I already had installed, and it also picked up the rogue database, creating a backup named TestServer_[test]_FULL_20160404_143721.bak.

--

Since I don't have LiteSpeed on my test system I created a quick test on the client's test system  (SQL 2008R2) - I created a new database named [test] and created a new LiteSpeed maintenance plan to back up *just* that database - the job failed with this error:
Executed as user: Domain\User. Microsoft (R) SQL Server Execute Package Utility  Version 11.0.5058.0 for 64-bit  Copyright (C) Microsoft Corporation. All rights reserved.    Started:  3:47:35 PM  Progress: 2016-04-04 15:47:35.81     Source: {D8AD0CC9-710A-4C59-A8E6-1B9228562535}      Executing query "DECLARE @Guid UNIQUEIDENTIFIER EXECUTE msdb..sp_ma...".: 100% complete  End Progress  Error: 2016-04-04 15:47:37.01     Code: 0xC002F210     Source: Fast Compression Backup 1 Execute SQL Task     Description: Executing the query "execute master..xp_slssqlmaint N'-BkUpMedia DISK -..." failed with the following error: "LiteSpeed® for SQL Server® 8.1.0.644  © 2015 Dell Inc.    Database 'test' is invalid: not found   Msg 62401, Level 16, State 1, Line 0: Database 'test' is invalid: not found    ".   End Error  DTExec: The package execution returned DTSER_FAILURE (1).  Started:  3:47:35 PM  Finished: 3:47:37 PM  Elapsed:  1.809 seconds.  The package execution failed.  The step failed.
That was strange - remember the original backup job succeeded (although without backing up the square-bracketed database).  I thought about it and realized that in the original situation there were two databases - one with brackets and one without.  Sure enough, when I created a database just named "test" and included it in my new maintenance plan, the job succeeded.

How did it succeed?  It backed up "test" TWICE:


The LiteSpeed Fast Compression backup backed up the "test" database twice (Fast Compression runs a FULL every so often and DIFF's in-between - this is why two backups in a row results in a FULL and a DIFF).  I ran the job again and saw the same thing:


I verified by creating an object in [test] and running the backups again and running a restore - what I was backing up was truly "test" over and over.

--

This is not about bashing Dell/Quest LiteSpeed - it is about pointing out a shortcoming in how SQL Server allows us to name objects and a cautionary tale on that naming.

Although it worked in most of the test scenarios above, you can see how the square-bracketed database name failed under one maintenance scenario - and not only did it fail, it failed in a particularly nefarious way because the backup job *didn't* fail - it just didn't pick up the database.

--

I re-created my [test] database on my local instance and wanted to see what the syntax would look like for a DROP - I went through the SCRIPT action in Management Studio and ended up with this:

USE [master]
GO

/****** Object:  Database [[test]]    Script Date: 4/4/2016 2:00:55 PM ******/
DROP DATABASE [[test]]]
GO

Note the *three* square brackets at the end of the DROP statement – two open brackets but three close brackets.
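This is the standard escaping rule for delimited identifiers at work - a closing bracket inside the name has to be doubled, while the opening bracket does not.  You can see the same behavior from the QUOTENAME() function:
SELECT QUOTENAME('[test]')  -- returns [[test]]]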

A syntax checker parse showed the statement was OK – when I removed one of the closing brackets, so that I had two open and two close, the parse showed me this:

Msg 105, Level 15, State 1, Line 5
Unclosed quotation mark after the character string '[test]
GO

'.
Msg 102, Level 15, State 1, Line 5
Incorrect syntax near '[test]
GO

…and an execution resulted in the same errors.  When I re-added the third closing square bracket, my DROP statement succeeded.

A script of a simple BACKUP statement resulted in the same thing:

BACKUP DATABASE [[test]]]
TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\Backup\[test].bak' WITH NOFORMAT, NOINIT, NAME = N'[test]-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO


--

This third bracket may be the source of the issue in LiteSpeed and it may not - but it shows the importance of handling object names properly and how touchy the parser can be when you don't make it happy.

--

At the end of the day - just don't do it. There is always going to be some other name that is just as useful as any name that includes a delimiter, and you don't need to handle the bizarre and inconsistent scenarios that will arise.



Hope this helps!


How Long Did That Job Run?

When you use the msdb tables to try to gather information on your SQL Server Agent jobs, inevitably you will end up in a situation like this:
SELECT TOP 10 
sj.name as JobName
,sjh.run_date as RunDate
,sjh.run_time as RunTime
,sjh.run_duration as Duration
from msdb.dbo.sysjobhistory sjh
join msdb.dbo.sysjobs sj
on sjh.job_id = sj.job_id
where step_id=0
JobName                                     RunDate   RunTime  Duration
ServerA_RESTORE_FROM_PROD_V2                20140904  222821   4131
ServerA_RESTORE_FROM_PROD_V2                20140904  231032   3055
ServerA_RESTORE_FROM_PROD_V2                20140904  231130   5003
Bloomberg_Pkg                               20140904  231233   205
ServerA_RESTORE_FROM_PROD_V2                20140904  231350   2343
DatabaseIntegrityCheck - SYSTEM_DATABASES   20140923  1500     433
DatabaseBackup - SYSTEM_DATABASES - FULL    20140923  10000    23
syspolicy_purge_history                     20140923  20000    36
DatabaseBackup - USER_DATABASES - FULL      20140923  40000    2333
DatabaseIntegrityCheck - SYSTEM_DATABASES   20140924  1500     113
So....when did "DatabaseIntegrityCheck - SYSTEM_DATABASES" start? At 1500 - is that 3pm?  You may be able to hash out that this translates to 12:15am local time...but what if you want to perform datetime-style math on the RunDate/RunTime?  Sure you can do multiple leaps to say (RunDate>X and RunDate<=Y) AND (RunTime>A and RunTime<=B), but you then need to explicitly format your X, Y, A, and B in the appropriate integer-style format.  Wouldn't it be easier to just be able to do datetime math?

The next part is even worse - quick - how long did the first instance of "ServerA_RESTORE_FROM_PROD_V2" run?

4,131 somethings (seconds, ms, etc), right?

In truth, the job ran forty-one minutes and thirty-one seconds.

Yes, yes I am.  No precision beyond seconds, and no quick way to do math.  If you want to figure out how much longer instance 1 of the job ran than instance 2, you have to do some serious goofiness.

Here is the fix for both of these problems!

--

The first item is easy although not directly obvious without a little help from that old standby:

(Yes I am that old...)

Microsoft included a system scalar function msdb.dbo.agent_datetime(run_date, run_time) that turns the combination of run_date and run_time into a datetime:
SELECT TOP 10
sj.name as JobName
,sjh.run_date as RunDate
,sjh.run_time as RunTime
,msdb.dbo.agent_datetime(sjh.run_date,sjh.run_time) as RunDateTime
from msdb.dbo.sysjobhistory sjh
join msdb.dbo.sysjobs sj
on sjh.job_id = sj.job_id
where step_id=0
JobName                                     RunDate   RunTime  RunDateTime
ServerA_RESTORE_FROM_PROD_V2                20140904  222821   09/04/2014 22:28:21
ServerA_RESTORE_FROM_PROD_V2                20140904  231032   09/04/2014 23:10:32
ServerA_RESTORE_FROM_PROD_V2                20140904  231130   09/04/2014 23:11:30
Bloomberg_Pkg                               20140904  231233   09/04/2014 23:12:33
ServerA_RESTORE_FROM_PROD_V2                20140904  231350   09/04/2014 23:13:50
DatabaseIntegrityCheck - SYSTEM_DATABASES   20140923  1500     09/23/2014 00:15:00
DatabaseBackup - SYSTEM_DATABASES - FULL    20140923  10000    09/23/2014 01:00:00
syspolicy_purge_history                     20140923  20000    09/23/2014 02:00:00
DatabaseBackup - USER_DATABASES - FULL      20140923  40000    09/23/2014 04:00:00
DatabaseIntegrityCheck - SYSTEM_DATABASES   20140924  1500     09/24/2014 00:15:00
I agree with the author of this post who calls the agent_datetime() function "undocumented" since there wasn't a record of it in Books Online - I checked around and couldn't find any standard documentation of it on MSDN or TechNet.

Now that we have a datetime, we can perform all of the regular datetime manipulation functions such as DATEDIFF() on the values.
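For example, here is a minimal sketch that uses the agent_datetime() output with DATEADD() to pull only the job runs from the last 24 hours:
SELECT sj.name as JobName
,msdb.dbo.agent_datetime(sjh.run_date, sjh.run_time) as RunDateTime
from msdb.dbo.sysjobhistory sjh
join msdb.dbo.sysjobs sj
on sjh.job_id = sj.job_id
where sjh.step_id=0
and msdb.dbo.agent_datetime(sjh.run_date, sjh.run_time) > DATEADD(HOUR, -24, GETDATE())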

--

The second part is a little more obnoxious - there isn't a quick Microsoft function (documented or otherwise) to make the run_duration into a process-ready value.

To hash the run_duration into a useful value I wrote a CASE statement some time ago:
SELECT TOP 10
sj.name as JobName
,CASE len(sjh.run_duration)
WHEN 1 THEN cast('00:00:0'
+ cast(sjh.run_duration as char) as char (8))
WHEN 2 THEN cast('00:00:'
+ cast(sjh.run_duration as char) as char (8))
WHEN 3 THEN cast('00:0'
+ Left(right(sjh.run_duration,3),1)
+':' + right(sjh.run_duration,2) as char (8))
WHEN 4 THEN cast('00:'
+ Left(right(sjh.run_duration,4),2)
+':' + right(sjh.run_duration,2) as char (8))
WHEN 5 THEN cast('0'
+ Left(right(sjh.run_duration,5),1)
+':' + Left(right(sjh.run_duration,4),2)
+':' + right(sjh.run_duration,2) as char (8))
WHEN 6 THEN cast(Left(right(sjh.run_duration,6),2)
+':' + Left(right(sjh.run_duration,4),2)
+':' + right(sjh.run_duration,2) as char (8))
END as 'Duration'

,run_duration
from msdb.dbo.sysjobhistory sjh
join msdb.dbo.sysjobs sj
on sjh.job_id = sj.job_id
where step_id=0
JobName                                     Duration  run_duration
ServerA_RESTORE_FROM_PROD_V2                00:41:31  4131
ServerA_RESTORE_FROM_PROD_V2                00:00:03  3
ServerA_RESTORE_FROM_PROD_V2                00:00:05  5
Bloomberg_Pkg                               00:00:02  2
ServerA_RESTORE_FROM_PROD_V2                00:00:02  2
DatabaseIntegrityCheck - SYSTEM_DATABASES   00:00:43  43
DatabaseBackup - SYSTEM_DATABASES - FULL    00:00:02  2
syspolicy_purge_history                     00:00:36  36
DatabaseBackup - USER_DATABASES - FULL      00:00:02  2
DatabaseIntegrityCheck - SYSTEM_DATABASES   00:00:11  11

Interestingly, I recently discovered a more elegant solution (while looking for an answer to a different problem) that utilizes the STUFF() function.  Look at this forum post on SQLServerCentral.  The fourth item down, from "Mudluck", is almost exactly the same as my CASE statement above, but look at the reply below it from "JG-324908":
SELECT stuff(stuff(replace(str(run_duration,6,0),' ','0'),3,0,':'),6,0,':') FROM sysJobHist
Sure enough, running this STUFF() code results in the same output as the much more complicated CASE statement above - str() pads run_duration out to six characters, replace() turns the padding spaces into zeros, and the two stuff() calls insert the colons.  JG notes that he/she found it in a forum post somewhere and I dug around a little to see if I could find the original author, without any luck :(

--

As with many other bits of code, I keep these things in a big NotePad file of useful code snippets (some people like OneNote instead, and there are dozens of choices - use the one you normally prefer) so that I can quickly reference them when needed.  I always note the website or forum post where I found the code if I didn't create it myself, as well as the original author.  This lets me give credit where credit is due when showing the code to others, but it also gives me an idea of someone to approach in the future if I have a question about a similar topic.

It was especially interesting to find the STUFF() code because STUFF() isn't a function I often use, and in this case it was perfect.

--

Hope this helps!



Where Is My Primary Replica Again?

We have many clients with multi-node Availability Groups - that is, AGs with more than two replicas.  One of the problems I have always had with Availability Group management via the GUI (admit it, all you non-PowerShell geeks - you use the GUI sometimes) is the fact that most of the work needs to be done from the primary replica.  You can connect to the Availability Group manager on a secondary replica, but you can only see limited data about that particular secondary replica and can't see much about the other replicas - including *which* replica is the current primary!

To perform management you almost always need to connect to the primary replica, but how can I figure out which one is primary without just connecting to the instances one by one until I get lucky?

Enter the T-SQL:
SELECT
AG.name AS AvailabilityGroupName
, HAGS.primary_replica AS PrimaryReplicaName
, HARS.role_desc as LocalReplicaRoleDesc
, DRCS.database_name AS DatabaseName
, HDRS.synchronization_state_desc as SynchronizationStateDesc
, HDRS.is_suspended AS IsSuspended
, DRCS.is_database_joined AS IsJoined
FROM master.sys.availability_groups AS AG
LEFT OUTER JOIN master.sys.dm_hadr_availability_group_states as HAGS
ON AG.group_id = HAGS.group_id
INNER JOIN master.sys.availability_replicas AS AR
ON AG.group_id = AR.group_id
INNER JOIN master.sys.dm_hadr_availability_replica_states AS HARS
ON AR.replica_id = HARS.replica_id AND HARS.is_local = 1
INNER JOIN master.sys.dm_hadr_database_replica_cluster_states AS DRCS
ON HARS.replica_id = DRCS.replica_id
LEFT OUTER JOIN master.sys.dm_hadr_database_replica_states AS HDRS
ON DRCS.replica_id = HDRS.replica_id
AND DRCS.group_database_id = HDRS.group_database_id
ORDER BY AG.name, DRCS.database_name
This query can be run on any of the replicas, and it will return information about the Availability Groups and their member databases, *including* the name of the primary replica instance!
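If all you need is the role of the replica you are currently connected to, a trimmed-down sketch of the same joins works as well:
SELECT
AG.name AS AvailabilityGroupName
, HARS.role_desc as LocalReplicaRoleDesc
FROM master.sys.availability_groups AS AG
INNER JOIN master.sys.dm_hadr_availability_replica_states AS HARS
ON AG.group_id = HARS.group_id AND HARS.is_local = 1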

http://vignette4.wikia.nocookie.net/randycunningham9thgradeninja/images/9/97/YES_cat.jpg/revision/latest?cb=20150330230809
Hope this helps!

SQL 2016 - The Time Has Come!

Microsoft dropped it on us this morning:


https://media.makeameme.org/created/yes-finally.jpg

I read the article and was pleasantly surprised, although after some consideration it was more like this:

https://i.imgflip.com/4hu4p.jpg
If you haven't already started working with SQL Server 2016 this is definitely the time.  Here are some suggestions:

  • Get a copy of SQL Server Developer Edition - Microsoft told us at the end of March that Developer Edition is now free for SQL Server 2014, and that it will be continue to be free when it is released for SQL Server 2016 later this year.  As always Developer Edition is a fully-featured edition of the SQL Server product (with all of the same features as Enterprise Edition) that is intended for use in development and test environments and "and not for production environments or for use with production data" (a curve ball called out in the linked release article above that many people don't consider).  Even though SQL 2016 isn't out yet, prepare yourself by getting a free copy of 2014 Developer Edition now so that you are ready to move to 2016 when it comes out.
  • Attend SQL Saturday sessions - I have written before about how amazing SQL Saturdays are and the many reasons for taking part, and now the impending release of the new SQL Server 2016 product makes it even more important than usual!  I don't have any talks in my repertoire about the new version, but both of the SQL Saturdays I am scheduled to speak at in early June have multiple sessions about SQL Server 2016 (as do most other SQL Saturdays - check the full SQL Saturday schedule here):
  • Read Blogs and Whitepapers online
    • Microsoft has an array of whitepapers and demo videos for SQL Server 2016 available on their official 2016 site.  
    • Several major tools vendors have plenty of blogs about SQL Server 2016, especially SQL Sentry and Solarwinds.
    • The community has a wide array of blogs related to SQL Server 2016 - a great place to start is on SQLServerCentral
--

Get in front of this now - we can't all be like this:
http://cdn1-www.dogtime.com/assets/uploads/gallery/funny-dog-memes-part-4/funny-dog-meme-learning-tricks-doesnt-matter-when-youre-really-really-ridiculously-good-looking.jpg
--

Hope this helps!




Upcoming SQL Saturdays - June 2016

I am speaking at two different SQL Saturdays in June!

--



First up on June 4th is SQL Saturday #517 Philadelphia (#SQLSAT517) - I am speaking at 2:45pm on "Getting Started with Extended Events." This is quickly becoming my favorite of the talks that I give because I learn something new every time I present it:
Few subjects in Microsoft SQL Server inspire the same amount of Fear, Uncertainty, and Doubt (FUD) as Extended Events.  Many DBA's continue to use Profiler and SQL Trace even though they have been deprecated for years.  Why is this? 
Extended Events started out in SQL Server 2008 with no user interface and only a few voices in the community documenting the features as they found them.  Since then it has blossomed into a full feature of SQL Server and an amazingly low-impact replacement for Profiler and Trace. 
Come learn how to get started - the basics of sessions, events, actions, targets, packages, and more.  We will look at some base scenarios where Extended Events can be very useful as well as considering a few gotchas along the way.  You may never go back to Profiler again!
Register for SQL Saturday Philadelphia now (before it goes to wait-list!) at https://www.sqlsaturday.com/517/registernow.aspx!

I am also excited about this event because I will be attending Karen Lopez's (blog/@Datachick) Friday pre-con "Model-Driven Database Development: Physical to Implementation" - Karen is a very engaging and entertaining speaker who really knows her stuff.  There are still tickets available at https://www.eventbrite.com/e/model-driven-database-development-physical-to-implementation-karen-lopez-tickets-24628652964

 --


The following Saturday June 11th is SQL Saturday #523 - Iowa City (#SQLSatIowa) - I am up at 10am with "Does it Hurt When I Do This? Performing a SQL Server Health Check":
How often do you review your SQL Servers for basic security, maintenance, and performance issues?  Many of the servers I "inherit" as a managed services provider have quite a few gaping holes. It is not unusual to find databases that are never backed up, servers with constant login failures (is it an attack or a bad connection string?), and servers that need more RAM/CPU/etc. (or sometimes that even have too much!)  
Come learn how to use freely available tools from multiple layers of the SQL Server stack to check your servers for basic issues like missing backups and CheckDB as well as for more advanced issues like page life expectancy problems and improper indexing. If you are responsible in any way for a Microsoft SQL Server (DBA, Windows Admin, even a Developer) you will see value in this session!
This talk covers a wide array of tools to check out your servers, including the awesome DMV scripts from Glenn Berry (blog/@GlennAlanBerry) of SQLskills.

Register for SQL Saturday Iowa City now at https://www.sqlsaturday.com/523/registernow.aspx!

I am not headed to any pre-cons in Iowa City but there is an impressive line-up from many top speakers including my buddy David Klee (blog/@kleegeek) - he is covering "SQL Server Infrastructure Tuning for Availability and Performance" - tickets for his (and the other) pre-cons are available at http://www.eventbrite.com/e/sqlsaturday-iowa-city-2016-pre-conference-sessions-registration-24792389705?ref=ebtn

--

I have written multiple times about how amazing SQL Saturdays are as both training and networking opportunities.  If you are in the area either of these weekends come see me and check it out!

--

Hope this helps!



Catching Up on The #SQLPASS Virtual Chapters

(Didn't realize until I was posting that this was my 100th post...whew!)

--

A few months ago we purchased a new "smart" Blu-Ray player and one of its intriguing features was its built-in WiFi with pre-installed Internet apps like Amazon Video, YouTube, Vudu, etc. (Previously we had a 6-year-old "dumb" DVD player that was barely smart enough to eject its tray when you pushed the button).

https://i.ytimg.com/vi/4pvHKG4VMDs/maxresdefault.jpg
Interestingly, the limitations of these pre-installed apps became one of my frustrations as we started to use the player more and more.  I wanted to be able to view Pluralsight videos and TED Talks, and my wife wanted to watch Craftsy videos, and lacking customized applications we couldn't do so.

http://www.famousmarketer.com/images/but_wait.jpg
After some searching on my laptop I found that TED Talks are available on YouTube!  Some more digging and I found a few PASS videos as well - and more intensive searching found that almost every single PASS Virtual Chapter has its own YouTube channel where they upload recordings of their presentations!

http://www.sqlpass.org/images/chapterlogos/logo_pass_vc.png

Below I have lifted the list of PASS Virtual Chapters (current as of May 2016) and the descriptions provided by PASS.  I have gone through YouTube and found the link for each chapter's YouTube channel and listed them below.

(I apologize to any international readers but I have omitted the non-English speaking chapters as I don't have the language skills to properly search for their content.)

Chapter Name | Short Description
Application Development | Training and information for Application Developers
Big Data | Discuss Big Data technologies & Hadoop-based systems
Business Analytics | The PASS Business Analytics Chapter provides virtual train...
Business Intelligence | Connecting BI Professionals globally
Cloud | Enabling cloud knowledge sharing
Data Architecture | Focusing on your data architecture concerns
Data Science | Advanced Analytics, Machine Learning, Data Mining
Database Administration | Forum for discussion on DBA topics
DBA Fundamentals | Rock solid foundations!
Excel Business Intelligence | Helping users achieve excellence in Excel
Healthcare | Connecting SQL Server Pros in the healthcare industry
High Availability and Disaster Recovery | Reduce the risk and impact of system faults and outages
Hybrid | Cover all technologies and solutions that may be integrate... (YouTube recordings: N/A)
In Memory VC | Take advantage of the new In-Memory features
Performance | Discuss SQL Server performance-related content
PowerShell | Learn and share best practices around PowerShell
Professional Development | Join the conversation and share resources
Saturday Night SQL | Forum for discussion about BI and Databases
Security | Guidance and education on SQL Server security topics
Virtualization | Improving management of SQL Server in virtual environments
Women in Technology | Forum for discussion of issues pertinent to WIT

What I like to do is to "subscribe" to each of the relevant channels and that makes it easy for me to receive content on our Blu-Ray player (as well as on the YouTube app on my Android).

The Virtual Chapter websites listed above do have archives of the recordings of their meetings as well, but they aren't centralized, and they are only available from a regular web browser as opposed to a custom app on your TV or phone.

There are also several other useful YouTube channels I recommend:

--

Can't beat free info - it's free, and it's info!

http://blogs.opentext.com/wp-content/uploads/2016.01-Iheartfreestuff-new-660px.jpg

--

Hope this helps!

How Do I Change My Domain Password on Windows Server 2012?

As a DBA I spend a lot of time in RDP sessions, both to SQL Servers and to "jump"/pass-through servers on client domains.  Most clients (unfortunately not all of them - security is important, people!) have some variant of password expiration in their domains requiring regular password changes every 30/90/180 days.

Prior to Windows Server 2012, this was relatively straightforward - in the Start Menu select "Windows Security" (sometimes hidden under Administrative Tools>>Windows Security):


...which then gives you a friendly menu where you can choose to "Change A Password":



Easy right?

** Often unknown tip - from the Change Password Prompt:



You can edit the top line to any account to which you have access - your accounts in other domains (assuming there is access to domain controllers in the other domain) or even other accounts altogether!  Even though I am logged into the above server as DOMAIN\agalbraith, I could modify the line to change the password for SOMEOTHERDOMAIN\agalbraith or DOMAIN\SQLServiceAccount.

--

The catch to all of this is that in Windows Server 2012, the easy method...went away.  How could they do that???

https://themuseletter.files.wordpress.com/2014/11/61225_bill-gates-shrug.jpg

The Windows Security box is Dead....Long Live the Windows Security box!

--

Here are three different ways to get to the same screen in Windows Server 2012.  These three methods work in Windows Server 2008 and 2008 R2 as well, so once you get used to one of them you can use it on your old servers as well.

http://cache.gawker.com/assets/images/gizmodo/2009/08/old_pc.jpg
Maybe not *that* old....

--

The first method is the one I have known for a long time, and is very simple.  Instead of CTRL-ALT-DEL, use CTRL-ALT-END.  Most of the time, this takes you to the same prompt screen as we saw above:


--

The second method is a little more involved, but useful - and I have been in at least three situations where I *had* to use it - once when there was a keyboard mapping error in the RDP session (something I have only ever experienced once) and twice where I was several layers deep in RDP (RDP to RDP to RDP).  I found this method at http://www.tomshardware.com/answers/id-1629393/change-password-ctrl-alt-del-rdp-rdp.html.

From a command prompt, type osk to bring up the On-Screen Keyboard (something I didn't know even existed at the time):


With the OSK up, press CTRL and ALT on your actual physical keyboard, and then click DEL on the OSK (CTRL-ALT-DEL all on the OSK just functions like a regular CTRL-ALT-DEL):



BOOYAH!

--

The third method was recently offered up by a member of the team here at Ntirety, Mike Skaff.  It is one more example of #YouCanDoAnythingWithPowerShell.

I don't know where Mike found it, but I was able to find references in a couple of places, including http://wiki.mundy.co/Change_password_on_Remote_Desktop_Server_2012

From a PowerShell prompt, enter the following:
(New-Object -COM Shell.Application).WindowsSecurity()

Sure enough:


Like the OSK method above, this PowerShell method works from RDP in RDP in RDP as well - and it's PowerShell!

--

Hope this helps!


Finding File Growths with Extended Events

I give a SQLSaturday session called "Introduction to Extended Events" and at a recent presentation I had a pair of attendees ask me afterwards for a more "real world" example.  I had a basic demo in my session showing setting up a session with a predicate filter to pull particular events for a single database, but it didn't really show using XEvents to solve a business problem.

I pondered for a while and decided to try to recreate something I normally do with the default trace.  The obvious example to me was pulling data on file growths.

What?  You didn't know you could pull information about file growths from the default trace?

http://canitbesaturdaynow.com/images/fpics/1137/kitty_-(8)__large.jpg
I discovered this some time ago when I needed to find out why a database was blowing up at 1am without needing to be awake and online at 1am...at first I was going to set up a manual trace, but then I found a pair of blog posts that helped me out.

The first was "Did we have recent autogrows?" from Tibor Karaszi (blog/@TiborKaraszi), which clued me in to the fact that autogrow information is even in the default trace.  The catch to Tibor's query is that it relied on a specific trc file - the default trace is a rollover structure, so at any given time there are usually multiple sequential trc files, and using his query meant I needed to run it, edit the file name, run it again, etc.

The next piece was the Simple Talk article "Collecting the Information in the Default Trace" from Feodor Georgiev (blog/@FeodorGeorgiev) that showed me how to dynamically build the log file path *and* the fact that I could simply reference log.trc (instead of log_123.trc) and the function would concatenate all of the trc files into a single resultset.

I have said it before and I will say it again - when you need help, odds are someone else has already solved the same (or a similar) problem - look around and you will probably find it!  When you save or reference someone else's work, always make sure to give credit (as I have done above) - even if you make modifications to code or ideas, always acknowledge the original material.

http://www.funcatpictures.com/wp-content/uploads/2013/10/funny-cat-pics-internet-high-five.jpg
I took the two queries I found and combined them into a single modified query that used Tibor's base query to pull autogrowths but Feodor's concept of the custom built log path:
/*
Default Trace AutoGrow Query
Modified from Tibor Karaszi
http://sqlblog.com/blogs/tibor_karaszi/archive/2008/06/19/did-we-have-recent-autogrow.aspx
and Feodor Georgiev
https://www.simple-talk.com/sql/database-administration/collecting-the-information-in-the-default-trace/
*/

DECLARE @df bit
SELECT @df = is_default FROM sys.traces WHERE id = 1
IF @df = 0 OR @df IS NULL
BEGIN
  RAISERROR('No default trace running!', 16, 1)
  RETURN
END
SELECT te.name as EventName
, t.DatabaseName
, t.FileName
, t.StartTime
, t.ApplicationName
, HostName
, LoginName
, Duration
, TextData
FROM fn_trace_gettable(
(SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)),256)) + 'log.trc'
FROM    sys.traces
WHERE   is_default = 1
), DEFAULT) AS t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
WHERE 1=1
and te.name LIKE '%Auto Grow'
--and DatabaseName='tempdb'
--and StartTime>'05/27/2014'
ORDER BY StartTime
--
SELECT TOP 1 'Oldest StartTime' as Label, t.StartTime
FROM fn_trace_gettable(
(SELECT REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)),256)) + 'log.trc'
FROM    sys.traces
WHERE   is_default = 1
), DEFAULT) AS t
INNER JOIN sys.trace_events AS te
ON t.EventClass = te.trace_event_id
ORDER BY StartTime   
The top query returns all rows for an event like '%Auto Grow' (to capture DATA and LOG autogrowths) and as you can see you can also filter on a specific database, date range, etc.

The second query simply uses the same base table set to return the datetime of the oldest row in the current default trace.  Since the default trace is a rolling first in first out (FIFO) arrangement, there is no way to definitively say how far back the trace goes other than to actually look in the trace like this.  If you have a lot of trace-relevant events or a lot of service restarts, your default trace may only have a few days (or even hours) of data and not have what you need - for example if you are trying to troubleshoot a problem from last Saturday and by Monday it has already aged out of the trace, then this isn't useful.
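To get a feel for how much history your particular default trace can hold, you can also check its configuration directly - a quick sketch against sys.traces:

/* Default trace configuration - size per trc file, rollover file count, and current contents */
SELECT path /* current trc file */
, max_size /* max size per trc file in MB */
, max_files /* number of rollover files kept */
, start_time
, last_event_time
, event_count
FROM sys.traces
WHERE is_default = 1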

The resultset looks like this:



Pretty slick, right?

http://i.imgur.com/vIdCPPt.png
...but what does this have to do with Extended Events?

As I started out, I was looking for something slick I do with a trace that I could replicate in Extended Events, and this was a great candidate.

--

The catch, as I discovered, is that while file growths are captured in the default trace, they are *not* in the system health session...

http://www.wicproject.com/images/2010/05/hobbes_doh.jpg
This was slightly annoying, but it also gave me an excuse to create a demo for setting up an XEvents session :)

--

First, the session:
USE [master]
GO
 
SET NOCOUNT ON 
/* Create Extended Events Session */
IF EXISTS (SELECT 1 FROM master.sys.server_event_sessions WHERE name = 'DemoFileSize')
DROP EVENT SESSION [DemoFileSize] ON SERVER
GO
CREATE EVENT SESSION [DemoFileSize] ON SERVER
ADD EVENT sqlserver.database_file_size_change(SET collect_database_name=(1)
    ACTION(package0.collect_system_time,sqlos.task_time,
sqlserver.client_app_name,sqlserver.client_hostname,
sqlserver.client_pid,sqlserver.database_id,sqlserver.database_name,
sqlserver.server_instance_name,sqlserver.session_id,
sqlserver.sql_text,sqlserver.username)),
/* Note - no predicate/filter - will collect *all* DATA file size changes */
ADD EVENT sqlserver.databases_log_file_size_changed(
    ACTION(package0.collect_system_time,sqlos.task_time,
sqlserver.client_app_name,sqlserver.client_hostname,
sqlserver.client_pid,sqlserver.database_id,sqlserver.database_name,
sqlserver.server_instance_name,sqlserver.session_id,
sqlserver.sql_text,sqlserver.username))
/* Note - no predicate/filter - will collect *all* LOG file size changes */
ADD TARGET package0.event_file(SET filename=N'C:\temp\DemoFileSize.xel',
max_file_size=(500),max_rollover_files=(10))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,
MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,
MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=OFF,STARTUP_STATE=ON)
GO

ALTER EVENT SESSION [DemoFileSize] ON SERVER
STATE = START;
GO
This just creates a basic session, tracking two events, sqlserver.database_file_size_change and sqlserver.databases_log_file_size_changed, writing the output to a file target.

--

Next the database and table:
/* Create DemoGrowth database, dropping any existing DemoGrowth database */
IF EXISTS (SELECT database_id from sys.databases WHERE name = 'DemoGrowth')
BEGIN
 ALTER DATABASE [DemoGrowth] SET SINGLE_USER WITH ROLLBACK IMMEDIATE
 DROP DATABASE [DemoGrowth]
END
CREATE DATABASE [DemoGrowth]
 ON  PRIMARY
( NAME = N'DemoGrowth_Data'
 , FILENAME = 'C:\Temp\DemoGrowth.MDF'
 , SIZE = 4MB , MAXSIZE = UNLIMITED, FILEGROWTH = 2MB )
 /* Small increments for maximum autogrowths */
 LOG ON
( NAME = N'DemoGrowth_log'
 , FILENAME = 'C:\Temp\DemoGrowth.LDF'
 , SIZE = 2MB , MAXSIZE = 2048GB , FILEGROWTH = 2MB )

ALTER DATABASE [DemoGrowth] SET RECOVERY FULL
GO

BACKUP DATABASE [DemoGrowth]
TO DISK = 'C:\TEMP\DemoGrowth.bak'
WITH STATS = 5, INIT

USE [DemoGrowth]
GO
/* Create DemoGrow table, dropping any existing DemoGrow table */
IF OBJECT_ID('dbo.DemoGrow','U') is not null
DROP TABLE dbo.DemoGrow
CREATE TABLE dbo.DemoGrow
/* Purposely large datatypes for maximum needed autogrowths */
(
ID BIGINT IDENTITY(1,1)
,BIGNAME NCHAR(4000) DEFAULT 'BOB'
)
Setting the table with an identity and a character column with a default value makes the test inserts straightforward to code.  The database was created small and the table with large datatypes in order to maximize potential auto-growths.

--

The next step is to Watch Live Data for the session in order to see the autogrowths as they happen - note that the session as configured is set to write to an event_file target, so Watch Live Data is the easy way to watch events occur, albeit with some minor performance impact.



Next, let's insert some rows and see what happens:
/* Insert 6000 rows and return file sizes before and after inserts */
SELECT name, size*8.0/1024.0 as SizeMB from DemoGrowth.sys.sysfiles 
GO
INSERT INTO dbo.DemoGrow DEFAULT VALUES 
GO 6000 
SELECT count(ID) from dbo.DemoGrow (NOLOCK) 
SELECT name, size*8.0/1024.0 as SizeMB from DemoGrowth.sys.sysfiles
/* If after sizes <> before sizes, check Watch Live Data for rows */
The sysfiles query shows whether the data and log files have grown after the inserts - it is possible (although highly unlikely) that the 6000 inserts don't cause either file to grow - if this is the case (or if you just want to witness more traffic) you can re-run the code block multiple times.

--

Next we need to backup the LOG (remember that we are in FULL recovery) and manually shrink the database in order to see that the shrink also causes events:
BACKUP LOG DemoGrowth
TO DISK = 'C:\TEMP\DemoGrowth.trn'
WITH STATS = 5, INIT
/* Shrink files and return sizes before and after */
SELECT name, size*8.0/1024.0 as SizeMB from DemoGrowth.sys.sysfiles 
DBCC SHRINKFILE (N'DemoGrowth_Data')
GO
DBCC SHRINKFILE (N'DemoGrowth_Log')
GO
SELECT name, size*8.0/1024.0 as SizeMB from DemoGrowth.sys.sysfiles
/* Assuming shrink occurs, check Watch Live Data for rows */
/* If the LOG didn't shrink, re-run the LOG backup and the shrink again */
Just as a shrink logs events, so do manual growths:
/* Manually grow DATA file to see effect on session */
SELECT name, size*8.0/1024.0 as SizeMB from DemoGrowth.sys.sysfiles
/* Check Current File Size to set an appropriate size in ALTER grow statement */
ALTER DATABASE [DemoGrowth] MODIFY FILE ( NAME = N'DemoGrowth_Data', SIZE = 128MB )
GO
SELECT name, size*8.0/1024.0 as SizeMB from DemoGrowth.sys.sysfiles
/* Check Watch Live Data for rows */
--

So what is in the event_file (XEL file)?  There is a system function (sys.fn_xe_file_target_read_file) to return that data:
/* So let's look at the XEL file */
SELECT * FROM sys.fn_xe_file_target_read_file('C:\temp\DemoFileSize*.xel',NULL, NULL, NULL);  

BLEAH!

As always, someone has already come up with a better solution.  Google brought me to SQL Zealot and his query to shred the XML at https://sqlzealots.com/2015/04/01/auto-file-growth-track-the-growth-events-using-extended-events-in-sql-server/  - I modified the query slightly to return the data elements I wanted, including the timestamp:
SELECT
Case when file_type = 'Data file' Then 'Data File Grow' Else File_Type End AS [Event Name]
, database_name AS DatabaseName
, dateadd(minute, datediff(minute, sysutcdatetime(), sysdatetime()), timestamp1) as LocalTimeStamp
/* added the timestamp and in XE is UTC - this code converts it to local server time zone */
, file_names
, size_change_mb
, duration
, client_app_name AS Client_Application
, client_hostname
, session_id AS SessionID
, Is_Automatic
FROM
(
SELECT
(n.value ('(data[@name="size_change_kb"]/value)[1]', 'int')/1024.0) AS size_change_mb
, n.value('(@timestamp)[1]', 'datetime2') as timestamp1
, n.value ('(data[@name="database_name"]/value)[1]', 'nvarchar(50)') AS database_name
, n.value ('(data[@name="duration"]/value)[1]', 'int') AS duration
, n.value ('(data[@name="file_type"]/text)[1]','nvarchar(50)') AS file_type
, n.value ('(action[@name="client_app_name"]/value)[1]','nvarchar(50)') AS client_app_name
, n.value ('(action[@name="session_id"]/value)[1]','nvarchar(50)') AS session_id
, n.value ('(action[@name="client_hostname"]/value)[1]','nvarchar(50)') AS Client_HostName
, n.value ('(data[@name="file_name"]/value)[1]','nvarchar(50)') AS file_names
, n.value ('(data[@name="is_automatic"]/value)[1]','nvarchar(50)') AS Is_Automatic
FROM
(
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(
N'C:\temp\DemoFileSize*.xel'
, NULL
, NULL
, NULL
)
) AS Event_Data_Table
CROSS APPLY event_data.nodes('event') AS q(n)) xyz
ORDER BY timestamp1 desc
/* Much Better! */

--

So as you can see, a basic XEvents session provides information comparable to the default trace.

But....XEvents is supposed to be better right?

I neglected to tell you one thing...the session above collects the sql_text action for each event - look what's in it:
INSERT INTO dbo.DemoGrow DEFAULT VALUES
http://s.quickmeme.com/img/15/151d67b9a73228d44b1bfabea0d012b54b9cd2821a25bf4b4be1bad10c41a95d.jpg
Pretty slick, huh?  The element shows you *what query* caused the file size change - something the default trace does not.  If you go back to the default trace, you will see there is a textData field, but that field is NULL for all of the operations in question (it is used for a few other types of events such as DBCC events, but not for the file growth events).

Also you may have noticed that our XEvents are called sqlserver.database_file_size_change and sqlserver.databases_log_file_size_changed, and they return data for growths *and* shrinks.  The default trace only returns data on file growths directly (although you could scrape the "DBCC Events" looking for DBCC SHRINKDATABASE and DBCC SHRINKFILE, there are a lot of other DBCC events to search through).
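As a quick sanity check that the session really is catching both event types (growths and shrinks, DATA and LOG), you can count the captured events by name straight out of the XEL file - a small sketch:

/* How many of each event type has the session captured so far? */
SELECT object_name AS EventName, COUNT(*) AS EventCount
FROM sys.fn_xe_file_target_read_file('C:\temp\DemoFileSize*.xel', NULL, NULL, NULL)
GROUP BY object_name
ORDER BY EventCount DESC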

--

Extended Events are awesome, and with a little work can give you everything that trace gives you and so much more.

Hope this helps!



Thank You SQL Saturday Sioux Falls!

I just finished giving my SQL Saturday presentation on "Getting Started with Extended Events" at SQLSaturday Sioux Falls.  The audience was great, with multiple questions and follow-ups, so it was awesome!

The sessions have been informative and the event team has done a wonderful job with all of the logistics, so the event is awesome - whether you are here now or not you should definitely put it on your calendar for next July!

My Powerpoint slides and demo scripts are now up on the SQLSaturday web page at http://www.sqlsaturday.com/539/Sessions/Details.aspx?sid=49228 - thanks!


#SQLSpeaking this Fall!

I have some great speaking dates coming up this Fall - here is the list:

--









First - the big one.  I am honored to be chosen for the first time to speak at the PASS Summit in Seattle.  The only negative is that I am not on until Friday morning, so I will get to spend the whole week sweating it!

I am giving my health check session "Does it Hurt When I Do This? Performing a SQL Server Health Check" - http://www.sqlpass.org/summit/2016/Sessions/Details.aspx?sid=47833

IMPORTANT - the final price increase for registration is this Monday September 19th - register by midnight Sunday to beat the $200 increase!  http://www.sqlpass.org/summit/2016/RegisterNow.aspx

--












Multiple SQL Saturdays on the agenda as well:

09/24 - SQL Saturday #548 Kansas City (#SQLSATKC)  http://www.sqlsaturday.com/548/eventhome.aspx

10/01 - SQL Saturday #557 Minnesota (#SQLSATMN)  http://www.sqlsaturday.com/557/eventhome.aspx

At both of these SQL Saturdays I will be making final tune-ups to my health check session - I have given it multiple times in the past but will be walking through the "final" version before Summit.

11/19 - SQL Saturday #552 Lincoln, NE (#SQLSAT552) http://www.sqlsaturday.com/552/eventhome.aspx

The formal speaker list/schedule hasn't been released yet, but I hope to be giving either my XEvents or health check session.

--

I look forward to seeing you at any or all of these events!




YouTubing Down the SQL Server River

I recently stumbled on a YouTube channel from one of my favorite SQLPeople - she is sharp (and an MCM) and one of the nicest people out there - Kendra Little (blog/@Kendra_Little).  Since divesting herself from Brent Ozar Unlimited over a year ago Kendra has been on a "do-it-myself sabbatical to learn, write, teach, and see a bit of the world." During that time she has produced a *lot* of SQL Server content.

I read her blog regularly but somehow never saw that she had a YouTube channel.



The channel hosts 10-30 minute performance tuning videos as well as Kendra's weekly "Dear SQL DBA" podcast.  As I said Kendra really knows her stuff, so definitely check this out!

--

I have written in the past about the PASS virtual chapter YouTube feeds, but this made me go looking for who else is out there in YouTube-land, and I found quite a bit!

--

My company Ntirety has its own YouTube channel with short videos on database administration (and a few commercials #disclosure).

--

Microsoft has quite a few "official" YouTube channels:

While I couldn't find an official "Microsoft SQL Server" or "Microsoft Database Platform" channel, I did see this interesting SQL Server channel - a channel "generated automatically by YouTube's video discovery system" that has managed to find many videos from various sources.

--

SQL Server MVP (and all-around fun guy) Grant Fritchey (blog/@GFritchey) has a channel here about a variety of topics.

--

SQL Server author/instructor Sharon Dooley has a frequently updated channel here.

--

Of course the big vendors/consultancies have their own channels - some of it is company/product commercials (gotta pay the bills) but a majority of it is really good free content about SQL Server.

In no particular order:

--

Of course there are other non-YouTube video sources out there, but I recommend you start here - watching this free content will keep you busy learning for some time!



How many Backups is Too Many?

Does this title seem strange?  Of course we know this is the true answer:

https://cdn.meme.am/cache/instances/folder633/58097633.jpg

But here is a story of a client that may have too many backups after all.

--

Our monitoring system found that the backups on a client server had suddenly started running very long.  When I signed on to look I saw that the nightly backups were still running even the next morning!  These were the three currently running processes related to backups:

--

LINE | dd hh:mm:ss.mss | session_id | sql_text | login_name
1 | 00 15:24:36.674 | 134 | backup database [BobData] to virtual_device = N'CA_BAB_MSSQL_a786700_10002_4c59b51f_BobData' with differential, blocksize = 65536, buffercount = 1, maxtransfersize = 2097152 | DOMAIN1\sa_arcbackup
2 | 00 11:27:32.043 | 165 | BACKUP DATABASE [BobData] TO DISK = N'B:\SQL_BAC\BobData\BobData_backup_2017_02_23_020011_9582786.bak' WITH RETAINDAYS = 10, NOFORMAT, NOINIT, NAME = N'BobData_backup_2017_02_23_020011_9582786', SKIP, REWIND, NOUNLOAD, COMPRESSION, STATS = 10 | DOMAIN1\sqlcluster
3 | 00 02:58:20.540 | 84 | backup database [FredData] to virtual_device = N'CA_BAB_MSSQL_a786080_10001_4f06c827_FredData' with blocksize = 65536, buffercount = 1, maxtransfersize = 2097152 | DOMAIN1\sa_arcbackup



LINE | wait_info | CPU | tempdb_allocations | tempdb_current | blocking_session_id
1 | (55475564ms)ASYNC_IO_COMPLETION | 236 | 0 | 0 | NULL
2 | (41252214ms)LCK_M_U | 0 | 0 | 0 | 134
3 | (10699785ms)ASYNC_IO_COMPLETION | 357 | 0 | 0 | NULL



LINE | percent_complete | host_name | database_name | program_name | start_time | login_time
1 | 79.8087 | Server5 | BobData | Arcserve Backup | 2/22/2017 22:02 | 2/22/2017 22:02
2 | NULL | Server1 | BobData | Microsoft SQL Server Management Studio | 2/23/2017 2:00 | 2/23/2017 2:00
3 | 18.4672 | Server5 | FredData | Arcserve Backup | 2/23/2017 10:30 | 2/23/2017 1:05


The middle process (Line 2 – session ID/SPID 165) is a regular BACKUP DATABASE statement that had been running since 2am local server time.  As you can see from the wait info, the entire life of this SPID has been spent waiting for an update lock (LCK_M_U).  Also notice that the Percent Complete is still NULL because it is being blocked by the first SPID (134) and hasn’t done anything.
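If you want to see a situation like this live on your own server, a minimal sketch against sys.dm_exec_requests shows the running backups and the blocking chain (the fuller grids above came from a more involved monitoring query):

/* Currently running backup requests and who is blocking whom */
SELECT r.session_id
, r.command
, r.wait_type
, r.blocking_session_id
, r.percent_complete
, r.start_time
FROM sys.dm_exec_requests AS r
WHERE r.command LIKE 'BACKUP%'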

The first and third SPID’s (134 and 84) are Arcserve backups of the BobData and FredData databases respectively.  Looking in the backup history in msdb for the BobData database I see this:

database_name | physical_device_name | type | backup_start_date | backup_finish_date
BobData | B:\SQL_BAC\BobData\BobData_backup_2017_02_22_020015_7258925.bak | D | 02/22/2017 04:29:47 | 02/22/2017 04:40:41
BobData | CA_BAB_MSSQL_8ea33d0_10002_4732d2d8_BobData | I | 02/21/2017 22:01:34 | 02/22/2017 04:29:44
BobData | B:\SQL_BAC\BobData\BobData_backup_2017_02_21_020010_3116691.bak | D | 02/21/2017 11:54:41 | 02/21/2017 12:05:22
BobData | CA_BAB_MSSQL_8c6ff40_10002_420ce264_BobData | I | 02/20/2017 22:02:02 | 02/21/2017 11:54:39
BobData | B:\SQL_BAC\BobData\BobData_backup_2017_02_18_020007_3945711.bak | D | 02/19/2017 23:39:43 | 02/19/2017 23:50:38
BobData | CA_BAB_MSSQL_8d4b360_10001_329872eb_BobData | D | 02/17/2017 22:00:34 | 02/19/2017 23:39:41

The Arcserve backup was kicking off around 10pm local server time, and taking a widely variable amount of time to complete – the first backup (start date 02/17 22:00) was a FULL backup (type D) and took just over *2 days* to complete (02/17 22:00 to 02/19 23:39)!

https://s-media-cache-ak0.pinimg.com/736x/25/b2/82/25b2825a9b57026e1932ba48909699e9.jpg
The catch is the next row up – the regular FULL backup to the B: drive looks like it kicks off right when the Arcserve backup completes, at 23:39 – this is most likely because it was being blocked by the Arcserve backup (just like the current activity above) and when the Arcserve backup completes, the FULL backup to the B: drive begins.

This history shows this same pattern of the B: drive backup kicking off right when the Arcserve backup completes through the following days.

The "regular" SQL Agent job kicks off at 2am, so it hangs as a blocked SPID each day until the Arcserve job completes, and then runs in a very reasonable amount of time – BobData is a 340GB database that backs up into a 36GB file (nice compression!) and as seen in the start/finish times above only takes 10-30 minutes to run, which in my experience is very good for a database this size.
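If you want to check compression ratios like this on your own databases, msdb stores both the raw and compressed sizes - a quick sketch (the BobData name is just from this client example):

/* Raw vs compressed size for the most recent FULL backups */
SELECT TOP 10 bs.database_name
, bs.backup_start_date
, bs.backup_finish_date
, bs.backup_size/1024/1024 AS BackupSizeMB
, bs.compressed_backup_size/1024/1024 AS CompressedSizeMB
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = 'BobData'
AND bs.type = 'D' /* FULL backups */
ORDER BY bs.backup_start_date DESC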

From the DBA end we can’t speak to why the Arcserve backup takes such a long time – completely guessing I would assume it isn’t compressing and therefore is sending the full 340GB across the network pipe, but this still seems very slow.

The SPID’s for the Arcserve backups are waiting on “ASYNC_IO_COMPLETION” which means it is waiting on storage read/writes, in this case certainly *writes*.

--

There are two pieces to this for follow-up – one very important and another less so.

First and foremost – as a DBA I strongly recommend not having “competing” backups like this.  The backups should either run to disk (very *very* strongly preferred) or to a tool like Arcserve.  Running to two different tools leads to exactly the kind of problem seen here.

https://imgflip.com/i/1k9t97
Note that this is not related to Arcserve in particular - any third-party backup tool that uses a database agent to directly run backups will display this behavior.

A conversation showed that the Arcserve backups had been recently added and it was definitely a match to the problems.

The workaround to this is to back the databases up to one tool and then to *copy* those backup files to the other tool.  The best recommendation I have in these situation is always to run “regular” SQL Server Agent job backups to a file location (either a local or network drive) and then to have your third-party tool use file backup for the actual BAK/TRN files from that location *rather* than running the “database agent” on the third party tool to backup the database directly.

In this model the third party tool never directly touches SQL Server – in this client's environment you would run SQL Server Agent jobs similar to the current maintenance plan (although something better than maintenance plans, such as Ola Hallengren's scripts or maybe MinionBackup) to backup to actual disk (in this case the B: drive) and then use Arcserve to backup the files that have been written on the B: drive.
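For example, the Agent job step in that model might look something like this - a hedged sketch using Ola Hallengren's DatabaseBackup procedure (check his documentation for the full parameter list; the 96-hour cleanup matches the 4-day file retention discussed below):

/* FULL backups of all user databases to the B: drive, compressed,
   deleting backup files older than 96 hours (4 days) */
EXECUTE dbo.DatabaseBackup
@Databases = 'USER_DATABASES',
@Directory = N'B:\SQL_BAC',
@BackupType = 'FULL',
@Compress = 'Y',
@CleanupTime = 96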

Another advantage to this backup model is it makes it easier for the DBAs managing your SQL Servers to access the backups for disaster recovery/data copy/etc.  If you backup to the third-party tool alone it usually requires some third party (usually a server admin or SAN admin) to perform a database restore – with regular SQL backups directly to disk, the DBA can perform that work by themselves which can greatly decrease recovery time in case of a disaster.

If you absolutely *must* backup directly to a tool like Arcserve, it needs to be made more reliable than what this client was seeing – wide swings of duration from 6 hours to 13 hours to 49 hours can’t be acceptable – and the bottleneck must be determined and resolved – there is no way it can be acceptable for a backup to take over 2 days on a 340GB database.  Fixing this requires work across your network, storage, and admin teams.

--

Second – as you can see from the limited number of rows in my history grid above, the current SQL Maintenance Plan is purging backup history, etc. that is older than 4 days:


I understand a desire to have backup file cleanup of 4 days, as seen in the next item in the maintenance plan, as the actual backups can take up a lot of space:


However I strongly recommend editing the "Clean Up History" task to retain records for a much longer interval, such as 35-40 days.  There are often processes that happen monthly that require troubleshooting (for example, an end-of-month data load that slows the system down), and only keeping 30-31 days can often cause an issue where something happens on a Friday or Saturday but isn't investigated until Monday (or Tuesday) – by the time the investigation begins, the 30th or 31st day of history has rolled off and the troubleshooting is hampered by lack of data...

http://i0.kym-cdn.com/entries/icons/original/000/019/630/ihnmotp.jpg

Maintenance/Backup history takes up very little space and can easily be retained for even several months (such as 90-100 days) without disk space issues.
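If you want to manage that history retention yourself rather than through the maintenance plan task, a minimal sketch using the built-in msdb procedure:

/* Purge backup/restore history older than 40 days */
DECLARE @oldest_date datetime = DATEADD(DAY, -40, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest_date;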

--

At the end of the day, watch out for duplicative backups - sometimes they are necessary, but in most cases you can meet your requirements to send your backups offsite, to a DR site, etc. using a simple file copy of BAK and TRN files.

Also watch your history retentions closely - don't swing the other way - I can't count the number of servers I have found with huge msdb's because no cleanup is happening - but set it to a realistic number that won't hamper your troubleshooting.  Consider the frequency of your key processes; in almost all environments there are at least monthly considerations, but often there are also bi-monthly or quarterly concerns that would require you to keep up to 100 days of history for effective troubleshooting.

--

Hope this helps!









Embrace The Missing Index DMVs - But Proceed with Caution!

One of the performance tools I use all of the time is the set of Missing Index DMVs:

  • sys.dm_db_missing_index_details – detailed specifics on the missing indexes, including column lists
  • sys.dm_db_missing_index_groups – relates individual missing indexes to index groups
  • sys.dm_db_missing_index_group_stats – information on the potential cost and benefit of the missing indexes
  • sys.dm_db_missing_index_columns – (not regularly used but included for completeness) information on the individual columns in the missing indexes – this information is readily retrieved from sys.dm_db_missing_index_details as groups of columns
As I always tell you, the easiest way to start is to borrow from someone else.  The most commonly used query is from Glenn Berry’s (blog/@GlennAlanBerry) Diagnostic Information Queries.  As of the February 2017 release it is Query #31:

-- Missing Indexes for all databases by Index Advantage  (Query 31) (Missing Indexes All Databases)

SELECT CONVERT(decimal(18,2), user_seeks * avg_total_user_cost * (avg_user_impact * 0.01)) AS index_advantage,
migs.last_user_seek, mid.statement AS [Database.Schema.Table],
mid.equality_columns, mid.inequality_columns, mid.included_columns,
migs.unique_compiles, migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs WITH (NOLOCK)
INNER JOIN sys.dm_db_missing_index_groups AS mig WITH (NOLOCK)
ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details AS mid WITH (NOLOCK)
ON mig.index_handle = mid.index_handle
ORDER BY index_advantage DESC OPTION (RECOMPILE);
------
-- Getting missing index information for all of the databases on the instance is very useful
-- Look at last user seek time, number of user seeks to help determine source and importance
-- Also look at avg_user_impact and avg_total_user_cost to help determine importance
-- SQL Server is overly eager to add included columns, so beware
-- Do not just blindly add indexes that show up from this query!!!

The results look like this:

index_advantage | last_user_seek | Database.Schema.Table | equality_columns | inequality_columns | included_columns | unique_compiles | user_seeks | avg_total_user_cost | avg_user_impact
219108325.6 | 02/16/2017 22:28:13 | database1.schema2.table1 | employee_id | NULL | id, as_of_effective_date | 5 | 36411704 | 6.07278699 | 99.09
9319171.13 | 02/17/2017 10:09:54 | database4.dbo.table3 | cmpcode, year_max, period_max | NULL | rundatetime | 581 | 3482 | 2679.33185 | 99.89
7881068.93 | 02/17/2017 10:05:19 | database2.dbo.table99 | code | grpcode | cmpcode | 341 | 3451 | 2285.305576 | 99.93
7037526.5 | 02/17/2017 10:09:39 | database4.dbo.table3 | cmpcode, usercode | NULL | rundatetime | 453 | 2588 | 2720.924091 | 99.94
5861313.35 | 02/17/2017 10:05:19 | database2.dbo.table99 | NULL | grpcode | cmpcode, code | 341 | 3451 | 2285.305576 | 74.32
3440880.84 | 02/17/2017 10:09:39 | database4.dbo.table3 | cmpcode, usercode | rundatetime | NULL | 227 | 1294 | 2661.233188 | 99.92
3362884.24 | 02/17/2017 10:05:43 | database2.dbo.table12 | elmlevel, deldate | NULL | cmpcode, code, name, sname | 278 | 629 | 5353.357209 | 99.87
1646616.36 | 02/17/2017 10:10:39 | database7.schema33.table2 | NULL | doc_status | cmpcode, doccode, docnum | 724 | 234099 | 10.3851265 | 67.73
1592595.34 | 02/17/2017 10:09:51 | database2.dbo.table99 | code | NULL | cmpcode, name, sname | 140 | 310 | 5137.918128 | 99.99
877818.52 | 02/16/2017 15:57:09 | database2.dbo.table99 | elmlevel, grpcode | NULL | cmpcode, code | 184 | 378 | 3185.120325 | 72.91

What this tells me is that the potentially (**potentially**) most useful index is on schema2.table1 in database1 on the employee_id column, with the id and as_of_effective_date columns along for the ride as INCLUDEs.  Since the MSSQLServer service was last restarted, plans needing this index have been compiled only 5 times (low cost) while the index would have been used a whopping 36 million times (huge benefit)!
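Scripted out as a CREATE INDEX statement (with a hypothetical name - the Bart Duncan script mentioned below generates its own names), that top recommendation would look like this:

CREATE INDEX missing_index_table1 ON database1.schema2.table1 (employee_id)
INCLUDE (id, as_of_effective_date)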

…but wait!

At this point we all need to pause and consider the collective wisdom...

https://cdn.meme.am/instances/44649369.jpg
You can see that Glenn warns us in the last comment of his script “Do not just blindly add indexes that show up from this query!!!”

One of the most common reasons people quote for this is the impact such an index can have.  It is possible that adding an index causes other queries to create/choose a query plan that is less favorable than their current plan because of the presence of the new index.  Maybe Query1 was using Plan1 and running smoothly, but now that there is a new index it may start using Plan2, which takes milliseconds longer but has a lower estimated "cost."  (Yes, milliseconds definitely matter!)

This is very uncommon but it can happen.  Always test missing indexes in a DEV/TEST environment before you roll them out into production!

Because you all have DEV/TEST environments for every single PROD environment that matches the hardware/software specs of PROD, right?

Right?  

RIGHT?
http://24.media.tumblr.com/tumblr_m8lvn0pSSH1qbsjydo1_500.jpg
Well….test if you can – I would never recommend “test in PROD” from an academic sense, but we all know in the real world we often have no choice – which is just another reason to be even more cautious of blindly adding new indexes – “missing” or otherwise.

--

One of the top reasons I say to be cautious of the Missing Index DMVs - the one I want to discuss here - has to do with duplicative suggestions.

Let’s look at the results from above again:

index_advantage | last_user_seek | Database.Schema.Table | equality_columns | inequality_columns | included_columns | unique_compiles | user_seeks | avg_total_user_cost | avg_user_impact
219108325.6 | 02/16/2017 22:28:13 | database1.schema2.table1 | employee_id | NULL | id, as_of_effective_date | 5 | 36411704 | 6.07278699 | 99.09
9319171.13 | 02/17/2017 10:09:54 | database4.dbo.table3 | cmpcode, year_max, period_max | NULL | rundatetime | 581 | 3482 | 2679.33185 | 99.89
7881068.93 | 02/17/2017 10:05:19 | database2.dbo.table99 | code | grpcode | cmpcode | 341 | 3451 | 2285.305576 | 99.93
7037526.5 | 02/17/2017 10:09:39 | database4.dbo.table3 | cmpcode, usercode | NULL | rundatetime | 453 | 2588 | 2720.924091 | 99.94
5861313.35 | 02/17/2017 10:05:19 | database2.dbo.table99 | NULL | grpcode | cmpcode, code | 341 | 3451 | 2285.305576 | 74.32
3440880.84 | 02/17/2017 10:09:39 | database4.dbo.table3 | cmpcode, usercode | rundatetime | NULL | 227 | 1294 | 2661.233188 | 99.92
3362884.24 | 02/17/2017 10:05:43 | database2.dbo.table12 | elmlevel, deldate | NULL | cmpcode, code, name, sname | 278 | 629 | 5353.357209 | 99.87
1646616.36 | 02/17/2017 10:10:39 | database7.schema33.table2 | NULL | doc_status | cmpcode, doccode, docnum | 724 | 234099 | 10.3851265 | 67.73
1592595.34 | 02/17/2017 10:09:51 | database2.dbo.table99 | code | NULL | cmpcode, name, sname | 140 | 310 | 5137.918128 | 99.99
877818.52 | 02/16/2017 15:57:09 | database2.dbo.table99 | elmlevel, grpcode | NULL | cmpcode, code | 184 | 378 | 3185.120325 | 72.91

Highlighted rows 4 and 6 are an example of what I call duplicative recommendations.  The CREATE INDEX statement for the two recommendations (generated using Bart Duncan’s Missing Index script) shows this even more clearly:

CREATE INDEX missing_index_2044_2043_table3 ON database4.dbo.table3 (cmpcode, usercode) INCLUDE (rundatetime)

CREATE INDEX missing_index_2046_2045_table3 ON database4.dbo.table3 (cmpcode, usercode,rundatetime)

The first index is only two columns with an INCLUDE of a third column, while the second index is simply all three columns as key columns.  The second index will not only satisfy any situations needing that index, but will also satisfy any situations needing the first index.

Note that the Index Advantage (weighted average of cost and benefit) of the second index, the index we really want, is only half that of the first index.  When I report recommendations like this to the client I edit the output to match the highest Index Advantage of the duplicative indexes with the most correct recommendation – in this case I would use the second index definition (the index on all three columns with no INCLUDE) with the first Index Advantage (7037526.5).

--

Another situation similar to that of the duplicative recommendation is that of the “left-hand-equivalent” recommendation.  Consider the two highlighted rows here:

index_advantage | last_user_seek | Database.Schema.Table | equality_columns | inequality_columns | included_columns | unique_compiles | user_seeks | avg_total_user_cost | avg_user_impact
219108325.6 | 02/16/2017 22:28:13 | database1.schema2.table1 | employee_id | NULL | id, as_of_effective_date | 5 | 36411704 | 6.07278699 | 99.09
9319171.13 | 02/17/2017 10:09:54 | database4.dbo.table3 | cmpcode, year_max, period_max | NULL | rundatetime | 581 | 3482 | 2679.33185 | 99.89
* 7881068.93 | 02/17/2017 10:05:19 | database2.dbo.table99 | code | grpcode | cmpcode | 341 | 3451 | 2285.305576 | 99.93
7037526.5 | 02/17/2017 10:09:39 | database4.dbo.table3 | cmpcode, usercode | NULL | rundatetime | 453 | 2588 | 2720.924091 | 99.94
5861313.35 | 02/17/2017 10:05:19 | database2.dbo.table99 | NULL | grpcode | cmpcode, code | 341 | 3451 | 2285.305576 | 74.32
3440880.84 | 02/17/2017 10:09:39 | database4.dbo.table3 | cmpcode, usercode | rundatetime | NULL | 227 | 1294 | 2661.233188 | 99.92
3362884.24 | 02/17/2017 10:05:43 | database2.dbo.table12 | elmlevel, deldate | NULL | cmpcode, code, name, sname | 278 | 629 | 5353.357209 | 99.87
1646616.36 | 02/17/2017 10:10:39 | database7.schema33.table2 | NULL | doc_status | cmpcode, doccode, docnum | 724 | 234099 | 10.3851265 | 67.73
* 1592595.34 | 02/17/2017 10:09:51 | database2.dbo.table99 | code | NULL | cmpcode, name, sname | 140 | 310 | 5137.918128 | 99.99
877818.52 | 02/16/2017 15:57:09 | database2.dbo.table99 | elmlevel, grpcode | NULL | cmpcode, code | 184 | 378 | 3185.120325 | 72.91

(* marks the two highlighted rows)

As above, here are the scripted CREATE INDEX statements for those two rows:

CREATE INDEX missing_index_35_34_table99 ON database2.dbo.table99 (code,grpcode) INCLUDE (cmpcode)

CREATE INDEX missing_index_250_249_table99 ON database2.dbo.table99 (code) INCLUDE (cmpcode, name, sname)

These two indexes are not as obviously related but they are.
http://i.imgur.com/iQYuWno.jpg
They are not only on different fields, but also have different INCLUDE columns.  If you look closely though, the actual index columns are what I call “left-hand equivalent” – they both start with code and then the first index adds grpcode, so an index on code, grpcode would cover both situations for the searchable index fields.

The second piece that would truly make an index cover both situations is for it to include the sum of the INCLUDE’d columns – hence:

CREATE INDEX missing_index_250_249_table99 ON database2.dbo.table99 (code,grpcode) INCLUDE (cmpcode, name, sname)

This index on two columns with three included columns covers both situations – instead of choosing one index over the other we need to do a little work and combine them, but the effect can be very beneficial, and once you understand how it works it doesn’t take that much time.

--

Here is another (completely contrived) situation:

CREATE INDEX missing_index_44_45_table23 ON database3.dbo.table23 (name,address1) INCLUDE (address2)

CREATE INDEX missing_index_32_33_table23 ON database3.dbo.table23 (name) INCLUDE (address1, address2, state)

CREATE INDEX missing_index_55_56_table23 ON database3.dbo.table23 (name, city) INCLUDE (address1)

CREATE INDEX missing_index_48_49_table23 ON database3.dbo.table23 (name, address1,city)

So we need to start at the beginning – are the indexes all on the same database and table?  Check!  (You may chuckle but especially when looking across an instance you may find you have very similar looking databases/tables!)

Next, let's look at left-hand equivalence.  All four indexes start with the name field – so far so good.  Index_32_33 ends there, so it is a likely candidate to be consolidated with something else.

This is where it gets a little trickier – both index_44_45 and index_48_49 have address1 as their second column, which means they could be duplicative and could also be related (left-hand-equivalent) to index_32_33 upon further investigation.

Index_55_56 however does not continue with address1 – instead it has city in its second position.  This means index_55_56 is *not* duplicative of index_44_45 or index_48_49 although it can still be related to the narrowest index, index _32_33.

This demonstrates again how important the order in the index is – indexes are searched from left-to-right, so "name, city" <> "name, address1".

Consider index_55_56 and index_48_49 – even though index_48_49 *does* have the city column in its index list, it is not in the same left-to-right order (address1 is in the way), so it isn't left-hand-equivalent and therefore not combinable.

This leaves us with two options, either of which can be optimal:

Option 1 - Combine index_32_33, index_44_45, and index_48_49, and create index_55_56 as-is:

CREATE INDEX missing_index_98_99_table23 ON database3.dbo.table23 (name,address1,city) INCLUDE (address2,state)
CREATE INDEX missing_index_55_56_table23 ON database3.dbo.table23 (name, city) INCLUDE (address1)

Option 2 - Combine index_44_45 with index_48_49 (both containing name, address1), and combine index_32_33 with index_55_56:

CREATE INDEX missing_index_77_78_table23 ON database3.dbo.table23 (name,address1,city) INCLUDE (address2)
CREATE INDEX missing_index_88_89_table23 ON database3.dbo.table23 (name, city) INCLUDE (address1, address2, state)
As stated above either of these options work – they both cover all four situations.  One thing to consider is the size of the fields contained in the indexes – in option 1 we are storing eight fields (name twice, address1 twice, city twice, address2 once, and state once) whereas in option 2 we are storing *nine* fields as we have address2 in the INCLUDE of both indexes.  This may make Option 1 at least slightly “better” although depending on the datatype of address2 and the number of rows in table23, that advantage may be negligible.

--

Missing indexes are an oft-avoided subject, but they really can make a difference to performance, and the algorithms inside SQL Server that determine and weight the recommendations have become much better with each version of SQL Server.  One thing these improved algorithms still don't watch for is the duplicative/related situations we have discussed here, so you still need to watch for them yourself.

http://www.cindyvallar.com/crowsnest.jpg

Again, do *not* just blindly create new indexes – consider, test if possible, and weigh the advantages against the possible disadvantages such as the amount of space the index will consume.

--

Hope this helps!


Querying SQL and Windows Version Info with T-SQL

Just a quick one today - I see questions sometimes about polling Windows information from inside SQL Server itself.  There are a couple of frequently touted options:


--


(1)  SELECT @@VERSION


The most basic option, and it does return most of what we want but not in any kind of a pretty format:


Microsoft SQL Server 2012 (SP3) (KB3072779) - 11.0.6020.0 (X64)
Oct 20 2015 15:36:27                  
Copyright (c) Microsoft Corporation                
Enterprise Edition (64-bit) on Windows NT 6.1 <X64> (Build 7601: Service Pack 1) (Hypervisor)


To just run a query and see the answer this is fine, but usually I like to be able to programmatically manipulate the data (such as with an ORDER BY), and a result set that is one big text field (with embedded line feeds) is not a great way to go.
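One quick aside - SERVERPROPERTY() returns several of these same pieces as separate, sortable values:

SELECT SERVERPROPERTY('MachineName') AS MachineName
, SERVERPROPERTY('ProductVersion') AS ProductVersion /* e.g. 11.0.6020.0 */
, SERVERPROPERTY('ProductLevel') AS ProductLevel /* e.g. SP3 */
, SERVERPROPERTY('Edition') AS Edition /* e.g. Enterprise Edition (64-bit) */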


--


(2) exec master..xp_cmdshell 'systeminfo'


This is sort of cheating as it requires a call to xp_cmdshell to call a Windows command rather than anything truly inside SQL Server, but it does work (assuming you have xp_cmdshell enabled):


Host Name:               Instance01
OS Name:                   Microsoft Windows Server 2008 R2 Standard 
OS Version:                6.1.7601 Service Pack 1 Build 7601
OS Manufacturer:           Microsoft Corporation
OS Configuration:          Member Server
OS Build Type:             Multiprocessor Free
Registered Owner:         MyCompany
Registered Organization:   MyCompany
Product ID:                00477-001-0000421-84319
Original Install Date:     3/13/2013, 8:28:33 AM
System Boot Time:          1/28/2017, 8:03:41 AM
System Manufacturer:       VMware, Inc.
System Model:              VMware Virtual Platform
System Type:               x64-based PC
Processor(s):              4 Processor(s) Installed.
                           [01]: Intel64 Family 6 Model 45 Stepping 2 GenuineIntel ~2893 Mhz
                           [02]: Intel64 Family 6 Model 45 Stepping 2 GenuineIntel ~2893 Mhz
                           [03]: Intel64 Family 6 Model 45 Stepping 2 GenuineIntel ~2893 Mhz
                           [04]: Intel64 Family 6 Model 45 Stepping 2 GenuineIntel ~2893 Mhz
BIOS Version:              Phoenix Technologies LTD 6.00, 9/21/2015
Windows Directory:         C:\Windows
System Directory:          C:\Windows\system32
Boot Device:               \Device\HarddiskVolume1
System Locale:             en-us;English (United States)
Input Locale:              en-us;English (United States)
Time Zone:                 (UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
Total Physical Memory:     24,576 MB
Available Physical Memory: 5,405 MB
Virtual Memory: Max Size:  25,598 MB
Virtual Memory: Available: 6,182 MB
Virtual Memory: In Use:    19,416 MB
Page File Location(s):     D:\pagefile.sys
Domain:                   mydomain.com
Logon Server:              N/A
Hotfix(s):                 143 Hotfix(s) Installed.
                           [01]: KB2470949
                           [02]: KB2509553
                           [03]: KB2511455
                           [04]: KB2547244
                           [05]: KB2560656
                           [06]: KB2570947
                           [07]: KB2585542
                           [08]: KB2604115
                           [09]: KB2621440
                           [10]: KB2644615
                           [11]: KB2654428
                           [12]: KB2667402
                           [13]: KB2676562
                           [14]: KB2690533
                           [15]: KB2692929
                           [16]: KB2698365
                           [17]: KB2705219
                           [18]: KB2709715
                           [19]: KB2724197
                           [20]: KB2736422
                           [21]: KB2742599
                           [22]: KB2758857
                           [23]: KB2765809
                           [24]: KB2770660
                           [25]: KB2799494
                           [26]: KB2807986
                           [27]: KB2813170
                           [28]: KB2813347
                           [29]: KB2813430
                           [30]: KB2840149
                           [31]: KB2840631
                           [32]: KB2861698
                           [33]: KB2862152
                           [34]: KB2862330
                           [35]: KB2862335
                           [36]: KB2862973
                           [37]: KB2864202
                           [38]: KB2868038
                           [39]: KB2871997
                           [40]: KB2884256
                           [41]: KB2892074
                           [42]: KB2893294
                           [43]: KB2894844
                           [44]: KB2898851
                           [45]: KB2911501
                           [46]: KB2931356
                           [47]: KB2937610
                           [48]: KB2943357
                           [49]: KB2957189
                           [50]: KB2968294
                           [51]: KB2972100
                           [52]: KB2972211
                           [53]: KB2973112
                           [54]: KB2973201
                           [55]: KB2973351
                           [56]: KB2977292
                           [57]: KB2978120
                           [58]: KB2984972
                           [59]: KB2991963
                           [60]: KB2992611
                           [61]: KB3000483
                           [62]: KB3003743
                           [63]: KB3004361
                           [64]: KB3004375
                           [65]: KB3010788
                           [66]: KB3011780
                           [67]: KB3018238
                           [68]: KB3019978
                           [69]: KB3021674
                           [70]: KB3022777
                           [71]: KB3023215
                           [72]: KB3030377
                           [73]: KB3033889
                           [74]: KB3035126
                           [75]: KB3037574
                           [76]: KB3038314
                           [77]: KB3042553
                           [78]: KB3045685
                           [79]: KB3046017
                           [80]: KB3046269
                           [81]: KB3055642
                           [82]: KB3059317
                           [83]: KB3060716
                           [84]: KB3068457
                           [85]: KB3071756
                           [86]: KB3072305
                           [87]: KB3072630
                           [88]: KB3074543
                           [89]: KB3075220
                           [90]: KB3076895
                           [91]: KB3078601
                           [92]: KB3080446
                           [93]: KB3084135
                           [94]: KB3086255
                           [95]: KB3092601
                           [96]: KB3097989
                           [97]: KB3101722
                           [98]: KB3108371
                           [99]: KB3108381
                           [100]: KB3108664
                           [101]: KB3108670
                           [102]: KB3109103
                           [103]: KB3109560
                           [104]: KB3110329
                           [105]: KB3122648
                           [106]: KB3123479
                           [107]: KB3124275
                           [108]: KB3126587
                           [109]: KB3127220
                           [110]: KB3133043
                           [111]: KB3135983
                           [112]: KB3139398
                           [113]: KB3139914
                           [114]: KB3139940
                           [115]: KB3142024
                           [116]: KB3142042
                           [117]: KB3145739
                           [118]: KB3146706
                           [119]: KB3146963
                           [120]: KB3149090
                           [121]: KB3156016
                           [122]: KB3156017
                           [123]: KB3156019
                           [124]: KB3159398
                           [125]: KB3161949
                           [126]: KB3161958
                           [127]: KB3163245
                           [128]: KB3164033
                           [129]: KB3164035
                           [130]: KB3170455
                           [131]: KB3177186
                           [132]: KB3184122
                           [133]: KB3185911
                           [134]: KB3188740
                           [135]: KB3192321
                           [136]: KB3192391
                           [137]: KB3205394
                           [138]: KB3210131
                           [139]: KB3212642
                           [140]: KB958488
                           [141]: KB976902
                           [142]: KB976932
                           [143]: KB3212646
Network Card(s):           1 NIC(s) Installed.
                           [01]: Intel(R) PRO/1000 MT Network Connection
                                 Connection Name: LAN Prod
                                 DHCP Enabled:    No
                                 IP address(es)
                                   [01]: 192.168.22.33



https://i.ytimg.com/vi/eOJ32gNM0qc/hqdefault.jpg 


Well....assuming you have xp_cmdshell enabled *and* you want to know every single KB applied to your server, ever.

Again, this output isn't cleanly parseable, so it isn't the optimal answer.
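
As an aside: xp_cmdshell is disabled by default, and many shops leave it that way on purpose.  If your security policy allows it, enabling it is a quick sp_configure exercise - a minimal sketch:

-- Enable xp_cmdshell (requires ALTER SETTINGS permission; check your
-- security policy first - many environments disable this deliberately).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;

-- ...and then the call itself:
EXEC master..xp_cmdshell 'systeminfo';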


--


(3) Glenn Berry


If you have read my blog at all, you know I am a huge fan of Glenn Alan Berry's (blog/@GlennAlanBerry) "Diagnostic Information Queries" - commonly referred to as the "DMV Queries" (as seen here, here, here, and here).


http://s2.quickmeme.com/img/99/995c9a89f2eb1f869fb6fd7fc72ac143a76d6313f05cb0b8309813514eb0f876.jpg

Glenn has been maintaining this forever, and does an amazing job of both deciphering the DMVs and of cataloging submissions from other prominent SQL Server professionals (such as Jimmy May's great Disk Latency query).

The first relevant query is currently Query #1:


SELECT @@SERVERNAME AS [Server Name], @@VERSION AS [SQL Server and OS Version Info];


As you can see, this is just the combination of the @@VERSION described above and @@SERVERNAME to return the instance's name.

The next (and more interesting) query is Query #3:


SELECT SERVERPROPERTY('MachineName') AS [MachineName],
SERVERPROPERTY('ServerName') AS [ServerName],
SERVERPROPERTY('InstanceName') AS [Instance],
SERVERPROPERTY('IsClustered') AS [IsClustered],
SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [ComputerNamePhysicalNetBIOS],
SERVERPROPERTY('Edition') AS [Edition],
SERVERPROPERTY('ProductLevel') AS [ProductLevel],                     -- What servicing branch (RTM/SP/CU)
SERVERPROPERTY('ProductUpdateLevel') AS [ProductUpdateLevel],         -- Within a servicing branch, what CU# is applied
SERVERPROPERTY('ProductVersion') AS [ProductVersion],
SERVERPROPERTY('ProductMajorVersion') AS [ProductMajorVersion],
SERVERPROPERTY('ProductMinorVersion') AS [ProductMinorVersion],
SERVERPROPERTY('ProductBuild') AS [ProductBuild],
SERVERPROPERTY('ProductBuildType') AS [ProductBuildType],             -- Is this a GDR or OD hotfix (NULL if on a CU build)
SERVERPROPERTY('ProductUpdateReference') AS [ProductUpdateReference], -- KB article number that is applicable for this build
SERVERPROPERTY('ProcessID') AS [ProcessID],
SERVERPROPERTY('Collation') AS [Collation],
SERVERPROPERTY('IsFullTextInstalled') AS [IsFullTextInstalled],
SERVERPROPERTY('IsIntegratedSecurityOnly') AS [IsIntegratedSecurityOnly],
SERVERPROPERTY('FilestreamConfiguredLevel') AS [FilestreamConfiguredLevel],
SERVERPROPERTY('IsHadrEnabled') AS [IsHadrEnabled],
SERVERPROPERTY('HadrManagerStatus') AS [HadrManagerStatus],
SERVERPROPERTY('IsXTPSupported') AS [IsXTPSupported],
SERVERPROPERTY('BuildClrVersion') AS [Build CLR Version];
 





The result set (a single row, transposed here for readability):

MachineName                   EDCRMDB01V
ServerName                    EDCRMDB01V
Instance                      NULL
IsClustered                   0
ComputerNamePhysicalNetBIOS   EDCRMDB01V
Edition                       Enterprise Edition (64-bit)
ProductLevel                  SP3
ProductUpdateLevel            NULL
ProductVersion                11.0.6020.0
ProductMajorVersion           11
ProductMinorVersion           0
ProductBuild                  6020
ProductBuildType              NULL
ProductUpdateReference        KB3072779
ProcessID                     1988
Collation                     Latin1_General_CI_AI
IsFullTextInstalled           1
IsIntegratedSecurityOnly      0
FilestreamConfiguredLevel     0
IsHadrEnabled                 0
HadrManagerStatus             2
IsXTPSupported                NULL
Build CLR Version             v4.0.30319




This query introduces the SERVERPROPERTY() function and shows some of the many fields it can retrieve.  One of the nicest things about SERVERPROPERTY() to me is that it cleanly parses out the Major and Minor versions of SQL Server - this is the most direct way to find out that your instance is Version 11 (SQL 2012).
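
That also makes version-dependent logic straightforward to write - a minimal sketch (the SQL 2012 cutoff is just an illustration):

-- SERVERPROPERTY() returns sql_variant, so convert before comparing.
-- (ProductMajorVersion returns NULL on older builds that predate the property.)
IF CONVERT(INT, SERVERPROPERTY('ProductMajorVersion')) >= 11
    PRINT 'SQL Server 2012 or later';
ELSE
    PRINT 'Pre-SQL Server 2012 instance';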

The limitation for our purposes here is that it doesn't return any information about the operating system version (we are talking about Windows here, because these hooks just don't reach into Linux)... you get a little external information, like the NetBIOS name, but nothing about the actual Windows version.

We need something more...


--


(4) My Answer


As is often the case, my best answer is a cross between a couple of the previous options to take the best of both... kind of like this:



http://weknowmemes.com/wp-content/uploads/2014/02/hilarious-animal-hybrids.jpg


(Sorry - I found that picture and couldn't resist...)


This query uses string functions to parse @@VERSION and extract some of the information (and then returns pretty text via a pair of CASE statements) and also uses some of the more meaningful SERVERPROPERTY() values.

The string to parse the Windows Version Number out of @@VERSION came from here - much easier than trying to backtrack it myself!


SELECT SERVERPROPERTY('ServerName') AS [SQLServerName]
, SERVERPROPERTY('ProductVersion') AS [SQLProductVersion]
, SERVERPROPERTY('ProductMajorVersion') AS [ProductMajorVersion]
, SERVERPROPERTY('ProductMinorVersion') AS [ProductMinorVersion]
, SERVERPROPERTY('ProductBuild') AS [ProductBuild]
, CASE LEFT(CONVERT(VARCHAR(20), SERVERPROPERTY('ProductVersion')), 4)
    WHEN '8.00' THEN 'SQL Server 2000'
    WHEN '9.00' THEN 'SQL Server 2005'
    WHEN '10.0' THEN 'SQL Server 2008'
    WHEN '10.5' THEN 'SQL Server 2008 R2'
    WHEN '11.0' THEN 'SQL Server 2012'
    WHEN '12.0' THEN 'SQL Server 2014'
    ELSE 'SQL Server 2016+'
  END AS [SQLVersionBuild]
, SERVERPROPERTY('ProductLevel') AS [SQLServicePack]
, SERVERPROPERTY('Edition') AS [SQLEdition]
, RIGHT(SUBSTRING(@@VERSION, CHARINDEX('Windows NT', @@VERSION), 14), 3) AS [WindowsVersionNumber]
, CASE RIGHT(SUBSTRING(@@VERSION, CHARINDEX('Windows NT', @@VERSION), 14), 3)
    WHEN '5.0' THEN 'Windows 2000'
    WHEN '5.1' THEN 'Windows XP'
    WHEN '5.2' THEN 'Windows Server 2003/2003 R2'
    WHEN '6.0' THEN 'Windows Server 2008/Windows Vista'
    WHEN '6.1' THEN 'Windows Server 2008 R2/Windows 7'
    WHEN '6.2' THEN 'Windows Server 2012/Windows 8'
    ELSE 'Windows Server 2012 R2+'
  END AS [WindowsVersionBuild];


This allows me to return a pretty result set and also to extract number values and simple strings that I can then programmatically act upon (or sort/filter in Excel):


SQLServerName          EDCRMDB01V
SQLProductVersion      11.0.6020.0
ProductMajorVersion    11
ProductMinorVersion    0
ProductBuild           6020
SQLVersionBuild        SQL Server 2012
SQLServicePack         SP3
SQLEdition             Enterprise Edition (64-bit)
WindowsVersionNumber   6.1
WindowsVersionBuild    Windows Server 2008 R2/Windows 7
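
And because the values come back as clean columns, acting on them is easy - a minimal sketch wrapping the parsed values in a derived table (the pre-2014 filter is purely an example threshold):

-- Wrap the parsed values in a derived table so they can be filtered and
-- sorted like any other result set.
SELECT v.SQLServerName, v.ProductMajorVersion, v.SQLServicePack
FROM (
    SELECT SERVERPROPERTY('ServerName') AS SQLServerName
    , CONVERT(INT, SERVERPROPERTY('ProductMajorVersion')) AS ProductMajorVersion
    , CONVERT(VARCHAR(20), SERVERPROPERTY('ProductLevel')) AS SQLServicePack
) AS v
WHERE v.ProductMajorVersion < 12   -- flag anything older than SQL Server 2014
ORDER BY v.SQLServerName;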



--


Hope this helps!


