Creativity, Opportunity and your Career

Are there times in life when you find that you are more creative than others? Are there certain times during the day when you have a knack for it, or when your mind is best able to relax and engage in a creative moment?

I find this creative process interesting. It does not come about during the normal hustle and bustle of work and life. It is hard to find when one is entwined in day-to-day life, trying to keep up with a workload, family, personal interests or any other commitment that requires engagement.

Do you know where that spot is in your life? Do you know how to create that moment? Do you create that moment?

Personally, it’s difficult for me to find the time to engage in this process at a level that I find interesting or productive enough to matter.

I have a routine which I go through and this routine can be accomplished in any sort of environment.

At this moment, I’m in my process of finding my creative space – this takes, for me, several forms.

The first is physical. I need to be comfortable. If I’m hungry, it’s more difficult to relax and focus – if I’m tired, my focus isn’t what it needs to be in order to settle into a constructive form of creative thought.

The second is mental. I need to have things on my mind wrapped up – little things like remembering to forward an email about the landscaping to my wife, setting up a reminder to call someone back, scheduling the meeting that’s been on my mind, paying a bill, etc… Get rid of the ticky-tack stuff in your brain so that it’s not of any concern.

The third is emotional. We are all emotional creatures. As a man, it took me time to realize that I am incredibly emotional, and that this is an incredible blessing and one which should be embraced. For me, I find that music can often bring me to an emotional state where my foot is tapping, I have a smile on my face and I’m singing along to some new song I’ve recently heard (even though I’m terrible at using the actual lyrics).

The final item, for me, is environment. Turning off the phone. Closing the door, turning up the music a bit higher, or putting on headphones. I need to be lost in a lot of “white noise” with which I’m familiar – in this manner, outside stimuli which my brain isn’t anticipating are out of the picture. The same is true with line of sight. I ensure that nothing is in my line of sight which isn’t expected or anticipated. This includes co-workers, pets, TV, internet, etc… The goal here is to eliminate outside stimuli which your brain isn’t already expecting.

Today I find myself in this space, on an airplane. And now you, the reader, are saying: are you kidding me? You just broke like five out of the four rules you mentioned above. Heh, yes, indeed; I did. The point is that I actually find myself in a very familiar environment – I typically sit in one of two seats on each flight. So, the noise is about the same, the routines of the flight attendants are about the same, etc…

Today I find myself listening to the Killers’ album “Battle Born”. I just finished my typical fare, which consists of the Thai chicken roll and a club soda with a lime.

The work consists of thinking outside of the box as it relates to technical architecture components but, more importantly, the relationships, conversation and language around what I’m thinking. You see, so often in life we have a picture of something in our mind’s eye. Meaning that we “get it”, we can “see” the answer – yet we struggle with articulating what that means, why it’s important or how we get there.

These are some pretty important conversations in your job. You have them all of the time. They have been known to frustrate you – perhaps to the extent that they have driven you to Monster or Dice to look for “another company” which isn’t so “dysfunctional”.

How do I know this you ask? Well, I’ve been there and I’ve done that, more than once. However, it dawned on me one day that something else was going on… you know the definition of insanity is doing the same thing over and over and expecting a different result. Here you are, in the same boat, again, doing what you have always done, because, you are of course, right. But guess what? It doesn’t matter if you are right. It doesn’t matter how wrong everyone else is. What matters is you, because after all, who can you control? Yeah, that’s what I thought. It’s up to you to change the game. The game cannot be changed if you keep doing the same thing over and over, now can it?

It can be tough to look in the mirror and realize that the degree(s), certification(s) and successes which you’ve had in your career to date don’t matter all of the time. This is true for your manager, for their manager, etc… it’s true all the way up the chain.

You see folks, as database people, we are given an opportunity unlike many others. Based on what you do day to day, the platform exists to have an incredible impact on your overall organization. Now, this impact can take more than one form. It can be positive. It can be negative. I’ve seen both. I’ve done both. Choosing the positive path is highly recommended. Choosing the positive path is significantly harder. Choosing the positive path requires a bit of humility, grace, patience, perseverance and temperance. As database people, we are often “right” – you see, this is the rub, from an organizational standpoint. If you have *EVER* said “I told you so” or “I warned you…” etc… you have failed yourself and you’ve severely impacted the value that you might be able to have within your organization…. Screech! Say what? – But I was RIGHT, dammit! Heh. Yeah, you were. So what?

Put yourself in the shoes of the person or team you said that to. Seriously. Stop reading. Close your eyes. Think about where they were, what they had just worked on – what are they thinking in that moment? I’ll help you out, they are thinking “wow, what an ass”, or “here we go again…” or “why do they (the company you work for) tolerate this kind of behavior”, etc…

Yup – and if we are all honest, at some point in our jobs / careers, we have thought the same thing of others.

So, what does this all mean, why does it matter, what is the point of this post?

As data professionals we are uniquely positioned to see the “world” from a vantage point that others aren’t privy to. We have the ability to take on responsibilities outside of our specific domain. We have the ability to contribute in areas where many others don’t have an opportunity. Your company can benefit from you thinking less about yourself and others and more about the challenge at hand – thinking about the strategic or corporate initiatives and then finding a way to contribute to them. It’s about finding a way to help the development team or the server and SAN gurus, or, if you are really good, the project management team. It’s easy to help each one of those groups. Take them out to lunch, get to know their story as it relates to work. Ask how you can help. You will be surprised by the responses you receive.

I can already hear some of you saying to yourself “you don’t know my boss, you don’t know this environment, you don’t know…” Yup, you are right. I don’t know. I have no idea. Here’s what I do know: if you take the time to think through this post and send a list of thoughtful questions to someone whom you admire professionally, their response may surprise you. I also know that I’ve been blessed to be in some incredible organizations with some wicked tough challenges. Earlier in my career, these challenges would have torn me to bits – I simply wouldn’t have been ready for them. Heck, I’m still not ready for some of the challenges my clients face. However, I’m honest with myself about it, and I’m able to articulate the value that I can provide and, just as important, where I cannot.

My hope for you is that you will go through this process. And if you have gone through this type of process, why stop now? Go through it again and again. Then take an even more important step and go through this process in your personal life.

Speaking

This past weekend, I had the most challenging experience I’ve ever encountered as it relates to speaking in public.

I’ve been blessed with opportunities to speak at Dev Connections, SQL Saturday events, User Group events and of course, in my day to day life as a consultant I speak to differing audiences on a fairly regular basis.

I’ve read a lot over the years as it pertains to giving a good presentation, and I’ve seen many good public speakers as well as some real duds.

However, I now have a newfound appreciation for folks who speak to audiences on matters that aren’t technical or, in general, targeted topics / concepts.

This past weekend I gave a eulogy.

The individual I gave it for was close to me, however, even if they hadn’t been, I believe that the same challenges would have existed.

It’s interesting – one of the first rules in public speaking is to “Know Thy Audience” – this is, of course, critically important to being an effective speaker, consultant, employee, manager, CxO, etc…
So, what is the audience for a eulogy?

Well, it’s obvious, right?

Clearly, it’s full of people who knew the deceased – but then there are some wrinkles in the midst of that – differing relationships – from mother, daughter, wife, husband, son, father, brother, sister, 1st cousin, 8th cousin, grandkids, friends, colleagues, etc… and then in the midst of that wrinkle there are some very significant generational gaps, from toddlers to seniors who were born during the Great Depression.

Wow. Imagine preparing your favorite technical session with that audience in mind – and then, do it without a slide deck. Do it when EVERYONE is watching YOU. Do it when you see tears streaming down a cheek. Do it when there are tears streaming down your cheek and the only thing you can feel is a lump in your throat the size of a grapefruit.

The purpose of this post is to challenge you, the reader, to get up – be bold. Speak. It’s to challenge you in your personal life. Take that risk. I don’t know what it is, but take it. There is but this moment in life, seize it.
It’s tough, I’m not going to lie. However, it does get easier. You will get better. You’ve heard the old adage a thousand times “Practice makes perfect” – while I’m not sure about the “perfection” aspect of it, it certainly makes one better, a lot better.

Had I not faced my fear of speaking in front of my peers on technically related items, I’m not sure how the eulogy would have turned out – and let’s face it, what’s more important in life: providing a good technical session, or being able to prepare oneself for a life event that takes on a very real sense of importance?

I’m not trying to diminish or otherwise negate the importance of technical sessions – they have provided me with a great deal of knowledge over the years and I have certainly benefited from them. However, having attended many of them and given more than a handful, my experience is that, having given a eulogy, my approach to giving technical sessions is going to change – perhaps drastically.

First and foremost, preparation.

In the normal course of giving a technical presentation, I will typically come up with what I think is a good idea. Then I’ll scribe an outline, put together the demos I believe illustrate the points / relevant technical details, create some slides, add the detail to my outline and practice.

In the course of giving the eulogy – it was a very different experience. First, I ASKED many people what they thought of different ideas, I bounced around different stories, I THOUGHT a lot about the audience. I wrote some notes, scribbled down thoughts over the course of a day or two, then sat down and wrote it out.

Then I practiced. I practiced in a way I hadn’t before. I practiced speaking the eulogy out loud with my favorite music playing. I practiced it while driving the car. I practiced it walking around. I practiced it while hitting golf balls.
This was different for me – it had been my intent to practice in enough “distracted” environments that when it came time to do it “live”, I’d be ready for the grapefruit in my throat, I’d be ready for the faces in the audience who were looking at me.
And you know what? It worked. Was my delivery flawless? No. Did I recite the eulogy from memory without once checking my notes? No. Did I have some unexpected hiccups? Yup. Did they throw me? Nope.
Was I able to connect with every person in the Church? I haven’t a clue – but I do know that out of the ~ 175 folks in attendance, I was blessed to hear many of them say “thank you, it was so eloquent, etc…” Afterwards, I asked a couple of different generational groups what they heard me say – and guess what? They heard the same thing. Victory. In a big, huge way.

How many times have you walked out of a technical presentation with a different set of “facts” or even “ideas” about the subject matter than the guy who was sitting next to you? I can answer that for you, it happens very often.

Now, I’ll likely take some flak for this, and all I can say is “bring it on” – here you are: if your session has folks walking out with differing ideas of what you just spoke about, you weren’t as effective as you’d hoped. Yup, that’s right, as a speaker, you failed them, and you failed yourself.

I’m guilty of this. I’m absolutely certain that there’s been ambiguity in my sessions. I always chalked it up to “different people have different levels of understanding”, so, it’s not on me.

But guess what? It is.

Here’s the thing – when I give presentations to my clients, I can often anticipate the outcome of the conversation. If I cannot anticipate the questions which will come my way, based on information which I’ve delivered, then either I don’t know my subject area well enough or I haven’t done enough due diligence. If the “takeaways” from a client presentation are not 100 percent clear to the client and myself, I’ve failed us both, and my time as their consultant will surely come to an end at some point down the road.

This is where I see a fork in the road. Giving a presentation to a client and giving a eulogy each carry their own weight and importance. Giving a session at a conference doesn’t have the same types of pressure. There are different pressures with technical conferences – there’s being grilled / vetted by your peers if you are wrong. There’s absolutely accountability from an accuracy-of-the-presentation standpoint.

BUT,

There’s NOT any accountability from the perspective of WHAT you conveyed to your audience in terms of the HOW’s and WHEN’s and WHY’s.

This is really important and I’m *finally* getting to the heart of this post.

If you are going to give a presentation or if you are going to attend a presentation and the speaker makes a statement such as “this technology will save your DBA staff so much time” or “this new feature will help increase your performance” – stop. Full stop. Are you sure? How sure are you? I’ll answer this for you, you aren’t. Neither the speaker, nor the attendee.

However, as the speaker, you can be sure. As the attendee, it’s your responsibility to be sure.

It’s always amazed me how often I’ll give a presentation and the questions at the end aren’t more numerous or more specific about implementation details, the attendees’ specific environments or their challenges.

Know Thy Audience pertains to the speaker. Know Thy Self pertains to the attendee and Mr. / Ms. Attendee, if you don’t know the subject material being presented on then please do one of two things, either learn it before you attend the session or spend a significant amount of time after the session learning WHEN and WHY it *might* be a good idea to implement.

I cannot count, on both hands or feet, how many times I’ve seen terrific technology used in the wrong manner, resulting in botched software, poor quality, high TCO, headaches, nightmares, well, you get the idea.

This also comes back to an old post on technical debt I wrote once. It feeds into it perfectly and, for those of you who speak, you have a larger responsibility, and for those attendees, you also have a larger responsibility. Don’t feed the monster which is technical debt. While we’ve all done it, most often out of the best of intentions, there exists an opportunity to acknowledge the reality and learn from it.

So, here’s the challenge to the speakers out there – focus less on breadth and depth of a topic and focus more on the relevance of a topic. Attendees, do yourself and your respective companies a huge service and ensure that you are ready for the material which is about to be presented.

Index Aligned Partitions and Resolution

Most of us are aware of partitioning in SQL Server.  It was introduced with SQL Server 2005 and then further enhanced in later releases.  The value of partitioning a table can vary.  Some use it as a means of archiving off older data, others use it for the sake of performance gains, either way, there are some important things to keep in mind when one partitions a table.

One of those items brings us to the point of this post: aligning your non-clustered indexes on the partition scheme.  This is an important rule and, more often than not, it should be followed.

One of the reasons it’s important: if you ever want to switch partitions, all of the indexes associated with that object must be aligned (a minimal sketch follows below).  ref: http://msdn.microsoft.com/en-us/library/ms187526.aspx
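
To make alignment concrete, here is a minimal sketch; the partition function, scheme, table and index names are all hypothetical, so map them to your own objects. Creating the non-clustered index ON the partition scheme is what makes it aligned (and therefore switch-friendly):

-- Hypothetical example: a date-partitioned table with an aligned non-clustered index.
CREATE PARTITION FUNCTION pfOrderDate (datetime)
    AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01', '2012-01-01');

CREATE PARTITION SCHEME psOrderDate
    AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders
(
    OrderID    int      NOT NULL,
    OrderDate  datetime NOT NULL,
    CustomerID int      NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID, OrderDate)
) ON psOrderDate (OrderDate);

-- Aligned: the index is created on the same partition scheme as the table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON psOrderDate (OrderDate);

-- Not aligned: the index lives on a filegroup instead of the partition scheme,
-- which will block ALTER TABLE ... SWITCH on this table.
-- CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID) ON [PRIMARY];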

Another reason is around maintenance.  If the indexes are partitioned then only the active partition (based on your partitioning strategy) needs to be maintained (re-org’d).  For larger indexes, it can become prohibitive to re-build these when they become fragmented.
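
Continuing with the same hypothetical objects, maintenance can then be scoped to just the partition that needs it:

-- Check fragmentation partition by partition first.
SELECT index_id, partition_number, avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'LIMITED');

-- Reorganize only the partition that is fragmented (reorganize is always an online operation).
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REORGANIZE PARTITION = 5;

-- A rebuild can also be scoped to a single partition.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
    REBUILD PARTITION = 5;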

Probably the best reason to have your indexes aligned is that the amount of reads to satisfy your queries will significantly drop when that index is used.

I recently came across a scenario where I found that there were a couple of tables where the indexes weren’t aligned with the partition scheme.  Instead of going through every partitioned table in the database, I wrote a quick query that would tell me which objects had indexes that weren’t created on the partition scheme for the table.

This query will also show you the usage of the indexes which can be interesting as well.  Without further ado, here’s the query:

SELECT Db_name(database_id) AS dbname, 
       o.name               object_name, 
       i.name               index_name, 
       i.type_desc, 
       ds.name, 
       ds.type_desc, 
       s.user_seeks, 
       s.user_scans, 
       s.user_lookups, 
       s.user_updates, 
       s.last_user_seek, 
       s.last_user_update 
FROM   sys.objects AS o 
       JOIN sys.indexes AS i 
         ON o.object_id = i.object_id 
       JOIN sys.data_spaces ds 
         ON ds.data_space_id = i.data_space_id 
       LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s 
         ON i.object_id = s.object_id 
            AND i.index_id = s.index_id 
WHERE  o.TYPE = 'u' 
       AND i.Type in (1, 2) 
       AND o.name IN (SELECT a.name 
                      FROM   (SELECT ob.name, 
                                     ds.type_desc 
                              FROM   sys.objects ob 
                                     JOIN sys.indexes ind 
                                       ON ind.object_id = ob.object_id 
                                     JOIN sys.data_spaces ds 
                                       ON ds.data_space_id = ind.data_space_id 
                              GROUP  BY ob.name, 
                                        ds.type_desc) a 
                      GROUP  BY a.name 
                      HAVING COUNT(*) > 1) 
ORDER  BY object_name DESC  

That’s fun stuff… But then what happens when you actually want to FIX them?  If there are a lot of them, it makes for a fairly tedious task.  The code below will help.  Be absolutely certain that you check the options chosen.  For my database, I used MAXDOP = 32, ONLINE = ON, etc… Ensure that the settings are appropriate for your environment… enjoy!

SET NOCOUNT ON

-- Gather every clustered and non-clustered index on tables whose indexes span more than one data space type.
SELECT db_name(database_id)       AS DBName,
       o.name                     AS Object_Name,
       i.name                     AS Index_name,
       i.type_desc                AS IDX_typeDesc,
       ds.name,
       ds.type_desc,
       sch.name + '.' + o.name    AS SchemaName_TableName,
       CAST(NULL AS varchar(128)) AS partitonOrdinal
INTO   #tmp1
FROM   sys.objects AS o
       JOIN sys.indexes AS i
         ON o.object_id = i.object_id
       JOIN sys.schemas sch
         ON sch.schema_id = o.schema_id
       JOIN sys.data_spaces ds
         ON ds.data_space_id = i.data_space_id
       LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s
         ON i.object_id = s.object_id
            AND i.index_id = s.index_id
WHERE  o.type = 'u'
       AND i.type IN (1, 2) -- clustered and non-clustered indexes
       AND o.name IN (SELECT a.name
                      FROM   (SELECT ob.name,
                                     ds.type_desc
                              FROM   sys.objects ob
                                     JOIN sys.indexes ind
                                       ON ind.object_id = ob.object_id
                                     JOIN sys.data_spaces ds
                                       ON ds.data_space_id = ind.data_space_id
                              GROUP  BY ob.name,
                                        ds.type_desc) a
                      GROUP  BY a.name
                      HAVING COUNT(*) > 1)
ORDER  BY Object_Name DESC;

-- Capture the partitioning column (partition ordinal) from each table's clustered index.
UPDATE a
SET    partitonOrdinal = c.name
FROM   #tmp1 a
       JOIN sys.tables t
         ON a.Object_Name = t.name
       JOIN sys.schemas s
         ON s.schema_id = t.schema_id
       JOIN sys.indexes i
         ON i.object_id = t.object_id
            AND i.index_id < 2
       JOIN sys.index_columns ic
         ON ic.partition_ordinal > 0
            AND ic.index_id = i.index_id
            AND ic.object_id = t.object_id
       JOIN sys.columns c
         ON c.object_id = ic.object_id
            AND c.column_id = ic.column_id
WHERE  i.name = a.Index_name
       AND a.IDX_typeDesc = 'clustered';

-- Copy the clustered index's partition scheme name and partitioning column to the other rows for the same table.
UPDATE a
SET    name = b.name,
       partitonOrdinal = b.partitonOrdinal
FROM   #tmp1 a
       JOIN #tmp1 b
         ON a.Object_Name = b.Object_Name
WHERE  b.IDX_typeDesc = 'clustered';



WITH index_cte (table_name, index_name, type_desc, index_columns, index_included_columns,
filter_definition, is_unique, is_primary_key, is_unique_constraint,fill_factor,
is_padded, allow_row_locks, allow_page_locks, ignore_dup_key, data_compression_desc,
no_recompute, partitionname, partitonOrdinal)
as(
select sch.name+'.'+obj.name [table_name], i.name [index_name], i.type_desc,
REPLACE(
(select '['+col.name+'] '+
	CASE WHEN k.is_descending_key = 1 THEN 'DESC'
	ELSE 'ASC'
	END as [data()]
from sys.index_columns as k
inner join sys.columns col
		on k.object_id=col.object_id and k.column_id=col.column_id
	where k.object_id = i.object_id
	and k.index_id = i.index_id	and k.index_id<>0 and k.is_included_column<>1
	order by key_ordinal, k.column_id
for xml path(''))
,'C [','C, [') as [index_columns],
isnull(
REPLACE(
	(select '['+col.name+']' as [data()]
from sys.index_columns as k
inner join sys.columns col
		on k.object_id=col.object_id and k.column_id=col.column_id
	where k.object_id = i.object_id	and k.index_id = i.index_id and k.index_id<>0 and k.is_included_column<>0
	order by key_ordinal, k.column_id

for xml path(''))
,'] [','], [')
,'') as [index_included_columns],

	ISNULL(i.filter_definition,'') as filter_definition,
	is_unique, is_primary_key, is_unique_constraint, fill_factor, is_padded, allow_row_locks, allow_page_locks, ignore_dup_key, 
	data_compression_desc, no_recompute, a.name, a.partitonOrdinal 
from sys.indexes as i
inner join sys.objects obj
	on i.object_id=obj.object_id
inner join sys.schemas sch
	on obj.schema_id=sch.schema_id
inner join sys.partitions part
	on i.object_id=part.object_id
	and i.index_id=part.index_id
inner join sys.stats stats
	on i.object_id=stats.object_id
	and i.index_id=stats.stats_id

JOIN #tmp1 a ON a.index_name = i.name AND a.object_name = obj.name and a.type_desc = 'rows_filegroup'

where i.name is not null and obj.type<>'S' and sch.name<>'sys' and i.index_id>1
)
SELECT
'CREATE ' +
CASE is_unique
	WHEN 1 THEN 'UNIQUE '
	ELSE ''
END
+ index_cte.type_desc +' INDEX ' + index_cte.index_name COLLATE DATABASE_DEFAULT +' ON ' +table_name +' ('+index_columns+')'+
CASE index_included_columns
	WHEN '' THEN ''
	ELSE ' INCLUDE (' + index_included_columns + ') '
END+
CASE filter_definition
	WHEN '' THEN ''
	ELSE ' WHERE ' + filter_definition
END+
' 
	WITH ('+
	'FILLFACTOR = ' + CASE fill_factor WHEN 0 THEN '100' ELSE CAST(fill_factor as varchar) END+
	', PAD_INDEX = ' + CASE is_padded WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', ALLOW_PAGE_LOCKS = ' + CASE allow_page_locks WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', ALLOW_ROW_LOCKS = ' + CASE allow_row_locks WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', IGNORE_DUP_KEY = ' + CASE ignore_dup_key WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', STATISTICS_NORECOMPUTE = ' + CASE no_recompute WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', DATA_COMPRESSION = ' + data_compression_desc +
	', SORT_IN_TEMPDB = ON'+
	', ONLINE = ON'+
	', MAXDOP = 32'+
	', DROP_EXISTING = ON'+
	')' + '
	 ON  ' +  partitionname + ' (' + partitonOrdinal + ')
 GO'
as 'USE Database statement goes here (TNW)'
from index_cte
 order by table_name, index_cte.type_desc, index_cte.index_name

drop table #tmp1

Kerberos, SQL Server 2008, Windows Server 2008

Updated Jan. 4th, 2011 to include:

Service account information for SQL Browser  |  Loopback registry setting  |  Additional SPN’s for Analysis Services

So over the past couple of weeks I have been working on a multi-node Windows Server 2008 cluster (5 active, 1 passive).  Part of the prep work before the databases were migrated from their old Windows Server 2003 and SQL Server 2005 homes was to set up linked servers.  Simple task, right?

It always had been in the past.

Ever seen this error? 

Message
Error: 18456, Severity: 14, State: 11.

Login failed for user ‘NT AUTHORITY\ANONYMOUS LOGON’. Reason: Token-based server access validation failed with an infrastructure error. Check for previous errors.

Now, I’ve seen that error plenty in my life and typically it’s for a handful of well-known reasons such as these:  Resource #1 | Resource #2 | Resource #3 | Resource #4 … each of these provided a great deal of insight, but none of them resolved the issue.

One of the most frustrating elements of this issue was getting it to occur on demand (either failure or success).  It appeared to be an intermittent issue.  Well, at the end of the day it wasn’t an intermittent issue at all… it was simply the manner in which I was testing it.

To begin, I ran kerblist tickets from a command prompt (kerblist.exe is part of the Windows Server 2003 Resource Kit); this showed me which tickets had been generated.  … screech… back up.  Prior to all of this, it’s important to validate that your SPNs are already set and already accurate.  If you are unsure, visit this site and spend some time understanding Kerberos and SPNs.

Once I had a list of the tickets, I could see that some were generated as I expected, yet others that I expected to see there were missing… So I would run “kerblist purge” from the command prompt, successfully TEST a linked server… and not have a ticket generated.  c.o.n.f.u.s.i.n.g.

Well, it’s because my SQL Server session had already been established and the ticket was created when I originally connected to the server via Management Studio.  When I ran the “kerblist purge”, it did indeed remove the Kerberos tickets, but it did NOT sever my already established connection to SQL Server, so it continued to work even without a ticket being generated.

Now, the fix for the testing is pretty easy.  Don’t stay connected in Management Studio.  Close it and re-open it, or un-register and re-register the server.  Voila, this solved the “intermittent” issues with linked servers… kind of.  Throughout the process I was personally convinced that it was a physical host issue on some level or another.  So, I was moving resources from one node to another, testing, moving, testing, with all sorts of differing results.  It got to the point where I created 4 linked servers on each server in an attempt to build out a matrix of what worked and what didn’t.  Even after all of that work it was still inconsistent.
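
One quick sanity check that helps with exactly this kind of testing: from a freshly opened connection, ask SQL Server which authentication scheme the connection actually negotiated. If auth_scheme comes back as NTLM rather than KERBEROS, the Kerberos piece still isn’t working for that connection. A minimal query, assuming nothing beyond the standard DMVs:

-- Returns KERBEROS when the current connection authenticated via Kerberos, NTLM otherwise.
SELECT session_id, auth_scheme, net_transport
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;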

Enter Microsoft.  Tier one and two spent a lot of time on this (two weeks) and we did a slew of items ranging from registry entries to new service accounts to trying differing ways of connecting to the cluster using Kerberos, etc… none of it was successful.  I do owe an apology to tier two support for how frustrated I became after a week of working on this with them.  They were trying their hardest and I did not request an escalation point as soon as I should have.  Once I finally did, tier 3 called me and we got down to business.  Tait Neville with the Enterprise Platforms Support – Directory Services team is one heck of a guy.  We spent about 8 hours on the phone the first day and ran traces, kerblist commands galore, setspn commands six ways from Sunday, read logs, re-read logs, failed over the environment, blew away stuff, re-built stuff… and on and on… That day ended with us realizing that the inconsistency we were seeing was perhaps a larger issue than the lack of connectivity, because sometimes it did indeed work correctly.

Late in the call (7+ hours) Tait came across a newer KB article that showed promise but didn’t fully explain what we were seeing.  Nonetheless, we installed it, tried it out… inconsistency.  Went home, frustrated and tired.

The next day brought about a new attitude (namely mine).  It started with the realization that I wrote about above (Management Studio staying connected)… Once we could easily reproduce tickets being generated, progress was very swift.  At that point, we decided to clear out a huge swath of SPNs that had been created over the period of trying and testing everything.

On each node in the cluster (6 in our case) we ensured that the SQL Server Browser was running under the service account. 

On each node in the cluster we also ensured that the following registry entry was set to 1:  HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\DisableLoopbackCheck

Here’s what we ended up with for SPNs on the SQL Server(s).  *NOTE:  These are all based on the service account that SQL Server is running under.

MSSQLSvc/Servername.domainname:Port  (in all cases we are running named instances in SQL Server)  — A total of five entries with the servername and ports being different, as applicable.

MSSQLSvc/Servername.domainname:SQLServerNamedInstanceName  *Note:  .DomainName is the fully qualified domain name (FQDN)

MSSQLSvc/Servername:Port

Here’s what we ended up with for Analysis Services:

MSOLAPDisco.3/ServerName:SQLServerNamedInstanceName

MSOLAPSvc.3/ServerName:SQLServerNamedInstanceName

MSOLAPDisco.3/ServerName

MSOLAPSvc.3/ServerName

MSOLAPDisco.3/ServerName.DomainName  *Note:  .DomainName is the fully qualified domain name (FQDN)

MSOLAPSvc.3/ServerName.DomainName *Note:  .DomainName is the fully qualified domain name (FQDN)

MSOLAPDisco.3/ServerName.DomainName:SQLServerNamedInstanceName  *Note:  .DomainName is the fully qualified domain name (FQDN)

MSOLAPSvc.3/ServerName.DomainName:SQLServerNamedInstanceName *Note:  .DomainName is the fully qualified domain name (FQDN)

Wait for AD to replicate…. ho hum, ho hum…. (in our experience, this can take up to 20 minutes)… voila.  Works like a charm.

      Summary:

It turned out that we had a couple of issues in our environment.

1.)  We had too many SPNs (none of them malformed), but not all of them were needed and that was causing some odd issues.

2.)  Every time we failed over the SQL Server resource to another node, we would see it work, sometimes and not other times.  The hotfix resolved this.

3.)  SQL Server Browser Service was not set to the Service Account.

4.)  DisableLoopbackCheck was set to 0.

To Paul (the SE here) and Tait (MSFT engineer), thank you.

Fun times… well, not really, but I learned a lot and hopefully this post will also help you if you find yourself in a similar scenario.

The main take away for me is: 

1.)  Attention to detail, specifically with SPN creation and doing everything in the same order on multiple machines in a cluster is critical.

2.)  It’s absolutely critical to have a reproducible set of steps that you can test and reproduce at will.  A lot of time was spent on this case because of what appeared to be inconsistent behavior; when in reality, it was the test that was causing a lot of the confusion.

3.)  When you just can’t figure it out, a call to Microsoft, while long, time-consuming and at times frustrating, can yield tremendous results.

Plan Guides

In November 2008 I had the opportunity to speak at Dev Connections.  One of the subjects that I covered was Plan Guides.  The abstract read “In SQL Server 2008, there exists a way to change or create a new plan for a query without touching the query itself. In environments where the database was provided by a third-party or in a high availability environment, these will help to decrease your impact on production systems while improving performance at the same time. Over the course of the session, we will walk through the creation of a plan guide.“ 

The reason for this blog post is that I was discussing the upcoming event that the Denver SQLUG is hosting with Kalen Delaney on June 9th, 2010.  Register here

I’ve already registered and I’m very much looking forward to hearing her talk on this subject.  I wanted to provide some background and familiarity with plan guides for those who follow my blog and will be attending the UG meeting on the 9th.

For me personally, understanding plan guides, why they matter, how to use them and what it’s all about stems from a solid understanding of execution plans and how SQL Server chooses to get to the data based on the query that you are passing it.  This is not a trivial subject, and one that is best understood with a solid foundation in HOW the optimizer reports the methods it is using to retrieve data.

In my slide deck these questions are answered:  

     Q:  Why Plan Guides?  

     A:  Due to increased complexity in database environments today.  These complexities include LINQ to SQL and its impact on the plan cache, C.O.T.S. (Commercial Off The Shelf) implementations, and the ability to be proactive vs. reactive in your environments.

     Q:  What are they?  (Plan Guides) 

     A:  Plan Guides can be used to optimize the performance of queries when you cannot or do not want to change the text of the query directly.  Plan Guides can be useful when a subset of queries in a database application deployed from a third-party vendor are not performing as expected and the ability to change the query does not exist.  Plan Guides influence optimization of queries by attaching query hints or a fixed query plan to them.  (The last answer is from Microsoft). 

The slide deck and the demo also talk about the different memory pools in SQL Server and how to “see” the cache on the server.  I also touch on the life of cache and why it’s relevant. 
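
The deck goes into more detail, but as a rough sketch of what “seeing” the cache can look like (this is not the exact demo query; any database will do), something along these lines lists cached plans along with their text and reuse counts:

-- Peek at the plan cache: one row per cached plan, with its text and reuse count.
SELECT TOP (50)
       cp.objtype,
       cp.usecounts,
       cp.size_in_bytes,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;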

One of the biggest items in this presentation revolved around parameterization (forced or simple).  This is an incredibly powerful option that must be tested thoroughly in your environment.  The gist of it is that if you have a ton of compilations in your environment, changing parameterization to forced might be something to really look into, as it will increase your concurrency due to the decrease in compilations.
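
If, after that thorough testing, forced parameterization turns out to fit your workload, the change itself is a single database-level option; the database name here is just the demo database used below:

-- Database-level setting; verify compilation counts and plan reuse before and after.
ALTER DATABASE AdventureWorks2008 SET PARAMETERIZATION FORCED;

-- To revert:
ALTER DATABASE AdventureWorks2008 SET PARAMETERIZATION SIMPLE;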

Up until now in the demonstration, all I had covered was background information that is helpful to truly grasp Plan Guides… the last half of the presentation is on plan guides, how to create them, disable them, view them etc…    

Without further ado, here is the PowerPoint and the set of T-SQL scripts that I used in my demos.  I’ve also commented the code that you can find below, as it should help you when you walk through the PowerPoint presentation.

We begin with an Object Plan Guide   


USE AdventureWorks2008;
GO

CREATE PROCEDURE Sales.GetSalesOrderByCountry (@Country_region nvarchar(60))
AS
BEGIN
    SELECT *
    FROM Sales.SalesOrderHeader AS h, Sales.Customer AS c,
        Sales.SalesTerritory AS t
    WHERE h.CustomerID = c.CustomerID
        AND c.TerritoryID = t.TerritoryID
        AND CountryRegionCode = @Country_region
END;
GO

-- Create an OBJECT plan guide
EXEC sp_create_plan_guide
    @name = N'Guide1',
    @stmt = N'SELECT * FROM Sales.SalesOrderHeader AS h,
        Sales.Customer AS c,
        Sales.SalesTerritory AS t
        WHERE h.CustomerID = c.CustomerID
            AND c.TerritoryID = t.TerritoryID
            AND CountryRegionCode = @Country_region',
    @type = N'OBJECT',
    @module_or_batch = N'Sales.GetSalesOrderByCountry',
    @params = NULL,
    @hints = N'OPTION (OPTIMIZE FOR (@Country_region = N''US''))';

-- @hints: base this on the selectivity of the data. "US" is a good choice because it will be
-- representative of roughly 2/3rds of the depth in the index / heap.

-- Where do the created plans exist?
SELECT * FROM sys.plan_guides;  -- (notice the hints column)
GO

-- Disable the plan guide.
EXEC sp_control_plan_guide N'DISABLE', N'Guide1';
GO
SELECT * FROM sys.plan_guides;  -- now shows as disabled

-- Enable the plan guide.
EXEC sp_control_plan_guide N'ENABLE', N'Guide1';
GO
SELECT * FROM sys.plan_guides;  -- now shows as enabled

-- Drop the plan guide.
EXEC sp_control_plan_guide N'DROP', N'Guide1';
GO
SELECT * FROM sys.plan_guides;  -- now it's gone

-- Disable ALL plan guides.
EXEC sp_control_plan_guide N'DISABLE ALL';
GO

   

-- SQL plan guide
-- Example: set the degree of parallelism (MAXDOP) to 1.
EXEC sp_create_plan_guide
    @name = N'Guide1',
    @stmt = N'SELECT TOP 1 *
              FROM Sales.SalesOrderHeader
              ORDER BY OrderDate DESC',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (MAXDOP 1)';
GO

-- NOTE: any OPTION clause in a SELECT statement will work with the @hints parameter.

-- Template plan guide
-- Let's force the parameterization on this statement.
DECLARE @stmt nvarchar(max);
DECLARE @params nvarchar(max);
EXEC sp_get_query_template
    N'SELECT * FROM AdventureWorks2008.Sales.SalesOrderHeader AS h
      INNER JOIN AdventureWorks2008.Sales.SalesOrderDetail AS d
          ON h.SalesOrderID = d.SalesOrderID
      WHERE h.SalesOrderID = 45639;',
    @stmt OUTPUT,
    @params OUTPUT
EXEC sp_create_plan_guide N'TemplateGuide1',
    @stmt,
    N'TEMPLATE',
    NULL,
    @params,
    N'OPTION(PARAMETERIZATION FORCED)';
GO

-- NOTE: This forces the query to be parameterized and then a guide can be created against it.

-- Creating a plan guide using an XML showplan

 

USE AdventureWorks2008;
GO
SELECT City, StateProvinceID, PostalCode FROM Person.Address ORDER BY PostalCode DESC;
GO
DECLARE @xml_showplan nvarchar(max);
SET @xml_showplan = (SELECT query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, DEFAULT, DEFAULT) AS qp
    WHERE st.text LIKE N'SELECT City, StateProvinceID, PostalCode FROM Person.Address ORDER BY PostalCode DESC;%');

EXEC sp_create_plan_guide
    @name = N'Guide1_from_XML_showplan',
    @stmt = N'SELECT City, StateProvinceID, PostalCode FROM Person.Address ORDER BY PostalCode DESC;',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = @xml_showplan;
GO

-- Create a plan guide for the query by specifying the query plan in the plan cache via the plan handle.
USE AdventureWorks2008;
GO
SELECT WorkOrderID, p.Name, OrderQty, DueDate
FROM Production.WorkOrder AS w
JOIN Production.Product AS p ON w.ProductID = p.ProductID
WHERE p.ProductSubcategoryID > 4
ORDER BY p.Name, DueDate;
GO

-- Inspect the query plan by using dynamic management views.
SELECT * FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, qs.statement_start_offset, qs.statement_end_offset) AS qp
WHERE text LIKE N'SELECT WorkOrderID, p.Name, OrderQty, DueDate%';
GO

-- Create the plan guide from the cached plan handle.
DECLARE @plan_handle varbinary(64);
DECLARE @offset int;
SELECT @plan_handle = plan_handle, @offset = qs.statement_start_offset
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS st
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle, qs.statement_start_offset, qs.statement_end_offset) AS qp
WHERE text LIKE N'SELECT WorkOrderID, p.Name, OrderQty, DueDate%';

EXECUTE sp_create_plan_guide_from_handle
    @name = N'Guide8',
    @plan_handle = @plan_handle,
    @statement_start_offset = @offset;
GO

   

Reference the previous example of ….. “Means that there are 4811 plans for three nearly identical statements”.  With this, a plan can be created against that string: ‘SELECT WorkOrderID, p.Name, OrderQty, DueDate%’

This is HUGE!!!  Think about the Commercial-Off-The-Shelf (COTS) implementations in your environment where you are largely bound by licensing and cannot physically modify the schema, triggers, clustered indexes, etc…  Additionally, there are still many applications that use a lot of dynamically built or in-line T-SQL that doesn’t always perform well or, in the case of the example that I cited, has hundreds or thousands of execution plans for the same T-SQL statement.

This next piece of T-SQL checks the validity of the plan guides in the system.  An empty result set means that all plan guides are valid.

 

USE AdventureWorks2008;
GO
SELECT plan_guide_id, msgnum, severity, state, message
FROM sys.plan_guides
CROSS APPLY fn_validate_plan_guide(plan_guide_id);
GO

   

Here is a SELECT statement taken from SQL Profiler.  Look at the execution plan.

SELECT COUNT(*) AS c
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.SalesOrderDetail AS d
  ON h.SalesOrderID = d.SalesOrderID
WHERE h.OrderDate BETWEEN '20000101' and '20050101';
GO

Note the Hash Match inner join.  Now, let’s make it a Merge Join instead.

EXEC sp_create_plan_guide
    @name = N'MyGuideProfiler1',
    @stmt = N' SELECT COUNT(*) AS c
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.SalesOrderDetail AS d
  ON h.SalesOrderID = d.SalesOrderID
WHERE h.OrderDate BETWEEN ''20000101'' and ''20050101'';
',
    @type = N'SQL',
    @module_or_batch = NULL,
    @params = NULL,
    @hints = N'OPTION (MERGE JOIN)';

Spacing is critical on this.  Run the statement again to ensure that the plan guide forces a merge join.

SELECT COUNT(*) AS c
FROM Sales.SalesOrderHeader AS h
INNER JOIN Sales.SalesOrderDetail AS d
  ON h.SalesOrderID = d.SalesOrderID
WHERE h.OrderDate BETWEEN '20000101' and '20050101';
GO

-- Drop the plan guides (if needed).
EXEC sp_control_plan_guide N'DROP', N'MyGuideProfiler1';
EXEC sp_control_plan_guide N'DROP', N'TemplateGuide1';
EXEC sp_control_plan_guide N'DROP', N'Guide1';

    

– Jeremy Lowell http://www.datarealized.com || http://datarealized.wordpress.com || http://sqlperspectives.wordpress.com 

Feel free to ask questions as you step through this and I look forward to seeing everyone on the 9th!

Statistics for partitions

As data volumes continue to expand and partitioning is further adopted in SQL Server 2008, the lack of control over statistics at the partition level will become a larger issue for more companies.

I placed this as a Connect item on Microsoft’s site in June of 2009.  Per the final comment from Microsoft, the more votes it has, the higher its priority will become.

If you are reading this post, please vote, here:  https://connect.microsoft.com/SQLServer/feedback/details/468517/update-statistics-at-the-partition-level

I’m posting the connect details below. 

I’d like to see Update Statistics modified to include partitionID in combination with the existing table or indexed view name.

With the improvements of SQL Server 2008 and partitioning, very large result sets are being seen at one of my clients that utilizes partitioning. Having to update statistics against partitions that haven’t changed, in order to achieve the change on the partition that actually needs it, creates additional overhead that is not required. It also forces a change to plans that wouldn’t otherwise be altered, which causes memory churn in the proc cache.

One other related note: you can defrag an individual partition’s index, but doing so will not update the statistics.
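
To make the gap concrete, here is a sketch with hypothetical object names: even when index maintenance is scoped to a single partition, the statistics refresh still has to run against the whole index and therefore rescans partitions that never changed.

-- Maintain just the one partition that changed...
ALTER INDEX IX_FactSales_OrderDate ON dbo.FactSales
    REORGANIZE PARTITION = 42;

-- ...but statistics can only be refreshed for the entire index/table,
-- which rescans partitions that haven't changed at all.
UPDATE STATISTICS dbo.FactSales IX_FactSales_OrderDate WITH FULLSCAN;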

Proposed Solution
Two parts:

1.)
Modify Update Statistics to include an option for declaring either a partition name or partitionID for either a table or an indexed view.
i.e…
UPDATE STATISTICS table_or_indexed_view_name
[
    {
        { index_or_statistics_name }
      | ( { partition_index_or_partition_statistics_name } [ ,...n ] )
    }
]

2.) Upon completion of the defrag of an individual partition index have it update the statistics for that partition.

 
Benefits
Improved Reliability
Improved Administration
Improved Performance

DMV's and t-sql for the DBA

The Rocky Mountain Tri-Fecta was today and I had the opportunity to speak.  I mentioned to those brave souls who attended my session that I would post the slide deck and the demos.

Feel free to download them (links are at the end of this post) and provide your comments / thoughts.

I had a great time attending the event and I learned a great number of things.  A special thanks to everyone involved with putting this together.  I’m still amazed at the stacks of empty pizza boxes.  WOW!

Part of this post is also to ask the readers of this blog if they would be interested in checking out a new tool that I’ve been testing over the last couple of months.  It’s formally known as SmartMon and it provides a form of pattern analysis against the data looking for anomalies and patterns of similar behavior.  It has decreased the amount of time that I spend doing analysis on perfmon files by many factors.  It’s a great tool and the guys building it are stand up guys in the San Jose, CA area.  If you are interested in becoming one of the first handful of people to see the release of this product, e-mail feedback@perfmonanalysis.com.  Please mention that I referred you.

Without further ado, enjoy the SQL and thanks again for the opportunity to share what I’ve learned with all of you.

RockyMountainTriFecta (Link to the slide deck PPT)

Essentials-DMVs_tsql_dba (t-sql – Demo)

Technical Debt, Normalization Debt, Requirements Debt, Git-r-done Debt, OH MY!

An individual that I respect a great deal recently commented on a blog post that I wrote, in a way that I didn’t see coming.  It’s interesting how, when one takes a moment to get what’s in their brain out onto paper, the meaning it can take on for others.  His comment struck a chord with me and put into words something that I talk about all of the time, but he did it in a manner that was concise, relevant and easy to digest.

The basic premise revolves around “debt”, technical debt to be specific.  In my initial blog post I was attempting to make the case for the critical nature of Finding Functional Dependencies as they relate to a business and the nature of that business as it’s persisted, stored and understood in the form of data.  The original content that spurred this post can be found in chapter three of this book “SQL Server MVP Deep Dives” and was authored by Hugo Kornelis.

Cutting to the chase, the point of this post is partially to share my opinion but it’s also the hope that it will resonate with a larger audience and we can begin to clearly communicate the danger with technical debt. 

As defined in 1992 by Ward Cunningham, technical debt was described as:  “Shipping first time code is like going into debt.  A little debt speeds development so long as it is paid back promptly with a rewrite.  The danger occurs when the debt is not repaid.  Every minute spent on not-quite-right code counts as interest on that debt.  Entire engineering organizations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise.”

Over the years I have had countless conversations about incurring the debt, as defined above, with a boss, a client, a co-worker, the business and even CxO level executives.  The outcome is rarely any different from one time to another and it typically boils down to “is it good enough?”.  I like to compare this question to that of a moped in the Rocky Mountains on a beautiful July afternoon…. Is it enough?  Heck yeah, it’s perfect!  I get the sunlight reflecting from the Ray-Bans, the wind through my hair, absolutely nothing inhibiting my view of the beautiful Rocky Mountains.  Then, as time goes on and it’s the middle of December and you find yourself in -32 degree weather with the wind blowing at 50 mph and your wife telling you that it’s time to head to the emergency room as you are an expecting father.  Suddenly that moped is just a bit short of being adequate.  What to do?  Well, there’s no time to head to the car dealership.  There’s no time to put studs on the moped and build a little canopy around it.  There’s no time for anything other than what you have immediately available.  In this case, not enough thought had been put into the original decision. 

Granted, that story is a bit beyond silly when it comes to the amount of thought that most organizations and individuals put into their technical, architectural and design considerations.  What I’ve seen isn’t the amount of respect that a decision is given but rather the context and understanding of the subject.  Another example that illustrates this well outside of the technical arena begins just like the first story above.  Young man moves from South America.  Let’s say he’s moving from Quito, Ecuador to Vail, Colorado.  This guy brings his lovely new bride to Vail, CO, they buy a little cabin and decide that transportation would be in need.  He thinks about the elevation and decides that since Vail, CO has an altitude of 8,200 feet and Quito Ecuador is roughly 9,895 feet in altitude then he shouldn’t have to worry about cold weather… after all, the higher in the atmosphere you go, the colder it gets and since he is decreasing in altitude, he should be fine.  A logical deduction but unfortunately he has a missing data element that he’s unaware of that completely changes the equation.  Latitude.  A simple little word that’s easy to understand for those folks who live in an area where it changes their sleeping patterns, eating patterns, etc… but for this guy, having come from Quito, Ecuador, which happens to be on the equator; latitude had no bearing or effect.

I hope that those illustrations helped to get the point across that technical debt is not something to be trifled with.  For the moment that you need the capital that should have been invested in your solution, product or service offering and you find yourself in debt, it will undoubtedly be at a very inopportune time where the answers might not be acceptable.

Over the years I have had the pleasure of arguing this exact scenario before, but without the language or the concise understanding of what I was trying to communicate.  I would often default to my position on modular design and loosely coupled methodologies, especially in an iterative development process.  While I don’t have empirical proof of this, it appears to me that as technology continues to mature at an overwhelmingly rapid pace, technical debt is being incurred not only in design considerations but at an even more expensive level: the feeling of being “cool”.  I must admit, like every driven and successful IT person out there, I LOVE to be working on the newest, fastest, best stuff I can find.  I often find myself asking the question: is this sound in light of what this client needs?  Is this sound in light of where they are?  Would it make more sense to S.T.O.P. for a moment and address what isn’t fun in order to do the right thing?  Perhaps moving to the “cloud” isn’t as important as addressing the normalization debt that has already been incurred.  Perhaps that new third-party application really won’t solve all of the problems the sales guy said it would… perhaps it will simply be one more thing added to the VISA.

tick.tock.  Every moment that goes by, every line of code that’s written against the initial short-cut is interest.  That’s a lot of interest.  Stop compounding your debt’s interest and be the unpopular one who stands up for doing it right the first time.  Part of that process, in my experience, is to first and foremost admit that you don’t know it all and, more importantly, admit that you might not have all of the facts necessary to make the correct decision.  Don’t let that be a reason to shy away from this.  Let that be the reason to embrace it!  Think of what can be learned, think of what you can contribute and, more importantly, think of how driven and excited you will be knowing that you are paying down your company’s technical debt!

 tick.tock.

What three events brought you here?

I was tagged by Chris Shaw for this…  It’s an open-ended question that I could have a lot of fun with.  For instance, I could simply say that what brought me here was a need for a larger house, the need for a dedicated home office and a killer deal.  That’s what brought me to where I’m currently sitting as I write this blog post…  I don’t suppose that’s the intent, so I’ll do my best to share a bit of my history.

1.)  I was in marketing.  I did a lot of work with pulling data from an AS400, “importing” it into Access so that I could then “import” it into Word and mail merge direct marketing campaigns.  A lot of fun.  After having all of that fun, there came the day that I was introduced to SQL Server.  Not knowing any better, I thought it to be a form of Access.  Thankfully it wasn’t and thus began my career as a ‘geek’.

2.)  I did some bouncing around in the early years at companies such as MCI (WorldCom), Norwest (Wells Fargo) and Lucent (Avaya).  Do you see the trend there?  Every company that I went to work for was purchased by another company.  I thought to myself, there has to be a better way to live than going through these mergers / acquisitions, etc… so I went independent.  Since that time, I have had a couple of stints as an employee at startups, but by and large, I’ve been independent and I love it.

3.)  Being independent has upsides and downsides.  For one, I don’t have the traditional “boss”, instead, I have many of them in the form of clients.  This is awesome in a lot of ways, yet significantly more demanding in others.  It has allowed me the freedom to look for the type of work that I want to do and not worry about being on a certain project team or growing stagnant in a particular organization fulfilling a particular role.

Going forward I hope to expand my little business as I believe that I’m able to deliver quality better than most of the big consulting shops out there for less money overall.

With that said, if you are in need of some SQL Server help, I’d love to hear from you.

New Blog — sqlPerspectives

I’m excited to introduce everyone who follows this blog to a new blog: http://sqlperspectives.wordpress.com/.  This blog will have 3 regular authors and one guest author, and will post on a weekly basis, working through the content of the book SQL Server MVP Deep Dives, available here: http://www.manning.com/nielsen/.

Add this new blog site to your favorites.