Archive for the ‘ SQL Server 2005 ’ Category

Index Aligned Partitions and Resolution

Most of us are aware of partitioning in SQL Server.  It was introduced in SQL Server 2005 and further enhanced in later releases.  The value of partitioning a table varies: some use it as a means of archiving off older data, others use it for performance gains.  Either way, there are some important things to keep in mind when partitioning a table.

One of those items brings us to the point of this post: aligning your non-clustered indexes on the partition scheme.  This is an important rule, and more often than not it should be followed.

One reason it's important is that if you ever want to switch partitions, all of the indexes associated with that object must be aligned (ref: http://msdn.microsoft.com/en-us/library/ms187526.aspx).
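As a quick illustration of why alignment matters for switching, a sketch like the following (the table, staging table, and partition number are hypothetical names for illustration) will fail if any index on the source table is not aligned with its partition scheme:

-- Hypothetical example: switch partition 3 of dbo.Sales out to a staging table.
-- This only succeeds when every index on dbo.Sales is aligned; if even one
-- non-clustered index lives on its own filegroup, SQL Server raises an error
-- and the switch is refused.
ALTER TABLE dbo.Sales
SWITCH PARTITION 3 TO dbo.Sales_Staging;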

Another reason is maintenance.  If the indexes are partitioned, then only the active partition (based on your partitioning strategy) needs to be maintained (reorganized).  For larger indexes, rebuilding the entire index whenever it becomes fragmented can be prohibitive.
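For aligned indexes, maintenance can be scoped to a single partition; a sketch (index and table names are hypothetical) looks like this:

-- Hypothetical example: maintain only partition 4 of an aligned index
-- instead of touching the whole index.
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REORGANIZE PARTITION = 4;

-- A rebuild can also target a single partition (note: in SQL Server
-- 2005/2008 a single-partition rebuild is an offline operation):
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD PARTITION = 4;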

Probably the best reason to have your indexes aligned is partition elimination: the number of reads required to satisfy your queries can drop significantly when an aligned index is used.

I recently came across a scenario where I found that there were a couple of tables where the indexes weren’t aligned with the partition scheme.  Instead of going through every partitioned table in the database, I wrote a quick query that would tell me which objects had indexes that weren’t created on the partition scheme for the table.

This query will also show you the usage of the indexes which can be interesting as well.  Without further ado, here’s the query:

SELECT Db_name(database_id) AS dbname, 
       o.name               object_name, 
       i.name               index_name, 
       i.type_desc, 
       ds.name, 
       ds.type_desc, 
       s.user_seeks, 
       s.user_scans, 
       s.user_lookups, 
       s.user_updates, 
       s.last_user_seek, 
       s.last_user_update 
FROM   sys.objects AS o 
       JOIN sys.indexes AS i 
         ON o.object_id = i.object_id 
       JOIN sys.data_spaces ds 
         ON ds.data_space_id = i.data_space_id 
       LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s 
         ON i.object_id = s.object_id 
            AND i.index_id = s.index_id 
WHERE  o.TYPE = 'u' 
       AND i.Type in (1, 2) 
       AND o.name IN (SELECT a.name 
                      FROM   (SELECT ob.name, 
                                     ds.type_desc 
                              FROM   sys.objects ob 
                                     JOIN sys.indexes ind 
                                       ON ind.object_id = ob.object_id 
                                     JOIN sys.data_spaces ds 
                                       ON ds.data_space_id = ind.data_space_id 
                              GROUP  BY ob.name, 
                                        ds.type_desc) a 
                      GROUP  BY a.name 
                      HAVING COUNT(*) > 1) 
ORDER  BY object_name DESC  

That’s fun stuff… but then what happens when you actually want to FIX them?  If there are a lot of them, it makes for a fairly tedious task.  The code below will help.  Be absolutely certain that you check the options chosen.  For my database, I used MAXDOP = 32, ONLINE = ON, etc.  Ensure that the settings are appropriate for your environment… enjoy!

SET NOCOUNT ON

SELECT Db_name(database_id)       AS DBName,
       o.name                     AS Object_Name,
       i.name                     AS Index_name,
       i.type_desc                AS IDX_typeDesc,
       ds.name,
       ds.type_desc,
       sch.name + '.' + o.name    AS SchemaName_TableName,
       CAST(NULL AS varchar(128)) AS partitonOrdinal
INTO   #tmp1
FROM   sys.objects AS o
       JOIN sys.indexes AS i
         ON o.object_id = i.object_id
       JOIN sys.schemas sch
         ON sch.schema_id = o.schema_id
       JOIN sys.data_spaces ds
         ON ds.data_space_id = i.data_space_id
       LEFT OUTER JOIN sys.dm_db_index_usage_stats AS s
         ON i.object_id = s.object_id
            AND i.index_id = s.index_id
WHERE  o.type = 'u'
       AND i.type IN ( 1, 2 ) -- Clustered and Non-Clustered indexes
       AND o.name IN (SELECT a.name
                      FROM   (SELECT ob.name,
                                     ds.type_desc
                              FROM   sys.objects ob
                                     JOIN sys.indexes ind
                                       ON ind.object_id = ob.object_id
                                     JOIN sys.data_spaces ds
                                       ON ds.data_space_id = ind.data_space_id
                              GROUP  BY ob.name,
                                        ds.type_desc) a
                      GROUP  BY a.name
                      HAVING Count(*) > 1)
ORDER  BY Object_Name DESC;

UPDATE a
SET    partitonOrdinal = c.name
FROM   #tmp1 a
       JOIN sys.tables t
         ON a.Object_Name = t.name
       JOIN sys.schemas s
         ON s.schema_id = t.schema_id
       JOIN sys.indexes i
         ON i.object_id = t.object_id
            AND i.index_id < 2
       JOIN sys.index_columns ic
         ON ic.partition_ordinal > 0
            AND ic.index_id = i.index_id
            AND ic.object_id = t.object_id
       JOIN sys.columns c
         ON c.object_id = ic.object_id
            AND c.column_id = ic.column_id
WHERE  i.name = a.Index_name
       AND a.IDX_typeDesc = 'clustered';

UPDATE a
SET    name = b.name,
       partitonOrdinal = b.partitonOrdinal
FROM   #tmp1 a
       JOIN #tmp1 b
         ON a.Object_Name = b.Object_Name
WHERE  b.IDX_typeDesc = 'clustered';



WITH index_cte (table_name, index_name, type_desc, index_columns, index_included_columns,
filter_definition, is_unique, is_primary_key, is_unique_constraint,fill_factor,
is_padded, allow_row_locks, allow_page_locks, ignore_dup_key, data_compression_desc,
no_recompute, partitionname, partitonOrdinal)
as(
select sch.name+'.'+obj.name [table_name], i.name [index_name], i.type_desc,
REPLACE(
(select '['+col.name+'] '+
	CASE WHEN k.is_descending_key = 1 THEN 'DESC'
	ELSE 'ASC'
	END as [data()]
from sys.index_columns as k
inner join sys.columns col
		on k.object_id=col.object_id and k.column_id=col.column_id
	where k.object_id = i.object_id
	and k.index_id = i.index_id	and k.index_id<>0 and k.is_included_column<>1
	order by key_ordinal, k.column_id
for xml path(''))
,'C [','C, [') as [index_columns],
isnull(
REPLACE(
	(select '['+col.name+']' as [data()]
from sys.index_columns as k
inner join sys.columns col
		on k.object_id=col.object_id and k.column_id=col.column_id
	where k.object_id = i.object_id	and k.index_id = i.index_id and k.index_id<>0 and k.is_included_column<>0
	order by key_ordinal, k.column_id

for xml path(''))
,'] [','], [')
,'') as [index_included_columns],

	ISNULL(i.filter_definition,'') as filter_definition,
	is_unique, is_primary_key, is_unique_constraint, fill_factor, is_padded, allow_row_locks, allow_page_locks, ignore_dup_key, 
	data_compression_desc, no_recompute, a.name, a.partitonOrdinal 
from sys.indexes as i
inner join sys.objects obj
	on i.object_id=obj.object_id
inner join sys.schemas sch
	on obj.schema_id=sch.schema_id
inner join sys.partitions part
	on i.object_id=part.object_id
	and i.index_id=part.index_id
inner join sys.stats stats
	on i.object_id=stats.object_id
	and i.index_id=stats.stats_id

JOIN #tmp1 a ON a.index_name = i.name AND a.object_name = obj.name and a.type_desc = 'rows_filegroup'

where i.name is not null and obj.type<>'S' and sch.name<>'sys' and i.index_id>1
)
SELECT
'CREATE ' +
CASE is_unique
	WHEN 1 THEN 'UNIQUE '
	ELSE ''
END
+ index_cte.type_desc +' INDEX ' + index_cte.index_name COLLATE DATABASE_DEFAULT +' ON ' +table_name +' ('+index_columns+')'+
CASE index_included_columns
	WHEN '' THEN ''
	ELSE ' INCLUDE (' + index_included_columns + ') '
END+
CASE filter_definition
	WHEN '' THEN ''
	ELSE ' WHERE ' + filter_definition
END+
' 
	WITH ('+
	'FILLFACTOR = ' + CASE fill_factor WHEN 0 THEN '100' ELSE CAST(fill_factor as varchar) END+
	', PAD_INDEX = ' + CASE is_padded WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', ALLOW_PAGE_LOCKS = ' + CASE allow_page_locks WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', ALLOW_ROW_LOCKS = ' + CASE allow_row_locks WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', IGNORE_DUP_KEY = ' + CASE ignore_dup_key WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', STATISTICS_NORECOMPUTE = ' + CASE no_recompute WHEN 1 THEN 'ON' ELSE 'OFF' END+
	', DATA_COMPRESSION = ' + data_compression_desc +
	', SORT_IN_TEMPDB = ON'+
	', ONLINE = ON'+
	', MAXDOP = 32'+
	', DROP_EXISTING = ON'+
	')' + '
	 ON  ' +  partitionname + ' (' + partitonOrdinal + ')
 GO'
as 'USE Database statement goes here (TNW)'
from index_cte
 order by table_name, index_cte.type_desc, index_cte.index_name

drop table #tmp1

Statistics for partitions

As data volumes continue to expand and partitioning is further adopted in SQL Server 2008, the lack of control over statistics at the partition level will become a larger issue for more companies.

I filed this as a Connect item on Microsoft’s site in June of 2009.  Per the final comment from Microsoft, the more votes it receives, the higher its priority will become.

If you are reading this post, please vote, here:  https://connect.microsoft.com/SQLServer/feedback/details/468517/update-statistics-at-the-partition-level

I’m posting the connect details below. 

I’d like to see Update Statistics modified to include partitionID in combination with the existing table or indexed view name.

With the improvements of 2k8 and partitioning, very large result sets are being seen at one of my clients that utilizes partitioning. Having to update statistics against partitions that haven’t changed in order to achieve the change on the partition that actually needs it, creates additional overhead that is not required. It also forces a change to plans that wouldn’t otherwise be altered which causes for memory churn in the proc cache.

One other related note is that you can defrag an individual partition index, but doing so will not update the statistics.
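To make the gap concrete, here is a sketch (the object names are hypothetical) of what you can and cannot do today: a single partition of an aligned index can be rebuilt, but the statistics update cannot be scoped the same way:

-- You CAN rebuild (defragment) just one partition of an aligned index:
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 12;

-- But you CANNOT scope the statistics update to that partition;
-- this refreshes the statistics for the whole index across all partitions:
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate;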

Proposed Solution
Two parts:

1.)
Modify Update Statistics to include an option for declaring either a partition name or partitionID for either a table or an indexed view.
i.e…
UPDATE STATISTICS table_or_indexed_view_name
[
{
{ index_or_statistics__name }
{ index_or_statistics_name }
| ( { Partition_index_or_partition_statistics_name } [ ,...n ] )
}
]

2.) Upon completion of the defrag of an individual partition index have it update the statistics for that partition.

 
Benefits
Improved Reliability
Improved Administration
Improved Performance
Other (please provide details below)

DMV's and t-sql for the DBA

The Rocky Mountain Tri-Fecta was today and I had the opportunity to speak.  I mentioned to those brave souls who attended my session that I would post the slide deck and the demos.

Feel free to download them (links are at the end of this post) and provide your comments / thoughts.

I had a great time attending the event and I learned a great number of things.  A special thanks to everyone involved with putting this together.  I’m still amazed at the stacks of empty pizza boxes.  WOW!

Part of this post is also to ask the readers of this blog if they would be interested in checking out a new tool that I’ve been testing over the last couple of months.  It’s formally known as SmartMon and it provides a form of pattern analysis against the data looking for anomalies and patterns of similar behavior.  It has decreased the amount of time that I spend doing analysis on perfmon files by many factors.  It’s a great tool and the guys building it are stand up guys in the San Jose, CA area.  If you are interested in becoming one of the first handful of people to see the release of this product, e-mail feedback@perfmonanalysis.com.  Please mention that I referred you.

Without further ado, enjoy the SQL and thanks again for the opportunity to share what I’ve learned with all of you.

RockyMountainTriFecta (Link to the slide deck PPT)

Essentials-DMVs_tsql_dba (t-sql – Demo)

New Blog — sqlPerspectives

I’m excited to introduce everyone who follows this blog to a new blog: http://sqlperspectives.wordpress.com/.  This blog will have three regular authors and one guest author, and will post on a weekly basis while working through the content of the book SQL Server MVP Deep Dives, available here: http://www.manning.com/nielsen/.

Add this new blog site to your favorites.

Move non clustered indexes to a new filegroup

So, over the years I have needed to move an index or a set of indexes to a new or different filegroup. I’ve written scripts for this in the past, lost them along the way, and I couldn’t find anything out there on the Bing or Google interwebs.

The concept of multiple files and filegroups for a database is beginning to sink in with a lot of developers and DBAs. The rationale for using them is typically sound; in my experience it’s usually a desire to decrease the internal file fragmentation of the data and the indexes. Often there is a performance improvement as well, because this allows you to move those indexes to that RAID 1+0 array that your SAN admin has been waiting for you to use. :)

Here’s a stab at it that makes sense to me and it worked great for what I needed to accomplish today. What it doesn’t take into account are partitions and their related indexes, so be sure to read this code and test it prior to just taking my word for it working well.

I’m 100 percent open to ideas and suggestions, in the form of t-sql. I certainly appreciate comments and ideas, however, I will not write the t-sql for your idea. I’m giving this away and it’s my hope that as you improve upon it, you will kindly re-send or re-post it with the improvements.

Lastly, this handles non-clustered indexes only. It will ignore system indexes and statistics. Both of those are named in the script, so in the event that you want them, it’s an easy modification. I have comments in the code posted below and I STRONGLY advise you to read them before you run this on a live system.
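Since the full script lives in the attached document, here is a minimal sketch of the core idea (the target filegroup name `NewFG` is a placeholder, and this simplified version only handles single-column keys): generate a `CREATE INDEX … WITH (DROP_EXISTING = ON)` statement per non-clustered index, pointed at the new filegroup.

-- Minimal sketch only: emits a rebuild-to-new-filegroup statement per
-- single-key non-clustered index. The attached script handles multi-column
-- keys, INCLUDE columns, and index options; this just shows the shape.
SELECT 'CREATE NONCLUSTERED INDEX [' + i.name + '] ON ['
       + Schema_name(o.schema_id) + '].[' + o.name + '] ([' + c.name + '])'
       + ' WITH (DROP_EXISTING = ON) ON [NewFG];'
FROM   sys.indexes i
       JOIN sys.objects o
         ON o.object_id = i.object_id
       JOIN sys.index_columns ic
         ON ic.object_id = i.object_id
            AND ic.index_id = i.index_id
       JOIN sys.columns c
         ON c.object_id = ic.object_id
            AND c.column_id = ic.column_id
WHERE  i.type = 2              -- non-clustered only
       AND o.type = 'U'        -- user tables; skips system objects
       AND i.is_primary_key = 0
       AND ic.key_ordinal = 1; -- sketch: first key column only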

Click on the Word doc attached to this post.  WordPress really makes the formatting of the code look ugly, and I haven’t yet figured out how to make it look decent.

Enjoy!

 NonClusteredIndex_MOVE_newFG

Update Statistics for a partition

Last week, I was working with a client and we wanted to update statistics on a partition. After trying it out, then consulting with BOL and finally twitter, I determined that it could not be done. Thanks to Lara Rubbelke for her help and pointing out the second issue that is discussed below.

In addition to that problem, when a partition’s index is defragmented, statistics are not updated. I suppose an easy workaround is to update statistics, but the problem is that it updates statistics for all of the partitions and/or indexes of that object, not the specific one that was just defragmented.

So, I created a suggestion at connect.microsoft.com. I’ll post the entire subject / issue below, but please visit the site, vote and validate this request.

This is an https:// link, so if you aren’t logged into connect it will not work. That said, if you don’t have a login for this site, I’d recommend that you get one. Tons of great information can be found there. If you wish to search for it, go to connect.microsoft.com and type in “Update Statistics at the partition level” OR search for feedbackid 468517.  https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=468517

Full description and proposed solution below:

Description:
I’d like to see Update Statistics modified to include partitionID in combination with the existing table or indexed view name.

With the improvements of 2k8 and partitioning, very large result sets are being seen at one of my clients that utilizes partitioning. Having to update statistics against partitions that haven’t changed in order to achieve the change on the partition that actually needs it, creates additional overhead that is not required. It also forces a change to plans that wouldn’t otherwise be altered which causes for memory churn in the proc cache.

One other related note is that you can defrag an individual partition index, but it will not update the statistics.

Proposed Solution
Two parts:
1.)
Modify Update Statistics to include an option for declaring either a partition name or partitionID for either a table or an indexed view.
i.e…
UPDATE STATISTICS table_or_indexed_view_name
[
{
{ index_or_statistics__name }
{ index_or_statistics_name }
| ( { Partition_index_or_partition_statistics_name } [ ,...n ] )
}
]
2.) Upon completion of the defrag of an individual partition index have it update the statistics for that partition.

Benefits
Improved Reliability
Improved Administration
Improved Performance

Meta Data – Cardinality Explored

Last night I had the privilege of speaking at the Colorado Springs SQL Server Users Group. The subject was Meta Data – Cardinality Explored.

This topic is a very enjoyable one to me. Along with the DMV’s that SQL Server now provides, the meta-data about your database implementations is critical to their scalability, stability, accuracy and performance.

Over the years, I have found that cardinality is not a topic that is often understood. I also believe that it’s not discussed all that often. It is an absolutely critical data-set for the developers, DBA’s, testers and performance testers out there.
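For those who want a quick taste before opening the attachments, one common way to inspect cardinality (the table and index names here are assumed examples) is DBCC SHOW_STATISTICS; the "All density" value is 1.0 divided by the number of distinct key values:

-- Assumed example: inspect the statistics (density vector + histogram)
-- behind an index. "All density" = 1.0 / distinct key values, which is
-- the cardinality information the optimizer relies on for estimates.
DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerID');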

Attached to this post you will find my sql script and PowerPoint slide deck. I’d recommend starting with the slide deck… walk through it and when you see “demo”, open up the sql file and learn how this result set can make an impact with the databases that you are responsible for or those that you are developing against.

Have fun and if you have questions about the presentation or the sql code I put together, feel free to comment!

Thanks again to the terrific folks down in Colorado Springs for the warm welcome and not falling asleep as I droned on and on. =)

CSSQLUG_June09
MetaData – Cardinality Explored – SQL

Schema Management (DDL Triggers)

Over the years I have used a variety of tools for schema management.  The best one that I relied on for quite a while was from RedGate.  They offer schema compare and data compare tools that make quick work of determining the delta between two environments.

Recently I’ve been working with DB Pro to manage this, and thus far I’ve been very, very impressed.  However, it has some downsides, primarily the cost.  Some of my clients just can’t justify / stomach the dollars involved.  So, to fill this gap I have been using DDL triggers: an automated log, stored in a database table, that captures DDL changes (CREATE, DROP, ALTER).   This concept is different from a traditional trigger in that it only fires for database events, not data events; the scope of this trigger is at the schema (structure) level, not the data (DML) level.

The value that it has provided some of my clients is simply the certainty that they know what happened in a specific environment and it saves a great deal of time when it comes to debugging, troubleshooting or otherwise trying to figure out why something that worked perfectly yesterday in the database does not work correctly today.

There are a few steps that need to be taken to make this work.  The first is to decide whether you want to store this data in its own database or in a table in an existing database.  It can be done either way.  In the code below you will see that I’ve created a standalone database with one table in it.

CREATE DATABASE [DDL_ChangeLog];
GO

USE DDL_ChangeLog;
GO

CREATE TABLE DDLChangeLog (
    DDLChangeLogID int IDENTITY(1,1) NOT NULL,
    eventtype      nvarchar(100),
    objectname     nvarchar(100),
    objecttype     nvarchar(100),
    databasename   nvarchar(100),
    schemaname     nvarchar(100),
    SystemUser     nvarchar(255),
    AppName        nvarchar(255),
    HostName       nvarchar(255),
    tsql           nvarchar(MAX),
    createddate    datetime,
    CONSTRAINT [PK_DDL_Changelog] PRIMARY KEY CLUSTERED (DDLChangeLogID ASC)
);

The code above will simply create a database and a table.

Once you have that in place, then choose the database that you would like to track ddl changes on and create this database trigger.

 

CREATE TRIGGER [trgLogDDLEvent] ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @data XML;
SET @data = EVENTDATA();

IF @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)') <> 'CREATE_STATISTICS'
    INSERT INTO ddl_changelog..DDLChangeLog
        (EventType, ObjectName, ObjectType, DatabaseName, SchemaName,
         SystemUser, AppName, HostName, tsql, createddate)
    VALUES (
        @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/ObjectType)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(100)'),
        SYSTEM_USER, APP_NAME(), HOST_NAME(),
        @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(max)'),
        GETDATE());
GO

ENABLE TRIGGER [trgLogDDLEvent] ON DATABASE
GO
So, let’s test this out.

With the database that you just created the trigger on, let’s do something simple like this:

SELECT 1 AS test INTO TestTable;

SELECT * FROM TestTable;

DROP TABLE TestTable;

SELECT * FROM DDL_ChangeLog..DDLChangeLog

Wow!  There is data there.  Not just any data: it should have returned two rows, even though there are three statements in our test above.  This is because the trigger only fired for the CREATE TABLE (the SELECT … INTO) and the DROP TABLE statements; it will not fire for the SELECT.

I initially started using these for my own purposes; however, as time has gone on, I’ve found that there are other audiences that LOVE knowing this can be tracked.  Auditors love this functionality; QA loves being able to verify that the black box they are testing has some controls / audit processes around it.  Management loves it because they have hard data to take back to the business in the event of an outage, mistake or whatever.  I’ve found that it’s typically best to be as transparent as possible with changes / modifications and the state of the databases that I’m responsible for.  In this manner, I’m able to better serve my clients, and in turn they are more likely to trust my opinion when it counts.
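As a sketch of the kind of report those audiences typically ask for (the column and table names match the DDLChangeLog table created earlier in this post), recent changes can be pulled straight from the log:

-- Sketch: the ten most recent schema changes, newest first, from the
-- DDLChangeLog table this post creates.
SELECT TOP (10)
       createddate, eventtype, databasename, schemaname,
       objectname, SystemUser, HostName, tsql
FROM   DDL_ChangeLog..DDLChangeLog
ORDER  BY createddate DESC;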

Policy Management

In November, I had the opportunity to present at DevConnections.  I spoke on two topics, Policy Management and Plan Guides.  Both are new to SQL Server 2008 and I really love both of these additions to the SQL Server platform.

In this blog post I have attached two files: my PowerPoint slide deck and the script I used for the demos.

This was a very difficult talk to give, as the concept was relatively new and the amount of information available was scarce when I was developing the content.

So, what is Policy Management?  Microsoft’s definition is as follows:  “Policy-Based Management is a policy-based system for managing one or more instances of SQL Server 2008.  Use this with SQL Server Management Studio to create policies that manage entities on the server, such as the instance of SQL Server, databases, and other SQL Server objects.”  What this definition leaves out is something of great importance: this new feature CAN be used with down-level versions, provided that the facet you want to put a policy around is available in that version.  Meaning, if you want to monitor data files or schema changes in a SQL Server 2005 instance, you can!

I found there to be three components to Policy Management.  

  1. Policy Management  — Creation of conditions and policies
  2. Explicit Administration — One to many managed targets
  3. Automated Administration — Automate enforcement of policies

For each of these there are components – Refer to the slide deck and Microsoft’s site for more information.  

One of the biggest helps that I had when learning Policy Management was the following CTE:

The results of this query list all of the available facets and which modes they can be run in.  This is a huge help, as going through the GUI to see whether something is available for a certain mode can take a while.

USE msdb;
GO
WITH automatedpolicyexecutionmode (modeid, modename)
AS
    (SELECT * FROM (VALUES (0, 'On Demand'),
                           (1, 'Enforce Compliance'),
                           (2, 'Check on Change and Log'),
                           (4, 'Check on Schedule and Log'))
     AS em (modeid, modename))
SELECT dmf.management_facet_id AS facetid, dmf.name AS facetname, apemode.modename
FROM   syspolicy_management_facets AS dmf
       INNER JOIN automatedpolicyexecutionmode AS apemode
               ON dmf.execution_mode & apemode.modeid = apemode.modeid
ORDER  BY dmf.name, apemode.modename;

So far I have successfully implemented policies against a host of the facets offered.  Microsoft is going to continue to build on this initial release, so look for new facets to be added as time goes on.

Enjoy Policy-Based Management, and may it save you a great deal of time, decrease your daily / weekly checklist of items, and enable you to provide further value to your organization.

If you walk through the slide deck and the demo script, you can quickly see how easy it is to create a policy.  Once you have one created (IN DEVELOPMENT, =) ), evaluate the policy in two ways.  The first time, ensure that the evaluation succeeds.  The second time, break it, so that the evaluation of the policy fails.  Once you have done this, run the query below.

 USE MSDB;
GO
SELECT a.execution_date AS 'Date Run' , 
                  c.name AS 'Policy' , 
                  a.target_query_expression AS 'Policy Failure Targets' , 
                  d.name as 'Condition',
                  d.description as 'Condition Description', 
                  d.facet as 'Condition Facet'
FROM syspolicy_policy_execution_history_details_internal a
              INNER JOIN syspolicy_policy_execution_history_internal b ON a.history_id = b.history_id
              INNER JOIN syspolicy_policies_internal c ON b.policy_id = c.policy_id
                           INNER JOIN syspolicy_conditions d ON c.condition_id = d.condition_id
WHERE a.result = 0
ORDER BY a.execution_date DESC, c.name ASC, d.name ASC 

Attachments for this post ::

Script: policymanagementscript2  — The majority of the t-sql in this document can be found below.  

Slide Deck: sql-connections-conference-slides-policy-management2  


USE msdb;
GO
WITH automatedpolicyexecutionmode (modeid, modename)
AS
    (SELECT * FROM (VALUES (0, 'On Demand'),
                           (1, 'Enforce Compliance'),
                           (2, 'Check on Change and Log'),
                           (4, 'Check on Schedule and Log'))
     AS em (modeid, modename))
SELECT dmf.management_facet_id AS facetid, dmf.name AS facetname, apemode.modename
FROM   syspolicy_management_facets AS dmf
       INNER JOIN automatedpolicyexecutionmode AS apemode
               ON dmf.execution_mode & apemode.modeid = apemode.modeid
ORDER  BY dmf.name, apemode.modename;

-- Demo 1
-- Greenfield development
-- Standard Naming convention
-- Create Conditions
-- Create Policy


use DemoDB
go

-- Successful Object Creation
-- Table
CREATE TABLE UserTable_test_me3 (test int)
GO

-- Procedure
CREATE Procedure PRC_Test_proc AS SELECT GETDATE()
GO

-- View
CREATE VIEW UserView_Test AS SELECT test from UserTable_test_me3
GO

-- Failed Object Creation due to Policy enforcement
-- Table
CREATE TABLE test_me3 (test int)
GO

Create Procedure Test_proc AS select GETDATE()
GO

Create View VW_Test as select test from UserTable_test_me3
GO

Create View Test as select test from UserTable_test_me3
GO

-- Fails
EXEC sp_rename 'UserTable_test_me3', 'test_me3'
GO 

--  Check MSDB for policy execution Failures

USE msdb;
GO

     SELECT a.execution_date AS 'Date Run' , c.name AS 'Policy', a.target_query_expression AS 'Policy Failure Targets',
                     D.name as 'Condition', D.description as 'Condition Description', D.facet as 'Condition Facet'
      FROM syspolicy_policy_execution_history_details_internal a
        INNER JOIN syspolicy_policy_execution_history_internal b ON a.history_id = b.history_id
        INNER JOIN  syspolicy_policies_internal c  ON b.policy_id = c.policy_id
        INNER JOIN syspolicy_conditions d  ON c.condition_id = d.condition_id
    WHERE a.result = 0 AND a.execution_date >= GETDATE() - 1
    ORDER BY a.execution_date DESC, c.name ASC, a.target_query_expression ASC;
GO

-- Existing environment somewhat different ::

select * from msdb.dbo.syspolicy_policies_internal order by date_created desc;
GO
-- Delete the policy
 -- Can use SSMS to do this as well.
Declare @Object_set_ID_Delete INT
set @Object_set_ID_Delete = (select object_set_id from msdb.dbo.syspolicy_policies_internal where name = 'Standard Naming Convention')

EXEC msdb.dbo.sp_syspolicy_delete_policy @name= 'Standard Naming Convention'
EXEC msdb.dbo.sp_syspolicy_delete_object_set @object_set_id = @Object_set_ID_Delete
GO
-- Delete the Condition
 -- Can use SSMS to do this as well.
 EXEC msdb.dbo.sp_syspolicy_delete_condition @name = 'Standard - Naming Convention'

-- Create an Existing environment

Create Database Demo_existing
GO

USE Demo_existing
GO
create table test_me3 (test int)
GO

Create Procedure Test_proc AS select GETDATE() as The_Time_is_Now
GO

Create View VW_Test as select test from test_me3
GO

Create View Test as select test from test_me3
GO

-- Re-Create Condition and Policy
-- Using t-sql versus SSMS
  

Declare @condition_id int
EXEC msdb.dbo.sp_syspolicy_add_condition @name=N'Standard - Naming Convention', @description=N'Enforcement of Standard Naming Convention

The Object you attempted to create was rolled back.

This Policy prevents the following naming standards ::

Object names MAY NOT begin with &quot;tbl&quot; OR &quot;sp&quot; OR &quot;vw&quot;

AND

Ojbect Names MUST begin with &quot;UserTable&quot; OR &quot;PRC&quot; OR &quot;UserView&quot;

', @facet=N'IMultipartNameFacet', @expression=N'<Operator>
  <TypeClass>Bool</TypeClass>
  <OpType>AND</OpType>
  <Count>2</Count>
  <Group>
    <TypeClass>Bool</TypeClass>
    <Count>1</Count>
    <Operator>
      <TypeClass>Bool</TypeClass>
      <OpType>OR</OpType>
      <Count>2</Count>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>OR</OpType>
        <Count>2</Count>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>NOT_LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>tbl%</Value>
          </Constant>
        </Operator>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>NOT_LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>vw%</Value>
          </Constant>
        </Operator>
      </Operator>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>NOT_LIKE</OpType>
        <Count>2</Count>
        <Attribute>
          <TypeClass>String</TypeClass>
          <Name>Name</Name>
        </Attribute>
        <Constant>
          <TypeClass>String</TypeClass>
          <ObjType>System.String</ObjType>
          <Value>sp%</Value>
        </Constant>
      </Operator>
    </Operator>
  </Group>
  <Group>
    <TypeClass>Bool</TypeClass>
    <Count>1</Count>
    <Operator>
      <TypeClass>Bool</TypeClass>
      <OpType>OR</OpType>
      <Count>2</Count>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>OR</OpType>
        <Count>2</Count>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>UserTable%</Value>
          </Constant>
        </Operator>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>UserView%</Value>
          </Constant>
        </Operator>
      </Operator>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>LIKE</OpType>
        <Count>2</Count>
        <Attribute>
          <TypeClass>String</TypeClass>
          <Name>Name</Name>
        </Attribute>
        <Constant>
          <TypeClass>String</TypeClass>
          <ObjType>System.String</ObjType>
          <Value>PRC%</Value>
        </Constant>
      </Operator>
    </Operator>
  </Group>
</Operator>', @is_name_condition=0, @obj_name=N'', @condition_id=@condition_id OUTPUT
Select @condition_id

GO 

----
--   Condition was created with an ID of XX.
----

Declare @object_set_id int
EXEC msdb.dbo.sp_syspolicy_add_object_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @facet=N'IMultipartNameFacet', @object_set_id=@object_set_id OUTPUT
Select @object_set_id

Declare @target_set_id int
EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/StoredProcedure', @type=N'PROCEDURE', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/StoredProcedure', @level_name=N'StoredProcedure', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/Synonym', @type=N'SYNONYM', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/Synonym', @level_name=N'Synonym', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/Table', @type=N'TABLE', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/Table', @level_name=N'Table', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/UserDefinedFunction', @type=N'FUNCTION', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/UserDefinedFunction', @level_name=N'UserDefinedFunction', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/UserDefinedType', @type=N'TYPE', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/UserDefinedType', @level_name=N'UserDefinedType', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/View', @type=N'VIEW', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/View', @level_name=N'View', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/XmlSchemaCollection', @type=N'XMLSCHEMACOLLECTION', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/XmlSchemaCollection', @level_name=N'XmlSchemaCollection', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0
GO
Declare @policy_id int
EXEC msdb.dbo.sp_syspolicy_add_policy @name=N'Standard Naming Convention', @condition_name=N'Standard - Naming Convention', @execution_mode=1, @is_enabled=True, @policy_id=@policy_id OUTPUT, @object_set=N'Standard Naming Convention_ObjectSet_1'
Select @policy_id

-- The next piece will Update the Targets for the policy
GO
EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=128, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=128, @type_skeleton=N'Server/Database/StoredProcedure', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=128, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=129, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=129, @type_skeleton=N'Server/Database/Synonym', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=129, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=130, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=130, @type_skeleton=N'Server/Database/Table', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=130, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=131, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=131, @type_skeleton=N'Server/Database/UserDefinedFunction', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=131, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=132, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=132, @type_skeleton=N'Server/Database/UserDefinedType', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=132, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=133, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=133, @type_skeleton=N'Server/Database/View', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=133, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=134, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=134, @type_skeleton=N'Server/Database/XmlSchemaCollection', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=134, @type_skeleton=N'Server/Database', @condition_name=N''
GO
EXEC msdb.dbo.sp_syspolicy_update_policy @policy_id=62, @execution_mode=1, @schedule_uid=N'00000000-0000-0000-0000-000000000000'
GO

-- Verify that the new policy is working as intended

-- Will FAIL 
create table TEST_Post_NewPolicy (testcol int)
GO

-- Will succeed
create table UserTable_Post_NewPolicy (testcol int)
GO
-- Find an object to alter
Select * from sysobjects where uid = 1 and xtype in ('v','u','p')

-- Alter object to another invalid naming convention

-- Will FAIL
Alter Procedure Test_proc
AS select GETDATE()  as [When]

-- Will succeed
Drop Procedure Test_proc

-- Alter an Invalid object to a valid naming convention

-- Successful
EXEC sp_rename 'Test', 'UserView_Test'
GO 

-- Check the execution history

USE msdb;
GO

     SELECT   a.execution_date AS 'Date Run' , c.name AS 'Policy', a.target_query_expression AS 'Policy Failure Targets',
                  D.name as 'Condition', D.description as 'Condition Description', D.facet as 'Condition Facet'
     FROM syspolicy_policy_execution_history_details_internal a
        INNER JOIN syspolicy_policy_execution_history_internal b
           ON a.history_id = b.history_id
        INNER JOIN  syspolicy_policies_internal c
         ON b.policy_id = c.policy_id
       INNER JOIN syspolicy_conditions d
        ON c.condition_id = d.condition_id
     WHERE a.result = 0 AND a.execution_date >= GETDATE() - 1
     ORDER BY a.execution_date DESC, c.name ASC, a.target_query_expression ASC;
GO

With the SQL above and the slide deck attached to this post, you should be well on your way to creating your own policies.

Partitioning

Over the past few years I have implemented partitioning a number of ways.  In the world of SQL Server 2000, it was “home-made”: I would create a number of tables with identical schemas, add check constraints to each, and then tie all of the data together with views.  This method had real upside, but as you can imagine, the overhead of moving data from one “partition” to another was time-intensive and expensive from an I/O perspective.
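For context, here is a minimal sketch of that SQL Server 2000-era approach (table and column names are hypothetical): one table per period, a CHECK constraint on each, and a view that unions them back together so the optimizer can eliminate tables it doesn't need.

```sql
-- "Home-made" partitioning: one table per year, constrained by CHECK.
CREATE TABLE dbo.Sales_2004 (
    SaleDate DATETIME NOT NULL
        CHECK (SaleDate >= '20040101' AND SaleDate < '20050101'),
    Amount   MONEY    NOT NULL
);
CREATE TABLE dbo.Sales_2005 (
    SaleDate DATETIME NOT NULL
        CHECK (SaleDate >= '20050101' AND SaleDate < '20060101'),
    Amount   MONEY    NOT NULL
);
GO
-- The view presents the tables as one; the CHECK constraints let the
-- optimizer skip tables that cannot contain the requested dates.
CREATE VIEW dbo.Sales AS
SELECT SaleDate, Amount FROM dbo.Sales_2004
UNION ALL
SELECT SaleDate, Amount FROM dbo.Sales_2005;
GO
```

Moving a row between periods under this scheme means deleting from one base table and inserting into another, which is exactly the I/O expense described above.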

With SQL Server 2005, Microsoft introduced partitioning as part of the relational engine… This was huge news for me specifically: I could actually do this “right” for my clients.  As many of you know, partitioning in SQL Server 2005 had some limitations; it was still a great option, but the parallelism issues that accompanied the feature left a bad taste in the mouths of many.
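The native feature replaces all of that plumbing with a partition function, a partition scheme, and a table created on the scheme.  A minimal sketch, with hypothetical names and boundary values:

```sql
-- Partition function: defines the boundary values on the datetime key.
CREATE PARTITION FUNCTION pfSalesDate (DATETIME)
AS RANGE RIGHT FOR VALUES ('20080101', '20080201', '20080301');
GO
-- Partition scheme: maps each partition to a filegroup
-- (all to PRIMARY here, just to keep the example simple).
CREATE PARTITION SCHEME psSalesDate
AS PARTITION pfSalesDate ALL TO ([PRIMARY]);
GO
-- The table is created ON the scheme, keyed by the partitioning column.
CREATE TABLE dbo.Sales (
    SaleDate DATETIME NOT NULL,
    Amount   MONEY    NOT NULL
) ON psSalesDate (SaleDate);
GO
```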

This brings us to SQL Server 2008 where, based on my testing, the parallelism issue is truly resolved and the optimizer actually recognizes the partitions as it should.

So, what have I done?  I’ve implemented it all over the place.  That is the point of this post.  At one client, it worked just as I imagined it would… at another, I hit a bit of a stumbling block.  I did not realize the impact that non-clustered indexes would have if they weren’t created on the partition scheme.  What I ended up with, over time, were very, very heavily fragmented indexes.  In fact, some of them were so bad that the optimizer was choosing different paths to the data.  (That’s B.A.D.)

I thought my fix would be a tough one, but it turned out to be uber simple.  I dropped the indexes that existed on the primary filegroup and re-created them on the partition scheme.  Voila, piece of cake.  Everything was awesome until about two weeks later, when the fragmentation of those indexes caught up with me.  This left me cursing the amount of time it was going to take to rebuild these huge indexes and wondering how often I’d have to do it…
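The re-creation step looks something like this (index, table, and scheme names are hypothetical): the index is dropped from the PRIMARY filegroup and re-created ON the partition scheme, which aligns it with the table's partitions.

```sql
-- Drop the unaligned index that lives on the PRIMARY filegroup.
DROP INDEX IX_Sales_Amount ON dbo.Sales;
GO
-- Re-create it on the partition scheme, partitioned by the same key
-- as the table, so the index is aligned.
CREATE NONCLUSTERED INDEX IX_Sales_Amount
    ON dbo.Sales (Amount)
    ON psSalesDate (SaleDate);
GO
```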

So, I stopped trying to do it my way and went to my favorite resource: BOL.  Of course, there is a way to do this, and it’s not painful.

It’s the function $PARTITION.  If you are using partitioning in your environment and you are unaware of this function, become aware. 

What this function enables, in a very easy and straightforward manner, is quickly and dynamically identifying which partition is the active one.  Once I had that information, I simply wrote an ALTER INDEX … REBUILD statement targeting the partition that matched my variable’s value.

Code snippet below.

Declare @PartitionID INT;

Select @PartitionID = $PARTITION.PartitionName(GETDATE());

ALTER INDEX IndexName ON owner.TableName
REBUILD PARTITION = @PartitionID;

    --  Note that I can use GETDATE() with the $PARTITION function because my partition function’s key is a datetime value.  Simple logic could be used instead if it were an int.

The obvious advantage is that when I run the rebuild script, it runs only against the active partition.  This brings down the amount of time required and enables me to run it nightly, since the data in the current partition becomes heavily fragmented from day to day.
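If you want to confirm that only the current partition actually needs the work, fragmentation can be checked per partition before rebuilding.  A sketch (the table name is hypothetical):

```sql
-- Fragmentation broken out by partition, so only partitions that
-- are actually fragmented need a rebuild or reorganize.
SELECT ps.index_id,
       ps.partition_number,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Sales'), NULL, NULL, 'LIMITED') AS ps
WHERE ps.avg_fragmentation_in_percent > 10
ORDER BY ps.index_id, ps.partition_number;
```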

So, my lesson for this day?

Easy: think ahead about what will happen down the road.  Always check BOL and test, test, test.

Happy Partitioning!