Move non-clustered indexes to a new filegroup

So, over the years I have needed to move an index, or a set of indexes, to a new or different filegroup. I’ve written scripts for this in the past, lost them along the way, and couldn’t find anything out there on the Bing or Google interwebs.

The concept of multiple files and filegroups for a database is beginning to sink in with a lot of developers and DBAs. The rationale for using them is typically sound; in my experience it’s usually a desire to decrease the internal file fragmentation of the data and the indexes. Often there is a performance improvement as well, because this allows you to move those indexes to that RAID 1+0 array that your SAN admin has been waiting for you to use. :)

Here’s a stab at it that makes sense to me, and it worked great for what I needed to accomplish today. What it doesn’t take into account are partitions and their related indexes, so be sure to read this code and test it before taking my word that it works well.

I’m 100 percent open to ideas and suggestions, in the form of T-SQL. I certainly appreciate comments and ideas; however, I will not write the T-SQL for your idea. I’m giving this away, and it’s my hope that as you improve upon it, you will kindly re-send or re-post it with the improvements.

Lastly, this handles non-clustered indexes only. It will ignore system indexes and statistics. Both of those are called out in the script, so in the event that you want them, it’s an easy modification. I have comments in the code posted below, and I STRONGLY advise you to read them before you run this on a live system.
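If you just want the core technique the script automates, here’s a minimal, hand-written sketch (the index, table and filegroup names are hypothetical): rebuilding with DROP_EXISTING recreates the index on the target filegroup in a single operation, with no separate drop and create.

-- Hypothetical names; the attached script generates a statement like this
-- for each qualifying non-clustered index.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
WITH (DROP_EXISTING = ON)  -- replaces the existing index in one step
ON [FG_Indexes];           -- the target filegroup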

Click on the Word doc attached to this post. WordPress really makes the formatting of the code look ugly, and I haven’t yet figured out how to make it look decent.

Enjoy!

 NonClusteredIndex_MOVE_newFG

On an Island with no Umbrella Drinks?

I was tagged by Chris Shaw to answer this question.

“So You’re On A Deserted Island With WiFi and you’re still on the clock at work. Okay, so not a very good situational exercise here, but let’s roll with it; we’ll call it a virtual deserted island. Perhaps what I should simply ask is if you had a month without any walk-up work, no projects due, no performance issues that require you to devote time from anything other than a wish list of items you’ve been wanting to get accomplished at work but keep getting pulled away from I ask this question: what would be the top items that would get your attention?”

Heh… well, in the event that my golf clubs didn’t make it, I suppose I would spend a great deal of time learning more about the guts, or internals, of SQL Server. It’s a subject area that I’m very interested in, yet I find that the time to truly dig in is often cut short by work.

I would also spend a fair bit of time coming up with ideas for articles, blog posts and presentations. These are all activities that I enjoy and it’s one way that I can say “thank-you” to those who have helped me along the way.

Given enough time and concentration, I would write a tool that could take any schema and the data in that schema and build a true load test. Over the years I have encountered many clients who face the challenge of truly performance testing their environment prior to going “live”. Often, those who have invested the time and money into this activity find that the performance they experienced in the lab is different from what they see in production… The reasons vary, but I believe that I’ve seen it enough to be able to hit this pretty closely.

Wow, *if* I could accomplish all of that in a month, I’d pack up my laptop and flip-flops and hit the beach. :)

Since I’m new to blogging, I’m not going to tag anyone just yet.

Update Statistics for a partition

Last week, I was working with a client and we wanted to update statistics on a single partition. After trying it out, consulting BOL, and finally asking on Twitter, I determined that it cannot be done. Thanks to Lara Rubbelke for her help and for pointing out the second issue discussed below.

In addition to that problem, when a partition’s index is defragmented, statistics are not updated. An easy workaround is to run UPDATE STATISTICS afterward, but that updates statistics for all of the partitions and/or indexes of the object, not the specific one that was just defragmented.
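To make the gap concrete, here’s a hedged sketch with hypothetical object names: the rebuild can target a single partition, but the statistics update that follows cannot.

-- Rebuild (defragment) only partition 3 of the index; statistics are NOT updated.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 3;

-- The workaround: update statistics for the whole index, which touches
-- every partition, not just the one rebuilt above.
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate;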

So, I created a suggestion at connect.microsoft.com. I’ll post the entire subject / issue below, but please visit the site, vote and validate this request.

This is an https:// link, so if you aren’t logged into Connect it will not work. That said, if you don’t have a login for the site, I’d recommend that you get one; tons of great information can be found there. If you wish to search for it, go to connect.microsoft.com and type in “Update Statistics at the partition level” OR search for feedback ID 468517.  https://connect.microsoft.com/SQLServer/feedback/ViewFeedback.aspx?FeedbackID=468517

Full description and proposed solution below:

Description:
I’d like to see Update Statistics modified to include partitionID in combination with the existing table or indexed view name.

With the improvements of 2k8 and partitioning, very large result sets are being seen at one of my clients that utilizes partitioning. Having to update statistics against partitions that haven’t changed, in order to achieve the change on the partition that actually needs it, creates additional overhead that is not required. It also forces a change to plans that wouldn’t otherwise be altered, which causes memory churn in the proc cache.

One other related note is that you can defrag an individual partition index, but it will not update the statistics.

Proposed Solution
Two parts:
1.) Modify UPDATE STATISTICS to include an option for declaring either a partition name or partition ID for either a table or an indexed view, i.e.:

UPDATE STATISTICS table_or_indexed_view_name
[
    {
        { index_or_statistics_name }
        | ( { partition_index_or_partition_statistics_name } [ ,...n ] )
    }
]
2.) Upon completion of the defrag of an individual partition’s index, have it update the statistics for that partition.

Benefits
Improved Reliability
Improved Administration
Improved Performance

Meta Data – Cardinality Explored

Last night I had the privilege of speaking at the Colorado Springs SQL Server Users Group. The subject was Meta Data – Cardinality Explored.

This topic is a very enjoyable one for me. Along with the DMVs that SQL Server now provides, the metadata about your database implementations is critical to their scalability, stability, accuracy and performance.

Over the years, I have found that cardinality is not a well-understood topic, nor one that is discussed all that often. Yet it is an absolutely critical data-set for the developers, DBAs, testers and performance testers out there.
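If the term is new to you: cardinality is simply the number of distinct values in a column (or combination of columns) relative to the total row count. A quick, hypothetical illustration:

-- Hypothetical table and column; a high ratio means high selectivity,
-- which is what the optimizer weighs when costing a seek versus a scan.
SELECT COUNT(*)             AS total_rows,
       COUNT(DISTINCT City) AS distinct_values, -- the column's cardinality
       1.0 * COUNT(DISTINCT City) / NULLIF(COUNT(*), 0) AS selectivity_ratio
FROM dbo.Customers;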

Attached to this post you will find my SQL script and PowerPoint slide deck. I’d recommend starting with the slide deck… walk through it, and when you see “demo”, open up the SQL file and learn how this result set can make an impact with the databases that you are responsible for, or those that you are developing against.

Have fun and if you have questions about the presentation or the sql code I put together, feel free to comment!

Thanks again to the terrific folks down in Colorado Springs for the warm welcome and not falling asleep as I droned on and on. =)

CSSQLUG_June09
MetaData – Cardinality Explored – SQL

SQL Server 2008 R2

Yesterday at TechEd, Microsoft announced the R2 version of SQL Server 2008.

It created a bit of confusion for me, as I thought the next releases, “Kilimanjaro” and “Madison”, were both going to be part of a new (major) SQL Server version. As I understand it, much of this functionality will simply be a “bolt-on” to SQL Server 2k8.

When I first read about this, I was conflicted. However, after spending the evening contemplating many of the conversations that I’ve had with clients around 2008, I think that this move by Microsoft is brilliant! The main reason that I’m excited is the MDM and self-service reporting that Kilimanjaro and Madison were all about. It will be substantial to have them in the market sooner and, more importantly, they create an additional incentive to help justify the move to 2008. Over the last couple of years, I have had many conversations with clients who “want” to go to 2008, but have had a hard time justifying the regression testing, time, etc. The additional features now included will really round out a significant 2008 release.

Details of the offering can be found below.

http://www.microsoft.com/sqlserver/2008/en/us/r2.aspx

I just read this article and I liked it as well.

http://reddevnews.com/news/article.aspx?editorialsid=10836

These are the specific items that I’m looking forward to seeing:

Master Data Services (MDS)

Master Data Hub
Improve the Quality of Your Data
• Centralized approach to defining, deploying, and managing master data
• Ensure reporting consistency across systems and deliver faster, more accurate results across the enterprise
• Serves both BI and operational requirements and improves the quality and efficiency of data and processes for all systems
• Enables organizations to track versions of master data to answer questions about master data at a specific point in time
• Supports integration through a Services (WCF) API, business rules, workflow, notification, and hierarchy management
Stewardship Portal
Improve the Quality of Your Data
• Central portal to create, edit and update master data, including members and hierarchies
• Enables direct stewardship and ensures data quality by enforcing business rules and managing supporting domains
• Enables model administrators to upload data, review validation status, manage data versions, develop models, copy versions of models and define the structure of hierarchies

Self Service Analysis
Add-in for Microsoft Office SharePoint 2010
Share & Collaborate with Confidence
• Enables users to publish reports and analyses to SharePoint to make them available to others in the organization
• Applications and reports can be refreshed automatically while also maintaining version control and tracking
• Central, secure location where users can easily control who has access to sensitive data while also maximizing the number of people who can view the reports
SharePoint 2010 Management Console
Manage User-Generated Analytical Applications
• Provides a single, comprehensive management tool through Microsoft Office SharePoint that gives IT administrators the ability to set server policies and monitor Self Service BI activities
• IT can manage and secure all BI assets, freeing the original authors to focus on business priorities
• Dashboard views to monitor access and utilization of analyses and reports as well as track patterns of hardware usage to help ensure the right security privileges are applied and user generated solutions are continuously available, up-to-date, and secured

Search… Are you Lijit?

I’ve recently added a new button to the blog. You can see it in the upper right-hand corner. It’s labeled “Search my Network”. Click it!

This is pretty cool actually. It searches my blog for your criteria, and it also searches other sites that I’ve added as “trusted”. Sites include Microsoft’s SQL CAT team, SSWUG, Ted Malone’s blog, Brent Ozar’s blog, etc.

Think about this for your own blog or website! Wow, to be able to direct your readers to information that you deem trustworthy and accurate. It is the opposite of a Google search in that the results are quality-based (sites are “trusted” by you), versus quantity-based (Google).

This clever technology is provided free of charge; there is some ad pollution, but hey, it’s free! The company name is Lijit, based here in Colorado at the foothills of the majestic Rocky Mountains.

A shout out to a great colleague and even better friend, Zach Conger for clueing me in.

Schema Management (DDL Triggers)

Over the years I have used a variety of tools for schema management. The best one that I relied on for quite a while was from RedGate. They offer schema-compare and data-compare tools that make quick work of determining the delta between two environments.

Recently I’ve been working with DB Pro to manage this, and thus far I’ve been very, very impressed. However, it has some downside, primarily the cost. Some of my clients just can’t justify / stomach the dollars involved. So, in this gap I have been using DDL triggers. The result is an automated log, stored in a database table, that captures DDL changes (CREATE, DROP, ALTER). This concept is different from a traditional trigger in that it fires only for database events, not data events; the scope of the trigger is the schema or structure level, not the data or DML level.

The value that it has provided some of my clients is simply the certainty that they know what happened in a specific environment and it saves a great deal of time when it comes to debugging, troubleshooting or otherwise trying to figure out why something that worked perfectly yesterday in the database does not work correctly today.

There are a few steps that need to be taken to make this work. The first is to determine whether you want to store this data in its own database, or in a table in an existing database. It can be done either way. In the code below you will see that I’ve created a standalone database with one table in it.

CREATE DATABASE [DDL_ChangeLog];
GO

USE DDL_ChangeLog;
GO

CREATE TABLE DDLChangeLog (
    DDLChangeLogID int IDENTITY(1,1) NOT NULL,
    eventtype      nvarchar(100),
    objectname     nvarchar(100),
    objecttype     nvarchar(100),
    databasename   nvarchar(100),
    schemaname     nvarchar(100),
    SystemUser     nvarchar(255),
    AppName        nvarchar(255),
    HostName       nvarchar(255),
    tsql           nvarchar(MAX),
    createddate    datetime,
    CONSTRAINT [PK_DDL_Changelog] PRIMARY KEY CLUSTERED (DDLChangeLogID ASC)
);
The code above will simply create a database and a table.

Once you have that in place, choose the database that you would like to track DDL changes on and create this database trigger.

 

CREATE TRIGGER [trgLogDDLEvent] ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @data XML;
SET @data = EVENTDATA();

-- Skip the noise generated by auto-created statistics.
IF @data.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)') <> 'CREATE_STATISTICS'
    INSERT INTO ddl_changelog..DDLChangeLog
        (EventType, ObjectName, ObjectType, DatabaseName, SchemaName,
         SystemUser, AppName, HostName, tsql, createddate)
    VALUES (
        @data.value('(/EVENT_INSTANCE/EventType)[1]',    'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/ObjectName)[1]',   'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/ObjectType)[1]',   'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/SchemaName)[1]',   'nvarchar(100)'),
        system_user, app_name(), host_name(),
        @data.value('(/EVENT_INSTANCE/TSQLCommand)[1]',  'nvarchar(max)'),
        getdate());
GO

ENABLE TRIGGER [trgLogDDLEvent] ON DATABASE
GO
So, let’s test this out.

With the database that you just created the trigger on, let’s do something simple like this:

SELECT 1 as test INTO TestTable;
SELECT * FROM TestTable;
DROP TABLE TestTable;

SELECT * FROM DDL_ChangeLog..DDLChangeLog

Wow! There is data there; in fact, it should have returned two rows, even though there are three statements in our test above. That’s because the trigger only fired for the CREATE TABLE (SELECT … INTO) and DROP TABLE statements; it does not fire for plain SELECTs.
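Once the log has a little history in it, answering “what changed yesterday, and who did it?” becomes a one-liner. A hypothetical example against the table created above:

-- Everything that changed in the last 24 hours, newest first.
SELECT createddate, SystemUser, HostName, AppName,
       eventtype, objectname, tsql
FROM DDL_ChangeLog..DDLChangeLog
WHERE createddate >= DATEADD(day, -1, GETDATE())
ORDER BY createddate DESC;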

I initially started using these for my own purposes; however, as time has gone on, I’ve found that there are other audiences that LOVE knowing this can be tracked… Auditors love this functionality; QA loves being able to verify that the black box they are testing has some controls / audit processes around it; management loves it because they have hard data to take back to the business in the event of an outage, mistake or whatever. I’ve found that it’s typically best to be as transparent as possible with changes / modifications and the state of the databases that I’m responsible for. In this manner, I’m able to better serve my clients, and in turn they are more likely to believe / trust my opinion when it counts.

Policy Management

In November, I had the opportunity to present at DevConnections. I spoke on two topics, Policy Management and Plan Guides. Both are new / unique to SQL Server 2008, and I really love both of these additions to the SQL Server platform.

In this blog post I have attached two files; one of them is my PowerPoint slide deck and the other is the script I used for the demos.

This was a very difficult talk to give, as the concept is/was relatively new and information was scarce when I was developing the content.

So, what is Policy Management? Microsoft’s definition is as follows: “Policy-Based Management is a policy-based system for managing one or more instances of SQL Server 2008. Use this with SQL Server Management Studio to create policies that manage entities on the server, such as the instance of SQL Server, databases, and other SQL Server objects.” What this definition leaves out is something of great importance: this new feature CAN be used with down-level versions, providing that the facet you want to throw a policy around is available in that version. Meaning, if you want to monitor data files or schema changes on a SQL Server 2005 instance, you can!

I found there to be three components to Policy Management.  

  1. Policy Management  — Creation of conditions and policies
  2. Explicit Administration — One to many managed targets
  3. Automated Administration — Automate enforcement of policies

For each of these there are sub-components; refer to the slide deck and Microsoft’s site for more information.

One of the biggest helps that I had when learning Policy Management was the following CTE. The results of this query provide all of the available facets and the modes each can be run in; this is a huge help, as going through the GUI to see whether something is available for a certain mode can take a while.

USE msdb;
GO
WITH automatedpolicyexecutionmode (modeid, modename)
AS
    (SELECT * FROM (VALUES (0, 'On Demand'),
                           (1, 'Enforce Compliance'),
                           (2, 'Check on Change and Log'),
                           (4, 'Check on Schedule and Log'))
     AS em (modeid, modename))
SELECT dmf.management_facet_id AS facetid, dmf.name AS facetname, apemode.modename
FROM syspolicy_management_facets AS dmf
    INNER JOIN automatedpolicyexecutionmode AS apemode
        ON dmf.execution_mode & apemode.modeid = apemode.modeid
ORDER BY dmf.name, apemode.modename;

So far I have successfully implemented policies against a host of the facets offered. Microsoft is going to continue to build on this initial release, so look for new facets to be added as time goes on.

Enjoy Policy-Based Management, and may it save you a great deal of time, decrease your daily / weekly checklist of items and enable you to provide further value to your organization.

If you walk through the slide deck and the demo script, you can quickly see how easy it is to create a policy. Once you have one created (IN DEVELOPMENT, =) ), evaluate the policy in two ways: first, ensure that the evaluation is successful; second, break it, so that the evaluation of the policy is false. Once you have done this, run the query below.

USE msdb;
GO
SELECT a.execution_date          AS 'Date Run',
       c.name                    AS 'Policy',
       a.target_query_expression AS 'Policy Failure Targets',
       d.name                    AS 'Condition',
       d.description             AS 'Condition Description',
       d.facet                   AS 'Condition Facet'
FROM syspolicy_policy_execution_history_details_internal a
    INNER JOIN syspolicy_policy_execution_history_internal b ON a.history_id = b.history_id
    INNER JOIN syspolicy_policies_internal c ON b.policy_id = c.policy_id
    INNER JOIN syspolicy_conditions d ON c.condition_id = d.condition_id
WHERE a.result = 0
ORDER BY a.execution_date DESC, c.name ASC, d.name ASC;

Attachments for this post ::

Script: policymanagementscript2 — The majority of the T-SQL in this document can be found below.

Slide Deck: sql-connections-conference-slides-policy-management2


USE msdb;
GO
WITH automatedpolicyexecutionmode (modeid, modename)
AS
    (SELECT * FROM (VALUES (0, 'On Demand'),
                           (1, 'Enforce Compliance'),
                           (2, 'Check on Change and Log'),
                           (4, 'Check on Schedule and Log'))
     AS em (modeid, modename))
SELECT dmf.management_facet_id AS facetid, dmf.name AS facetname, apemode.modename
FROM syspolicy_management_facets AS dmf
    INNER JOIN automatedpolicyexecutionmode AS apemode
        ON dmf.execution_mode & apemode.modeid = apemode.modeid
ORDER BY dmf.name, apemode.modename;

-- Demo 1
-- Greenfield development
-- Standard Naming convention
-- Create Conditions
-- Create Policy


use DemoDB
go

-- Successful Object Creation
-- Table
CREATE TABLE UserTable_test_me3 (test int)
GO

-- Procedure
CREATE Procedure PRC_Test_proc AS SELECT GETDATE()
GO

-- View
CREATE VIEW UserView_Test AS SELECT test from UserTable_test_me3
GO

-- Failed Object Creation due to Policy enforcement
-- Table
CREATE TABLE test_me3 (test int)
GO

Create Procedure Test_proc AS select GETDATE()
GO

Create View VW_Test as select test from UserTable_test_me3
GO

Create View Test as select test from UserTable_test_me3
GO

-- Fails
EXEC sp_rename 'UserTable_test_me3', 'test_me3'
GO 

--  Check MSDB for policy execution Failures

USE msdb;
GO

     SELECT a.execution_date AS 'Date Run' , c.name AS 'Policy', a.target_query_expression AS 'Policy Failure Targets',
                     D.name as 'Condition', D.description as 'Condition Description', D.facet as 'Condition Facet'
      FROM syspolicy_policy_execution_history_details_internal a
        INNER JOIN syspolicy_policy_execution_history_internal b ON a.history_id = b.history_id
        INNER JOIN  syspolicy_policies_internal c  ON b.policy_id = c.policy_id
        INNER JOIN syspolicy_conditions d  ON c.condition_id = d.condition_id
    WHERE a.result = 0 AND a.execution_date >= GETDATE() -1
    ORDER BY a.execution_date DESC, c.name ASC, a.target_query_expression ASC;
GO

-- Existing environment somewhat different ::

select * from msdb.dbo.syspolicy_policies_internal order by date_created desc;
GO
-- Delete the policy
 -- Can use SSMS to do this as well.
Declare @Object_set_ID_Delete INT
set @Object_set_ID_Delete = (select object_set_id from msdb.dbo.syspolicy_policies_internal where name = 'Standard Naming Convention')

EXEC msdb.dbo.sp_syspolicy_delete_policy @name= 'Standard Naming Convention'
EXEC msdb.dbo.sp_syspolicy_delete_object_set @object_set_id = @Object_set_ID_Delete
GO
-- Delete the Condition
 -- Can use SSMS to do this as well.
 EXEC msdb.dbo.sp_syspolicy_delete_condition @name = 'Standard - Naming Convention'

-- Create an Existing environment

Create Database Demo_existing
GO

USE Demo_existing
GO
create table test_me3 (test int)
GO

Create Procedure Test_proc AS select GETDATE() as The_Time_is_Now
GO

Create View VW_Test as select test from test_me3
GO

Create View Test as select test from test_me3
GO

-- Re-create Condition and Policy
-- Using T-SQL versus SSMS

Declare @condition_id int
EXEC msdb.dbo.sp_syspolicy_add_condition @name=N'Standard - Naming Convention', @description=N'Enforcement of Standard Naming Convention

The Object you attempted to create was rolled back.

This Policy enforces the following naming standards ::

Object names MAY NOT begin with "tbl" OR "sp" OR "vw"

AND

Object Names MUST begin with "UserTable" OR "PRC" OR "UserView"

', @facet=N'IMultipartNameFacet', @expression=N'<Operator>
  <TypeClass>Bool</TypeClass>
  <OpType>AND</OpType>
  <Count>2</Count>
  <Group>
    <TypeClass>Bool</TypeClass>
    <Count>1</Count>
    <Operator>
      <TypeClass>Bool</TypeClass>
      <OpType>OR</OpType>
      <Count>2</Count>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>OR</OpType>
        <Count>2</Count>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>NOT_LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>tbl%</Value>
          </Constant>
        </Operator>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>NOT_LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>vw%</Value>
          </Constant>
        </Operator>
      </Operator>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>NOT_LIKE</OpType>
        <Count>2</Count>
        <Attribute>
          <TypeClass>String</TypeClass>
          <Name>Name</Name>
        </Attribute>
        <Constant>
          <TypeClass>String</TypeClass>
          <ObjType>System.String</ObjType>
          <Value>sp%</Value>
        </Constant>
      </Operator>
    </Operator>
  </Group>
  <Group>
    <TypeClass>Bool</TypeClass>
    <Count>1</Count>
    <Operator>
      <TypeClass>Bool</TypeClass>
      <OpType>OR</OpType>
      <Count>2</Count>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>OR</OpType>
        <Count>2</Count>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>UserTable%</Value>
          </Constant>
        </Operator>
        <Operator>
          <TypeClass>Bool</TypeClass>
          <OpType>LIKE</OpType>
          <Count>2</Count>
          <Attribute>
            <TypeClass>String</TypeClass>
            <Name>Name</Name>
          </Attribute>
          <Constant>
            <TypeClass>String</TypeClass>
            <ObjType>System.String</ObjType>
            <Value>UserView%</Value>
          </Constant>
        </Operator>
      </Operator>
      <Operator>
        <TypeClass>Bool</TypeClass>
        <OpType>LIKE</OpType>
        <Count>2</Count>
        <Attribute>
          <TypeClass>String</TypeClass>
          <Name>Name</Name>
        </Attribute>
        <Constant>
          <TypeClass>String</TypeClass>
          <ObjType>System.String</ObjType>
          <Value>PRC%</Value>
        </Constant>
      </Operator>
    </Operator>
  </Group>
</Operator>', @is_name_condition=0, @obj_name=N'', @condition_id=@condition_id OUTPUT
Select @condition_id

GO 

----
-- Condition was created with an ID of XX.
----

Declare @object_set_id int
EXEC msdb.dbo.sp_syspolicy_add_object_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @facet=N'IMultipartNameFacet', @object_set_id=@object_set_id OUTPUT
Select @object_set_id

Declare @target_set_id int
EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/StoredProcedure', @type=N'PROCEDURE', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/StoredProcedure', @level_name=N'StoredProcedure', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/Synonym', @type=N'SYNONYM', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/Synonym', @level_name=N'Synonym', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/Table', @type=N'TABLE', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/Table', @level_name=N'Table', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/UserDefinedFunction', @type=N'FUNCTION', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/UserDefinedFunction', @level_name=N'UserDefinedFunction', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/UserDefinedType', @type=N'TYPE', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/UserDefinedType', @level_name=N'UserDefinedType', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/View', @type=N'VIEW', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/View', @level_name=N'View', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0

EXEC msdb.dbo.sp_syspolicy_add_target_set @object_set_name=N'Standard Naming Convention_ObjectSet_1', @type_skeleton=N'Server/Database/XmlSchemaCollection', @type=N'XMLSCHEMACOLLECTION', @enabled=False, @target_set_id=@target_set_id OUTPUT
Select @target_set_id

EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database/XmlSchemaCollection', @level_name=N'XmlSchemaCollection', @condition_name=N'', @target_set_level_id=0
EXEC msdb.dbo.sp_syspolicy_add_target_set_level @target_set_id=@target_set_id, @type_skeleton=N'Server/Database', @level_name=N'Database', @condition_name=N'', @target_set_level_id=0
GO
Declare @policy_id int
EXEC msdb.dbo.sp_syspolicy_add_policy @name=N'Standard Naming Convention', @condition_name=N'Standard - Naming Convention', @execution_mode=1, @is_enabled=True, @policy_id=@policy_id OUTPUT, @object_set=N'Standard Naming Convention_ObjectSet_1'
Select @policy_id

-- The next piece will Update the Targets for the policy
GO
EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=128, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=128, @type_skeleton=N'Server/Database/StoredProcedure', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=128, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=129, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=129, @type_skeleton=N'Server/Database/Synonym', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=129, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=130, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=130, @type_skeleton=N'Server/Database/Table', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=130, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=131, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=131, @type_skeleton=N'Server/Database/UserDefinedFunction', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=131, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=132, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=132, @type_skeleton=N'Server/Database/UserDefinedType', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=132, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=133, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=133, @type_skeleton=N'Server/Database/View', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=133, @type_skeleton=N'Server/Database', @condition_name=N''

EXEC msdb.dbo.sp_syspolicy_update_target_set @target_set_id=134, @enabled=TRUE
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=134, @type_skeleton=N'Server/Database/XmlSchemaCollection', @condition_name=N''
EXEC msdb.dbo.sp_syspolicy_update_target_set_level @target_set_id=134, @type_skeleton=N'Server/Database', @condition_name=N''
GO
EXEC msdb.dbo.sp_syspolicy_update_policy @policy_id=62, @execution_mode=1, @schedule_uid=N'00000000-0000-0000-0000-000000000000'
GO

-- Verify that the new policy is working as intended

-- Will FAIL 
create table TEST_Post_NewPolicy (testcol int)
GO

-- Will succeed
create table UserTable_Post_NewPolicy (testcol int)
GO
-- Find an object to alter
Select * from sysobjects where uid = 1 and xtype in ('v','u','p')

-- Alter object to another invalid naming convention

-- Will FAIL
Alter Procedure Test_proc
AS select GETDATE()  as [When]

-- Will succeed
Drop Procedure Test_proc

-- Alter an Invalid object to a valid naming convention

-- Successful
EXEC sp_rename 'Test', 'UserView_Test'
GO 

-- Check the execution history

USE msdb;
GO

     SELECT   a.execution_date AS 'Date Run' , c.name AS 'Policy', a.target_query_expression AS 'Policy Failure Targets',
                  D.name as 'Condition', D.description as 'Condition Description', D.facet as 'Condition Facet'
     FROM syspolicy_policy_execution_history_details_internal a
        INNER JOIN syspolicy_policy_execution_history_internal b
           ON a.history_id = b.history_id
        INNER JOIN  syspolicy_policies_internal c
         ON b.policy_id = c.policy_id
       INNER JOIN syspolicy_conditions d
        ON c.condition_id = d.condition_id
     WHERE a.result = 0 AND a.execution_date >= GETDATE() -1
     ORDER BY a.execution_date DESC, c.name ASC, a.target_query_expression ASC;
GO

With the SQL above and the slide deck that is attached to this post, you should be well on your way to creating your own policies.

vConference – This week!

This is the week that the vConference will air. SQL Server WorldWide Users Group (SSWUG) puts on the event, and this year I was chosen to speak.

If you have not already registered, you can do so here : https://www.vconferenceonline.com/upcoming.asp 

I have a discount code for you to use as well : SVPJLOSP09

The recording of the event was very different from an in-person conference. It took me a few minutes to get used to the cameras and not having feedback from an audience. That said, I’m happy with my presentations, even though I moved a bit too fast and felt that I didn’t focus on some of the points and bring them back around the way I was hoping to.

At any rate, enjoy the conference, there is no better value for the money! 

Later this week I’ll have a couple of posts up about my sessions at DevConnections. They will get a lot more technical and, it’s my hope, be of use to you.

Partitioning

Over the past few years I have implemented partitioning a number of ways. In the world of SQL Server 2000, it was “home-made”, meaning that I would create a number of tables whose schemas were identical, create check constraints, and then link all of the data together with views. This method had tremendous upside but, as you can imagine, the overhead of moving data from one “partition” to another was very time-intensive and expensive from an I/O perspective.
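For anyone who never had to live through that era, a minimal sketch of the pattern looks something like this (table names and date ranges are hypothetical):

-- One physical table per "partition", each with a CHECK constraint on the key.
CREATE TABLE dbo.Sales_2008 (
    SaleDate datetime NOT NULL
        CONSTRAINT CK_Sales_2008 CHECK (SaleDate >= '20080101' AND SaleDate < '20090101'),
    Amount   money    NOT NULL);

CREATE TABLE dbo.Sales_2009 (
    SaleDate datetime NOT NULL
        CONSTRAINT CK_Sales_2009 CHECK (SaleDate >= '20090101' AND SaleDate < '20100101'),
    Amount   money    NOT NULL);
GO

-- The view stitches the "partitions" together; the CHECK constraints let the
-- optimizer skip tables that cannot contain the requested date range.
CREATE VIEW dbo.Sales AS
SELECT SaleDate, Amount FROM dbo.Sales_2008
UNION ALL
SELECT SaleDate, Amount FROM dbo.Sales_2009;
GO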

With SQL Server 2005, Microsoft introduced partitioning as part of the relational engine… This was huge news for me specifically: I could actually do this “right” for my clients. As many of you know, though, partitioning in SQL Server 2005 had some limitations; it was a great option, but the parallelism limitation that accompanied the feature left a bad taste in the mouths of many.

This brings us to SQL Server 2008 where, based on my testing, the parallelism issue is truly resolved and the optimizer is actually recognizing the partitions as it should.

So, what have I done? I’ve implemented it all over the place. That is the point of this post. At one client, it worked just as I imagined it would… at another, I hit a bit of a stumbling block. I did not realize the impact that non-clustered indexes would have if they weren’t created on the partition scheme. What I ended up with, over time, were very, very heavily fragmented indexes. In fact, some of them were so bad that the optimizer was choosing different paths to the data. (That’s B.A.D.)

I thought my fix would be a tough one, but it turned out to be uber-simple. I dropped the indexes that existed on the primary filegroup and re-created them on the partition scheme (a sketch follows). Voila, piece of cake. Everything was awesome, until about two weeks later when the fragmentation of those indexes caught up with me. This left me cursing the amount of time it was going to take to rebuild these huge indexes and wondering how often I’d have to do it…
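That initial fix, for reference, was roughly one DROP_EXISTING rebuild per index, pointed at the partition scheme instead of the primary filegroup; a hedged sketch with hypothetical names:

-- Recreate a non-aligned index on the table's partition scheme so it is
-- partitioned (aligned) with the data.
CREATE NONCLUSTERED INDEX IX_SalesFact_ProductID
ON dbo.SalesFact (ProductID)
WITH (DROP_EXISTING = ON)
ON ps_SalesByDate (SaleDate); -- partition scheme (partitioning column)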

So, I stopped trying to do it my way and went to my favorite resource: BOL. Of course, there is a way to do this and it’s not painful.

It’s the function $PARTITION.  If you are using partitioning in your environment and you are unaware of this function, become aware. 

What this function enabled, in a very easy and straightforward manner, was for me to quickly and dynamically identify the active partition. Once I had that information, I simply wrote an ALTER INDEX … REBUILD statement for the partition matching my variable’s value.

Code snippet below.

DECLARE @PartitionID INT;

SELECT @PartitionID = (SELECT $PARTITION.PartitionName(GETDATE()));

ALTER INDEX IndexName ON owner.TableName
REBUILD PARTITION = @PartitionID;

-- Note that I can pass GETDATE() to the $PARTITION function, since my
-- partition function (key) is a datetime value. Simple logic could be
-- used if it were an int.

The obvious advantage to this is that when I run the rebuild index script, it is only running for the active partition.  This brings down the amount of time required for this action and enables me to run it nightly as the data becomes very fragmented from day to day in the current partition.

So, my lesson for this day?

Easy, think ahead about what will happen down the road.  Always check BOL and test, test, test. 

Happy Partitioning!