There are a couple of steps needed to partition a table.  We can partition an existing table or create a new partitioned table and insert into it.  Assume roughly a 10% slowdown to qualify the data prior to putting it into the right physical structure.  Also note that ALL tables (since SQL Server 2005) are partitioned – just usually into a single partition.

HOW DO WE CHUNK the data?

Create partition function myFunction_PF (varchar(10))
as range right for values ('b','c','d')

range right means each boundary value belongs to the bucket on its right.  With boundaries 'b', 'c', 'd' we get four buckets: 'a' (and anything else less than 'b') goes into the first bucket, while 'e' (and 'z') go into the fourth.  'right' points to the equal sign in the criteria list – values that are equal to or greater than 'd' go into the last bucket.  range left flips this: each boundary belongs to the bucket on its left, so values equal to or less than 'b' go into the first bucket and values (not equal to but) greater than 'd' go into the last bucket.  That means that 'd' is in the third bucket if range left.
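As a sanity check, the built-in $PARTITION function reports which bucket a given value would land in.  A sketch against the function defined above:

```sql
-- which partition number does each value map to? (range right, boundaries 'b','c','d')
select $PARTITION.myFunction_PF('a') as bucket_a,  -- 1: less than 'b'
       $PARTITION.myFunction_PF('b') as bucket_b,  -- 2: boundary belongs to the right bucket
       $PARTITION.myFunction_PF('d') as bucket_d,  -- 4: equal to or greater than 'd'
       $PARTITION.myFunction_PF('z') as bucket_z   -- 4: beyond the last boundary
```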

varchar(10) is the parameter type – the type of the column whose values will be passed in as the partitioning key.  In this case it could have been char as well, or int, or datetime.  In the case of dates and times, we need to be careful about the exact type – a datetime2 column will NOT work if we specify datetime in the function.
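For example, a date-based function would be declared with the column's exact type.  (The function name and boundary dates below are made up for illustration.)

```sql
-- parameter type must match the partitioning column exactly (datetime2(0) here)
create partition function myDates_PF (datetime2(0))
as range right for values ('2024-01-01', '2024-04-01', '2024-07-01')
```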

That is all we do – create a ‘sorting hat’ that will be used later.  To delete a partition function:

drop partition function myFunction_PF


Create a partition scheme that will describe which physical filegroups will be used and reference the PF.  (Note the keyword is SCHEME, not schema – a partition scheme has nothing to do with database schemas.)

create partition scheme mySCHEMA_PS as PARTITION myFunction_PF ALL to ([PRIMARY])
create partition scheme mySCHEMA_PS as PARTITION myFunction_PF ALL to (newfilegroup)
create partition scheme mySCHEMA_PS as PARTITION myFunction_PF to (newfg1, newfg2, newfg3, newfg4)

That says that this scheme maps the function's buckets (from myFunction_PF) onto physical storage: send yet unrevealed data all to the primary filegroup, all to a (single) new filegroup, or spread among the n+1 listed filegroups – three boundary values produce four buckets, so four filegroups are listed.  We COULD repeat a filegroup name in the list if we want to put several chunks of data into the same filegroup.  Drop a scheme the same way as the partition function above.
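Dropping the scheme mirrors dropping the function:

```sql
drop partition scheme mySCHEMA_PS
```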


Finally, we can create a new table ON mySCHEMA_PS (partition_column_name).  At this point, the empty table “knows” that, based on the value of the data that goes into the partition column, it should use the partition function to split rows into groups that will be stored in the appropriate filegroup(s) – primary, newfilegroup, or newfg1..4.  When we insert data, the function will ‘sort’ the rows into the correct filegroup, creating a partitioned table.
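A minimal sketch of such a table – the table and column names are made up, and the scheme is the one created above:

```sql
create table mySales (
    customer varchar(10) not null,   -- the partitioning column; type matches the PF
    amount   money       not null
) on mySCHEMA_PS (customer)          -- rows are routed through myFunction_PF
```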

We can also take an existing table and partition it – the literature typically suggests dropping and re-creating the clustered index on the partition scheme; rebuilding the index forces the current data to be reorganized into the filegroups specified by the scheme.
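One hedged sketch of that approach, assuming an existing table myExisting that already has a clustered index on customer:

```sql
-- rebuild the clustered index on the partition scheme; DROP_EXISTING avoids a
-- separate drop step (and a moment where the table is an unindexed heap)
create clustered index CIX_myExisting
on myExisting (customer)
with (drop_existing = on)
on mySCHEMA_PS (customer)
```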