Splitting single large data file into multiple files

I recently had to split one large data file in a filegroup into multiple data files. One of our databases had a data file of 1.8 TB, and we have at least one similarly large database on each server. We are migrating to a new server and, due to certain limitations, no single LUN can be larger than 800 GB. Hence the plan was to mount multiple LUNs and present them under a single drive letter for data files, or as multiple mount points under one drive letter.

Below is the process I used to split the large data file.

Step-1: Add new data files to the database, making sure the correct filegroup is selected.

How: I decided to split the data file into 5 data files of 400 GB each, so I added 4 secondary data files of 400 GB each.
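Adding the secondary files can be done in T-SQL along these lines (the database name, logical file names, path, and filegroup below are placeholders; substitute your own):

```sql
-- Sketch: add one of the 4 secondary data files, 400 GB, to the target filegroup.
-- [MyDatabase], MyDatabase_Data2, and the path are illustrative placeholders.
ALTER DATABASE [MyDatabase]
ADD FILE (
    NAME = N'MyDatabase_Data2',
    FILENAME = N'E:\Data\MyDatabase_Data2.ndf',
    SIZE = 400GB
) TO FILEGROUP [PRIMARY];
-- Repeat for MyDatabase_Data3 .. MyDatabase_Data5.
```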

Step-2: Disable autogrow for the newly created data files. This step is important to ensure the data is distributed evenly across all the data files.

How:

ALTER DATABASE <DatabaseName> MODIFY FILE ( NAME = N'<ndf_filename>', FILEGROWTH = 0)

OR using SSMS

Right click DATABASENAME – Properties – Files (left pane)

Click the ellipsis button in the ‘Autogrowth / Maxsize’ column for each new data file and uncheck “Enable Autogrowth”.

Step-3: “Empty” the original data file. Now that the required data files are in place and configured, we are ready to rebalance. This step moves the data into the 4 newly created data files. Since these files are exactly 400 GB each, the proportional fill algorithm will distribute the data evenly among them. Note that the primary data file cannot be emptied completely: system pages cannot be moved out of it, so the command will stop with an error once only those pages remain, which is expected.
DBCC SHRINKFILE (N'<mdf_filename>' , EMPTYFILE)

OR using SSMS

Right click on DATABASENAME – Tasks – Shrink – Files

Select the datafile.

After the task is complete, the data should be spread evenly across the new data files.
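To verify the distribution, a quick check against the catalog views can show how full each file is (a sketch; run it in the context of the database being rebalanced):

```sql
-- Sketch: report allocated vs. used space per data file.
-- size and FILEPROPERTY return 8 KB pages, so divide by 128 to get MB.
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
```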

Step-4: Re-enable autogrowth for the new data files.
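This mirrors Step-2, setting FILEGROWTH back to a sensible increment (the database name, file name, and growth value below are illustrative; choose values that suit your environment):

```sql
-- Sketch: turn autogrow back on for one of the new files; repeat per file.
ALTER DATABASE [MyDatabase]
MODIFY FILE ( NAME = N'MyDatabase_Data2', FILEGROWTH = 512MB );
```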

Step-5: Shrink the original data file to release the unused disk space back to the drive.
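The final shrink can be done with DBCC SHRINKFILE and a target size in MB (the file name and the 10240 MB target below are illustrative; pick a target that comfortably holds the data remaining in the file):

```sql
-- Sketch: shrink the original (now mostly empty) data file to ~10 GB.
DBCC SHRINKFILE (N'MyDatabase_Data1', 10240);
```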

 

References:

https://www.sqlskills.com/blogs/paul/investigating-the-proportional-fill-algorithm/

 

