I'm managing a 1.5 TB database that is growing at a rate of ~37 GB/week. I have been asked to split the database into 500 GB files, but I cannot use additional filegroups due to restrictions from the vendor.
My thinking was to create three new files and use the DBCC SHRINKFILE
command to spread the data evenly across them. Could anyone comment on best practices? Any pitfalls in my process?
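For reference, this is roughly what I had in mind (logical file names and paths below are placeholders, not from my actual system):

```sql
-- Add new 500 GB files to the PRIMARY filegroup
-- (names/paths are hypothetical)
ALTER DATABASE MyBigDb
ADD FILE (NAME = MyBigDb_Data2, FILENAME = 'E:\Data\MyBigDb_Data2.ndf',
          SIZE = 500GB, FILEGROWTH = 10GB);
ALTER DATABASE MyBigDb
ADD FILE (NAME = MyBigDb_Data3, FILENAME = 'F:\Data\MyBigDb_Data3.ndf',
          SIZE = 500GB, FILEGROWTH = 10GB);

-- Then push the data out of the original file
DBCC SHRINKFILE (MyBigDb_Data1, EMPTYFILE);
```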
-
SQL Server always uses at least one filegroup (PRIMARY). Do you mean that you cannot use any additional filegroups? – RBarryYoung, May 8, 2017 at 17:21
-
> Do you mean that you cannot use any additional filegroups?

Correct. – Mark Henry, May 8, 2017 at 17:28
2 Answers
You really do not have much choice because of how proportional fill works in SQL Server.
Proportional fill and how to check whether it works
I am assuming you currently have one file in the primary filegroup.
- You can rebuild objects that have a clustered index, and those will land almost entirely in the new files because the new files have more free space. As your current file frees up space, however, rebuilds may stop moving data into the new files (depending on the free space in each file).
- With DBCC SHRINKFILE, the only way this can work is with the EMPTYFILE
option, because you cannot shrink a file smaller than its currently used space; attempting a plain shrink to a smaller size will not work. Once you shrink with the EMPTYFILE
option, the current file will hold only some system objects, and the rest will be distributed over your three new files.
The next problem is that subsequent inserts will favor the old (now mostly empty) file, because the proportional fill algorithm directs new allocations toward the file with the most free space.
Here is an article by Paul Randal on the same topic.
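To see how proportional fill is actually distributing your data, you can check used versus free space per file. This is a standard query against `sys.database_files` (run it in the target database):

```sql
-- Used vs. free space per data file, in MB
-- (size and SpaceUsed are reported in 8 KB pages, hence / 128)
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';
```

Run it before and after the shrink, and again after a week of inserts, and you will see the proportional fill behavior described above.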
First I would seek clarification on what the long-term plan is. Will you keep those three files and grow them as needed, or will you continue to add 500 GB files as needed? The way I usually see it done is to pick the number of files up front and let them grow with the data. If you do it the other way, you'll end up with a new file every few months.
If you can temporarily get some space allocated to the server, I would create the files for a new database, turn on trace flags 1117 and/or 1118 if desired, install your code, and move all of your data using minimally logged inserts. That will get your data nicely distributed across the files.
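A minimal sketch of that approach, assuming hypothetical database, file, and table names (and SQL Server 2016+, where uniform extent allocation replaces trace flag 1118 for user databases):

```sql
-- New database with three equally sized data files in PRIMARY;
-- proportional fill then spreads allocations evenly across them
CREATE DATABASE MyBigDb_New
ON PRIMARY
    (NAME = Data1, FILENAME = 'D:\Data\MyBigDb_1.mdf', SIZE = 500GB),
    (NAME = Data2, FILENAME = 'E:\Data\MyBigDb_2.ndf', SIZE = 500GB),
    (NAME = Data3, FILENAME = 'F:\Data\MyBigDb_3.ndf', SIZE = 500GB)
LOG ON
    (NAME = Log1, FILENAME = 'G:\Log\MyBigDb_log.ldf', SIZE = 50GB);

-- Minimal logging requires SIMPLE or BULK_LOGGED recovery
ALTER DATABASE MyBigDb_New SET RECOVERY BULK_LOGGED;

-- Copy one table at a time; TABLOCK enables minimally logged
-- inserts into an empty target table
INSERT INTO MyBigDb_New.dbo.BigTable WITH (TABLOCK)
SELECT * FROM MyBigDb.dbo.BigTable;
```

After the copy, switch the new database back to FULL recovery and take a full backup before cutting over.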
The point of the above strategy is to avoid fragmentation. It can be a lot of extra work, depending on how difficult it is to install your application and how data is distributed in your tables. I've never tried DBCC SHRINKFILE (Test1data, EMPTYFILE)
myself, so I can't comment on that.
-
> First I would seek clarification on what the long term plan is

The primary reason for splitting is that our VM group has told us they will not provide a partition larger than 2 TB, so to answer your question, we would like to use three files and allow the database to grow evenly across all of them. Would it be possible to go into a little more detail on moving the data into the newly created files? My initial investigation yielded the DBCC SHRINKFILE (Test1data, EMPTYFILE) option to 'migrate' the data off the present file and distribute it evenly across the three new files. Is this a valid method? – Mark Henry, May 9, 2017 at 13:36
-
@MarkHenry I tried to make my answer a little more relevant. I mostly hear about this stuff second hand, so I don't think I can be more helpful. – Joe Obbish, May 17, 2017 at 3:55