I'm trying to move SQL Server database files into an Azure storage account container. Creating or restoring databases in container storage works flawlessly, but moving fails.
First, I create a credential with the SAS token (permissions: rwdl):
CREATE CREDENTIAL [https://foobar.blob.core.windows.net/databases]
WITH IDENTITY='SHARED ACCESS SIGNATURE', SECRET = 'secretsecretsecret'
After that, I alter the existing database and point both the data and log files to the storage account:
ALTER DATABASE [AdventureWorks] MODIFY FILE
( NAME = AdventureWorks, FILENAME =
'https://foobar.blob.core.windows.net/databases/AdventureWorks.mdf');
ALTER DATABASE [AdventureWorks]
MODIFY FILE ( NAME = AdventureWorks_log, FILENAME =
'https://foobar.blob.core.windows.net/databases/AdventureWorks.ldf');
After copying the MDF and LDF files to the databases container, I restart the database:
ALTER DATABASE [AdventureWorks] SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [AdventureWorks] SET ONLINE;
But I'm getting this error:
Msg 5181, Level 16, State 5, Line 8
Could not restart database "AdventureWorks". Reverting to the previous status.
Msg 5069, Level 16, State 1, Line 8
ALTER DATABASE statement failed.
However, if I create a new database and point its files to the storage account, it works:
CREATE DATABASE [WorkingAdventure] ON PRIMARY
( NAME = N'WorkingAdventure', FILENAME =
N'https://foobar.blob.core.windows.net/databases/WorkingAdventure.mdf',
SIZE = 1GB, MAXSIZE = UNLIMITED, FILEGROWTH = 50MB)
LOG ON
( NAME = N'WorkingAdventure_log', FILENAME =
N'https://foobar.blob.core.windows.net/databases/WorkingAdventure.ldf',
SIZE = 2MB, MAXSIZE = 250MB, FILEGROWTH = 1MB)
Restoring a backup works, too.
I'm trying to avoid backing up the databases and restoring them into the storage account because of their size (about 20 TB in total).
1 Answer
Before we go further, I don't think your method will work as written, because you have the steps slightly out of order. You need to take the database offline, THEN move/copy the files, THEN run the SET ONLINE command to bring it back. That should work, but it won't give you the small downtime window you are apparently looking for.
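For clarity, here is a rough sketch of that order, reusing the file URLs from the question; the azcopy commands and the local D:\Data paths are just an example of how the copy might be done, and the SAS placeholder is not from the original post:

ALTER DATABASE [AdventureWorks] SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Copy the physical files into the container while the database is offline,
-- e.g. from a command prompt (azcopy shown as one option; <sas> is a placeholder):
--   azcopy copy "D:\Data\AdventureWorks.mdf" "https://foobar.blob.core.windows.net/databases/AdventureWorks.mdf?<sas>"
--   azcopy copy "D:\Data\AdventureWorks_log.ldf" "https://foobar.blob.core.windows.net/databases/AdventureWorks.ldf?<sas>"

-- The MODIFY FILE statements from the question only take effect the next time
-- the database starts, so they can run before or after the copy.
ALTER DATABASE [AdventureWorks] SET ONLINE;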
My suggestion is as follows:
- Create a new database <MyDatabase_Copy> with its files in the storage account.
- Restore (WITH NORECOVERY) a full backup, then a differential, then rolling log backups from your database to <MyDatabase_Copy>. Keep restoring log backups as they are generated until you are ready (see the sketch after this list).
- Identify your window!
- Take a last log backup.
- Take the source database offline and detach it.
- Restore the last log backup to <MyDatabase_Copy> (WITH RECOVERY).
- Rename <MyDatabase_Copy> to <MyDatabase>.
- Let users back in.
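A minimal sketch of that restore sequence, assuming a source database named MyDatabase, a copy named MyDatabase_Copy, and backup files already in the container (all names and URLs below are placeholders, not from the post):

RESTORE DATABASE [MyDatabase_Copy]
FROM URL = 'https://foobar.blob.core.windows.net/databases/MyDatabase_full.bak'
WITH MOVE 'MyDatabase' TO 'https://foobar.blob.core.windows.net/databases/MyDatabase_Copy.mdf',
     MOVE 'MyDatabase_log' TO 'https://foobar.blob.core.windows.net/databases/MyDatabase_Copy.ldf',
     NORECOVERY, REPLACE;

RESTORE DATABASE [MyDatabase_Copy]
FROM URL = 'https://foobar.blob.core.windows.net/databases/MyDatabase_diff.bak'
WITH NORECOVERY;

-- Repeat for each log backup as it is generated:
RESTORE LOG [MyDatabase_Copy]
FROM URL = 'https://foobar.blob.core.windows.net/databases/MyDatabase_log_001.trn'
WITH NORECOVERY;

-- During the downtime window, restore the final (tail) log backup:
RESTORE LOG [MyDatabase_Copy]
FROM URL = 'https://foobar.blob.core.windows.net/databases/MyDatabase_tail.trn'
WITH RECOVERY;

-- Detach or drop the old database, then rename the copy:
ALTER DATABASE [MyDatabase_Copy] MODIFY NAME = [MyDatabase];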
Total downtime should be in the tens-of-minutes range, AND you have a way back if it falls apart on you: just detach the "new" and reattach the "old".
Make sure your backup files are compressed. If you really do have issues getting files into the cloud fast enough, MSFT doesn't talk about it much but I'm sure they are still doing the "ship us a drive" method; never underestimate the transmission speed of a station wagon full of thumbdrives.
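As an illustration of compressed backups going straight to the container (reusing the credential from the question; the backup file name is just an example):

BACKUP DATABASE [AdventureWorks]
TO URL = 'https://foobar.blob.core.windows.net/databases/AdventureWorks_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;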
I did also see this method for moving database files while the database stays online, but it's not complete (you can't move system objects this way) and it seems pretty janky/error-prone. It looks like it could technically work, though at 20 TB? Link - https://www.itprotoday.com/sql-server/move-database-files-without-taking-the-database-offline
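As best I can tell, the approach in that article boils down to something like the following; the file names here are assumptions, and the primary file that holds the system objects cannot be emptied this way:

-- Add a new file in the target location to the same filegroup:
ALTER DATABASE [AdventureWorks]
ADD FILE ( NAME = N'AdventureWorks_data_new',
           FILENAME = N'https://foobar.blob.core.windows.net/databases/AdventureWorks_data_new.ndf',
           SIZE = 1GB, FILEGROWTH = 1GB );

-- Push the contents of the old secondary file into the other files of its
-- filegroup, then drop it:
DBCC SHRINKFILE (N'AdventureWorks_data_old', EMPTYFILE);
ALTER DATABASE [AdventureWorks] REMOVE FILE [AdventureWorks_data_old];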
First of all, thank you for your advice and time. The downtime isn't the issue here; it doesn't matter at all. These databases (about 30 of them) are dead data, basically just archives. It doesn't seem to work even with the order you proposed (first offline, then copy). I already started backing up/restoring the databases; I still have the weekend, so it should be done by Monday, I hope. Anyway, I'm still interested in why this doesn't work. – s3b, Aug 23, 2024 at 20:15
DETACH and ATTACH method?
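For reference, the detach/attach route that comment hints at would look roughly like this (a sketch reusing the file URLs from the question; the detach/copy/attach order is an assumption about what the commenter means):

EXEC sp_detach_db @dbname = N'AdventureWorks';
-- Copy AdventureWorks.mdf and the log file into the container, then:
CREATE DATABASE [AdventureWorks]
ON ( FILENAME = N'https://foobar.blob.core.windows.net/databases/AdventureWorks.mdf' ),
   ( FILENAME = N'https://foobar.blob.core.windows.net/databases/AdventureWorks.ldf' )
FOR ATTACH;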