Normally, our log backups are under 10 MB in size (in a big database with a lot of transactions per hour).
But now we have a database that stores images in an Image field.
The problem is that the database is HUGE (the MDF file is now 500 GB and the LDF file is 700 GB), and every log backup is more than 70 GB!
What is the best way to store images in a database? I've seen a lot of posts suggesting storing the physical path of the image in a VARCHAR field. But that way, we still need to store the images somewhere, right? It will fill the storage just the same. Or am I wrong?
We have full backups every day at 22:00 and log backups every hour.
These are the sizes of the log backups.
EDIT1:
Each "Image" has a binary code, and it has 43679 characters ). 1 millions rows for this table ( 1.141.947 images ( with coduser, image, and etc )
EDIT2: We are now using varbinary(max), but it's not solving our problem. The table has already grown to almost 700 MB after 20 minutes.
1 Answer
If you have a high volume of changes (inserts, updates, or deletes) to those image BLOBs, then in the long run you would be best served by NOT storing them within the database. Assuming you're not having performance issues, which features like FILESTREAM and FileTables can help with, your problem is going to be the "velocity" of change to those BLOBs of data: a single-byte change within a BLOB turns into a full rewrite of the entire BLOB in the transaction log.
One possible "work-around" if you MUST keep them in the database AND you can tolerate some data lose of the binaries is instead store them in a separate database on the same instance. Set that database's recovery model to SIMPLE, and just do some extra differential or multi-day full backups. Then to query the data, utilize cross-database joins and/or views that do the same. Not an ideal situation, but at least a way to minimize data lose if your recovery needs can vary by individual data item (as opposed to the entire database).
- This is a really good point! Storing this table in another database with simple recovery mode. We have 200 inserts every 60 seconds. This table is now 500 MB (after 10 minutes; we dropped the table and recreated it). And the problem is not really performance, it's disk storage space (the backup disk and the MDF disk). – Racer SQL, Nov 4, 2015 at 20:18
- Or at the very least, store the blobs in a separate, related table, instead of in the table that is subject to all of the changes. Unless you're changing the blobs often (or these really are only inserts), this should minimize the logging activity; and if you are changing the blobs often, you want all of those changes captured in the log (that's kind of the point). – Aaron Bertrand, Nov 4, 2015 at 20:51
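A hypothetical sketch of that separation (table and column names are invented for illustration): the hot, frequently updated columns live in one table, and the write-once blobs live in a second table related one-to-one by key, so updates to the hot table never touch the blob pages.

```sql
-- Hot table: small rows, updated often.
CREATE TABLE dbo.UserProfile
(
    CodUser   INT PRIMARY KEY,
    LastLogin DATETIME2,
    Status    VARCHAR(20)
);

-- Cold table: one blob per user, ideally written once and rarely changed.
CREATE TABLE dbo.UserProfileImage
(
    CodUser   INT PRIMARY KEY
              REFERENCES dbo.UserProfile (CodUser),
    ImageData VARBINARY(MAX) NOT NULL
);
-- Updates to dbo.UserProfile no longer rewrite (or log) the blob data.
```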
Use varbinary(max), which allows up to 2 GB per image. Depending on your SQL Server edition, you should also spend time exploring FILESTREAM for storing images. Note that when you are using failover clustering, the FILESTREAM filegroups must be on shared disk resources.
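For example, a rough sketch of setting up FILESTREAM storage (the path, filegroup, and table names are assumptions; FILESTREAM must also be enabled at the instance level via SQL Server Configuration Manager first):

```sql
-- Allow full FILESTREAM access on the instance (after enabling it in
-- SQL Server Configuration Manager).
EXEC sp_configure 'filestream access level', 2;
RECONFIGURE;
GO

-- Add a FILESTREAM filegroup and container to the database.
ALTER DATABASE MainDB
ADD FILEGROUP ImagesFS CONTAINS FILESTREAM;

ALTER DATABASE MainDB
ADD FILE (NAME = 'ImagesFS', FILENAME = 'D:\SQLData\ImagesFS')
TO FILEGROUP ImagesFS;
GO

-- FILESTREAM tables require a ROWGUIDCOL column with a unique constraint;
-- the blob column is declared VARBINARY(MAX) FILESTREAM.
CREATE TABLE dbo.UserImagesFS
(
    ImageId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
              DEFAULT NEWID(),
    CodUser   INT NOT NULL,
    ImageData VARBINARY(MAX) FILESTREAM NULL
);
```

The blobs are then stored as files in the FILESTREAM container on the NTFS volume, while remaining queryable through T-SQL.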
- So with failover clustering, I need to use a disk that is inside the cluster, right? I can't use an external server, for example. I could configure it on my machine, and it would be perfect.