Top Tips for Effective Database Maintenance
Paul S. Randal

At a Glance:
- Managing data and transaction log files
- Eliminating index fragmentation
- Ensuring accurate, up-to-date statistics
- Detecting corrupted database pages
- Establishing an effective backup strategy

Several times a week I'm asked for advice on how to effectively maintain a production database. Sometimes the questions come from DBAs who are implementing new solutions and want help fine-tuning maintenance practices to fit their new databases' characteristics. More frequently, however, the questions come from people who are not professional DBAs but for one reason or another have been given ownership of and responsibility for a database. I like to call this role the "involuntary DBA."

My top five areas of concern are (in no particular order of importance):
- Data and log file management
- Index fragmentation
- Statistics
- Corruption detection
- Backups

An unmaintained (or poorly maintained) database can develop problems in one or more of these areas, which can eventually lead to poor application performance or even downtime and data loss. In this article, I'll explain why these issues matter and show you some simple ways to mitigate the problems. I will base my explanations on SQL Server 2005, but I'll also point out the major differences you'll find in SQL Server 2000 and SQL Server 2008.

Let's start with data and log file management. This is the first thing to check when taking over a database. Specifically, you should make sure that:
- The data and log files are separated from each other and isolated from everything else as well
- Auto-growth is configured correctly
- Instant file initialization is configured
- Auto-shrink is not enabled and shrink is not part of any maintenance plan

When data and log files (which ideally should be on separate volumes altogether) share a volume with any other application that creates or expands files, there is the potential for file fragmentation. In data files, excessive file fragmentation can be a small contributing factor in poorly performing queries (specifically those that scan very large amounts of data). In log files, it can have a much more significant impact on performance, especially if auto-growth is set to increase each file size only by a very small amount each time it is needed.

Log files are internally divided into sections called Virtual Log Files (VLFs), and the more fragmentation there is in the log file (I use the singular here because there is no gain from having multiple log files—there should only be one per database), the more VLFs there are.
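If you're curious how many VLFs a log file already has, the sketch below is one simple way to check; DBCC LOGINFO returns one row per VLF, so the number of rows returned is the VLF count. The database name is just a placeholder.

-- Quick VLF count check: DBCC LOGINFO returns one row per Virtual Log File.
-- 'MyDatabase' is a placeholder; substitute the database you are examining.
USE MyDatabase;
GO
DBCC LOGINFO;
GO
-- A result set with many hundreds (or thousands) of rows indicates a heavily
-- fragmented log, typically caused by repeated small auto-growths.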
Once a log file has more than, say, 200 VLFs, performance can be negatively impacted for log-related operations such as log reads (for transactional replication/rollback, for example), log backups, and even triggers in SQL Server 2000 (the implementation of triggers changed in SQL Server 2005 to use the row versioning framework instead of the transaction log).

The best practice regarding the sizing of data and log files is to create them with an appropriate initial size. For data files, the initial size should take into account the potential for additional data being added to the database in the short term. For instance, if the initial size of the data is 50GB, but you know that over the next six months an additional 50GB of data will be added, it makes sense to create the data file to be 100GB right away, rather than having to grow it several times to reach that size.

It's a little more complicated for log files, unfortunately, and you need to consider factors like transaction size (long-running transactions cannot be cleared from the log until they complete) and log backup frequency (since this is what removes the inactive portion of the log). For more information, see "8 Steps to Better Transaction Log Throughput," a popular post on SQLskills.com.

Once set, the file sizes should be monitored and proactively grown manually at an appropriate time of day. Auto-grow should be left on as a just-in-case protection so the files can still grow if they need to if some abnormal event occurs. The logic against leaving file management entirely to auto-grow is that auto-grow of small amounts leads to file fragmentation, and that auto-grow can be a time-consuming process that stalls the application workload at unpredictable times.

The auto-grow size should be set to a specific value, rather than a percentage, to bound the time and space needed to perform the auto-grow, if it occurs. For instance, you may want to set a 100GB data file to have a fixed 5GB auto-grow size, rather than, say, 10 percent. This means it will always grow by 5GB, no matter how large the file ends up being, rather than by an ever-increasing amount (10GB, 11GB, 12GB, and so on) each time the file gets bigger.

When a transaction log is grown (either manually or through auto-grow), it is always zero-initialized. Data files have the same default behavior in SQL Server 2000, but starting with SQL Server 2005 you can enable instant file initialization, which skips zero-initializing the files and makes growth and auto-growth virtually instantaneous. Contrary to popular belief, this feature is available in all editions of SQL Server. For more information, enter "instant file initialization" in the index of Books Online.

Finally, take care that shrink is not enabled in any way. Shrink can be used to reduce the size of a data or log file, but it is a very intrusive, resource-heavy process that causes massive amounts of logical scan fragmentation in data files (see below for details) and leads to poor performance. I changed the SQL Server 2005 Books Online entry for shrink to include a warning to this effect. Manual shrinking of individual data and log files, however, can be acceptable under special circumstances.

Auto-shrink is the worst offender, as it starts every 30 minutes in the background and tries to shrink databases where the auto-shrink option is enabled. It is a somewhat unpredictable process in that it only shrinks databases with more than 25 percent free space. Auto-shrink uses lots of resources and causes performance-dropping fragmentation and so is not a good plan under any circumstances. You should always switch off auto-shrink with:

ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;

A regular maintenance plan that includes a manual shrink command is almost as bad. If you find that your database continually grows after the maintenance plan shrinks it, that's because the database needs that space in which to run. The best thing to do is allow the database to grow to a steady-state size and avoid running shrink altogether. You can find more information on the downsides of using shrink, plus some commentary on the new shrink algorithms in SQL Server 2005, on MSDN.
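To make the auto-growth advice above concrete, here is a minimal sketch of how you might check and then fix a percentage-based growth setting. The database name and logical file name (MyDatabase, MyDatabase_Data) are placeholders; the real logical names are listed in sys.database_files.

-- Inspect current growth settings for all files in the current database.
-- growth is in 8KB pages unless is_percent_growth = 1.
USE MyDatabase;
GO
SELECT name, size, growth, is_percent_growth
FROM sys.database_files;
GO
-- Switch the data file from percentage growth to a fixed 5GB increment.
-- 'MyDatabase_Data' is a placeholder logical file name.
ALTER DATABASE MyDatabase
    MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 5GB);
GO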
Now let's turn to fragmentation inside the data files themselves. There are two basic types of fragmentation that can occur within a data file:
- Fragmentation within individual data and index pages (sometimes called internal fragmentation)
- Fragmentation within index or table structures consisting of pages (called logical scan fragmentation and extent scan fragmentation)

Internal fragmentation is where there is a lot of empty space in a page. As Figure 1 shows, each page in a database is 8KB in size and has a 96-byte page header (specific table and index internals are covered on my blog in the Inside The Storage Engine category). Empty space can occur if each table or index record is more than half the size of a page, as then only a single record can be stored per page. This can be very hard or impossible to correct, as it would require a table or index schema change, for instance by changing an index key to be something that doesn't cause random insertion points like a GUID does.

Figure 1 The structure of a database page

More commonly, internal fragmentation results from data modifications, such as inserts, updates, and deletes, which can leave empty space on a page. Mismanaged fill-factor can also contribute to fragmentation; see Books Online for more details. Depending on the table/index schema and the application's characteristics, this empty space may never be reused once it is created and can lead to ever-increasing amounts of unusable space in the database.

Consider, for instance, a 100-million-row table with an average record size of 400 bytes. Over time, the application's data modification pattern leaves each page with an average of 2800 bytes of free space. The total space required by the table is about 59GB, calculated as (8096 - 2800) / 400 = 13 records per 8KB page, then dividing 100 million by 13 to get the number of pages. If the space wasn't being wasted, then 20 records would fit per page, bringing the total space required down to 38GB. That's a huge savings!

Wasted space on data/index pages can therefore lead to needing more pages to hold the same amount of data. Not only does this take up more disk space, it also means that a query needs to issue more I/Os to read the same amount of data. And all these extra pages occupy more space in the data cache, thus taking up more server memory.

Logical scan fragmentation is caused by an operation called a page split. This occurs when a record has to be inserted on a specific index page (according to the index key definition) but there is not enough space on the page to fit the data being inserted. The page is split in half and roughly 50 percent of the records are moved to a newly allocated page. This new page is usually not physically contiguous with the old page and is therefore called fragmented. Extent scan fragmentation is similar in concept, but at the level of extents (groups of eight contiguous pages).

Fragmentation within the table/index structures affects the ability of SQL Server to do efficient scans, whether over an entire table/index or bounded by a query WHERE clause (such as SELECT * FROM MyTable WHERE Column1 > 100 AND Column1 < 4000). Figure 2 shows newly created index pages with a 100% fill factor and no fragmentation: the pages are full and the physical order of the pages matches the logical order. Figure 3 shows the fragmentation that can occur after random inserts, updates, and deletes.

Figure 2 Newly created index pages with no fragmentation; pages 100% full

Figure 3 Index pages showing internal and logical scan fragmentation after random inserts, updates, and deletes

Fragmentation can sometimes be prevented by changing the table/index schema, but as I mentioned above, this may be very difficult or impossible. If prevention is not an option, there are ways to remove fragmentation once it has occurred—in particular, by rebuilding or reorganizing an index.
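Before choosing between those two options, it helps to measure how fragmented an index actually is. The sketch below is one way to do that on SQL Server 2005 or later; the database and table names are placeholders, and SAMPLED mode is used so that page density is also reported.

-- Measure logical scan fragmentation and page density for one table's indexes.
-- 'MyDatabase' and 'dbo.MyTable' are placeholder names.
SELECT  index_id,
        index_type_desc,
        avg_fragmentation_in_percent,    -- logical scan fragmentation
        avg_page_space_used_in_percent,  -- page density (low values = internal fragmentation)
        page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(N'MyDatabase'),
            OBJECT_ID(N'MyDatabase.dbo.MyTable'),
            NULL, NULL, 'SAMPLED');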
Rebuilding an index involves creating a new copy of the index—nicely compacted and as contiguous as possible—and then dropping the old, fragmented one. As SQL Server creates the new copy of the index before removing the old one, it requires free space in the data files approximately equivalent to the size of the index. In SQL Server 2000, rebuilding an index was always an offline operation. In SQL Server 2005 Enterprise Edition, however, index rebuilding can take place online, with a few restrictions. Reorganizing, on the other hand, uses an in-place algorithm to compact and defragment the index; it requires only 8KB of additional space to run—and it always runs online. In fact, in SQL Server 2000, I specifically wrote the index reorganize code as an online, space-efficient alternative to rebuilding an index.

In SQL Server 2005, the commands to use are ALTER INDEX ... REBUILD to rebuild indexes and ALTER INDEX ... REORGANIZE to reorganize them. This syntax replaces the SQL Server 2000 commands DBCC DBREINDEX and DBCC INDEXDEFRAG, respectively.
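To make that syntax concrete, here are two minimal examples. The index and table names are placeholders, and the ONLINE option in the rebuild example assumes Enterprise Edition.

-- Rebuild an index (offline by default; ONLINE = ON requires Enterprise Edition).
-- 'MyIndex' and 'dbo.MyTable' are placeholder names.
ALTER INDEX MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);
GO
-- Reorganize the same index: in-place, always online, needs far less free space.
ALTER INDEX MyIndex ON dbo.MyTable REORGANIZE;
GO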