Tuesday, May 8, 2012

Defragmenting a SQL Database

Issue

Generally, it is not necessary to defragment a SQL database; SQL Server does a good job of managing file resources. However, there are times when it may be necessary. One example is a database stored on a drive that is fairly full (> 80% utilization). In this case, database expansions are likely to result in excessive fragmentation, especially if you have not changed the default 1 MB autogrowth increment.

Resolution

There are two main ways to defragment a SQL database.

  1. Take the database offline and detach it. Move the files to a second drive, defragment the source volume, move the files back, then re-attach the database.
  2. Defragment the files where they sit, while users are still in the database. Download the Contig.exe utility from the Resources link below. This is a defragmentation utility produced by SysInternals that can defragment a single file. Run contig /? from the command line for a list of available options. Notably, contig -a <filename> returns an analysis of the file, including the number of fragments. Contig also supports wildcards, so contig -a -s *.mdf reports fragmentation for all .MDF files in the current folder and all subfolders. An example run is shown below.
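For example, analyzing and then defragmenting a single data file might look like the following (the paths are illustrative - point Contig at your own .MDF / .LDF files):

contig -a "D:\SQLData\MyDatabase.mdf"    (analyze only - reports the fragment count)
contig "D:\SQLData\MyDatabase.mdf"       (defragment that one file in place)
contig -a -s "D:\SQLData\*.ldf"          (analyze every log file under D:\SQLData)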

Lastly - please, for gosh sakes, back up any files before you make such a low-level change.

Resources

http://technet.microsoft.com/en-us/sysinternals/bb897428

Syntax

Usage:

contig [-a] [-s] [-q] [-v] [existing file]
or contig [-f] [-q] [-v] [drive:]
or contig [-v] -n [new file] [new file length]

-a: Analyze fragmentation
-f: Analyze free space fragmentation
-q: Quiet mode
-s: Recurse subdirectories
-v: Verbose

Contig can also analyze and defragment the following NTFS metadata files:

$Mft, $LogFile, $Volume, $AttrDef, $Bitmap, $Boot, $BadClus, $Secure, $UpCase, $Extend

Wednesday, February 1, 2012

Setting up SQL Notifications and Alerts

When running a SQL server, it’s handy to let it tell you how things are going, rather than having to go look at the logs every so often and see if anything’s broken.

Here are the steps to do this. In this case, I have a large number of clients, so to standardize, I set up a single GMail account to use for the outgoing email notifications on all the servers I manage. If you have an internal mail server or are using something other than GMail, you’re a big kid now - plug your own settings in.

Tasks:

  • Setup your mail account
  • Setup Database mail
  • Setup Operators
  • Setup Alerts
  • Maintenance Plan Failure notification
  • Testing / Debugging
  • Resources

Note: These settings are Instance specific, so you'll need to do this for EACH instance.

 

Setup your Mail Account

Set up your GMail account and verify that you can send / receive emails.

Setup Database Mail

Setup Mail Profile

  • SQL Mgt Studio
  • Management Folder / Database Mail / Right Click / Configure
  • Next / Set up DB Mail / Next (If prompted to enable Database Mail, choose Yes)
  • Profile Name: Alert Notifications
  • Add new SMTP Account
  • Account name: {Descriptive name of your email user} 
  • Email address: {yourmail@server.com}
  • Display Name: SQL Alert – {Server \ Instance name}, e.g., JumboCorp \ CounterPoint
  • Reply email: {yourmail@server.com}
  • Email Server: smtp.gmail.com
  • Port: 587
  • Require secure connections: checked
  • Enter the basic authentication username / password. Note that for GMail you’ll use your full email address
  • Click Next
  • Check Public Profile / Set Default Profile to Yes
  • Click Next
  • Accept defaults for system parameters / Next
  • Review / Finish / Close

To test, right click Database Mail, choose Send Test E-Mail.
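You can also trigger a test from T-SQL with msdb's sp_send_dbmail; a minimal sketch, assuming the profile name and address set up above (substitute your own):

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'Alert Notifications',
    @recipients = N'yourmail@server.com',
    @subject = N'Database Mail test',
    @body = N'If you can read this, Database Mail is working.'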


Setup Operators

  • Right Click Operators under SQL Server Agent / New Operator
  • Name: {Descriptive name for destination email address} 
  • E-mail name: {destinationemail@server.com}
  • Save

Set up additional operators if other users wish to be notified of SQL alerts. If you'd rather script this step, see the sketch below.
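Operator creation can also be scripted with msdb's sp_add_operator; a sketch with placeholder names:

EXEC msdb.dbo.sp_add_operator
    @name = N'DBA Team',                            -- descriptive name (placeholder)
    @enabled = 1,
    @email_address = N'destinationemail@server.com'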

 

Setup Alerts

Enable Alerts in the SQL Agent

  • Right click SQL Server Agent / Properties / Alert System
  • Enable mail profile
  • Mail system: Database Mail
  • Mail Profile: Alert Notifications (the profile created above)
  • Ok
  • Restart the SQL Server Agent service
    • (If notifications don't go out after a SQL Agent restart, you may also need to restart the SQL Service)

Setup Alerts - General Alerts

Run the script below against the Master database (see the SQL Script - General Alerts section at the end of this post).

 

Enabling Alert Notification

After the alerts are created, you'll need to activate notification for each of them (one at a time)

  • Right click the alert / Properties
  • Response tab / Notify operators: checked
  • Check the E-mail box for each operator that should receive the alert
  • Click the Options tab / Include alert error text in: E-mail (checked)
  • Delay between responses: 180 minutes (3 hours)
  • Ok
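Clicking through each alert gets tedious, and this step can also be scripted with msdb's sp_add_notification. A sketch that wires one alert to one operator (the names must match the alerts created by the script and the operator you set up above):

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Alert Level 17 - Insufficient Resources',
    @operator_name = N'DBA Team',        -- the operator you created (placeholder name)
    @notification_method = 1             -- 1 = e-mail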


 

Large Log File Alert

Additional alerts like this one cannot be included in the general script because they reference specific databases. You'll need to create an alert for each of the databases you wish to monitor. (A scripted version is sketched after the list.)

  • Right click Alerts under SQL Server Agent / New Alert
  • Alert name: {Servername / Instance Name - DBNAME} DB - Large Log File Alert
  • Type: SQL Server performance condition alert
  • Object: SQL Server Databases
  • Counter: Log File(s) Size (KB)
  • Instance: Choose the DB to monitor
  • Alert if counter: rises above
  • Value: {insert a reasonable number here; for a large database, 50000000 KB is roughly 50 GB}
  • Enable the email alert notifications as in the section above
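A sketch of the scripted equivalent, assuming a database named JumboCorpDB and the 50 GB threshold above. The @performance_condition string follows the pattern 'object|counter|instance|comparator|value'; note that on a named instance the performance object prefix is MSSQL$InstanceName: rather than SQLServer:.

EXEC msdb.dbo.sp_add_alert
    @name = N'JumboCorpDB - Large Log File Alert',
    @enabled = 1,
    @delay_between_responses = 10800,
    @performance_condition = N'SQLServer:Databases|Log File(s) Size (KB)|JumboCorpDB|>|50000000'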

Maintenance Plan Success / Failure notification

Open the maintenance plan to which you'd like to add notifications. You can add notifications to just the backup task or to multiple tasks. To add a notification, use the following steps:

  • Grab the Notify Operator Task and drag it into the maintenance plan.
  • From the step on which you'd like to receive the notification, grab an available component output and connect it to the Notify Operator Task. Right click on the component output and choose whether you'd like the notification to happen on success or failure.
  • Edit the Notify Operator Task and check any operators that should be alerted in this notification
  • In the Subject line, enter {Server\Instance} - {Action} Failed / Succeeded
    • For example: JumboCorp\DynamicsGP - Transaction Log backup failed
  • In the notification message, repeat the above line.
  • Repeat the steps above for any other points in the plan that you'd like notification for.
  • NOTE: If you want to receive both a success and failure notification, you'll need to add two separate Notify Operator Tasks


Testing / Debugging

Test Alert Level Notifications

Run the following script against the Master database. It should trigger an alert notification and email response.

RAISERROR('This is a test', 17, 1) WITH LOG

Checking Sent Mail status

Run the following script against the Master database. It returns a list of the emails that have been sent and the status of each.

SELECT * FROM msdb.dbo.sysmail_allitems
SELECT * FROM msdb.dbo.sysmail_event_log

If outgoing emails fail, test the account by installing an email client on that machine with the same configuration as above.

Incoming mail server: pop.gmail.com
Port: 995
Require SSL: Checked

You can download Windows Live Mail here -
http://explore.live.com/windows-live-essentials-other-programs?T1=t2

Only install the email client (Live Mail).

Resources

SQL Server 2008 : Database Mail - Using SQL Server Agent Mail
http://mscerts.programming4.us/sql_server/sql%20server%202008%20%20%20database%20mail%20-%20using%20sql%20server%20agent%20mail.aspx

Setup SQL Server 2008 Maintenance Plan Email Notifications
http://808techblog.com/2009/07/setup-sql-server-2008-maintena.html

SQL Server Alerts: Soup to Nut
http://www.simple-talk.com/sql/database-administration/sql-server-alerts-soup-to-nuts/

SQL Script - General Alerts

Requirements

DBMail activated
Run against Master DB

 

Script

USE [msdb]
GO

/****** Object:  Alert [Alert Level 17 - Insufficient Resources]    Script Date: 11/16/2011 12:20:18 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 17 - Insufficient Resources',
        @message_id=0,
        @severity=17,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 18 - Nonfatal Internal Error]    Script Date: 11/16/2011 12:20:18 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 18 - Nonfatal Internal Error',
        @message_id=0,
        @severity=18,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 19 - Fatal Error in Resource]    Script Date: 11/16/2011 12:20:18 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 19 - Fatal Error in Resource',
        @message_id=0,
        @severity=19,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 20 - Fatal Error in Current Process]    Script Date: 11/16/2011 12:20:18 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 20 - Fatal Error in Current Process',
        @message_id=0,
        @severity=20,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 21 - Fatal Error in Database Processes]    Script Date: 11/16/2011 12:20:18 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 21 - Fatal Error in Database Processes',
        @message_id=0,
        @severity=21,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 22 - Fatal Error: Table Integrity Suspect]    Script Date: 11/16/2011 12:20:19 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 22 - Fatal Error: Table Integrity Suspect',
        @message_id=0,
        @severity=22,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 23 - Fatal Error: Database Integrity Suspect]    Script Date: 11/16/2011 12:20:19 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 23 - Fatal Error: Database Integrity Suspect',
        @message_id=0,
        @severity=23,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 24 - Fatal Error: Hardware Error]    Script Date: 11/16/2011 12:20:19 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 24 - Fatal Error: Hardware Error',
        @message_id=0,
        @severity=24,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

/****** Object:  Alert [Alert Level 25 - Fatal Error]    Script Date: 11/16/2011 12:20:19 ******/
EXEC msdb.dbo.sp_add_alert @name=N'Alert Level 25 - Fatal Error',
        @message_id=0,
        @severity=25,
        @enabled=1,
        @delay_between_responses=10800,
        @include_event_description_in=1,
        @category_name=N'[Uncategorized]',
        @job_id=N'00000000-0000-0000-0000-000000000000'
GO

Friday, October 28, 2011

Recommended SQL Source Books

Based on a recommendation list by Kevin Kline of Quest Software: his picks for, if you were only going to buy one or two books in each category of SQL management, what to buy. They’re on my Amazon wishlist if someone’s feeling generous…anyone…Bueller?

For T-SQL programming
 

For SQL Internals

For Administration

Friday, August 12, 2011

Troubleshooting CRM Performance

Once in a while, the SQL box hosting CRM at the office begins to bog down. Here are a few scripts and procedures that I use to help troubleshoot what can be causing the slowdowns.

First of all – start with the basics. Does the server have adequate disk space? How badly fragmented are the drives?

NOTE: The built-in Windows Defrag won’t touch files larger than 2 GB so you may have to defrag by moving the large files onto another volume, defragging, then bringing them back.

Find a SQL process with high CPU utilization

Here’s a general SQL script that lists the processes in the database ordered by CPU usage, so the worst offender is at the top. You can then use the SPID to kill the offending process. Insert the name of your organization database into the script as noted.

USE Master
GO

DECLARE @DATABASE_ID INT
SET @DATABASE_ID = DB_ID('organizationname_MSCRM');

SELECT ST.TEXT,
SP.*
FROM DBO.SYSPROCESSES SP
CROSS APPLY SYS.DM_EXEC_SQL_TEXT(SP.SQL_HANDLE) ST
WHERE SP.DBID = @DATABASE_ID
ORDER BY CPU DESC
GO

Once you know which process is stuck, you can kill it using the KILL command with the SPID you gathered from the script above. NOTE: KILLING A RUNNING PROCESS IS NORMALLY A BAD IDEA. Only use this to kill a process that’s hung.

KILL 53   -- replace 53 with the offending SPID

Cleaning up completed workflows

As workflows complete, their records remain in the AsyncOperationBase table. Over time, this table can grow to an enormous size. To resolve the problem, run the following script against your organization_MSCRM database. This is a one-time script; if you want it to run on a recurring basis, you could set it up as a SQL Agent job.

NOTE: There is sometimes business value in being able to see completed workflows. Make sure you really want them gone before running the script.

IF EXISTS (SELECT name from sys.indexes
                  WHERE name = N'CRM_AsyncOperation_CleanupCompleted')
      DROP Index AsyncOperationBase.CRM_AsyncOperation_CleanupCompleted
GO
CREATE NONCLUSTERED INDEX CRM_AsyncOperation_CleanupCompleted
ON [dbo].[AsyncOperationBase] ([StatusCode],[StateCode],[OperationType])
GO

declare @DeleteRowCount int
select @DeleteRowCount = 2000
declare @DeletedAsyncRowsTable table (AsyncOperationId uniqueidentifier not null primary key)
declare @continue int, @rowCount int
select @continue = 1
while (@continue = 1)
begin
    begin tran
    insert into @DeletedAsyncRowsTable(AsyncOperationId)
    select top (@DeleteRowCount) AsyncOperationId from AsyncOperationBase
    where OperationType in (1, 9, 12, 25, 27, 10) and StateCode = 3 and StatusCode in (30, 32)

    select @rowCount = 0
    select @rowCount = count(*) from @DeletedAsyncRowsTable
    select @continue = case when @rowCount <= 0 then 0 else 1 end

    if (@continue = 1)
    begin
        delete WorkflowLogBase from WorkflowLogBase W, @DeletedAsyncRowsTable d
        where W.AsyncOperationId = d.AsyncOperationId

        delete BulkDeleteFailureBase from BulkDeleteFailureBase B, @DeletedAsyncRowsTable d
        where B.AsyncOperationId = d.AsyncOperationId

        delete WorkflowWaitSubscriptionBase from WorkflowWaitSubscriptionBase WS, @DeletedAsyncRowsTable d
        where WS.AsyncOperationId = d.AsyncOperationId

        delete AsyncOperationBase from AsyncOperationBase A, @DeletedAsyncRowsTable d
        where A.AsyncOperationId = d.AsyncOperationId

        delete @DeletedAsyncRowsTable
    end
    commit
end

--Drop the Index on AsyncOperationBase
DROP INDEX AsyncOperationBase.CRM_AsyncOperation_CleanupCompleted

The above script was taken from Microsoft KB968520

Finding and removing waiting workflows

Here’s a script to list open workflow operations on the SQL Server - workflows stuck in a waiting state. Too many of these and the server starts bogging down; I start getting worried at 50,000, and at 100,000 we usually see the server start to croak.

Run this against your organization_mscrm database.

Select DISTINCT(asyncoperation.Name ), COUNT(asyncoperation.Name ) AS NUM
from asyncoperation
where StateCode = 1
    and RecurrenceStartTime is null
    and DeletionStateCode = 0
Group BY asyncoperation.Name
ORDER BY NUM DESC

An example might be a workflow that no longer exists, but has live versions of itself stuck in a waiting state. To remove these orphaned workflows, use the following steps. You can get the workflow name from the script above. Insert into the marked locations below.

PLEASE NOTE: MS DOES NOT SUPPORT editing data directly in the database. You do this at your own risk - and for the sake of all that’s holy, back up your database first.

delete from workflowwaitsubscriptionbase
where asyncoperationid in
    (select asyncoperationid from asyncoperationbase
     where name = 'insert name of workflow here' and StateCode = 1)

delete from workflowlogbase
where asyncoperationid in
    (select asyncoperationid from asyncoperationbase
     where name = 'insert name of workflow here' and StateCode = 1)

delete from asyncoperationbase
where name = 'insert name of workflow here' and StateCode = 1

Async service on multi-core

KB Article 2489162 on CustomerSource / PartnerSource has a good tip on improving CRM Async performance on multi-core servers. Apparently the Async service doesn’t automatically take advantage of multi-core processors. Log in to CS/PS and search for article 2489162 to find the article titled How to improve Microsoft CRM Async Performance on multi-core servers.

Tuesday, June 14, 2011

SQL Server–Reboot Pending

An annoying error that stops SQL reconfigurations dead. Sometimes it’s warranted…so reboot your freaking server already. However, sometimes it’s a case of the computer’s right hand not knowing what its left one is doing, and something gets stuck in the registry. That pesky stop error can often be resolved by clearing the entries found under the following key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\PendingFileRenameOperations

Being a geek, I always save the key out before I delete it, but then I’ve been around long enough to have that healthy fear.

Wednesday, January 26, 2011

Renaming a SQL Server

We use virtual machines in-house to test different software configurations. Some of these servers run SQL, and part of that process is renaming the servers. SQL doesn’t like this because it stores its own name internally: stored procedures can fail, scripts may not work right, and SQL Agent jobs may freak out. However, there’s a process for renaming that works pretty well.

On the SQL Server, you can retrieve the internally stored name by running the query:

SELECT @@SERVERNAME AS 'Server Name'

If the instance of SQL you’re working with is the default instance, you can change the servername using the following script:

sp_dropserver OLD_NAME
GO
sp_addserver NEW_NAME, local
GO

If the instance you’re working with is not the default instance, use the script:

sp_dropserver 'OLD_NAME\instancename'
GO
sp_addserver 'OLD_NAME\instancename', local
GO



Note that you can’t change the instance name using this process, and the rename doesn’t take effect until you restart the SQL Server service.


The information here was pulled from Microsoft Technet at http://technet.microsoft.com/en-us/library/ms143799.aspx

Saturday, January 22, 2011

SQLPass 2011 Notes

01/22/2011 – SQLPass training at UofL in Louisville, Kentucky

Live notes from sessions

 

Session 1 - SQL Query performance optimization

Use SET STATISTICS (the IO and TIME variants) to get feedback on the efficiency of your statements. Provides results, the execution plan, and messages that show the cost of the statement.
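A sketch of turning these on for the session - run your query between the ON and OFF statements and read the costs from the Messages tab:

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- run the query you want to measure here
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;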

Rule of thumb - whenever possible, do not wrap columns in functions in your predicates; the statement becomes far more 'busy' and can't use indexes:

BAD:

SELECT ModifiedDate
FROM Sales.Customer c
WHERE YEAR(ModifiedDate) = 2005

GOOD:

SELECT ModifiedDate
FROM Sales.Customer c
WHERE ModifiedDate >= '01/01/2005' AND ModifiedDate < '01/01/2006'

Parameter Sniffing

Local variables are expensive – compiler has to make assumptions about values when creating the plan and account for possibilities.

Instead, use stored procedures with parameters. Compilation uses the actual values when compiling.

NOTE: stored proc execution plan compiles for the values present when compiled. If you rerun with new values, this requires a new plan and may be more expensive.
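A sketch of the pattern from the notes, reusing the Sales.Customer example above (the procedure name is invented):

CREATE PROCEDURE GetCustomersModifiedBetween
    @Start datetime, @End datetime
AS
    SELECT ModifiedDate
    FROM Sales.Customer
    WHERE ModifiedDate >= @Start AND ModifiedDate < @End;
GO

-- The plan is compiled using these actual ("sniffed") parameter values,
-- not the generic guesses the optimizer must make for local variables.
EXEC GetCustomersModifiedBetween '01/01/2005', '01/01/2006';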

Action item: check out CROSS APPLY.

Better Data Vis with Reporting Svcs 2008 R2

Presenter Arie Jones arie.jones@pti.net

www.programmersedge.com

www.sqlsherpa.com

www.twitter.com/programmersedge

Check out MongoDB video on YouTube

AJ’s 10 second rule – Data vis needs to be understandable within 10 seconds of viewing the report

Image Indicators

SQL 2008 R2 includes indicators as part of the available tool elements.

When setting ranges for KPIs, start is >=, end is <

KPIs include all the conditional formatting capabilities of other report elements

Custom images can be imported and used in reports (company logo)

Data Bars

Show a bar-chart style representation of values

To show multiple styles of representation based on user preference, add multiple elements and use visibility property and show / hide based on a report parameter. Can even store user preference in db and pull back into reports using a 2nd dataset for the report.

Sparklines

Show trending over time

Can use tooltips for datapoints to display the values at specific points.

Switching between sparkline types maintains tooltips, conditional expressions, etc.

Spatial Data

Includes mappings by state, by county, checkbox for bing map linking

Metadata is included in the maps so the geolocation is not needed

Running in Windows 7, you have to run the dev environment as Administrator

New Features

Pagination improvements

Named Excel Worksheet Tabs

Text rotation up to 270 deg

Cache refresh plans

New Web Svc endpoints

Azure data retrieval

<fill additional feature list>

 

Database Design Considerations

  • Gather requirements
  • Initial design
    • Tables, Columns, Relationships, Constraints
  • Normal Forms <investigate>
  • Atomicity – break data down until it cannot be split further
    • Store data at the level you want to use it to report
  • To be in 1NF, uniqueness needs to come from a column in the table that has meaning
  • FName, LName, Children is not 1NF – CustNum, FName, LName is good (see the sketch below)
  • Investigate Boyce-Codd Normal Form
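A minimal sketch of the 1NF point above (table and column names invented for illustration):

-- Not 1NF: a repeating group of children crammed into one row
CREATE TABLE CustomerBad (FName varchar(50), LName varchar(50), Children varchar(200))

-- 1NF: a meaningful key, plus one row per child in a separate table
CREATE TABLE Customer (CustNum int PRIMARY KEY, FName varchar(50), LName varchar(50))
CREATE TABLE CustomerChild (
    CustNum int REFERENCES Customer (CustNum),
    ChildName varchar(50)
)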

Real World Analytics

  • Ranking Data
  • Recursive expressions
  • Rows to columns / crosstab or pivot
  • Use DateDiff() to compute programmable start / end dates rather than wrapping columns in functions – functions on columns cannot make use of table indexes and prompt full scans rather than seeks (see the sketch after this list)
  • Check on anchor queries, recursive CTEs
  • Mark Wills article Dynamic Pivot Procedure for SQL Server – http://e-e.com/A_653.html
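A sketch of the DateDiff() idiom the note points at - compute the boundaries once, then compare the bare column so an index seek stays possible (the table and column names are invented):

DECLARE @MonthStart datetime, @MonthEnd datetime
SET @MonthStart = DATEADD(month, DATEDIFF(month, 0, GETDATE()), 0)  -- first day of this month
SET @MonthEnd = DATEADD(month, 1, @MonthStart)                      -- first day of next month

SELECT *
FROM SalesOrder                     -- hypothetical table
WHERE OrderDate >= @MonthStart
    AND OrderDate < @MonthEnd       -- no function wraps OrderDate, so indexes stay usable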

Red Gate SQL Monitor

Presenter Louis Davidson – drsql@hotmail.com

Effective SQL: Understanding Order of Operation

Presenter Kevin C. Cross – Elliott Tool Technologies LTD

  • PEMDAS
    • Parenthesis
    • Exponents
    • Multiplication and Division
    • Addition and Subtraction
  • Additional
    • Unary Operations
    • Modulo
    • Type Conversions / Collation
    • Bitwise and logical operations

BAD:

SELECT *
FROM Products
WHERE CATEGORY = 1 OR CATEGORY = 2
    AND PRICE <= 40

The PRICE <= 40 is tied only to the second CATEGORY test, not applied as an independent second condition, because AND binds more tightly than OR.

GOOD:

SELECT *
FROM Products
WHERE (CATEGORY = 1 OR CATEGORY = 2)
    AND PRICE <= 40

  • To avoid integer division, where 1/3 = 0, use arithmetic in the query to promote one of the values to a decimal. This avoids the more expensive approach of dropping a CAST on top of the column.

BAD: SELECT CAST(column1 AS float) / column2

GOOD: SELECT column1 * 1.00 / column2

  • Like data type conversions, collation will be applied to strings before comparison (EXPENSIVE)
  • Bitwise ops have the same precedence as addition / subtraction (after multiplication / division)
  • When joining, start with the table that contains the most data you’re using – the least selective table
  • Select precedence
    • from
    • where
    • group by
    • having
    • select
    • distinct
    • union all
    • order by
    • top

Friday, December 3, 2010

CounterPoint SQL DTS Errors

One of the software packages our company sells / supports is a Point of Sale package called CounterPoint by Radiant Systems. For more information about CounterPoint, check out our website at www.accelerando.net

One of our project managers was migrating a customer from the older Pervasive-based version to the new SQL-based version. CounterPoint provides a utility to import historical data that uses the DTS engine to push the data into the new databases.

As you may know, DTS is a deprecated feature and can be difficult to get running on a new system, as current SQL installs no longer include it.

There is a backwards compatibility toolkit available for SQL that provides DTS capabilities, and this was installed. However, CounterPoint was still firing errors when the DTS package kicked off. The errors we were getting in the log file were:

Step 'DTSStep_DTSDataPumpTask_1' failed

Step Error Source: Microsoft Data Transformation Services (DTS) Data Pump
Step Error Description:ActiveX Scripting Transform 'DTSTransformation__4' was not able to initialize the script execution engine.

Ultimately, this came down to missing DLL files in the DTS implementation. Following is a list of the steps I came up with to enable DTS functionality.

Requirements

  • SQL 2005 Backward Compatibility Components (source below)
  • The redistributable DTS files (available on SQL 2000 Media)

Steps

  • Install the SQL 2005 Backward compatibility components
  • Open a command prompt and change directory to C:\Program Files\Microsoft SQL Server\80\Tools\Binn
  • Register the following DLLs using the syntax regsvr32 dllname.dll
    • dtsffile.dll
    • dtspkg.dll
    • dtspump.dll
    • axscphst.dll
    • custtask.dll

In our case, the axscphst.dll file was not registered because it had not been installed with the Backward Compatibility Components. I was able to retrieve the file from the SQL 2000 media.

Note that each of the above DLL files has an associated .RLL file that is located in Resources\1033 under the Binn folder listed above. The .RLL files are also available from the SQL 2000 Media.

NOTE: For my fellow co-workers, I have a zip file put together with all the files involved. It’s available on the ADrive FTP site. Contact me for connection information if needed. There’s a publicly downloadable link here - http://tinyurl.com/sql2000dts

In coming up with this solution, I relied on the following sources:

Monday, October 11, 2010

Shrink a LogFile in SQL

Over time, log files on SQL databases can become bloated. This procedure provides scripts to shrink a log file back to a manageable size.

1) Perform a complete backup of the database

2) Perform a backup of the transaction log file

3) Run following script to shrink the log file
USE [DB_NAME]
DBCC SHRINKFILE (N'DB_Logical_Name' , 100, TRUNCATEONLY)
BACKUP LOG DB_NAME WITH TRUNCATE_ONLY

4) Check the size of the log file. If it has not shrunk, there may be active log records pinning the end of the file. Use the following script to create a sample table and fill it with data, which _should_ cycle the log and free up the stuck portion.

/* Create the table */
USE [DB_NAME]
IF OBJECT_ID('MyTable') IS NOT NULL DROP TABLE MyTable
CREATE TABLE MyTable (MyField VARCHAR(10), PK INT)
INSERT MyTable (PK) VALUES (1)

/* Fill the table */
SET NOCOUNT ON
DECLARE @Index INT
SELECT @Index = 0
WHILE (@Index < 20000)
BEGIN
   UPDATE MyTable SET MyField = MyField WHERE PK = 1
   SELECT @Index = @Index + 1
END
SET NOCOUNT OFF

5) After running the script rerun the shrink script. The transaction log should now shrink.

USE [DB_NAME]
DBCC SHRINKFILE (N'DB_Logical_Name' , 100, TRUNCATEONLY)
BACKUP LOG DB_NAME WITH TRUNCATE_ONLY

6) If it still hasn't shrunk, delete the data from the MyTable table, rerun the fill script, and try the shrink again.

/* Diagnostics */
DBCC LOGINFO

 

This tip came from Rob Bamforth on his blog at http://robbamforth.wordpress.com/2009/11/16/sql-how-to-shrink-un-shrinkable-database-log-database-log-wont-shrink/

Techniques to Shrink a SQL Database

Sometimes the problem is not an inflated logfile or dbfile, but the data itself. In that case, cleaning out old data will help regain that valuable disk space.

A way to gain a little space without purging data is to reindex the database. Indexes, like many other things in SQL grow over time and a reindex will shrink them back down to their fighting weight.

 


 

To shrink the indexes, use the following script…

USE MyDatabase
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 85)"
GO
EXEC sp_updatestats
GO

This script will take some time to process. After it is completed, you’ll need to shrink the actual file to regain that lost space.

DBCC SHRINKFILE (N'DB_Logical_Name' , 100, TRUNCATEONLY)

Monday, August 23, 2010

Hidden Attribute Max Length Prevents Relationship Mapping

We have setup a custom attribute called Toll Free that we use to track 800 numbers.

In trying to setup a relationship map between the Lead and Account entity, I received the well known error:

“This attribute map is invalid. The selected attributes should be of the same type. The length of the target attribute should be equal to or greater than the length of the source attribute and the formats should match. The target attribute should not be used in another mapping.”

In looking, I noticed that we had the attribute set to a max of 25 characters on the Lead and the default 100 characters on the Account. Oh, well, there’s the problem. I edited the attribute on the Account and shortened it. Problem solved, right?

Nope, same error. After checking all the other relationships to make sure it wasn’t used elsewhere, I was about to tear out some hair. Then I stumbled on Ronald Lemmen’s article about the error. His article didn’t solve the problem, but in the comments, someone had the solution.


When an attribute is created, a second database field called MaxLengthCRMAttribute is set to 2x the length of the attribute. Here’s the key: when you shrink the length, the max length doesn’t shrink - but when you increase the length, it does.

So I edited the Lead attribute that had been created, set it to 100, published then set it back to 25 and published again. After that, I was able to add the relationship because the max length field values now matched.

Wednesday, August 5, 2009

SQL Maintenance Plans – Rebuild or Reorganize?

Over time, customers may begin to notice a slowdown in the responsiveness of their Dynamics GP or CRM systems. One of the culprits can be SQL indexes that have become badly fragmented. It’s like telling your SQL Server to drive to California and then handing it a 20 year old road map – it’s going to take a little longer than it should!


In a perfect world, each of our customers would either hire a SQL DBA to constantly monitor the server, tweaking and optimizing, or else pay us to do this. Yeah…right. This is where maintenance plans come in – ways of automating and scheduling those cleanup tasks so that the system doesn’t deteriorate over time.

Paul Thurrott of SuperSite for Windows posits that the brain is like a stack. Put a new memory in at the top and an old memory has to drop out the bottom. As I get older, I’m starting to believe this more and more! Memorize that esoteric SQL command and out drops that memory of Skippy, my pet turtle. (Did I have a turtle? I seem to remember something like that…)

That’s where this blog post comes in. I need to document those common tasks that I use every time I setup a new maintenance plan and there’s no reason you shouldn’t benefit as well.

Overview:

Create a single maintenance plan called SQL Maintenance. Create sub-plans under that master plan for the different scheduled elements.

Nightly sub plan:

  • Full backup of all user databases
  • Delete any existing full database backups older than 1 week (or more, depending on disk space)
  • Delete any existing transaction log backups older than 1 week (or more, depending on disk space)
  • Reorganize Indexes
  • Update Statistics

Daily sub plan (runs every hour):

  • Transaction log backup of all user databases

Weekly sub plan:

  • DB Integrity Check
  • Full backup of all databases (including system)
  • Rebuild Indexes
  • Update Statistics
  • Delete history older than 8 weeks
  • Shrink Databases


According to Pinalkumar Dave, indexes should be rebuilt when greater than 40% fragmented; between 10% and 40%, use reorganize. The query below shows where each index falls.
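A sketch for checking fragmentation with the physical-stats DMV (SQL 2005 and later); run it in the database you care about:

SELECT OBJECT_NAME(ips.object_id) AS TableName,
       i.name AS IndexName,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10   -- below 10%, leave the index alone
ORDER BY ips.avg_fragmentation_in_percent DESC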

Edit log for this post:

9/2 – Revised history deletion to put both full backup and transaction log backups in the same sub-plan.