
Let's say we have one SQL Server instance on one side:

  • SQL Server 2008 R2 SP3
  • Oracle 11.2 client installed, with the OLE DB provider and ODBC driver components

... and an Oracle database instance on the other side:

  • Oracle Database 11.1.0.7 PSU 24
  • InstanceName: MERCURE

I created two linked servers on the SQL Server instance (sketched just after the list):

  1. MERCURE (using an ODBC System DSN and the MSDASQL provider)
  2. MERCURE_OLE (using the OraOLEDB.Oracle provider)
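
Something along these lines (the DSN name, @srvproduct values and login mapping below are assumptions, not my exact setup):

EXEC master.dbo.sp_addlinkedserver
     @server = N'MERCURE',
     @srvproduct = N'',
     @provider = N'MSDASQL',
     @datasrc = N'MERCURE_DSN';      -- hypothetical ODBC System DSN name

EXEC master.dbo.sp_addlinkedserver
     @server = N'MERCURE_OLE',
     @srvproduct = N'Oracle',
     @provider = N'OraOLEDB.Oracle',
     @datasrc = N'MERCURE';          -- TNS alias of the Oracle instance

EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname = N'MERCURE_OLE',
     @useself = N'False',
     @locallogin = NULL,
     @rmtuser = N'JEL_PERF_TST',
     @rmtpassword = N'XXX';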

On the Oracle instance (with instance name MERCURE), I created a schema and table as follows:

CREATE USER JEL_PERF_TST identified by "XXX";
GRANT CONNECT,RESOURCE,UNLIMITED TABLESPACE to JEL_PERF_TST;
ALTER SESSION SET CURRENT_SCHEMA=JEL_PERF_TST;
-- select DEFAULT_TABLESPACE from dba_users where username='JEL_PERF_TST';
-- USERS
-- create a table for testing (Taken from https://use-the-index-luke.com/sql/example-schema/oracle/3-minute-test)
CREATE TABLE TestTbl (
 id NUMERIC NOT NULL,
 date_column DATE,
 a NUMERIC,
 b NUMERIC,
 text VARCHAR(255),
 state CHAR(1),
 PRIMARY KEY (id)
);
INSERT INTO TestTbl
SELECT level
     , SYSDATE - level
     , MOD(level, 1234)
     , TRUNC(DBMS_RANDOM.VALUE(1, 10))
     , DBMS_RANDOM.STRING('l', 20)
     , DECODE(MOD(level, 5), 0, 'X', 'A')   -- every 5th row gets state 'X'
  FROM dual
CONNECT BY level <= 50000;
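
After a COMMIT, a quick sanity check on the Oracle side confirms the row count before querying from SQL Server:

COMMIT;
SELECT COUNT(*) FROM JEL_PERF_TST.TestTbl;   -- expect 50000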

So, basically, I have 50,000 rows in my table. Now, let's run the following queries on the SQL Server instance:

First, let's activate statistics:

set statistics io on;
set statistics time on;

Then, run a first query against the OraOLEDB.Oracle linked server, using a four-part name:

SELECT * FROM [MERCURE_OLE]..[JEL_PERF_TST].[TESTTBL];

Here is what I got:

SQL Server parse and compile time: 
 CPU time = 32 ms, elapsed time = 274 ms.
(50000 row(s) affected)
 SQL Server Execution Times:
 CPU time = 2199 ms, elapsed time = 5769 ms.

Performance isn't great (almost 6 seconds to fetch 50,000 rows), but that doesn't matter here: we get 100% of the rows back.

Let's now run the query through OPENQUERY against the same linked server:

select * from openquery(MERCURE_OLE,'select * from JEL_PERF_TST.TestTbl');

It returns:

SQL Server parse and compile time: 
 CPU time = 0 ms, elapsed time = 103 ms.
(50000 row(s) affected)
 SQL Server Execution Times:
 CPU time = 2542 ms, elapsed time = 6716 ms.

Same here: performance is poor, but we get 100% of the rows returned.

Now, let's run the exact same query, but using the MSDASQL provider:

select * from openquery(MERCURE,'select * from JEL_PERF_TST.TestTbl');

It results in:

 SQL Server parse and compile time: 
 CPU time = 15 ms, elapsed time = 107 ms.
(**49901** row(s) affected)
 SQL Server Execution Times:
 CPU time = 1357 ms, elapsed time = 3932 ms.

Timing is better, but where are my 99 missing rows?
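One way to pin down exactly which rows go missing is to compare the two providers on the primary key, along these lines:

-- ids returned by the OraOLEDB path but not by the MSDASQL path
SELECT id FROM [MERCURE_OLE]..[JEL_PERF_TST].[TESTTBL]
EXCEPT
SELECT id FROM OPENQUERY(MERCURE, 'select id from JEL_PERF_TST.TestTbl');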

I've tried many changes to my ODBC DSN configuration (enabling/disabling result sets, raising the Fetch Buffer Size, using MSDTC or not, etc.). Nothing worked.

Any clue?

asked Dec 2, 2016 at 13:41

1 Answer

Microsoft fixed this issue in Cumulative Update 4 for SQL Server 2014.

https://support.microsoft.com/help/2993262/
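
A quick way to check which build you are running, to see whether it already includes the fix:

SELECT SERVERPROPERTY('ProductVersion'), SERVERPROPERTY('ProductLevel');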

If you are unable to install the update, they provide a workaround:

To work around this issue, force a nested loops join or a hash join with an OPTION query hint instead of letting the optimizer pick a merge join.
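For example, a query that joins local data to the linked server could steer the optimizer away from the merge join like this (dbo.SomeLocalTable is purely illustrative):

SELECT l.id, r.text
FROM dbo.SomeLocalTable AS l
JOIN OPENQUERY(MERCURE, 'select id, text from JEL_PERF_TST.TestTbl') AS r
    ON r.id = l.id
OPTION (HASH JOIN);   -- or OPTION (LOOP JOIN)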

Randolph West
answered Nov 29, 2019 at 22:14
