This page describes how to set up your data pipeline to read data from a
Microsoft SQL Server table.
Before you begin
Sign in to your Google Cloud account. If you're new to
Google Cloud,
create an account to evaluate how our products perform in
real-world scenarios. New customers also get $300 in free credits to
run, test, and deploy workloads.
In the Google Cloud console, on the project selector page,
select or create a Google Cloud project.
Roles required to select or create a project
Select a project: Selecting a project doesn't require a specific
IAM role—you can select any project that you've been
granted a role on.
Create a project: To create a project, you need the Project Creator role
(roles/resourcemanager.projectCreator), which contains the
resourcemanager.projects.create permission. Learn how to grant
roles.
Enable the Cloud Data Fusion, BigQuery, Cloud Storage, and Dataproc APIs.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM
role (roles/serviceusage.serviceUsageAdmin), which
contains the serviceusage.services.enable permission. Learn how to grant
roles.
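As an alternative to the console, you can enable these APIs from the command line. A sketch using the gcloud CLI, where PROJECT_ID is a placeholder for your project ID:

```shell
# Enable the APIs this guide uses. Replace PROJECT_ID with your
# Google Cloud project ID.
gcloud services enable \
    datafusion.googleapis.com \
    bigquery.googleapis.com \
    storage.googleapis.com \
    dataproc.googleapis.com \
    --project=PROJECT_ID
```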
Upload the SQL Server JDBC driver
In Cloud Data Fusion, click Menu and go to the Pipeline Studio page.
Click Add.
For the driver, click Upload.
Select the JAR file, located in the jre7 folder.
Click Next.
To configure the driver, enter a Name and Class name. For the Microsoft JDBC
driver, the class name is typically com.microsoft.sqlserver.jdbc.SQLServerDriver.
Click Finish.
Deploy the SQL Server Plugin
In Cloud Data Fusion, click Hub.
In the search bar, enter SQL Server Plugins.
Click SQL Server Plugins.
Click Deploy.
Click Finish.
Click Create a pipeline.
Connect to SQL Server
You can connect to SQL Server from Cloud Data Fusion in Wrangler or the Pipeline Studio.
Wrangler
In Cloud Data Fusion, click Menu and go to the Wrangler page.
Click Add connection.
An Add connection window opens.
Click SQL Server to verify that the driver is installed.
Enter details in the required connection fields. In the Password field, select
the secure key you stored previously. This ensures that your password is
retrieved using Cloud KMS.
To check that a connection can be established with the database, click
Test connection.
Click Add connection.
After your SQL Server database is connected and you've created a pipeline that
reads from your SQL Server table, you can apply transformations and
write your output to a sink.
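If Test connection fails, it can help to first confirm that the SQL Server host and port are reachable from the network where your instance runs. A small diagnostic sketch (the host name below is a placeholder; SQL Server listens on port 1433 by default):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example with a placeholder host; SQL Server's default port is 1433.
# tcp_reachable("sqlserver.example.internal", 1433)
```

A False result points to a networking or firewall issue rather than a credentials problem.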
Pipeline Studio
Open your Cloud Data Fusion instance and go to the Pipeline Studio
page.
Expand the Source menu and click SQL Server.
On the SQL Server node, click Properties.
In the Reference name field, enter a name that
identifies your SQL Server source.
In the Database field, enter the name of the database to connect to.
In the Import query field, enter the query to run. For example,
SELECT * FROM table WHERE $CONDITIONS.
Click Validate.
Click Close.
After your SQL Server database is connected and you've created a pipeline that
reads from your SQL Server table, add any desired transformations and
write your output to a sink.
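The $CONDITIONS placeholder in the import query is replaced at runtime with a per-split WHERE clause, so the source can read the table in parallel across several bounded queries. A minimal sketch of that substitution, assuming a hypothetical numeric split column id (the helper below is illustrative, not part of the Cloud Data Fusion API):

```python
# Illustrative sketch: expand an import query containing $CONDITIONS
# into one bounded query per split. The split column "id" and the
# expand_conditions helper are assumptions for illustration only.

def expand_conditions(import_query: str, column: str, lo: int, hi: int,
                      num_splits: int) -> list[str]:
    """Replace $CONDITIONS with a bounded range predicate for each split."""
    # Ceiling division over the key range so every key lands in a split.
    step = (hi - lo + num_splits) // num_splits
    queries = []
    for start in range(lo, hi + 1, step):
        end = min(start + step - 1, hi)
        predicate = f"{column} >= {start} AND {column} <= {end}"
        queries.append(import_query.replace("$CONDITIONS", predicate))
    return queries

for q in expand_conditions("SELECT * FROM table WHERE $CONDITIONS",
                           column="id", lo=1, hi=100, num_splits=4):
    print(q)
```

Each expanded query reads a disjoint range of the table, which is why an import query that uses a WHERE clause must include $CONDITIONS.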
Last updated 2025年10月16日 UTC.