How to Connect Python with SQL Server Database Using Pyodbc
In this tutorial, we will look at how to connect Python to SQL Server using the pyodbc library. Pyodbc is an open-source, community-supported Python module that makes accessing ODBC databases simple. Other database management systems (DBMS) also work with ODBC, including Oracle, MS Access, IBM Db2, MS SQL Server, and MySQL.
How to Get Started
Step 1
The first thing we need to do is install the pyodbc library so we can create a connection between Python and SQL Server.
Installation
We can install pyodbc with the pip package installer.
pip install pyodbc
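If you want to confirm that the installation worked, a quick optional check is to import pyodbc and list the ODBC drivers it can see on your machine:

#optional: confirm pyodbc imports and list the available ODBC drivers
import pyodbc
print(pyodbc.drivers())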
Step 2
Now, we need to create a connection using the pyodbc library we just installed. We need the server name, the database name, and the name of the table we want to query.
#import the pyodbc library
import pyodbc

#create a connection with pyodbc using the needed credentials
the_conn = pyodbc.connect('Driver={SQL Server};'
                          'Server=my-project.xxxxxxxxxxxxxs.amazonaws.com;'
                          'Database=myData;'
                          'Trusted_Connection=yes;')

#create a cursor and execute a SELECT statement to retrieve your table's rows
cursor = the_conn.cursor()
cursor.execute('SELECT * FROM your_table_name')

#loop through the result to obtain your table data
for j in cursor:
    print(j)
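In practice you will often want to filter the rows you fetch rather than pull the whole table. Pyodbc supports parameterized queries with ? placeholders, which lets the driver bind values safely instead of building the SQL string by hand. A minimal sketch, reusing the same placeholder server and database; the table and column names are hypothetical:

#a minimal sketch of a parameterized query; the table name and id column
#are placeholders, not values from this tutorial
import pyodbc

the_conn = pyodbc.connect('Driver={SQL Server};'
                          'Server=my-project.xxxxxxxxxxxxxs.amazonaws.com;'
                          'Database=myData;'
                          'Trusted_Connection=yes;')
cursor = the_conn.cursor()

#the ? placeholder is filled from the tuple passed as the second argument
cursor.execute('SELECT * FROM your_table_name WHERE id = ?', (42,))
for row in cursor.fetchall():
    print(row)

#close the cursor and connection when we are done
cursor.close()
the_conn.close()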
How to Connect Pyodbc with Pandas
You can also use the Pandas library together with pyodbc to connect to SQL Server. Pandas is well known for its flexibility and ease of use. You can follow the steps below to use it alongside pyodbc.
Install Pandas
We can install Pandas with pip, or with Anaconda or Miniconda, since it is part of the Anaconda distribution:
Install with pip
pip install pandas

Install with Conda
conda install pandas
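As an optional sanity check, you can confirm that both libraries import correctly and print their versions:

#optional: confirm both libraries import and print their versions
import pandas as pd
import pyodbc
print(pd.__version__, pyodbc.version)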
From Pandas DataFrame to SQL
Here, we will read data from a file on our computer into a Pandas DataFrame, and then convert the DataFrame to a SQL table. We will also use SQLAlchemy's create_engine in the script to pass in our connection URL.
#import our libraries
import pyodbc
import pandas as pd
import urllib.parse

#import create_engine from sqlalchemy
from sqlalchemy import create_engine

#these are the credentials we'll be passing in
server = 'my-project.xxxxxxxxxxxxxs.amazonaws.com'
database = 'myData'
username = 'admin'
password = 'xxxxxxxx'

#read the file we are bringing in with pandas
df = pd.read_csv('Python_Project.csv')

#use urllib.parse.quote_plus to encode our credentials into a connection string
quoted = urllib.parse.quote_plus(
    f"DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={server};DATABASE={database};UID={username};PWD={password}"
)
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))

#write the DataFrame to a SQL table
df.to_sql('provide_the_name_for_our_table', schema='dbo', con=engine)
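If you run the script above more than once, to_sql will raise an error because the table already exists. Pandas exposes the if_exists and index parameters to control this. A minimal sketch, using the same placeholder credentials, file, and table name, that replaces the table on each run and leaves the DataFrame index out of the result:

#same placeholder credentials, file, and table name as the script above
import urllib.parse
import pandas as pd
from sqlalchemy import create_engine

quoted = urllib.parse.quote_plus(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-project.xxxxxxxxxxxxxs.amazonaws.com;"
    "DATABASE=myData;UID=admin;PWD=xxxxxxxx"
)
engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(quoted))

df = pd.read_csv('Python_Project.csv')

#if_exists='replace' drops and recreates the table on each run,
#and index=False keeps the DataFrame index out of the table
df.to_sql('provide_the_name_for_our_table',
          schema='dbo',
          con=engine,
          if_exists='replace',
          index=False)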
From SQL to Pandas DataFrame
Earlier we learned how to convert a Pandas DataFrame to a SQL table. We can also go the other way: read SQL data with Pandas and load it into a DataFrame.
#import our libraries
import pandas as pd
import pyodbc

#create a connection using your credentials
the_conn = pyodbc.connect('Driver={SQL Server};'
                          'Server=my-project.xxxxxxxxxxxxxs.amazonaws.com;'
                          'Database=myData;'
                          'Trusted_Connection=yes;')

#use pandas read_sql_query along with our connection instance to retrieve our table
df = pd.read_sql_query('SELECT * FROM students', the_conn)

#print the first five rows of our data
print(df.head(n=5))
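read_sql_query also accepts a params argument, so you can filter rows on the database side instead of loading the whole table. A minimal sketch, assuming the same placeholder connection and a hypothetical grade column on the students table:

#a minimal sketch of a parameterized read; the grade column is hypothetical
import pandas as pd
import pyodbc

the_conn = pyodbc.connect('Driver={SQL Server};'
                          'Server=my-project.xxxxxxxxxxxxxs.amazonaws.com;'
                          'Database=myData;'
                          'Trusted_Connection=yes;')

#params fills the ? placeholder so the driver binds the value safely
df = pd.read_sql_query('SELECT * FROM students WHERE grade = ?',
                       the_conn,
                       params=[10])
print(df.head(n=5))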