Author name: cndro

Blogs

Getting Started With Regular Expressions in Python

Photo by Alex Chumak on Unsplash

In this tutorial, we'll be discussing Regular Expressions in Python. According to Wikipedia, a regular expression is a sequence of characters that specifies a search pattern in text. Put differently, regular expressions are patterns used for matching character combinations in strings. Python provides a module that supports regex, known as re, which we can import for use in our code. Let's look at different examples of regex in Python.

In this first example, we use the match function to check whether our test string starts with "I" and ends with "texas":

```python
import re

pattern = '^I.*texas$'
test_string = 'I am in texas'
result = re.match(pattern, test_string)

if result:
    print("Got the search.")
else:
    print("Search unsuccessful.")
```

Output:

```
Got the search.
```

Before we look at more examples, let's discuss the different regex functions available in Python.

RegEx Functions

findall: The findall function returns all non-overlapping matches of the pattern in our string of data. In this example, we find all the digits we have in our text string:

```python
import re

# The string of text where the regular expression will be searched.
string_1 = """Here are some customer's ids, Mr Joseph: 396
Mr Jones: 457
Mrs Shane: 222
Mr Adams: 156
Miss Grace: 908"""

# The regular expression for finding digits in the string.
regex_1 = r"(\d+)"
match_1 = re.findall(regex_1, string_1)
print(match_1)
```

We should have this as our result:

```
['396', '457', '222', '156', '908']
```

search: The search function checks whether a particular pattern exists anywhere within a string. If the search is successful, it returns a match object; if not, it returns None.
Look at the example we have here, where we use the search function to look for "school" in our string:

```python
import re

s = 'what school did you graduate from?'

match = re.search(r'school', s)

print('Start Index:', match.start())
print('End Index:', match.end())
```

It gives us the result below, which tells us the start and end index of "school"; this matches what we get if we count the characters in the string:

```
Start Index: 5
End Index: 11
```

split: The split function splits the string at each occurrence of the regex pattern we specify and returns a list containing the resulting substrings. Here is an example to understand this better; we split the string at the first two hyphens using a pattern that matches runs of non-alphanumeric characters:

```python
import re

# the pattern for one or more non-alphanumeric characters
pattern = r'\W+'

# our target string
string = "100-joe-01-10-2022"

# we want to split the string at the first 2 hyphens only
txt_ = re.split(pattern, string, maxsplit=2)

print(txt_)
```

Output:

```
['100', 'joe', '01-10-2022']
```

sub: The sub function replaces every match of a pattern in a string and returns the new, substituted string. Let's see the example below:

```python
import re

# Our given string
s = "Debugging is very important when coding."

# Performing the sub() operation
out_1 = re.sub('a', 'x', s)
out_2 = re.sub('[a,I]', 'x', s)
out_3 = re.sub('very', 'not', s)

# Print output
print(out_1)
print(out_2)
print(out_3)
```

Output:

```
Debugging is very importxnt when coding.
Debugging is very importxnt when coding.
Debugging is not important when coding.
```

(Note that out_2 is identical to out_1 here: the character class [a,I] matches 'a', ',' or 'I', and only 'a' occurs in our string.)

Now, if we observe, we used a different pattern in each of the functions mentioned above. This brings us to meta characters.

Python RegEx Meta Characters

Meta characters are very useful in defining rules to find the specific pattern we want in a string.
The most common meta characters are listed below:

.   any character except a newline
^   start of the string
$   end of the string
*   zero or more repetitions of the preceding element
+   one or more repetitions of the preceding element
?   zero or one repetition of the preceding element
[]  any one character from the set inside the brackets
|   either/or
()  a group, whose match is captured
\   escapes a special character, or signals a special sequence such as \d (digit) or \w (word character)

And that's about Regular Expressions in Python. Thanks for reading this article. See you in the next post.
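To see a few of these meta characters working together, here is a small sketch (the pattern and test strings are illustrative, not from the examples above):

```python
import re

# ^ and $ anchor the pattern; [A-Z] is a character set; \w+ is one or
# more word characters; \d{4} is exactly four digits.
pattern = r'^[A-Z]\w+ was born in \d{4}$'

print(bool(re.match(pattern, 'Alice was born in 1990')))  # True
print(bool(re.match(pattern, 'alice was born in 90')))    # False
```

The second string fails for two reasons: it starts with a lowercase letter, and the year has only two digits.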

Blogs

What is Datamart in Power BI?

Datamart is one of the newest additions to the Power BI component set. It helps bridge the gap between business users and IT. It provides an easy, no-code experience to ingest data from different data sources and perform ETL on the data using Power Query. After that, we can load the data into an Azure SQL database that is fully managed and doesn't require tuning or optimization. Datamart also provides a single platform to carry out all these processes without needing an extra tool, so we have our Dataflow, Azure SQL Database, Power BI Dataset, and a Web UI all in one place. The people who use Datamart include data analysts, developers, and business owners.

Let's discuss the features and benefits of Datamart.

Features of Datamart in Power BI

a) Datamart has automated performance tuning and optimization, so you don't need to do the tuning yourself.
b) It is a 100% web-based tool that requires no extra software.
c) Datamart has native integration with Power BI and other Microsoft analytics software.
d) It has a friendly user interface and requires no coding experience to use.
e) Datamart can be used with SQL and other in-demand client tools.

Benefits of Datamart

a) Datamart is very efficient at data ingestion and at performing extraction, transformation, and loading of data with SQL.
b) To use Datamart, you don't have to be a programmer.
c) Datamart allows self-service users to carry out relational database analytics without the aid of a database administrator.
d) Datamart enables Power BI users to build end-to-end solutions without dependencies on other tooling or IT teams.
e) Datamart provides centralized storage for small to moderate data volumes (approximately 100 GB) for self-service users.

Comparison Between Dataflow and Datamart

Remember we talked about Power BI Dataflow in our earlier tutorial? We mentioned it's a data transformation component in Power BI with a Power Query process running in the cloud.
It helps store data in CDM (Common Data Model) format inside Azure Data Lake Storage. Power BI uses these Dataflows to ingest data into our Datamarts, and we use Dataflows whenever we want to reuse our ETL logic. When discussing its features, we mentioned that Datamart is a fully managed database that lets us store our data in a relational, managed Azure SQL DB, and that it offers a no-code visual query designer. So in Dataflow, we can't browse tables, query, or explore without providing our dataset, while in Datamart it is possible to sort, filter, and do simple aggregation through SQL expressions. We also have access to our data via the SQL endpoint. Dataflow is usually used whenever we want to build reusable and shareable data prep in Power BI.
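To give a feel for the kind of sort/filter/aggregate query the SQL endpoint supports, here is a sketch. The table and column names are made up for illustration, and we run the query against an in-memory SQLite table purely to show its shape (a real Datamart endpoint speaks T-SQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 100.0), ("West", 250.0), ("East", 50.0)])

# Simple aggregation, filtering, and sorting in one SQL expression.
rows = conn.execute("""
    SELECT region, SUM(amount) AS total
    FROM sales
    GROUP BY region
    HAVING SUM(amount) > 100
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('West', 250.0), ('East', 150.0)]
```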

Blogs

How to Install Airflow with Docker on Ubuntu

Photo by Rubaitul Azad on Unsplash

In this tutorial, we will demonstrate how to install Airflow using Docker. Docker is an open platform for developing and running different applications. Docker enables us to separate our applications from our infrastructure so we can deliver software faster, and to manage our infrastructure the same way we manage our applications. Now, let's look at Airflow. Apache Airflow is one of the brilliant tools utilized by many companies for defining and scheduling their complex data pipelines. With it, we can programmatically schedule and monitor the workflows for our different jobs. The tool is widely used by data scientists, data engineers, software engineers, and many more. We'll use a step-by-step process to show how to carry out this installation.

Step 1: Install Docker Engine

The first stage of our installation is installing Docker itself on our machine. We can check whether Docker is already installed with the command docker --version. If it isn't, the following steps, taken from Docker's official website, install it.

Install using the repository:

```shell
sudo apt-get update

sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
```

Add Docker's official GPG key:

```shell
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
```

Set up the stable repository. (You can use the nightly or test repository instead by replacing the word stable in the command below with nightly or test.)

```shell
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```

Now, we can install the latest version of Docker Engine, containerd, and Docker Compose using the command below.
```shell
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin
```

We can verify that Docker has been installed on our machine by running the hello-world image:

```shell
sudo docker run hello-world
```

So, we've installed Docker on our machine. We should also note that we need Docker Compose before proceeding to Airflow; conveniently, it has been installed along with the Engine. Let's move to the next step.

Step 2: Working with VS Code

Let's open Visual Studio Code (or any other IDE) and create a new folder for our project. We can name the folder airflow-docker. Now we need to download a Docker Compose file that describes all the services required by Airflow. This file has already been written for us by the Airflow community; we can run the command below to download it into our working directory:

```shell
curl -LfO 'https://airflow.apache.org/docs/apache-airflow/2.3.0/docker-compose.yaml'
```

Now we should have the docker-compose YAML in our working directory. Let's create new folders for dags, plugins, and logs. The dags folder is where our Python files will reside. If you open the docker-compose file, under the services you can see the Postgres user and password, both named airflow. We'll use these as our username and password when logging into the webserver in the web browser.

Step 3: Export Environment Variables

Here, we also need to export our environment variables to ensure the user and group permissions are the same between the folders on the host and the folders in our container. Run the command below in the terminal in VS Code:

```shell
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
```

After you run the command, you should have a .env file in your directory.

Step 4: Initialize the Airflow Instance

Now that we are done with all our settings, we can initialize our Airflow instance using the command below.
This will create the airflow user and password based on the settings in the docker-compose file:

```shell
docker-compose up airflow-init
```

After the initialization completes, the next thing is to run all the services we specified in the docker-compose file (redis, scheduler, worker, webserver, etc.) so that our containers come up and start running. We will use the command below:

```shell
docker-compose up
```

If you get any error relating to a permission issue, prefix each command with sudo; this happens when your user hasn't been added to the docker group. Now, we can see our Airflow instance running by visiting localhost:8080 in the web browser and logging in with the airflow username and password.
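With everything running, you can drop a Python file into the dags folder to define your first workflow. Below is a minimal sketch of a DAG file (the dag_id, schedule, and bash command are made up for illustration, assuming the Airflow 2.3 setup downloaded above):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# A minimal DAG with a single task; save as e.g. dags/hello_dag.py
# so the scheduler container picks it up.
with DAG(
    dag_id="hello_dag",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    hello = BashOperator(
        task_id="say_hello",
        bash_command="echo 'Hello from Airflow'",
    )
```

After a minute or so, hello_dag should appear in the web UI, where you can unpause and trigger it.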

Blogs

Data Joining in Tableau

Today's article focuses on data joining in Tableau. In Tableau, when we work with enormous data sets, it is common to find that they comprise multiple tables with different data fields. Data usually doesn't reside in a single table; we can have many different tables, which means we can join tables using columns that are common or related between them. These related fields are usually known as key fields, and the method of combining data this way is referred to as data joining. We can combine tables based on the same or different data sources to create a single table. Let's look at the different types of joins we have in Tableau.

Types of Joins

Inner Join
Left Join
Right Join
Full Outer Join

We'll look at each of these joins and demonstrate with the Sample Superstore dataset in Tableau Desktop.

Inner Join

What is an inner join? An inner join in Tableau creates a new table from two tables, containing only the rows whose values are common between the two tables. For our demonstration in Tableau Desktop, we bring in our Superstore data, load it in the Data pane, and then do an inner join between the Order ID column of the Orders table and the Order ID column of the Returns table.

Left Join

A left join in Tableau creates a new table formed from all rows of the left table and only the matching values from the right table. If there are no matching rows in the right table, null values are returned in the new table. To demonstrate this join, we again use the Orders table and Returns table in our Superstore dataset; in the resulting data view, we can observe the null values present in some of our columns.

Right Join

A right join is formed in Tableau between two tables whereby the resulting table contains all values of the right table and only the matching values from the left table. For non-matching rows, null values are returned.
Full Outer Join

A full outer join is formed between two tables whereby the resulting table contains all the data values from both the left and right tables. Values that do not find a match in the other table are shown as null.

We hope you've been able to learn about data joining in Tableau and the types of joins. Thanks for reading.
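Tableau's four join types follow the same semantics as SQL joins. As a quick illustration of which rows survive each join, here is a small sketch with toy Orders and Returns tables (the data is made up):

```python
orders = {"O1": "Chair", "O2": "Desk", "O3": "Lamp"}  # order id -> product
returns = {"O2": "Damaged", "O4": "Wrong item"}       # order id -> reason

inner = sorted(set(orders) & set(returns))        # only ids present in both tables
left = sorted(set(orders))                        # all left ids; unmatched get nulls
right = sorted(set(returns))                      # all right ids; unmatched get nulls
full_outer = sorted(set(orders) | set(returns))   # every id from both sides

print(inner)       # ['O2']
print(left)        # ['O1', 'O2', 'O3']
print(right)       # ['O2', 'O4']
print(full_outer)  # ['O1', 'O2', 'O3', 'O4']
```

Order O2 was returned, so it is the only row an inner join keeps; the outer joins pad the missing side with nulls.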

Blogs

Introduction to NoSQL Graph Database

Photo by Tobias Fischer on Unsplash

The NoSQL graph database is a technology used for data management. It was invented to handle very large sets of structured, semi-structured, or unstructured data. It is a type of database that represents data in the form of a graph and consists of three components: nodes, relationships, and properties. These components are used to model the data. Examples of graph database software include GraphBase, Neo4j, and Oracle NoSQL Database. Each of the components is explained below:

Nodes: A node represents an object or instance. We can group nodes by applying a label to each member of the group.
Relationships: Relationships represent the edges in the graph. They establish connections between nodes.
Properties: Properties represent the information associated with nodes.

Graph Database Models

We have two common graph database models:

Resource Description Framework (RDF) graphs
Property graphs

Resource Description Framework (RDF) graphs: These graphs concentrate on data integration. They consist of RDF triples, each recognized by a unique resource identifier. RDF graphs are often used by government agencies, healthcare companies, statistics organizations, etc.

Property graphs: Property graphs are much more descriptive: each element carries properties and attributes that further define its entity. This type of graph is very useful in data analysis.

Applications

Now, let's talk about the applications of graph databases. Each is outlined below:

Social media platforms: A graph database is very efficient for a social media platform, where it can store all users and analyze their engagement. This is very useful for analyzing different users' behavior and distinguishing groups for marketing purposes.
Fraud detection: A graph database is also an essential tool for fraud detection. It is capable of tracking and mapping out the most complex networks of relationships, whereby running a simple query can help identify fraud.

Now, let's discuss the advantages and disadvantages of the graph database.

Advantages of Graph Databases

It is easier to spot trends and recognize the elements with the most influence.
We don't need a join when we have defined relationships.
It is flexible and agile.
Queries can follow concrete relationships directly.
A graph database is very good at establishing relationships with external sources.

Disadvantages

NoSQL databases are designed to work for a specific purpose; they are not a universal solution designed to replace all other databases.
Graph databases are hard to scale across a number of servers.
The query language is platform dependent.
When handling complex relationships, searching becomes slower.
It also has a smaller user base.

Let's move further by comparing the NoSQL graph database with relational databases.

NoSQL Graph Database vs. Relational Database

Relational databases such as MySQL and PostgreSQL usually store data using an explicit schema. In a NoSQL database, on the other hand, users don't define a schema up front; they store the data using whatever structure they desire. We can say 'SQL' and 'NoSQL' actually refer to how our schemas are defined. Another main difference between relational databases and NoSQL systems lies in transactions: relational databases broadly support transactions spanning many rows, while many NoSQL systems only guarantee atomic operations on a single row. The demand for graph databases is driven by the level of connectivity between data. We can say a graph database is a great choice for data analysis rather than simple data storage. Also, if we work with constantly changing data, a NoSQL graph database should be considered.
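To make the node/relationship/property model concrete, here is a toy sketch in plain Python (real graph databases such as Neo4j use a query language like Cypher; the names and data here are made up):

```python
# Nodes carry a label and properties; relationships connect two nodes
# with a relationship type.
nodes = {
    "alice": {"label": "Person", "props": {"age": 34}},
    "bob":   {"label": "Person", "props": {"age": 29}},
    "acme":  {"label": "Company", "props": {"sector": "Retail"}},
}
relationships = [
    ("alice", "KNOWS", "bob"),
    ("alice", "WORKS_AT", "acme"),
    ("bob", "WORKS_AT", "acme"),
]

# A simple traversal: who works at the same company as a given person?
def coworkers(person):
    companies = {dst for src, rel, dst in relationships
                 if src == person and rel == "WORKS_AT"}
    return sorted(src for src, rel, dst in relationships
                  if rel == "WORKS_AT" and dst in companies and src != person)

print(coworkers("alice"))  # ['bob']
```

Queries like this follow relationships directly instead of joining tables, which is exactly where graph databases shine.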
For questions and observations, use the comments section below. Follow us for more informative posts. Thanks for reading.

Blogs

Errors and Exceptions in Python

Photo by Clément Hélardot on Unsplash

In today's article, we'll be discussing errors and exceptions in Python. Errors are problems that can occur in a program and stop the program's execution. The error usually appears in the form of a "traceback" message, which provides a full report on it. Exceptions, on the other hand, are raised when internal events occur that change the normal flow of the program. To deal with these errors and exceptions, we use error handling. Let's look at each of the different types of errors we have in Python. They can be classified into two classes:

Syntax errors
Logical errors (exceptions)

Syntax Errors

Syntax errors are errors caused by not following the proper structure, or syntax, of the language. These are also called parsing errors. An example of what a syntax error looks like can be seen below:

```python
if b == 1
    print("hello")
```

Output

In this example, we can observe that a colon (:) is missing at the end of the if statement:

```
  File "<ipython-input-2-d2c0dbe9a7d6>", line 1
    if b == 1
             ^
SyntaxError: invalid syntax
```

Now, let's move on to what logical errors look like.

Logical Errors (Exceptions)

Logical errors are errors that occur at runtime. These types of errors are also known as exceptions; examples include FileNotFoundError, ZeroDivisionError, and ImportError. Whenever such an error occurs, Python creates an exception object; if it isn't handled properly, the interpreter prints a full traceback report for that particular error.
Let's look at a few examples:

```python
# initialize the variable
myvalue = 50

# we perform division with 0
result = myvalue / 0
print(result)
```

Output

When we run this code, it gives us a ZeroDivisionError:

```
---------------------------------------------------------------------------
ZeroDivisionError                         Traceback (most recent call last)
<ipython-input-3-4303a2aa4bd8> in <module>
      3
      4 # we perform division with 0
----> 5 result = myvalue / 0
      6 print(result)

ZeroDivisionError: division by zero
```

Let's try a FileNotFoundError example. Here we try opening a document that doesn't exist in our working directory:

```python
open("file.txt")
```

Output

After running the code, it gives us this FileNotFoundError:

```
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-4-93b2af11911f> in <module>
----> 1 open("file.txt")

FileNotFoundError: [Errno 2] No such file or directory: 'file.txt'
```

Python Built-in Exceptions

Python contains many different built-in exceptions. We can view all of them using the built-in locals() function as seen below:

```python
print(dir(locals()['__builtins__']))
```

Whenever we run the code, it prints out all the built-in exceptions for us.

Let's move further and see how we can handle all these errors in our code.

Error Handling

Handling Exceptions with Try/Except/Finally

The try/except/finally construct is a very efficient way of handling exceptions. The try block contains the code that may raise an exception, the except block helps us handle the error, and the finally block executes its code regardless of the result of the try and except blocks.
An example of this is shown below:

```python
try:
    print("hello")

    # unsafe code to run
    print(50 / 0)

except:
    print("an error occurs")

finally:
    print("I'm back here")
```

This is what we get after running the code:

```
hello
an error occurs
I'm back here
```

Handling Many Exceptions with Try and Except

We can also define as many except blocks as we want, say, to handle one special kind of error differently. Let's see the example below:

```python
try:
    print(a)
except NameError:
    print("Variable a is not defined")
except:
    print("Something else went wrong")
```

Since a is not defined, we should have this result after running the code:

```
Variable a is not defined
```

That's about errors and exceptions in Python. Let's have your observations and questions in the comment box below. Thanks for reading.
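Tying this back to the ZeroDivisionError example earlier, we can also catch a specific exception and inspect the exception object itself. A small sketch:

```python
myvalue = 50

try:
    result = myvalue / 0
except ZeroDivisionError as exc:
    # exc is the exception object; str(exc) gives its message
    result = None
    print("Caught:", exc)  # Caught: division by zero

print(result)  # None
```

Catching the specific exception type (rather than a bare except) means unrelated errors still surface with their full traceback.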

Blogs

What is LOD in Tableau?

An introduction to the concept of Level of Detail (LOD) expressions in Tableau.

Photo by path digital on Unsplash

Level of Detail (LOD) expressions give a user the ability to control data computations at different levels of granularity in Tableau. We can perform simple or complex computations at a more granular level (INCLUDE LOD), at a less granular level (EXCLUDE LOD), or at an independent level (FIXED LOD). LOD expressions in Tableau allow you to perform aggregations that are unavailable at a certain level of visualization.

Types of LOD Expressions in Tableau

We have three types of LOD expressions in Tableau:

INCLUDE
EXCLUDE
FIXED

INCLUDE LOD

An INCLUDE expression computes values using the dimension specified by the user in addition to whatever dimensions are in the view; that is, it brings an extra dimension into the calculation alongside the view's own. The syntax of the INCLUDE LOD is:

{ INCLUDE <declaration> : <expression to aggregate> }

Step-by-step process for creating an INCLUDE LOD

Step 1
To demonstrate this, we will use the Superstore dataset. Open Tableau Desktop and create a new worksheet.

Step 2
Drag Region to Columns and Sales to Rows. The next thing to do is to create the LOD using a calculated field.

Step 3
Create a new calculated field and write this:

{ INCLUDE [Customer Name] : SUM([Sales]) }

Next, drag the calculated field to Rows beside Sales; we should have two bar charts displayed. Now, change the aggregation type of the sales-per-customer pill by right-clicking it in the view and changing it to Average.

EXCLUDE LOD Expressions

The EXCLUDE level of detail in Tableau is used when we wish to leave out a dimension from the view level of detail. This is the opposite of what we saw in the INCLUDE LOD, where a dimension was added to the view level of detail alongside the user-specified dimension.
The syntax for EXCLUDE LOD expressions is:

{ EXCLUDE <declaration> : <expression to aggregate> }

Step-by-step process for creating an EXCLUDE LOD

Step 1
Open a new worksheet.

Step 2
Drag Region and Sales to Rows and Order Date to Columns. We should have a line graph by default, but we can change it to a bar chart using the Marks card.

Step 3
Create a new calculated field with the code below:

{ EXCLUDE [Region] : SUM([Sales]) }

Now, drag the newly calculated field onto Color on the Marks card to generate the visualization.

FIXED LOD Expressions

A FIXED LOD calculation computes values using only the user-specified dimensions; it does not take into account the dimensions present in the view. The syntax for FIXED LOD expressions is:

{ FIXED <declaration> : <expression to aggregate> }

Step-by-step process for creating a FIXED LOD

Step 1
Open a new worksheet.

Step 2
Drag Region and State to Columns.

Step 3
Create a calculated field for Sales by Region using the formula:

{ FIXED [Region] : SUM([Sales]) }

Then drag the newly calculated field, Sales by Region, to Rows to produce the visualization.

Hope you found this post helpful. Thanks for reading.
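The effect of { FIXED [Region] : SUM([Sales]) } can be sketched outside Tableau: every row gets its region's total, no matter what finer dimensions (such as State) appear in the view. A toy illustration with made-up numbers:

```python
# (region, state, sales) rows, standing in for the Superstore data
rows = [
    ("East", "NY", 100.0),
    ("East", "NJ", 50.0),
    ("West", "CA", 200.0),
]

# FIXED [Region] : SUM([Sales]) -- first aggregate at the region level only...
region_totals = {}
for region, state, sales in rows:
    region_totals[region] = region_totals.get(region, 0.0) + sales

# ...then attach that fixed total to every row, regardless of view granularity.
fixed = [(region, state, region_totals[region]) for region, state, _ in rows]
print(fixed)  # [('East', 'NY', 150.0), ('East', 'NJ', 150.0), ('West', 'CA', 200.0)]
```

Both East states show the same region-level total of 150, which is exactly what the FIXED expression pins to each row in the view.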

Blogs

How to Send EMails from Your Gmail Account using Python

Photo by Yogas Design on Unsplash

Today, we'll be looking at an interesting topic: sending emails from your Gmail account using Python. How does that sound? Python is such a powerful programming language. We are going to use SMTP (Simple Mail Transfer Protocol) to implement this. Python provides the smtplib module, which defines an SMTP client session object that can be used to send mail to any Internet machine with an SMTP or ESMTP listener daemon. The SMTP object has an instance method called sendmail, which is used for mailing messages. It takes three parameters:

The sender: a string with the address of the sender.
The receivers: a list of strings, one for each recipient.
The message: a message as a string, formatted as specified in the various RFCs.

Note: when sending emails with Python, you must ensure your SMTP connection is encrypted so that your message and login credentials are not easily accessed by others. SSL (Secure Sockets Layer) and TLS (Transport Layer Security) are two protocols that can be used to encrypt an SMTP connection. Let's dive into each of these protocols.

SMTP with SSL

Here is a demonstration of how to create a secure connection with Gmail's SMTP server using the SSL protocol:

```python
import smtplib, ssl

port = 465  # For SSL
password = input("type-your-password-here")

# using a secure SSL context
context = ssl.create_default_context()

# this will be used to send our email
with smtplib.SMTP_SSL("smtp.gmail.com", port, context=context) as server:
    server.login("[email protected]", password)
```

SMTP with TLS

Here is a demonstration of how to create a secure connection with Gmail's SMTP server using the TLS protocol:
```python
import smtplib, ssl

smtp_server = "smtp.gmail.com"  # name of the smtp server
port = 587  # For starttls
sender_email = "[email protected]"  # put in your email address here
password = input("type-your-password-here ")

# we use a secure SSL context
context = ssl.create_default_context()

# Now we try to log in to the server
try:
    server = smtplib.SMTP(smtp_server, port)
    server.ehlo()  # Can be omitted
    server.starttls(context=context)  # Secure the connection using the TLS protocol
    server.login(sender_email, password)
except Exception as e:
    print(e)
finally:
    server.quit()
```

Now, we've seen how to create a secure connection using either of the protocols listed above. The next thing to do is to compose an email message, which will be in plain-text format for our demonstration here.

Sending Email in Simple Text Format Using the SSL Protocol

Here, we pass in our email body in plain-text format and replicate the same code we used earlier for creating an SSL-secured connection:

```python
import smtplib, ssl

port = 465  # For SSL
smtp_server = "smtp.gmail.com"
sender_email = "[email protected]"  # put in your email address here
receiver_email = "[email protected]"  # put in your receiver's email address here
password = input("type-your-password-here")
message = """\
Subject: Hi there

This message is to notify you my friend...."""

context = ssl.create_default_context()

with smtplib.SMTP_SSL(smtp_server, port, context=context) as server:
    server.login(sender_email, password)
    server.sendmail(sender_email, receiver_email, message)
```

Sending Email in HTML Format

You might be wondering if it's possible to send your text in a more modified or beautified format, whereby your email content is bolded, italicized, contains images, and so on. Python's email.mime module can help with that. The example code below shows how to include both a plain-text and an HTML version of the message.
In this code, we use MIMEText from the email.mime module to convert both the plain-text and the HTML versions into MIME objects before attaching them to our message body:

```python
import smtplib, ssl
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

sender_email = "[email protected]"
receiver_email = "[email protected]"
password = input("type-your-password-here")

message = MIMEMultipart("alternative")
message["Subject"] = "multipart test"
message["From"] = sender_email
message["To"] = receiver_email

# Create the plain-text and HTML versions of your message
text = """\
Hello,
How are you?
"""
html = """\
<html>
  <body>
    <p>Hello<br>
       How are you?
    </p>
  </body>
</html>
"""

# we then convert these into plain/html MIMEText objects
part1 = MIMEText(text, "plain")
part2 = MIMEText(html, "html")

# You then add the HTML/plain-text parts to the MIMEMultipart message.
# The email client will try to render the last part first.
message.attach(part1)
message.attach(part2)

# create a secure connection and send your email
context = ssl.create_default_context()
with smtplib.SMTP_SSL("smtp.gmail.com", 465, context=context) as server:
    server.login(sender_email, password)
    server.sendmail(
        sender_email, receiver_email, message.as_string()
    )
```

And that's a wrap. Follow the steps above to start sending emails from your Gmail account using Python. For questions and observations, use the comments section below. Follow us for more informative posts. Thanks for reading.

Blogs

Data Blending in Tableau

Photo by fabio on Unsplash

Data blending is a way of combining data in Tableau. Blending gives a quick and simple way to bring information from multiple data sources into a view; for instance, Profit data from a SQL database and Profit data in an Excel spreadsheet. This might look confusing, but when data blending is used efficiently, it provides the best way to merge data sources in Tableau. We must also avoid using data blending incorrectly so that the Tableau Server won't be brought down.

Primary and Secondary Sources in Data Blending

There is always a primary source and a secondary data source when blending in Tableau. It is important to know which is the primary source, as it can impact your view. Since a blend is a type of left join, all fields will be included from the primary source and only the related ones from the secondary.

How to Prepare Data for Data Blending

We will demonstrate using the Coffee Chain data and the Office City data. Let's show you how we brought both our primary and secondary data in. Click on the Data tab to bring in the Coffee Chain data, then double-click on the Coffee Chain Query data to do an inner join using the other tables we have (Location, factTable, Product). Now, let's bring in the second data source, the Office City data: click on the small drop-down to add a new data source and bring in the data.

Blending the Data

To proceed with our data blending, we observe that we have a common column in these two different data sources: the Region column in the Office City data source and the Market column in the Coffee Chain data source. The first thing is to create a relationship using these common columns in our view. To do that, click on the Data tab → Edit Blend Relationships. On the Edit Blend Relationships page, click on Custom to enable us to add our common column.
By default, Tableau may have selected State from both columns for us. Ensure the Sample Coffee Chain is set as the primary data source here, then click OK. Now we move to our view to compare sales from the two different data sources, just like what we have below. Note that the primary data source determines which records appear: only the common values found in the secondary data source are shown for the secondary in the second view. Under the Dimensions pane, we can see the active link on State, shown in orange. Also in the Data pane, the primary data source is marked in blue and the secondary in orange.

Limitations of Tableau Data Blending

Data blending also has some key limitations. Some of the difficulties we might encounter include the following:

- Dashboard performance may not be up to standard when data blending.
- Calculations may not work with a data blend.
- The asterisk can appear when data blending.
- Filters may not work as expected with a data blend.

Hope you found this post helpful. If you did, comment and share it with your friends. Thanks for reading.


Functions in Python

Photo by Procreator UX Design Studio on Unsplash

Functions in Python are a group of related statements used to perform a specific task. Functions are useful for breaking programs into smaller, modular chunks, thereby making the code look more organized and manageable. Functions can be either built-in or user-defined. They help keep a program succinct, non-repetitive, and well arranged. The function syntax usually looks like this:

```python
def function_name(parameters):
    """docstring"""
    statement(s)
    return expression
```

We'll look at the following four key things:

- How to Define a Function
- How to Call a Function
- How to Add a Docstring to a Function
- Function Arguments

Now, let's look at each of these one after the other.

How to Define a Function

We define a function to give it the functionality we require. The rules below show how we can define a function in Python:

- We use the keyword def to declare the function, or begin the function block, followed by the function name and parentheses ( ).
- Any input parameters or arguments should be placed within the parentheses we defined earlier; then we end the line with a colon.
- We add the statements we want our function to execute.
- We end our function with a return statement. Without a return statement, our function returns the object None.

We can check the examples below:

```python
def my_function():
    print("Hello, you're welcome!")
```

```python
def cube(side):
    volume = side ** 3
    surface_area = 6 * (side ** 2)
    return volume, surface_area
```

We can go in-depth with our parameters and use them according to what we have in mind.

How to Call a Function

Having defined our function, we need to call it, because if we don't, we won't see any output. We can call a function from another function or directly from the Python prompt. We'll see various examples of how we can call functions.
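Because the cube function above returns two values as a tuple, a call site can also unpack them into separate variables. A small sketch:

```python
def cube(side):
    volume = side ** 3
    surface_area = 6 * (side ** 2)
    return volume, surface_area

# Unpack the returned tuple into two names.
volume, area = cube(3)
print(volume)  # 27
print(area)    # 54
```

Unpacking like this is often clearer than indexing into the returned tuple with cube(3)[0] and cube(3)[1].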
Let's use one of the examples below:

```python
# Defining our function with its parameters
def cube(side):
    volume = side ** 3
    surface_area = 6 * (side ** 2)
    # We use return to pass back our result
    return volume, surface_area

# Calling our function: we want it to return the volume and surface area we defined
print(cube(3))
```

Output:

```python
(27, 54)
```

Now we'll look at how to call a function from another function:

```python
def sportClub():
    return "Manchester"

def sports():  # function definition
    print("There are different sport clubs that value Ronaldo")
    print("We know he plays for", sportClub())  # now call sportClub in this function

sports()
```

How to Add a Docstring to a Function

It's also good to add a docstring to our function. Docstrings describe what a function does, and these descriptions serve as documentation for the function. Anyone who reads the docstring can understand the function without having to trace through all the code in its definition. We can implement this in our code using the example below:

```python
def my_function():
    """Prints "Hello, you're welcome!".

    Returns:
        None
    """
    print("Hello, you're welcome!")
    return
```

Now, we need to look at the different function arguments we have.

Function Arguments

The different types of arguments below show us the ways we can call a function:

- Required arguments
- Default arguments
- Keyword arguments
- Variable-length arguments

Required Arguments

These are the arguments we must pass to a function, in the right positional order. The number of arguments we pass must match the function definition.

```python
def student(name):
    """Prints whatever we pass into this function."""
    print(name)
    return

# Now call the student function without an argument
student()
```

This will definitely give us an error because we didn't pass the required argument.
```python
TypeError                                 Traceback (most recent call last)
<ipython-input-17-ce9e9cc0b820> in <module>
      5
      6 # Now you can call student function
----> 7 student()

TypeError: student() missing 1 required positional argument: 'name'
```

Default Arguments

Default arguments allow a parameter to take a default value when no argument is passed for it during the function call. We assign this default value with the assignment operator =. Let's see the example below:

```python
# Defining our function
def invalues(x, y=2):
    return x + y

# Call invalues with only the first parameter 'x'; y falls back to 2
invalues(x=1)

# Call invalues with both parameters 'x' and 'y', overriding y's default
invalues(x=1, y=5)
```

Keyword Arguments

Keyword arguments are very helpful because they free us from having to pass parameters in the right order: we identify each argument by its parameter name, so the call works regardless of position. An example is implemented below:

```python
# Defining our function
def invalues(x, y):
    return x + y

# Call the invalues function with keyword arguments
invalues(x=1, y=5)
```

Variable-Length Arguments

Variable-length arguments are used whenever we need to pass more arguments to a function than we specified while defining it. We can use *args; let's see an example below:

```python
def addition_(*args):
    total_sum = 0
    for i in args:
        total_sum += i
    return total_sum

# Calculate the total sum
addition_(11, 14, 19, 21)
```

Thanks for reading this article.
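As a supplementary note to the *args example above: Python also has a keyword counterpart, **kwargs, which collects any extra keyword arguments into a dictionary. It is not part of the original list of argument types above, so treat this as a brief sketch:

```python
def describe_student(**kwargs):
    # kwargs is a dict of every keyword argument passed to the call.
    parts = [f"{key}={value}" for key, value in sorted(kwargs.items())]
    return ", ".join(parts)

print(describe_student(name="Grace", id=908))  # id=908, name=Grace
```

Like *args, **kwargs lets a function accept arguments it did not explicitly name in its definition, which is handy for wrappers and flexible APIs.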
