Uncategorized


How to Calculate Week Numbers For Each Quarter in Tableau

In this tutorial, we will demonstrate how to generate week numbers for each quarter of the year in Tableau. Before we get to that, we need to understand the date functions our calculation will use. Date functions let us manipulate dates in our data source.

DATEADD

DATEADD returns the date produced by adding the specified interval to the specified date_part of a date. For example:

DATEADD('month', 3, #2004-04-15#) = 2004-07-15 12:00:00 AM

DATEPART

DATEPART returns part of a date as an integer. It accepts date parts such as 'year', 'month', 'day', and 'week'.

DATEPART('year', #2004-04-15#) = 2004
DATEPART('month', #2004-04-15#) = 4

DATETRUNC

DATETRUNC truncates a date to the first day of the specified date part and returns the new date. The first argument specifies the date part, which again can be 'year', 'quarter', 'month', 'day', etc.

DATETRUNC('quarter', #2004-08-15#) = 2004-07-01 12:00:00 AM
DATETRUNC('month', #2004-04-15#) = 2004-04-01 12:00:00 AM

How to Calculate Week Numbers by Quarter

We will demonstrate this with the Superstore dataset. First, drag Order Date to the Rows shelf and select YEAR (this should be selected automatically). Next, drag Order Date to the shelf again and select QUARTER. Then drag Order Date onto the shelf a third time, this time selecting Week Number. We now have the week number on the shelf.

We can also get the week number with a table calculation: create a calculated field with the DATEPART function below, drop it on the shelf, and change the field from continuous to discrete.

MIN(DATEPART('week', [Order Date]))

After this, we need to calculate the weeks by quarter, using both the DATEPART and DATETRUNC functions.
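Before writing the Tableau formula, it can help to see the arithmetic in plain Python. This is only an illustration with helper names of my own choosing; the simple week_of_year helper below numbers weeks from January 1, whereas Tableau's DATEPART('week', ...) follows the workbook's week-start setting, so the two can disagree by one near boundaries.

```python
from datetime import date

def week_of_year(d: date) -> int:
    # Rough stand-in for Tableau's DATEPART('week', ...):
    # week 1 covers January 1-7, week 2 covers January 8-14, and so on.
    return (d.timetuple().tm_yday - 1) // 7 + 1

def quarter_start(d: date) -> date:
    # Equivalent of DATETRUNC('quarter', ...): first day of the quarter.
    first_month = 3 * ((d.month - 1) // 3) + 1
    return date(d.year, first_month, 1)

def week_of_quarter(d: date) -> int:
    # Mirrors 1 + DATEPART('week', d) - DATEPART('week', DATETRUNC('quarter', d))
    return 1 + week_of_year(d) - week_of_year(quarter_start(d))

print(week_of_quarter(date(2004, 8, 15)))  # 7: the 7th week of Q3 under this numbering
```

The first day of every quarter comes out as week 1, which is exactly what resetting the week number at each quarter boundary should give.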
The formula is:

MIN(1 + (DATEPART('week', [Order Date]) - DATEPART('week', DATETRUNC('quarter', [Order Date]))))

Just as before, create a calculated field with this formula, drag the field to the shelf, and change it from continuous to discrete. This generates each week's number by quarter. Thanks for reading this post. If you have any questions or observations, use the comment box below.


Data Extraction from Wix API

The Wix Data API allows you to work with your database collections in code, giving you more capabilities than the Editor alone. You can also use the API to work with your site's registered users: it lets you get information about the current user, prompt a user to log in, and log out the current user. The Wix-users API can be used in your site's front-end code.

Here are the steps we will follow in this article to get data from the Wix API:

1. Create an application in the Wix Developer Center.
2. Set up permissions and obtain the token code, authorization code, access token, and refresh token.
3. Get the data you need.

Create an Application in the Wix Developer Center

Step 1: To get started with the API, go to https://dev.wix.com/ and create a new application in the Developer Center for your Wix account. Afterward, set the Redirect URL and App URL on the OAuth tab.

Step 2: Next, add new permissions under the Permissions tab, selecting the permissions you need from the different categories (Wix Stores, Pricing Plans, Marketing, Wix Events, Contacts & Members, etc.). The permissions you set indicate the type of data you are interested in.

Step 3: After this has been done, test the app. It will prompt you to install the app and redirect you to the App URL you defined. Grab the token code from the URL params returned. We will then authenticate using this token code alongside the other credentials available.

Authentication

To get the authorization code and access token, we can use the Flask framework as a small web application that generates the details we need. We build the authorization URL from the installer endpoint plus the token code, appId, and redirect URL. When the Flask app is run, it generates our authorization code.
```python
from flask import Flask, redirect, url_for
import requests
import json

app = Flask(__name__)
app.config.from_pyfile("config.py", silent=True)

@app.route('/callback')
def oauth_callback():
    redirect_url = 'https://cndro.com/'
    auth_url = ('https://www.wix.com/installer/install?'
                + 'token=' + app.config["TOKEN_CODE"]
                + '&appId=' + app.config["APP_ID"]
                + '&redirectUrl=' + redirect_url)
    return redirect(auth_url)
```

Generate Access Token and Refresh Token

To generate the access token and refresh token, we use the temporary authorization code returned, together with the URL endpoint and our other credentials, to request an access token and a refresh token.

```python
@app.route('/access')
def wixAccessToken():
    the_url = 'https://www.wix.com/oauth/access'
    # Send the body as JSON, matching the Content-Type header.
    response = requests.post(the_url,
                             headers={'Content-Type': 'application/json'},
                             json={'grant_type': 'authorization_code',
                                   'client_id': app.config["APP_ID"],
                                   'client_secret': app.config["APP_SECRET"],
                                   'code': app.config["AUTH_CODE"]})
    response_data = json.loads(response.text)
    return response_data
```

Note that this access token expires in 5 minutes, so we must generate a new access token from the refresh token we obtained. We use the same URL endpoint as before, but set the grant type to 'refresh_token'.

```python
@app.route('/refresh')
def NewAccessToken():
    the_url = 'https://www.wix.com/oauth/access'
    response = requests.post(the_url,
                             headers={'Content-Type': 'application/json'},
                             json={'grant_type': 'refresh_token',
                                   'client_id': app.config["APP_ID"],
                                   'client_secret': app.config["APP_SECRET"],
                                   'refresh_token': app.config["REFRESH_TOKEN"]})
    response_data = json.loads(response.text)
    return response_data

if __name__ == '__main__':
    app.run(host='localhost', port=5000)
```

We can then get data from any of the permission categories we added by calling its URL endpoint with our access token.
You can also use HTTP methods other than GET, such as POST, PATCH, and DELETE, with any of the endpoints you are interested in. Here is an example that gets data from the Contacts & Members endpoint:

```python
def getListContact():
    the_url = ('https://www.wixapis.com/contacts/v4/contacts'
               '?paging.limit=1000&paging.offset=1000')
    response = requests.get(the_url,
                            headers={'Authorization': wixAccessToken()})
    response_data = json.loads(response.text)
    return response_data

print(getListContact())
```

Follow the steps above and you will have your data extracted from the Wix API. If you enjoyed this post, share it with your friends and colleagues. Connect with us on Twitter, Instagram, and YouTube. Thanks, and see you next time.


How to Connect to Azure Blob from Alteryx

Azure Blob Storage is a Microsoft Azure service that lets you store large amounts of unstructured data; it can serve as a data lake where you keep all the data that belongs to your organization. Blob stands for Binary Large Object, which includes images and multimedia files. In this article, I will show you how to connect to Azure Blob Storage from Alteryx.

First, let's connect to the files in the Azure container from Alteryx Designer. Before we can do that, we need to generate a Shared Access Signature (SAS) for each file we want to connect to. We will use wix.csv as sample data. Follow these steps:

1. Go to https://portal.azure.com, search for Azure Storage, create a container (I named it alteryxtestcsvfiles) in the Blob section, and click the ellipsis to the right of the file.
2. Click on Generate SAS.
3. Adjust the expiry date to a future date.
4. Click on Generate SAS token and URL.

Successfully followed the four steps? Good job! Now use the SAS URL you generated to connect to the data with the Alteryx Download tool. With the SAS URL, we can access the resource privately:

https://samplestorageyoucreated.blob.core.windows.net/alteryxtestcsvfiles/yourdata.csv?sp=r&st=2021-07-01T18:29:53Z&se=2021-07-06T02:29:53Z&spr=https&sv=2020-02-10&sr=b&sig=SsCdhhH%2FZG6qgeCEyAzvsuj0aoIZrS2qyhwyPenlsrk%3D

The SAS URL splits into two parts, Field1 and Field2:

Field1 = https://samplestorageyoucreated.blob.core.windows.net/alteryxtestcsvfiles/yourdata.csv
Field2 = sp=r&st=2021-07-01T18:29:53Z&se=2021-07-06T02:29:53Z&spr=https&sv=2020-02-10&sr=b&sig=SsCdhhH%2FZG6qgeCEyAzvsuj0aoIZrS2qyhwyPenlsrk%3D

The view below shows the complete Alteryx workflow: it gets the CSV data, transforms it, and uses the Browse tool to display the result. The Azure Blob Storage URL and the query string serve as the input to the workflow.
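Outside Alteryx, the Field1/Field2 split that the workflow relies on can be illustrated in a few lines of Python. This is only a sketch, using the sample SAS URL from above with the signature shortened for readability:

```python
from urllib.parse import urlsplit

# Sample SAS URL from the steps above (signature shortened for readability).
sas_url = ("https://samplestorageyoucreated.blob.core.windows.net"
           "/alteryxtestcsvfiles/yourdata.csv"
           "?sp=r&st=2021-07-01T18:29:53Z&se=2021-07-06T02:29:53Z"
           "&spr=https&sv=2020-02-10&sr=b&sig=SsCdhhH")

parts = urlsplit(sas_url)
field1 = f"{parts.scheme}://{parts.netloc}{parts.path}"  # the blob URL
field2 = parts.query                                     # the SAS query string

print(field1)
print(field2)
```

Field1 then feeds the Download tool's URL field, and Field2 supplies the query string for authentication.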
Use the Input tool as shown in the screenshot below. Next, we focus on the configuration of the Download tool I brought into the workflow. Take a look at the configuration window for the Basic settings of the Download tool: the Headers tab has Accept: text/plain, and Field1 is selected as the field holding the URL, with the response body returned to a new field. The screenshot below shows the Payload tab, which passes the query string for authentication.

The two Text to Columns tools split the downloaded text into its tabular equivalent. The Select tool picks the fields needed for the data frame by deselecting unwanted fields in the configuration window, and the Dynamic Rename tool takes the field names from the first row of the data.

With the steps above, we have successfully connected to the CSV data in Azure Blob Storage from Alteryx. If you enjoyed this article, share it with your friends and colleagues!


Convert CSV to XML in Alteryx

Extensible Markup Language (XML) is a markup language that defines a set of rules for encoding documents in a format that is both human-readable and machine-readable. XML describes data with a series of simple tags, and these tags are easy to create. CSV is a simple file format used to store tabular data, such as a database table or spreadsheet, in a structure that a wide range of spreadsheet packages can read and write. In this article, I'll walk you through how to convert CSV to XML in Alteryx. Let's get right in.

We will use several Alteryx tools to achieve this. The first is the Input Data tool, where we pass in our sample data. The second is the Record ID tool, which numbers the rows sequentially; we can think of this as an index. The third is the Transpose tool, which pivots the data orientation so that columns become rows. Here is the result of this phase:

Next, use the Formula tool with a regex to remove spaces in the Name column. Then use the Select tool to pick the fields and change the Record ID's data type from string to integer. We use the Formula tool again to perform two different calculations for the name tag, then add both name tags and re-index with the Record ID tool. Use the Text Input tool to add an XML declaration line such as <?xml version="1.0" encoding="UTF-8"?>, which will be prepended to the data. A Union tool then combines the different streams of the workflow, and a Summarize tool downstream of the Record ID tool groups the data by the name column.

The Append Fields tool, as shown below, appends ColumnNumber to each of the fields (XmlLine, RecordId, Level).
The other tools used include the Multi-Row Formula tool and the Formula tool, after which we join, union, and summarize the streams to obtain our XML file. Finally, we use the Output Data tool to view the result.
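The Alteryx workflow is one way to get there; as a rough Python illustration of the same CSV-to-XML transformation (the sample rows and tag names here are hypothetical, not the workflow's actual data):

```python
import csv
import io
import xml.etree.ElementTree as ET

# Hypothetical sample data standing in for the workflow's input file.
sample_csv = "Name,Age\nAda Lovelace,36\nAlan Turing,41\n"

root = ET.Element("records")
for i, row in enumerate(csv.DictReader(io.StringIO(sample_csv)), start=1):
    record = ET.SubElement(root, "record", id=str(i))  # the Record ID step
    for field, value in row.items():
        # Tag names cannot contain spaces, so strip them
        # (what the regex step in the Formula tool does).
        ET.SubElement(record, field.replace(" ", "")).text = value

# Prepend the XML declaration line added via the Text Input tool.
xml_doc = ('<?xml version="1.0" encoding="UTF-8"?>\n'
           + ET.tostring(root, encoding="unicode"))
print(xml_doc)
```

Each CSV row becomes one record element, mirroring how the workflow indexes rows with the Record ID tool and wraps each value in its column's tag.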


Connect CSV Data in S3 Bucket with Athena Database

Introduction

Athena is a serverless query service that allows you to run SQL queries on data stored in S3. It can easily query large datasets (or data lakes), whether the data is structured, semi-structured, or unstructured. A database in Athena is a logical grouping for the tables you create in it. In this short guide, you will learn how to connect to CSV data in an S3 bucket with an Athena database. Let's dive right in.

An AWS Glue crawler will catalog the table and give it the right schema. The steps are highlighted below.

Step 1: Open Athena from your AWS console and toggle to the database you want to create the table in. Click on Create table.

Step 2: Click on "from AWS Glue Crawler". (The "from S3 bucket data" option is the manual process, where you enter the data columns yourself.) After you click "from AWS Glue Crawler", you will be transferred to the AWS Glue platform to define the crawler.

Step 3: Enter the name of the crawler and press the Next button.

Step 4: Leave every entry at its default and enter the S3 bucket directory where the CSV file is located. Ensure you add a forward slash after the directory path.

Step 5: Choose "No" for Add another data store.

Step 6: In the IAM role section, create a new role for the crawler and press the Next button.

Step 7: After you have created the crawler, locate it by name, select it, and click Run crawler. The crawler will then create a table from the CSV file in the S3 bucket.

NOTE: A stray comma (,) anywhere in a row of a CSV in the S3 bucket will disrupt the Athena table, so make sure you replace any embedded commas in all rows of the CSV files. As a habit, I like to replace embedded commas with spaces; that way Athena will not return a bad table and will not split rows on those commas. Below is the original data from Excel.
The picture below shows the CSV file after being ingested from the S3 bucket into an Athena table. As you can see, the data came back unstructured in the Athena database. Resolution: replace the comma (,) symbols in all rows of the CSV files, then bring the data back into the S3 bucket. Athena will then read the table correctly, because the embedded commas are gone. That's pretty much it. Follow the step-by-step guide to connect your CSV data in S3 to an Athena database, and share your thoughts in the comment box below.
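As a postscript, the comma clean-up described in the note above is easy to script before re-uploading the file. Here is a minimal sketch (the sample row is hypothetical), assuming we want every embedded comma replaced with a space:

```python
import csv
import io

# Hypothetical raw CSV where one field itself contains commas.
raw = 'Name,Description\nWidget,"small, red, round"\n'

buf = io.StringIO()
writer = csv.writer(buf, lineterminator="\n")
for row in csv.reader(io.StringIO(raw)):
    # Replace embedded commas with spaces so Athena's default CSV
    # handling does not split these fields into extra columns.
    writer.writerow([field.replace(",", " ") for field in row])

cleaned = buf.getvalue()
print(cleaned)
```

In a real pipeline you would read the file from disk (or S3), clean it this way, and write it back to the bucket before running the crawler.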


Snowflake Connector for Python

Introduction

The Snowflake connector is a native, pure-Python package that does not depend on JDBC or ODBC. It can be installed with pip on Linux, macOS, and Windows platforms where Python 3.6 or later is installed. The connector supports developing applications with the Python Database API v2 specification (PEP 249), including standard API objects such as Cursor objects for executing DDL/DML statements and queries.

How to Install the Connector

Python Package Installer & Setup Tools

The Snowflake Connector for Python is installed with pip. First, run the following command to ensure the required pip version is installed:

```shell
python -m pip install --upgrade pip
```

To install the Snowflake Connector for Python and its dependent libraries, determine the version of the connector you plan to install, then run pip (or pip3) against the tested requirements file for that version:

```shell
pip install -r https://raw.githubusercontent.com/snowflakedb/snowflake-connector-python/v2.6.2/tested_requirements/requirements_36.reqs
```

Validate Your Installation

Create a file called validate.py and run it; it prints the Snowflake version. If this code raises an error when run, something is wrong with the installation.
```python
#!/usr/bin/env python
import snowflake.connector

# Gets the version
ctx = snowflake.connector.connect(
    user='john',
    password='xxxxxxxx',
    account='xxxxxxxx'
)
cs = ctx.cursor()
try:
    cs.execute("SELECT current_version()")
    one_row = cs.fetchone()
    print(one_row[0])
finally:
    cs.close()
    ctx.close()
```

How to Use the Python Connector

You can connect to Snowflake using your login parameters:

```python
import os
import snowflake.connector

PASSWORD = os.getenv('SNOWSQL_PWD')
WAREHOUSE = os.getenv('WAREHOUSE')
# USER, ACCOUNT, DATABASE, and SCHEMA were not shown in the original
# snippet; here we assume they are read from the environment the same way.
USER = os.getenv('SNOWSQL_USER')
ACCOUNT = os.getenv('SNOWSQL_ACCOUNT')
DATABASE = os.getenv('SNOWSQL_DATABASE')
SCHEMA = os.getenv('SNOWSQL_SCHEMA')

conn = snowflake.connector.connect(
    user=USER,
    password=PASSWORD,
    account=ACCOUNT,
    warehouse=WAREHOUSE,
    database=DATABASE,
    schema=SCHEMA
)
```

After you have logged in, you can create a database, schema, and warehouse with the connection.

How to Create a Database, Schema, and Warehouse

You can use the CREATE DATABASE, CREATE SCHEMA, and CREATE WAREHOUSE commands. The example below creates each of these, specifying the database in which to create the schema and a name for the warehouse:

```python
conn.cursor().execute("CREATE WAREHOUSE IF NOT EXISTS john_warehouse")
conn.cursor().execute("CREATE DATABASE IF NOT EXISTS testing")
conn.cursor().execute("USE DATABASE testing")
conn.cursor().execute("CREATE SCHEMA IF NOT EXISTS testingschema")
```

Using the Database, Schema, and Warehouse

You can also reference an existing database, schema, and warehouse. Put each into use with the following commands, after which we can start creating tables:

```python
conn.cursor().execute("USE WAREHOUSE tiny_warehouse_mg")
conn.cursor().execute("USE DATABASE testdb_mg")
conn.cursor().execute("USE SCHEMA testdb_mg.testschema_mg")
```

How to Create Tables

You can create tables and insert data into your database using the connection. Check the example below:

```python
conn.cursor().execute(
    "CREATE OR REPLACE TABLE "
    "first_t(col1 integer, col2 string)")
conn.cursor().execute(
    "INSERT INTO first_t(col1, col2) VALUES " +
    " (123, 'test string1'), " +
    " (456, 'test string2')")
```

Hope you found this article helpful.
If you have any questions, feel free to ask in the comments section.
