
10 Power BI Tips for Designing a Better Dashboard

Power BI is an interactive data visualization product developed by Microsoft with a primary focus on business intelligence. It lets users drill down into their data to gain deeper insights and make more informed decisions, and that data is usually presented in reports and dashboards. The tips below will help you design a better dashboard.

1. Consider your audience
The first thing to do when designing a dashboard is to consider your audience: you need to know what information they will need to improve their business decisions. You also need to know the device they will view the dashboard on, whether a desktop or a tablet, since this determines how the dashboard should be arranged.

2. Use the right visualization for the data
Your visualizations must be easy to read and interpret. Avoid mixing too many different visualization types, so the dashboard stays simple and passes the right message to the audience, and use appropriate titles, headers, and labels to make everything easy to read.

3. Avoid over-complicating the dashboard
Your dashboard should paint a single picture. Avoid scroll bars; your audience should be able to take in the whole dashboard at a glance. Remove anything non-essential from the dashboard.

4. Plan the page layout
Always provide a layout for your dashboard so the content is well placed. Content can be arranged from left to right, with the most important information placed at the top.

5. Use tooltips
Tooltips are a very good feature to add to a dashboard. They let readers grab the most important details about a visual and drill into more information with just a click.

6. Use bookmarks
Bookmarks help you keep a conservative design while still giving the user as much information as they need; they are particularly useful for showing and hiding visualizations.

7. Use full-screen mode
Full-screen mode is very useful when presenting to an audience: it shows the dashboard in a wider view and helps prevent distractions.

8. Use filters and slicers
Filters and slicers should be encouraged in a report. They improve both the look and the ease of use of the report.

9. Use a theme
A theme will make your report feel more consistent. A theme includes colors, background, fonts, and so on; a minimal example theme file is shown at the end of this post.

10. Provide page navigation
You can also provide page navigation for your report, for example a tabbed navigation that lets users move from the landing page to the detailed report pages.
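On the theme tip above: in Power BI Desktop a report theme is a small JSON file imported from the View ribbon, and even a minimal one standardizes your colors and background. A sketch of such a file, with a placeholder name and colors:

{
  "name": "Example Dashboard Theme",
  "dataColors": ["#1F77B4", "#FF7F0E", "#2CA02C", "#D62728"],
  "background": "#FFFFFF",
  "foreground": "#252423",
  "tableAccent": "#1F77B4"
}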


How to Set up Row-Level Security in Tableau

It is possible to restrict what users see when you share workbooks to Tableau Server or Tableau Online: only the data a signed-in user is entitled to appears in the view. We can demonstrate this with row-level security in Tableau, where we apply a type of filter that specifies which data any given person signed in to the server can see.

How to Set up User-based Filtering
We'll use one of our sales reports to demonstrate this. In this report, say we'd like to show the number of sales for different geographical regions, and we want each user to see only their own designated region when they view the report. Users won't have access to any region except the one they were given access to. We follow the steps below to make this happen:

1) First, publish the report to Tableau Server.
2) Next, create a user filter for the report: go to Server on the menu bar and click Create User Filter.
3) In the Create User Filter dialog, map each user to the exact region data they will be allowed to view, so that one user maps to a given set of rows.
4) When that is done, add the Manager with Region filter we created to the Filters pane.
5) Finally, put the view in a dashboard and publish it to the server again. Make sure the filter is applied to every worksheet that uses the related data source, so each user only sees the data designated to them.

Note that Tableau offers the following approaches to row-level security:

Create a user filter and map users to values manually. This is the approach demonstrated above. It is convenient but high maintenance: it must be done per workbook, and you must update the filter and republish the data source as your user base changes.

Create a dynamic filter using a security field in the data. Here we create a calculated field that automates the mapping of users to data values; a short example is shown at the end of this post. This method usually requires that the underlying data include the security information we want to filter on, most commonly through a reference ("look-up," "entitlements," or "security") table that contains this information.

Also note that row-level security with extracts is faster to create and performs better than row-level security against data sources that use live connections.

How do you secure your data in Tableau? Let us know in the comments section.
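A quick footnote on the dynamic-filter approach mentioned above: a minimal sketch of such a calculated field, assuming the data (or a joined entitlements table) has a [Manager Username] column holding each person's Tableau Server username, would be:

USERNAME() = [Manager Username]

Adding this calculation as a data source filter (or to the Filters shelf on every relevant sheet) and keeping only True means each signed-in user sees only the rows whose username matches their own. USERNAME() and ISMEMBEROF() are the Tableau user functions normally used for this kind of rule.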


How to Set up Row-Level Security in Power BI

What is Row-Level Security in Power BI? Row-level security in Power BI is a way to restrict which data an end user sees in a Power BI report. Filters are used to limit data access at the row level, and these filters are defined within roles. Note that in the Power BI service, members of a workspace have access to the datasets in that workspace; row-level security does not restrict that access.

Defining Roles and Rules in Power BI Desktop
The following steps show how to define roles in Power BI Desktop:

1) Open the report.
2) From the Modeling tab, select Manage Roles.
3) In the Manage roles window, select Create.
4) Under Roles, provide a name for the role.
5) Under Tables, select the table you would like to apply a DAX rule to.
6) In the Table filter DAX expression box, enter the DAX expression for the rule (a short example is shown at the end of this post). Once the expression is entered, select the checkmark above the expression box to validate it.
7) Select Save.

The next step is to validate the role in Power BI Desktop: go to the Modeling tab and select View as, tick the checkbox for the role we created, and click OK. The report then renders exactly as it would for that role; in our example, it shows only the FRANCE data.

We then move on to validating the role in the Power BI service. Keep in mind that the roles applied to the report only take effect after it has been published.

How to Validate a Role in the Power BI Service
After designing the report, publish it to the Power BI service by selecting Publish on the Home tab. Once the report has been published, open the Power BI service and manage security on the data model by following the steps below:

1) In the Power BI service, select the More options menu for the dataset.
2) Select Security.
3) This opens the Row-Level Security page, where we can add members to the role we created in Power BI Desktop. Only the owners of the dataset will see Security.

Add members by typing in their email addresses, then select Add and Save. You can also type the name of a user or security group when adding members, but you cannot add groups created in Power BI. Members external to your organization can be added as well. The following kinds of groups are typically used when setting up row-level security:

Distribution Group
Mail-enabled Group
Security Group

Also note that if the dataset is in a group workspace, only administrators of the group will see the Security option, and roles can only be created or modified in Power BI Desktop.
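For reference, the DAX rule entered in step 6 is simply a filter expression that returns true for the rows the role is allowed to see. For the FRANCE role used in this example, a rule on the table holding the country column might look like the following (the column name here is an assumption, not necessarily the one in the original report):

[Country] = "FRANCE"

For dynamic security, where a single role serves every user, a rule such as [UserEmail] = USERPRINCIPALNAME() is the usual pattern, with [UserEmail] being an assumed column that stores each user's sign-in address.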


Creating Paginated Reports in Power BI Report Server

Paginated reports are reports designed to be printed or shared; as the name suggests, they are formatted to fit well on a page. Paginated reports can run to many pages and are laid out in a fixed format with detailed customization. They are stored and managed on the Power BI Report Server web portal, edited with Report Builder, and can be viewed in the browser or on a mobile device. In this post, we will learn how to create paginated reports in Power BI Report Server.

How to Create Paginated Reports using Report Builder
The first step is to install Report Builder so we can create reports for Power BI Report Server:

Navigate to the Power BI Report Server web portal.
Select New, then Paginated Report. This guides you through the installation process.

After installation, Report Builder opens to the New Report or Dataset screen. Select the wizard for the kind of report you want to create: table or matrix, chart, map, and so on.

The Chart Wizard
Let's demonstrate further by using the Chart Wizard to create a visualization in a report. After selecting the Chart Wizard:

Under "Choose a dataset", bring in a dataset or choose a shared dataset on a server.
Choose a chart type, for example Bar, Column, Area, or Line. We will use the Bar chart type in this tutorial.
Arrange the fields by dragging them to the Categories, Series, and Values boxes.
Select Next, then Finish.

Design your report
We are now in the report design view, where we can take the design a step further. Click Run to view the data, and select Design to go back to the design view. You can modify the chart by changing the layout, legends, and values, add other visuals such as maps, matrices, and tables, and add headers and footers that repeat across pages.

Save the Report to the Report Server
When the report is ready, save it to Power BI Report Server:

Open the File menu.
Select Save as, and save the report to the report server.

Afterwards, you can view your report in the browser.

Conclusion
In this post, we have learnt how to create paginated reports in Power BI Report Server by following the steps above.


How to Run Python Scripts in Power BI Desktop

In this tutorial, we will learn how to use Python in Power BI and how to set up a Power BI Python integration. The Python integration lets you build informative dashboards and reports and carry out insightful analysis: we can run Python scripts directly in Power BI Desktop and import the resulting dataset into the Power BI data model.

How to Get Started
The first thing we need to do is install these two Python packages:

Pandas: a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series.
Matplotlib: a plotting library for Python and its numerical mathematics extension NumPy. It provides an object-oriented API for embedding plots into applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK.

We can install both Pandas and Matplotlib with the commands below:

py -m pip install pandas
py -m pip install matplotlib

Enable Python Scripting
We also need to enable Python scripting in Power BI Desktop before running scripts in it. In Power BI Desktop, select File > Options and settings > Options > Python scripting. The Python script options page appears and shows the detected Python home directory (on our machine, C:\Users\CNDRO\AppData…); we then select OK. Now we're ready to run Python scripts in Power BI Desktop.

How to Run Python Scripts in Power BI Desktop
Next we run a Python script to create the data model we will use for building reports. It is simplest to prepare the script by first running it in a Jupyter notebook to test it before using it in Power BI Desktop. There are a few limitations to keep in mind when running a Python script in Power BI Desktop:

Only pandas data frames are imported, so make sure the data you bring into Power BI is represented as a data frame.
A Python script that runs longer than 30 minutes times out, so make sure your script doesn't take too long to run.
Interactive calls in the Python script, such as waiting for user input, halt the script's execution.
When setting the working directory within the Python script, define a full path rather than a relative path.
Nested tables are currently not supported.

Import Data with a Python Script in Power BI Desktop
Prepare your script: we first prepare the data we want to bring in by running the script in a Jupyter notebook, then we use the same code in Power BI.

Import the data with the Python script: go to the Home ribbon and select Get data > Other > Python script. After it connects, paste the script that loads the CSV data into Power BI (a minimal example of such a script is shown at the end of this post). We then load this data into the data model, and the data is ready to use on the Report tab.

How to Create Visualizations using the Python Visual
You might be wondering how to create a visualization using the Python visual. To do this, first drag the Python visual onto the report canvas, then drag all the fields you want to use into the Values pane, and use the code below in the Python script editor.
# Paste or type your script code here:
import matplotlib.pyplot as plt

# Sort the customers by sales and keep the top 20
Top_customers = dataset.groupby(["Customer Name"]).sum().sort_values("Sales", ascending=False).head(20)
Top_customers = Top_customers[["Sales"]].round(2)  # Round the sales value to 2 decimal places
Top_customers.reset_index(inplace=True)

plt.figure(figsize=(15, 5))  # Width and height of the figure, in inches
plt.title("Most Valuable Customers (2015-2019)", fontsize=18)
plt.bar(Top_customers["Customer Name"], Top_customers["Sales"],
        color='#99ff99', edgecolor='green', linewidth=1)
plt.xlabel("Customers", fontsize=15)  # x axis shows the customers
plt.ylabel("Revenue", fontsize=15)    # y axis shows the revenue
plt.xticks(fontsize=12, rotation=90)
plt.yticks(fontsize=12)

# Show the exact revenue generated on top of each bar
for k, v in Top_customers["Sales"].items():
    plt.text(k, v - 8000, '$' + str(v), fontsize=12, rotation=90,
             color='k', horizontalalignment='center')

plt.show()

After entering the code in the Python script editor, we run it by clicking the icon on the editor that resembles a play button, and the chart is displayed on the report tab.
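For reference, the script pasted into the Get data > Python script dialog in the import step earlier is ordinarily just a pandas data-frame definition. A minimal sketch follows; the file path and column names are hypothetical and should be replaced with your own:

import pandas as pd

# Load the CSV file into a pandas DataFrame;
# every DataFrame defined in this script appears as a table in Power BI's Navigator.
dataset = pd.read_csv(r"C:\Data\superstore_sales.csv")

# Optional light cleanup before the data lands in the model
dataset["Order Date"] = pd.to_datetime(dataset["Order Date"])
dataset = dataset.dropna(subset=["Sales"])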


How to Create a Conversational Chatbot in Python

A chatbot is an AI-based piece of software designed to interact with humans in their natural languages. Chatbots usually converse via auditory or textual methods, and they can mimic human language well enough to communicate with people in a human-like manner. In the last few years, chatbots in Python have gained a huge following in both the business and tech sectors, because these bots are proficient at mimicking human language and conversing with humans. Bots are also responsible for a large share of internet traffic; on some e-commerce sites they have been reported to account for up to 90% of total traffic. They can converse with people on websites as well as through social media accounts. Below, we demonstrate how you can create your own chatbot.

Import libraries
We will import the NLTK library along with the other packages used in this tutorial.

import io
import random
import string  # to process standard python strings
import warnings
import numpy as np
import nltk
from nltk.stem import WordNetLemmatizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

warnings.filterwarnings('ignore')
nltk.download('popular')

Tokenization
Tokenization is the process by which a large quantity of text is divided into smaller parts called tokens. These tokens are very useful for finding patterns and are considered a base step for stemming and lemmatization, the techniques search engines and chatbots use to analyze the meaning behind a word. Tokenization also helps to substitute sensitive data elements with non-sensitive ones. We pass our data through both tokenization and lemmatization and remove punctuation marks from it.

f = open('data.txt', 'r', errors='ignore')
raw = f.read()
raw = raw.lower()
sent_tokens = nltk.sent_tokenize(raw)  # converts the text to a list of sentences
word_tokens = nltk.word_tokenize(raw)  # converts the text to a list of words

# WordNet is a semantically-oriented dictionary of English included in NLTK
lemmer = nltk.stem.WordNetLemmatizer()

def LemTokens(tokens):
    return [lemmer.lemmatize(token) for token in tokens]

remove_punct_dict = dict((ord(punct), None) for punct in string.punctuation)

def LemNormalize(text):
    return LemTokens(nltk.word_tokenize(text.lower().translate(remove_punct_dict)))

Keyword Matching
We create a list of greeting inputs and greeting responses, and pick a response at random whenever one of the greeting keywords is matched.

# Keyword matching
GREETING_INPUTS = ("hello", "hi", "greetings", "sup", "what's up", "hey")
GREETING_RESPONSES = ["hi", "hey", "*nods*", "hi there", "hello", "I am glad! You are talking to me"]

def greeting(sentence):
    for word in sentence.split():
        if word.lower() in GREETING_INPUTS:
            return random.choice(GREETING_RESPONSES)

Chatbot Implementation
This is the point where we write a function that takes the user's input, passes the transformed sentences through the TfidfVectorizer, and uses cosine similarity to pick the bot's response.
def response(user_response):
    robo_response = ''
    sent_tokens.append(user_response)
    TfidfVec = TfidfVectorizer(tokenizer=LemNormalize, stop_words='english')
    tfidf = TfidfVec.fit_transform(sent_tokens)
    vals = cosine_similarity(tfidf[-1], tfidf)
    idx = vals.argsort()[0][-2]
    flat = vals.flatten()
    flat.sort()
    req_tfidf = flat[-2]
    if req_tfidf == 0:
        robo_response = robo_response + "Oops, I can't understand you!"
        return robo_response
    else:
        robo_response = robo_response + sent_tokens[idx]
        return robo_response

Testing
This is where we call our functions and read the user's input while the code runs.

flag = True
print("John: Hi!! My name is John. I'll answer all your queries about Chatbots. If you want to exit, just type - bye!")
while flag == True:
    user_response = input()
    user_response = user_response.lower()
    if user_response == 'bye':  # exit the loop when the user types 'bye', as promised in the greeting
        flag = False
        print("John: Bye! Take care.")
    elif greeting(user_response) != None:
        print("John: " + greeting(user_response))
    else:
        print("John: ", end="")
        print(response(user_response))
        sent_tokens.remove(user_response)

Output
After running the code, the conversation we had with the bot showed that it performed well and was able to answer most of the questions asked, and there is still room to improve it. Once you have a good understanding of the structure of a chatbot built with Python, you can play around with it using various techniques or commands that will make the bot more efficient.


SQL Server Installation using Configuration File

In most organizations there are several SQL Server environments, such as development, test, and production. The databases in these environments require nearly identical configurations, and database administrators usually have to install the database engine on several servers. To streamline this process, we will discuss how to set up a configuration file that can be used for installation across these different environments. In this article, we'll walk through setting up SQL Server environments using a configuration file, in two phases:

How to create a SQL Server installation configuration file
Installing SQL Server from the configuration file

How to Create a SQL Server Installation Configuration File
We can generate the configuration file without actually installing SQL Server. Yes, it's possible! We will generate the file on our local computer (a machine running Windows 10), and we must already have downloaded the SQL Server installation files. Generating the configuration file is almost identical to installing SQL Server:

To get started, double-click the setup file. When the Installation Center starts, choose "Installation" > "New SQL Server stand-alone installation or add features to an existing installation".
After that, continue as if installing a new instance on the local machine.
Then accept the license terms.
When the services are ready to be installed, instead of clicking "Install", note the path to the generated configuration file, ConfigurationFile.ini.
Finally, having generated this file, cancel the setup.

We now have an installation file with predefined settings that can serve as a template for deploying SQL Server instances in new environments. Let's move on to the second phase.

Installing SQL Server from a Configuration File
Now we want to install SQL Server in one of our environments using the preconfigured file. To do this, double-click the installation file and choose "Advanced" > "Install based on configuration file". Then find the configuration file, select it, and click "Open". Click "Next" to move forward; in the next step you can see that all the necessary features and services are already selected. For each environment, however, you might need to modify the data directories. In the final step, review the settings and click "Install". A command-line alternative using the same file is shown at the end of this post.

We have successfully installed SQL Server using a configuration file. If you have any questions while trying this, don't hesitate to ask in the comments section below.
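As a side note, the same ConfigurationFile.ini can also drive an unattended install from the command line, which is handy when deploying to many servers. A minimal sketch, where the path is a placeholder and the switches you need beyond /ConfigurationFile depend on what is already captured inside the file:

:: Run from the folder containing the SQL Server setup media
setup.exe /ConfigurationFile="C:\Temp\ConfigurationFile.ini" /IAcceptSQLServerLicenseTerms

Depending on the options stored in the file, you may also need switches such as /Q for a fully quiet install or /SAPWD to supply the sa password.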


Five-Letter Word Guessing Game App

This guessing game asks users to suggest hidden letters for a five-letter word. You have five empty boxes; you supply a letter (or letters) and the position where each letter should be placed, and the game then generates five-letter words that match your input. In this article, we will guide you through the steps of creating a five-letter word guessing game app and how to play the game. Let's jump right in.

The Game Design
The player sees a blank five-letter space, with one space per letter. The player then guesses a letter and a position, entering at most one letter per space. Based on the letter entered and the position it occupies among the five blank spaces, the game generates five-letter words with that letter in exactly that position. For instance, if I have five empty spaces and I enter the letter 'S' at position 1, the game generates five-letter words like smile, Sahel, sager, and so on.

The game also filters words for the user: we can specify characters we do not want to appear in the generated five-letter words. Say that, for all the five-letter words with 'S' at position 1, we don't want to find 'e' or 'a' anywhere in the word; the game will handle that and print only the five-letter words that contain neither 'e' nor 'a'. Quite interesting, right?

Data Collection
Firstly, collect data to make the word generation easier. Then clean the data to remove any dirty entries and check for duplicates.

Game Implementation
Import Libraries
We first import the libraries we want to use and bring in the five-letter words we generated and cleaned. The next step is to give players instructions so they know which level they are interested in playing, and to read their input for that level (by picking 1, 2, 3, 4, or 0).

import pandas as pd

file = 'all_joined_fivelett_16plus.csv'

def intro():
    print("*******************************************")
    print('Hi, you are welcome to the guessing game')
    print('Instructions: This game has different levels of difficulty: Level 1, Level 2, Level 3, Level 4, Level 0')
    print('Level 1: You supply only one character')
    print('Level 2: You supply only two characters')
    print('Level 3: You supply only three characters')
    print('Level 4: You supply only four characters')
    print('Level 0: It returns negative words')
    print('You indicate which level you will be playing')
    print("*******************************************")

def words(csv_file):
    data = pd.read_csv(csv_file)
    five_lw = data['Letters'].tolist()
    return five_lw

def get_input_lev():
    lev_input = input('Enter level to play ')
    return lev_input

Game Initiation
We define a different input function for each level. Whichever level the player selects, we read the player's input to know which letters they want to place in the empty boxes and at which positions they would like them to be placed.
def one_game_input():
    # Read the position and letter from the player
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your letter to be placed')
    pos = int(input()) - 1
    print('Which letter would you like to enter')
    the_input = input()
    return pos, the_input

def two_game_input():
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your first letter to be placed')
    letter_posone = int(input()) - 1
    print('Which letter would you like to enter first')
    letter_inputone = input()
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your next letter to be placed')
    letter_postwo = int(input()) - 1
    print('Which second letter would you like to enter next')
    letter_inputtwo = input()
    return letter_posone, letter_inputone, letter_postwo, letter_inputtwo

def three_game_input():
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your first letter to be placed')
    letter_posone = int(input()) - 1
    print('Which letter would you like to enter first')
    letter_inputone = input()
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your next letter to be placed')
    letter_postwo = int(input()) - 1
    print('Which second letter would you like to enter next')
    letter_inputtwo = input()
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your next letter to be placed')
    letter_posthree = int(input()) - 1
    print('Which third letter would you like to enter next')
    letter_inputthree = input()
    return letter_posone, letter_inputone, letter_postwo, letter_inputtwo, letter_posthree, letter_inputthree

def four_game_input():
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your first letter to be placed')
    letter_posone = int(input()) - 1
    print('Which letter would you like to enter first')
    letter_inputone = input()
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your next letter to be placed')
    letter_postwo = int(input()) - 1
    print('Which second letter would you like to enter next')
    letter_inputtwo = input()
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your next letter to be placed')
    letter_posthree = int(input()) - 1
    print('Which third letter would you like to enter next')
    letter_inputthree = input()
    print("_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _")
    print('At what position would you like your last letter to be placed')
    letter_posfour = int(input()) - 1
    print('Which fourth letter would you like to enter next')
    letter_inputfour = input()
    return letter_posone, letter_inputone, letter_postwo, letter_inputtwo, letter_posthree, letter_inputthree, letter_posfour, letter_inputfour
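The level functions above only collect the player's input; the actual matching described in the game design section boils down to a simple filter over the word list. A minimal sketch of that logic follows; the function and argument names are illustrative, not the exact ones from the project:

def find_words(word_list, placed=None, excluded=""):
    """Return five-letter words that have the given letters at the given
    positions and contain none of the excluded characters."""
    placed = placed or {}  # e.g. {0: 's'} means 's' at position 1
    matches = []
    for word in word_list:
        word = str(word).lower()
        if len(word) != 5:
            continue
        if any(word[pos] != letter for pos, letter in placed.items()):
            continue
        if any(ch in word for ch in excluded):
            continue
        matches.append(word)
    return matches

# Example: letter 's' at position 1, excluding words containing 'e' or 'a'
print(find_words(["smile", "sahel", "sugar", "snowy"], placed={0: "s"}, excluded="ea"))
# -> ['snowy']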


How to send Messages in Microsoft Teams using Incoming Webhook Adaptive Card

An Incoming Webhook lets external apps share content in Teams channels; webhooks are used mainly as tracking and notification tools. A webhook provides a unique URL to which you send a JSON payload containing a message in card format. Cards are user interface containers that hold content and actions related to a single topic, and Microsoft Teams uses cards in the following features:

Bots
Messaging extensions
Connectors

We will implement an alert system that acts as a bot for sending messages in Microsoft Teams, using an Incoming Webhook to demonstrate this.

How to Create an Incoming Webhook in Teams
1. Open the Microsoft Teams app, go to the channel you would like to add the webhook to, and select ••• More options from the top navigation bar.
2. Select Connectors from the dropdown menu.
3. Search for Incoming Webhook and select Add.
4. Select Configure, provide a name, and upload an image for your webhook if you want.
5. The dialog window presents a unique URL that maps to the channel. Copy and save the webhook URL so you can send information to Microsoft Teams, then select Done.

Now we can use this URL in our script to post messages to Teams.

Template for using Adaptive Cards
The JSON payload we include in our script when calling the webhook must be in this format:

{
  "type": "message",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.adaptive",
      "contentUrl": null,
      "content": {
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "type": "AdaptiveCard",
        "version": "1.2",
        "body": [
          {
            "type": "TextBlock",
            "text": "For Samples and Templates, see [https://adaptivecards.io/samples](https://adaptivecards.io/samples)"
          },
          {
            "type": "Image",
            "url": "https://adaptivecards.io/content/cats/1.png"
          }
        ]
      }
    }
  ]
}

Sending a Message to Teams
Here, we use the webhook URL with an Adaptive Card payload to send data to the Teams channel. With the Adaptive Card we can tag the different people in the organization the message is meant for, for example an alert reminding team members about the daily standup meeting.

# import libraries here
import requests
import json

# specify the webhook url
url = "https://cndrocom.webhook.office.com/webhookb2/xxxxxxxxxxxxxxxxxxxxxx@xxxxxxxxxxxxxxxxxxx/IncomingWebhook/xxxxxxxxxxxxxxxxx/xxxxxxxxxxxxx"

# the payload is wrapped with json.dumps so it is sent as a string
payload = json.dumps({
    "type": "message",
    "attachments": [
        {
            "contentType": "application/vnd.microsoft.card.adaptive",
            "content": {
                "type": "AdaptiveCard",
                "body": [
                    {
                        # this block controls how the title text is formatted (size, weight, etc.)
                        "type": "TextBlock",
                        "size": "Medium",
                        "weight": "Bolder",
                        "text": "Daily Standup Bot"
                    },
                    {
                        "type": "TextBlock",
                        # the message itself, mentioning each team member by their <at>...</at> tag
                        "text": "Hi, what's the update on our board today <at>Jones UPN</at>, <at>Michael UPN</at>, <at>Grace UPN</at>, <at>Paul UPN</at>, <at>Kate UPN</at>, <at>Mark UPN</at>"
                    }
                ],
                "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
                "version": "1.0",
                "msteams": {
                    "width": "Full",
                    "entities": [
                        # each team member is tagged here; "text" must match the <at>...</at> tag above,
                        # "id" is the member's email (UPN) and "name" is their display name
                        {
                            "type": "mention",
                            "text": "<at>Jones UPN</at>",
                            "mentioned": {"id": "[email protected]", "name": "Jones"}
                        },
                        {
                            "type": "mention",
                            "text": "<at>Michael UPN</at>",
                            "mentioned": {"id": "[email protected]", "name": "Michael"}
                        },
                        {
                            "type": "mention",
                            "text": "<at>Grace UPN</at>",
                            "mentioned": {"id": "[email protected]", "name": "Grace"}
                        },
                        {
                            "type": "mention",
                            "text": "<at>Paul UPN</at>",
                            "mentioned": {"id": "[email protected]", "name": "Paul"}
                        },
                        {
                            "type": "mention",
                            "text": "<at>Kate UPN</at>",
                            "mentioned": {"id": "[email protected]", "name": "Kate"}
                        },
                        {
                            "type": "mention",
                            "text": "<at>Mark UPN</at>",
                            "mentioned": {"id": "[email protected]", "name": "Mark"}
                        }
                    ]
                }
            }
        }
    ]
})

# specify the request headers
headers = {
    'Content-Type': 'application/json'
}

# send a POST request to the webhook url with the payload
response = requests.request("POST", url, headers=headers, data=payload)

# print the response to confirm the message was delivered
print(response.text)

Automate the Process
We can automate this process so we don't need to run the code manually each time; instead, we can use Windows Task Scheduler and configure it, for instance, to run the script at 12 noon every day and send the message to us on Teams.
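For example, assuming the script is saved as C:\Scripts\standup_bot.py (the path and task name below are placeholders), a scheduled task like this would post the message at 12:00 every day:

schtasks /Create /TN "TeamsStandupBot" /TR "python C:\Scripts\standup_bot.py" /SC DAILY /ST 12:00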


How to Normalize Data Point Axis in Tableau

In this article, I will show you how to normalize data point axes in Tableau. Recently, I got a request to build a panel chart of all the measures in a data set, and I came up with a first version built on the sample Superstore data.

In that first version, the Discount and Quantity line charts look flat because they share the same axis as measures like Units and DV, which have much larger value ranges. That is not ideal, because it is very difficult to see any trend in Discount or Quantity.

Request: plot each measure so that its trend shows clearly.

Proposed solution: I came up with the mathematical approach below.

Pivot all the measure fields, so that all the measures can be plotted in one panel.
Pick up each data point per date (in my case, monthly).
Pick up the maximum value across all the measure fields.
Sum the measure field per pivot field (measure name).

The normalization formula is:

Normalized Xi = (Xi × max(all data points)) / (sum of the measure's data points)

where Xi is a single monthly data point, max(all data points) is the largest data point across every measure, and the denominator is the total of that measure over the whole period. Note that I came up with this formula myself; it is not a standard formula for normalizing data. A small worked example with made-up numbers is given at the end of this post.

Tableau Desktop
I will not go into the details of how to create a panel chart in Tableau. After pivoting the data, I created the following calculations:

1. Pick each data point, tagged 'Data point':

{ FIXED [Pivot Field Names], DATETRUNC('month', [Weekly Start Date]) : SUM([Pivot Field Values]) }

2. Find the maximum value across all the data points, tagged 'Max':

{ MAX({ FIXED [Pivot Field Names], DATETRUNC('month', [Weekly Start Date]) : SUM([Pivot Field Values]) }) }

3. Estimate the sum of pivot values per pivot name, tagged 'Sum of Metric':

{ FIXED [Pivot Field Names] : SUM([Pivot Field Values]) }

4. Finally, compute the normalized value for each data point using the formula above:

([Data point] * [Max]) / [Sum of Metric]

After normalizing, both the Discount and Quantity line trends show clearly in the panel chart. Check out the dashboard on Tableau Public: https://public.tableau.com/app/profile/kayode.ajewole7777/viz/PanelChart_16390261051060/AllModel?publish=yes
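To make the formula concrete with made-up numbers: if a monthly Discount data point is 0.3, the maximum data point across every measure and month is 12,000 (say, from Sales), and Discount sums to 60 over the whole period, then the value plotted is (0.3 × 12,000) / 60 = 60. Every measure is rescaled relative to its own total, so small measures such as Discount and Quantity are no longer flattened by the measures with much larger axes.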
