SQL Group By and Partition By Scenarios: When and How to Combine Data in Data Science
Learn the common scenarios and techniques for grouping, aggregating, partitioning, and ranking data in SQL, all of which are very helpful for reporting requirements.
Introduction
SQL (Structured Query Language) is the standard language for managing and manipulating data in relational databases, which makes SQL queries essential for interacting with databases in a structured and efficient manner.
Grouping in SQL is a powerful tool for organizing and analyzing data. It helps extract meaningful insights and summaries from complex datasets, and its most common use case is summarizing and understanding data characteristics, which supports analytical and reporting tasks.
Many requirements boil down to combining records that share a common value and calculating statistics within each group. Most of these instances can be generalized into a handful of common scenarios, which can then be reused whenever a similar requirement comes up.
SQL Clause: Group By
The GROUP BY clause in SQL is used for:
- grouping data on one or more columns
- reducing each group to a single row
- performing aggregation operations on the other columns of each group
Grouping column = the value in the grouping column is the same for all rows in a group
Aggregation column = the values in the aggregation column generally differ, and an aggregate function such as sum() or max() is applied over them
The aggregation column should not be the same as the grouping column; a generic sketch of this pattern is shown below.
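Before diving into the scenarios, here is a minimal sketch of the general pattern. The table and column names are purely illustrative placeholders, not part of any specific schema:
-- some_table(grouping_col, aggregation_col) is a hypothetical table, used only for illustration
select grouping_col,
sum(aggregation_col) as total_value
from some_table
group by grouping_col;
Every scenario below is a variation of this shape: pick the grouping column(s), pick the aggregate, and optionally filter the groups with HAVING.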
Scenario 1: Grouping to find the Total (Sum)
Let's say we want to calculate the total sales of every category in the sales table.
So, we will group by category and aggregate individual sales in every category.
select category,
sum(amount) as sales
from sales
group by category;
Grouping column = category
Aggregation column = amount
Aggregation function = sum()
category | sales |
---|---|
toys | 10,700 |
books | 4,200 |
gym equipment | 2,000 |
stationery | 1,400 |
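If the report also needs the categories sorted by their totals, the same query can be extended with an ORDER BY on the aggregate; this is a small optional variation, not something the grouping itself requires:
select category,
sum(amount) as sales
from sales
group by category
order by sales desc;
Some databases prefer order by sum(amount) desc over the column alias, but most accept either form.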
Scenario 2: Grouping to find Count
Let’s say we want to calculate the count of employees in each department.
In this case, we will group by the department and calculate the count of employees in every department.
select department,
count(empid) as emp_count
from employees
group by department;
Grouping column = department
Aggregation column = empid
Aggregation function = count
department | emp_count |
---|---|
finance | 7 |
marketing | 12 |
technology | 20 |
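A closely related requirement is counting distinct values instead of rows. Assuming, purely for illustration, that the employees table also has a job_title column, count(distinct ...) gives the number of different titles per department:
select department,
count(distinct job_title) as title_count  -- job_title is an assumed column, for illustration only
from employees
group by department;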
Scenario 3: Grouping to find the Average
Let’s say we want to calculate the average salary of employees in each department.
Similarly, we will again group them by department and calculate the average salaries of employees in every department separately.
select department,
avg(salary) as avg_salary
from employees
group by department;
Grouping column = department
Aggregation column = salary
Aggregation function = avg
department | avg_salary |
---|---|
finance | 2,500 |
marketing | 4,700 |
technology | 10,200 |
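Averages often come back with long decimal fractions. If the report should show a tidier number, the aggregate can be wrapped in round(); exact behavior depends on the database and the column type, so treat this as a sketch:
select department,
round(avg(salary), 2) as avg_salary  -- round to 2 decimal places; support may vary by database and type
from employees
group by department;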
Scenario 4: Grouping to find Maximum / Minimum
Let’s say we want to calculate the highest salary of employees in each department.
We will group the departments and calculate the maximum salary in every department.
select department,
max(salary) as max_salary
from employees
group by department;
Grouping column = department
Aggregation column = salary
Aggregation function = max
department | max_salary |
---|---|
finance | 4,000 |
marketing | 9,000 |
technology | 12,000 |
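The same grouping can return several aggregates at once, for example the minimum and maximum salary and the spread between them:
select department,
min(salary) as min_salary,
max(salary) as max_salary,
max(salary) - min(salary) as salary_spread
from employees
group by department;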
Scenario 5: Grouping to Find Duplicates
Let’s say we want to find duplicate or repeated customer names in our database.
We will group by the customer name and use count as the aggregation function. Further, we will use a HAVING clause over the aggregation function to keep only those groups whose count is greater than one.
select name,
count(*) AS duplicate_count
from customers
group by name
having count(*) > 1;
Grouping column = name
Aggregation column = *
Aggregation function = count
Having = filter condition to be applied over aggregation function
name | duplicate_count |
---|---|
Jake Junning | 2 |
Mary Moone | 3 |
Peter Parker | 5 |
Oliver Queen | 2 |
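If two different customers can legitimately share a name, grouping on more columns narrows the result to likely true duplicates. Assuming, for illustration, that the customers table also has an email column, the same pattern extends naturally:
select name, email,
count(*) as duplicate_count  -- email is an assumed column, for illustration only
from customers
group by name, email
having count(*) > 1;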
SQL Clause: Partition By
The PARTITION BY clause in SQL is used for:
- grouping/partitioning data on one or more columns
- retaining individual rows instead of combining them into one
- performing ranking and aggregation operations on other columns of each group/partition
Partitioning column = the column on which the data is grouped; its value is the same for all rows in a partition. If no partitioning column is specified, the complete table is treated as a single partition.
Ordering column = within each partition created by the partitioning column, the rows are ordered/sorted by this column
Ranking function = a ranking function or an aggregation function applied to the rows in the partition (a generic sketch follows below)
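As with GROUP BY, the general shape can be sketched with purely illustrative table and column names. Note that every row of the table is returned, with the window function computed per partition:
-- some_table(partition_col, ordering_col, aggregation_col) is a hypothetical table, used only for illustration
select partition_col, ordering_col, aggregation_col,
row_number() over (partition by partition_col order by ordering_col) as row_rank,
sum(aggregation_col) over (partition by partition_col) as partition_total
from some_table;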
Scenario 6: Partitioning to find the Highest record in a Group
Let’s say we want to calculate which book in every category has the highest sales - along with the amount that the top seller book has made.
In this case, we cannot use a group by clause - because grouping will reduce the records in every category to a single row.
However, we need the record details such as book name, amount, etc., along with category to see which book has made the highest sales in each category.
select category, book_name, amount,
row_number() over (partition by category order by amount desc) as sales_rank
from book_sales;
Partitioning column = category
Ordering column = amount
Ranking function = row_number()
This query gives us all the rows in the book_sales table, with the rows ranked within every book category so that the highest-selling book is row number 1.
Now we need to filter only the row number 1 rows to get the top-selling book in each category.
select category, book_name, amount from (
select category, book_name, amount,
row_number() over (partition by category order by amount desc) as sales_rank
from book_sales
) as book_ranked_sales
where sales_rank = 1;
The above filter gives us only the top-selling book in each category, along with the sale amount each of those books has made.
category | book_name | amount |
---|---|---|
science | The hidden messages in water | 20,700 |
fiction | Harry Potter | 50,600 |
spirituality | Autobiography of a Yogi | 30,800 |
self-help | The 5 Love Languages | 12,700 |
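One detail to keep in mind: row_number() picks exactly one row per category even if two books tie on the top amount. If ties should all be reported, rank() (or dense_rank()) can be swapped in, at the cost of possibly returning more than one row per category:
select category, book_name, amount from (
select category, book_name, amount,
rank() over (partition by category order by amount desc) as sales_rank
from book_sales
) as book_ranked_sales
where sales_rank = 1;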
Scenario 7: Partitioning to Find Cumulative Totals in a Group
Let’s say we want to calculate the running total (cumulative total) of sales, with a separate cumulative total for every product.
We will partition by product_id and order each partition by date.
select product_id, date, amount,
sum(amount) over (partition by product_id order by date desc) as running_total
from sales_data;
Partitioning column = product_id
Ordering column = date
Ranking function = sum()
product_id | date | amount | running_total |
---|---|---|---|
1 | 2023-12-25 | 3,900 | 3,900 |
1 | 2023-12-24 | 3,000 | 6,900 |
1 | 2023-12-23 | 2,700 | 9,600 |
1 | 2023-12-22 | 1,800 | 11,400 |
2 | 2023-12-25 | 2,000 | 2,000 |
2 | 2023-12-24 | 1,000 | 3,000 |
2 | 2023-12-23 | 700 | 3,700 |
3 | 2023-12-25 | 1,500 | 1,500 |
3 | 2023-12-24 | 400 | 1,900 |
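Because the query orders by date desc, the total accumulates from the most recent sale backwards, which is what the sample output shows. If the running total should instead grow in chronological order, the ordering direction is simply flipped:
select product_id, date, amount,
sum(amount) over (partition by product_id order by date) as running_total
from sales_data;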
Scenario 8: Partitioning to Compare Values within a Group
Let’s say we want to compare the salary of every employee with the average salary of their department.
So we will partition the employees based on department and find the average salary of each department.
This average can then be subtracted from each employee's individual salary to see whether that salary is above or below the department average, as sketched after the output table.
select employee_id, salary, department,
avg(salary) over (partition by department) as avg_dept_sal
from employees;
Partitioning column = department
Ordering column = no order
Ranking function = avg()
employee_id | salary | department | avg_dept_sal |
---|---|---|---|
1 | 7,200 | finance | 6,400 |
2 | 8,000 | finance | 6,400 |
3 | 4,000 | finance | 6,400 |
4 | 12,000 | technology | 11,300 |
5 | 15,000 | technology | 11,300 |
6 | 7,000 | technology | 11,300 |
7 | 4,000 | marketing | 5,000 |
8 | 6,000 | marketing | 5,000 |
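Here is a minimal sketch of the comparison mentioned above: subtracting the window average from each salary gives a positive value for employees above their department average and a negative value for those below it:
select employee_id, salary, department,
salary - avg(salary) over (partition by department) as diff_from_avg
from employees;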
Scenario 9: Partitioning to divide results into equal groups
Let’s say we want to divide the employees into 4 equal (or nearly equal) groups based on their salary.
So we will derive another logical column, tile_id, which will hold the numeric id of the group each employee falls into.
The groups are created based on salary: the first tile group has the highest salaries, and so on.
select employee_id, salary,
ntile(4) over (order by salary desc) as tile_id
from employees;
Partitioning column = no partition - complete table is in the same partition
Ordering column = salary
Ranking function = ntile()
employee_id | salary | tile_id |
---|---|---|
4 | 12,500 | 1 |
11 | 11,000 | 1 |
3 | 10,500 | 1 |
1 | 9,000 | 2 |
8 | 8,500 | 2 |
6 | 8,000 | 2 |
12 | 7,000 | 3 |
5 | 7,000 | 3 |
9 | 6,500 | 3 |
10 | 6,000 | 4 |
2 | 5,000 | 4 |
7 | 4,000 | 4 |
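ntile() also accepts a PARTITION BY, so the same bucketing can be done within each department instead of across the whole company; this is a variation of the query above, not a different technique:
select employee_id, department, salary,
ntile(4) over (partition by department order by salary desc) as tile_id
from employees;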
Scenario 10: Partitioning to identify islands or gaps in data
Let’s say we have a sequential product_id column, and we want to identify gaps in it.
So we will derive another logical column, island_id, which holds the same value for every unbroken run of consecutive product_id values. Whenever a break in product_id is identified, island_id increases.
select product_id,
row_number() over (order by product_id) as row_num,
product_id - row_number() over (order by product_id) as island_id
from products;
Partitioning column = no partition - complete table is in the same partition
Ordering column = product_id
Ranking function = row_number()
product_id | row_num | island_id |
---|---|---|
1 | 1 | 0 |
2 | 2 | 0 |
4 | 3 | 1 |
5 | 4 | 1 |
6 | 5 | 1 |
8 | 6 | 2 |
9 | 7 | 2 |
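Once every row carries its island_id, an ordinary GROUP BY over it summarizes each unbroken run, and the gaps are whatever lies between consecutive islands. A sketch combining both clauses:
select island_id,
min(product_id) as island_start,
max(product_id) as island_end,
count(*) as island_size
from (
select product_id,
product_id - row_number() over (order by product_id) as island_id
from products
) as islands
group by island_id;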
Conclusion
GROUP BY and PARTITION BY are used to solve many problems, such as:
- Summarizing Information: Grouping allows you to aggregate data and summarize information in every group.
- Analyzing Patterns: It helps in identifying patterns or trends within data subsets, providing insights into various aspects of the dataset.
- Statistical Analysis: It enables the calculation of statistical measures such as averages, counts, maximums, minimums, and other aggregate functions within groups.
- Data Cleansing: It helps identify duplicates, inconsistencies, or anomalies within groups, making data cleansing and quality improvement more manageable.
- Cohort Analysis: It is useful in cohort-based analysis, tracking and comparing groups of entities over time.
Hanu runs the HelperCodes Blog, which mainly deals with SQL Cheat Sheets. He is a full stack developer interested in creating reusable assets.