15 Basic and Highly Used Hive Queries that All Data Engineers Must Know
Overview
- Get to know 15 basic Hive queries, including:
- Simple selects – selecting columns
- Simple selects – selecting rows
- Creating new columns
- Hive Functions
- By no means is this list exhaustive. Feel free to add more in the comments section.
Introduction
Using Apache Hive queries, you can query distributed data storage, including data stored in Hadoop.
Hive supports ANSI SQL and atomic, consistent, isolated, and durable (ACID) transactions. For updating data, you can use the MERGE statement, which also meets ACID standards. Materialized views optimize queries based on access patterns. Hive supports tables up to 300PB in Optimized Row Columnar (ORC) format; other file formats are also supported.
You can create tables that resemble those in a traditional relational database, and you use the familiar insert, update, delete, and merge SQL statements to work with table data.
The insert statement writes data to tables. Update and delete statements modify and delete values already written to Hive. The merge statement streamlines updates, deletes, and change data capture operations by drawing on co-existing tables.
These statements support auto-commit, which treats each statement as a separate transaction and commits it after the SQL statement is executed. In this article, we are going to cover some basic Hive queries and functions that will give you a basic understanding of how to run Hive queries over distributed data.
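As a quick illustration, here is a minimal, hedged sketch of these statements. The customers and customers_staging tables and their columns are hypothetical, and in Hive the update, delete, and merge statements require a transactional (ACID) table, typically stored as ORC:

-- Hypothetical ACID table: UPDATE/DELETE/MERGE need transactional tables
CREATE TABLE customers (id INT, name STRING, city STRING)
STORED AS ORC TBLPROPERTIES ('transactional' = 'true');

INSERT INTO customers VALUES (1, 'Ana', 'Leeds');
UPDATE customers SET city = 'York' WHERE id = 1;
DELETE FROM customers WHERE id = 1;

-- MERGE upserts rows from a staging table into the target in one statement
MERGE INTO customers AS t
USING customers_staging AS s
ON t.id = s.id
WHEN MATCHED THEN UPDATE SET city = s.city
WHEN NOT MATCHED THEN INSERT VALUES (s.id, s.name, s.city);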
Table of Contents:
- Simple Selects – Selecting columns
- Simple Selects – Selecting rows
- Creating new columns
- Hive Functions
- Simple Functions
- Concat
- Substr
- Aggregations
- Date functions
Simple Selects – Selecting columns
In Hive, querying data is performed by a SELECT statement. A SELECT statement has six key components:
- SELECT column names
- FROM table-name
- WHERE conditions
- GROUP BY column names
- HAVING conditions
- ORDER BY column names
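Put together, a query using all six clauses has the following shape (a generic sketch; the table and column names are placeholders):

SELECT col1, col2, sum(col3) AS total
FROM some_table
WHERE col2 = 'some_value'
GROUP BY col1, col2
HAVING sum(col3) > 100
ORDER BY total;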
In practice, very few queries will have all of these clauses in them, which keeps many queries simple. On the other hand, conditions in the WHERE clause can be very complex, and if you need to JOIN two or more tables together then more clauses (JOIN and ON) are needed; a join example appears later in this article.
All of the clause names above have been written in uppercase for clarity, but HQL is not case sensitive. Nor do you need to write each clause on a new line, although it is often clearer to do so for all but the simplest of queries.
In this lesson, we will start with the very simple and work our way up to the more complex.
Simple selects – selecting columns
Amongst all the Hive queries, the simplest query is effectively one which returns the contents of the whole table:
SELECT * FROM geog_all;
It is better practice, and generally more efficient, to explicitly list the column names that you want to be returned:
SELECT anonid, fueltypes, acorn_type FROM geog_all;
Simple selects – selecting rows
In addition to limiting the columns returned by a query, you can also limit the rows returned. The simplest case is to say how many rows are wanted using the LIMIT clause.
SELECT anonid, fueltypes, acorn_type FROM geog_all LIMIT 10;
This is useful if you just want to get a feel for what the data looks like. Usually, though, you will want to restrict the rows returned based on some criteria, e.g. certain values or ranges within one or more columns.
SELECT anonid, fueltypes, acorn_type FROM geog_all WHERE fueltypes = "ElecOnly";
The expression in the WHERE clause can be more complex and involve more than one column.
SELECT anonid, fueltypes, acorn_type FROM geog_all WHERE fueltypes = "ElecOnly" AND acorn_type > 42;

SELECT anonid, fueltypes, acorn_type FROM geog_all WHERE fueltypes = "ElecOnly" AND acorn_type > 42 AND nuts1 <> "--";
Notice that the columns used in the conditions of the WHERE clause don't have to appear in the SELECT clause. Other operators can also be used in the WHERE clause. For complex expressions, brackets can be used to enforce precedence.
SELECT anonid, fueltypes, acorn_type, nuts1, ldz
FROM geog_all
WHERE fueltypes = "ElecOnly"
  AND acorn_type BETWEEN 42 AND 47
  AND (nuts1 NOT IN ("UKM", "UKI") OR ldz = "--");
Creating new columns
It is possible to create new columns in the output of the query. These new columns can be derived from combinations of the other columns, using operators and/or built-in Hive functions.
SELECT anonid, eprofileclass, acorn_type,
       (eprofileclass * acorn_type) AS multiply,
       (eprofileclass + acorn_type) AS added
FROM edrp_geography_data b;
A full list of the operators and functions available within Hive can be found in the documentation.
When you create a new column it is usual to provide an ‘alias’ for the column. This is essentially the name you wish to give to the new column. The alias is given immediately after the expression to which it refers. Optionally you can add the AS keyword for clarity. If you do not provide an alias for your new columns, Hive will generate a name for you.
Although the term alias may seem a bit odd for a new column that has no natural name, aliases can also be used with any existing column to provide a more meaningful name in the output.
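For example, a minimal sketch reusing the geog_all columns from earlier (the alias names are arbitrary):

SELECT anonid AS customer_id,
       eprofileclass * acorn_type multiplied
FROM geog_all;

Here customer_id renames an existing column, while multiplied names a new column without the optional AS keyword.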
Tables can also be given an alias. This is particularly common in join queries involving multiple tables, where there is a need to distinguish between columns with the same name in different tables.
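A minimal sketch of a join using table aliases, assuming that anonid in geog_all matches anon_id in elec_c (the join condition is an assumption for illustration):

SELECT g.anonid, g.acorn_type, sum(e.eleckwh) AS total_usage
FROM geog_all g
JOIN elec_c e ON g.anonid = e.anon_id
GROUP BY g.anonid, g.acorn_type;

In addition to using operators to create new columns, there are also many Hive built-in functions that can be used.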
Hive Functions
Simple functions
concat can be used to join strings together:
SELECT anonid, acorn_category, acorn_group, acorn_type,
       concat(acorn_category, ",", acorn_group, ",", acorn_type) AS acorn_code
FROM geog_all;
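When every value is joined with the same separator, Hive's concat_ws function is a more compact alternative. A sketch equivalent to the query above, assuming the three acorn columns are strings (unlike concat, concat_ws only accepts string arguments):

SELECT anonid,
       concat_ws(",", acorn_category, acorn_group, acorn_type) AS acorn_code
FROM geog_all;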
substr can be used to extract a part of a string
SELECT anon_id, advancedatetime,
       substr(advancedatetime, 1, 2) AS day,
       substr(advancedatetime, 3, 3) AS month,
       substr(advancedatetime, 6, 2) AS year
FROM elec_c;
Examples of length, instr, and reverse
SELECT anonid, acorn_code,
       length(acorn_code),
       instr(acorn_code, ',') AS a_catpos,
       instr(reverse(acorn_code), ",") AS reverse_a_typepos
FROM geog_all;
Where needed, functions can be nested within each other. The cast function handles type conversions:
SELECT anonid,
       substr(acorn_code, 7, 2) AS ac_type_string,
       cast(substr(acorn_code, 7, 2) AS INT) AS ac_type_int,
       substr(acorn_code, 7, 2) + 1 AS ac_type_not_sure
FROM geog_all;
Aggregations
Aggregate functions are used to perform some kind of mathematical or statistical calculation across a group of rows. The rows in each group are determined by the different values in a specified column or columns. A list of all of the available functions is available in the Apache documentation.
SELECT anon_id,
       count(eleckwh) AS total_row_count,
       sum(eleckwh) AS total_period_usage,
       min(eleckwh) AS min_period_usage,
       avg(eleckwh) AS avg_period_usage,
       max(eleckwh) AS max_period_usage
FROM elec_c
GROUP BY anon_id;
In the above example, five aggregations were performed over a single grouping column, anon_id. It is possible to aggregate over multiple columns by specifying them in both the SELECT and the GROUP BY clause. The grouping will take place based on the order of the columns listed in the GROUP BY clause. What is not allowed is specifying a non-aggregated column in the SELECT clause which is not mentioned in the GROUP BY clause.
SELECT anon_id,
       substr(advancedatetime, 6, 2) AS reading_year,
       count(eleckwh) AS total_row_count,
       sum(eleckwh) AS total_period_usage,
       min(eleckwh) AS min_period_usage,
       avg(eleckwh) AS avg_period_usage,
       max(eleckwh) AS max_period_usage
FROM elec_c
GROUP BY anon_id, substr(advancedatetime, 6, 2);
Unfortunately, the GROUP BY clause will not accept aliases.
SELECT anon_id,
       substr(advancedatetime, 6, 2) AS reading_year,
       count(eleckwh) AS total_row_count,
       sum(eleckwh) AS total_period_usage,
       min(eleckwh) AS min_period_usage,
       avg(eleckwh) AS avg_period_usage,
       max(eleckwh) AS max_period_usage
FROM elec_c
GROUP BY anon_id, substr(advancedatetime, 6, 2)
ORDER BY anon_id, reading_year;
But the ORDER BY clause does.
The DISTINCT keyword provides the set of unique combinations of column values within a table, without any kind of aggregation.
SELECT DISTINCT eprofileclass, fueltypes FROM geog_all;
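If you need the number of unique combinations rather than the combinations themselves, DISTINCT can be combined with count (a sketch on the same table):

SELECT count(DISTINCT eprofileclass, fueltypes) AS unique_combinations
FROM geog_all;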
Date functions
In the elec_c and gas_c tables, the advancedatetime column, although it contains timestamp-type information, is defined as a string type. For much of the time this can be quite convenient. However, there will be times when we really do need to be able to treat the column as a Timestamp. Perhaps the most obvious example is when you need to sort rows based on the advancedatetime column.
Hive provides a variety of date-related functions to allow you to convert strings into Timestamps and to extract parts of a Timestamp.
unix_timestamp returns the current date and time – as an integer!
from_unixtime takes an integer and converts it into a recognizable Timestamp string
SELECT unix_timestamp() AS currenttime FROM sample_07 LIMIT 1;

SELECT from_unixtime(unix_timestamp()) AS currenttime FROM sample_07 LIMIT 1;
There are various date part functions which will extract the relevant parts from a Timestamp string
SELECT anon_id,
       from_unixtime(unix_timestamp(reading_date, 'ddMMMyy')) AS proper_date,
       year(from_unixtime(unix_timestamp(reading_date, 'ddMMMyy'))) AS full_year,
       month(from_unixtime(unix_timestamp(reading_date, 'ddMMMyy'))) AS full_month,
       day(from_unixtime(unix_timestamp(reading_date, 'ddMMMyy'))) AS full_day,
       last_day(from_unixtime(unix_timestamp(reading_date, 'ddMMMyy'))) AS last_day_of_month,
       date_add(from_unixtime(unix_timestamp(reading_date, 'ddMMMyy')), 10) AS added_days
FROM elec_days_c
ORDER BY proper_date;
Conclusion:
In this article, we covered some basic Hive functions and queries. Running queries on distributed data is not much different from running queries in MySQL. A few follow-up articles will cover more advanced functions and queries. I hope you enjoyed the article. Please don't forget to drop your comments in the comments section below.
I recommend you go through these articles to get acquainted with tools for big data:
- Getting Started with Apache Hive – A Must Know Tool For all Big Data and Data Engineering Professionals
- Introduction to the Hadoop Ecosystem for Big Data and Data Engineering
- PySpark for Beginners – Take your First Steps into Big Data Analytics (with Code)
Let us know your thoughts in the comments below.