Most Important PySpark Functions with Example
This article was published as a part of the Data Science Blogathon.
Introduction
The Python API for Apache Spark is known as PySpark. To develop Spark applications in Python, we will use PySpark. It also provides the PySpark shell for real-time data analysis. PySpark supports most of the Apache Spark functionality, including Spark Core, Spark SQL, DataFrame, Streaming, and MLlib (Machine Learning).
This article will explore useful PySpark functions with scenario-based examples to understand them better.
The expr() function
It is a SQL function in PySpark to execute SQL-like expressions. It accepts a SQL expression as a string argument and executes the commands written in the statement. It enables the use of SQL-like functions that are absent from the PySpark Column type and the pyspark.sql.functions API, e.g., CASE WHEN. We are also allowed to use DataFrame columns in the expression. The syntax for this function is expr(str).
```python
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create data
data = [("Prashant", "Banglore", 25, 58, "2022-08-01", 1),
        ("Ankit", "Banglore", 26, 54, "2021-05-02", 2),
        ("Ramakant", "Gurugram", 24, 60, "2022-06-02", 3),
        ("Brijesh", "Gazipur", 26, 75, "2022-07-04", 4),
        ("Devendra", "Gurugram", 27, 62, "2022-04-03", 5),
        ("Ajay", "Chandigarh", 25, 72, "2022-02-01", 6)]
columns = ["friends_name", "location", "age", "weight", "meetup_date", "offset"]
df_friends = spark.createDataFrame(data=data, schema=columns)
df_friends.show()
```
Let’s see the practical implementations:-
Example:- A.) Concatenating one or more columns using expr()
```python
# concatenate friend's name, age, and location columns using expr()
df_concat = df_friends.withColumn(
    "name-age-location",
    expr("friends_name || '-' || age || '-' || location")
)
df_concat.show()
```
We have joined the name, age, and location columns and stored the result in a new column called “name-age-location.”
Example:- B.) Add a new column based on a condition (CASE WHEN) using expr()
```python
# check if exercise is needed based on weight
# if weight is 60 or more   -- Yes
# if weight is less than 55 -- No
# else                      -- Enjoy
df_condition = df_friends.withColumn(
    "Exercise_Need",
    expr("CASE WHEN weight >= 60 THEN 'Yes' "
         + "WHEN weight < 55 THEN 'No' ELSE 'Enjoy' END")
)
df_condition.show()
```
Our “Exercise_Need” column received three values (Enjoy, No, and Yes) based on the conditions given in the CASE WHEN expression. The first value in the weight column is 58, which is less than 60 but not less than 55, so the result is “Enjoy.”
Example:- C.) Creating a new column using the current column value inside the expression.
```python
# increment the meetup month by the number of months in the offset column
df_meetup = df_friends.withColumn(
    "new_meetup_date", expr("add_months(meetup_date, offset)")
)
df_meetup.show()
```
The “meetup_date” month value increases by the offset value, and the newly generated result is stored in the “new_meetup_date” column.
The Padding Functions
A.) lpad():-
This function provides padding to the left side of the column, and the inputs for this function are column name, length, and padding string.
B.) rpad():-
This function is used to add padding to the right side of a column. The inputs for this function are, likewise, the column name, length, and padding string.
Note:-
- If the column value is longer than the specified length, the return value will be shortened to length characters or bytes.
- If the padding value is not specified, then the column value will be padded to the left or right depending on the function you are using, with space characters if it is a character string and with zeros if it is a byte sequence.
Let’s first create a DataFrame:-
```python
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lpad, rpad

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# creating data
data = [("Delhi", 30000), ("Mumbai", 50000), ("Gujrat", 80000)]
columns = ["state_name", "state_population"]
df_states = spark.createDataFrame(data=data, schema=columns)
df_states.show()
```
Example:- 01 – Use of left padding
```python
# left padding
df_states = df_states.withColumn(
    "states_name_leftpad", lpad(col("state_name"), 10, "#")
)
df_states.show(truncate=False)
```
We added the ‘#’ symbol to the left of the “state_name” column values, and the total length of the column values becomes 10 after the padding.
Example:-02 – Right padding
```python
# right padding
df_states = df_states.withColumn(
    "states_name_rightpad", rpad(col("state_name"), 10, "#")
)
df_states.show(truncate=False)
```
We added the “#” symbol to the right of the “state_name” column values, and the total length becomes ten after the right padding.
Example:-03 – When the column string length is longer than the padded string length
```python
# padded length shorter than the column values
df_states = df_states.withColumn(
    "states_name_condition", lpad(col("state_name"), 3, "#")
)
df_states.show(truncate=False)
```
In this case, the returned column value is truncated to the specified length. You can see the “states_name_condition” column only has values of length 3, which is the padded length we passed to the function.
The repeat() Function
In PySpark, we use the repeat() function to duplicate column values. repeat(str, n) returns a string containing the specified string value repeated n times.
Example:- 01
```python
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import expr

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create data
data = [("Prashant", 25, 80), ("Ankit", 26, 90), ("Ramakant", 24, 85)]
columns = ["student_name", "student_age", "student_score"]
df_students = spark.createDataFrame(data=data, schema=columns)
df_students.show()

# repeat the student_name column twice and save the result in a new column
df_repeated = df_students.withColumn(
    "student_name_repeated", expr("repeat(student_name, 2)")
)
df_repeated.show()
```
We have repeated the “student_name” column values in the above example twice.
We can also use this function together with the concat function, repeating some string value n times and placing it before the column values, which works like padding.
The startswith() and endswith() functions
startswith():-
It produces a boolean result of True or False. When the DataFrame column value starts with the string provided as a parameter to this method, it returns True. If no match is found, it returns False.
endswith():-
It also returns a boolean value (True/False). When the DataFrame column value ends with the string supplied as an input to this method, it returns True; otherwise, False is returned.
Note:-
- Return NULL if either the column value or the input string is NULL.
- Return True if the input check string is empty.
- These methods are case-sensitive.
Create a DataFrame:-
```python
# importing necessary libs
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# creating session
spark = SparkSession.builder.appName("practice").getOrCreate()

# create dataframe
data = [("Prashant", 25, 80), ("Ankit", 26, 90),
        ("Ramakant", 24, 85), (None, 23, 87)]
columns = ["student_name", "student_age", "student_score"]
df_students = spark.createDataFrame(data=data, schema=columns)
df_students.show()
```
Example – 01 – First, check the output type.
```python
df_internal_res = df_students.select(
    col("student_name").endswith("it").alias("internal_bool_val")
)
df_internal_res.show()
```
- The output is a boolean value.
- The output value is null for the last row because the corresponding value of the “student_name” column is NULL.
Example – 02
- Now we use the filter() method to fetch the rows corresponding to the True values.
```python
df_check_start = df_students.filter(col("student_name").startswith("Pra"))
df_check_start.show()
```
Here we got the first row as output because the “student_name” column value starts with the value mentioned inside the function.
Example – 03
```python
df_check_end = df_students.filter(col("student_name").endswith("ant"))
df_check_end.show()
```
Here we got two rows as output because those “student_name” column values end with the value mentioned inside the function.
Example – 04 – What if arguments in functions are empty?
```python
df_check_empty = df_students.filter(col("student_name").endswith(""))
df_check_empty.show()
```
In this case, we get a True value corresponding to each non-null row, and no False value is returned, so all of those rows pass the filter.
Conclusion
In this article, we started our discussion by defining PySpark and its features. Then we talked about the functions, their definitions, and their syntax. After discussing each function, we created a DataFrame and practiced some examples using it. We covered six functions in this article.
Key takeaways from this article are:-
- We use the expr() function to concatenate columns with SQL-like expressions in PySpark.
- We passed the column’s name as a string in the above function.
- Creating a new column using the column value inside the expression.
- Add padding to the column values.
- Repeat the column values multiple times using the repeat function.
- We also checked whether column values start or end with a particular word or not.
I hope this article helps you to understand the PySpark functions. If you have any opinions or questions, then comment down below. Connect with me on LinkedIn for further discussion.
Keep Learning!!!
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.