Siddharth M — Updated On June 29th, 2021
Advanced Data Engineering Project Python Spark

This article was published as a part of the Data Science Blogathon


One of the major problems everyone faces when first trying structured streaming is setting up the required environment to stream their data. There are a few tutorials online about how to do this. Most of them ask you to install a virtual machine with an Ubuntu operating system and then configure all the required files by editing the bash file. This works, but not for everyone: on machines with less memory, the virtual machine can be slow to start and the process can get stuck due to memory issues. For an easier alternative, I will show you how to set up structured streaming directly on our Windows operating system.

Tools used

For the setup we use the following tools:

1. Kafka (for streaming the data – acts as the producer side)

2. Zookeeper (coordinates the Kafka broker)

3. PySpark (for reading the streamed data – acts as the consumer)

4. Jupyter Notebook (code editor)

Environment variables

It is important to note that here I have placed all the files in the C drive. Also, the folder names should match the names of the files you download.

We have to set up the environment variables as we go on installing these files. Refer to these images during the installation for a hassle-free experience.

The last image shows the Path entry from System variables.


[Image: Real-time Structured streaming environment variables]

[Image: Real-time Structured streaming 2]

[Image: Real-time Structured streaming 3]
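To confirm the variables were picked up, a quick check from Python can help. This is a minimal sketch; the variable names below are assumptions matching the layout described above, so adjust them to your own install paths:

```python
import os

def missing_vars(required):
    """Return the names of required environment variables that are not set."""
    return [name for name in required if name not in os.environ]

if __name__ == "__main__":
    # Hypothetical variable names matching the screenshots above
    required = ["JAVA_HOME", "SPARK_HOME", "HADOOP_HOME", "KAFKA_HOME"]
    for name in missing_vars(required):
        print(f"Environment variable {name} is not set")
```

Run this after setting the variables (in a new command prompt, so the changes are visible) and fix anything it reports before moving on.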

Required files

Installing Kafka

The first step is to install Kafka on our system. To do this, go to this link:

We first need to install Java 8 and set up the environment variables. You can find all the instructions in the link.

Once we are done with Java, we must install ZooKeeper. I have added the ZooKeeper files to Google Drive; feel free to use them, or just follow the instructions given in the link. If you have installed ZooKeeper correctly and set up the environment variable, you will see this output when you run zkserver as administrator in the command prompt.


Next, install Kafka as per the instructions in the link and run it using the specified command.


Once everything is set up, try creating a topic and checking that it works. If it does, the Kafka installation is complete.

[Image: Real-time Structured streaming kafka install]

Installing Spark 

In this step, we install Spark. You can follow this link to set up Spark on your Windows machine.

During one of the steps, it will ask for the winutils file. For your convenience, I have added the file to the Drive link I shared, in a folder called Hadoop. Just put that folder on your C drive and set up the environment variable as shown in the images. I would highly recommend you use the Spark files I added to Google Drive: to stream data we need to manually set up a structured streaming environment, and in this case I have already set up and tested all the required files. If you want to do a fresh setup, feel free to do so, but if the setup doesn't go correctly, you will end up with an error like this while streaming data in PySpark:

Failed to find data source: kafka. Please deploy the application as per the deployment section of "Structured Streaming + Kafka Integration Guide".;

Once we are done with Spark, we can stream the required data from a CSV file in a producer and receive it in a consumer through a Kafka topic. I mostly work with Jupyter Notebook, so I have used a notebook for this tutorial.

In your notebook, you first have to install a few libraries:

1. pip install pyspark

2. pip install kafka-python

3. pip install py4j
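Before going further, you can confirm the installs succeeded from Python itself. This is a small sketch using only the standard library; note that the import names are checked here, and for kafka-python the import name is `kafka`, not the pip package name:

```python
import importlib.util

def check_deps(modules):
    """Return the modules from the list that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

if __name__ == "__main__":
    # 'kafka' is the import name for the kafka-python package
    missing = check_deps(["pyspark", "kafka", "py4j"])
    print("Missing modules:", missing or "none")
```

If anything shows up as missing, re-run the corresponding pip install in the same environment your notebook kernel uses.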

How does structured streaming work with Pyspark?


[Image: Real-time Structured streaming pyspark]

We have a CSV file containing the data we want to stream; let us proceed with the classic Iris dataset. To stream the iris data, we use Kafka as the producer side: in Kafka, we create a topic to which we stream the iris data, and the consumer can retrieve the data from this topic.
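Each record is shipped to the topic as a plain comma-separated string. The serialization the producer performs can be sketched in isolation; the field order below is an assumption that must match the schema the consumer declares:

```python
def record_to_csv(record, field_order):
    """Serialize one dict record into the comma-separated string sent to Kafka."""
    return ','.join(str(record[field]) for field in field_order)

# Field order assumed to match the consumer's schema (order_id first)
fields = ["order_id", "sepal_length", "sepal_width",
          "petal_length", "petal_width", "species"]
row = {"order_id": 0, "sepal_length": 5.1, "sepal_width": 3.5,
       "petal_length": 1.4, "petal_width": 0.2, "species": "Iris-setosa"}
print(record_to_csv(row, fields))  # → 0,5.1,3.5,1.4,0.2,Iris-setosa
```

Keeping the producer's field order and the consumer's schema in sync is what makes the plain-string approach work; there is no header row to fall back on.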

The following is the producer code to stream iris data:

import pandas as pd
import numpy as np
from kafka import KafkaProducer  # pip install kafka-python

KAFKA_TOPIC_NAME_CONS = "Topic"
KAFKA_BOOTSTRAP_SERVERS_CONS = "localhost:9092"


if __name__ == "__main__":
    print("Kafka Producer Application Started ... ")

    kafka_producer_obj = KafkaProducer(bootstrap_servers=KAFKA_BOOTSTRAP_SERVERS_CONS,
                                       value_serializer=lambda x: x.encode('utf-8'))
    filepath = "IRIS.csv"
    flower_df = pd.read_csv(filepath)
    # Insert order_id as the first column so it matches the consumer's schema
    flower_df.insert(0, 'order_id', np.arange(len(flower_df)))

    flower_list = flower_df.to_dict(orient="records")

    for record in flower_list:
        # Serialize each record as a comma-separated string
        message = ','.join(str(v) for v in record.values())
        print("Message Type: ", type(message))
        print("Message: ", message)
        kafka_producer_obj.send(KAFKA_TOPIC_NAME_CONS, message)

    print("Kafka Producer Application Completed. ")

To start the producer, we have to run zkserver as administrator in the Windows command prompt, and then start Kafka from the Kafka directory with: .\bin\windows\kafka-server-start.bat .\config\server.properties. If you get a "no broker" error, it means Kafka isn't running properly.
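A quick way to diagnose a "no broker" error is to check whether anything is listening on the broker port at all. A minimal sketch, assuming the default localhost:9092:

```python
import socket

def broker_reachable(host="localhost", port=9092, timeout=2.0):
    """Return True if a TCP connection to the broker address succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if broker_reachable():
        print("Kafka broker port is reachable")
    else:
        print("Nothing is listening on localhost:9092 - start Kafka first")
```

This only proves a process is listening on the port, not that Kafka is healthy, but it separates "Kafka never started" from configuration problems.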

The output after running this code in Jupyter Notebook looks like this:

[Image: output 1]

Now, let us check the consumer. Run the following code in a new notebook to see if it works fine.

from pyspark.sql import SparkSession
from pyspark.sql.functions import *

kafka_topic_name = "Topic"
kafka_bootstrap_servers = 'localhost:9092'

spark = SparkSession \
        .builder \
        .appName("Structured Streaming ") \
        .master("local[*]") \
        .getOrCreate()


# Construct a streaming DataFrame that reads from the topic
flower_df = spark \
        .readStream \
        .format("kafka") \
        .option("kafka.bootstrap.servers", kafka_bootstrap_servers) \
        .option("subscribe", kafka_topic_name) \
        .option("startingOffsets", "latest") \
        .load()

flower_df1 = flower_df.selectExpr("CAST(value AS STRING)", "timestamp")

flower_schema_string = "order_id INT,sepal_length DOUBLE,sepal_width DOUBLE,petal_length DOUBLE,petal_width DOUBLE,species STRING"

flower_df2 = flower_df1 \
        .select(from_csv(col("value"), flower_schema_string) \
                .alias("flower"), "timestamp")

flower_df3 = flower_df2.select("flower.*", "timestamp")

# Register the stream as a temporary view so it can be queried with SQL
flower_df3.createOrReplaceTempView("flower_find")
song_find_text = spark.sql("SELECT * FROM flower_find")
flower_agg_write_stream = song_find_text \
        .writeStream \
        .trigger(processingTime='5 seconds') \
        .outputMode("append") \
        .option("truncate", "false") \
        .format("memory") \
        .queryName("testedTable") \
        .start()

Once you run this you should obtain an output like this:

[Image: output 2]

As you can see, I ran a few queries to check whether the data was streaming. The first count was 5, and after a few seconds it increased to 14, which confirms that the data is streaming.

The basic idea here is to create a Spark session and receive the data on our topic through Kafka streaming on the specified port. The session is created with getOrCreate() as shown in the code, and the Kafka stream is read and loaded with load(). Since the data is streaming, it is useful to record the timestamp at which each record arrives. We specify the schema, as we would in SQL, and build a DataFrame with the streamed values and their timestamps. Finally, with a processing time of 5 seconds, the data arrives in batches. We use a temporary SQL view to store the data in memory in append mode, and we can then perform any operation on it using our Spark DataFrame.
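The schema-driven parsing step that from_csv performs can be mimicked in plain Python to see what each streamed row turns into. The schema string below mirrors the one in the consumer; the parser itself is a simplified sketch, not Spark's implementation:

```python
def parse_csv_row(line, schema):
    """Parse one comma-separated row according to a 'name TYPE,...' schema string."""
    casts = {"INT": int, "DOUBLE": float, "STRING": str}
    fields = [f.strip().split() for f in schema.split(",")]
    values = line.split(",")
    return {name: casts[dtype](value)
            for (name, dtype), value in zip(fields, values)}

schema = ("order_id INT,sepal_length DOUBLE,sepal_width DOUBLE,"
          "petal_length DOUBLE,petal_width DOUBLE,species STRING")
print(parse_csv_row("0,5.1,3.5,1.4,0.2,Iris-setosa", schema))
```

Each value in the Kafka message is cast to the type declared for its position, which is why the producer's field order has to match the schema exactly.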

Refer to the complete code here:

This is one of my Spark streaming projects; you can refer to it for more detailed queries and the use of machine learning in Spark:









If you follow these steps, you can easily set up the whole environment and run your first structured streaming program with Spark and Kafka. If you face any difficulties in setting it up, feel free to contact me:

[email protected]

The media shown in this article are not owned by Analytics Vidhya and are used at the Author’s discretion.
