Are you working with image data? There are so many things we can do using computer vision algorithms, and multi-label image classification is one of them.
In this article, we will talk about multi-label image classification, utilizing the power of deep learning and advanced methodologies. Instead of relying on conventional toy datasets, we draw inspiration from real-world scenarios, particularly movie and TV series posters, which inherently contain diverse visual elements representing various genres.
But how do we navigate this complex task effectively? Fear not; we will dig deep into the intricacies of building a multi-label image classification model, leveraging cutting-edge technologies such as convolutional neural networks (CNNs) and transfer learning. Along the way, we harness the capabilities of popular frameworks like TensorFlow, PyTorch, and scikit-learn, using their APIs to streamline development and implementation.
By leveraging transfer learning and pre-trained models, we expedite the training process and enhance the efficiency of our classifiers. Additionally, we explore the resources available on platforms like Kaggle, tapping into rich datasets and collaborative communities to fuel our experiments.
Whether you’re a seasoned practitioner or a curious enthusiast, join us as we unravel the mysteries of multi-label image classification, equipped with tensors, Kaggle datasets, and the latest advancements in deep learning.
Excited? Good, let’s dive in!
Let’s understand the concept of multi-label image classification with an intuitive example. Check out the below image:
The object in image 1 is a car. That was a no-brainer. However, there is no car in image 2 – only a group of buildings. Can you see where we are going with this? We have classified the images into two classes, i.e., car and non-car.
When we have only two classes in which the images can be classified, this is known as a binary image classification problem.
Let’s look at one more image:
How many objects did you identify? There are quite a few – a house, a pond with a fountain, trees, rocks, etc. So,
When we can classify an image into more than one class (as in the image above), it is known as a multi-label image classification problem.
Here’s a catch: most of us confuse multi-label and multi-class image classification. Even I was bamboozled the first time I came across these terms. Now that I understand the two topics better, let me clarify the difference for you.
Suppose we are given images of animals to be classified into corresponding categories. For ease of understanding, let’s assume there are a total of 4 categories (cat, dog, rabbit, and parrot) in which a given image can be classified. Now, there can be two scenarios:
Let’s understand each scenario through examples, starting with the first one:
Here, we have images that contain only a single object. The keen-eyed among you will have noticed 4 different types of objects (animals) in this collection.
Each image here can only be classified as a cat, dog, parrot, or rabbit. There are no instances where a single image will belong to more than one category.
1. There are more than two categories into which the images can be classified.
2. An image does not belong to more than one category.
If both of the above conditions are satisfied, it is referred to as a multi-class image classification problem.
Now, let’s consider the second scenario – check out the below images:
These are all labels of the given images. Each image here belongs to more than one class; hence, it is a multi-label image classification problem.
These two scenarios should help you understand the difference between multi-class and multi-label image classification. Connect with me in the comments section below this article if you need any further clarification.
Before we jump into the next section, I recommend going through this article – Build your First Image Classification Model in just 10 Minutes! It will help you understand how to solve a multi-class image classification problem.
Now that we have an intuition about multi-label image classification, let’s dive into the steps you should follow to solve such a problem.
The first step is to get our data in a structured format. This applies to both binary and multi-class image classification.
You should have a folder containing all the images on which you want to train your model. We also require the true labels of these images for training, so you should have a .csv file that contains the names of all the training images and their corresponding true labels.
We will learn how to create this .csv file later in this article. For now, remember that the data should be in a particular format. Once the data is ready, we can divide the further steps as follows:
First, load all the images and then pre-process them per your project’s requirement. We create a validation set to check how our model will perform on unseen data (test data). We train our model on the training set and validate it using the validation set (standard machine learning practice).
The next step is to define the architecture of the model. This includes deciding the number of hidden layers, neurons in each layer, the activation function, etc.
Time to train our model on the training set! We pass the training images and their corresponding true labels to train the model. We also pass the validation images here to help us validate how well the model performs on unseen data.
Finally, we use the trained model to get predictions on new images.
The pre-processing steps for a multi-label image classification task will be similar to that of a multi-class problem. The key difference is in the step where we define the model architecture.
We use a softmax activation function in the output layer of a multi-class image classification model. Softmax forces the predicted probabilities to sum to one across classes, so as the probability of one class increases, the probabilities of the others decrease. In other words, the class probabilities depend on one another, and each image is pushed toward a single class.
But in multi-label image classification, a single image can have more than one label, so we want the class probabilities to be independent of each other. The softmax activation function is therefore not appropriate here. Instead, we use the sigmoid activation function, which predicts the probability of each class independently.
The sigmoid activation effectively turns the multi-label problem into n binary classification problems (one per class): for each image, we get the probability of whether it belongs to class 1, to class 2, and so on. Since we have framed it as n binary problems, we use the binary_crossentropy loss and aim to minimize it to improve the performance of the model.
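To make the contrast concrete, here is a minimal sketch of the part that actually changes between the two setups; the layer and loss names are standard Keras identifiers, and the rest of the network is omitted:

```python
from tensorflow.keras.layers import Dense

n_classes = 25   # e.g. the 25 movie genres used later in this article

# Multi-class head: softmax couples the outputs so the probabilities sum to 1
multi_class_head = Dense(n_classes, activation='softmax')
# ...paired with loss='categorical_crossentropy'

# Multi-label head: sigmoid scores each class independently,
# so several genres can be "on" for the same poster
multi_label_head = Dense(n_classes, activation='sigmoid')
# ...paired with loss='binary_crossentropy'
```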
This is the major change we need to make while defining the model architecture for a multi-label image classification problem. The training part will be similar to that of a multi-class problem: we pass the training images and their corresponding true labels, along with the validation set used to validate the model’s performance.
Finally, we will take a new image and use the trained model to predict the labels for this image. With me so far?
Congratulations on making it this far! Your reward – solving an awesome multi-label image classification problem in Python. That’s right – time to power up your favorite Python IDE!
Let’s set up the problem statement. We aim to predict the genre of a movie using just its poster image. Can you guess why it is a multi-label image classification problem? Think about it for a moment before you look below.
A movie can belong to more than one genre, right? It doesn’t just have to belong to one category, like action or comedy. The movie can be a combination of two or more genres. Hence, multi-label image classification.
The dataset we’ll be using contains the poster images of several multi-genre movies. I have made some changes in the dataset and converted it into a structured format, i.e. a folder containing the images and a .csv file for true labels. You can download the structured dataset from here. Below are a few posters from our dataset:
You can download the original dataset along with the ground truth values here if you wish.
Let’s get coding!
First, import all the required Python libraries:
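A typical set of imports for this walkthrough (Keras via TensorFlow, pandas, NumPy, matplotlib, scikit-learn, and tqdm) looks something like this:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm

from sklearn.model_selection import train_test_split

from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
```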
Now, read the .csv file and look at the first five rows:
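A minimal sketch, assuming the CSV from the structured dataset is named train.csv and sits next to the Images folder (adjust the path to your copy):

```python
train = pd.read_csv('Multi_Label_dataset/train.csv')   # path and file name may differ in your copy
train.head()
```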
There are 27 columns in this file. Let’s print the names of these columns:
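With pandas, for example:

```python
print(train.columns)
print(len(train.columns))   # 27 columns in total: Id, the genre list, and 25 one-hot genre flags
```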
The genre column contains the list of genres for each image. So, from the head of the .csv file, the genres of the first movie are Comedy and Drama.
The remaining 25 columns are one-hot encoded. So, if a movie belongs to the Action genre, the value in that column is 1; otherwise, it is 0. A movie can belong to any of these 25 genres.
We will build a model that returns the genre(s) of a given movie poster. But before that, do you remember the first step for building any image classification model?
That’s right – loading and preprocessing the data. So, let’s read in all the training images:
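Here is a sketch of the loading loop. It assumes the posters live in Multi_Label_dataset/Images/ and are named after the Id column with a .jpg extension (the path that comes up in the discussion below), and it resizes everything to (400, 300, 3) to match the shape reported next:

```python
train_image = []
for i in tqdm(range(train.shape[0])):
    # load each poster by its Id and resize it; (400, 300) = (height, width)
    img = image.load_img('Multi_Label_dataset/Images/' + train['Id'][i] + '.jpg',
                         target_size=(400, 300))
    img = image.img_to_array(img)
    img = img / 255          # scale pixel values to [0, 1]
    train_image.append(img)
X = np.array(train_image)
```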
A quick look at the shape of the array:
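A one-liner, once X has been built as above:

```python
X.shape   # (7254, 400, 300, 3)
```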
There are 7254 poster images, and all the images have been converted to a shape of (400, 300, 3). Let’s plot and visualize one of the images:
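For example (the index is arbitrary; pick any poster):

```python
plt.imshow(X[2])   # index 2 is just an example
plt.axis('off')
plt.show()
```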
This is the poster for the movie ‘Trading Places’. Let’s also print the genre of this movie:
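Assuming the genre column in the CSV is named Genre, something like:

```python
train['Genre'][2]   # the genre list for the same poster we plotted above
```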
This movie has a single genre – Comedy. Our model would next require the true label(s) for all these images. Can you guess the shape of the true labels for 7254 images?
Let’s see. We know there are a total of 25 possible genres. We will have 25 targets for each image, i.e., whether the movie belongs to that genre or not. So, all these 25 targets will be either 0 or 1.
We will remove the ID and genre columns from the train file and convert the remaining columns to an array, which will be the target for our images:
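A sketch, assuming the two non-target columns are named Id and Genre:

```python
y = np.array(train.drop(['Id', 'Genre'], axis=1))   # keep only the 25 one-hot genre columns
y.shape   # (7254, 25)
```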
The shape of the output array is (7254, 25) as we expected. Now, let’s create a validation set that will help us check the performance of our model on unseen data. We will randomly separate 10% of the images as our validation set:
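Using scikit-learn's train_test_split, for example:

```python
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
```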
The next step is to define the architecture of our model. The output layer will have 25 neurons (equal to the number of genres), and we’ll use sigmoid as the activation function.
I will use a certain architecture (given below) to solve this problem. You can also modify this architecture by changing the number of hidden layers, activation functions, and other hyperparameters.
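The sketch below is one reasonable CNN of this kind, not necessarily the exact architecture used originally: a few convolution/pooling blocks, a dense layer with dropout, and a 25-unit sigmoid output. The input shape matches the (400, 300, 3) images we loaded.

```python
model = Sequential()
model.add(Conv2D(filters=16, kernel_size=(5, 5), activation='relu',
                 input_shape=(400, 300, 3)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=32, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(filters=64, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(25, activation='sigmoid'))   # 25 independent genre probabilities
```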
Let’s print our model summary:
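One line does the job:

```python
model.summary()
```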
Quite a lot of parameters to learn! Now, compile the model. I’ll use binary_crossentropy as the loss function and ADAM as the optimizer (again, you can use other optimizers as well):
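With Keras, this is a single compile call:

```python
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```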
Finally, we are at the most interesting part – training the model. We will train the model for 10 epochs and also pass the validation data that we created earlier to validate the model’s performance:
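A sketch of the training call; the batch size of 64 is just a reasonable default, not a value from the original article:

```python
model.fit(X_train, y_train, epochs=10,
          validation_data=(X_test, y_test), batch_size=64)
```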
We can see that the training loss has been reduced to 0.24, and the validation loss is also in sync. What’s next? It’s time to make predictions!
The Game of Thrones (GoT) and Avengers fans – this one’s for you. Let’s take the posters for GoT and Avengers and feed them to our model. Download the posters for GoT and Avengers before proceeding.
Before making predictions, we need to preprocess these images using the same steps we saw earlier.
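Here is a small helper that applies the same resize and scaling used for training; the file name passed to it is hypothetical and should be whatever you saved the downloaded poster as:

```python
def load_poster(path):
    """Apply the same preprocessing used for the training images."""
    img = image.load_img(path, target_size=(400, 300))
    img = image.img_to_array(img) / 255
    return img.reshape(1, 400, 300, 3)   # add a batch dimension

got_img = load_poster('GOT.jpg')         # hypothetical file name for the downloaded poster
```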
Now, we will predict the genre for these posters using our trained model. The model will tell us the probability for each genre, and we will take the top 3 predictions from that.
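A sketch of the prediction step; the slice train.columns[2:] (everything after the Id and genre columns) gives the 25 genre names:

```python
classes = np.array(train.columns[2:])            # the 25 genre names
proba = model.predict(got_img)[0]
top_3 = np.argsort(proba)[-3:][::-1]             # indices of the three highest probabilities
for idx in top_3:
    print(classes[idx], round(float(proba[idx]), 3))
```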
Impressive! Our model suggests Drama, Thriller, and Action genres for Game of Thrones. That classifies GoT pretty well in my opinion. Let’s try our model on the Avengers poster. Preprocess the image:
And then make the predictions:
The genres our model comes up with are Drama, Action, and Thriller. Again, these are pretty accurate results. Can the model perform equally well for Bollywood movies? Let’s find out. We will use this Golmaal 3 poster.
You know what to do at this stage – load and preprocess the image:
And then predict the genre for this poster:
Golmaal 3 was a comedy and our model has predicted it as the topmost genre. The other predicted genres are Drama and Romance – a relatively accurate assessment. We can see that the model is able to predict the genres just by seeing their poster.
This is how we can solve a multi-label image classification problem. Our model performed well even though we only had around 7000 images for training it.
You can try to collect more posters for training. I suggest building the dataset so that all the genre categories have a comparatively equal distribution. Why?
Well, if a certain genre repeats in most training images, our model might overfit that genre. And for every new image, the model might predict the same genre. To overcome this problem, you should have an equal distribution of genre categories.
These are some of the key points you can try to improve your model’s performance. Any other you can think of? Let me know!
This article delved into multi-label image classification, exploring its nuances and applications. We addressed the complexity of predicting multiple genres from movie posters by leveraging deep learning techniques, particularly the sigmoid activation function and binary_crossentropy loss. Through meticulous annotation and preprocessing of training data, we constructed a robust classifier capable of discerning various genres with impressive accuracy.
Our model, trained on a diverse dataset, demonstrated its prowess by accurately predicting genres for iconic movies like Game of Thrones and Avengers. Furthermore, we highlighted the significance of data distribution in enhancing model performance, emphasizing the need for balanced training datasets. This journey elucidated the power and versatility of multi-label image classification beyond genre prediction, offering insights into its broader applications, such as automatic image tagging. As we conclude, we invite readers to embark on their experimentation, exploring novel avenues and pushing the boundaries of this fascinating field.
Q1. What is multi-label classification in machine learning?
Ans. Multi-label classification in machine learning refers to assigning multiple labels to instances. Unlike multi-class classification, where each instance is assigned only one label, multi-label classification allows for multiple labels per instance. This is common in scenarios like image datasets where an image may contain multiple objects. Evaluation metrics such as the F1 score can be used to measure the performance of multi-label classification models trained using frameworks like Keras.
Q2. Why is the sigmoid activation function used in multi-label image classification?
Ans. The sigmoid activation function is used in multi-label image classification because it allows for independent probability predictions for each class. Unlike softmax, which is used in multi-class classification and enforces that probabilities sum up to one across all classes, sigmoid treats each class prediction independently. This is crucial in multi-label classification tasks where an image can belong to multiple classes simultaneously. Using sigmoid, the model can predict the presence or absence of each label separately, effectively transforming the problem into a series of binary classification tasks.
Q3. What challenges does multi-label image classification pose compared to single-label classification?
Ans. In multi-label image classification, compared to single-label classification, challenges arise due to the complexity of predicting multiple labels simultaneously. Annotating data becomes more intricate, requiring comprehensive labeling for each class present. Deep learning classifiers such as CNNs must handle this complexity efficiently, often necessitating specialized techniques like sigmoid activation and binary cross-entropy loss. Evaluation metrics like the F1 score become crucial in accurately assessing the classifier’s performance. These challenges underscore the heightened intricacy of multi-label classification tasks in computer vision and machine learning.
Thanks Pulkit for explaining the Multi-Label Image Classification in such an easy way.
Glad you liked it Vijit!
Great Thanks for sharing
Amazing, thank you so much
How much memory does it take when we convert the images to an array? I mean, how much memory will the X variable hold? I am running on the Kaggle platform but I get a memory error.
Hi Shrikant, As there are more than 7000 images, you will require good memory space. You can try to run these codes on google colab.
What do you think about using sklearn's multi-label classifier to do this? Which one is better? Thanks
so glad to have found this site
Hi Pulkit. Thank you so much for this article. You really have a gift of explaining and simplifying these things so that even I can understand them! Have you perhaps done a similar article/tutorial on object detection (multiple objects per image and their bounding boxes)? If so I would be very interested in reading it.
Hi Mark, Earlier, I have worked on the object detection project as well. Below are some links that you can refer which will clear your concepts of what object detection is and how to build your own object detection model. Here are the links: 1. A Step-by-Step Introduction to the Basic Object Detection Algorithms 2. A Practical Implementation of the Faster R-CNN Algorithm for Object Detection 3. A Practical Guide to Object Detection using the Popular YOLO Framework
Hi Pulkit Sharma, thank you first of all. When I tried this on Colab, the code caused a memory error, especially while I was creating the X variable and then splitting train/test. How can I solve this problem? Thanks
Hi, As the image size is large (400,400,3) in this case, you can reduce this size which will reduce the memory consumption. You have to edit the following code: img = image.load_img('Multi_Label_dataset/Images/'+train['Id'][i]+'.jpg',target_size=(400,400,3)) Pass a smaller target size of let's say (224,224,3) or even smaller. If you are changing the size of the image, you also have to change the input shape while defining the architecture.
The article is written very well. I have a few questions about train_image = []: I tried the Kaggle kernel with and without GPU, but I keep running out of memory, so the X array is not created. I also tried a Google Colab notebook and hit the same issue. Is there a way to load all the images without running out of memory, i.e., some kind of batch processing of the images? I thought of reducing the number of images by randomly removing 1500 images from the dataset itself. It would be helpful if you could help me. Thanks
Hi Shyam, You can try to reshape the images to a smaller shape, let's say (224,224,3) or even less to reduce the size.
Hi Pulkit, You have a style to explain concepts so easily. Thank you so much. I have following thought - Can we have any unsupervised method for this problem?
Hi Pankaj, I am not sure whether an unsupervised method will work on such a problem of genre prediction using posters. I personally believe that a supervised learning approach for such a task will help you achieve a better model. But again, this is my personal opinion. You can try some unsupervised techniques on the same project and see whether they perform any better than the supervised approaches. Do share your findings with the community here as it will be helpful for everyone.
Hi Pulkit, This is a great tutorial and thank you very much for sharing this! It motivated me to write the same architecture and test it on PyTorch. One thing I do not get: in your summary report (right after you defined your network architecture), the shapes of your outputs are not consistent, e.g., after your first convolutional step you get an output size of 396 x 296, which should be 396 x 396. That shouldn't be happening without any padding/stride, right? Maybe you wanted to read your images in 400x300x3 instead of 400x400x3? With this input, your numbers add up perfectly! Plus, I think I have a method to avoid overfitting in the loss function.
Thank you Leo for pointing it out. Actually, in the beginning, I trained the model on images of shape (400,300,3). Then I changed this shape to (400,400,3) and missed replacing the summary part. I have updated the summary now. Also, I would be glad if you could share the methods you mentioned for avoiding overfitting. That would be helpful for the community as well.
Thanks for the detailed explanation, Pulkit. I tried to reproduce this code on my laptop and Google Colab, but in both cases the RAM maxed out (20 GB). Any idea on the hardware/cloud side so that I can spin up a new VM? Also, you divided img by 255 ("img = img/255"); could you explain why?
Hi Dinesh, You can try to reduce the shape of your images which will reduce the storage space. I have divided the pixel values of all the images by the max pixel value which is 255. This will bring the pixel values in the range of 0 to 1 and this helps to make our training faster. So, it is always suggested to normalize your pixel values.
Hi, I have another idea: you can reduce the number of images to, let's say, 3000 and adjust the train.csv file as well, taking into consideration that each label (address) must point to an actual image in the Images folder. I am ready for any further clarification.
Hello! This is a really wonderful explanation. However while running this after model.add(Conv2D(filters=16, kernel_size=(5, 5), activation="relu", input_shape=(224,224,3))) it doesn't run since it shows an AttributeError: module 'tensorflow' has no attribute 'get_default_graph'. I checked on stack overflow and tried implementing changes however it still persisted. Could you give me some alternative approach to tackle this?
Hi Aishwariya, Please check the tensorflow version that you currently have. Updating it might resolve the issue. Or you can look at this discussion thread.
Getting error at this line: X = np.array(train_image) Maxed out of memory. How to use in chunks ? I mean if i have millions of images, it would be impossible for a ram to load all of it at once. How to solve that issue ? we can store as numpy array in chunks in local hard-disk with .npy extension and then use it in chunks too. That would solve the memory issue i guess.
Hi Prem, Yes! you are correct. Instead of loading all the images at once, you can load them in chunks. But the computation power of the system also plays a key role in deep learning. Having a higher computation will always be a plus if you are training your deep learning model.
Hi Pulkit, Great explanation. Good job bro!!! Could you please help me with an issue: when I am training my model, the loss is showing as 0.
Glad that it is useful to you! Regarding the loss of the model, which loss function are you using and what are the arguments that you are passing while calculating the loss?
Hey, Nice post. But is accuracy_score a good metric to use for multi-label classification? Most of the labels are 0, so even an untrained / all-zero model will give a good accuracy score. For example: label1 = [0,0,0,1,0,0,0,0,0,0,0,1], pred1 = [0,0,0,0,0,0,0,0,0,0,0,0] from a zero-returning model; here the accuracy will be 83.33%. Since the '1's decide the performance of the model, we should use a metric that considers the positive predictions and labels, like precision, recall, etc.
Hi Shangeth, It is more of an imbalanced problem. If we have an imbalanced class problem, then yes, using accuracy is not a good option and you can use precision, recall or F1 score instead.
Can you tell me about the Krill Herd optimization algorithm? I got stuck in this problem. your help will be appreciable.
Great tutorial, I like it and very good explanations. Is there any recommendation how to run it on lower-memory cpus? Can I simply create Keras checkpoints and use smaller training sets (e.g. 1000 images with 90/10 test-split) and train it in multiple steps by reloading the weights file?
Hi Tom! First of all thank you for your feedback on the article. If you do not have high memory to run these models, then I would suggest using Google Colab instead of training model on your local system. They provide free GPU as well so the training will be faster.
Hi, help me understand the technical details: how are we learning images with multiple labels? In a nutshell, are we learning {image, [g1, g2, g3]} or {[image1, g1], [image1, g2], [image1, g3]}? If we use the first one, that would be simple image classification (which doesn't make sense!). The latter may confuse the model while training if we use some 1000 or 2000 classes. How do we cope with this situation?
Hi, We are following the second approach which you have mentioned. And yes as the number of classes increases, it will become harder and harder for the models to learn the insights and hence we have to build more complex models. But in today's world, the models are smart enough to understand and learn in case of 1000s of classes as well.
Hi Pulkit Kindly post codes for building image dataset into .csv file as required in multi label image classification problem.
Hi Ruchika, The link to download the dataset (images along with the csv file) has been provided in the article itself. Here is the link for your reference: https://drive.google.com/file/d/1dNa_lBUh4CNoBnKdf9ddoruWJgABY1br/view
Hi pulkit I want to convert my imagedatset into .csv file. I need your help in that. Kindly share some codes which will do the nedful conversion required by multi label problem. waiting for your reply Thanks
Pulkit, you did a great job. I am doing some research on industry inventory management. There are over 2000 kinds of components to identify and count using artificial intelligence. Your research really provides me with some great hints. Thank you very much. I will use your article as a reference in my thesis. Thank you very much.
Glad you found it useful Judy!
Hi Pulkit, Nice post. Link for the structured dataset is not working. Can you please update the link so that I download it. Thanks.
Hi Ekanshu, The link is working fine at my end. You can download the dataset using this link: https://www.cs.ccu.edu.tw/~wtchu/projects/MoviePoster/index.html
I appreciate you helping me learn more about image labeling. It is interesting that it can be used for both binary and multi-class image classification. My nephew is getting into all of this. He will be interested to know that you can do both binary and multi-class images.
Hi Pulkit, Thanks for such an amazing article; it helped me understand multi-label image classification. I just want to know if you can help with how we can use transfer learning with this type of multi-label classification?
While writing the code using RNN, I am not getting the problem conceptually. Like how should I solve the same problem using RNN.
Hi, I am trying to understand if I can use ML and classification to do my project. First I need the image to know how many zones are there in the image.. lets say I have either 2 zones or 3 zones. Then I would like that the zones are defined in pixels rectangle by the classification. So for your animals examples, it would either return 2 animals or 3 animals as the classifier. then be able to define the rectangle for each animal. After then in each zone be able to return additional classification results.. so lets say it would return color, type and hair length. Then I guess I can use the classification result to make a program and return some result to the user.. makes sense ? Thanks for your help
Sir, every time you select the best three, but what if images have different numbers of labels? For example: image1 {3,4}, image2 {0,5,9,2}, image3 {23}. So my question is, how do we select the best number of classes, and what kind of threshold do I need to apply to select 1's and 0's? I hope you understand my problem.
I want to know what objective is achieved by training on a dataset of totally different kinds of posters containing just images; I mean, what parameters will it be trained on? And then you pass a different movie poster and ask it to return the genres. Personally, I feel there is nothing common in the posters, i.e., there are no similar parameters between the trained images and the test one. Change my mind....
Hi Akhil, There might be some similarities between posters of same genre. For example, posters of horror movies are generally dark, whereas if the genre is comedy, generally the posters are brighter. People in the posters are generally happy when it is a comedy movie and if it is horror movie, people might be tensed or in fear. So, there can be multiple types of similarity between the posters of same genre. This is what I tried to find using this model and it seemed to have worked well.
Hi Pulkit, Thanks for the great article. I just have one doubt in general about multi-label classification: if we also have a few sets of images that don't belong to any of the labels in the training data, do we need a separate "No label" class to differentiate these images, or if the predicted probabilities for all labels of an image are less than a threshold, can we consider it a no-label image? Please let me know your thoughts and whether we have any resources for this kind of problem. Thanks
Hi Ram, There is no need to introduce a new label at the time of training. As you have mentioned, if all the probabilities are less than the threshold, in that case, you can consider that the image does not belong to any of the available tags.
Hi Pulkit, I have downloaded the dataset and tried running the program, when I convert train_image list to numpy array X, I got memory error in spyder anaconda platform. So, I have uploaded the images to google drive and tried running in google colab. But still the images loading to train_image list stops at 89%. I reconnected this for 3 times and tried. The connection stops at that time. Can you give me any idea on how to solve this ?
Hi Haripriya, Since there are more than 7200 images and each has a size of (400,400,3), you might get a memory error. The memory error occurs because the RAM fills up entirely before even loading all the images. In this case, you can either try to increase the RAM or reduce the size of the images. To reduce the size, you have to make the following change while reading the images: img = image.load_img('Multi_Label_dataset/Images/'+train['Id'][i]+'.jpg',target_size=(224,224,3)) Here I have changed the target_size to (224,224,3); you can increase or decrease this size as well.
Hello Pulkit, how do we validate the accuracy of such a multi-label image classification model? Is there any sklearn library available for the same? Any idea?
I am not able to load the images with the code you provided: train_image = [] for i in tqdm(range(train.shape[0])): img = image.load_img('Multi_Label_dataset/Images/'+train['Id'][i],target_size=(224,224,3)) img = image.img_to_array(img) img = img/255 train_image.append(img) X = np.array(train_image) I keep getting the following error. I've tried different things to no avail. No such file or directory: ' Multi_Label_dataset/Images/tt0086425'
Hi orde, You have to pass the correct path to read the images.
Thank you so much, Sharma. This article motivates me to learn more about image processing.
Glad you liked it!
Hi, I have a question. Can I use this method to classify an image where multiple objects of the same class are in the picture? Or can this even be solved with multi-class classification? I have pictures of a box with multiple same-class objects in it. I would like to use those pictures for training; is that possible, or do I need a set where only one object is in the picture? Thanks
Hi Khani, This method will only classify whether an object is present or not. It will not be able to classify if an object is present multiple times. That can be done using object detection algorithms which will detect each object from the image depending on the training set.
Hi Pulkit Great Article .. I had a question -- Can You please tell me how to convert image dataset in .csv file? Is there any code for it..?
Hi Nisarg, You can create a csv file but the code will entirely depend on the format of the dataset. There is no specific code for this, you have to write the code according to the format of your data.
Hi, Great article. I have a question: I don't have a csv file, so what do I do to replace this line of code: classes = np.array(train.columns[2:])? Basically, what I'm trying to do is predict an image with only the model. Can you give me a sample of how to predict an image with only the model? Thanks
Hello Pulkit, the link for the structured dataset says it doesn't exist anymore. Can you do something to solve this issue, please? Thanks for your help :)
Hi I can't download the dataset from the link that you provide for us "from here" and the original one doesn't have the csv file.
The link to the structured dataset is not working. Can you please check?
Hi Pulkit, Thanks for such an amazing article; it helped me understand multi-label image classification. I just want to know if you can help with how we can use multi-task learning with this type of multi-label classification?
Hi Pulkit, Nice post, I learned a lot! The Google Drive link with the CSV file doesn't exist. Could you update the link? Thank you
Can you please check the link for the structured dataset? It doesn't seem to be working
Hi Pulkit, great article. Sorry to bother you, but I would like to ask for the csv file that was organized for the article (the link in the article has expired).
Hello Pulkit, I found this article interesting and was trying to implement your code but the dataset link is not working. Would be great if you can share it .
Hi Pulkit, Link for the structured dataset is not working. Can you please update the link so that I download it. Thanks.
Hi, just wanted to let you know that the Google Drive link for your dataset is no longer operational. Too bad! This looks like an excellent tutorial.
Good explanation! By the way, I want to ask something: the dataset is multi-label classification, right (the value is 0 or 1, exists or not)? But when you tested it, the result is a probability (is it a regression?), so the result of this model is 25 regression values? If so, I would be very happy, because right now I'm making a multi-label image regression model (predicting the composition of 6 types of algae in a pond image), not a multi-label image classification model.
Also can i get the dataset? because the link above doesnt work. Thanks
Same here. Thanks for your time for contributing this amazing tutorial, just wonder where can we get the updated link for this file? thanks:)
Pulkit, this drive link for .CSV is not working as of today for me. Can you please provide an updated link for csv?
Hello, the link for the structured dataset is not working anymore. I want to try this example. Can you fix or point me to the correct structured dataset to test this out?