Biases are stereotyped or unjust associations that a model encodes about protected attributes, or disparities in model performance for specific groups. Bias mitigation, accordingly, refers to the process of reducing the severity of these stereotypical/unjust associations and of the disparate model performance.
Fairness is concerned with ensuring that a model performs equitably for all groups of people with respect to protected characteristics.
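As a minimal sketch of what measuring such disparities can look like, the snippet below computes per-group accuracy and positive-prediction rates for a binary classifier, then reports a demographic-parity-style gap. The data, group labels, and the `group_metrics` helper are purely illustrative and are not from the session itself.

```python
import numpy as np

# Hypothetical labels, predictions, and group membership for a binary classifier.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def group_metrics(y_true, y_pred, group):
    """Return per-group accuracy and positive-prediction rate."""
    metrics = {}
    for g in np.unique(group):
        mask = group == g
        metrics[g] = {
            "accuracy": float((y_true[mask] == y_pred[mask]).mean()),
            "positive_rate": float(y_pred[mask].mean()),
        }
    return metrics

metrics = group_metrics(y_true, y_pred, group)
for g, m in metrics.items():
    print(g, m)

# Demographic-parity gap: difference in positive-prediction rates across groups.
rates = [m["positive_rate"] for m in metrics.values()]
print("demographic parity gap:", max(rates) - min(rates))
```

A large gap between groups on either metric would flag exactly the kind of disparate performance the session describes as a target for bias mitigation.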
This DataHour will provide an overview of how biases can occur in NLP systems and why it is critical to identify them. The session will also cover techniques for identifying and mitigating these biases and how to estimate fairness.
Shantam currently works at McKinsey & Company and has over 5 years of experience working primarily on NLP and related problems. He is currently pursuing a master's degree in artificial intelligence at IIT Jodhpur, where he will soon begin research on graph neural networks.
Shantam enjoys writing poetry in his spare time and is currently dabbling in photography, primarily street style.
A passion for learning data science and familiarity with ML and NLP.
Drop us an email at [email protected], or you can chat with the speaker directly during the session.
To take advantage of this fantastic opportunity, register for this DataHour here.
If you missed any of the past episodes of “The DataHour,” you may watch the recordings on our YouTube channel. You can read a summary of previous DataHour sessions on our blog by clicking here.
If you’re having trouble enrolling or would like to conduct a session with us, contact us at [email protected].