
The Best Research Papers from ICML 2018 – A Must-Read for Data Scientists

Overview

  • The thirty-fifth edition of the International Conference on Machine Learning (ICML) kicks off on July 10th in Stockholm, Sweden
  • The panel of judges has picked out the two best research papers from all the submissions – one deals with adversarial attacks, and the other with fair machine learning
  • Three papers won the runner-up award – full list, including links, in the article below

 

Introduction

The thirty-fifth edition of the International Conference on Machine Learning (ICML) is almost here! Some of the best minds in the machine learning industry come together at this well-known summit to present their research and discuss new ideas. It’s an event every data scientist and ML practitioner should have circled on their calendar!

Each year, hundreds of research papers are submitted to the conference but only a few make the cut. A panel of hand-picked judges reviews these papers and selects the winners of what the conference calls the “Best Paper Awards”. It’s quite a prestigious award to win, given that the winners are picked from some of the best research in the ML space.

This year the competition was tougher than ever before. With more and more research being funded, papers are being churned out at an unprecedented rate. Without any further ado, the two papers that won the Best Paper Award at ICML 2018 are covered below.

Three runner-up awards have also been announced.

Let’s look at the two best papers in a bit more detail.

 

Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples

Obfuscated gradients are a form of gradient masking that often creates a false sense of security against adversarial attacks. The researchers found ways to circumvent defences that relied on these obfuscated gradients. They identified three types of obfuscated gradients and designed attack techniques for each. In a case study, the team found that 7 out of 9 defences relied on obfuscated gradients – their techniques circumvented 6 of those completely and 1 partially.
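The paper's attack techniques (such as backward-pass approximation of masked gradients) go well beyond a blog snippet, but the gradient-based attacks that obfuscated gradients try to block can be sketched with the classic fast gradient sign method (FGSM) on a toy logistic model. The weights, inputs, and epsilon below are illustrative assumptions, not anything from the paper:

```python
import numpy as np

# Toy linear "classifier": p(y=1 | x) = sigmoid(w.x + b).
# Weights and inputs are made up purely for illustration.
w = np.array([2.0, -1.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

def fgsm(x, y, eps):
    """Fast gradient sign method (Goodfellow et al.): nudge x by eps
    in the sign of the loss gradient. For logistic loss, the gradient
    of the loss with respect to the input is (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0])          # correctly classified as class 1
x_adv = fgsm(x, y=1.0, eps=0.6)   # small perturbation, large effect

print(predict(x))      # ~0.65: confident class 1
print(predict(x_adv))  # ~0.18: flipped to class 0
```

Obfuscated-gradient defences break this kind of attack by making `grad` uninformative; the paper's contribution is showing that an attacker can approximate the true gradient and succeed anyway.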

Why is this important? It prompts organizations using this kind of defence to shore up their current methods and look for more robust measures. A very worthy co-winner of the Best Paper Award.

 

Delayed Impact of Fair Machine Learning

Bias has long been a pressing issue in machine learning models. Recent examples of facial recognition software failing to recognize people from certain demographic groups have been in the news, and there are examples from other fields as well, such as lending, recruitment, and advertising. Researchers from Berkeley’s Artificial Intelligence Research lab have published this paper on aligning machine learning with long-term social welfare.

They introduce a one-step feedback model of decision-making. The results of this model show how certain decisions change the underlying population over time. A credit-scoring example in the context of loans is discussed extensively.
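The paper's formal results analyze threshold policies under several fairness criteria; a toy simulation can convey the core mechanism. Everything below – the thresholds, score dynamics, and repayment model – is an illustrative assumption, not the paper's actual setup:

```python
import numpy as np

# A minimal one-step feedback sketch: a lender approves loans above a
# score threshold; repayment raises a borrower's score while a default
# lowers it by more, so the lending policy itself shifts the group's
# score distribution one step later.
rng = np.random.default_rng(42)
scores = rng.normal(loc=600, scale=50, size=10_000)  # one group's credit scores

def repay_prob(s):
    # Higher scores repay more often (a stand-in for a real risk model).
    return np.clip((s - 400) / 400, 0.0, 1.0)

def one_step(scores, threshold, gain=30.0, loss=60.0):
    approved = scores >= threshold
    repaid = rng.random(scores.size) < repay_prob(scores)
    new = scores.copy()
    new[approved & repaid] += gain    # successful repayment helps the score
    new[approved & ~repaid] -= loss   # a default hurts more than repayment helps
    return new

# Mean score change for the group under a lax vs. a strict threshold.
deltas = {thr: one_step(scores, thr).mean() - scores.mean() for thr in (550, 650)}
print(deltas)
```

With these made-up numbers, the lax threshold approves many borrowers who are likely to default and drives the group's average score down – a seemingly generous policy actively harms the group it lends to, which is the kind of delayed impact the paper formalizes.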

 

Our take on this

These are two fascinating research papers that explore different but equally pressing issues in machine learning. I implore you to take the time to read them. While there are thousands of research papers out there these days, conferences like ICML and ICLR pick out the cream of the crop to make things easy for us.

Is there any other research paper you feel we should know about? Let me know in the comments below!

 

Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!

 

