There are multiple ways to learn data science, machine learning, and deep learning concepts. You can watch videos, read articles, enroll in courses, and attend meetups, among other things. But there is one thing for which there is no substitute.
I have personally learned a LOT from interacting with data science experts and industry thought leaders. Their experience in managing end-to-end machine learning and deep learning projects, their thinking when building a data science team from scratch, how they managed tough projects and overcame hurdles, etc. – we simply cannot learn all of these in any course.
So, I am thrilled to present an exclusive interview with one such data science expert and industry thought leader – Dr. Sunil Kumar Vuppala! He is the Director – Data Science, Ericsson GAIA (Global AI Accelerator), Bangalore, and brings a wealth of industry and research experience.
What I really liked about Dr. Sunil in this interview was his to-the-point answers. He cuts to the chase quickly and shares his rich experience and valuable advice for our community. You will learn a lot from his answers here, regardless of the data science role you’re in or are aiming for.
Dr. Sunil has had an illustrious academic and industry career. He started off as an Application Engineer at Oracle and then held various AI research roles at Infosys. He was also Principal Scientist at Philips before taking up his current position. Not only this, Dr. Vuppala completed his M.Tech at IIT Roorkee before earning his Ph.D. in ‘Optimization under uncertainty for Energy Management in Smart Grid’ from IIIT Bangalore. He also serves as visiting faculty at several of India’s top institutes, teaching AI and ML.
Enjoy the discussion and make sure you leave your thoughts and comments below this article!
Dr. Sunil Vuppala’s Industry and Research Experience
Purva Huilgol (PH): You have a background in Computer Science with previous roles in software engineering and software development. It was only after this that you crossed over to data science and then into deep learning.
What inspired you to make this transition and how did you achieve it?
Dr. Sunil Vuppala: Being in the research stream, it was a smooth transition for me.
- My career started as a software applications engineer at Oracle, via campus placement from IIT Roorkee
- As my interest was in research, I moved to Infosys Research after working for two years at Oracle
- There, I worked on building Internet of Things (IoT) platforms and analyzing sensor data
“I really bet on IoT 12 years ago. I realized that unless analytics and AI support the analysis of data captured from IoT, the cycle will not be complete. That motivated me to venture into the data science field.”
The organizational changes at Infosys gave me an opportunity to work in Automation and AI way back in 2012-13. Furthermore, my learnings during my Ph.D. helped me achieve the transition. Andrew Ng inspired me to democratize AI and serve society through technical contributions.
PH: You also have extensive industry experience at top companies, ranging from domains like networks and telecom to software applications and healthcare. You also hold a Ph.D. from IIIT-B, which focused on Smart Grids and IoT.
How did you address the gap between your industry and research roles?
Dr. Sunil: Good question – and this is something I have seen a lot of folks struggle with. My experience has been slightly different from what you might expect.
“Since I was a part of the research wing at both Infosys and Philips, there was mutual learning involved.”
At IIIT, I was solving problems involving millions of variables, while at Infosys, I was deploying the implementations of those results as a testbed. When this further developed into an applied research problem for my Ph.D., the challenge was to translate it into benefits for my organization. I needed to balance both: my academic research, which prioritized publishing papers, and my industry role, which focused more on patents.
PH: Coming from a research background and working extensively with research labs, could you highlight the importance of research and where companies should focus in machine learning?
Dr. Sunil: Research is a core part of any technology company and machine learning is not an exception. The focus of companies in machine learning research can be across multiple dimensions:
- Futuristic areas of machine learning: reasoning, reinforcement learning, the security of machine learning models, and Explainable AI (our focus at the Ericsson Research and Global AI Accelerator [GAIA] divisions)
- Deploying optimized machine learning/deep learning models on edge devices such as drones, webcams, mobiles, and end terminals (one of the focus areas at Ericsson)
- Building AI platforms for a specific domain to solve real-world problems (we concentrate on building a telecom-specific platform at Ericsson GAIA)
PH: A large part of your research and industry experience has been on energy management and smart grids. In today’s world, where efficient energy management has become so crucial, how do you think Data Scientists can contribute towards solving these issues?
Dr. Sunil: Interesting question. Smart energy management is much more than optimization. Machine learning in the smart grid can be applied to:
- Analyzing demand, power, and price data obtained at various points in the smart grid
- Predicting patterns, detecting anomalies, and suggesting preventive actions
- Predicting renewable energy generation
- Reducing wastage of resources and capital
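As a small illustration of the anomaly-detection point above (this sketch is mine, not from the interview), an unsupervised detector such as scikit-learn’s `IsolationForest` can flag suspicious readings in demand data. The hourly demand series, the injected spikes, and the `contamination` setting here are all synthetic assumptions chosen for the example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic hourly demand over 30 days: a daily sinusoidal pattern plus noise
hours = np.arange(24 * 30)
demand = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# Inject a few anomalous spikes, e.g. metering faults or sudden load surges
demand[[100, 400, 650]] += 60

# Fit an unsupervised anomaly detector on (hour-of-day, demand) features
features = np.column_stack([hours % 24, demand])
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
flags = model.predict(features)  # -1 marks suspected anomalies

anomalous_hours = hours[flags == -1]
print(anomalous_hours)
```

In a real grid deployment, the flagged hours would feed the “preventive actions” step, for example triggering an inspection before a fault cascades.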
PH: You have been in this field since before the words ‘Data Scientist’ or ‘Machine Learning Engineer’ became popular. Given your rich experience, I would love to know what your most challenging project was and how you overcame the obstacles.
Dr. Sunil: The most challenging project for me was when I represented my platform team for a large manufacturing client in the USA. The client was an early customer of our Automation and AI platform. The VP of Products informed me that he would share terabytes (TB) of data with me, and that I needed to identify the million-dollar use cases for Automation and AI in his organization.
After a couple of rounds of discussions, we agreed that expecting magic after dumping entire terabytes of data into the platform was not the solution. Instead, the idea was to proceed in incremental steps. We started with 2 of the client’s 55 applications, identified the potential use cases for automation and AI within 2 days, and were able to present them to the client’s CIO. Those were the early days of practical AI implementation.
“Now, AI is at the peak of inflated expectations and people think that AI can solve all their problems. We should formulate realistic business problems and convert them into data science problems and then work on what kind of methods will be needed to solve such problems.”
The most recent challenging project for me is at Ericsson. We are trying to proactively predict the types of complaints customers raise with telecom operators and take corrective action through configuration changes.
PH: The amount of breakthroughs happening in this space, especially deep learning, is unprecedented. What will be the next frontier for deep learning algorithms?
Dr. Sunil: I agree that the field is fast changing. I am betting more on deep reinforcement learning and killer applications in unsupervised learning including GANs in the future. We have seen tremendous applications of deep learning architectures across the domains.
However, most of the problems we solve in the industry are supervised learning problems, while in the real world, the available data is not annotated.
“If we can extend the usage of Deep Learning directly with the data without the requirement of annotations, then the potential of this field is unlimited.”
Advice for Aspiring Data Scientists
PH: The role of a software developer/engineer is slowly starting to encompass more and more skills. What can software developers do to transition into the machine learning field by leveraging their software engineering experience?
Dr. Sunil: It is important for software engineers to understand the difference between the deterministic Software Development Life Cycle (SDLC) and the vague, probabilistic Data Science Life Cycle (DSLC).
Successful data scientists are strong in mathematics, programming and domain knowledge. A software engineer can contribute to the programming aspect of Machine Learning models, their evaluation and visualization.
Therefore, software developers should identify their core strengths and choose where they can excel in this field. If they have a Computer Science background, they should concentrate on basic statistics for data scientist profiles. If they have data handling experience, they should aim for data engineering profiles.
PH: There’s no shortage of publicly available datasets to practice machine learning skills. What would your advice be on the kind of projects aspiring data scientists should do to enhance their resumes for the current job market?
Dr. Sunil: I strongly believe that students should at least target two projects (one during the course work as a capstone project and another in their own domain) before looking out for job opportunities.
The current job market is very good. The industry is desperately looking for bright data scientists and data engineers.
Here are a few project ideas one can target based on publicly available datasets:
- Computer vision: Image classification, object detection, segmentation and captioning, video analytics
- NLP: Sentiment analysis, sarcasm detection, inference, and neural machine translation of Indian languages
- Speech: Building applications with Alexa, Indian languages speech processing
- Multimodal (text, image, video) chatbots with the available conversational interface frameworks. One can start with retrieval-based chatbots and move towards generative ones
- Govt. of India publishes a lot of data from various departments. Data scientists can use that data to solve real-world problems such as prediction in the areas of agriculture (crop yield), revenue, etc.
- ISRO satellite data analysis: Institutes can sign an MoU with ISRO to access high-resolution satellite images at its Hyderabad and Ahmedabad centers. One can build models using the available data and publish them, while the data itself remains with ISRO
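To make the first item above concrete (my illustration, not from the interview), an image-classification portfolio project can start from a trivially available dataset before moving to anything larger. This sketch uses scikit-learn’s bundled 8x8 digits dataset and a simple linear baseline; a real project would compare this against a convolutional network:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the small handwritten-digit dataset bundled with scikit-learn
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# A simple linear baseline; in a portfolio project, report this alongside
# a stronger model so the comparison itself demonstrates your judgment
clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.3f}")
```

Establishing a cheap baseline first, then justifying each increase in model complexity, is exactly the incremental habit Dr. Sunil describes elsewhere in the interview.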
PH: Taking this a step further – I wanted to pick your brain on an often asked question – how can people bridge the gap between learning data science in theory and applying it in the industry?
Dr. Sunil: Students should aim to have a strong foundation. For this, students need to:
- Start with the standard available datasets to build their foundations
- Move on to real-life datasets which are easily available on hackathons and on open-source platforms
- Acquaint themselves with IP and data rules such as the General Data Protection Regulation (GDPR)
Industry data, on the other hand, needs a lot of preprocessing and exploratory data analysis skills. Such processes generally take up the bulk of our time in the industry.
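A minimal sketch of what that preprocessing typically involves (my example on a toy dataset, not taken from the interview): structural checks, category normalization, imputation, and outlier flagging with pandas. The column names and the z-score cutoff are assumptions for illustration:

```python
import numpy as np
import pandas as pd

# A toy "raw" dataset with the kinds of issues industry data typically has:
# missing values, inconsistent category labels, and a gross outlier
df = pd.DataFrame({
    "region": ["north", "North", "south", None, "south"],
    "usage_kwh": [12.5, 13.1, np.nan, 11.8, 900.0],
})

# 1. Quick structural checks
print(df.dtypes)
print(df.isna().sum())

# 2. Normalize categories and impute missing values
df["region"] = df["region"].str.lower().fillna("unknown")
df["usage_kwh"] = df["usage_kwh"].fillna(df["usage_kwh"].median())

# 3. Flag gross outliers with a z-score rule (a loose 1.5 cutoff here,
# since the maximum possible z-score in a sample of five is about 1.79)
z = (df["usage_kwh"] - df["usage_kwh"].mean()) / df["usage_kwh"].std()
df["outlier"] = z.abs() > 1.5
print(df)
```

On real industry data, each of these steps expands into its own investigation, which is why they consume the bulk of a project's time.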
“In short, Data Science enthusiasts should participate in hackathons and maintain strong GitHub profiles to enhance their learning.”
PH: Finally, could you give us a list of your favorite research papers in this domain that every data scientist, aspiring or experienced, should read?
Dr. Sunil: There are a lot of papers to mention!
“I strongly recommend aspiring/experienced data scientists to go through the seminal work done by the 2018 Turing award winners – Geoffrey Hinton, Yann LeCun and Yoshua Bengio.”
They are excellent professors and are now supporting the tech giants in Silicon Valley to democratize AI. Here are some of my favorite research papers in this domain:
- Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning internal representations by error propagation. In Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1. MIT Press, Cambridge, MA, pp. 318-362
- LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521, pp. 436-444
- LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), pp. 2278-2324
- Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6), pp. 84-90
- LeCun, Y., and Bengio, Y. (1998). Convolutional networks for images, speech, and time series. In The Handbook of Brain Theory and Neural Networks. MIT Press, Cambridge, MA, pp. 255-258
- Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), pp. 1-127
- Goodfellow, I. J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems (NIPS’14), Vol. 2. MIT Press, Cambridge, MA, pp. 2672-2680
- Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In 2017 IEEE International Conference on Computer Vision (ICCV), Venice, pp. 618-626. DOI: 10.1109/ICCV.2017.74
- Mikolov, T., et al. (2013). Efficient estimation of word representations in vector space. arXiv:1301.3781
- Mnih, V., et al. (2015). Human-level control through deep reinforcement learning. Nature, 518, pp. 529-533. DOI: 10.1038/nature14236
I learned a lot from the answers given by Dr. Vuppala. Coming from a software engineering background myself, I found his practical suggestions and insights on the data science industry extremely beneficial, and I believe other data science professionals will too.
Here are a couple of key takeaways from the interview which resonated with me:
- It is crucial for students to maintain updated GitHub profiles, participate in hackathons, and work on real-life datasets to enhance their skills in this domain
- The future course of machine learning would be charted by companies with highly competent research departments and this would require a lot of skilled data scientists
If you have any questions or feedback or more points of view to discuss, please share your thoughts in the comments section below.