Many CEOs and COOs look for that one knob or trigger they can turn or press to fix their enterprise troubles. Spoiler alert: such things exist only in Disney cartoons. In the real world, however, we can get pretty close. How does identifying the key risk areas, along with a list of potential paths to mitigating that risk, sound?
This is easier said than done. My doctoral research on disease and growth signaling networks in computational systems biology, and now my work with enterprises, shows that such networks have master regulators: nodes that dramatically change how most other processes operate. The good news about enterprise systems is that data on key processes is logged in well-established transaction systems, including human capital, finance/ERP, supply chain, manufacturing, customer relationship/service, sales, marketing, etc. We can attempt to discover such master regulators from the time series data available in these systems.
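As a flavor of what such discovery can look like, here is a minimal sketch of screening candidate drivers by lagged cross-correlation: a series whose past values strongly predict another series' present is a candidate regulator. All data and names here are hypothetical, and real systems would use far more robust methods (e.g., Granger causality tests on non-stationary series):

```python
import numpy as np

def lagged_influence(x, y, max_lag=4):
    """Max absolute correlation between past values of x and current y."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return max(
        abs(np.corrcoef(x[:-lag], y[lag:])[0, 1])
        for lag in range(1, max_lag + 1)
    )

rng = np.random.default_rng(0)
driver = rng.normal(size=200)                # hypothetical "master regulator" series
follower = np.roll(driver, 2) + 0.1 * rng.normal(size=200)  # follows driver at lag 2
noise = rng.normal(size=200)                 # unrelated process

print(lagged_influence(driver, follower))    # high: driver's past predicts follower
print(lagged_influence(driver, noise))       # low: no lagged relationship
```

Ranking every logged series by such influence scores over the rest is one crude way to surface master-regulator candidates for human review.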
Can we identify risk in each of these systems on its own? What about risk to the overall company, or to a line of business, driven by one or more of these areas? In this talk, we will demonstrate what it takes to combine, hierarchically, knowledge-driven intelligent systems (inference on domain knowledge, i.e., GOFAI: Good Old-Fashioned AI), statistical reasoning, and data-driven machine learning systems (including NLP, non-stationary time series modeling, and image classification, along with reinforcement learning) to create such an integrative tool for enterprise leadership.
We will show two examples of such successful systems. The first comes from EEO/OFCCP diversity adverse-impact analysis and affirmative action planning, ensuring parity in hiring, promotions, terminations, and salary across gender, ethnicity, disability, age, and veteran status. The second comes from investment banking, where data from emails, customer service notes, and phone calls turned out to be far more informative about risk than well-known KPIs, which were shown to be misleading. We will show how these systems make "shortest path to success" recommendations to their users.
This class of hybrid hierarchical systems is expected to be the future of the field according to Gary Marcus at NYU (https://arxiv.org/ftp/arxiv/papers/1801/1801.00631.pdf). In a section titled "Deep learning thus far has not been well integrated with prior knowledge," Marcus writes that "Such apparently simple problems require humans to integrate knowledge across vastly disparate sources, and as such are a long way from the sweet spot of deep learning-style perceptual classification." He also quotes François Chollet of Google, author of Keras, who said in 2017: "For most problems where deep learning has enabled transformationally better solutions (vision, speech), we've entered diminishing returns territory in 2016-2017."
The talk will also address the following questions:

- How to detect that our chosen tools or methods are misbehaving or failing to model the underlying process?
- How to ensure that the key sources of variation are included in the data used for modeling? A brief introduction to active learning, time permitting.
- How to integrate signals from multiple sources? Should one always use mathematical or algorithmic integration, or is there a need for domain knowledge-based integration?
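On that last question, a toy sketch of domain knowledge-based integration (all thresholds, names, and inputs here are hypothetical): instead of averaging signals, a hard domain rule is allowed to dominate the learned score, so expert knowledge is not diluted by a model trained on misleading KPIs.

```python
def integrate_risk(ml_score, kpi_ok, escalation_emails):
    """Combine a data-driven score with a domain rule.

    ml_score: anomaly score in [0, 1] from a learned model (assumed).
    kpi_ok: whether the headline KPIs look healthy.
    escalation_emails: count of escalation-flagged emails this period.
    """
    # Domain rule: a burst of escalation emails means elevated risk,
    # even when KPIs look fine (KPIs can be misleading).
    if escalation_emails >= 5:
        return max(ml_score, 0.9)
    # Otherwise, mildly discount the ML score when KPIs agree things are calm.
    return ml_score * (0.7 if kpi_ok else 1.0)

print(integrate_risk(0.3, kpi_ok=True, escalation_emails=8))  # rule dominates
print(integrate_risk(0.3, kpi_ok=True, escalation_emails=0))  # discounted score
```

The design choice illustrated is the one the talk argues for: purely algorithmic fusion (e.g., a weighted average) could not reproduce this behavior, because the domain rule is a constraint, not another feature.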
Check out the video below to learn more about the talk.