
Pradeep Gupta

Raynor Ren

Jin Mi
Getting one or two AI models into production is very different from running an entire enterprise or product on AI, and as AI is scaled, problems can (and often do) scale too.
- Standardizing how you build and operationalize models
- Focusing teams where they’re strongest
- Introducing MLOps and establishing best practices and tools to facilitate rapid, safe, and efficient development and operationalization of AI (a minimal pipeline sketch follows this list)
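As a rough illustration of what standardizing model build and operationalization can look like, the sketch below wraps preprocessing, training, an evaluation gate and artifact packaging in one reusable function. It is a minimal example only; the dataset, scikit-learn estimator and 0.8 accuracy gate are assumptions chosen for demonstration, not recommendations.

```python
# Minimal sketch of a standardized train -> evaluate -> package step.
# Dataset, model choice, and the 0.8 accuracy gate are illustrative assumptions.
import json
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def build_and_package(artifact_path="model.joblib", min_accuracy=0.8):
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    # Every model goes through the same preprocessing + estimator pipeline,
    # so training and serving share a single artifact.
    pipeline = Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    pipeline.fit(X_train, y_train)

    accuracy = accuracy_score(y_test, pipeline.predict(X_test))
    if accuracy < min_accuracy:
        # A hard quality gate keeps under-performing models out of production.
        raise ValueError(f"accuracy {accuracy:.3f} below gate {min_accuracy}")

    # Persist the model together with basic metadata for traceability.
    joblib.dump(pipeline, artifact_path)
    with open(artifact_path + ".meta.json", "w") as f:
        json.dump({"accuracy": float(accuracy), "artifact": artifact_path}, f)
    return accuracy


if __name__ == "__main__":
    print(f"packaged model with accuracy {build_and_package():.3f}")
```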

Raynor Ren

Pradeep Gupta
Failure to adequately explain how a model was developed, how it works, and why it produced a given outcome inherently invites both regulatory and customer scrutiny, especially when things go wrong.
- The extent to which customers need to know how and why a particular outcome has been reached
- Do you need to understand black box models and, if so, why? (an illustrative sketch follows this list)
- Where is explainability a luxury and where is it an absolute necessity
- Lessons learned from failures and how explainability could have helped
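As a deliberately simple illustration of explaining a black box model, permutation importance ranks features by how much shuffling each one degrades held-out performance. The sketch below is only an example under assumed choices: the dataset and random forest are stand-ins picked to show the mechanics, not anything specific to the session.

```python
# Minimal sketch: explaining a "black box" model with permutation importance.
# The dataset and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy:
# large drops indicate features the model genuinely relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = sorted(
    zip(data.feature_names, result.importances_mean), key=lambda t: -t[1]
)
for name, importance in ranking[:5]:
    print(f"{name}: {importance:.4f}")
```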

Agus Sudjianto
Agus Sudjianto is an executive vice president, head of Model Risk and a member of the Management Committee at Wells Fargo, where he is responsible for enterprise model risk management. Prior to his current position, Agus was the modeling and analytics director and chief model risk officer at Lloyds Banking Group in the United Kingdom. Before joining Lloyds, he was an executive and head of Quantitative Risk at Bank of America. Prior to his career in banking, he was a product design manager in the Powertrain Division of Ford Motor Company. Agus holds several U.S. patents in both finance and engineering. He has published numerous technical papers and is a co-author of Design and Modeling for Computer Experiments. His technical expertise and interests include quantitative risk, particularly credit risk modeling, machine learning and computational statistics. He holds master's and doctorate degrees in engineering and management from Wayne State University and the Massachusetts Institute of Technology.
Even without intentionally prejudiced data or development practices, AI can produce inequitable results. How can organizations ensure they are mitigating bias at all levels and reducing the risk of reputational, societal and regulatory harm?
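A common first step toward answering that question is to measure group-level disparities in model decisions. The sketch below computes a simple demographic parity gap over predictions; the data, group labels and 0.1 tolerance are hypothetical, chosen purely for illustration.

```python
# Minimal sketch: checking predictions for group-level disparity
# (demographic parity difference). The data and the 0.1 tolerance are
# hypothetical, for illustration only.
import numpy as np


def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates


if __name__ == "__main__":
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
    groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])
    gap, rates = demographic_parity_difference(y_pred, groups)
    print("positive rates by group:", rates)
    if gap > 0.1:  # the tolerance is a policy choice, not a universal constant
        print(f"warning: disparity of {gap:.2f} exceeds tolerance")
```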

Hooman Sedghamiz
Hooman Sedghamiz is Director of AI & ML at Bayer. He has led algorithm development and generated valuable insights to improve medical products, ranging from implantable and wearable medical and imaging devices to bioinformatics and pharmaceutical products, for a variety of multinational medical companies.
He has led projects and data science teams and developed algorithms for closed-loop active medical implants (e.g. pacemakers, cochlear and retinal implants), as well as advanced computational biology to study the time evolution of cellular networks associated with cancer, depression and other illnesses.
His experience in healthcare also extends to image processing for Computed Tomography (CT) and interventional X-ray (iX-Ray), as well as signal processing of physiological signals such as ECG, EMG, EEG and ACC.
Recently, his team has been working on cutting-edge natural language processing, developing models to address healthcare challenges involving textual data.
All systems fail at some point, no matter how much time and rigor are put into their design and development. AI is not immune: it is susceptible to attacks, exploitation and unexpected failures. This session will be broken into two presentations to explore:
Design and build:
- Top tips for designing, building and ensuring robustness and resilience in AI
- Improving the robustness of AI components and systems
- Designing for security challenges and strategies for risk mitigation
Testing and evaluation:
- How to test, evaluate and analyze AI systems
- Adopting comprehensive test and evaluation approaches
- Which protocols can be applied and where new approaches are required (a simple perturbation-test sketch follows this list)
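One small ingredient of such a test-and-evaluation approach is checking how quickly accuracy degrades as inputs are perturbed. The sketch below adds Gaussian noise of increasing magnitude to held-out data and reports accuracy at each level; the dataset, model and noise scales are illustrative assumptions only.

```python
# Minimal sketch: a perturbation test that tracks accuracy as Gaussian
# noise is added to inputs. Dataset, model, and noise levels are
# illustrative assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise_scale in [0.0, 0.5, 1.0, 2.0, 4.0]:
    # Perturb the held-out inputs and record how much accuracy decays.
    noisy = X_test + rng.normal(scale=noise_scale, size=X_test.shape)
    acc = accuracy_score(y_test, model.predict(noisy))
    print(f"noise scale {noise_scale:>4}: accuracy {acc:.3f}")
```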
