Episode 39: Serverless Machine Learning In Action with Carl Osipov
Carl Osipov is the Chief Technology Officer of CounterFactual.AI, a boutique machine learning consultancy he co-founded with a friend from IBM. Previously, he held engineering and technical leadership roles at Google and IBM, on programs and projects across both the United States and Europe, in the areas of machine learning, computational natural language processing, cloud computing, and big data analytics. Carl is also an inventor with six patents granted by the USPTO and the author of "Serverless Machine Learning in Action," a book from Manning Publications, currently available as an ebook subscription and expected in print in early 2021.
- (2:22) Carl talked about his early exposure to programming and his Bachelor’s degree in Computer Science at the University of Rochester in the late 90s.
- (5:12) Carl implemented his first fully connected, two-hidden-layer artificial neural network using the C programming language back in 2000 when using neural networks wasn’t nearly as cool as it is today.
- (8:00) Carl started his career as a software engineer at IBM, writing software for large-scale distributed systems and a voice-dialog management system.
- (13:31) The first production machine learning system that Carl worked on is called Conversational Interaction Manager, which is a dialog management system for conversational mixed-initiative natural language applications. He brought up the challenges in DevOps and data quality.
- (20:05) The second production machine learning system that Carl worked on is called Smarter Campus, which is a project that enables staffing recommendations based on social networking, optimization, and text analytics.
- (27:16) Carl unpacked the evolution of his career at IBM, working on various leadership roles. In particular, he worked on IBM Bluemix, IBM’s cloud platform-as-a-service, with over 1 million registered users. He emphasized the importance of talking to customers and finding product-market fit.
- (33:01) Carl discussed his decision to pursue a Master’s degree in Computer Science at the University of Florida in the middle of his career.
- (35:24) Carl explained his research paper, which combines game theory and machine learning called “AmalgaCloud: Social Network Adaptation for Human and Computational Agent Team Formation.” The paper focuses on the relationship between network adaptation for candidate group participants and the performance of problem-solving groups.
- (40:50) Carl discussed his patent on learning ontologies for machine learning - which maps ontologies from data warehouses to computer systems.
- (47:00) Carl unpacked his four-part blog series from 2016 that discusses serverless computing via tools such as Docker and Apache OpenWhisk.
- (52:58) Carl emphasized the importance of learning Docker to be productive as a Machine Learning practitioner.
- (55:02) Carl became a program manager at Google Cloud and helped manage the company’s efforts to democratize machine learning via the Advanced Solutions Lab in 2017.
- (59:07) Carl recalled his experience as an instructor at various machine learning boot camps.
- (01:01:44) Carl went over the growing popularity of semi-structured data, referring to his talk at Google’s 2018 Data Cloud Next event.
- (01:06:29) Currently, Carl is the CTO of CounterFactual.AI, which works with various clients using tools such as PyTorch and AWS. He brought up an example of a food delivery application.
- (01:09:13) Carl went over his experience leading a workshop on Serverless Machine Learning with TensorFlow at the Reinforce AI Conference in Budapest last year.
- (01:10:52) Carl is writing a book with Manning called Serverless Machine Learning in Action. He explained that serverless tools help minimize the effort required for MLOps.
- (01:13:47) Carl talked about the rise of PyTorch as a production-ready deep learning framework, as well as his preference for PyTorch’s design philosophy.
- (01:17:10) Carl shared his opinions on choosing among different cloud platforms to host and run serverless ML pipelines.
- (01:19:37) Carl described the data and tech community in Orlando, Florida.
- (01:21:53) Closing segment.
His Contact Information
His Recommended Resources
Serverless Machine Learning In Action
Here are highlights from my conversation with Carl:
ON WORKING AT IBM
- My first job at IBM out of college was really formative in helping me understand how to build large-scale distributed systems. Back in 2001, I was hired as a software engineer responsible for high-performance, high-reliability C++ and Java software that powered IBM’s semiconductor fabrication (fab) facility. I learned how to develop distributed systems so that they are resilient to change and scale to hundreds of pieces of equipment.
- Then I moved to the IBM TJ Watson Research Center to work on voice-recognition and dialog-management systems. There, I worked on machine learning for NLP (text classification, entity recognition) and built an end-to-end system that interacted with customers to manage their 401(k) accounts.
- There were many DevOps challenges. It was exceptionally difficult for us to create new releases of software and deliver those releases to the customers. There were no CI/CD engineering practices back then!
- The machine learning challenges lay in data quality. Even today, data quality tools have not improved much since then.
- Later in my career at IBM, I worked on Bluemix (IBM’s platform-as-a-service), first as a developer advocate and then in business development. The platform scaled to over 1M registered users. We basically went out and talked to developers one at a time at trade shows across the US and Europe.
ON MACHINE LEARNING RESEARCH
- One of the limits of machine learning today is the loss function. When I can’t engineer a highly complex loss function myself, I can use ideas from game theory, such as the Nash equilibrium, to create an environment in which models are trained.
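To make the Nash equilibrium idea concrete, here is a toy sketch in pure Python (the game and payoffs are illustrative assumptions, not from the episode): it finds the pure-strategy Nash equilibria of a 2x2 game, i.e., the outcomes where neither player can improve by deviating unilaterally.

```python
# Toy 2x2 game (Prisoner's Dilemma payoffs, chosen for illustration).
# payoffs[(row, col)] = (row player's payoff, column player's payoff)
# Strategy 0 = cooperate, 1 = defect.
payoffs = {
    (0, 0): (3, 3),
    (0, 1): (0, 5),
    (1, 0): (5, 0),
    (1, 1): (1, 1),
}

def pure_nash_equilibria(payoffs):
    """Return the (row, col) profiles where neither player gains by deviating."""
    equilibria = []
    for (r, c), (pr, pc) in payoffs.items():
        # Is r a best response for the row player, holding c fixed?
        row_best = all(pr >= payoffs[(r2, c)][0] for r2 in (0, 1))
        # Is c a best response for the column player, holding r fixed?
        col_best = all(pc >= payoffs[(r, c2)][1] for c2 in (0, 1))
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

print(pure_nash_equilibria(payoffs))  # [(1, 1)]: mutual defection
```

The connection to training: in adversarial setups such as GANs, two models play an analogous game against each other, and training seeks an equilibrium between them rather than the minimum of a single hand-engineered loss function.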
ON SERVERLESS MACHINE LEARNING
- Serverless apps make me productive as a machine learning practitioner, as I can focus on writing code, as opposed to the operations aspect.
- In infrastructure-as-a-service and platform-as-a-service approaches, the operational overhead can be overwhelming. The serverless approach outsources operational concerns (managing middleware, runtimes, operating systems, security updates) to the cloud provider. My blog series illustrates common architectural patterns for serverless computing.
- If you have to put your machine learning models into production, you need to learn Docker.
- I wrote “Serverless Machine Learning in Action” to help practitioners become more productive contributors to their teams and organizations. More specifically, readers learn to avoid the MLOps trap (tending to things like availability, latency, and request volumes) by outsourcing those activities to the cloud providers (AWS, GCP, Azure).
- I love the philosophy behind PyTorch’s design. I can drill deep into specific components and use abstractions on top of them via well-documented APIs.
- When choosing a cloud provider, don’t obsess over their features/capabilities. Instead, pay attention to non-technical issues such as vendor lock-in.
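Carl’s food-delivery example gives a sense of what “focus on writing code, not operations” looks like. Below is a minimal Lambda-style handler sketch (the function names and the toy linear “model” are hypothetical stand-ins, not from the episode): the practitioner writes only this function, while the platform handles scaling, runtimes, and OS patching.

```python
import json

def predict_delivery_time(features):
    # Stand-in for a real model (e.g., a PyTorch model loaded at cold start);
    # here, a toy linear estimate: base time plus a per-kilometer cost.
    return 12.0 + 3.5 * features["distance_km"]

def handler(event, context):
    """Entry point the serverless platform invokes once per request."""
    features = json.loads(event["body"])
    eta_minutes = predict_delivery_time(features)
    return {
        "statusCode": 200,
        "body": json.dumps({"eta_minutes": eta_minutes}),
    }

# Local invocation with a mock event, shaped like what the platform passes:
response = handler({"body": json.dumps({"distance_km": 4.0})}, None)
print(response["body"])  # {"eta_minutes": 26.0}
```

Everything outside this function (provisioning, load balancing, patching) is the operational overhead that the serverless model outsources to the provider.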
ON WORKING AT GOOGLE
- At Google, I joined the Advanced Solutions Lab, where customers came to Google’s campus, attended lectures from Google’s engineers, and learned model deployment.
- I helped flesh out learning materials on machine learning, data engineering, and data analytics. I also talked to engineers at trade shows and conferences to get feedback on the materials.
ON FOUNDING COUNTERFACTUAL.AI AND BEING IN ORLANDO, FLORIDA
- I have full responsibility for all the technical decisions happening inside the company. We do a lot of work with PyTorch and AWS.
- Orlando has a community that historically focuses on simulation, virtual reality, and gaming applications. I believe Machine Learning 3.0 relies heavily on simulated environments to train machine learning models.