Quantitative Improvements with Open-Source Tools
The Brief: the newsletter for integrity engineering
📰 June 9th, 2023. Issue #15
Open-source trust & safety is new, but how it fits into the MLOps, LLMOps, or DevOps lifecycle is not. That's why I'm excited to be in this space with Apollo: open-source MLOps tooling for user experience (AKA open-source trust & safety tools). As machine learning and artificial intelligence are adopted in more software products and services, the need for best practices and tools to support the testing, deployment, management, and monitoring of ML models grows with them.
MLOps offers a set of best practices, tools, and frameworks that benefit businesses in the following ways:
Promoting project sustainability by automating ML processes and providing model auditing capabilities. This enables easy collaboration, transparency, and traceability in building and deploying models.
Enhancing practitioner productivity by providing the right infrastructure and collaborative environment for working on ML projects. This reduces time spent on manual tasks such as data sourcing, code management, and model training.
Ensuring reliability by enabling teams to establish key performance indicators (KPIs) and policies that guide the quality of every stage of the ML product lifecycle. For Trust & Safety this is especially important: great KPIs == great UX.
* LLMOps = MLOps in my opinion
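To make the KPI-and-policy idea concrete, here is a minimal sketch of a policy gate in Python; the metric names and thresholds are hypothetical, not taken from any particular tool:

```python
# Minimal sketch of a KPI policy gate for an ML lifecycle stage.
# Metric names and thresholds below are illustrative examples only.

KPI_POLICY = {
    "abuse_precision": (0.95, "min"),  # precision on abuse detection must stay above this
    "abuse_recall": (0.80, "min"),     # recall must stay above this
    "p99_latency_ms": (250, "max"),    # 99th-percentile latency must stay below this
}

def evaluate_kpis(metrics):
    """Return a list of KPI violations; an empty list means the stage passes."""
    violations = []
    for name, (threshold, kind) in KPI_POLICY.items():
        value = metrics.get(name)
        if value is None:
            violations.append(f"{name}: missing")
        elif kind == "min" and value < threshold:
            violations.append(f"{name}: {value} below {threshold}")
        elif kind == "max" and value > threshold:
            violations.append(f"{name}: {value} above {threshold}")
    return violations
```

A gate like this can run after every training or evaluation stage, blocking promotion of a model that violates any KPI.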
These benefits are particularly valuable when scaling ML applications in domains such as Natural Language Processing (NLP), computer vision, time series forecasting, anomaly detection, and predictive maintenance. By leveraging MLOps practices, businesses can achieve optimal product performance, efficient resource management, and satisfied users.
This article explores an end-to-end MLOps architecture and highlights open source tools that can accelerate each stage of your machine learning solution. By adopting a platform-agnostic approach to discuss MLOps architecture, this article serves as a guide for selecting open source tools that can be utilized to build a comprehensive MLOps solution.
Google’s blurb on what it is. It’s pretty accurate. Essentially unifying:
ML system development (Dev)
ML system operations (Ops)
MLOps gives machine learning engineers and data scientists a way to concentrate on the fundamental work of model development. By lifting the burden of data preprocessing, environment configuration, and model monitoring, it frees these practitioners to dedicate their time and expertise to the heart of the ML process.
To get there, teams can adopt either commercial or open-source MLOps tools. The choice largely depends on two factors: the availability of resources, particularly financial ones, and the stability of the tool. Weighing these helps organizations make an informed decision about which MLOps tooling best fits their circumstances and objectives.
How does this play into Trust & Safety? Automated systems used to detect harm or improve brand safety need TLC. Hence the age-old complaint: `My provider doesn't work`.
Contrary to widespread belief, the motivation behind businesses adopting open source solutions extends beyond mere cost reduction. There are numerous compelling reasons to choose open source software to fulfill your overall business requirements. Open source software exhibits stability and high performance thanks to continuous monitoring, security fixes, and bug fixes by diligent developers. Additionally, successful open source tools benefit from vibrant communities of dedicated users and developers who offer built-in support, ensure codebase longevity, and introduce new features on an ongoing basis.
Open-source tools for MLOps
When embarking on the development of a comprehensive MLOps platform, it is essential to begin by crafting a framework or architecture that aligns with your specific needs. To ensure its effectiveness, it is crucial to gather input from all relevant teams involved, as successful MLOps solutions heavily rely on seamless cross-team communication and collaboration.
The architecture can be divided into two distinct phases, each encompassing a set of interconnected processes: Continuous Integration (CI) and Continuous Delivery (CD). These phases are supported by a CI/CD pipeline, which serves as a mechanism to ensure the continuous development and delivery of models.
Outlined below is an illustrative example of an end-to-end architecture:
Apollo integrates with the end of the MLOps lifecycle. (Tune → Instrument → Automate)
Overall, ML pipelines encompass various essential stages such as data manipulation, training, testing, model deployment, and the generation of reports and artifacts. It is important to highlight the significance of testing within the ML lifecycle, as it plays a critical role in identifying bugs and issues present in both the data and models. By establishing standardized feedback loops, testing provides ML teams with valuable insights that enable them to swiftly update their models, ensuring rapid performance recovery.
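As a rough illustration of what such standardized testing and feedback loops look like, here is a sketch of a tiny test suite over a labeled text dataset; the checks and report shape are invented for illustration, not any specific tool's API:

```python
# Sketch of a model/data test suite with a standardized pass/fail report,
# in the spirit of the testing stage described above. Check names and the
# report format are hypothetical.

from collections import Counter

def check_no_empty_texts(rows):
    bad = [i for i, r in enumerate(rows) if not r["text"].strip()]
    return ("no_empty_texts", len(bad) == 0, f"{len(bad)} empty rows")

def check_label_balance(rows, max_share=0.9):
    counts = Counter(r["label"] for r in rows)
    share = max(counts.values()) / len(rows)
    return ("label_balance", share <= max_share, f"majority share {share:.2f}")

def run_suite(rows, checks):
    """Run every check and summarize failures in a standard report dict."""
    results = [c(rows) for c in checks]
    failures = [name for name, ok, _ in results if not ok]
    return {"passed": not failures, "failures": failures, "results": results}
```

Because the report shape is the same for every run, it can feed directly into the feedback loop that decides whether a model update ships.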
The ML Pipeline
Kubeflow emerges as a highly sought-after open source solution for constructing ML pipelines, which serve as workflows encompassing the building, training, and deployment of ML models (Apollo builds workflows for the building, training and automation of active response). Operating on Kubernetes, this versatile toolkit enables seamless scalability of ML models. With its built-in container orchestration and management capabilities, Kubeflow empowers data scientists to concentrate on the creation of their machine learning workflows. The architectural diagram above provides an overview of the ML workflow, involving data processing and manipulation steps, model training, and validation.
To streamline development, Kubeflow integrates with Git repositories like GitHub, allowing users to remain within a unified context. By pairing the Kubeflow Automated Pipelines Engine (KALE) with GitOps via GitHub Actions, teams can efficiently execute their workflows. Alternatively, Apollo provides an automatic workflow execution framework for those who prefer not to use a separate CI tool. Notably, Kubeflow pipelines are defined with the kfp SDK, which offers a user-friendly domain-specific language (DSL) for describing ML workflows.
Kubeflow stands on three core pillars: scalability, portability, and composability. It empowers projects to dynamically adjust resource allocation based on specific requirements and supports running on diverse infrastructure types. Composability ensures that each project component functions independently, even when subdivided into separate pieces. Moreover, Kubeflow allows for concurrent pipeline execution, facilitating the generation of multiple models simultaneously.
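To illustrate the pipeline-as-code idea behind a DSL like the kfp SDK, here is the concept reduced to plain Python; in real Kubeflow each step would be compiled into its own containerized task, and this sketch is not the kfp API itself:

```python
# Toy pipeline-as-code sketch: steps are registered with a decorator and
# executed in order, each consuming the previous step's output.

class Pipeline:
    def __init__(self, name):
        self.name = name
        self.steps = []

    def step(self, fn):
        """Register fn as the next stage of the pipeline."""
        self.steps.append(fn)
        return fn

    def run(self, data):
        for fn in self.steps:
            data = fn(data)
        return data

pipeline = Pipeline("toxicity-model")

@pipeline.step
def preprocess(texts):
    return [t.lower().strip() for t in texts]

@pipeline.step
def train_stub(texts):
    # Stand-in for a training step: here we just summarize the corpus.
    return {"examples": len(texts), "tokens": sum(len(t.split()) for t in texts)}
```

Because each step is an independent function, steps can be swapped, tested, or run concurrently, which is the composability property described above.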
For model validation testing, Apollo serves as an open source tool that leverages test suites to validate ML models and datasets. It seamlessly integrates with popular tools such as Hugging Face, Databricks, and Apache Airflow, providing flexibility and reliability.
Following model validation, the CD phase comes into play, ensuring the efficient deployment of new changes, releases, or models to end-users. This practice not only alleviates the burden of release maintenance but also accelerates the delivery of software to customers, while fostering continuous learning through valuable customer feedback loops.
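One common CD pattern for models is a gradual (canary) rollout. A minimal, framework-agnostic sketch, assuming stable user IDs are available:

```python
# Deterministic canary routing: hash a stable user ID into a bucket 0-99;
# buckets below the canary percentage are served the new model version.
# This is a generic pattern, not any particular serving tool's feature.

import hashlib

def serves_canary(user_id, canary_percent):
    """Return True if this user should be routed to the new model."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < canary_percent
```

Hashing rather than random sampling keeps each user's assignment stable across requests, so feedback collected during the rollout is attributable to a single model version.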
Now, let's delve deeper into our CD environment.
Model monitoring is an ongoing, critical process: it requires identifying the key elements to monitor and developing an effective strategy around them. Continuous monitoring of ML systems is essential for detecting model degradation, which otherwise leads to suboptimal performance. It is equally important to track operational resources, such as GPU usage and the number of API calls, so the system runs without interruption.
One noteworthy tool for model monitoring is Apollo, which not only facilitates testing and validation of models but also provides robust monitoring capabilities for deployed models. Apollo offers flexibility, with options for both open source and commercial offerings, catering to diverse needs. Additionally, Prometheus and Grafana are other notable model monitoring tools. These tools enable real-time measurements to be tracked and visualized on a centralized dashboard. They gather crucial information regarding model quality, including outlier detection and model drift, as well as operational metrics such as request rate and latency, among other key indicators. This comprehensive monitoring approach allows for proactive identification of potential issues and ensures the continued reliability and performance of ML systems.
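As a taste of what drift monitoring computes under the hood, here is a simple population-stability-index (PSI) style comparison between a reference window and a live window of model scores; the thresholds mentioned in the comment are a common rule of thumb, not any product's default:

```python
# Population Stability Index between two samples of scores in [0, 1].
# A larger PSI means the live score distribution has shifted further
# from the reference distribution.

import math

def psi(reference, live, bins=10):
    def shares(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(int(x * bins), bins - 1)
            counts[idx] += 1
        n = len(sample)
        # Smooth zero buckets so the logarithm stays finite.
        return [max(c / n, 1e-6) for c in counts]

    ref, cur = shares(reference), shares(live)
    return sum((c - r) * math.log(c / r) for r, c in zip(ref, cur))

# Rule of thumb sometimes used: PSI < 0.1 stable,
# 0.1-0.25 moderate shift, > 0.25 significant drift.
```

A monitor can compute this on a schedule over recent model scores and page the team, or trigger retraining, when the index crosses the chosen threshold.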
The versatility and support community surrounding open source tools make them highly accessible and adaptable for teams. With a dedicated community consistently introducing new features and resolving bugs, these tools provide a user-friendly experience. The table presented below showcases the open source tools discussed in this article. Regardless of the specific tools chosen, the end-to-end architecture is meticulously designed to facilitate the smooth operation of a robust MLOps environment.
Function | Tool | Alternative Tools
-------------------------|-----------|--------------------------
Source Code Management | GitHub | Bitbucket
Feature Store | Feast | Hopsworks
ML Pipeline | Apollo | Polyaxon, Kubeflow
Model Validation Testing | Apollo | Etiq AI, Great Expectations
Model Registry | MLflow | Neptune
Model Serving | Cortex | Seldon Core
Model Monitoring | Apollo | Prometheus, Grafana
This newsletter is just a really long way of saying: I haven't found a solution that combines model validation through testing, monitoring, and automation, so I'm building my own.
Check out Apollo, an open-source toolkit for integrity engineering that implements the process above and provides a way to extend model validation to enhance user experience.