AI/ML Lifecycle Automation at the Edge
Lab overview
This project showcases how AI and edge computing can be combined to address real-world customer use cases. The demo highlights the practical application of AI in resource-constrained environments, where lightweight topologies and platforms such as Single Node OpenShift (SNO) and Red Hat Device Edge are essential.
During the demo, we will build a solution for the automotive industry, emphasizing the importance of automation, since dedicated operations teams are not feasible at edge sites. To address this challenge, the solution uses key components of Red Hat OpenShift AI to support the entire AI/ML lifecycle at the edge: model training, data science pipelines, model serving, and model monitoring.
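As a preview of the model serving interaction covered in later chapters, the sketch below shows how an edge application might query a model deployed through the RHOAI serving stack using the KServe v2 (Open Inference Protocol) REST API. The endpoint URL, model name, input name, and tensor shape are illustrative assumptions, not values from this lab.

```python
import json
import urllib.request

# Hypothetical inference endpoint exposed by the model server at the edge site.
# Replace the host, model name, input name, and shape with values from your deployment.
ENDPOINT = "https://edge-site.example.com/v2/models/battery-model/infer"

# KServe v2 / Open Inference Protocol request body: one FP32 input tensor.
payload = {
    "inputs": [
        {
            "name": "input-0",
            "shape": [1, 4],
            "datatype": "FP32",
            "data": [0.91, 35.2, 4.1, 0.87],  # example sensor readings
        }
    ]
}

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Send the request and print the model's prediction.
with urllib.request.urlopen(request) as response:
    result = json.load(response)
    print(result["outputs"][0]["data"])
```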
Lab contents
To bring this demo to life, several technologies are used throughout the lab. The table below summarizes the chapters and the time allocated to each:
| Chapter | Duration | Contents |
|---|---|---|
| AI & Edge Use case | 10 min | |
| RHOAI configuration | 15 min | |
| Model Training | 15 min | |
| Model Serving | 10 min | |
| DataScience Pipelines | 10 min | |
| Battery Management System | 10 min | |
| Model Monitoring | 5 min | |
| Charging optimization | 15 min | |