Date: Mar 2024 - Sep 2024
Roles: Data Scientist
This project was completed while I was a consultant at Melio AI, and some of the project details have been obfuscated.
Overview
Developed and deployed a multidimensional evaluation function for an MLOps competition, as well as an example machine translation model that participants could use for inspiration. Inference was served with Kubeflow's KServe.
My Responsibilities
- Designed and implemented a weighted scoring function for evaluating translation accuracy, resource usage, efficiency, and documentation quality (a sketch of this weighting follows this list).
- Deployed an automated evaluation function that triggered on each solution submission (see the Lambda sketch below).
- Built and deployed example solutions, including a fine-tuned t5-small model served with KServe for inference (see the KServe sketch below).
- Documented processes for creating and evaluating translation models.
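
As a rough illustration of the scoring approach, the sketch below combines per-dimension scores into a single weighted value. The four dimensions come from the project itself; the weights, the normalization assumption, and the function name compute_submission_score are illustrative, not the competition's actual configuration.

```python
# Minimal sketch of a weighted scoring function. The four dimensions come
# from the project description; the weights and the assumption that every
# component is already normalized to [0, 1] (higher is better) are
# illustrative assumptions.

DEFAULT_WEIGHTS = {
    "translation_accuracy": 0.5,  # e.g. a normalized BLEU-style metric
    "resource_usage": 0.2,        # inverted so that lower usage scores higher
    "efficiency": 0.2,            # e.g. normalized inference latency/throughput
    "documentation": 0.1,         # e.g. a rubric-based review score
}


def compute_submission_score(components: dict[str, float],
                             weights: dict[str, float] = DEFAULT_WEIGHTS) -> float:
    """Combine per-dimension scores in [0, 1] into one leaderboard score."""
    missing = set(weights) - set(components)
    if missing:
        raise ValueError(f"Missing component scores: {missing}")
    return sum(weights[name] * components[name] for name in weights)


if __name__ == "__main__":
    example = {
        "translation_accuracy": 0.82,
        "resource_usage": 0.70,
        "efficiency": 0.65,
        "documentation": 0.90,
    }
    print(round(compute_submission_score(example), 3))  # single leaderboard score
```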
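The submission-triggered evaluation used AWS Lambda and DynamoDB (both in the tools list below). The sketch that follows shows one plausible shape for such a trigger; the event payload, the table name competition-leaderboard, the attribute names, and the scoring module name are assumptions for illustration only.

```python
# Hypothetical AWS Lambda handler that scores a submission and records the
# result for the leaderboard. The event shape, the DynamoDB table name
# ("competition-leaderboard"), and the attribute names are assumptions.
import json
from decimal import Decimal

import boto3

from scoring import compute_submission_score  # the weighted function sketched above (hypothetical module name)

dynamodb = boto3.resource("dynamodb")
leaderboard_table = dynamodb.Table("competition-leaderboard")  # hypothetical table name


def lambda_handler(event, context):
    # Assume the submission event carries identifiers and pre-computed
    # component scores; in practice these might come from a separate
    # evaluation job run against the participant's deployed model.
    submission = json.loads(event["body"]) if "body" in event else event
    score = compute_submission_score(submission["components"])

    leaderboard_table.put_item(
        Item={
            "team_id": submission["team_id"],
            "submission_id": submission["submission_id"],
            "score": Decimal(str(score)),  # DynamoDB requires Decimal, not float
        }
    )
    return {"statusCode": 200, "body": json.dumps({"score": score})}
```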
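For the example t5-small solution, serving could follow KServe's custom Python predictor pattern (kserve.Model plus ModelServer), loading the fine-tuned checkpoint with the Hugging Face TensorFlow classes. In the sketch below the checkpoint path, the T5 task prefix, and the request/response payload shape are assumptions, and the kserve SDK's method signatures have varied slightly across versions.

```python
# Sketch of a KServe custom predictor for a fine-tuned t5-small translation
# model using Hugging Face's TensorFlow classes. The checkpoint path, task
# prefix, and payload shape are illustrative assumptions.
from typing import Dict

from kserve import Model, ModelServer
from transformers import T5Tokenizer, TFT5ForConditionalGeneration


class TranslationModel(Model):
    def __init__(self, name: str):
        super().__init__(name)
        self.tokenizer = None
        self.model = None
        self.load()

    def load(self):
        # "./t5-small-finetuned" is a placeholder for the fine-tuned checkpoint.
        self.tokenizer = T5Tokenizer.from_pretrained("t5-small")
        self.model = TFT5ForConditionalGeneration.from_pretrained("./t5-small-finetuned")
        self.ready = True

    def predict(self, payload: Dict, headers: Dict[str, str] = None) -> Dict:
        # Expects {"instances": ["text to translate", ...]}. The task prefix
        # is illustrative; the competition's actual language pair is not
        # disclosed here.
        texts = [f"translate English to German: {t}" for t in payload["instances"]]
        inputs = self.tokenizer(texts, return_tensors="tf", padding=True)
        outputs = self.model.generate(**inputs, max_length=128)
        translations = self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
        return {"predictions": translations}


if __name__ == "__main__":
    ModelServer().start([TranslationModel("t5-small-translation")])
```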
Outcomes/Impact
- Delivered an automated evaluation process that was used to determine the competition leaderboard and rankings.
- Provided templates for competition participants to build solutions.
Tools Used
- Python
- AWS
- AWS Lambda
- AWS DynamoDB
- Docker
- HuggingFace
- TensorFlow
- Kubeflow
- KServe