Evaluation: Acme AI – Senior AI Engineer

Date: 2026-04-01
Archetype: AI Platform / LLMOps Engineer
Score: 4.2/5
URL: https://jobs.example.com/acme-ai-senior-engineer
PDF: output/cv-candidate-acme-ai-2026-04-01.pdf


A) Role Summary

| Field | Value |
| --- | --- |
| Archetype | AI Platform / LLMOps Engineer |
| Domain | Platform / Infrastructure |
| Function | Build |
| Seniority | Senior (IC4-IC5) |
| Remote | Full remote (US timezone overlap) |
| Team size | ~8 engineers |
| TL;DR | Senior AI engineer to build and scale LLM infrastructure for enterprise customers |

B) CV Match

| JD Requirement | CV Match | Source |
| --- | --- | --- |
| "Production LLM systems" | Built real-time fraud detection + LLM eval toolkit | cv.md: TechFin Corp |
| "Model monitoring and observability" | Drift detection, Grafana dashboards, retraining triggers | cv.md: ML Platform Lead |
| "Python + distributed systems" | Python, Kafka, Kubernetes, Redis | cv.md: Skills |
| "CI/CD for ML" | Reduced deploy time from 2 weeks to 4 hours | cv.md: TechFin Corp |

Gaps

| Gap | Severity | Mitigation |
| --- | --- | --- |
| "LLM-specific experience" | Medium | LLM Eval Toolkit is direct proof. Frame fraud detection as a "production ML → production LLM" progression |
| "Prompt engineering" | Low | Mention the eval toolkit's prompt-testing capabilities |

C) Level and Strategy

Detected level: Senior (IC4)
Candidate's natural level: Senior-Staff boundary

"Sell senior" plan: Lead with platform ownership at TechFin ("led a 3-person team, built MLOps tooling for 4 teams"). Frame the candidate as ready for Staff scope.

D) Comp and Demand

| Data Point | Value | Source |
| --- | --- | --- |
| Base salary range | $180-220K | Levels.fyi, similar AI infra roles |
| Total comp (with equity) | $250-320K | Glassdoor estimates |
| Demand trend | High – LLM infra is among the top 5 most in-demand specialties | LinkedIn job trends |

E) Personalization Plan

| # | Section | Current | Proposed Change | Why |
| --- | --- | --- | --- | --- |
| 1 | Summary | "Full-stack AI engineer" | "AI platform engineer focused on LLM infrastructure and observability" | Match JD language |
| 2 | TechFin bullets | Generic ML platform | Add "LLM serving" context | JD specifically mentions LLMs |
| 3 | Projects | Both listed equally | Lead with LLM Eval Toolkit | Direct LLM experience proof |

F) Interview Plan

| # | JD Requirement | Story | Situation | Task | Action | Result |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | Production LLM systems | FraudShield scaling | 10K TPS requirement | Built streaming pipeline | Kafka + ensemble + feature store | 99.7% precision, $2M saved |
| 2 | Team leadership | ML Platform team | 4 teams needed MLOps | Led 3-eng team, built platform | Registry + A/B + feature store | Deploy time 2 weeks → 4 hours |

Recommended case study: LLM Eval Toolkit – shows LLM-specific expertise and open-source impact


Keywords Extracted

LLM infrastructure, model serving, observability, ML platform, distributed systems, Python, Kubernetes, model monitoring, CI/CD, prompt engineering, evaluation, production ML, enterprise AI, scalability, reliability