AI Engineer
Original posting
ATRC's Data & AI Office is a product and platform execution team - not a traditional enterprise data function. We build and operate AI-enabled systems that create measurable operational impact across the organization. Our primary output is working software in production.
What we are building - now:
An executive reporting portal serving senior leadership, integrating live data from SAP, Microsoft 365, SuccessFactors, and internal systems across multiple operational views (finance & budget, KPIs, technology portfolio, news, wellbeing, and a personalized executive home)
A sovereign data and AI platform: data warehouse, governed pipelines, metadata catalog, and classification enforcement across all entities
Data mesh architecture with federated ownership - each entity owns its data products; the platform provides governed access
Data readiness for the organization's agentic AI platform - clean, structured, governed data so AI agents can operate on trusted sources
Operational AI agents delivering real value on top of the agentic platform (document triage, anomaly detection, automated briefings, classification automation)
AI-enabled situational awareness and crisis management systems
Data governance and classification - managing an active external vendor engagement, bridging framework deliverables into platform-enforced rules, and building the ongoing governance operations the organization requires
How we work:
Small team, high ownership - every engineer owns a module end-to-end
Two-week sprints with demonstrated output at every review
Ship and improve - working systems beat perfect designs
Core team owns all architecture and product decisions; external capacity executes within defined scope
Decisions take hours, not weeks. Daily technical choices are made by the engineer.
Feature scope decisions take 24 hours. Architecture pivots take 48 hours. Nothing sits in limbo.
Oversee five other workstreams delivered by third-party vendors within the ecosystem, for which we are ultimately accountable
This role builds the AI capabilities that operations depend on - classification automation, production agents, and the data bridge to the agentic platform.
Mission
Build, own, and continuously improve the AI capabilities across the office's production systems, working under the direction of the Principal AI Engineer. Your scope: automated data classification (remediating what's currently broken), AI-generated briefings, and operational AI agents that deliver real value in weeks.
An external partner is building an agentic platform that ATRC will use. Your role in that relationship: define the data contracts the platform consumes, evaluate the sample agents delivered, and build the production agents on top of the platform that go beyond demonstrations into real operational use. You don't build the platform - you make sure ATRC's data is ready for it and that what gets built on top of it actually works.
What This Role Owns
Data classification automation: implementing automated classification to remediate current failures, embedding classification into data pipelines alongside the Governance Lead
Operational AI agents: building production agents on top of the agentic platform - going beyond the sample agents the external partner delivers into real operational workflows
Agentic platform data contracts: defining what data the platform needs, in what format, with what quality guarantees - working with the Principal AI Engineer
AI service implementation: FastAPI service around LLM APIs with versioned prompt templates (see the sketch after this list)
Classification and briefing prompts: structured prompts returning validated JSON with tags, confidence levels, source attribution
Prompt versioning: templates in configuration, editable without code changes
Observability: every LLM call logged with input hash, model version, output, latency, token count
Fallback logic: graceful degradation when LLM APIs are unavailable
Quality evaluation: running precision/recall evaluations against human reviewer samples, reporting results, iterating prompts
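To make the service shape above concrete, here is a minimal sketch, assuming Pydantic v2 and an in-process stub in place of the real LLM client. Names such as call_llm, PROMPTS, and the /classify route are illustrative assumptions, not ATRC's actual code; the point is the pattern: a versioned prompt template kept in configuration, Pydantic-validated JSON output, per-call logging, and a graceful fallback path.

    import hashlib
    import json
    import logging
    import time

    from fastapi import FastAPI
    from pydantic import BaseModel, ValidationError

    app = FastAPI()
    log = logging.getLogger("ai-service")

    # Prompt templates are keyed by version and would live in configuration,
    # editable without a code change. Inlined here to keep the sketch runnable.
    PROMPTS = {"classify/v3": "Classify the following text; answer as JSON "
                              "with tags, confidence, source: {text}"}

    class Classification(BaseModel):
        tags: list[str]
        confidence: float          # 0.0 - 1.0
        source: str                # attribution for the classification

    async def call_llm(prompt: str) -> tuple[str, dict]:
        # Stand-in for the real LLM client: returns (raw JSON, usage metadata).
        return ('{"tags": ["finance"], "confidence": 0.92, "source": "stub"}',
                {"model": "stub-model", "tokens": len(prompt) // 4})

    @app.post("/classify", response_model=Classification)
    async def classify(text: str, prompt_version: str = "classify/v3"):
        start = time.monotonic()
        try:
            raw, usage = await call_llm(PROMPTS[prompt_version].format(text=text))
            result = Classification.model_validate_json(raw)
        except (TimeoutError, ValidationError):
            # Fallback: degrade gracefully when the LLM API is unavailable or
            # returns malformed output, instead of failing the request.
            result = Classification(tags=["unclassified"], confidence=0.0,
                                    source="fallback")
            usage = {"model": "n/a", "tokens": 0}
        # Observability: input hash, model version, output, latency, token count.
        log.info(json.dumps({
            "input_sha256": hashlib.sha256(text.encode()).hexdigest(),
            "model": usage["model"],
            "output": result.model_dump(),
            "latency_ms": round((time.monotonic() - start) * 1000),
            "tokens": usage["tokens"],
            "prompt_version": prompt_version,
        }))
        return result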
Key Decisions
Prompt implementation - few-shot examples, output parsing, error handling
Confidence threshold tuning based on evaluation results (see the evaluation sketch after this list)
Context window packing - what data fits within token budgets
Fallback behavior when APIs degrade
Agent evaluation - whether an external partner's sample agent is production-ready or needs rework
Bug triage for AI components - autonomous fix vs. escalate to Principal AI Engineer
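A hedged sketch of that evaluation-and-tuning loop: score model output against human reviewer labels, then sweep the confidence threshold that separates auto-applied classifications from items routed to human review. The sample records and field names are invented for illustration.

    # Toy evaluation set: model prediction + confidence vs. human reviewer label.
    samples = [
        {"pred": "finance", "conf": 0.94, "human": "finance"},
        {"pred": "hr",      "conf": 0.61, "human": "finance"},
        {"pred": "finance", "conf": 0.83, "human": "finance"},
        {"pred": "hr",      "conf": 0.40, "human": "hr"},
    ]

    def precision_recall(threshold: float) -> tuple[float, float]:
        # Predictions below the threshold go to human review;
        # only those at or above it are auto-applied.
        kept = [s for s in samples if s["conf"] >= threshold]
        correct = sum(s["pred"] == s["human"] for s in kept)
        precision = correct / len(kept) if kept else 1.0  # auto-applied and right
        recall = correct / len(samples)  # share of all items handled correctly
        return precision, recall

    # Raising the threshold trades recall (coverage) for precision.
    for t in (0.5, 0.7, 0.9):
        p, r = precision_recall(t)
        print(f"threshold={t:.1f}: precision={p:.2f} recall={r:.2f}")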
Does Not Do
Define AI architecture or agent design patterns (Principal AI Engineer)
Build the agentic platform itself (external partner)
Build the backend API or data pipelines
Fine-tune or train models
Define governance policies (Governance Lead)
Ideal Candidate
Has shipped an LLM-based feature that non-technical users depend on daily - and has been responsible when it breaks. Knows the hardest part of applied AI is the fallback, the observability, and the human review loop - not the prompt. Can evaluate someone else's agent demo and quickly identify whether it's production-ready or held together with string. Works well under a senior AI lead - takes architectural direction and executes with high quality and speed. Not attached to a particular model - the job is reliable output.
LLM APIs (Claude, GPT-4, open-weight models) - structured output, JSON mode, system prompts
Prompt engineering for classification - zero-shot and few-shot
Python - async API calls, retry logic, exponential backoff (see the sketch after this list)
LLM evaluation - precision/recall, human-AI agreement scoring
Structured output - JSON schema enforcement, Pydantic validation
Open-weight / sovereign model APIs (Falcon, Llama, or equivalent)
Token budgeting and context window management
AI observability - output quality monitoring, anomaly detection
FastAPI and Docker
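For the async Python line above, a minimal sketch of a retried call with exponential backoff and jitter, using httpx; the endpoint URL and response shape are placeholders, not a real API.

    import asyncio
    import random

    import httpx

    async def call_with_backoff(prompt: str, retries: int = 5) -> str:
        # Retries transient failures with exponential backoff plus jitter.
        async with httpx.AsyncClient(timeout=30) as client:
            for attempt in range(retries):
                try:
                    resp = await client.post("https://llm.example/v1/complete",
                                             json={"prompt": prompt})
                    resp.raise_for_status()
                    return resp.json()["text"]
                except (httpx.HTTPStatusError, httpx.TransportError):
                    if attempt == retries - 1:
                        raise
                    # 1s, 2s, 4s, 8s ... plus jitter to avoid thundering herds.
                    await asyncio.sleep(2 ** attempt + random.random())

    # asyncio.run(call_with_backoff("Summarize today's KPI movements."))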
Application managed by ZooLATECH