Principal AI Engineer
About Apollo.io
Apollo.io is the leading go-to-market solution for revenue teams, trusted by over 500,000 companies and millions of users globally, from rapidly growing startups to some of the world's largest enterprises. Our unified, AI-native platform helps sales and marketing teams prospect smarter, automate workflows, and close more deals.
Backed by world-class investors including Sequoia and Bain Capital, and recently valued at $1.6B following our Series D, Apollo is one of the fastest-growing SaaS companies in the world.
Why This Role Matters
We are entering a new phase of AI-native growth and are looking for a Principal AI Engineer to lead the design and deployment of cutting-edge agentic systems, AI assistants, and LLM-powered features. This role is pivotal to Apollo’s AI roadmap: you will own the architecture and productionization of the intelligent systems that power the workflows of thousands of GTM teams globally.
You’ll be a technical thought leader on the AI Engineering team and work alongside product, backend, and platform leaders to drive forward Apollo’s competitive edge in applied AI.
What You'll Own & Deliver
End-to-End Agentic Systems
Autonomous AI Agents: Architect and lead the development of multi-agent systems capable of long-horizon planning, reasoning, and API orchestration.
Workflow Automation: Build reusable agentic components that integrate deeply into sales and marketing processes.
LLM Platformization: Own and evolve our in-house platform for scalable, low-latency, and cost-efficient LLM and agent deployments.
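To give candidates a concrete flavor of this work, here is a minimal, illustrative sketch of the kind of agent loop described above, written in Python. The call_llm stub and the tool names are hypothetical placeholders, not Apollo's internal APIs.

    import json
    from typing import Callable

    # Hypothetical tool registry; in a real system these would wrap internal APIs.
    TOOLS: dict[str, Callable[[dict], dict]] = {
        "search_contacts": lambda args: {"contacts": []},
        "draft_email": lambda args: {"draft": ""},
    }

    def call_llm(messages: list[dict]) -> str:
        # Placeholder for a model call; expected to return a JSON action such as
        # {"tool": "search_contacts", "args": {...}} or {"final": "..."}.
        raise NotImplementedError

    def run_agent(goal: str, max_steps: int = 5) -> str:
        # The running message list doubles as the agent's short-term memory/state.
        state = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            action = json.loads(call_llm(state))
            if "final" in action:              # the model decided the goal is met
                return action["final"]
            result = TOOLS[action["tool"]](action.get("args", {}))  # API orchestration
            state.append({"role": "tool", "content": json.dumps(result)})
        return "step budget exhausted"

In production these loops are multi-agent, long-horizon, and instrumented end-to-end; the sketch only shows the plan-act-observe skeleton.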
AI Assistants and Search
Conversational AI & UI: Lead design of interfaces powered by natural language understanding and retrieval-augmented generation (RAG).
Semantic & Personalized Search: Build embedding-based, intent-aware search and personalization systems tuned to business user needs.
Email Intelligence: Drive innovation in personalized outreach generation using context-aware generation pipelines.
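As a rough illustration of the retrieval side of this work, the sketch below shows embedding-based retrieval feeding a RAG prompt. The embed function is a hypothetical stand-in for an embedding model, and a vector database would replace the in-memory matrix at scale.

    import numpy as np

    def embed(text: str) -> np.ndarray:
        # Placeholder for an embedding-model call; assumed to return a unit-norm vector.
        raise NotImplementedError

    def build_index(docs: list[str]) -> np.ndarray:
        # Stack document embeddings into a matrix; a vector DB plays this role at scale.
        return np.stack([embed(d) for d in docs])

    def retrieve(query: str, docs: list[str], index: np.ndarray, k: int = 3) -> list[str]:
        # For unit-norm vectors, cosine similarity reduces to a dot product.
        scores = index @ embed(query)
        top = np.argsort(scores)[::-1][:k]
        return [docs[i] for i in top]

    def rag_prompt(query: str, docs: list[str], index: np.ndarray) -> str:
        # Retrieved passages are prepended as grounding context for the generator.
        context = "\n\n".join(retrieve(query, docs, index))
        return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

Intent-aware and personalized search layers additional signals (role, account history, past engagement) on top of this basic retrieval step.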
Production-Grade Applied AI
Latency & Cost Optimization: Tune inference pipelines, caching layers, and model selection logic for high-scale, cost-aware performance.
Evaluation at Scale: Define and drive robust offline and online testing methodologies (A/B, sandboxing, human evals) across agents and LLM flows.
Feedback Loops: Architect human-in-the-loop systems and telemetry to improve accuracy, UX, and explainability over time.
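For illustration, the sketch below shows two of the simplest levers named above: a response cache and a routing rule that sends lightweight prompts to a cheaper model. The model names, prices, and the cached_completion stub are hypothetical, not a description of Apollo's stack.

    import functools

    # Hypothetical per-model pricing used by a toy routing rule.
    MODELS = {"small": {"cost_per_1k_tokens": 0.15}, "large": {"cost_per_1k_tokens": 3.00}}

    def pick_model(prompt: str) -> str:
        # Toy heuristic: short prompts go to the cheaper model.
        return "small" if len(prompt) < 2000 else "large"

    @functools.lru_cache(maxsize=4096)
    def cached_completion(prompt: str, model: str) -> str:
        # Placeholder for the provider call; repeated prompts are served from cache.
        raise NotImplementedError

    def complete(prompt: str) -> str:
        return cached_completion(prompt, pick_model(prompt))

Real model-selection logic is driven by offline and online evaluation (A/B tests, sandboxed replays, human ratings) rather than prompt length.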
What We’re Looking For
Technical & Production Depth
10+ years of software engineering experience, including at least 3 years of hands-on work (2023 to present) on applied LLM or agentic AI systems.
Proven success in deploying LLM-powered products used by real users at scale, not just prototypes or internal tools.
Deep backend & systems engineering expertise with Python, distributed systems, and scalable APIs.
Familiarity with LangChain, LlamaIndex, or similar orchestration frameworks.
Experience with RAG pipelines, vector DBs, embedding models, and semantic search tuning.
Experience managing performance across model providers and hosted platforms (e.g., AWS Bedrock, OpenAI, Anthropic).
Agentic Systems & Prompt Engineering
Demonstrated experience building multi-step agents, planning workflows, chaining reasoning steps, and integrating APIs with agent memory/state.
Comfort with advanced prompting strategies, few-shot and chain-of-thought reasoning, and embedding retrieval setups.
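By way of example, here is a small sketch of a few-shot, chain-of-thought prompt builder; the lead-qualification task and the example text are invented purely for illustration.

    # Hypothetical few-shot examples for a lead-qualification step.
    FEW_SHOT_EXAMPLES = [
        {
            "lead": "VP of Sales at a 2,000-person software company evaluating outbound tooling",
            "reasoning": "Senior buyer, large team, active evaluation -> strong fit.",
            "label": "qualified",
        },
        {
            "lead": "Student requesting a free trial for a class project",
            "reasoning": "No budget and no buying intent -> poor fit.",
            "label": "not qualified",
        },
    ]

    def build_prompt(lead_description: str) -> str:
        # Each example shows its reasoning before the label, nudging the model to
        # produce a chain of thought followed by a structured answer.
        parts = ["Classify each lead. Think step by step, then give a label.\n"]
        for ex in FEW_SHOT_EXAMPLES:
            parts.append(f"Lead: {ex['lead']}\nReasoning: {ex['reasoning']}\nLabel: {ex['label']}\n")
        parts.append(f"Lead: {lead_description}\nReasoning:")
        return "\n".join(parts)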
Evaluation, Monitoring & CI for AI
Strong understanding of AI system evaluation, human ratings, A/B experimentation, and feedback loop pipelines.
Experience designing safety-aware, reliable LLM systems in production environments.
Experience owning logging, monitoring, and observability for live AI systems.
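As a minimal sketch of the observability piece, the context manager below emits one structured log line per LLM call; the field names are illustrative, not a fixed schema.

    import json
    import logging
    import time
    from contextlib import contextmanager

    logger = logging.getLogger("llm_requests")

    @contextmanager
    def observe(request_id: str, model: str):
        # Wrap an LLM call and log latency, model, and outcome as structured JSON.
        record = {"request_id": request_id, "model": model, "status": "ok"}
        start = time.monotonic()
        try:
            yield record        # callers attach token counts, user feedback, etc.
        except Exception:
            record["status"] = "error"
            raise
        finally:
            record["latency_ms"] = round((time.monotonic() - start) * 1000, 1)
            logger.info(json.dumps(record))

Logs like these feed A/B dashboards and human-rating queues, closing the feedback loop described above.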
Leadership & Strategic Impact
Principal-Level Ownership: You thrive in ambiguous environments, define company-wide roadmaps, drive the most important engineering decisions, and mentor others. You lead from the front.
AI-Native Mentality: You leverage AI to ship faster and smarter, and champion automation across engineering workflows.
Applied Focus: You prioritize impact over novelty. You’re deeply pragmatic in your application of AI research to product features.
What Makes This Role Different
A Role with Real Autonomy
You’ll own complex systems end-to-end, from LLM prompt design and API orchestration to production deployment and A/B testing at scale. You’re not building demos; you’re shipping critical infrastructure that millions rely on.
A Product-First AI Culture
We believe in AI that moves the needle. Your work will directly shape Apollo’s AI Assistants, scoring models, research agents, and automation flows: the products that define our differentiation in the market.