At Algolia, we’re proud to be a pioneer and market leader in AI Search, empowering 17,000+ businesses to deliver blazing-fast, predictive search and browse experiences at internet scale. Every week, we power over 30 billion search requests — four times more than Microsoft Bing, Yahoo, Baidu, Yandex, and DuckDuckGo combined.
In 2021, we raised $150 million in Series D funding, quadrupling our valuation to $2.25 billion. This strong foundation enables us to keep investing in our market-leading platform and serving incredible customers like Under Armour, PetSmart, Stripe, Gymshark, and Walgreens.
The AI Research team at Algolia combines fundamental research with product engineering to deliver customer-facing AI-powered features.
The team is highly cross-functional, made up of PhD researchers, full-stack engineers, and infrastructure specialists working together to explore new ideas, validate impact, and bring successful research outcomes into production. While the work is research-driven, the output is real, customer-facing systems.
The Opportunity
We are looking for an embedded Senior Site Reliability Engineer to join the AI Research team as a full member of the group. In this role, you will support both the research and product-engineering aspects of the team by ensuring the stability, scalability, and operability of the infrastructure that enables this work.
This is a classic SRE role focused on cloud-first, service-oriented architectures running on Google Cloud Platform. While the team builds AI-powered systems, AI or ML experience is not required for this role. Our priority is strong SRE fundamentals, experience operating production services, and comfort working in an environment with ambiguity and high ownership.
You will play an important role in day-to-day execution as well as in longer-term (12-month) planning, helping shape how the team builds and operates its platforms over time.
What You’ll Work On
Platform Reliability & Enablement
- Support and evolve the reliability of platforms used by the AI Research team. Examples of our infrastructure work to date include:
- A production inference service (embedding model serving API)
- AI data feature store
- Internal tools used for novel research and experimentation
- Infrastructure that combines the above for offline testing of customer deployments, using agents to automatically discover configuration improvements
- Ensure production services meet expectations for availability, latency, and operational readiness, particularly for systems that sit on customer-critical paths
- Design infrastructure and operational patterns that prioritize iteration speed while maintaining appropriate safeguards for production systems
- Work closely with researchers and engineers in a cross-functional setting, acting as an advisor on infrastructure, reliability, and operational concerns
- Participate directly in team planning and execution, from early exploration through production rollout
- Help researchers self-serve infrastructure safely and effectively, without becoming a bottleneck
- Build and maintain Kubernetes-based services on GCP using infrastructure-as-code and GitOps (Terraform, ArgoCD)
- Own and improve CI/CD pipelines for services written primarily in Go, with some Python-based services
- Design and operate observability systems using tools such as Datadog
- Participate in an on-call rotation (relatively light), responding to incidents and helping improve systems over time
What You’ll Bring
- Strong experience operating cloud-first infrastructure
- Hands-on experience running production services on Kubernetes
- Proficiency with infrastructure-as-code (Terraform) and CI/CD systems
- Experience supporting production services written in Go (Python experience is a plus)
- Solid grounding in service reliability, incident response, and operational best practices
- Comfort working in environments with ambiguity, where problems are not always well-defined upfront
Nice to Have
- Experience supporting mission-critical internal platforms
- Exposure to research or experimentation-heavy environments
- Familiarity working alongside researchers or highly specialized domain experts
Not Required
- AI, ML, or deep learning experience
- Model training, tuning, or ML framework expertise (e.g. PyTorch, JAX)
This role may not be a good match if:
- You are only interested in maintaining existing infrastructure without contributing to what is being built
- You want to work exclusively on customer-facing product features
- You are looking to avoid on-call or production systems entirely
- You are seeking narrowly defined work with low ambiguity and limited ownership
- You want to build or train AI models yourself rather than enable the systems around them
Why Join
- High Impact: Your work directly enables new AI-powered capabilities that reach customers
- High Agency: You’ll help shape what gets built, how it’s built, and whether it’s worth building
- Strong Peers: Collaborate with experienced SREs, engineers, and PhD researchers
- Growth: Build expertise in research-adjacent infrastructure and platform reliability
- Flexibility: Australia-based role with remote-friendly culture; occasional off-hours collaboration may be required
FLEXIBLE WORKPLACE STRATEGY:
Algolia’s flexible workplace model is designed to empower all Algolians to fulfill our mission to power search and discovery with ease. We place an emphasis on an individual’s impact, contribution, and output, over their physical location. Algolia is a high-trust environment and many of our team members have the autonomy to choose where they want to work and when.
We have a global presence with offices in Paris, NYC, London, Sydney and Bucharest, however we also offer many of our team members the option to work remotely either as fully remote or hybrid-remote employees. Positions listed as "Remote" are only available for remote work within the specified country. Positions listed within a specific city are only available in that location - depending on the role it may be available with either a hybrid-remote or in-office schedule.
WE’RE LOOKING FOR SOMEONE WHO CAN LIVE OUR VALUES:
- GRIT - Problem-solving and perseverance in an ever-changing, fast-growing environment.
- TRUST - Willingness to trust our co-workers and to take ownership.
- CANDOR - Ability to receive and give constructive feedback.
- CARE - Genuine care about other team members, our clients and the decisions we make in the company.
- HUMILITY - Aptitude for learning from others, putting ego aside.
We’re looking for talented, passionate people to help build the world’s best search and discovery technology. We value autonomy, diversity, and collaboration. We’re committed to creating an inclusive workplace where everyone is respected and supported, regardless of race, age, ancestry, religion, sex, gender identity, sexual orientation, marital status, color, veteran status, disability, or socioeconomic background.
IMPORTANT NOTICE FOR CANDIDATES - Recruitment Fraud Notice
We’ve recently seen an increase in recruitment scams targeting job seekers. To help protect yourself, please keep the following in mind:
- Our open positions may appear on third-party job boards, but the best way to apply safely is directly through our careers page.
- All genuine communication from Algolia will come from an @algolia.com email address. If you receive an email from someone claiming to work at Algolia who does not have an @algolia.com email address, please do not respond or share any personal information.
- We’ll never ask for payments, purchases, or financial details during the hiring process.
READY TO APPLY?
If you share our values and our enthusiasm for building the world’s best search & discovery technology, we’d love to review your application!