Careers

Build AI that
actually ships.

We're assembling a founding team of exceptional AI engineers. Small team, real clients, hard problems. Remote-friendly. You'll build things that go to production, not demos.

View open roles → General enquiry
🌱

You'd be joining as a founding team member.

Aevon.ai is early. The first 10 engineers will shape how we build, how we deliver, and what we stand for. You'll have equity, autonomy, and a direct line to clients. If you want to be part of something from the ground up, this is it.

Why Aevon.ai

What it's actually
like here.

🚀

Production, not prototypes

Every engagement ships to production. We don't do POCs that die in a drawer. Your work will be running in real enterprise environments within weeks.

🧠

Deep AI, not wrapper code

We work at the infrastructure and model layer: GPU clusters, fine-tuning, inference optimisation, eval frameworks. Not prompt-engineering wrappers.

🌍

US enterprise exposure

You'll work directly with Series B–D US tech companies. Client visibility from day one. Your name on the deliverable.

📈

Equity that means something

Early team members receive meaningful equity stakes with standard vesting. We're building toward a real outcome, and you'll share in it.

Life at Aevon.ai

How we work.

Remote-friendly. Outcome-first.

We work in pods, move fast, and communicate with clarity. Remote-friendly for the right people: we care about output, not office hours. We don't believe in performative presence.

Pod structure, real ownership

You're not a ticket-taker. Each pod has genuine ownership over architecture decisions, delivery timelines, and quality standards. We hire people who can lead, then let them lead.

Craft matters here

We care about well-designed systems, clean evaluation frameworks, and honest technical recommendations. We don't cut corners to ship fast. Quality is how we retain clients.

You'll grow fast

Early on at Aevon.ai, you'll touch more of the AI stack than you would in three years at a large company. Platform partners, enterprise clients, hard infrastructure problems.

Open Roles · India & Remote

Current openings.

All roles are India-based with remote flexibility. We are building the founding engineering team and moving quickly.

Senior AI Architect
Architecture · Founding team · India · Remote-friendly
Apply →
The role

This is the most critical hire. You will be the technical backbone of Aevon.ai's first delivery pod. You own the architecture, run technical discovery with US enterprise clients, and set quality standards for every engineer we hire after you.

What you'll do
  • Design and own end-to-end AI system architecture for enterprise clients
  • Lead technical discovery calls directly with client CTOs and engineering heads
  • Make and document model selection, infrastructure, and fine-tuning decisions
  • Mentor a pod of 2–4 LLM and MLOps engineers
  • Build Aevon.ai's internal methodology and reusable accelerators
  • Contribute to proposals, scoping, and technical sections of SOWs
What we're looking for
  • 5–8 years of experience with ML systems in production
  • Deep LLM experience: RAG, fine-tuning, evaluation, inference optimisation
  • GPU/compute knowledge: cluster setup, CUDA fundamentals, cost modelling
  • Experience with Databricks, Snowflake, or equivalent data platform
  • Can run a client meeting without hand-holding; composed under pressure
  • Strong architectural thinking: you explain tradeoffs, not just solutions
  • Published models, open-source contributions, or shipped production LLM systems
Preferred background
  • Ex-Google, Microsoft, or Amazon AI/ML teams in India
  • Hugging Face model contributors or active open-source maintainers
  • Previous SI, consulting, or client-delivery experience
Apply for this role → Or email [email protected] with subject: Senior AI Architect
LLM / NLP Engineer
Engineering · Founding team · India · Remote-friendly
Apply →
The role

You are the hands. You build the RAG pipelines, run the fine-tuning jobs, design the eval frameworks, and optimise inference until latency and cost targets are met. Production-grade LLM engineering, not demos.

What you'll do
  • Build and optimise RAG pipelines for enterprise document corpora
  • Run fine-tuning jobs on client proprietary datasets (LoRA, QLoRA, full fine-tune)
  • Design evaluation frameworks: accuracy, latency, cost, hallucination rate
  • Optimise inference: quantisation, batching, speculative decoding
  • Implement model monitoring and drift detection in production
  • Write clear technical documentation clients can hand to their own teams
What we're looking for
  • 3–5 years of experience; at least 1 year on LLM or NLP production systems
  • Hugging Face Transformers: not just API calls, actual model manipulation
  • Experience deploying and serving open-source LLMs (Llama, Mistral, Qwen, Gemma)
  • RAG architecture: chunking strategies, embedding models, vector DBs, reranking
  • Python fluency: clean, testable, reviewable code
  • Production mindset: you think about failure modes and monitoring from day one
Nice to have
  • Published models or fine-tunes on Hugging Face Hub
  • Kaggle Master or active ML competition history
  • Experience with vLLM, TGI, TensorRT-LLM for inference serving
Apply for this role → Or email [email protected] with subject: LLM Engineer
MLOps / Infrastructure Engineer
Infrastructure · Founding team · India · Remote-friendly
Apply →
The role

You own the infrastructure layer. GPU clusters, Kubernetes orchestration, CI/CD pipelines for ML workloads, cost monitoring, and the platform integrations that make the whole stack run reliably at scale.

What you'll do
  • Provision and manage GPU clusters on CoreWeave, Nebius, Lambda Labs, AWS
  • Build and maintain ML CI/CD pipelines: training, evaluation, deployment
  • Kubernetes orchestration for inference workloads: autoscaling, cost caps
  • Implement infrastructure monitoring, alerting, and runbooks
  • Cloud cost modelling and optimisation across multi-cloud deployments
  • Build reusable infrastructure templates across client engagements
What we're looking for
  • 3–5 years in MLOps, platform engineering, or cloud infrastructure
  • Kubernetes in production, not just tutorial experience
  • At least one cloud platform deeply: AWS, GCP, or Azure ML
  • GPU infrastructure experience: provisioning, CUDA drivers, resource limits
  • Infrastructure-as-code: Terraform or Pulumi
  • Cost-optimisation mindset: you treat GPU hours like cash
Nice to have
  • Ex-AWS, GCP, or Azure ML platform team background
  • Familiarity with MLflow, Weights & Biases, or Ray
  • CoreWeave or Nebius deployment experience
Apply for this role → Or email [email protected] with subject: MLOps Engineer
AI Solutions Engineer
Client-facing · Founding team · India · Remote-friendly
Apply →
The role

You are the bridge between the client's business problem and Aevon.ai's technical capability. You run discovery workshops, write the technical sections of proposals, and own client success through delivery. Half engineer, half trusted advisor.

What you'll do
  • Lead discovery workshops with client engineering and business teams
  • Translate business requirements into scoped technical proposals
  • Own client communication and status reporting across active engagements
  • Identify expansion opportunities within existing accounts
  • Build reusable proposal templates and case study frameworks
  • Support pre-sales: demos, POC scoping, technical Q&A on calls
What we're looking for
  • 4–6 years in a pre-sales, solutions engineering, or technical delivery role
  • Technical depth: you can hold a serious architecture conversation without notes
  • Strong written communication: your proposals get signed
  • Comfortable in a room with a CTO you've never met before
  • Experience with AI/ML projects either in delivery or pre-sales context
  • Commercial awareness: you understand deal economics, not just technical scope
Nice to have
  • Ex-Databricks, Snowflake, or Hugging Face Solutions Architect or SE
  • Experience closing or managing $100K+ technical engagements
  • Background at a consulting firm or systems integrator
Apply for this role → Or email [email protected] with subject: AI Solutions Engineer

Don't see your exact role?

We're building fast. If you're exceptional at something we haven't listed, we want to hear from you.

Send a general application →
Our process

Fast, fair,
no games.

We respect your time. The entire process takes 7–10 days from first contact to offer.

01

Screening call

20 minutes. We learn about you, you learn about us. No trick questions. We ask what you're optimising for in your next role.

02

Technical interview

60–90 minutes. Architecture design, system thinking, and one realistic client scenario. We assess depth, not trivia.

03

Founder conversation

30 minutes with Dharmesh. Not another interview, but a conversation about the company, your role, and whether this is right for both of us.

04

Offer within 24 hours

If you're an A-player, we move the same day. Written offer with full terms. No exploding deadlines. You have reasonable time to decide.

Ready to build something
that ships?

Send your CV and a note on what you've built to [email protected]

Email us →