




Summary: Corva is seeking a Senior Backend Developer to design, build, and maintain backend systems for data pipelines and automation, integrating AI/ML capabilities.

Highlights:
1. Work on efficient, well-documented backend services and automation pipelines
2. Integrate AI/ML models and LLM-based features into backend services
3. Dive into new technologies and drive innovation in AI and data engineering

**About Corva**

Corva has built a first-of-its-kind energy app store on a bedrock of best-in-class technologies, data pipelines, and a secure and scalable architecture. Our energy solutions solve today's toughest well delivery challenges, from well design through drillout. The ever-evolving platform is not only future-proof for digitizing operations but also your toolkit for accelerating sustainability and energy transition goals. Our platform is built for speed and reliability and delivers unmatched features and capabilities. Corva is powering worldwide innovation by driving efficiency, productivity, and profitability with our energy solutions.

**Mission**

*Corva's mission is to accelerate the future of energy.*

**Values**

**Boldness**: *Corvanauts have the confidence and courage to question the status quo in the products we make and the relationships we cultivate.*

**Own End-to-End**: *We take ownership of what we start and see it through to completion with trust and dependability.*

**Transparency**: *It's crucial to be open, honest, and consistent in sharing updates and data with customers and colleagues. We value the free flow of information and data to make better decisions.*

**Bias Action**: *Corvanauts don't sit still - our default mode is taking action! We make progress through high-quality iterations. Failure is built into the process, and success is defined by the number of shots on goal.*

We are looking for a **Senior Backend Developer** to join our Data Ops team. You will design, build, and maintain the backend systems that power our data pipelines, automation workflows, and internal tooling. This role blends deep Python expertise with cloud-native architecture, a pragmatic approach to automation, and a growing focus on integrating AI/ML capabilities into our platform. You will work closely with data engineers, front-end developers, and product stakeholders to ship reliable, scalable software.

**Technology Stack**

Python 3.11–3.14, AWS (Lambda, ECS, S3, Step Functions, CloudWatch), Kubernetes, MongoDB, Redis, Apache Kafka, Pytest, GitHub Actions / Jenkins, Docker, SciPy, NumPy, pandas, scikit-learn, LLM APIs (OpenAI / Anthropic)

**Responsibilities & Duties**

— Architect and deliver efficient, well-documented, and highly readable backend services that set the quality bar for the team.
— Design and maintain automation pipelines (CI/CD, scheduled jobs, event-driven workflows) that reduce manual effort and improve reliability.
— Build lightweight internal dashboards, admin panels, or API-driven front-end components using frameworks such as React, Vue, or Streamlit to surface data and system health to stakeholders.
— Integrate AI/ML models and LLM-based features into backend services, including prompt engineering, embeddings pipelines, and retrieval-augmented generation (RAG) patterns.
— Dive into new technologies and product disciplines, driving innovation and staying current with the evolving AI and data engineering landscape.
— Define development plans based on project requirements and ensure timely delivery while remaining flexible to changing priorities.
— Oversee the stability of your services, monitoring system health, uptime, and performance post-release through observability tooling (CloudWatch, Datadog, or similar).
— Lead code reviews with peers, fostering a culture of continuous improvement, knowledge sharing, and engineering best practices.

**Qualifications & Skills**

**Required**

— 5+ years of hands-on Python development on large-scale, production systems.
— Strong experience with NoSQL databases (MongoDB preferred) and the ability to design performant data models.
— Practical knowledge of AWS services and cloud-native design patterns.
— Experience building or maintaining CI/CD pipelines, automated testing suites, and infrastructure-as-code.
— Comfortable presenting ideas and technical details clearly to cross-functional teams.

**Nice to Have**

— Familiarity with front-end frameworks (React, Vue, or Streamlit) for building internal tools or dashboards.
— Hands-on experience with AI/ML workflows: training pipelines, model serving, or integrating LLM APIs.
— Knowledge of Kubernetes for container orchestration and scaling.
— Experience with event-driven architectures using Kafka or similar streaming platforms.
— Contributions to open-source projects or a visible engineering blog / portfolio.

**What We Offer**

— Work on a great tech stack (app platform) with truly big data (processing 1 TB every day)
— Product company with a long-term vision
— Project exposure and ownership that impacts our users, product, and business
— Medical insurance
— Sports benefit


