Job Description
Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.
athenahealth is a progressive and innovative U.S. health-tech leader, delivering cloud-based solutions that improve clinical and financial performance across the care continuum. Our modern, open ecosystem connects care teams and delivers actionable insights that drive better outcomes. The company was acquired by Bain Capital in a $17B deal. We foster a values-driven culture focused on flexibility, collaboration, and work-life balance.
Headquartered in Boston, we also have offices in Atlanta, Austin, Belfast, and Burlington, as well as in Bangalore, Chennai, and Pune in India.
Position Summary: We’re looking for a Senior Data Engineer to join our Microservices Data Ingestion Platform (MDIP) team within the Datalake Zone in Bangalore. This team builds and maintains the data infrastructure that enables seamless data flow from microservices datastores to our datalake. We specialize in continuous data streaming, ETL/ELT processes, and ensuring data quality and reliability for downstream analytics and reporting.
Responsibilities
Collaborate with microservices teams to understand schemas and integration needs
Design low-impact data extraction strategies
Implement CDC solutions for synchronization (see the CDC sketch after this list)
Ensure consistency across distributed systems
Tune Kafka performance and throughput (see the producer-tuning sketch after this list)
Design cost-effective S3 storage with lifecycle policies (see the lifecycle-policy sketch after this list)
Build automated testing frameworks for pipeline validation
Ensure compliance with healthcare regulations (HIPAA, etc.)
Design and maintain scalable data transfer pipelines
Build Kafka-based streaming pipelines from PostgreSQL RDS to Snowflake
Optimize S3-based data staging and transformation
Configure Snowpipe for automated data loading (see the Snowpipe sketch after this list)
Implement validation, alerting, monitoring, and error handling
Explore new technologies for scalable, high-performance solutions
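To ground the CDC and PostgreSQL-to-Kafka items above, here is a minimal sketch of one common approach: registering a Debezium PostgreSQL connector with Kafka Connect so row-level changes from an RDS database stream into Kafka topics. The Connect endpoint, connector name, credentials, and table list are hypothetical placeholders, not details taken from this posting.

```python
# Hypothetical sketch: register a Debezium PostgreSQL connector with Kafka Connect
# so row-level changes in an RDS database are streamed to Kafka topics.
# Hostnames, credentials, and table names are placeholders.
import requests

CONNECT_URL = "http://kafka-connect.internal:8083/connectors"  # assumed Connect REST endpoint

connector = {
    "name": "orders-cdc-connector",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "plugin.name": "pgoutput",                      # logical decoding plugin built into PostgreSQL 10+
        "database.hostname": "orders-db.example.rds.amazonaws.com",
        "database.port": "5432",
        "database.user": "cdc_reader",
        "database.password": "********",
        "database.dbname": "orders",
        "topic.prefix": "mdip.orders",                  # topics become mdip.orders.<schema>.<table>
        "table.include.list": "public.orders,public.order_items",
        "snapshot.mode": "initial",                     # consistent snapshot first, then stream changes
    },
}

resp = requests.post(CONNECT_URL, json=connector, timeout=30)
resp.raise_for_status()
print("Connector registered:", resp.json()["name"])
```

Logical-decoding CDC like this keeps extraction low-impact on the source database, which is the intent behind the "low-impact data extraction" responsibility above.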
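For the Kafka throughput-tuning item, the sketch below shows producer settings that typically trade latency for batch efficiency, assuming the confluent-kafka Python client; broker addresses, the topic name, and the specific values are illustrative assumptions to be validated against the team's own benchmarks.

```python
# Hypothetical sketch: producer settings commonly tuned for throughput on a CDC topic.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "broker-1:9092,broker-2:9092",
    "acks": "all",                # wait for all in-sync replicas (durability)
    "enable.idempotence": True,   # avoid duplicates on retry
    "compression.type": "lz4",    # smaller batches on the wire
    "linger.ms": 50,              # wait up to 50 ms to fill larger batches
    "batch.size": 262144,         # force a send once a batch reaches 256 KiB
})

def delivery_report(err, msg):
    # Called once per message to surface broker-side delivery failures.
    if err is not None:
        print(f"Delivery failed for key {msg.key()}: {err}")

for i in range(1000):
    producer.produce(
        "mdip.orders.public.orders",
        key=str(i),
        value=f'{{"id": {i}}}',
        callback=delivery_report,
    )
producer.flush()
```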
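For cost-effective S3 staging, a lifecycle configuration like the hypothetical one below tiers transient staged files to cheaper storage classes and eventually expires them; the bucket name, prefix, and retention windows are assumptions for illustration.

```python
# Hypothetical sketch: lifecycle rules for an S3 staging bucket so transient CDC
# files move to cheaper storage and eventually expire.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="mdip-staging-bucket",               # placeholder bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-then-expire-staged-cdc-files",
                "Filter": {"Prefix": "cdc/staging/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # infrequent access after a month
                    {"Days": 90, "StorageClass": "GLACIER"},      # archive after a quarter
                ],
                "Expiration": {"Days": 365},                      # delete after a year
            }
        ]
    },
)
```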
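For automated loading into Snowflake, a Snowpipe can be created over an external stage so files landed in S3 are ingested continuously. The sketch below assumes a pre-existing storage integration; the account, warehouse, table, and object names are placeholders.

```python
# Hypothetical sketch: create an external stage and a Snowpipe so files landed in
# S3 are auto-loaded into a raw table. Assumes a storage integration already exists.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",
    user="MDIP_LOADER",
    password="********",
    warehouse="LOAD_WH",
    database="RAW",
    schema="ORDERS",
)
cur = conn.cursor()

cur.execute("""
    CREATE STAGE IF NOT EXISTS orders_stage
      URL = 's3://mdip-staging-bucket/cdc/staging/'
      STORAGE_INTEGRATION = mdip_s3_integration
      FILE_FORMAT = (TYPE = 'JSON')
""")

cur.execute("""
    CREATE PIPE IF NOT EXISTS orders_pipe
      AUTO_INGEST = TRUE
      AS COPY INTO orders_raw FROM @orders_stage
         FILE_FORMAT = (TYPE = 'JSON')
""")

# AUTO_INGEST relies on S3 event notifications pointed at the pipe's SQS queue;
# the queue ARN can be read from the output of SHOW PIPES.
conn.close()
```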