
Optum

Data Engineering Analyst - Python, Scala, Spark

Posted 3 Days Ago
In-Office
Hyderabad, Telangana
Junior
The Data Engineering Analyst will develop and maintain scalable Big Data applications using Azure and Spark, focusing on data pipelines and cloud technologies.
Requisition Number: 2348586
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
The Data Engineer (Grade 25) will support the design, development, and maintenance of scalable Big Data solutions on cloud platforms. This role is ideal for early-career data engineers with hands-on experience in Spark-based data processing, Azure cloud services, and data pipeline orchestration.
The individual will work closely with senior engineers, architects, and cross-functional teams to build reliable data pipelines, improve existing solutions, and ensure secure and efficient data operations. The role emphasizes solid fundamentals in data engineering, cloud technologies, and production support, while providing opportunities to grow into advanced Big Data and cloud-native architectures.
Primary Responsibilities:
  • Design, code, test, document, and maintain high-quality, scalable Big Data applications using PySpark and Scala Spark on Azure Cloud platforms
  • Develop and manage data pipelines and schedule workflows using Apache Airflow, ensuring proper job dependencies and execution order (a minimal sketch follows this list)
  • Securely manage secrets and credentials using Azure Key Vault, following enterprise security best practices
  • Analyze existing data pipelines and applications to identify gaps, risks, and opportunities for improvement
  • Assist in analyzing data architecture and design frameworks, working with multiple databases and data warehouses
  • Create prototypes and proof-of-concepts (POCs) and participate in design and code reviews
  • Write and maintain technical documentation for data pipelines, workflows, and operational procedures
  • Participate in production support activities, including monitoring, troubleshooting, and issue resolution
  • Perform performance optimization of data pipelines and assist with application migration efforts across environments
  • Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
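
The first three bullets above come together in practice; as a hedged illustration (not Optum's actual pipeline), the sketch below wires a two-task Airflow DAG that reads a credential from Azure Key Vault before a placeholder PySpark step runs. It assumes Airflow 2.4+ plus the azure-identity and azure-keyvault-secrets packages; the vault URL, secret name, and DAG/task names are all hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient


def fetch_conn_string() -> str:
    # Pull a warehouse credential from Key Vault at runtime instead of
    # hard-coding it; the vault URL and secret name are hypothetical.
    client = SecretClient(
        vault_url="https://example-vault.vault.azure.net",
        credential=DefaultAzureCredential(),
    )
    return client.get_secret("warehouse-conn-string").value


def run_spark_step(**context) -> None:
    # Placeholder for the real PySpark/Databricks transformation.
    conn = fetch_conn_string()
    print(f"Submitting Spark job (credential length: {len(conn)})")


def validate_output(**context) -> None:
    # Placeholder data-quality check on the step above's output.
    print("Row counts and schema checks would run here")


with DAG(
    dag_id="daily_claims_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    transform = PythonOperator(task_id="transform", python_callable=run_spark_step)
    validate = PythonOperator(task_id="validate", python_callable=validate_output)

    transform >> validate  # validate runs only after transform succeeds

The >> operator is what encodes the "job dependencies and execution order" the bullet refers to; in a real deployment, DefaultAzureCredential would typically resolve to a managed identity on the Airflow worker.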

Required Qualifications:
  • Bachelor's or Master's degree in Computer Science, Information Technology, or equivalent, with more than 1 year of relevant work experience
  • Hands-on experience with Python and Scala for data engineering and Big Data development
  • Hands-on exposure to Azure Data Lake, Azure Databricks, Azure Data Factory, and Azure Key Vault
  • Hands-on exposure to AI-assisted development tools such as Microsoft Copilot, with a basic understanding of prompt engineering to improve coding efficiency, data analysis productivity, and documentation quality
  • Working experience with Apache Spark and a good understanding of Hadoop ecosystem concepts
  • Experience with job scheduling and orchestration tools, particularly Apache Airflow
  • Experience with Snowflake and writing shell scripts for automation and operational tasks (see the sketch after this list)
  • Good experience working in cloud environments, preferably Microsoft Azure
  • Solid experience in writing complex SQL and PL/SQL queries
  • Exposure to CI/CD pipelines using tools such as Jenkins and GitHub Actions
  • Basic understanding of software development best practices, version control, and collaborative development
  • Proven analytical skills, attention to detail, and a willingness to learn new technologies and frameworks
  • Proven ability to work effectively in a team-oriented and fast-paced environment
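
As a hedged illustration of the Snowflake and complex-SQL items above, the sketch below runs a windowed query through the snowflake-connector-python package. The account, user, warehouse, database, and table names are all hypothetical, and a real job would fetch the password from Key Vault rather than using a literal.

import snowflake.connector

# Hypothetical connection details; in practice these come from configuration
# and a secret store, never literals.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="***",
    warehouse="ANALYTICS_WH",
    database="CLAIMS_DB",
)

try:
    cur = conn.cursor()
    # The kind of windowed query the "complex SQL" requirement points at:
    # a running total per member, keeping each member's 10 latest claims.
    cur.execute("""
        SELECT member_id,
               claim_date,
               SUM(paid_amount) OVER (
                   PARTITION BY member_id ORDER BY claim_date
               ) AS running_paid
        FROM claims
        QUALIFY ROW_NUMBER() OVER (
            PARTITION BY member_id ORDER BY claim_date DESC
        ) <= 10
    """)
    for row in cur.fetchmany(5):
        print(row)
finally:
    conn.close()

QUALIFY is Snowflake-specific syntax for filtering on a window function's result, which is why it appears here rather than a generic HAVING clause.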

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.

Top Skills

Apache Airflow
Azure Data Factory
Azure Data Lake
Azure Databricks
Azure Key Vault
CI/CD
GitHub Actions
Jenkins
PL/SQL
Python
Scala
Shell Scripts
Snowflake
Spark
SQL

