Data Engineer (with Spark, Airflow)

Posted 2 Days Ago
Remote
Mid level
As a Data Engineer, you will create and maintain efficient data transformation pipelines, manage complex financial datasets, optimize data delivery processes, and build scalable data infrastructure. Collaboration with cross-functional teams is crucial to addressing data-related issues and supporting data infrastructure needs.

Company Description

Accesa is a leading technology company headquartered in Cluj-Napoca, with offices in Oradea, Bucharest, and Timisoara, and 20 years of experience in turning business challenges into opportunities and growth.

A value-driven organisation, it has established itself as a partner of choice for major brands in Retail, Manufacturing, Finance, and Banking. It covers the complete digital evolution journey of its customers, from ideation and requirements setup to software development and managed services solutions.

With more than 1,200 IT professionals, Accesa also has a fast-growing footprint, establishing itself as an employer of choice for IT professionals who are passionate about problem-solving through technology. Coming together in strong tech teams with a customer-centric approach, they enable businesses to grow, delivering value for clients, partners, industry, and community.

Job Description

One of our clients operates prominently in the financial sector, where we enhance operations across their extensive network of 150,000 workstations and support a workforce of 4,500 employees. As part of our commitment to optimizing data management strategies, we are migrating data warehouse (DWH) models into data products within the Data Integration Hub (DIH). 

Responsibilities:

  • Drive Data Efficiency: Create and maintain optimal data transformation pipelines.

  • Master Complex Data Handling: Work with large, complex financial data sets to generate outputs that meet functional and non-functional business requirements.

  • Lead Innovation and Process Optimization: Identify, design, and implement process improvements such as automating manual processes, optimizing data delivery, and re-designing infrastructure for higher scalability. 

  • Architect Scalable Data Infrastructure: Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using open-source technologies. 

  • Unlock Actionable Insights: Build/use analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

  • Collaborate with Cross-Functional Teams: Work with clients and internal stakeholders, including Senior Management, Department Heads, and Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.

Qualifications

Must have:

  • 3+ years of experience in a similar role, preferably within Agile teams.

  • Strong analytical skills in working with both structured and unstructured data.

  • Skilled in SQL and relational databases for data manipulation.

  • Experience in building and optimizing Big Data pipelines and architectures.

  • Knowledge of the Apache Spark framework and object-oriented programming in Java; experience with Python is a plus.

  • Experience with ETL processes, including scheduling and orchestration using tools like Apache Airflow (or similar). 

  • Proven experience in performing data analysis and root cause analysis on diverse datasets to identify opportunities for improvement.

Nice to have:

  • Expertise in manipulating and processing large, disconnected datasets to extract actionable insights.

  • Experience automating CI/CD pipelines using ArgoCD, Tekton, and Helm to streamline deployment and improve efficiency across the SDLC.

  • Experience managing Kubernetes deployments on OpenShift, with a focus on scalability, security, and optimized container orchestration.

  • Technical skills in the following areas are a plus: relational databases (e.g. PostgreSQL), Big Data tools (e.g. Databricks), workflow management (e.g. Airflow), and backend development using Spring Boot.

Additional Information

Enjoy our holistic benefits program built around the four pillars that we believe come together to support wellbeing: physical, emotional, and social wellbeing, as well as work-life fusion.

  • Physical: premium medical package for both our colleagues and their children, dental coverage up to a yearly amount, eyeglasses reimbursement every two years, voucher for sport equipment expenses, in-house personal trainer
  • Emotional: individual therapy sessions with a certified psychotherapist, webinars on self-development topics
  • Social: virtual activities, sports challenges, special occasions get-togethers
  • Work-life fusion: yearly increase in days off, flexible working schedule, birthday, holiday and loyalty gifts for major milestones

Top Skills

Spark
Java
Python
SQL
