Karbon

Senior Data Engineer

In-Office
Melbourne, Victoria
Senior level

About Karbon

Karbon is the global leader in AI-powered practice management software for accounting firms. We provide an award-winning cloud platform that helps tens of thousands of accounting professionals work more efficiently and collaboratively every day. With customers in 40 countries, we have grown into a globally distributed team across the US, Australia, New Zealand, Canada, the United Kingdom, and the Philippines. We are well-funded, ranked #1 on G2, growing rapidly, and have a people-first culture that is recognized with Great Place To Work® certification and a place on Fortune magazine's Best Small Workplaces™ list.

We are seeking an experienced data engineer who thrives in a fast-paced environment. You will have the unique opportunity to build the new unified data platform that powers our suite of AI tools and insight delivery.

About this role and the work

Karbon is at the start of its Data & AI journey, meaning you will have the opportunity to revolutionize our data platform. This role supports both our AI team and our Insights team, and is critical to delivering features for the Karbon platform. You’ll improve our new data platform, which is centered around Databricks. The successful candidate will be a hands-on builder and a strategic thinker, capable of designing scalable, robust, and forward-looking data solutions.

Some of your main responsibilities will include:

  • Develop a unified data platform: Build our new unified data platform on Databricks. You will be instrumental in establishing the Medallion Architecture (Bronze, Silver, Gold layers), using DLT for data modeling and transformations (see the first sketch after this list)
  • Develop data pipelines: Create and manage resilient data pipelines for both batch and real-time processing from various sources in our Azure data ecosystem. This includes building a "hot path" for streaming data and orchestrating complex dependencies using Databricks Workflows
  • Enable data integration and access: Implement and manage data replication processes from Databricks to Snowflake. You will also be responsible for developing a low-latency query endpoint to serve our production Karbon application (see the second sketch after this list)
  • Champion data quality and governance: Establish best practices for data quality, integrity, and observability. You will build automated quality checks, tests, and monitoring for all data assets and pipelines to ensure trust in our data
  • Implement robust security and governance practices: Design and enforce a comprehensive security model for the data platform, including management of PII and implementation of a fine-grained Role-Based Access Control (RBAC) model through infrastructure-as-code (IaC)
  • Collaborate cross-functionally: Work within a cross-functional team of AI engineers, analysts, and developers to deliver impactful data products
  • Use AI tools thoughtfully: Leverage AI to move faster, for example drafting code, exploring solutions, or writing tests, while applying good judgement and always reviewing what ships
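To give a concrete flavour of the work, here is a minimal sketch of what a Medallion-style DLT pipeline with a built-in quality expectation might look like. It is illustrative only: the table names, landing path, and schema are hypothetical, and `spark` and `dlt` are supplied by the Databricks DLT runtime.

```python
# Minimal Delta Live Tables (DLT) sketch of a Bronze/Silver/Gold flow.
# Illustrative only: table names, the landing path, and the schema are
# hypothetical. `spark` and `dlt` are provided by the DLT runtime.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Bronze: raw events landed as-is from cloud storage")
def events_bronze():
    return (
        spark.readStream.format("cloudFiles")   # Auto Loader ingestion
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/events")            # hypothetical path
    )

@dlt.table(comment="Silver: typed, deduplicated events")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")  # quality gate
def events_silver():
    return (
        dlt.read_stream("events_bronze")
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .dropDuplicates(["event_id"])
    )

@dlt.table(comment="Gold: daily aggregates for the Insights team")
def events_gold_daily():
    return (
        dlt.read("events_silver")
        .groupBy(F.to_date("event_ts").alias("event_date"))
        .agg(F.count("*").alias("events"))
    )
```

Similarly, the low-latency query endpoint mentioned above could take a shape like the following sketch, which serves a Gold table through a Databricks SQL warehouse using the `databricks-sql-connector` package and FastAPI. The environment variables, route, and table name are placeholders, not a prescribed design.

```python
# Hedged sketch of a small read-only query endpoint backed by a Databricks
# SQL warehouse. Environment variables, route, and table are placeholders.
import os

from databricks import sql  # pip install databricks-sql-connector
from fastapi import FastAPI

app = FastAPI()

def run_query(query: str, params: dict) -> list[dict]:
    """Run a parameterized query against the SQL warehouse and return rows."""
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_HOST"],
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn:
        with conn.cursor() as cur:
            cur.execute(query, params)
            cols = [c[0] for c in cur.description]
            return [dict(zip(cols, row)) for row in cur.fetchall()]

@app.get("/insights/daily-events")
def daily_events(limit: int = 30):
    # Parameterized (:limit) to avoid SQL injection; `events_gold_daily`
    # matches the hypothetical Gold table from the sketch above.
    return run_query(
        "SELECT event_date, events FROM events_gold_daily "
        "ORDER BY event_date DESC LIMIT :limit",
        {"limit": limit},
    )
```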

About you

If you’re the right person for this role, you have:

  • 5+ years of relevant work experience as a data engineer, with a proven track record of building and scaling data platforms
  • Previous experience with Databricks
  • Experience architecting ETL and ELT patterns, with strong proficiency in DLT
  • Experience scaling data pipelines in a multi-cloud environment
  • Strong proficiency in Python
  • Strong proficiency in SQL and a deep understanding of relational database systems
  • DevOps experience, including CI/CD, and infrastructure-as-code (e.g., Terraform)

It would be advantageous if you have:

  • Previous experience with Azure cloud services (highly desirable)
  • Experience with both batch and streaming data technologies
  • Experience building and maintaining APIs or query endpoints for application data access
  • Practical MLOps experience, such as implementing solutions with MLflow, feature stores, and automated model deployment and evaluation pipelines

Why work at Karbon?

  • Gain global experience across the USA, Australia, New Zealand, the UK, Canada, and the Philippines
  • 4 weeks annual leave plus 5 extra "Karbon Days" off a year
  • Flexible working environment
  • Work with (and learn from) an experienced, high-performing team
  • Be part of a fast-growing company that firmly believes in promoting high performers from within
  • A collaborative, team-oriented culture that embraces diversity, invests in development, and provides consistent feedback
  • Generous parental leave

Karbon embraces diversity and inclusion, aligning with our values as a business. Research has shown that women and underrepresented groups are less likely to apply to jobs unless they meet every single criterion. If you've made it this far in the job description but your past experience doesn't perfectly align, we do encourage you to still apply. You could still be the right person for the role!

We recruit and reward people based on capability and performance. We don’t discriminate based on race, gender, sexual orientation, gender identity or expression, lifestyle, age, educational background, national origin, religion, physical or cognitive ability, or any other dimension of diversity.

Generally, if you are a good person, we want to talk to you. 😛

If there are any adjustments or accommodations that we can make to assist you during the recruitment process and your journey at Karbon, contact us at [email protected] for a confidential discussion.

At this time, we request that agency referrals are not submitted for this position. We appreciate your understanding and encourage direct applications from interested candidates. Thank you!

Top Skills

Azure
CI/CD
Databricks
DLT
Python
SQL
Terraform
