YOUR MISSION
As a Data Platform Engineer, you will grow our business and help us expand globally by applying your knowledge of data architectures, APIs, and reliable data delivery and transformation.
The ideal candidate is passionate about developing software and working with data, can challenge the status quo, and can redesign existing solutions. They are also a great teammate, always willing to collaborate with others.
You’re responsible for helping build and maintain the data pipeline architecture of Mural, as well as writing APIs and tools to help other teams work with data. You will collaborate closely with Product, Analytics, and Data Science teams to help them achieve their goals. You will report directly to the Data Engineering Manager.
In this role, you will:
Help build the platform, tools, and APIs needed to enable other teams to work with data.
Improve the existing data platform and propose solutions.
Work closely with Product teams to explore the feasibility of experimental data-driven features, helping them refine preliminary or unclear requirements and building the tools and APIs needed to support those features. A strong analytical mindset is a must.
Efficiently handle vast amounts of data from multiple sources and destinations, including relational and NoSQL databases and external systems, both in batch processing and real-time delivery.
Follow modern development standards and methodologies such as code reviews, unit testing, continuous integration, and agile methodology.
Work as part of a team. We value teammates who share their knowledge and like collaborating with others.
Show initiative by completing your tasks and providing timely status updates to your team, customers, collaborators, and partners.
Take full ownership of the solutions you build: analyze requirements, build, track, and monitor them, and troubleshoot when problems arise.
What you'll need:
2+ years of hands-on administration experience maintaining Databricks in any cloud environment
1-2 years of experience in engineering data pipelines using big data tooling (Spark, Hudi, Kafka)
Experience with Python software development
Working knowledge of a variety of databases (ideally both relational and non-relational)
Advanced relational database experience, including the ability to author and navigate highly complex SQL queries
Experience managing Databricks clusters and job configurations
Hands-on experience building data pipelines using Databricks and scheduling Databricks jobs
Experience building production data pipelines
Excellent command of English, both written and verbal
Ability to work independently from a remote location
Ideally, you also have:
Experience with Airflow and dbt
Experience with ETL job design and development, ELT/ETL solutions, data lakes, and data warehouses at large scale
A successful track record in data migrations, database operations, and maintenance
Experience optimizing analytic/MPP data warehouses
Experience with Unity Catalog (Databricks)
Please submit your resume in English. #LI-Remote #LI-AB1