What you will do
- Build the underlying data platform and maintain data processing pipelines using best-in-class technologies.
- Focus on R&D to challenge the status quo and build a next-generation data mesh that is efficient and cost-effective.
- Translate complex technical and functional requirements into detailed designs.
- Collaborate on the roadmap and socialize it among team members and stakeholders.
- Create and instill a team culture that focuses on sound scientific processes and encourages deep engagement with our customers.
- Handle project scope and risks with data, analytics, and creative problem-solving.
- Help define data governance policies and implement security and privacy guardrails.
Who you are
- 10+ years of experience developing data engineering solutions, including ETLs, with a minimum of 3 years in a senior or lead role.
- Education: Bachelor’s degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
- Hands-on engineer with extensive experience ingesting, storing, processing, and analyzing large datasets.
- Good understanding of data-mesh principles and data-modeling techniques.
- Proficient in some of the following tools and technologies:
  - Languages: Python, Java/Scala, SQL
  - DevOps pipelines
  - Container orchestration: Docker, Kubernetes
  - Data engineering ecosystem: Delta Lake, DataHub, Datadog, etc.
  - Hadoop ecosystem: Hadoop, Spark, Hive, etc.
  - Stream processing: Storm, Flink, etc.
  - Workflow orchestration: Airflow, Flyte, etc.
- Strong understanding of data structures, algorithms, multi-threaded programming, and distributed computing concepts.
- Experience with data quality, data discoverability, and governance tools.
- Good exposure to GenAI technologies.
- Strong communication skills, with experience communicating across various groups and levels of leadership within a global organization.
- Ability to drive outcomes as opposed to task-based execution.