Tech Lead and Big Data Expert

  • Full-time
  • 6+ years
  • Sharon area

Job Description:

Design, build, and maintain Big Data workflows and pipelines using technologies such as Spark, Flink, Pulsar, Hadoop, Kafka Streams, Druid, ClickHouse, and Apache Beam.

Develop observability solutions for our cloud-native applications, focusing on areas such as application performance monitoring (APM), log analytics, distributed tracing, and application and infrastructure telemetry data.

Implement data validation procedures to ensure high data quality and process integrity.

Leverage streaming and batch processing tools to provide real-time analytics insights.

Work with NoSQL databases and Hadoop-ecosystem tools such as HBase, Cassandra, Hive, Pig, and Impala to handle our data needs.

Collaborate with cross-functional teams to define and implement data models that provide intuitive analytics.

Additional Positions:

Backend Java

Category:

Software

Job Qualifications:

B.Sc. in Computer Science or an equivalent field.

Proven experience as a Big Data Engineer or similar role.

10+ years of experience leading, architecting, and implementing large-scale, multi-tenant SaaS offerings.

Experience with cloud services (AWS, Google Cloud, Azure) and understanding of distributed systems.

Knowledge of various Big Data frameworks and libraries (such as Hadoop, Spark, Hive).

Strong knowledge of observability in a cloud-native environment.

Significant experience with Java, Big Data technologies, and application state modeling.

Outstanding research capabilities.

Experience with product definition and project management.

Company Occupation:

High Tech, Networking/Datacom/Telecom

Company Size:

500+
