Data Architect

  • Full-time
  • 6+ years
  • Sharon area

Job Description:

For a hi-tech company developing games for various platforms, with offices in the Sharon area.

  • A multi-disciplinary Big Data Architect to design and build a scalable, robust data platform serving marketing, CRM, campaign management, and other back-office applications.
  • Responsible for multiple projects at once, serving as the main source of technical knowledge and experience, and facilitating, building (hands-on), and mentoring R&D teams.
  • Responsible for cloud infrastructure, architecture, microservices, CI/CD, production environments, and new, innovative, and complex developments.
  • Responsible for research, analysis, and proofs of concept for new technologies, tools, and design concepts.
  • Design and develop core modules in our big data platform infrastructure (hosted on Google Cloud with managed services and Kubernetes, based on Spark Core/Streaming, Structured Streaming/SQL, Scala, Python, AngularJS, Node.js, Kafka, BigQuery, Redis, Elasticsearch, Google Cloud Machine Learning Engine, and TensorFlow).
  • The platform handles huge amounts of data through complex batch and real-time processing and complex data manipulation, using services, UI frameworks, and interactive notebooks.
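To make the streaming side of this stack concrete, here is a minimal sketch in Scala, assuming a hypothetical Kafka topic (game-events), a hypothetical event schema, and a console sink standing in for the BigQuery/Elasticsearch writers; none of these names come from the posting.

    // A minimal Spark Structured Streaming sketch (Scala). Running it
    // requires the spark-sql-kafka-0-10 connector on the classpath.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    object EventStreamSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("event-stream-sketch")
          .getOrCreate()

        // Hypothetical event schema; real payloads are defined by the platform.
        val eventSchema = new StructType()
          .add("userId", StringType)
          .add("eventType", StringType)
          .add("ts", TimestampType)

        // Read JSON events from Kafka (broker and topic are placeholders).
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "kafka:9092")
          .option("subscribe", "game-events")
          .load()
          .select(from_json(col("value").cast("string"), eventSchema).as("e"))
          .select("e.*")

        // Near-real-time windowed aggregation, as the description suggests.
        val counts = events
          .withWatermark("ts", "10 minutes")
          .groupBy(window(col("ts"), "5 minutes"), col("eventType"))
          .count()

        // Console sink stands in for the platform's BigQuery/Elasticsearch writers.
        counts.writeStream
          .outputMode("update")
          .format("console")
          .start()
          .awaitTermination()
      }
    }

Update mode with a watermark keeps the aggregation state bounded while still emitting per-window counts as they evolve.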

Additional Positions:

System Architect

Category:

Software

Job Qualifications:

8+ years of practical experience with the Scala/Python programming languages and excellent programming skills - functional programming, design patterns, data structures, and a TDD approach - must

6+ years of hands-on experience building large-scale (petabytes), low-latency distributed systems using modern cloud computing technologies (GCP - preferred, AWS) - must

5+ years of experience working in microservices environments, building state-of-the-art data-driven solutions - must

5+ years of experience in efficient data modeling using SQL/NoSQL solutions (BigQuery, Elasticsearch, MySQL, etc.) - must

Expert SQL knowledge - you know how to write efficient, low-latency queries against modern data-warehouse solutions (BigQuery - preferred, Redshift, Athena, etc.) - must

Experience building large-scale (petabyte) streaming/batch ETL pipelines using modern processing engines (Spark - preferred, Beam, Flink, etc.) - see the sketch after these requirements.

Experience working with data streaming systems (Kafka - preferred, Pub/Sub, or Kinesis) - must

Experience working with production-grade Data Science/ML infrastructure - big advantage

Experience working with notebook solutions (Zeppelin, Jupyter) - big advantage

Experience working with storage solutions (HDFS, S3, GCS - preferred) - big advantage
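To illustrate the batch-ETL and query-shaping requirements above, here is a minimal Spark batch sketch in Scala; the bucket paths, partition layout, and column names are hypothetical, not taken from the posting.

    // A minimal Spark batch ETL sketch (Scala); paths and columns are hypothetical.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyRevenueEtlSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("daily-revenue-etl-sketch")
          .getOrCreate()

        // Hypothetical input: purchase events partitioned by date on GCS.
        val purchases = spark.read
          .parquet("gs://example-bucket/purchases/dt=2024-01-01/")

        // Project only the needed columns and aggregate early -- the kind of
        // query shaping the "efficient, low-latency" SQL requirement refers to.
        val daily = purchases
          .select(col("userId"), col("amount"))
          .groupBy(col("userId"))
          .agg(sum(col("amount")).as("dailySpend"))

        // Hypothetical output; a production job might write to BigQuery instead.
        daily.write
          .mode("overwrite")
          .parquet("gs://example-bucket/daily_revenue/dt=2024-01-01/")
      }
    }

Reading only the needed date partition and projecting columns before the aggregation is the same pushdown discipline that efficient warehouse queries rely on.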

Company Occupation:

High Tech, Software

Company Size:

Large (150+)
