Job Description:
• Lead hands-on development of complex features and components in our Big Data pipelines from concept to deployment
• Serve as the go-to technical expert and problem solver for the team
• Mentor team members, support their growth, and foster knowledge sharing
• Investigate and adopt new technologies, driving innovation into the product
• Take ownership across the full development lifecycle: from translating product requirements into designs, through coding and testing, to troubleshooting in production
• Design for scalability, sustainability, and high performance in production-grade environments
Job Qualifications:
• At least 5 years of hands-on experience developing highly complex, data-oriented products using Java, Python, or other OOP languages – Must
• Experience with data processing frameworks such as Spark or Pandas – Must (experience with MinIO / Airflow – an advantage)
• Production-level troubleshooting skills and experience with advanced SDLC methodologies – Must
• Experience with distributed databases (such as Elasticsearch, MongoDB, Redis, etc.)
• Experience working as a software developer in an Agile environment
• Familiarity with AI tools to enhance and accelerate development workflows
• Experience developing microservices-based architectures
• Experience working in container-based environments using tools such as Kubernetes (K8s) and Helm
• Curiosity, accountability, and strong analytical thinking, along with teamwork skills and close collaboration with cross-functional interfaces such as Product, QA, and DevOps
Nice to have:
• Familiarity with machine learning frameworks (e.g., scikit-learn, TensorFlow)
• Experience with RedHat OpenShift
• Dev environment knowledge: Git, Jenkins, Docker
• Experience working remotely with hybrid teams
• Mentoring experience, team guidance, and informal leadership responsibilities are a strong plus
Company Occupation:
High Tech
Company Size:
Medium (50 - 150)