Job Description

Key Accountabilities

Qualifications & Experience
- Candidate should understand the basics of micro-batch and stream processing with engines such as Apache Spark, Flink, Kafka, or Storm, and how to apply them to problems such as ETL and machine learning.
- Candidate should be able to create networked applications using modern semantics and protocols in the same vein as REST, gRPC, GraphQL, Avro, or Thrift.
- Should be highly motivated, love learning, and continually strive to achieve their growth objectives.
- Should be able to work in a global development environment.
- Should keep up to date on the latest software development technologies and methodologies.
- Degree / Diploma in Computer Science / Information Technology or equivalent
- At least 10 years' experience in Big Data
- Good knowledge of Big Data tools
- Preferably with a minimum of 8 years of experience as a Big Data Engineer